
Introduction

These Guidelines have been developed to assist those conducting litigation in the Supreme Court of Victoria. They are designed to assist both legal practitioners and self-represented litigants.  

An appendix identifies common terms and defines them as they are used in these Guidelines.
  
Artificial intelligence (AI) is a broad concept encompassing many ways in which computer systems collate, synthesise, catalogue and generate selected information. AI already supplements many computer-based search engines and information management systems, including those used by the legal profession and courts.
 
Computer-assisted information management is an important tool for the efficient conduct of litigation. The Court recognises that generative AI tools are already in use in legal settings, and that the capacity and use of such tools are rapidly increasing.

Principles for use of AI by litigants

  1. Parties and practitioners who use AI tools in the course of litigation should understand how those tools work, as well as their limitations.
  2. Parties and practitioners should be aware that the privacy and confidentiality of information and data provided to an external AI program may not be guaranteed, and that the information may not be secure.
  3. The use of AI programs by a party must not indirectly mislead another participant in the litigation process (including the Court) as to the nature of any work undertaken or the content produced by that program. Ordinarily, parties and their practitioners should disclose to each other the assistance provided by AI programs to the legal task undertaken. Where appropriate (for example, where it is necessary to enable a proper understanding of the provenance of a document or the weight that can be placed upon its contents), the use of AI should be disclosed to other parties and the Court.
  4. The use of AI programs to assist in the completion of legal tasks remains subject to the obligations of legal practitioners in the conduct of litigation, including the obligation of candour to the Court and, where applicable, the obligations imposed by the Civil Procedure Act 2010, by which practitioners and litigants represent that documents prepared and submissions made have a proper basis.
  5. Self-represented litigants (and witnesses) who use generative AI to prepare documents are encouraged to identify this by including a statement as to the AI tool used in the document that is to be filed or the report that is prepared. This will not detract from the contents of the document being considered by the relevant judicial officer on its merits, but will provide useful context to assist the judicial officer. For example, it will assist in forming a more accurate assessment of the level of legal knowledge or experience possessed by a self-represented party.

Application of principles

  6. AI that can search and identify relevant matters in a closed category of information is presently used with good effect in the Court. An illustration is the use of Technology Assisted Review (TAR), which employs machine learning for large-scale document review. Practitioners should consider the use of such options to improve productivity and efficiency, consistent with the expectation that the use of common technologies is a core skill for lawyers (Practice Note SC Gen 5). The use of such technology, with the co-operation of the parties and appropriate management by the Court, is to be encouraged.
  7. Specialised legally focused AI tools appear more likely to be useful and reliable to parties and practitioners involved in litigation than general-purpose tools. Access to specialised legal databases, some of which will make use of AI technologies, is available through the Law Library of Victoria.
  8. Generative AI and large language models create output that is not the product of reasoning, nor are they a legal research tool. They use probability to predict a given sequence of words (a simplified illustration of this word-prediction mechanism appears at the end of this section). Output is determined by the information provided to the model and should not be presumed to be correct. The use of commercial or freely available public programs, such as ChatGPT and Google Gemini, is more likely to produce results that are inaccurate for the purposes of current litigation. Generative AI does not relieve the responsible legal practitioner of the need to exercise judgment and professional skill in reviewing the final product to be provided to the Court. AI-generated text should be checked to ensure that it is not:
    (a) out of date, in that the model used may only have been trained on data up to a certain point in time, and therefore will be unaware of any more recent jurisprudence or other developments in the law that may be relevant to a case;
    (b) incomplete, in that the tool may not generate material addressing all arguments that a party is required to make or all issues that it would be in a party’s interests to cover, and summaries generated by such tools may not contain all relevant points;
    (c) inaccurate or incorrect, in that the tool may not produce factually or legally correct output (for example in some situations, users have been adversely affected by placing reliance on made-up cases or incorrect legal propositions);
    (d) inapplicable to the jurisdiction, as the data used to train the underlying model might be drawn from other jurisdictions with different substantive laws and procedural requirements; or
    (e) biased, given the model will have been created based on data that the user is unaware of, but which may over- or under-represent certain demographics or otherwise prefer certain viewpoints over others in a way that will not be transparent to users.
  9. A party or practitioner signing or certifying a document, filing a document with the Court, or otherwise relying on a document’s contents in a proceeding, remains responsible for the accuracy of that content. Whether a court document is signed by an individual or on behalf of a firm, the act of signing a document that is filed with the Court is a representation that the document is considered by those preparing it to be accurate and complete. Reliance on the fact that a document was prepared with the assistance of a generative AI tool is unlikely to be an adequate response to a document that contains errors or omissions.
  10. Particular caution needs to be exercised if generative AI tools are used to assist in the preparation of affidavit materials, witness statements or other documents created to represent the evidence or opinion of a witness. The relevant witness should ensure that documents are sworn/affirmed or finalised in a manner that reflects that person’s own knowledge and words. Similar considerations arise in the use and identification of such tools in compiling data for any expert reports or opinions, and in compliance with the Expert Witness Code of Conduct.
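
The word-prediction mechanism referred to in paragraph 8 can be illustrated with a short sketch. The following Python fragment is illustrative only: the vocabulary, probabilities and prompt are invented, and real large language models learn distributions over vast vocabularies from their training data. It demonstrates that each word is chosen by probability, not by consulting any legal authority.

    import random

    # Hypothetical conditional probabilities: given the last two words,
    # how likely is each candidate next word? (Invented for illustration;
    # a real model learns such weights from its training data.)
    NEXT_WORD_PROBS = {
        ("the", "court"): {"held": 0.4, "found": 0.3, "ordered": 0.3},
        ("court", "held"): {"that": 0.9, "the": 0.1},
    }

    def next_word(context):
        """Sample the next word from the model's probability distribution."""
        probs = NEXT_WORD_PROBS.get(tuple(context[-2:]), {"[end]": 1.0})
        words = list(probs)
        weights = [probs[w] for w in words]
        return random.choices(words, weights=weights)[0]

    sentence = ["the", "court"]
    while sentence[-1] != "[end]" and len(sentence) < 8:
        sentence.append(next_word(sentence))
    print(" ".join(w for w in sentence if w != "[end]"))

    # The output reads fluently, but nothing in this process checks
    # whether it is true: words are selected by probability, not
    # verified against any authority.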

Use of AI by Judicial Officers

  11. The Australasian Institute of Judicial Administration has produced a guide for courts addressing the challenges and opportunities presented by AI tools.[1] The guide emphasises the importance of ensuring that the use of AI tools is consistent with core judicial values of open justice, accountability, impartiality and equality before the law, procedural fairness, access to justice and efficiency.
  12. AI is not presently used for decision-making, nor to develop or prepare reasons for decision, because it does not engage in a reasoning process or in a process specific to the circumstances before the Court.

Glossary

Artificial intelligence (AI): A term describing a range of technologies and techniques used to computationally generate outputs that typically require human intelligence to produce.
Generative AI: AI systems that are able to produce new output, such as text or images, usually based on text prompts provided as an input.
Machine learning (ML): An area of AI involving systems making decisions based on their analysis of data initially provided to them, and then learning from their experience in order to produce better results.
Large language models (LLMs): A type of model generated from training on an extremely large amount of text data, which can be used to understand and generate natural-language text.
Training data: The ‘inputs’ used for a machine learning algorithm, from which a model is developed to subsequently perform classifications or make decisions about other data. The training data heavily affects the outputs a model can produce, meaning any biases or undesirable features present in the training data can also be present in the outputs produced by a system that uses the resulting model.
Model: The information used by an AI system to draw inferences or make decisions. The model is generated from the application of machine learning algorithms to a set of training data.
Technology-assisted review (TAR): An application of machine learning that is sometimes used in large discovery exercises in litigation, in which a computer system is used to identify and classify documents likely to be relevant to the issues in dispute in a case, based on a smaller set of initial human-reviewed documents. The system subsequently ‘learns’ and refines its decision-making based on further human review of its outputs, enabling the semi-automated review of large volumes of data while only requiring human decision-making over a smaller subset of that material.
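
The TAR workflow described in the entry above can be sketched briefly in code. The following Python fragment, which assumes the open-source scikit-learn library, is illustrative only: the documents and relevance labels are invented, and production TAR systems are considerably more sophisticated. It shows the core loop: a classifier is trained on human-coded seed documents and then scores the unreviewed collection, with its highest-scoring suggestions routed back to human reviewers.

    # A minimal sketch of a TAR-style relevance classifier, assuming
    # scikit-learn is installed. All documents and labels are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Seed set: documents a human reviewer has already coded.
    seed_docs = [
        "notice of termination of the supply agreement",
        "invoice for catering services",
        "alleged breach of the supply agreement",
        "office social club newsletter",
    ]
    seed_labels = [1, 0, 1, 0]  # 1 = relevant, 0 = not relevant

    vectoriser = TfidfVectorizer()
    classifier = LogisticRegression()
    classifier.fit(vectoriser.fit_transform(seed_docs), seed_labels)

    # Score the unreviewed collection; high-scoring documents are routed
    # to human reviewers, and their decisions become further training
    # data on the next iteration of the loop.
    unreviewed = ["termination notice under clause 12", "staff lunch menu"]
    scores = classifier.predict_proba(vectoriser.transform(unreviewed))[:, 1]
    for doc, score in zip(unreviewed, scores):
        print(f"{score:.2f}  {doc}")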

 

[1] AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators, https://aija.org.au/publications/ai-decision-making-and-the-courts-a-guide-for-judges-tribunal-members-and-court-administrators/ (published June 2022; updated December 2023).

Author: Supreme Court of Victoria
Publisher: Supreme Court of Victoria
Date of publication: