Summarising emails, drafting documents, meeting transcription – there’s no denying the productivity boosts businesses and individuals experience as they incorporate AI solutions into their daily routines.
But when it comes to your or your business’s legal matters, what should you be putting into Generative AI, and what should you be preparing yourself or leaving to your lawyer? First, we’ll look at some of the risks of inappropriate use of AI in your legal matter, followed by some situations where its use might be appropriate.
The Risks
Risk 1: Inaccurate information
We’ve all heard of GenAI platforms “hallucinating”, and the law is no exception. The NSW case of May v Costaras [2025] NSWCA 178 recently dealt with a self-represented litigant who delivered oral submissions which were admittedly prepared using Generative AI. The submissions included ‘non-sensical statements’, irrelevant cases and case law which simply did not exist. The judgement highlighted the need to be able to check the accuracy, veracity and relevance of what has been generated.
Risk 2: Loss of legal privilege and confidentiality
Client legal privilege is a fundamental principle of our legal system that protects legal advice sought and obtained from disclosure to third parties. The privilege only attaches to confidential communications between a lawyer and client (or their agents), or between a lawyer or client and a third party, in each case brought into existence for the dominant purpose of obtaining legal advice or for use in litigation.
Because Generative AI is not a lawyer and may store and use the data input into it (meaning that data may no longer be confidential), any inputs into, and outputs by, Generative AI may not be privileged and may therefore be discoverable in legal proceedings. The Federal Circuit and Family Court of Australia in Helmold & Mariya (No 2) [2025] FedCFamC1A 163 recently identified potential loss of privilege as a risk of disclosure to AI.
Similar risks exist for commercially sensitive information, such as financial or market-sensitive information, trade secrets or information relating to patentable inventions prior to filing. Where confidential information is input into Generative AI and is stored and used by the platform, it may lose its confidentiality.
Risk 3: ChatGPT is not a lawyer (or a human)
At the end of the day, Generative AI is not a lawyer, or even a person. It cannot exercise discretion, consider your unique circumstances or understand nuance like a human can, instead drawing conclusions from an algorithm. It is also not bound by the same ethical standards and professional obligations that apply to legal practitioners in Australia. The ultimate goal of any Generative AI is to give you the answer that you want to hear, so that you continue to use the service, feeding in information to train its algorithm and/or paying the ongoing subscription to generate revenue for its shareholders. Your lawyer’s paramount duty is to the court and the administration of justice – which yes, sometimes means giving you the answer you don’t want to hear.
Understanding these risks, when might it be appropriate to use Generative AI? We’ve set out some ground rules and some scenarios of what would likely be appropriate use:
- Actually read the terms & conditions
Yep, we mean it. And its privacy policy. And any other policy the service provider makes available to help you understand when, how and why they may use your data. And read them in full! Understand who owns both the input and the output, and consider the potential consequences of what you are inputting and how you intend to use the outputs. Know where your data is stored (which country, and what laws apply in that jurisdiction?), how long it is retained for, how they are allowed to use your data and any exclusions or carve-outs in the policies.
- Only use platforms which don’t use your data and/or content to train their models
Remember those terms and conditions and privacy policy you just read? They should have told you if they’re training their models on you. Some platforms (e.g. ChatGPT) allow you to “opt-out” of training on your content, while others go further by encrypting data, only using data as instructed and guaranteeing data isn’t used to train foundation models – such as the enterprise data protection provided by Microsoft Copilot.
- Never send your lawyer or the court anything without a thorough proof-read & review
“AI slop” – low-quality content that’s fast, easy and inexpensive to create, often with little regard for accuracy – makes your lawyer’s job much harder than you think. Content created by Generative AI is often excessively long, includes inaccurate, misstated or incomplete information, and is full of fluffy language, all of which distracts your lawyer from finding the key information to progress your matter. Channel your inner year-10 English teacher: review every output with a heavy-handed red pen and make sure it sounds like something written by a human before it hits your lawyer’s inbox – it will save you unnecessary legal fees.
- Generative AI isn’t a second opinion
Don’t input any advice, letters or other documents that your lawyer has provided you into Generative AI. Not only is Generative AI unqualified to provide a second opinion, but you also risk waiving privilege and confidentiality in those materials and harming your relationship with your lawyer. You may risk infringing your lawyer’s intellectual property rights in those materials. If you have any uncertainties about your lawyer’s advice, you should ask them directly in the first instance. If you’re still unsure, seek a second opinion from another human lawyer.
- Be open and honest about the use of Generative AI
If you’ve used Generative AI, it’s usually best to disclose this to your lawyer or the person receiving the correspondence. The Supreme Court of Victoria has provided guidelines that the use of AI should be disclosed to other parties and the court in the course of legal proceedings.
You should also ensure you always have the consent of anyone present in a meeting before using AI-generated transcription.
Using a trusted platform for general administrative tasks
You’ve read the terms and conditions; you’ve double-checked that the model won’t train on or disclose your data and you’ve got your red pen ready. Now, here are some use cases where it might be appropriate to use Generative AI:
- Proofreading material you’ve drafted yourself
Suggested prompt: “Proofread and show suggestions/changes in table form, ensuring spelling and grammar are consistent with Australian or British English”
- Redrafting a clunky sentence that you’re struggling with
Suggested prompt: “How do I reword this sentence so that it clearly explains X”
- Searching your internal files for a document that you don’t remember where you saved
Suggested prompt: “Saved in my files there should be a document which contains X. Help me find the original or any related documents”
- Transcription and notetaking of meetings (but only with the express consent of everyone else on the call, and where you are not relying solely on transcription)
This one is usually a button/integration to begin transcription, but you may also want to use a prompt to prepare a summary based on the transcription.
Suggested prompt: “Summarise what was discussed in the meeting, and identify any actions for me to complete with the relevant timeline”
For now, Generative AI probably shouldn’t be used for decision-making, and it certainly isn’t replacing your lawyer any time soon. Despite this, when used intentionally and responsibly, there are certainly some instances where it may be appropriate and beneficial in progressing your legal matter, to the benefit of both you and your lawyer!
We are proud to say that this article was prepared by a human, without the assistance of Generative AI (em dashes and all).
This article is not legal advice and should not be relied upon as such. Specific legal advice about your specific circumstances should always be sought before taking any action based on this publication.