AI, Accuracy and Advocacy: A Timely Reminder for Barristers
Recent comments from the Supreme Court provide a stark reminder of the risks associated with the uncritical use of artificial intelligence in litigation.
The Court has affirmed that responsibility rests with the person filing submissions to ensure that all authorities cited are genuine and accurately referenced.
It cautioned that:
“reliance on false citations, including the unverified outputs of AI applications, may in serious cases amount to obstruction of justice or contempt of court.”
In Jones v Family Court [2026] NZSC 1, the applicant filed submissions citing a number of authorities that appeared to have been hallucinated by an AI application.
The Court observed that misuse of AI in proceedings has serious implications for the administration of justice and for public confidence in the justice system. Those filing submissions must therefore verify that all authorities relied upon are authentic and correctly cited.
Although the Court’s remarks were made in the context of a self‑represented litigant, they serve as a timely and pointed reminder for the profession.
Barristers are already subject to well‑established professional and ethical obligations that apply irrespective of the tools used to prepare submissions. The increasing availability of generative AI does not dilute those obligations; if anything, it sharpens them.
The Guidelines for Use of Generative Artificial Intelligence in Courts and Tribunals: Lawyers
The Guidelines, issued by the Artificial Intelligence Advisory Group in December 2023, set out the judiciary’s expectations in this area. They emphasise that lawyers remain fully responsible for the accuracy of all material placed before the court, including legal citations and authorities generated with the assistance of AI.
They warn expressly of the risk that generative AI tools may fabricate cases, misstate the law, rely on overseas material that does not apply in New Zealand, or present convincing but entirely false information.
The Guidelines also remind lawyers of their overriding duty as officers of the court not to mislead, their obligation to take reasonable steps to ensure accuracy, and the need to protect confidential, privileged and suppressed information.
Generative AI may be a useful drafting or research aid, but it is not a search engine and cannot be treated as a reliable source of legal authority without independent verification by an appropriately qualified lawyer.
For barristers, the message is clear.
The use of AI in legal practice is not prohibited, nor does it require routine disclosure to the court. But its use must be cautious, informed and disciplined. Ultimate accountability for submissions rests, as it always has, with counsel.
The Supreme Court’s comments underline that failures in this area are not merely technical errors; in serious cases, they may have professional and legal consequences.
Barristers who use generative AI are encouraged to familiarise themselves with the Guidelines and ensure their practices reflect both the opportunities and the limits of this rapidly evolving technology.
This article was produced by AI and reviewed by staff.