Beware of the Use of AI in Legal Proceedings

Legal submissions and citations are not exempt from the phenomenon of AI hallucinations, where an AI model generates an incorrect or misleading result. Cases from various jurisdictions demonstrate the dangers of relying on AI when preparing submissions or filings. Our Commercial Disputes team examines several examples from the case law.
As Generative AI (GenAI) becomes increasingly prevalent, so too does the phenomenon of AI hallucinations, which occur when an AI model generates an incorrect or misleading result. Case law from various jurisdictions, including Ireland, demonstrates that AI hallucinations are becoming a real issue. These hallucinations produce incorrect citations, quotations and legal concepts, and in some instances the errors are appearing in submissions and filings relied upon by parties in court. Given the importance of legal decisions, the risk of unchecked reliance on AI should not be underestimated. These decisions affect not only the litigants in a particular case, but also have potential implications for the rule of law and society as a whole.
AI in legal proceedings
The use of AI in legal submissions, and not always with accurate results, has already been flagged in Irish cases. In Reddan v An Bord Pleanála[1] the applicant sought leave to bring judicial review proceedings on a number of grounds. The applicant was acting as a litigant in person, meaning he represented himself in the proceedings. The court ruled against him on all nine grounds. However, in one ground the applicant had referenced a particular legal term which the judge said was “not familiar in Irish jurisprudence”, suggesting it was Scottish or American. Although the applicant said he had come across it doing online research, the judge commented: “This sounds like something that derived from an artificial intelligence source. It has all the hallmarks of ChatGPT, or some similar AI tool.” Given that the term involved a serious allegation of a criminal offence, the court was critical that it had been raised at all without strong evidence.

In Coulston & Others v Elliott & Elliott[2] the defendants were also litigants in person. In submissions to the court, a new claim was raised for the first time in the proceedings. The explanation given was that a friend had been asked to prepare the submissions, but the defendant could not himself explain the argument in court. The judge concluded that the defendants had either gone to someone purporting to be a lawyer or had used GenAI. As to the latter, he stated that “if they used a generative AI program, they have been fooled”. He went on to warn that although AI can sound persuasive, it can be flawed, and here the argument was “fatally flawed”. He cautioned more generally that the “general public should be warned against the use of generative AI devices and programs in matters of law.”
The US appears, from the reports, to have had the greatest volume of AI hallucinations in submissions and citations. In some cases the consequences were substantive and bore directly on the outcome. In one case involving a lay litigant,[3] the appeal was dismissed due to “fatal briefing deficiencies”, which included 22 false citations obtained through GenAI. As a mark of its displeasure, and to “promote the integrity of the judicial process”, the court ordered the appellant to pay $10,000 in damages towards the respondent’s appellate legal costs.

In another case involving a disputed will,[4] a response submission was struck out because the lawyer had not verified sources taken from a website that used GenAI. Although the judge said he was dubious about using AI to prepare legal documents, he specifically observed: “It is not necessarily the use of AI in and of itself that causes such offense and concern, but rather the attorney's failure to review the sources produced by AI without proper examination and scrutiny.”
By contrast, in other cases the courts have focused on sanctioning practitioners who failed to comply with their professional obligations and effectively misled the court. In a high-profile damages claim against an airline, Mata v Avianca,[5] the plaintiff’s lawyers had filed submissions containing non-existent judicial decisions with fake quotes and citations generated by AI. The judge acknowledged that “there is nothing inherently improper about using a reliable artificial intelligence tool for assistance”. However, he noted that “existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings”. In that case, the lawyers involved had “abandoned their responsibilities” and acted in bad faith by standing over the fake opinions after being questioned. They were fined and required to write to the judges to whom the false judgments had been attributed.

In another case,[6] a lawyer who had used non-existent AI-generated case citations and quotations in submissions was fined and ordered to attend an education course on GenAI. In another,[7] the court fined a lawyer for filing pleadings containing unchecked AI hallucinations as case citations. The court stressed that, if AI is used, any AI-derived content must be checked for accuracy. It also advised against relying on a defence of ignorance as GenAI becomes more widespread.
The UK has also seen apparently AI-related incorrect citations. In one case, a challenge by a solicitor to his being struck off,[8] the applicant cited more than 20 cases which either did not exist or were incorrect. Although this was claimed to be the result of Google searches rather than AI, the citations were never withdrawn or properly explained. To mark its displeasure, the court struck out the case as an abuse of process.

In another case,[9] multiple non-existent cases were cited. Although the court made no finding on whether AI had been used, the applicant’s solicitor and barrister were each ordered to pay £2,000 to the respondent in wasted costs. The court noted that, if AI had been used, that would have been sufficient to ground a negligence action, and said that the solicitor and barrister should each have reported themselves to their professional bodies for failing to meet the appropriate standards.

That case, and another, were then referred to a King’s Bench Divisional Court sitting,[10] where the regulatory guidance was reviewed. Its publication alone was described as “insufficient to address the misuse of artificial intelligence”, and the court held that “more needs to be done to ensure that the guidance is followed and lawyers comply with their duties to the court.” In noting that a copy of the judgment would be sent to the Bar Council and the Law Society, the President of the King’s Bench Division, Dame Victoria Sharp, invited them “to consider as a matter of urgency what further steps they should now take.”
The issue of incorrect AI-generated citations has also arisen in Australia. In one family law case,[11] the lawyer used an AI research tool embedded in legal software, which generated a list of non-existent citations and summaries. The lawyer claimed he did not fully understand how the software worked. He apologised for his conduct, but the court referred the matter to the state legal services board for investigation in the public interest. In another case,[12] GenAI was used and produced case citations that did not exist. The practitioner amended the submissions and apologised for his conduct, but was also referred to the regulator.
Conclusion
AI can be a useful tool. In discovery review and very formulaic work, it can be an excellent cost-saving resource. However, great caution is needed when using it to identify and apply legal principles or to reference cases.

For individuals who may be tempted to use AI to formulate and present their case, there is no substitute for instructing a qualified practitioner who can give proper advice and check source materials. Practitioners who use AI, especially to identify source material, should ensure that anything being relied upon is checked thoroughly. Client confidentiality must also be protected when engaging with any GenAI tool.

Ultimately, anyone using AI as part of a litigation process should exercise caution. For litigants, it may affect the outcome of the case. For practitioners, it may have liability and professional conduct consequences. Given that solicitors and barristers have professional obligations not to mislead the court, there would appear to be scope for the Irish courts to impose sanctions in appropriate cases, including wasted costs orders and regulatory referrals. In addition, it is important to remember that engagement with an AI platform does not attract privilege.
For more information and expert advice on commercial disputes, contact a member of our Commercial Disputes team.
The content of this article is provided for information purposes only and does not constitute legal or other advice.
[1] [2025] IEHC 172
[2] [2024] IEHC 697
[3] Kruse v Karlen 692 S.W.3d 43 (Mo Ct App 2024)
[4] In the Matter of Samuel 206 N.Y.S.3d 888
[5] 678 F. Supp. 3d 443
[6] Gauthier v Goodyear Tire & Rubber Co 2024 U.S. Dist. LEXIS 214029
[7] Smith v Farwell No. 2282CV01197 (Norfolk, SS. Mass. Superior Court, 12 February 2024)
[8] Bandla v Solicitors Regulation Authority [2025] EWHC 1167 (Admin)
[9] R (on the application of Ayinde) v Haringey LBC [2025] EWHC 1040 (Admin)
[10] R (on the application of Ayinde) v Haringey LBC & Al-Haroun v QNB [2025] EWHC 1383 (Admin)
[11] [2024] FedCFamC2F 1166
[12] Valu v Minister for Immigration and Multicultural Affairs (No 2) [2025] FedCFamC2G 95