AI Hallucinations: Why Human Review Is Not Optional
April 2026

Generative AI tools are rapidly changing the way we work and can be profoundly useful. However, their tendency to invent facts, quotes or citations that never existed is well documented. Recent high-profile matters show how easily “hallucinations” can appear in external outputs produced by well-respected professionals, resulting in real reputational, legal and financial harm if humans do not check the output first.
Recent cautionary examples
- Deloitte (Australia) – Deloitte’s Australian member firm agreed to partially refund the Australian government’s Department of Employment and Workplace Relations after a $290,000 report it produced was found to contain numerous AI-generated errors, including fabricated references that had to be corrected.
- DL Law Corporation – In Singapore, a lawyer from DL Law Corporation was ordered to pay $800 in costs after court filings were found to cite a non-existent case that a judge concluded had been generated by an AI tool. The court found this to be improper, negligent and a waste of judicial time and resources, potentially damaging the legal profession’s integrity. The court emphasised that professionals remain responsible for verifying submissions they put before a court.
- Sullivan & Cromwell – In the United States, Sullivan & Cromwell admitted to the Bankruptcy Court that a motion filed in proceedings contained AI-generated hallucinations, including fabricated case citations, mischaracterised authorities and quotations attributed to the court that did not exist. Sullivan & Cromwell stated that its internal policies require lawyers to complete mandatory AI training and to independently verify all AI-generated output, but acknowledged that those procedures were not followed during the preparation of the filing.
What these examples reveal
- False confidence – AI outputs can sound authoritative, complete with plausible citation formats or quotations, while being factually false. This breeds complacency: users who accept such outputs without verification can expose their organisation to significant consequences.
- Hidden use of AI – Clients and stakeholders can be blindsided if an organisation relies on AI in its workflows, particularly where provenance is not checked and outputs are not verified. Regulators and clients increasingly expect clear disclosure of AI use.
- Legal and regulatory risks – Depending on its business, an organisation may well have contractual, regulatory, professional and sometimes statutory obligations to verify the materials it produces. Using AI without sufficient human review can expose individuals and organisations to sanctions, fee awards, contract remedies or professional disciplinary proceedings.
- Insurance risks – Use of AI without sufficient human review may breach an organisation’s obligations under its professional indemnity insurance, with the result that claims stemming from the incorrect output fall outside the scope of coverage and go uninsured.
- Reputation and financial harm – Organisations risk client loss, corrective costs, refunds and public backlash when AI assisted work is found to contain fabricated outputs.
Practical measures
Outlined below are some practical measures organisations should consider.
- Human in the loop. Treat AI as a drafting assistant: verify every factual claim, quote and citation against primary sources, and keep a human in the loop at every stage of the production process.
- Record the workflow. Keep an auditable trail of prompts, model versions and human edits so mistakes can be traced and corrected.
- Clear policies and staff training. Create organisation-wide rules on permitted AI uses and mandatory verification steps, and train staff on responsible use and review.
- Test tools before relying on them. Never assume a vendor’s “hallucination-free” claim is absolute. These recent matters show hallucinations remain an issue for AI tools.
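For organisations that want a concrete starting point for the "record the workflow" step above, an auditable trail can be as simple as a structured log entry per AI-assisted drafting step. The sketch below is illustrative only; the `make_audit_record` helper, its field names, and the model identifier are assumptions, not a prescribed schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_audit_record(prompt, model_version, ai_output, reviewer, verified):
    """Build one auditable record of a single AI-assisted drafting step.

    Hashing the output (rather than storing it verbatim) keeps the log
    compact while still allowing a later reviewer to prove exactly which
    text was produced at this step.
    """
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "model_version": model_version,
        "output_sha256": hashlib.sha256(ai_output.encode("utf-8")).hexdigest(),
        "human_reviewer": reviewer,
        "verified_against_primary_sources": verified,
    }

# Hypothetical example: log one drafting step that a named human has verified.
record = make_audit_record(
    prompt="Summarise the key holdings of the cited cases",
    model_version="example-model-v1",  # placeholder model identifier
    ai_output="Draft summary text...",
    reviewer="A. Associate",
    verified=True,
)
print(json.dumps(record, indent=2))
```

Even a lightweight record like this makes it possible to trace a fabricated citation back to the prompt, model version and reviewer involved, which is what turns "keep a human in the loop" from a policy statement into an enforceable process.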
How we can help
Whilst AI will keep adding speed and scale to professional work, these matters highlight AI tools’ tendency to hallucinate and make rigorous human oversight non-negotiable for any key output. The above examples are timely reminders that, when deploying AI, human review processes are what protect an organisation’s reputation and maintain stakeholder trust.
To learn more about how we can support your organisation in managing AI compliance or risk management, please contact James Hutchinson and Jonathan Booton.