Risks of Generative AI: The Case of Deloitte Australia
Details
BECG200
6
-
2026
YES
300
-
-
Australia
Ethics in Business, Ethics and Morality
Abstract
Deloitte Australia, the Australian arm of the global professional services firm Deloitte, came under scrutiny in 2025 after its use of generative Artificial Intelligence (AI) to prepare a report commissioned by Australia’s Department of Employment and Workplace Relations (DEWR) came to light. The 237-page report, commissioned in December 2024, reviewed DEWR’s Targeted Compliance Framework, which is used to automate welfare penalties. Deloitte delivered the report in July 2025, and it was published on DEWR’s website. Scrutiny of the report by a researcher unearthed several flaws, including fabricated citations, references to reports that did not exist, and misquoted judicial statements. These flaws were widely reported in the media, after which DEWR conducted an internal review and submitted a 16-page list of errors and mistakes in the report to Deloitte, which then admitted to having used generative AI to draft parts of the report. Subsequently, Deloitte corrected the report and agreed to partially refund the consultancy fee. Though Deloitte maintained that the report’s findings and recommendations remained unaffected, the incident triggered widespread criticism of the use of generative AI in professional services. It brought to the fore the risks of AI hallucination, the limits of existing review processes, and the challenges organizations face in balancing efficiency gains from AI with the ethics of its use.
Learning Objectives
The case is structured to achieve the following Learning Objectives:
- Understand the risks of AI hallucination.
- Examine governance and control gaps in AI adoption.
- Apply responsible AI principles to professional services contexts.
- Assess the ethical use of emerging technologies.
Keywords
Deloitte Australia; Generative AI; AI hallucination; AI governance; Consulting ethics; Public-sector consulting; Transparency and disclosure; Accountability; Public trust; Responsible AI