A $1.1 million federal contract awarded to Deloitte, a global consulting firm, is now under intense scrutiny following a series of alarming errors linked to the use of artificial intelligence. The deal, intended to guide the deployment of AI within the federal government, comes as Deloitte faces mounting criticism over inaccuracies in reports delivered to multiple governments.
The controversy ignited when fabricated citations surfaced in a Deloitte report for Newfoundland and Labrador’s Health Department, a document that cost the province nearly $1.6 million. One cited article, purportedly co-authored by a nursing professor, simply doesn’t exist. This revelation followed a similar incident in Australia, where Deloitte admitted to providing a government report riddled with AI-generated citation errors.
Experts are voicing serious concerns, questioning the wisdom of entrusting Deloitte with further AI guidance. “If ESDC is standing by this contract, I really do begin to question their judgment,” stated Robert Shepherd, a professor specializing in Canadian public management. The core issue isn’t just about flawed research, but a potential erosion of public trust.
Federal officials defend the contract, asserting its scope differs from the problematic projects elsewhere. They emphasize that the agreement, running until July 2026, includes stipulations regarding AI usage and promises consequences for non-compliance. However, this assurance rings hollow for some, given the recent pattern of errors.
Newfoundland and Labrador’s premier has demanded accountability, signaling a potential bid to recoup funds spent on the flawed report. A provincial review of AI use by employees and consultants is currently underway, seeking to understand the extent of the problem and prevent future occurrences.
Public Services and Procurement Canada has responded by requiring suppliers to disclose up front any use of AI in their work, aiming to ensure quality and value for money. But experts argue this is merely a reactive measure, and that a fundamental shift in oversight is needed.
The onus, they say, lies with the government to proactively verify the authenticity of research provided by consulting firms. These companies wield significant influence, shaping policies that directly impact citizens’ lives. Accepting flawed work undermines the integrity of the entire process.
Deloitte Canada maintains that the recommendations in the Newfoundland and Labrador report remain valid, despite the citation errors. The firm says AI was “selectively used” to support research but has declined to elaborate on the specifics, insisting it is continually refining its AI practices to ensure accuracy.
The situation highlights a critical dilemma: governments increasingly rely on external expertise, yet the rise of generative AI threatens to deliver generic, unoriginal recommendations. As one expert warned, “The government is going to receive some bland, mediocre recommendation out of a (large language model) that public sector employees could have done themselves.”
Proposed solutions include stricter protocols for reviewing and fact-checking consultant reports, and demanding proof of original research. The stakes are high, extending beyond financial losses to the very foundation of public trust in government and the quality of the policies that shape our lives.