
Three Real-World AI Governance Failures

by Rob Valdez
Dec 21, 2024

Artificial Intelligence (AI) has the potential to revolutionize industries, improve efficiency, and unlock new opportunities. But when AI systems fail or are poorly governed, the consequences can be severe. 

Real-world cases show the need for robust AI governance to ensure accountability, transparency, and fairness. Here are three examples where governance shortcomings led to significant problems, illustrating why good AI governance matters.


1. Air Canada’s Chatbot Blunder

In February 2024, a Canadian tribunal ordered Air Canada to compensate a customer who had been misled by the airline's AI chatbot. The customer, traveling to attend a funeral, was told by the chatbot during booking that he could claim a bereavement discount within 90 days after his flight. Relying on this information, he completed his travel plans but was later denied the discount, because Air Canada's actual policy required bereavement fares to be requested before traveling, not after.

When the man challenged this decision, Air Canada argued that the chatbot had linked to the actual policy, implying that the customer should have read the fine print. The tribunal sided with the customer, ruling that the chatbot, acting as a representative of Air Canada, had provided misleading information. The company was held accountable under the principle that it was responsible for the AI's statements and their impact on the customer.

This case illustrates a critical governance principle: accountability (one of the OECD's Five AI Governance Principles). AI systems must be designed and deployed with clear oversight mechanisms to ensure organizations take responsibility for the actions and outputs of their AI. In this instance, Air Canada’s failure to accept accountability upfront compounded the issue, damaging their reputation and customer trust.

2. Apple Card and Alleged Bias

In 2019, a tweet from software entrepreneur David Heinemeier Hansson ignited a firestorm around the Apple Card. He alleged that the card’s algorithm had granted him a credit limit 20 times higher than his wife’s, despite their similar financial profiles. The claim gained traction when others, including Apple co-founder Steve Wozniak, shared similar experiences.

A subsequent investigation by the New York Department of Financial Services eventually determined that the algorithm was not discriminating based on gender. Instead, discrepancies arose from complex financial factors, such as whose name was on certain accounts or mortgages. While the investigation cleared the algorithm of intentional bias, the controversy exposed a lack of transparency and explainability (together, representing another of the OECD's Five AI Governance Principles) in the system. Customers seeking explanations for credit decisions encountered opaque processes and inadequate responses.

This case highlights the importance of ensuring AI systems are both transparent and explainable. Organizations leveraging AI should be able to justify decisions in a way that users can understand, particularly in sensitive areas like credit and lending, where relevant laws and regulations apply.
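To make the idea of explainability concrete, here is a minimal, hypothetical sketch. The model, feature names, and weights below are invented for illustration; they are not Apple's or Goldman Sachs' actual model. The point is that with a simple, interpretable model, each feature's contribution to a credit score can be reported back to the applicant.

```python
import math

# Hypothetical weights for an illustrative credit-scoring model.
# These names and values are invented for this sketch.
WEIGHTS = {"income": 0.8, "utilization": -1.2, "late_payments": -0.9}
BIAS = 0.5

def score(applicant):
    """Logistic score in (0, 1); higher suggests greater creditworthiness."""
    z = BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def explain(applicant):
    """Per-feature contribution to the score, so a decision can be justified."""
    return {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}

applicant = {"income": 1.2, "utilization": 0.9, "late_payments": 2.0}
print(f"Score: {score(applicant):.3f}")
for feature, contribution in explain(applicant).items():
    print(f"  {feature}: {contribution:+.2f}")
```

With an opaque model, no such breakdown is available, which is exactly the gap the Apple Card controversy exposed: customers asking "why?" received no usable answer.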

3. The COMPAS Algorithm and Criminal Justice

One of the most consequential examples of an AI governance challenge lies in the use of AI within the criminal justice system. The COMPAS algorithm (Correctional Offender Management Profiling for Alternative Sanctions) has been used in U.S. courts to assess the likelihood that a defendant will reoffend, a prediction that can inform the severity of sentencing decisions.

A 2016 investigation by ProPublica raised concerns that COMPAS was biased against Black defendants, finding that they were far more likely than White defendants to be incorrectly labeled high risk (a false positive).
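The disparity ProPublica reported can be checked with a simple fairness metric: compare the false positive rate (the share of people who did *not* reoffend but were still flagged high risk) across groups. The sketch below uses a small invented dataset, not COMPAS data, purely to show the computation.

```python
# Hypothetical records, not COMPAS data: (group, flagged_high_risk, reoffended).
records = [
    ("A", True,  False), ("A", True,  False), ("A", False, False),
    ("A", True,  True),  ("A", False, True),
    ("B", True,  False), ("B", False, False), ("B", False, False),
    ("B", True,  True),  ("B", False, True),
]

def false_positive_rate(records, group):
    """Share of non-reoffenders in `group` who were flagged high risk."""
    negatives = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in negatives if r[1]]
    return len(flagged) / len(negatives)

for g in ("A", "B"):
    print(f"Group {g} false positive rate: {false_positive_rate(records, g):.2f}")
```

In this toy data, group A's false positive rate is double group B's even though both groups reoffend at the same rate, which is the pattern of disparity a governance review would need to investigate.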

The COMPAS algorithm is proprietary, meaning its underlying methodology and data are not publicly accessible. This lack of transparency undermines trust and prevents independent verification of its fairness. While Northpointe, the company behind COMPAS, disputed ProPublica’s findings, the controversy underscored the serious ethical and societal risks of opaque AI systems in high-stakes applications.

The COMPAS case emphasizes the need for governance principles like transparency, fairness, and oversight in AI systems. When algorithms impact fundamental rights and freedoms, such as access to justice, ensuring their accountability and fairness should be non-negotiable.

Why AI Governance Matters

The above examples, ranging from customer service failures to financial decisions and criminal justice, demonstrate the consequences of poor AI governance.

Effective governance frameworks help balance the benefits and risks of AI by:

  1. Assuring Accountability: Organizations must take responsibility for their AI systems, including their outputs and impacts.

  2. Promoting Transparency: Users should understand how AI decisions are made and have recourse when those decisions are disputed.

  3. Upholding Fairness: AI systems should be free from biases that could lead to discrimination or unequal treatment.

  4. Maintaining Trust: Transparent and accountable systems foster confidence in AI, ensuring its continued adoption and success.

Conclusion

The real-world failures of AI governance serve as powerful reminders of the stakes involved. As AI becomes more prevalent and more integrated into our lives, the principles of accountability, transparency, and fairness should guide its development and deployment.

By learning from past mistakes and implementing robust AI governance frameworks, organizations can harness the transformative power of AI while minimizing risks and upholding societal values.


Are you interested in assessing your organization's AI risk? Get on the waitlist for the upcoming AI Risk Assessment Template!

