Published on October 25, 2024
As artificial intelligence (AI) becomes ubiquitous, it’s reshaping decision-making in ways that go far beyond the scope of traditional business automation. Using a combination of predictive and generative AI, systems can now make tactical, operational, and strategic decisions at scale. They dynamically adjust product prices, recommend your next binge-worthy TV show, and generate sales and marketing content for large, diverse audiences. Yet scaling such AI use cases requires governance frameworks that do more than just manage data—effective AI governance frameworks encompass systems that continuously learn, adapt, and operate with minimal human intervention.
In this blog, we’ll unpack the differences between data and AI governance, examining the new factors leaders must consider when designing their AI governance programs.
What makes AI governance different from data governance? AI governance focuses on outputs—the decisions, predictions, and autonomous content created by AI systems. As the world turns and data drifts, AI systems can deviate from their intended design, magnifying ethical concerns like fairness and bias. Such off-track systems might invade privacy, inadvertently release intellectual property (IP), and exacerbate nontransparent decision-making. Without appropriate AI governance, businesses risk unintended consequences of these outputs, leading to regulatory challenges and reputational damage.
This raises urgent questions for business leaders: How can we adapt our governance framework to incorporate AI? What new processes need to be implemented to fully understand autonomous decisions?
Traditional data governance focuses on managing the lifecycle of data, which includes findability, accessibility, trustworthiness, and security. While this remains important, it is no longer sufficient. AI systems introduce factors that data governance alone cannot address. AI/ML algorithmic decision-making requires continuous oversight and accountability. Governance structures must evolve to monitor not only the data inputs but also the ever-changing outputs generated by AI.
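To make ongoing oversight concrete, here is a minimal Python sketch of one common check: comparing a live input feature’s distribution against its training-time baseline with a two-sample Kolmogorov-Smirnov test. The data, the simulated drift, and the alert threshold are all invented for illustration; real monitoring would cover many features and both inputs and outputs.

```python
# A minimal sketch of one ongoing-oversight check: detecting input drift by
# comparing live feature values against a training-time baseline with a
# two-sample Kolmogorov-Smirnov test. All values here are illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Distribution of a feature as it looked when the model was trained.
training_baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)

# Simulated live traffic whose distribution has shifted since training.
live_window = rng.normal(loc=0.4, scale=1.1, size=1_000)

statistic, p_value = ks_2samp(training_baseline, live_window)
if p_value < 0.01:  # the alert threshold is a governance choice, not a constant
    print(f"drift alert: KS={statistic:.3f}, p={p_value:.2e}")
```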
Without the right AI governance framework, organizational risk profiles will increase. This raises the question: What is your organization’s appetite for risk? AI-driven decisions, whether they involve optimizing supply chains or influencing customer interactions, are only as effective as the governance structures that support them. If you are not already governing AI with the same rigor you apply to other strategic initiatives, you will put your company at undue risk and competitive disadvantage.
As AI becomes more embedded into business operations and organizational strategies, a fundamental question arises:
How do we govern systems that make autonomous decisions?
Consider real-world cases where this lack of oversight had significant consequences: In 2018, Amazon had to scrap its AI-driven hiring tool due to its bias against female applicants.1 Similarly, predictive policing algorithms have been shown to unfairly target minority communities, exacerbating societal biases rather than mitigating them.2
Image generators also perpetuate stereotypes and bias. In testing Stable Diffusion XL, a popular image generator, reporters from the Washington Post uncovered such tendencies. They write, “In 2020, 63 percent of food stamp recipients were white and 27 percent were Black in the US, according to the latest data from the Census Bureau’s Survey of Income and Program Participation. Yet, when we prompted the technology to generate a photo of a person receiving social services, it generated only non-white and primarily darker-skinned people. For image generators, across the board, results for a ‘productive person,’ meanwhile, were uniformly male, majority white, and dressed in suits for corporate jobs.”3 Note that this is not specific to any one image generator; the tendency is inherent in all of them.4
This occurs because these systems are trained on data that encodes the same societal biases. Left unchecked, AI systems can quickly and systematically perpetuate and amplify stereotypes that may put your company at unnecessary risk. For example, if you are using AI to generate creative images for marketing campaigns at scale, how is that output monitored and governed? Will your marketing content perpetuate biases? These examples demonstrate the importance of AI governance, which is essential to prevent unintended outcomes and ensure that AI decisions align with organizational goals and values.
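As a sketch of what such monitoring could look like in practice, the snippet below shows a human-in-the-loop audit gate for generated assets: every generation is logged, and a fixed fraction is held for human review before release. The generator, the reviewer hooks, and the sample rate are hypothetical placeholders; the governance pattern, not the code, is the point.

```python
# A minimal sketch of a human-in-the-loop audit gate for AI-generated
# marketing assets. generate_asset, publish, and queue_for_human_review are
# hypothetical placeholders; the pattern is: log everything, sample a fixed
# fraction for human review, and block release until it is cleared.
import json
import random
import time

AUDIT_SAMPLE_RATE = 0.10  # review 10% of generated assets (assumption)

def generate_asset(prompt: str) -> dict:
    # Placeholder for a real image/text generation call.
    return {"prompt": prompt, "asset_id": f"asset-{random.randrange(10**6)}"}

def publish(asset: dict) -> None:
    print(f"published {asset['asset_id']}")

def queue_for_human_review(asset: dict) -> None:
    print(f"held for review: {asset['asset_id']}")

def governed_generate(prompt: str) -> None:
    asset = generate_asset(prompt)
    # Audit-log entry: who asked for what, and when.
    print(json.dumps({"ts": time.time(), **asset}))
    if random.random() < AUDIT_SAMPLE_RATE:
        queue_for_human_review(asset)  # released only after human sign-off
    else:
        publish(asset)

governed_generate("smiling customer using our product")
```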
Adding to the urgency, regulatory bodies are now focused on AI. The EU AI Act and similar U.S. proposals such as California’s SB-1047 (recently vetoed by Governor Newsom) are imposing, or signaling, stricter requirements on companies using AI across all industries, especially in sectors like healthcare and finance.5,6,7 These regulations demand transparency, accountability, and fairness, making it essential for businesses to proactively implement robust AI governance frameworks.
Ultimately, business leaders need to ask: Are our AI systems aligned with our business objectives, ethical standards, and legal obligations? Without proper governance, AI introduces new risks that traditional approaches to data management are ill-equipped to handle.
Traditional data governance models typically emphasize data cataloging, stewardship, curation, and policy enforcement, providing a strong foundation for managing information. However, these frameworks focus on data as a static asset—something to be stored, protected, and accessed. In contrast, AI governance deals with algorithms that are dynamic and continuously adjusting based on inputs.
The real difference lies in managing the decision-making process of AI systems. These systems are not static; they generate insights and outcomes that can affect critical business processes—from pricing strategies to risk assessments. This requires a governance model that not only tracks the data but also monitors the evolution of AI/ML models themselves.
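As one hedged illustration of that output-side monitoring, the sketch below tracks a model’s rolling accuracy against labeled feedback and raises an alert when it degrades. The window size and accuracy threshold are illustrative governance parameters, not recommendations.

```python
# A minimal sketch of output-side monitoring: track rolling accuracy on
# labeled feedback and flag degradation. Window and threshold are illustrative.
from collections import deque

class OutputMonitor:
    def __init__(self, window: int = 500, min_accuracy: float = 0.90):
        self.outcomes = deque(maxlen=window)  # recent correct/incorrect flags
        self.min_accuracy = min_accuracy

    def record(self, predicted, actual) -> None:
        self.outcomes.append(predicted == actual)
        accuracy = sum(self.outcomes) / len(self.outcomes)
        # Alert only once a full window of feedback has accumulated.
        if len(self.outcomes) == self.outcomes.maxlen and accuracy < self.min_accuracy:
            print(f"degradation alert: rolling accuracy {accuracy:.2%}")

monitor = OutputMonitor()
monitor.record(predicted=1, actual=1)  # called for each scored decision
```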
Another challenge in AI governance is transparency—or rather, the lack of it. Many AI systems operate as "black boxes," meaning that while they produce results, the internal process is not transparent, even to the data scientists who designed them. This is fundamentally different from traditional data transformations (and the governance frameworks that structure them), where data lineage and provenance are well-understood, traceable, and auditable.
In AI, predictions emerge from layers of ML algorithms, making it difficult, and sometimes impossible, to explain exactly how or why an output was reached. This lack of explainability presents significant risks, especially in highly regulated industries. For example, under the U.S. AI Bill of Rights and the Equal Credit Opportunity Act (ECOA) and its Regulation B, a business must be able to clearly articulate how its AI arrived at a particular conclusion to justify that decision to regulators, customers, or stakeholders.8,9 If you were denied a mortgage, job, or medical treatment, wouldn’t you want an explanation that goes beyond “Because the AI said so”?
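Explainability tooling can narrow, though not eliminate, this gap. Below is a minimal sketch using the open-source SHAP library to attribute a single decision from a hypothetical tree-based credit model to its input features; the model, feature names, and data are invented for the example and are not a compliance-grade implementation.

```python
# A minimal sketch of post-hoc explainability with SHAP feature attribution.
# The credit model and its features are hypothetical and for illustration only.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic applicant features and a stand-in approve/deny label.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "income": rng.normal(60_000, 15_000, 500),
    "debt_to_income": rng.uniform(0.05, 0.6, 500),
    "credit_history_years": rng.integers(0, 30, 500),
})
y = (X["debt_to_income"] < 0.35).astype(int)  # stand-in decision rule

model = GradientBoostingClassifier().fit(X, y)

# SHAP attributes a prediction to individual features, yielding an
# auditable, per-decision explanation rather than "the AI said so."
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])

for feature, contribution in zip(X.columns, shap_values[0]):
    print(f"{feature}: {contribution:+.3f}")
```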
One of the most significant concerns in AI, which governance must address, is the perpetuation of biases and stereotypes—especially for historically marginalized and disenfranchised groups. AI systems rely heavily on data, and in the case of LLMs, the data used to train these systems reflects societal ills, resulting in decisions and outputs that will mirror and possibly amplify these biases.
This is not a theoretical risk—it has already played out in real-world scenarios, like facial recognition software that misidentifies individuals based on race and gender. In a more sobering example, in 2021, Meta’s algorithms promoted posts that encouraged violence against the Rohingya, an ethnic minority group in Myanmar.10,11 According to a report from Amnesty International, Meta’s algorithms “proactively amplified” inflammatory, anti-Rohingya content, and Meta disregarded appeals to curb hate-mongering on its platform while profiting from the boost in engagement.
Many factors contributed to this, but two stand out: 1) the algorithms’ objective function was to maximize engagement, which promoted hate speech to drive still more engagement in an ugly, vicious cycle, and 2) the lack of human intervention by Facebook’s leadership team, even though they were aware of the problem.
In this context, AI governance must include protocols for bias detection and mitigation. Companies need to ensure that their AI systems are not only producing accurate results but are doing so fairly and equitably. This is not as easy as it sounds; did you know there are over 20 different algorithmic definitions of fairness?12 As a business, who is accountable for selecting the right metric?
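To make the metric-selection problem tangible, here is a small hand-rolled sketch contrasting two of those definitions, demographic parity and equal opportunity, on synthetic decisions: on the same predictions, one gap can be zero while the other is not. The data and group labels are fabricated for illustration.

```python
# A minimal sketch contrasting two common fairness metrics on the same
# predictions. Data and group membership are synthetic and illustrative.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])
group  = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

def selection_rate(pred, mask):
    # Fraction of a group receiving the positive decision.
    return pred[mask].mean()

def true_positive_rate(true, pred, mask):
    # Approval rate among a group's truly qualified members.
    positives = mask & (true == 1)
    return pred[positives].mean()

# Demographic parity: do both groups receive positive decisions at the
# same rate, regardless of the true label?
dp_gap = abs(selection_rate(y_pred, group == "a") -
             selection_rate(y_pred, group == "b"))

# Equal opportunity: among truly qualified individuals, are both groups
# approved at the same rate?
eo_gap = abs(true_positive_rate(y_true, y_pred, group == "a") -
             true_positive_rate(y_true, y_pred, group == "b"))

print(f"demographic parity gap: {dp_gap:.2f}")  # 0.00 here
print(f"equal opportunity gap:  {eo_gap:.2f}")  # nonzero here
```

Open-source toolkits such as Fairlearn and AIF360 implement many of these metrics, but choosing which definition encodes “fair” for a given decision remains a business and ethics call, not a purely technical one.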
With the EU AI Act, the US Executive Order on AI, the FTC’s stance on AI (Section 5 of the FTC Act), and a slew of state and local regulations enacted and pending, AI governance frameworks must be ready to adapt to the impending onslaught of compliance requirements.13 Now, there are differences in how the EU and the US are approaching AI regulations; while the EU is taking a top-down, compliance-based approach, the US is unfortunately only at the voluntary guidelines stage. Nevertheless, companies that fail to align their AI practices with these regulations risk fines, litigation, and reputational damage. Even though the US doesn't have overarching AI regulations in place (yet), businesses can still be penalized for misusing AI outputs, as in the 2022 Equifax credit score scandal, in which AI systems miscalculated credit scores, causing blameless individuals to be denied loans and prompting a lawsuit.14
Business leaders need to treat AI governance as an ongoing strategic priority rather than a one-time, check-box compliance exercise. It requires a shift in mindset—from governing data as a static resource to governing AI as a dynamic agent that shapes business decisions and outcomes. Businesses that wait for regulations to tighten will find themselves scrambling to implement costly fixes. Proactive governance ensures compliance before the law catches up.
AI is transforming the way businesses operate, with these systems making decisions that impact everything from pricing strategies to customer interactions. As we’ve explored, AI governance goes far beyond traditional data management. It requires oversight not only of the data that fuels AI systems but also of the decisions these systems make—decisions that can have wide-reaching ethical, legal, and societal implications.
The need for AI governance is clear. Without it, organizations risk unintentional bias, lack of transparency, and potential regulatory penalties. As AI becomes more central to business strategy, governance frameworks must evolve to ensure that AI systems remain aligned with corporate values, legal standards, and ethical principles.
In the next part of this series, we will dive into practical steps businesses can take to implement effective AI governance frameworks. We’ll explore how organizations can build accountability, improve transparency, and mitigate risks to ensure AI systems are not only efficient but fair and trustworthy.
1. Dastin, Jeffrey. 2018. “Insight - Amazon scraps secret AI recruiting tool that showed bias against women.” Reuters. https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/.
2. Heaven, Will Douglas. 2020. “Predictive policing algorithms are racist. They need to be dismantled.” MIT Technology Review. https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/.
3. Tiku, Nitasha, Kevin Schaul, Szu Y. Chen, Alexis S. Fitts, Kate Rabinowitz, and Karly D. Sadof. 2023. “AI generated images are biased, showing the world through stereotypes.” Washington Post, November 1, 2023. https://www.washingtonpost.com/technology/interactive/2023/ai-generated-images-bias-racism-sexism-stereotypes/.
4. Zhou, Mi, Vibhanshu Abhishek, Timothy Derdenger, Jaymo Kim, and Kannan Srinivasan. 2024. “Bias in Generative AI.” arXiv preprint arXiv:2403.02726. https://arxiv.org/abs/2403.02726.
5. “EU Artificial Intelligence Act: Up-to-date developments and analyses of the EU AI Act.” n.d. Accessed October 3, 2024. https://artificialintelligenceact.eu/.
6. “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” 2023. The White House. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.
7. “SB-1047 Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.” n.d. California Legislative Information. https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047.
8. “Financial Regulatory Agencies.” 2024. Center for American Progress. https://www.americanprogress.org/article/taking-further-agency-action-on-ai/financial-regulatory-agencies-chapter/.
9. “Artificial Intelligence and Machine Learning in Financial Services.” 2024. Congressional Research Service. https://crsreports.congress.gov/product/pdf/R/R47997.
10. Fergus, Rachel. 2024. “Biased Technology: The Automated Discrimination of Facial Recognition.” ACLU of Minnesota. https://www.aclu-mn.org/en/news/biased-technology-automated-discrimination-facial-recognition.
11. “Myanmar: Facebook's systems promoted violence against Rohingya; Meta owes reparations – new report.” 2022. Amnesty International. https://www.amnesty.org/en/latest/news/2022/09/myanmar-facebooks-systems-promoted-violence-against-rohingya-meta-owes-reparations-new-report/.
12. Castelnovo, Alessandro, Riccardo Crupi, Greta Greco, Daniele Regoli, Ilaria G. Penco, and Andrea C. Cosentini. 2022. “A clarification of the nuances in the fairness metrics landscape.” Scientific Reports. https://www.nature.com/articles/s41598-022-07939-1.
13. Liu, Henry. 2024. “FTC Announces Crackdown on Deceptive AI Claims and Schemes.” Federal Trade Commission. https://www.ftc.gov/news-events/news/press-releases/2024/09/ftc-announces-crackdown-deceptive-ai-claims-schemes.
14. Kerner, Sean M. 2022. “How AI 'data drift' may have caused the Equifax credit score glitch.” VentureBeat. https://venturebeat.com/data-infrastructure/did-data-drift-in-ai-models-cause-the-equifax-credit-score-glitch/.