Corporate boards can apply familiar oversight principles in new ways to dynamic AI systems, advises consultant and author M.M. Sathyanarayan.
“AI adoption requires a balance between experimentation and discipline,” writes M.M. (Sath) Sathyanarayan in his new book, AI Adoption: Strategies and Tactics for Success. “Boards must guide the organization to invest in capability-building before demanding predictable returns.”
Oversight of dynamic learning systems requires a shift in mindset from overseeing static, rules-based systems. Boards don’t need to abandon everything they know about corporate governance, however. Rather, they “need to expand how they think about governance, applying familiar oversight principles to unfamiliar risks,” says Sathyanarayan, a veteran technology and consulting leader who now advises corporate boards on AI governance.
The Heller Report recently interviewed Sathyanarayan via email about where the boards he works with are in their understanding of AI, what lessons from past technology shifts they can apply, and how boards can fine-tune their governance to manage the singular risks of AI today. He also describes a framework for AI governance that expands as the organization's AI maturity grows.
Stephanie Overby: In your book, you write that “boards don’t need to master the underlying technology — but they do need to recognize how it affects accountability, reputation, and strategic alignment.” Do you think boards grasp the singular nature of AI?
M.M. (Sath) Sathyanarayan: From my experience engaging in board-level discussions—through the Corporate Directors Forum and while researching my book—it’s clear that most boards are still in the early stages of understanding how to govern AI effectively.
One board member told me bluntly: “The state of knowledge is really lousy—with exceptions among tech-savvy directors.” Outside of companies that are building AI products or have tech leaders on their boards, the overall understanding of AI’s implications remains limited.
Boards faced similar knowledge gaps during the early days of cybersecurity, robotic process automation, and cloud computing. Over time, they built the fluency needed to govern well.
Boards don’t need to understand how neural networks work. But they do need to understand what makes AI different and how those characteristics affect shareholder value, operational risk, and reputation. That’s a governance issue, not a technical one.
Are there lessons from previous disruptive technological advances that boards can apply, or is AI unique in its risks and benefits?
Yes, there are valuable lessons from past technology shifts that still apply. Boards can draw on their experience with cybersecurity, digital transformation, and cloud adoption to guide oversight of risk, change management, and long-term investment.
However, AI introduces governance challenges that go beyond traditional frameworks. Unlike rule-based systems, AI systems:
- Produce outputs that may be unexplainable (“black box” decisions).
- Can hallucinate, creating confident but false responses.
- May carry bias from training data or model design.
- Require constant monitoring and updates to remain accurate.
- Can impact brand reputation, customer trust, and even legal compliance.
That means the board’s governance mindset needs to shift—not away from fundamentals, but toward:
- Capability-building with AI applications before ROI measurement.
- A portfolio approach to AI investments that supports experimentation under oversight.
- Context-specific AI governance policies around explainability, model risk, and data quality.
What is the biggest hurdle to effective governance of AI at the board level?
The biggest hurdle is governing without a clear, adaptive understanding of what makes AI different from traditional technologies—and how those differences show up in oversight.
AI introduces non-deterministic behavior and systemic uncertainty that require a broader lens. Unlike traditional systems, AI models make decisions probabilistically—often drawing from massive, unstructured datasets—and in ways that are opaque even to their creators. That demands a shift in governance mindset: from overseeing static, rules-based systems to dynamic, learning systems that can evolve over time.
We hear a lot about responsible AI. You write that it’s “not about perfection, but preparedness.” What does that look like in practice?
With LLMs, boards should understand the governance implications of hallucinations, bias, and lack of transparency. With Agentic AI, the oversight question becomes: What controls are in place if autonomous agents act unexpectedly? When must a human be in the loop?
This is about proactive governance: guiding the organization to adopt AI with foresight, not just speed.
What do you wish boards better understood about AI?
I wish more boards understood that governing AI is not about mastering the technology, but about ensuring the organization is prepared, accountable, and aligned with shareholder interests.
That includes asking: Are we using AI in ways that align with our values and strategic goals? Do we have the right controls and escalation mechanisms if something goes wrong? Is the board seeing AI oversight metrics that match the level of risk and maturity?
AI is not just an IT initiative. It affects customer trust, employee dynamics, compliance exposure, and brand reputation. Boards should focus not only on ROI, but also on whether leadership is building systems that are resilient, fair, and explainable—especially in high-risk domains.
Are there enough AI experts at the board level?
There’s a growing pool of technical AI experts, but a shortage of those who can translate that expertise into the language and priorities of the boardroom.
Today, many boards rely on IT executives to brief them on AI initiatives. While these leaders bring deep technical knowledge, they may not have experience communicating at the governance level, where the focus is on risk, accountability, shareholder impact, and long-term alignment with company strategy.
Until more governance-savvy AI leaders emerge internally, external experts can help fill the gap through targeted board briefings or advisory roles.
How can CIOs do a better job communicating with their boards about AI?
CIOs play a central role in making AI both operationally sound and governable. They oversee infrastructure, integration, and data pipelines. But they also need to help the board understand how AI is being deployed, where the risks lie, and what governance measures are in place.
To do this effectively, their communication must be translated for the boardroom—not focused on implementation details or model selection, but on where AI is being used, what value it's delivering, what risks are emerging, and how they’re being managed.
As one board member put it during a recent discussion: “The CEO needs to make sure that when his staff comes to the board, they don’t bury us in technical jargon. We want to hear business issues, not a data science presentation.”
That’s the heart of it. Boards don’t need to understand system architecture. They need to hear from CIOs in terms of strategic alignment, risk, and accountability. Boards don’t need more dashboards. They need insightful, risk-aware conversations from their CIOs and governance partners.
What is the ideal role of the board in AI governance and oversight? What are the best approaches to achieve that?
The board’s role is not to manage AI, but to ensure that management is deploying it responsibly, strategically, and in alignment with the company’s values and risk appetite.
That starts by making AI a regular agenda item, especially as its impact on operations, customers, and compliance grows. Oversight should focus on:
- The company’s level of AI literacy and preparedness.
- Security posture, including exposure through third-party tools.
- Emerging bias and explainability risks, and their reputational implications.
- Whether business benefits are being realized, relative to the organization’s AI maturity.
Boards should expect management to provide reporting on relevant AI oversight metrics—including both tangible and intangible indicators—such as movement from prototypes to production and improvements in market share, customer satisfaction, employee engagement, or productivity.
You talk about the importance of balancing speed with discipline — using governance approaches that protect the company’s reputation without stalling innovation. Is there a framework that can help?
To balance speed with discipline, organizations need a governance model that can scale with AI adoption and evolve with its risks. Taking a minimum viable governance approach can provide that foundation. Inspired by agile and minimum viable product principles, minimum viable governance emphasizes starting with core governance controls, then expanding oversight iteratively based on actual risk and usage—not hypothetical worst-case scenarios.
Rather than creating rigid, top-down governance structures from day one, this approach encourages organizations to:
- Prioritize governance in high-risk use cases (e.g., hiring, lending, healthcare).
- Begin with lightweight, essential controls—such as documentation, accountability, and risk assessment.
- Scale governance alongside AI maturity, complexity, and impact.
In essence, it reframes governance not as a compliance burden, but as a flexible, risk-managed enabler of responsible AI adoption. Minimum viable governance can align directly with an organization’s AI maturity level. I discuss this in greater detail in the book, but the table below offers a summary of how organizations can match governance with maturity levels.
| AI Maturity Level | Governance Focus | Key Actions |
|---|---|---|
| Level 1: AI Awareness | Basic AI Ethics & Data Oversight | Establish AI principles; conduct risk-awareness training |
| Level 2: AI Experimentation | Risk-Tiered AI Governance | Assign governance ownership; introduce basic risk assessments |
| Level 3: AI Integration | Operationalized Governance | Implement bias detection, model documentation, and explainability |
| Level 4: AI Optimization | Compliance & Audit Readiness | Align with EU AI Act, NIST AI Risk Management Framework and others as applicable; implement regular audits |
| Level 5: AI Leadership | Enterprise-Wide Governance Strategy | Deploy real-time risk dashboards; lead in industry governance |
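
To make the Level 2 "Risk-Tiered AI Governance" row above a bit more concrete, here is a minimal, purely illustrative sketch of how a team might encode risk tiers and the controls each tier requires. The tier names, example domains, and control lists are assumptions added for illustration rather than anything prescribed in the book; the only grounding is the interview's point that high-risk use cases such as hiring, lending, and healthcare warrant stronger oversight.

```python
# Illustrative sketch only: one possible way to encode a risk-tiered AI
# governance policy so every use case maps to a reviewable set of controls.
# Tier names, domains, and control lists are hypothetical examples.

from dataclasses import dataclass

# Controls required at each risk tier; higher tiers build on lower ones.
CONTROLS_BY_TIER = {
    "low": ["use-case documentation", "named business owner"],
    "medium": ["use-case documentation", "named business owner",
               "pre-deployment risk assessment", "data-quality review"],
    "high": ["use-case documentation", "named business owner",
             "pre-deployment risk assessment", "data-quality review",
             "bias testing", "human-in-the-loop review", "board-level reporting"],
}

# Domains the interview flags as high risk.
HIGH_RISK_DOMAINS = {"hiring", "lending", "healthcare"}


@dataclass
class AIUseCase:
    name: str
    domain: str
    customer_facing: bool


def risk_tier(use_case: AIUseCase) -> str:
    """Assign a governance tier from simple, reviewable rules."""
    if use_case.domain in HIGH_RISK_DOMAINS:
        return "high"
    if use_case.customer_facing:
        return "medium"
    return "low"


def required_controls(use_case: AIUseCase) -> list[str]:
    """Return the controls management should evidence before deployment."""
    return CONTROLS_BY_TIER[risk_tier(use_case)]


if __name__ == "__main__":
    example = AIUseCase(name="resume screening assistant",
                        domain="hiring", customer_facing=False)
    print(risk_tier(example), required_controls(example))
```

Encoding the tiers as plainly as this keeps the policy readable by non-engineers while giving management something concrete and auditable to report against as governance scales with maturity.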
Written by Stephanie Overby
Stephanie Overby is an award-winning journalist who currently writes about enterprise IT strategy, technology trends, and business leadership and management topics. Her work has appeared in numerous publications, including CIO, Computerworld, The Wall Street Journal, and NYTimes.com.