Helmuth Ludwig of Southern Methodist University and Benjamin van Giffen of the University of Liechtenstein began developing research-based guidance for board-level AI oversight even before ChatGPT went into wide release. Much has changed since then, but the pillars of their framework have stood the test of time.
AI introduces business opportunities, competitive threats, capital investment requirements, evolving risks, and — for boards — the need to develop competence in all of the above in order to fulfill their oversight responsibilities. More than 62 percent of directors say their boards are setting aside agenda time to discuss AI today, according to a 2025 National Association of Corporate Directors (NACD) survey, compared with just 28 percent in 2023. But how can they best spend that time to guide their company’s AI adoption in ways that align with strategy, mitigate risk, and deliver real business value?
Dr. Helmuth Ludwig, professor of strategy and entrepreneurship and the Cox executive director of the Hart Institute for Technology Innovation and Entrepreneurship at Southern Methodist University’s Edwin L. Cox School of Business, and Dr. Benjamin van Giffen, associate professor of information systems and digital innovation specializing in board-level AI governance at the University of Liechtenstein, have been fine-tuning a framework to help board members answer that question.
First developed in 2022, the framework grew out of research into how boards were addressing AI at the time. The researchers identified four groups of board-level AI issues — strategic oversight, capital allocation, AI risk, and technology competence — with best practices for establishing AI governance in each area.
This year, Ludwig and van Giffen revisited their work in partnership with NACD, updating their AI governance framework based on insights gathered from more than 100 boards. (See Sidebar: A 4-Pillar Framework for AI Oversight). The pillars remain standing, but some emphasis has shifted. And the researchers have fleshed out the framework with key questions board members should ask and red and green flags to look out for in each area.
The Heller Report recently talked to Ludwig and van Giffen about the significant shift in emphasis from risk mitigation to strategic opportunity, what directors misunderstand about enterprise AI, why AI should be incorporated into more committee charters, what a board’s role should be in AI oversight, and why these governance frameworks must remain living documents.
Stephanie Overby: In the introduction to the most recent version of your framework, you note that “AI presents businesses and their boards with a combination of urgency and uncertainty.” Do corporate boards view AI more as a risk to manage or an opportunity to pursue responsibly?
Helmuth Ludwig: Board members have a fiduciary duty. Their first responsibility is protecting the assets of the company. So, there’s a natural inclination to look at things from a risk perspective. In 2022, pre-ChatGPT, when there were lots of articles around the reputational risk of AI, there was a tendency to think about AI in board discussions almost exclusively from a risk perspective. That’s changed materially. It’s no surprise given board members are lifelong learners. What we see more and more are board members saying, “AI may be an opportunity for us,” very often from a cost perspective, but increasingly from a strategic perspective as well.
Do boards have enough time to devote to AI?
Benjamin van Giffen: NACD surveyed 300 directors, and 62 percent said that there is time for AI discussions on the agenda. Certainly, there’s competition with other topics, but AI is increasingly integrated. It’s also clear that once-a-year sessions are not enough given the rate of advancement in this area.
Ludwig: When you think about AI at the board level, it’s “nose in, hands off” as with any other business initiative. The responsibility lies with the leadership of the company.
The board does have a role in making sure capital is allocated to build the right infrastructure. You need a defined set of tools in place rather than letting a thousand flowers bloom. Another critical element is investing in a clear data strategy. These are platform investments. They don’t have a short-term ROI.
Then you have to fund AI experiments as well as the ability to scale the successful ones quickly. MIT’s State of AI in Business 2025 report found that 95 percent of the GenAI projects that companies started didn’t scale. It’s important that the board understands that most AI experiments will fail, and the goal is to let them fail quickly.
Speaking of speed, does the rate at which AI is advancing mean that boards should approach governance differently than they do other business matters for which they provide oversight?
Ludwig: Boards play a role in strategy and the future of the company, making sure that AI is part of the discussion.
A second role they play is in capital allocation. But their next question is always, “What’s the ROI?” When a quarter is tight, they may want to cut back on AI experimentation, but that’s not a recipe for success when only a few will scale. It’s like playing the drum. The only way to learn to play the drum is to play the drum. If you’re not out there trying things, you’re never going to find what works.
The third role they play is in making sure AI competence is there at all levels of the company, from the nominating committee finding candidates with domain knowledge and a specific competency in AI, to succession planning that ensures a good base of people who know how to apply AI.
Van Giffen: It’s important to distinguish between local and enterprise projects. Scaling systems across an enterprise or ecosystem or into products requires additional investment and governance that identifies and prioritizes scaled AI value creation from the outset.
What do board members most misunderstand with regard to AI?
Van Giffen: Some board members use ChatGPT personally or to digest board materials. That has value, but it’s not a strategic investment, and it carries the risk of making potentially confidential information broadly available. It’s not impacting the company or its services or adding value to the business model itself. The risk is thinking that the company is already an AI company because directors or executives are using AI personally.
A second pitfall is to look at the topic of AI and say, “It’s not relevant for us. We’ll look into it in a year.” That’s neglecting the speed at which the technology is evolving. New companies are emerging quite fast. AI change happens slowly and subtly at first — and then suddenly.
Ludwig: The other problem is when there’s no discussion of AI at all. And then a board member sees a headline in The Wall Street Journal and says we have to do something and do it fast. The pendulum swings. It’s not necessarily wrong but can lead to the wrong drivers. Then management feels pressure to show something to the board. You need a stable AI framework, and it should be based on business strategy.
But the worst is when they bring in one board member who has a lot of background in IT or data and AI and they think, “Now, this person is going to take care of it”. That would be like bringing in an investment banker and having them make all the decisions about M&A deals. No board would do that.
The best boards take a different tack. They bring in some people with competence, and they make sure the whole board gains experience through active educational sessions. We see a lot of value in “go-and-sees,” where a company already has best practices emerging in parts of the organization. A board member of an insurance company told us they learned the most from what their company was already doing with AI in pockets of the organization.
How much AI understanding do board members need?
Van Giffen: Board directors don’t need to be data science experts. It’s not about in-depth technical competence. Directors need business-focused and strategic competence regarding AI’s meaning, its potential, and its risks. This means learning about the value creation mechanisms and scale economics of AI, as well as how it applies in their business context. One option is to bring in speakers on topics like prediction technologies, scaling and scalability, S-curves, exponential effects, upgrading processes, systems of record versus systems of the future, and the full range of cognitive capabilities (reasoning, understanding, predicting, and comparing) to help directors envision how AI can create value for customers and employees.
The NACD survey found that just 23 percent of boards have assessed how AI disruption might happen or where it may come from. Are there aspects of AI that boards should be devoting more time to, but aren’t yet? Or things they’re focused on that are not the best use of their limited time?
Ludwig: You’re hitting on an interesting area. Boards purposely distribute work among committees. But only 25 percent of them have incorporated AI into their committee charters. That means three-quarters of boards are not explicitly declaring AI as important using the established committee structure.
Typically, boards have three committees. The audit committee addresses risk, and they could play a critical role given their competence in enterprise risk management. The compensation committee focuses on HR and talent. They could address succession and also talent development from an AI perspective. And the nominating committee could play a key role identifying the best talent for the board.
What was the impetus for developing your framework for board oversight of AI? And how did you develop it?
Van Giffen: We wanted to offer a holistic perspective: something that everyone can look at to better understand the topic and that could also spark a discussion. We discovered which topics were relevant by talking with board members representing around 100 boards.
Ludwig: The framework did not just appear. We had some hypotheses to begin and did some interviews. We looked at best practices. Then we defined the framework, tested it, and published it in MIS Quarterly Executive. We tested and discussed the framework with around 300 board members at the NACD annual summit.
Two years later, when we expanded the research together with NACD, the framework was confirmed as a helpful structure for boards thinking through application of AI in their companies.
The most important evolution was the shift in focus from risk to strategy. There had been too much focus on risk. The biggest risk with AI is in not taking a risk.
What changes did you make to the framework when you reviewed it with the help of NACD?
Ludwig: We added questions that the board can and should ask. Are we investing enough/too much into AI? Do we allocate capital for AI in the right places? Are we exposed to new AI risks? How does our fiduciary responsibility translate into an AI world?
We also added the green flags and red flags, which are very practical for the implementation of the concept. This avoids a purely technical and tool discussion. My recommendation to CIOs is: if they’re asked to talk to the board about AI, they should invest a lot of time talking to business leaders about AI in business language first.
Another red flag is shadow AI. The MIT State of AI in Business 2025 report had another important finding: 40 percent of companies have enterprise licenses for GenAI, but 90 percent of employees are using LLMs. They use them to prepare for important meetings, making confidential information accessible as training data to a public LLM like ChatGPT. That’s a high risk for data loss. You can only avoid that with enterprise licenses and by educating everyone in the company about this risk.
How have the questions and red and green flags been received by board members? Have they needed “cheat sheets” when it comes to AI governance?
Van Giffen: Absolutely. The NACD is not a theoretical organization; they serve directors. They looked at our work and helped us structure our four pillars in a way that can help boards further evaluate their situations.
Ludwig: We started this work early in 2022 and made it available. We had a workshop in 2023; it was immediately packed with 300 directors. That showed us there was a need. You can sometimes underestimate the demand for this kind of information, but as we have continued the research and updated it, it’s grown.
We are also continuous learners. The framework stands, but it will not stay the same forever. The content will change.
Sidebar: A 4-Pillar Framework for AI Oversight

Drs. Helmuth Ludwig and Benjamin van Giffen first developed a four-pillar framework for AI governance at the board level in 2022 and recently updated and expanded it in collaboration with the National Association of Corporate Directors and the Data & Trust Alliance.
Written by Stephanie Overby
Stephanie Overby is an award-winning journalist who currently writes about enterprise IT strategy, technology trends, and business leadership and management topics. Her work has appeared in numerous publications, including CIO, Computerworld, The Wall Street Journal, and NYTimes.com.