In this edition of Cyber Means Business, veteran CISO Jim Routh urges security leaders to rethink their approach to AI governance—not as a risk barrier, but as a foundation for innovation. Drawing on decades of experience leading security programs at organizations including Aetna, MassMutual, and JPMorgan Chase, Routh—who currently serves on several boards and as Chief Trust Officer at Saviynt—makes the case that effective security leadership isn’t about building walls. It’s about building consensus.
For Jim Routh, security must evolve from risk mitigation to enablement. That starts with letting go of the illusion that we can control generative AI—and instead learning to govern its use in ways that reflect the business's values, objectives, and appetite for innovation.
Joan Goodchild: You’ve said that security leaders need to stop trying to contain generative AI and start learning how to govern it. Why do you believe the traditional control-based approach is flawed?
Jim Routh: Because it’s just not feasible. The second that generative AI hit the mainstream, CISOs began putting boundaries around it—restricting access to ChatGPT and trying to lock things down. But the reality is, over 80% of people in North America are using generative AI tools daily. They may not even realize it, but they are. So the idea that we can completely control this technology is a flawed premise.
Instead of pretending we can contain it, security leaders need to support safe exploration. Our job isn’t to shut it down—it’s to enable its use responsibly, with the right controls in place to protect the business and our customers.
How does that shift in mindset enable innovation?
By acknowledging that generative AI is changing how we work—and meeting that change with tools and governance that allow people to use it while managing the business risk. That’s what it means to be a business enabler. We’re on a journey of learning, and our governance models need to reflect that.
You advocate for a use case–driven approach to AI governance. What does that look like in practice?
Rather than drafting a single top-down policy to cover every possible AI scenario, we invite business units to bring forward their use cases. If a team wants to use generative AI for product development or customer communications, they submit a proposal to a cross-functional governance team. That team—made up of stakeholders from security, privacy, legal, architecture, finance, data science, and more—reviews the use case, identifies the risks, and recommends controls specific to that context.
This lets innovation move forward safely and strategically. It also helps us build an enterprise-wide governance model organically, one use case at a time. Over time, we learn which controls work and which don’t—and we refine as we go.
Some leaders might worry that submitting every AI initiative to a governance team could slow progress. How do you address that concern?
That’s a fair concern, but it’s based on the assumption that governance equals bureaucracy. In our model, the governance team isn’t there to say “no”—it’s there to help the business move forward responsibly. If a use case is viable but the right controls don’t exist yet, then we create them. That’s part of the CISO’s job.
It’s not about building a perfect framework from day one. It’s about enabling momentum and learning as we go. Even if we start with 30% of the controls we’ll need long-term, that’s still far more effective than a blanket moratorium that stops innovation cold.
Not every CISO feels equipped to lead that kind of cross-functional collaboration. What’s the role of the CISO in this model?
Facilitation. Not control.
The challenge is that most CISOs came up as subject matter experts. Business leaders will often defer to them situationally—but they won’t necessarily buy into the decisions unless they were part of making them. That’s where things fall apart.
Consensus-building requires someone who can lead without bias. That may be the CISO, but if not, it might be someone from privacy, legal, or enterprise architecture. What matters is that the CISO still has a seat at the table—defining the controls and contributing to the governance process. If we don’t have the skills to lead the group, we support the group. That’s how security shows up as a partner, not a roadblock.
You’ve said that we should stop promoting a “culture of security.” That’s a bold statement. Why?
Because it doesn’t serve the business. I had a senior leader once get flagged by one of those phishing simulation emails. He was embarrassed—so embarrassed, in fact, that he stopped opening any email from the CEO for two and a half weeks. That’s not security. That’s fear.
What we need instead is a culture of resilience. That means fostering an environment where anyone can identify a problem, speak up about it, and contribute to a solution—without blame or shame. It's about learning from incidents, improving our systems, and recovering quickly. That’s how we build long-term business value. These are the behaviors of cyber resilience that are necessary in every part of the enterprise.
How do you know if your organization is getting AI governance right? What are the signs?
Start with this: if your company has a blanket moratorium on generative AI, that’s a sign of unhealthy governance. It usually means the organization doesn’t understand how pervasive the technology already is—and that kind of freeze can stop innovation before it starts or push it into the shadows.
Instead, good governance should enable business decisions that are fast, ethical, and aligned with the company’s values. That can’t come from one person. It requires diverse voices at the table. If you’re building AI strategies in silos, you’re going to miss the mark.
Final thought: What’s your call to action for CISOs who want to lead in this space?
Be a builder. If the controls don’t exist yet for a particular use case, help design them. If the governance framework isn’t clear, help shape it. If you’re not the right person to lead, support the person who is. Just don’t sit on the sidelines.
AI is a generational shift. Security has a critical role to play—not just in protecting the business, but in helping it thrive.

Written by Joan Goodchild
Joan Goodchild is a veteran journalist, editor, and writer who has been covering business technology and cybersecurity for more than a decade. She has written for several publications and previously served as editor-in-chief for CSO Online.