Heller Blog

Cyber Means Business: Malcolm Harkins on Why GenAI Security is a Boardroom Imperative

By Joan Goodchild

Jul 30, 2025

In this edition of Cyber Means Business, longtime security leader Malcolm Harkins, former CISO of Intel and current chief security and trust officer at AI security company HiddenLayer, issues a call to action: if your company is leveraging AI to deliver business value, securing it isn’t optional. It’s a strategic requirement.

Ethics may be the starting point, but it’s not the finish line. “Responsible AI” frameworks are important, but without real protections, they can create a false sense of security. Take policy puppetry, for example, a new class of attack where adversaries manipulate a model’s instructions to bypass built-in safety controls. Or consider overprovisioned large language models (LLMs), AI systems given too many permissions and functions out of the box, which significantly increases the potential attack surface. A third concern is unregulated access to these tools, where the lack of role-based guardrails can lead to accidental leaks of sensitive data. These threats make it clear: AI security isn’t just about ethics. It’s about defending your business against serious, emerging risks.

These are not hypothetical scenarios. They are already impacting real organizations. In the conversation below, Harkins breaks down what security leaders need to understand about these threats, why AI security belongs in the boardroom, and how to act before the risks become reality.

CISO Leadership Takeaways:
  • AI security is not just a technical concern. It is a core business and boardroom issue.
  • Ethical AI design is meaningless without runtime protection—security measures that operate while the AI is running and making decisions, not just during development.
  • “Policy puppetry” demonstrates how easily built-in safety guardrails, like filters that block harmful or inappropriate outputs, can be manipulated.
  • Shadow AI—the use of unsanctioned AI tools by employees—is real. Empower users with safe, approved options instead of relying solely on blocking tools.
  • Frameworks like MLDR (Model Lifecycle Defense and Response) and CaMeL (Content and Memory Layering) help establish secure privilege boundaries by separating what a model knows from what it is allowed to do.
  • If AI systems are being used in ways that could cause financial, reputational, or legal harm, companies may eventually need to disclose those risks to investors or the public—especially if they remain unmanaged.

 

Joan Goodchild: You’ve said securing AI isn’t just an ethical issue—it’s foundational to delivering business value. What’s the risk if companies don’t get this right?

Malcolm Harkins: The reality is this: it’s not responsible if it’s not secure. Most companies today are being told they should have ethical AI frameworks—and they should. These typically focus on fairness, accountability, transparency, and bias mitigation during development. But if your AI models can be subverted at runtime, the integrity of those frameworks collapses.

That’s what we’ve seen with policy puppetry—an unpatchable method that lets bad actors bypass every major LLM’s guardrails. So if a company is deriving material benefit from AI—more revenue, lower costs—and those models are left unprotected, the risk to the business is just as material. Boards and investors should expect transparency about that risk.

Let’s talk about policy puppetry. What does that say about how we’ve approached GenAI risk so far?

It reveals a fundamental flaw. Too many organizations have invested heavily in “responsible AI” principles during model development, but have done little to address what happens when the model is actually running. That’s where runtime security comes in—and it’s been largely overlooked.

Policy puppetry is a perfect example. It’s a method that exploits the model alignment layer, essentially tricking the AI into ignoring its built-in safety rules. Imagine editing the fire escape map in a building so people run toward the fire instead of away from it. That’s what happens when guardrails fail. And guardrails alone won’t save you.

That’s why runtime protection matters. It refers to the safeguards that operate while the model is being used—not just how it was built. That includes monitoring prompts in real time, enforcing access controls, and detecting malicious behavior as it happens. And yes, that protection needs to extend across the full AI stack: from how data flows in, to how outputs are generated, to how decisions are logged and reviewed.
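To make that concrete, here is a minimal sketch, in Python, of what such a runtime guard layer might look like. The pattern list, logger name, and guarded_completion function are illustrative assumptions, not HiddenLayer’s product or any specific vendor’s API.

```python
import logging
import re
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-runtime-guard")

# Hypothetical deny-list of prompt patterns; a real deployment would rely on a
# purpose-built detection model or service rather than a handful of regexes.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal (the )?system prompt", re.IGNORECASE),
]


def guarded_completion(prompt: str, model_call) -> str:
    """Inspect a prompt at runtime, block obvious manipulation, and log the outcome.

    `model_call` is whatever function actually invokes the LLM; it is passed in
    so the sketch stays vendor-neutral.
    """
    timestamp = datetime.now(timezone.utc).isoformat()

    for pattern in SUSPICIOUS_PATTERNS:
        if pattern.search(prompt):
            log.warning("%s blocked: matched %r", timestamp, pattern.pattern)
            raise ValueError("Prompt rejected by runtime policy.")

    response = model_call(prompt)
    log.info("%s allowed: prompt_chars=%d", timestamp, len(prompt))
    return response
```

The regexes themselves would be trivial to bypass; the point is architectural. Every request passes through a layer that can observe, block, and record, independent of whatever alignment the model shipped with.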

For business and board leaders, the takeaway is this: AI is not a set-it-and-forget-it tool. It’s a dynamic, evolving system. If you’re relying on it to make decisions, power customer interactions, or drive revenue, then you’re also inheriting live, unpredictable risk. Without real-time safeguards, it's like flying your business in a blimp full of hydrogen. One spark, and the damage is immediate.

You’ve described today’s general-purpose LLMs as “overprovisioned and underprotected.” What can companies do right now?

First, understand that these models were built to do too much by default. That creates a massive attack surface. Combine that with weak or nonexistent privilege boundaries and you’ve essentially violated basic security principles.

Start by implementing principles that apply traditional cybersecurity thinking to the lifecycle of an AI model. At my company, we refer to this as MLDR, or Model Lifecycle Defense and Response. It’s a framework that covers every stage of an AI model’s lifecycle, from training through deployment and maintenance, and applies cybersecurity practices such as monitoring, access controls, and change management at each stage.

Then there’s CaMeL, which stands for Content and Memory Layering. It helps separate what a model knows (its internal knowledge or memory) from what it’s being asked to do (the prompt). This separation is especially important in environments with sensitive intellectual property or regulatory obligations, because it reduces the risk of unauthorized actions or data exposure during everyday use.

When companies rethink how models are accessed and how prompts are handled, they can start to define roles, restrict access based on job functions, and prevent users from unintentionally triggering unsafe or unauthorized responses. That shift turns AI from an open-ended black box into a managed, auditable system.
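One way to picture that shift is an explicit map from job function to the capabilities the model may exercise on a user’s behalf, checked outside the model before any action is taken. The sketch below is illustrative only; the role names, capabilities, and authorize function are assumptions, not a framework Harkins or HiddenLayer prescribes.

```python
from dataclasses import dataclass, field

# Hypothetical capability catalogue: what the model is allowed to do on a
# user's behalf, scoped per job function rather than granted wholesale.
ROLE_CAPABILITIES = {
    "marketing_writer": {"draft_copy", "summarize_document"},
    "support_agent": {"summarize_document", "lookup_order_status"},
    "finance_analyst": {"summarize_document"},
}


@dataclass
class ToolRequest:
    role: str
    capability: str
    arguments: dict = field(default_factory=dict)


def authorize(request: ToolRequest) -> bool:
    """Allow a model-initiated action only if the caller's role includes it."""
    return request.capability in ROLE_CAPABILITIES.get(request.role, set())


# A finance analyst asking the model to send email is denied, even if a
# manipulated prompt convinces the model that it should.
assert authorize(ToolRequest("finance_analyst", "send_email")) is False
assert authorize(ToolRequest("support_agent", "lookup_order_status")) is True
```

Because the decision happens outside the model, it can be logged and audited even when the model itself has been manipulated.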

You’ve warned against combining data and control instructions in prompts. Why is that so dangerous?

Because it breaks basic security logic. When the same prompt includes both the sensitive data and the instruction for how to use it, you’ve created an all-in-one vulnerability.

Imagine someone saying, “Here’s our customer list—now write an email campaign to target them.” That one input gives access and control at the same time, with no separation or oversight.

That’s why we need clear privilege boundaries. Whether it’s through prompt gateways that filter and inspect requests, tiered access roles based on job responsibilities, or content separation that keeps sensitive information in controlled locations, organizations need to decouple influence from permission.
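Here is a minimal sketch of that decoupling, with hypothetical store, role, and function names throughout: keep sensitive content out of the prompt entirely and pass a reference instead, so a trusted layer decides whether the caller may touch the underlying data before the model ever sees it.

```python
# Hypothetical content store: sensitive data lives behind an ID and is never
# pasted into the prompt, so the instruction and the data travel separately.
CONTENT_STORE = {
    "customer_list_q3": {"owner_role": "marketing_lead", "rows": ["..."]},
}


def resolve_reference(content_id: str, caller_role: str) -> list:
    """Release data only when the caller's role owns it; the model never decides."""
    record = CONTENT_STORE.get(content_id)
    if record is None or record["owner_role"] != caller_role:
        raise PermissionError(f"{caller_role} may not access {content_id}")
    return record["rows"]


def build_prompt(instruction: str, content_id: str, caller_role: str) -> str:
    # The instruction carries influence; the resolver carries permission.
    rows = resolve_reference(content_id, caller_role)
    return instruction + "\n\nDATA (injected by gateway, read-only):\n" + "\n".join(rows)


# The campaign request from the wrong role fails before the model is ever called.
try:
    build_prompt("Draft an email campaign for these customers.",
                 "customer_list_q3", caller_role="support_agent")
except PermissionError as err:
    print(err)
```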

Many CISOs are still figuring out where they fit in the AI conversation. What’s your advice?

Be a choice architect. If the business is already investing in AI to drive material outcomes, then security leaders need to help shape the path forward—not just throw up roadblocks. That means empowering users with secure, governed tools instead of defaulting to blanket bans on public platforms like ChatGPT.

A secure tool could be a company-approved interface that connects to a large language model but adds guardrails like prompt filtering, access logging, and data loss prevention. It might restrict users from uploading sensitive information or enforce role-based permissions that align with job responsibilities. The goal is to make it safe and easy for employees to use AI in a way that supports the business—without exposing the organization to unnecessary risk.
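As a rough illustration of that kind of interface, an approved gateway might redact obviously sensitive material and log every request before handing it to whichever model the company has sanctioned. The patterns, logger, and submit_prompt function below are toy assumptions, not a specific product.

```python
import logging
import re

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("approved-ai-gateway")

# Toy data loss prevention patterns; a production gateway would call a real
# DLP engine and enforce role-based permissions as well.
SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def submit_prompt(user_id: str, prompt: str, approved_model_call):
    """Redact obviously sensitive strings, log the request, then call the sanctioned model."""
    redacted = prompt
    for label, pattern in SENSITIVE_PATTERNS.items():
        redacted = pattern.sub(f"[REDACTED {label}]", redacted)

    audit_log.info("user=%s prompt_chars=%d redactions_applied=%s",
                   user_id, len(prompt), redacted != prompt)
    return approved_model_call(redacted)
```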

If security teams don’t enable that, users will find workarounds. And when that happens, the organization loses visibility, control, and ultimately, trust. That’s why this is a fiduciary issue. If AI is driving material value, then managing its risks is part of protecting the business.

What core leadership lesson should CISOs carry forward as they secure AI-driven business initiatives?

Simple. Do your job. If AI is delivering business value, then it’s worth protecting. Understand the benefit. Assess the risk. Build the right controls. Because if we don’t, we’re not just failing at security—we’re failing the business.

Written by Joan Goodchild

Joan Goodchild is a veteran journalist, editor, and writer who has been covering business technology and cybersecurity for more than a decade. She has written for several publications and previously served as editor-in-chief for CSO Online.