In this edition of Cyber Means Business, Dr. Margaret Cunningham, vice president of security and AI strategy and field CISO at Darktrace, explains how human behavior, trust, and decision-making influence security practices. Drawing on her background in behavioral science, security research, and organizational dynamics, Cunningham argues that understanding how people interpret risk is essential to building resilient systems, accelerating innovation, and maintaining customer confidence.
For many organizations, security conversations focus heavily on tools, controls, and technical execution. Dr. Margaret Cunningham believes this typical approach overlooks a foundational reality: security outcomes are the result of human decisions, shaped by fear, incentives, communication patterns, trust, and the work environment. Whether they are responding to an incident, collaborating under pressure, or operating in ambiguity, people interpret signals differently. Those interpretations influence not only an organization’s risk posture but also productivity, customer experience, and long-term performance.
Cunningham’s work spans behavioral engineering, insider threat research, security analytics, and now AI-informed strategy at Darktrace, a cybersecurity company that uses self-learning AI to spot and respond to unusual activity in real time. Across these roles, she has observed how quickly decision-making can break down in environments where fear dominates, where controls do not reflect real workflows, or where expectations are unclear. She has also seen how culture, transparency, and supportive structures help teams make better choices even in uncertain and fast-moving situations.
Today, Cunningham advises enterprises on how to align technology strategy with human behavior, how to promote secure practices that do not impede progress, and how to calibrate trust in an era where AI and synthetic identities introduce new types of risk. Her perspective brings together psychology, human factors, and business strategy to show why secure behavior cannot be mandated: it has to be designed, understood, and reinforced.
CISO Leadership Takeaways
Joan Goodchild: You have spent much of your career studying why people make risky choices. How can CISOs translate behavioral insight into business outcomes like reducing downtime, protecting brand trust, and improving customer experience?
Margaret Cunningham: Risk can be helpful or harmful. Companies need people who are willing to take risks as well as people who are more cautious. Innovation usually comes from the first group, and protection and stability come from the second. The challenge is creating an environment where these different mindsets can work together comfortably.
Before looking at technology, leaders should think about the people who make decisions. Who are they, what motivates them, and how do they work together? If you are not paying attention to how decisions are made, you cannot influence the outcomes that matter for customer trust or business performance.
Fast-moving environments encourage quick decisions made without pausing to understand the consequences. Building in intentional time to check assumptions and discuss outcomes can have a meaningful impact on both security and business results.
Different teams also have different needs. An innovation team and a compliance team cannot be expected to operate with the same risk tolerance. Leaders should design processes that recognize those differences.
Every business depends on people making good decisions under pressure. What are some of the organizational or psychological factors that drive human error?
Fear is a major factor. People fear being wrong, fear taking ownership, and fear reporting mistakes. In uncertain situations, that fear often leads to inaction. Right now, almost everything feels uncertain, which makes this even more challenging.
Leaders can reduce this fear by modeling openness about failure, acknowledging that many problems do not have a single correct answer, and showing trust in their teams. A culture that allows for mistakes helps people move forward instead of shutting down.
There is also the opposite issue. Some people make very quick decisions with little information. If they have strong personalities, teams may follow them without pushing back. That also introduces risk. Leaders need to balance both tendencies.
Boards often ask how to balance productivity with protection. From a behavioral science perspective, what does an environment that supports both innovation and secure behavior look like?
Boards are motivated by value creation. Speed, revenue, and market opportunity shape their expectations. These pressures influence how teams perceive protective behaviors.
Starting small and showing value quickly can be helpful. For example, a safety-by-design review process may take time upfront, but if you can demonstrate that it prevented a costly issue, you can communicate the benefit of such a review in business terms.
Once processes are established, teams often find they have more freedom to innovate responsibly. They can move quickly within guardrails that protect long-term health, not just short-term gains. Boards also need to understand that strategic investments in resilience pay off over time, even if they slow things down in the moment.
Many awareness programs fail because they treat security as compliance instead of culture. What does it take to build a real culture of care around cyber risk?
Policies exist everywhere, but most people do not read them, and they do not think about them when the policies disrupt their workflow. Compliance does not work unless the people involved understand and support the behaviors you are asking for.
If secure behavior is difficult, if the reasoning behind it is unclear, or if everyone is quietly working around the rules, the program will not succeed. One of my biggest concerns is when people become invisible, meaning they avoid approved processes entirely because they do not see a way to get their work done within the system.
Leaders need to model the behaviors they expect, understand how people actually work, and clearly communicate why certain practices are essential even when they create friction.
You have said that trust is a measurable business asset. How can organizations strengthen digital trust with employees, customers, and partners?
Trust shows up as engagement. If you provide a tool that should make someone’s work easier and they do not use it, trust is usually the issue. People tend to be naturally trusting, so when they pull back or stop engaging, that is a sign that trust is weakening. It is subtle and rarely spoken aloud.
There is also the risk of “overtrust”: people putting too much faith in the technology, especially with generative AI. Some people assume the output is always accurate, which becomes dangerous when AI is used for detections that require precision and explainability.
Organizations now need to help teams understand when to trust a system, when to question it, and how to recognize signals that trust is forming or fading.
If an organization sees that people are not using a feature or a shortcut and trust seems to be the issue, what should they do?
They need to understand the reason. Does the tool help them? Does it create friction? Do they feel like they need to redo the work? There is always a specific cause.
For vendors, this often means having direct conversations with users. It is easy to imagine explanations that do not match reality. Once you understand the real reason, the impact on product success and customer trust can be significant.
What is the most overlooked factor that leaders should be paying attention to right now?
Security culture is shaped by behavior, not by what is written in a policy. If leaders pay attention to how decisions are made, how people respond under pressure, and what influences trust, they can create systems that support both innovation and protection. That combination drives long-term business value.
Written by Joan Goodchild
Joan Goodchild is a veteran journalist, editor, and writer who has been covering business technology and cybersecurity for more than a decade. She has written for several publications and previously served as editor-in-chief for CSO Online.