Like other sensitive assets, artificial intelligence models deserve a high level of rigor to protect them – and your corporate data, argues Dr. Jaushin Lee, founder and CEO of Zentera Systems.

Artificial intelligence (AI) adoption is increasing exponentially across industries, so it’s understandable that the potential benefits are getting all the attention.

Unfortunately, the critical security considerations of employing AI are trailing behind adoption, leaving businesses and their data, customers, employees, and systems vulnerable on many fronts. Many organizations already concerned about the risks of exploitation or data breaches now have a new, even more sensitive asset to protect—their AI models and the data used to train them.

Given what AI models can do and the lack of sufficient controls over them, it is not far-fetched to imagine a scenario in which an AI model is tricked into spilling a company’s financial statements before their public release, facilitating insider trading. And that is just one relatively benign example of what failing to understand and contain the risks of AI could mean for a business.

If you’re considering using AI in your organization, now is the time to apply the same level of security rigor to AI initiatives that you currently apply to other data-sensitive endeavors.

Here is where we’ve seen our clients fall short in securing AI, and how to fill the most critical gaps.

There Are No AI Islands

Many organizations, driven by a sense of urgency, prioritize the speed of AI deployment over security considerations. This haste can result in AI workloads being treated as specialized initiatives, bypassing conventional network and data security best practices.

For instance, an organization may consider new AI systems experimental and deploy them as “islands” of compute, storage, and network resources without the security protections required of production systems. However, we’ve found that AI efforts don’t stay isolated for long: models can’t be properly trained, and their inferences can’t be shared, if they’re limited to historical snapshots or synthetic data. As development continues, the pressure to integrate with third-party services, incorporate new algorithms, and ingest potentially sensitive data sources can create a new “shadow AI” that is difficult to protect if it lacks enterprise-grade security.

Where Conventional Security Falls Short

Isolating AI instances behind firewalls or within separate subnets may seem sufficient. However, these protections quickly prove inadequate given the complexity of the data flows and integrations required to train AI models, the distributed computing infrastructure that training demands, and the evolving compute requirements of new model types. AI training is simply too complex and fast-changing to fit the static security models used for production systems. And in the rush to deploy AI, many organizations lose sight of security basics such as data encryption, access controls, auditing standards, and overall data governance.

Conventional security solutions often depend on network segmentation or IP-based rules to map users to identities. In dynamic AI environments, that mapping breaks down: the IP packet header carries no identity information, so these controls can still allow unauthorized access to critical systems and data. This is especially true in environments where users, resources, and usage patterns are constantly changing.
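To make the gap concrete, here is a minimal sketch contrasting the two approaches. The subnet, role names, and entitlement table are all hypothetical stand-ins for whatever network layout and identity provider an organization actually uses:

```python
from dataclasses import dataclass

# --- Conventional control: trust is inferred from the source IP address ---
TRAINING_SUBNET = "10.20.30."  # hypothetical subnet assigned to AI training nodes

def ip_based_check(source_ip: str) -> bool:
    # Anyone who can send packets from this subnet is trusted, even though
    # the packet header says nothing about who they actually are.
    return source_ip.startswith(TRAINING_SUBNET)

# --- Identity-based control: trust is tied to an authenticated principal ---
@dataclass
class Principal:
    user_id: str
    role: str
    authenticated: bool  # e.g., verified via SSO/MFA, not via network address

# Hypothetical entitlement table standing in for a real identity provider.
ENTITLEMENTS = {"ml-engineer": {"training-data", "model-registry"}}

def identity_based_check(principal: Principal, resource: str) -> bool:
    # Access requires a verified identity AND an explicit entitlement.
    return principal.authenticated and resource in ENTITLEMENTS.get(principal.role, set())

# A compromised host inside the subnet passes the IP check...
assert ip_based_check("10.20.30.7")
# ...but fails the identity check without an authenticated, entitled principal.
assert not identity_based_check(Principal("intruder", "unknown", False), "training-data")
```

The point is not the implementation but the trust anchor: the first check trusts a network address that anyone on the subnet can present, while the second trusts an authenticated identity with an explicit entitlement.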

Traditional data loss prevention (DLP) mechanisms also fall short in safeguarding AI models against data exfiltration. Many popular DLP tools flag pre-identified strings from known hosts—an approach that simply can’t recognize the compressed representations of production data stored within an AI model’s weights. The patterns and features a model learns are abstract, condensed forms of the training data, and string matching cannot flag them in transit or in use.
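A toy illustration of why string matching fails here, assuming a hypothetical DLP rule that flags US Social Security Numbers; the “weights” are a stand-in for a serialized model checkpoint:

```python
import re
import struct

# Hypothetical DLP rule: flag payloads matching a known sensitive pattern,
# here a US Social Security Number.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def dlp_scan(payload: bytes) -> bool:
    """Return True if the payload contains a recognizable sensitive string."""
    try:
        return bool(SSN_PATTERN.search(payload.decode("utf-8")))
    except UnicodeDecodeError:
        return False  # binary payloads fall through unflagged

# Plaintext exfiltration is caught...
assert dlp_scan(b"customer ssn: 123-45-6789")

# ...but a model that memorized the same record is just a block of floats.
# (A toy stand-in for a serialized checkpoint of trained weights.)
weights = struct.pack("8f", 0.123, -0.456, 0.789, 1.234, -5.678, 9.0, -1.2, 3.4)
assert not dlp_scan(weights)  # the sensitive data is now implicit, not a string
```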

AI Needs Zero Trust

By granting AI the status of a strategic asset, enterprises recognize the need to apply specialized protection beyond traditional security paradigms. The core of your AI security strategy should be Zero Trust—a security model that denies access to applications and data by default, requires every user or application to prove its identity, and then grants only the access that user or application requires.

Zero Trust principles emphasize continuous authentication and verification, serving as a robust framework for effectively mitigating unauthorized data access, insider threats, and external attacks.
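In code, those principles reduce to a deny-by-default check on every request. The sketch below is illustrative only; the policy tuples, the 15-minute re-verification window, and the device-posture flag are assumptions standing in for a real policy engine:

```python
import time
from dataclasses import dataclass

@dataclass
class Session:
    user_id: str
    role: str
    device_trusted: bool   # hypothetical device-posture check result
    last_verified: float   # epoch seconds of last successful authentication

# Hypothetical entitlements: (role, action, resource) tuples that are allowed.
POLICY = {
    ("data-scientist", "read", "training-dataset"),
    ("ml-engineer", "deploy", "model-registry"),
}

MAX_AUTH_AGE = 15 * 60  # assumed re-verification window: 15 minutes

def authorize(session: Session, action: str, resource: str) -> bool:
    """Zero Trust check: deny by default, verify on every request."""
    if not session.device_trusted:
        return False  # untrusted device, regardless of who the user is
    if time.time() - session.last_verified > MAX_AUTH_AGE:
        return False  # stale authentication; force the user to re-verify
    return (session.role, action, resource) in POLICY  # explicit grant or nothing

alice = Session("alice", "data-scientist", device_trusted=True, last_verified=time.time())
assert authorize(alice, "read", "training-dataset")      # explicitly granted
assert not authorize(alice, "deploy", "model-registry")  # no entitlement: denied
```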

One effective method of bolstering security is implementing a chamber or enclave—a virtual network zone tailored to the unique requirements of AI instances, with all inbound and outbound network access managed according to Zero Trust principles. For help implementing such policies, organizations can turn to established security standards, such as the National Institute of Standards and Technology (NIST) Zero Trust Architecture (SP 800-207), and apply them to the AI project.
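Conceptually, an enclave’s policy is a pair of default-deny allowlists, one for each direction of traffic. A minimal sketch, with hypothetical service and host names:

```python
from dataclasses import dataclass, field

@dataclass
class EnclavePolicy:
    """Hypothetical Zero Trust enclave: default-deny in both directions."""
    name: str
    inbound: set = field(default_factory=set)   # (principal, port) pairs allowed in
    outbound: set = field(default_factory=set)  # destinations the enclave may reach

    def allow_inbound(self, principal: str, port: int) -> bool:
        return (principal, port) in self.inbound   # anything not listed is denied

    def allow_outbound(self, destination: str) -> bool:
        return destination in self.outbound        # no open internet egress

# Illustrative policy; the service and host names are made up for the sketch.
ai_enclave = EnclavePolicy(
    name="ai-training-enclave",
    inbound={("svc:ml-pipeline", 443)},           # only the pipeline, only TLS
    outbound={"feature-store.internal.example"},  # one approved data source
)

assert ai_enclave.allow_inbound("svc:ml-pipeline", 443)
assert not ai_enclave.allow_inbound("user:contractor", 22)  # ad hoc SSH: denied
assert not ai_enclave.allow_outbound("pastebin.example")    # exfil path: denied
```

The outbound list matters as much as the inbound one: a training enclave that can reach arbitrary internet hosts is an open exfiltration path for both data and model weights.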

Other options that can help secure AI instances include identity and access management (IAM) solutions and threat detection and response mechanisms. By continuously monitoring user activities and network traffic, authenticating user identities, and verifying device integrity, organizations can identify and mitigate security threats to AI. Together, these technologies provide the real-time threat mitigation and access management organizations need across the enterprise, including for their AI programs.

 


How to Find the Right Balance

For many business leaders—and even security professionals—a move to Zero Trust can seem complicated, conjuring up images of large-scale transformation. However, organizations can limit the cost and complexity by proactively engaging stakeholders and prioritizing Zero Trust implementation for their most critical assets first.

To start, teams can engage stakeholders across departments, along with the administrators of the applications that will contribute data to, and draw insights from, AI models, to capture usage patterns and document the associated security risks. This collaboration fosters shared responsibility for cybersecurity without negatively impacting business operations. Modern observability and monitoring tools can simplify this job for the security team.

Taking it one step further, organizations can focus on establishing officially “sanctioned” AI systems with clear policies and procedures. By actively promoting and investing in AI research and development in a safe manner—containing and controlling how the organization uses AI, and establishing policies that set standards for accessing, documenting, testing, running, and maintaining these systems—organizations can harness the benefits of AI while proactively mitigating risks and reducing the incentive for shadow AI to emerge.

Finally, ongoing risk assessments should identify and prioritize the risks associated with the tools, systems, and data linked to AI models. By conducting regular audits, organizations can identify potential vulnerabilities early and ensure the effectiveness of security controls.

Looking Ahead

Although the buzz around AI certainly is warranted, failing to secure AI systems puts sensitive data, intellectual property, and other organizational assets at risk of breach.

However, a holistic security approach that integrates robust access controls like Zero Trust into AI development, and that uses proven tools to isolate AI resources and deployments, can ensure the responsible use and continued advancement of AI technologies. In doing so, organizations prove that they are serious about prioritizing security alongside innovation, setting their business up to harness the full potential of AI while effectively mitigating its risks.
