Build a Gen AI Security Function

Gen AI Security Risks, Controls, Threat Modelling & More

👋 Good morning!

Each week I provide an in-depth response to your questions about careers, roadmaps, building security teams, AI security, cloud security, and anything else you need support with. Send me your questions and I’ll do my best to provide actionable advice.

Browsing LinkedIn recently led me to a rather interesting job posting - 'Head of Generative AI Security' at a major global financial services firm. This discovery prompted a more serious train of thought: should every organisation set up a specialised Generative AI Security team?

We're still figuring out how AI fits into our daily work and how to integrate it into our corporate frameworks, with Generative AI at the forefront. Let's dive into why a dedicated team for this might just be the next big thing.

There are two primary viewpoints on AI and security, and they solve different problems:

  • Security for AI is concerned with implementing measures to protect AI systems and the data they process from potential threats, vulnerabilities and malicious activities.

  • AI for Security is concerned with enhancing existing capabilities, such as supercharging threat detection, pattern recognition and incident response. There’s an endless list of use cases for the application of AI to security problems.

Stay with me whilst I share some insights on the following questions:

  • Do you need a dedicated Gen AI Security team?

  • What are the security risks of Gen AI?

  • How should I approach security controls?

  • How do you perform threat modelling for Gen AI?

  • How can AI be used for security?

  • What steps can I take in the short term?

Note: there is a full list of resource links at the end.

Do you need a dedicated Gen AI Security team?

Not all security teams are created equal. They are shaped by factors such as team size, budget, industry focus, organisational scale, and capability maturity. Larger firms and well-established security teams typically exhibit a higher degree of role granularity and more specialised skill sets. Conversely, in smaller organisations or those with limited security resources, team members often wear multiple hats, requiring a more versatile skill set.

So, do you need a dedicated Gen AI security team? In short, no - you don’t NEED an elaborate setup with flashy titles and bells and whistles. What truly matters is the establishment of solid foundations and defined responsibilities for AI security. Structural changes and personnel appointments are secondary in importance.

Google’s Secure AI Framework (SAIF) provides six core elements to help address this, ranging from expanding strong security foundations to the AI ecosystem through to contextualising AI system risks in surrounding business processes.

The priority is to assign responsibilities to drive the outlined framework. The approach to achieving this will depend on the organisation's context and the composition of its security team. An individual or a dedicated team should champion the secure integration of AI into the organisation and explore how AI can be harnessed to enhance the security capability.

Note: How well this capability functions is dependent on cross-functional collaboration between security, legal, privacy and others, to tackle the technical and legislative complexity.

What are the security risks of Gen AI?

The compelling reason to formalise AI security responsibilities is rooted in risk. Fortunately, several widely recognised industry organisations and individuals have provided valuable insights into defining AI security risks and understanding the broader AI attack surface - for example, the NIST AI Risk Management Framework, the OWASP Top 10 for Large Language Model Applications, and the AI Attack Surface Map by Daniel Miessler.

To kick off your risk management process, the first step is to create and maintain an inventory of your AI initiatives. With this inventory in hand, you can then allocate responsibilities for the identification, analysis, management, and remediation of risks. Despite Generative AI being the fastest-moving subfield of AI, its security landscape isn't drastically different from that of AI in general.
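To make this concrete, here is a minimal sketch of what an inventory entry could capture. The field names and example values are my own illustrations, not taken from any particular framework:

```python
from dataclasses import dataclass, field

# Illustrative sketch only - field names and example values are assumptions,
# not drawn from NIST, OWASP, or any other specific standard.
@dataclass
class AIInitiative:
    name: str                  # e.g. "Customer support chatbot"
    owner: str                 # accountable individual or team
    consumption_model: str     # "third-party API", "fine-tuned", "self-hosted", ...
    data_classification: str   # most sensitive data the system touches
    risks: list[str] = field(default_factory=list)        # identified risks
    mitigations: list[str] = field(default_factory=list)  # agreed controls

inventory = [
    AIInitiative(
        name="Internal code assistant",
        owner="Platform Engineering",
        consumption_model="third-party API",
        data_classification="Internal",
        risks=["Source code leakage to the provider"],
        mitigations=["Contractual no-training clause", "Pre-commit secrets scanning"],
    ),
]
```

Even a simple register like this gives you something to assign ownership against and to review as initiatives change.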

How should I approach security controls?

While AI offers powerful performance boosts, it also increases the attack surface available to bad actors. It is therefore imperative to approach the use of AI with a clear understanding of potential threats, their impact on different consumption models, and baseline controls for each.

The first step is to define a way to scope your AI consumption. Amazon has created a useful template that you can adapt to your needs.
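As a rough illustration of what scoping can look like, here is a sketch loosely modelled on the idea behind Amazon's scoping matrix. The tier names, examples, and control emphases below are my own paraphrase and assumptions, not a reproduction of the template:

```python
# Hypothetical scoping lookup - the tiers and control emphases are assumptions
# intended to illustrate the idea, not an authoritative copy of any template.
AI_CONSUMPTION_SCOPES = {
    1: ("Consumer app", "Public Gen AI assistants used as-is",
        "Acceptable-use policy, data-handling guidance"),
    2: ("Enterprise app", "Third-party products with embedded Gen AI",
        "Vendor due diligence, contractual data terms"),
    3: ("Pre-trained model", "Building on a foundation model via an API",
        "Prompt/output filtering, key and quota management"),
    4: ("Fine-tuned model", "A provider model tuned on your data",
        "Training-data governance, access control on the tuned model"),
    5: ("Self-trained model", "Models you train from scratch",
        "Full ML supply-chain and pipeline security"),
}

def describe_scope(level: int) -> str:
    name, example, controls = AI_CONSUMPTION_SCOPES[level]
    return f"Scope {level} - {name}: {example}. Control emphasis: {controls}."

print(describe_scope(3))
```

The point is less the exact tiers and more that controls should scale with how deeply you consume and build on AI.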

Just as CIS benchmarks were established to guide thinking on baseline controls in the cloud, a similar approach is emerging for defining a standardised set of controls for AI. OWASP has introduced the AI Exchange, offering a comprehensive list of AI security threats and mitigating controls for your consideration.

Security Threat Modelling

A threat modelling framework is useful for thinking about the harms and impacts that can come from AI systems. Its purpose is to articulate security concerns clearly, and it is incredibly valuable for conversations with senior stakeholders. Here’s the framework:

ATHI Structure = Actor, Technique, Harm, Impact

A(n) [insert actor] uses [insert technique] to create [insert harm] which results in [insert impact].

Once we understand the threat and technique, we can define a suitable mitigation:

This technique can be mitigated by [insert control].

To do this, we can populate a table covering: system component, threat / technique, and countermeasure / mitigation. Consider exploring MITRE ATLAS™ (Adversarial Threat Landscape for Artificial-Intelligence Systems) - a knowledge base of adversary tactics and techniques based on real-world attack observations and realistic demonstrations from AI red teams and security groups.
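To make the ATHI statement and the table concrete, here is a minimal sketch in Python. The threat entry, component, and mitigation below are hypothetical examples, not a catalogue:

```python
from dataclasses import dataclass

# Minimal sketch of generating ATHI statements and a component / threat /
# mitigation table. The example entry is hypothetical and illustrative only.
@dataclass
class ATHIThreat:
    component: str   # system component under consideration
    actor: str       # who (or what) causes the harm
    technique: str   # how they do it
    harm: str        # what goes wrong
    impact: str      # consequence to the business
    mitigation: str  # control that addresses the technique

    def statement(self) -> str:
        return (f"A(n) {self.actor} uses {self.technique} to create {self.harm} "
                f"which results in {self.impact}. "
                f"This technique can be mitigated by {self.mitigation}.")

threats = [
    ATHIThreat(
        component="Customer-facing chatbot",
        actor="external attacker",
        technique="prompt injection",
        harm="disclosure of internal system prompts and data",
        impact="reputational damage and regulatory exposure",
        mitigation="input/output filtering and least-privilege retrieval",
    ),
]

# Render the table described above: component | threat / technique | countermeasure
for t in threats:
    print(f"{t.component:<28} | {t.technique:<18} | {t.mitigation}")
    print(t.statement())
```

Filling this in per system component keeps the conversation structured and gives stakeholders a plain-language statement alongside each mitigation.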

AI for Security

Soon, AI will give us the power to digest and evaluate the entire context of our organisations – networks, applications, users, and policies. The more context we have about what we’re defending, the better we can defend it. But we are not there yet.

Key considerations of AI for Security are:

  • How are you using AI in terms of existing workflows?

  • Are you augmenting what your team is already doing?

  • Are you building a new capability?

Use case ideas around threat detection and response, as well as security testing, are taking centre stage right now. Threat detection and response capabilities will be supercharged, helping teams analyse mountains of data in real-time, performing threat pattern recognition, and preventing cyber-attacks from materialising. Solutions will alleviate common challenges like too many vulnerabilities, limited resources, and limited time to remediate.

Now is the time to explore how AI-augmented services will support your defensive capabilities, and to factor them into your strategic planning.

What’s the way forward?

Focus on securing existing and planned consumption of AI services. To do this, security must assign responsibility to an individual or team to champion Generative AI security across the organisation and support the development of AI-related skill sets within the security team. This individual or team should sit on the relevant governing bodies within the organisation where decisions are being made about the use of AI.

The individual / team would initially focus on:

  • Creating a strategy for AI Security within the organisation - likely a pillar of a wider Enterprise AI Strategy.

  • Defining policy / standard requirements for AI Security - these requirements/controls won’t be one-size-fits-all and should vary based on the scope of AI services.

  • Attending existing forums as a security representative to offer AI security guidance and threat modelling.