- Cyber Pro Club
- Posts
- How to Secure AI
How to Secure AI
PLUS: Global AI Regs Map & Fixing Vuls w/ AI
👋 Good morning, Cyber Pros!
This week’s issue brings you:
How to secure AI - risks, frameworks and best practice
Global AI Regulations Map
Fixing security vulnerabilities with AI
Let’s dive in!
Read time: <5 mins
How to Secure AI
I’ve been writing a lot about ‘AI Security’. I spend a good portion of my week learning and getting to grips with this domain. Why? Because I believe it’s the greatest disrupter our field has ever experienced. And right now, most security teams are in an entirely reactive state.
AI will become the engine behind all modern enterprises, woven into the fabric of all critical operations in hope to gain a competitive advantage, and those that don’t adapt will be left behind. This places AI Security as a cornerstone of enterprise security capabilities - dedicated to secure development and consumption of AI.
We’ve seen an explosion in AI-adoption into products and services across industries. Whether it’s a SecOps tool or a banking application, we’re seeing integrations everywhere. Development teams are leveraging LLMs to develop apps at unprecedented speed. The adoption curve of cloud-native AI services is growing exponentially.
One thing we know for sure - every single AI use case is vulnerable to cyberattacks.
Greater adoption, equals greater risk exposure. Forbes reports that AI incidents have increased by 690% from 2017 to 2023, and they’re expected to keep accelerating. AI is not inherently secure, but it can be secured.
Adopt an AI Security Framework for your Organisation
We need to systematically address AI security risk through the adoption of a suitable framework. Frameworks provide a structured way to address risks and protect against rising threats.
AI security risks include increased attack surfaces, data breaches, credential theft, vulnerable AI pipelines vulnerabilities, data poisoning, prompt injections, and hallucination abuse.
There are 3 frameworks I believe you should familiarise yourself with:
Google’s Secure AI Framework offers a six-step process to mitigate the challenges associated with AI systems. These include automated cybersecurity fortifications and risk-based management.
NIST’s Artificial Intelligence Risk Management framework breaks down AI security into four primary functions: govern, map, measure, and manage.
Mitre’s Sensible Regulatory Framework for AI Security and ATLAS Matrix anatomise attack tactics and propose certain AI regulations.

AI Security Best Practice
This is not an exhaustive list, but some of the top priority control areas for ‘securing AI’:
Tenant isolation: Tenant isolation is a powerful way to combat the complexities of Gen AI integration.
Gen AI boundary architecture: All components need to have optimised security boundaries, carefully considering what needs to be shared vs isolated based on contexts and use cases.
Evaluate Gen AI complexities: Model/test the integration of Gen AI and consider the implications before implementation.
Good AI Security is good Cloud Security: It’s still vulnerable to traditional challenges like API vulnerabilities and data leaks. Remember the wider context of AIs integration and how it relates to overarching cloud security vulnerabilities.
Sandboxing: Start with isolated test environments that are subject to ongoing vulnerability scanning and secure configuration.
Apply input limitations: where possible (ensuring it doesn’t compromise user experience) leverage dropdown menus with limited input options, instead of open text boxes.
Prompt monitoring: It is important to monitor and log end-user prompts to identify suspicious activity, such as malicious code execution.
Mini High Level Roadmap for AI Security
Organisations and security professionals that are unsure where to start, here’s a basic outline of activities you should initially focus on:
Adopt an AI Security Framework and customise to your organisation
Identify key risks and map to core controls for mitigation
Focus on guardrails to facilitate secure adoption and development of AI in your cloud environments
Adapt your third party security approach to account for AI services - this is not a one-size fits all approach
Build a forum including business and technical stakeholders to align on an approach
Explore AI-powered capabilities of existing technology partners
Global AI Regulations Map
Whether you’re working in a GRC role or you’re an Architect, it’s important you familiarise yourself with global regulations that could impact your responsibilities and guidance.
Fairly has created a Global AI Regulations Map to help you do just that.
Fixing Security Vulnerabilities w/ AI
I recently wrote about shift left security which advocates for embedding security into the development process as early as possible. This article by GitHub on its code scanning auto-fix feature, which uses AI to suggest fixes for security vulnerabilities in users' codebases, may make this an easier reality to achieve!
Key takeaways:
Code scanning can be triggered on a schedule or upon specified events.
The feature is enabled for CodeQL alerts for JavaScript and TypeScript.
The technology behind the auto-fix prompt involves using a large language model and post-processing heuristics.
Counter arguments:
Some fixes may require adding new project dependencies, which may not be suitable for all codebases.
Some users may prefer to manually review and edit the suggested fix, rather than relying solely on AI-generated suggestions.
AI hallucinations could lead to vulnerable code.
Learn more here.
Did you enjoy this one?
If you’ve got a question or feedback, you can reply to this directly!
I want to create a newsletter that you can’t wait to open every week.
Your feedback will help me do that.
If you found this newsletter valuable, share this link to others: https://www.cyberproclub.com/subscribe
Thanks for reading.
Cal J Hudson