Generative AI like ChatGPT and Large Language Models (LLMs) that power chatbots are quickly becoming the most popular tech innovation in recent history. Gartner predicts that “by 2026, more than 80% of enterprises will have used GenAI APIs, models and/or deployed GenAI-enabled applications in production environments.” While the technology helps people and companies with scale, efficiency and speed, they also pose a threat if attacked by a malicious actor. Pentesting AI and LLMs can identify vulnerabilities and reduce AI cybersecurity risk, removing opportunities for abuse.
What Cybersecurity Risks Does AI Pose?
With the rise of AI utilization, numerous risks have become more pressing:
- Vulnerable code: As developers utilize AI to write code and speed up the development process, the code could be lacking in security best practices and produce applications vulnerable to traditional common vulnerabilities.
- Exposure of sensitive data: Conversational AI applications may have access to sensitive internal or customer data, potentially leaking protected information that results in privacy violation and legal consequences.
- Vulnerabilities posed to the larger attack surfaces: Just like adding javascript functionality to a web page introduces the possibility of cross-site scripting, adding LLM or AI applications adds their own category of vulnerabilities. These include exploitations like prompt injection or insecure output handling.
How Does Pentesting Reduce AI Risk?
When you deploy a new AI-based experience, whether internally or externally, you’re deploying a new potential vector of attack. AI and LLMs have been identified by the Open Web Application Security Project (OWASP) as potentially having critical vulnerabilities that provide attackers the same leverage and data as traditional exploits.
NIST also released an AI risk management framework for security teams to understand the potential risks of implementing AI in their environments and how to evaluate AI services for such risks.
Pentesting allows you to emulate an adversary’s interaction with the AI modules deployed on your attack surface, mimicking the carefully crafted prompts an attacker would send, the analysis of the tech stacks surrounding the AI and the explorative nature of attackers searching for any point of entry they can find.
What Are Common AI Applications That Can Be Tested?
Gartner lists a few example implementations for LLMs, including conversational AI, generating scene descriptions for images and retrieving documents through search.
Synack AI/LLM pentesting is agnostic of your implementation or use case. Whether you are deploying a chatbot in your web applications, using GenAI to guide your customers along the buying journey or deploying an internal tool to improve operational efficiency within your organization, these LLMs share common potential flaws and AI cybersecurity risk. These flaws can be identified through Synack pentesting.
AI can also introduce risks into your environment without implementing it at all. If a software developer uses AI to help them write a bit of code, and the code that they receive contains vulnerabilities, your application will now also have that vulnerability. Continuously pentesting your applications can proactively identify insecure code before it’s ever released.
What Kind of AI Vulnerabilities Are Found?
The Synack Platform connects your attack surface with The Synack Red Team (SRT), an elite community of 1,500 security researchers. When pentesting through the platform, you receive a diversity of perspectives and expertise, real-time results and unparalleled visibility into testing activity.
For AI and LLMs, researchers are guided to check for items listed in the OWASP LLM Top 10, for example including prompt injection. Prompt injection describes the phenomenon of carefully-crafted prompts causing an AI to divulge sensitive information, from customer data to source data. According to a recent Gartner survey, “58% of respondents are concerned about incorrect or biased outputs.”
Malicious prompts may also cause AI to inject malicious code or commands into downstream components, enabling the prompter to initiate remote code execution (RCE) or cross-site scripting (XSS).
Additional vulnerabilities checked for include insecure output handling, model theft and excessive agency. SRT researchers will check for testable flaws and provide reports on what they find.
What Results Can I Expect From Synack AI Pentesting?
Reports detailing SRT researcher testing methodologies will be delivered in real-time through the Synack Platform, allowing you to understand the testing coverage received on your LLM implementation. These reports, and any exploitable vulnerabilities noted within, are vetted by an internal team called Vulnerability Operations, ensuring you receive only high-quality results.