How Generative AI Can Be Used in API Security

June 19, 2025 | 5 MIN READ

by Jeff Harrell

The rapid adoption of generative AI (GenAI) tools such as ChatGPT and Claude got us thinking about how they may or may not impact API security. As more and more people use GenAI to perform complex searches, for “vibe coding,” or even to write blog posts (but not this one!), they’re often doing so in business settings without the proper oversight and control of IT and security teams. Enterprises need to understand the potential for accidentally harmful or malicious use of these new tools.

However, used with the proper guardrails and controls in place, can GenAI help IT and security teams improve security faster and more efficiently? Let’s find out.

The Risks of Generative AI for Enterprises

Part of what makes GenAI use risky for enterprises is that it’s so new and changing so quickly. People are still learning what it’s capable of, and attackers are still learning how to use it for malicious purposes. We’ve also written about how ChatGPT and other large language models (LLMs) could be used to automate attacks. Some other risks of GenAI include:

Sensitive Data Exposure

Employees using GenAI for coding may upload confidential company code to give the AI more context, or someone in marketing may upload a customer list to automate a mundane task. Sensitive data exposure can occur accidentally or maliciously, but either way it’s one of businesses’ top concerns around AI.

Model Poisoning

This type of attack occurs when a bad actor manipulates a GenAI model, typically by tampering with its training data, causing it to behave in unexpected ways. It could enable confidential information extraction, fraud, or other harmful outcomes.

Improper Output Handling

If GenAI output is not properly validated and sanitized, it can affect downstream systems. For example, if a user prompts AI to generate code, that code could cause privilege escalation or remote code execution on business systems if it isn’t properly reviewed.
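As a concrete illustration, here’s a minimal sketch (in Python, with hypothetical names and an invented allowlist) of one way to treat LLM output as untrusted input: a suggested shell command is parsed and checked before anything is executed.

```python
import shlex

# Hypothetical allowlist: the only binaries we permit LLM-suggested
# commands to invoke. Anything else is rejected rather than executed.
ALLOWED_BINARIES = {"ls", "grep", "cat"}

def vet_llm_suggested_command(suggestion: str) -> list[str] | None:
    """Treat LLM output as untrusted input: parse it, check it against
    an allowlist, and refuse anything that looks like shell trickery."""
    try:
        tokens = shlex.split(suggestion)
    except ValueError:
        return None  # unparseable output is rejected outright
    if not tokens or tokens[0] not in ALLOWED_BINARIES:
        return None  # unknown binary: reject rather than risk escalation
    if any(ch in suggestion for ch in (";", "|", "&", "`", "$(")):
        return None  # metacharacters could chain extra commands
    # Safe to pass to subprocess.run(tokens), never with shell=True.
    return tokens
```

The specifics will vary, but the principle is the same one you would apply to any user-supplied input: validate first, execute (or render, or store) second.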

OWASP has put together an excellent LLM Top 10 list of these risks, vulnerabilities, and mitigations.

Using GenAI to Improve API Security

While there are certainly risks to using GenAI in the enterprise, there are also some clear advantages. Here are a few areas where GenAI is improving software security.

Using GenAI to Debug APIs

LLMs are highly code-aware, and the more widely a language is used, the more training data they have to learn from and the more help they can offer. You can paste API code in and ask the platform to debug it, and the output is often quite accurate. Users need to be aware of the security risk inherent in sharing code with an LLM, but for non-proprietary APIs these tools can be genuinely helpful.
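As a sketch of what that looks like in practice, the snippet below sends a deliberately buggy (and non-proprietary) endpoint to an LLM for review using the OpenAI Python client. The model name and prompt wording are assumptions, not a prescription.

```python
from openai import OpenAI  # official openai package, v1+

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Only share code you're comfortable sending outside your perimeter.
# This handler is illustrative and deliberately buggy.
api_snippet = '''
@app.route("/users/<user_id>")
def get_user(user_id):
    return db.query(f"SELECT * FROM users WHERE id = {user_id}")
'''

response = client.chat.completions.create(
    model="gpt-4o",  # model name is an assumption; use what you have access to
    messages=[
        {"role": "system", "content": "You are a security-focused code reviewer."},
        {"role": "user", "content": "Debug this API handler and flag any "
                                    f"vulnerabilities:\n{api_snippet}"},
    ],
)
# A good answer should flag the f-string SQL injection, among other issues.
print(response.choices[0].message.content)
```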

Using GenAI to Write APIs

GenAI can be particularly useful for tasks like writing regexes for parsing OS logs or diagnosing third-party tool errors. ChatGPT, for example, can write secure, enterprise-grade APIs and application code just as easily as it writes a basic script, and you can even ask it how to securely use the libraries it suggests. Imagine if there were fewer API code mistakes before an API even entered the testing or quality phase: that would mean fewer vulnerabilities and fewer exploits. Again, it’s critical to review the output, as LLMs have been reported to ‘hallucinate’ the code libraries they require, leading to wasted time at best and the inclusion of malicious libraries at worst.
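To make the regex example concrete, here’s the kind of pattern GenAI will happily produce on request, in this case for pulling failed SSH logins out of a syslog line. The log format is assumed for illustration, and as with any generated regex, test it yourself before trusting it.

```python
import re

# GenAI-style regex for a classic BSD-syslog sshd failure line.
SSHD_FAIL = re.compile(
    r"^(?P<ts>\w{3}\s+\d+ \d{2}:\d{2}:\d{2}) (?P<host>\S+) sshd\[\d+\]: "
    r"Failed password for (?:invalid user )?(?P<user>\S+) from (?P<ip>[\d.]+)"
)

line = ("Jun 19 04:12:33 web01 sshd[4242]: "
        "Failed password for invalid user admin from 203.0.113.9 port 52114 ssh2")
match = SSHD_FAIL.match(line)
if match:
    print(match.group("user"), match.group("ip"))  # admin 203.0.113.9
```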

Using GenAI to Find Security Flaws in Existing APIs

GenAI can analyze your APIs for operational issues, but it can also help you understand where flaws might exist in your code. Software is often built from dependency libraries: operations that are performed over and over, packaged for reuse, and combined with other libraries. And there are millions of vulnerable Java libraries in the wild. What if you dropped in your dependencies and asked whether those libraries are secure? Some organizations are already doing exactly that: a simple library security analysis lets developers double-check the third parties they rely on and helps ensure their code is secure.
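A minimal sketch of that workflow, again assuming the OpenAI client and a hand-picked dependency list: ask the model about known CVEs, then verify its answer against an authoritative source before acting, since LLMs can hallucinate here too.

```python
from openai import OpenAI  # assumes OPENAI_API_KEY in the environment

client = OpenAI()

# Dependencies as they might appear in a pom.xml; versions chosen for illustration.
dependencies = [
    "log4j-core 2.14.1",    # Log4Shell-era version (CVE-2021-44228)
    "jackson-databind 2.9.8",
    "commons-text 1.9",     # Text4Shell-era version (CVE-2022-42889)
]

prompt = ("For each dependency below, say whether it has known CVEs and what "
          "the minimum safe version is:\n" + "\n".join(dependencies))

answer = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[{"role": "user", "content": prompt}],
)

# Treat the response as a lead, not a verdict: confirm any flagged CVE
# against the NVD or OSV.dev before filing tickets.
print(answer.choices[0].message.content)
```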

Policy and Configuration Generation

LLMs can translate natural-language requirements into enforceable controls such as bot management policies and firewall rules. This makes it easier for less experienced IT and security personnel to draft these controls, which more senior staff can then review, speeding up policy creation and freeing senior staff for other work.
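Here’s a hedged sketch of that flow in Python: a natural-language requirement is turned into a machine-readable draft policy, sanity-checked, and queued for senior review. The policy schema and model name are invented for illustration.

```python
import json
from openai import OpenAI

client = OpenAI()

requirement = ("Block any client that calls /api/login more than 10 times "
               "per minute; challenge suspected bots with a CAPTCHA instead "
               "of hard-blocking them.")

draft = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    response_format={"type": "json_object"},  # request machine-readable output
    messages=[{
        "role": "user",
        "content": "Translate this requirement into a JSON rate-limit policy "
                   "with keys path, window_seconds, max_requests, action:\n"
                   + requirement,
    }],
)

policy = json.loads(draft.choices[0].message.content)

# Guardrail: validate the draft's shape, then route it to a senior
# engineer for sign-off; nothing auto-deploys from here.
assert {"path", "window_seconds", "max_requests", "action"} <= policy.keys()
print("Draft policy awaiting review:", json.dumps(policy, indent=2))
```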

Threat Intelligence Enrichment

LLMs can track open-source intelligence (OSINT), dark-web chatter, and proprietary threat feeds, and then instantly answer, “Which of today’s threat alerts tie back to known vulnerabilities on our network?” This type of enrichment could compress days of manual work into seconds, helping organizations outpace attackers.
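Under the hood, the correlation itself is simple; what an LLM pipeline adds is scale and speed across messy, unstructured feeds. A toy sketch with made-up alert and inventory data:

```python
# Today's external threat alerts, keyed by CVE (illustrative data).
todays_alerts = {
    "CVE-2021-44228": "Log4Shell exploitation spiking in the wild",
    "CVE-2023-4863": "WebP heap overflow used in drive-by attacks",
    "CVE-2017-5638": "Struts2 RCE scanning observed",
}

# CVEs our own scanner says are actually present on the network.
our_vulnerabilities = {"CVE-2021-44228", "CVE-2019-0708"}

# The enrichment step: intersect external chatter with internal exposure.
relevant = {cve: desc for cve, desc in todays_alerts.items()
            if cve in our_vulnerabilities}

for cve, desc in relevant.items():
    print(f"PRIORITIZE {cve}: {desc}")  # only Log4Shell matters to us today
```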

Generative AI Can Improve Cybersecurity

GenAI can improve your security posture, and cybersecurity in general, with the proper controls and guardrails in place. The positives above are clear advancements, but the risks are also real and must be taken seriously. Generative AI lets users of all skill levels express what they want in natural language and returns information that can kickstart a project or solve a problem in a way they hadn’t yet considered. If we’re thoughtful about how we use it, it can provide great benefits while minimizing the risks.

How Cequence Can Help

Cequence’s API security and bot management solutions protect businesses against superpowered AI bot attacks, unwanted IP scraping, and sensitive data exposure. Our unique, network-based approach lets businesses give AI agents secure, managed access to their applications while protecting those applications and APIs from AI scraping and attacks. Contact us to learn more.

Author

Jeff Harrell

Director of Product Marketing

Jeff Harrell is the director of product marketing at Cequence and has over 20 years of experience in the cybersecurity field. He previously held roles at McAfee, PGP, Qualys, and nCircle, and co-founded the company that created the first commercial ad blocker.