Managing API Security for AI Programming

Published May 06, 2021
Written by Ed Tittel. Ed Tittel is a long-time IT industry writer and consultant who specializes in matters of networking, security, and Web technologies. For a copy of his resume, a list of publications, his personal blog, and more, please visit www.edtittel.com or follow @EdTittel.

The best tool for securing use of application programming interfaces (APIs) – including those employed for AI programming – may be AI itself. Artificial intelligence is extraordinarily adept at modeling how APIs get used, so AI models can continuously examine and analyze API activity. This provides an opportunity to address oversights and issues that policy-based API coverage cannot handle. The timing for this technology is fortuitous: Gartner predicts that API abuses will become the most frequent attack vector resulting in data breaches within enterprise web-based applications.


These days, says InfoWorld, enterprises make use of authentication, authorization, and throttling capabilities to manage APIs. Such tools are vital in controlling who accesses APIs within an enterprise IT environment. But these approaches cannot stop attacks that survive such filtering and scrutiny because they are cleverly embedded within apparently legitimate API calls and uses. Nowhere is this more relevant than in AI programming itself, which represents a substantial and growing share of programming activity within enterprises today.

Within an organization, it’s typical to use API gateways as the primary way to call and use APIs. Such gateways can enforce API policies by checking inbound requests against rules and policies that relate to security, throttling, rate limits, value checks, and more. In this kind of environment, both static and dynamic security checks can be helpful, and improve security within the applications they serve.

Static Security Checks and Policies

Static policy checks work well for quick, simple analyses because they do not change with request volume or previous request data. Static security scans work well to protect against SQL injection attacks, coercive parsing attacks, schema poisoning attacks, entity expansion attacks, and other attacks that depend on clever manipulation of API inputs. Static policy checks work by scanning incoming packet headers and payloads, and can match against already-known access patterns associated with attacks. This permits, for example, JSON payloads to be validated against predefined JSON schemas, and can screen against injection attempts of various kinds. An API gateway can also enforce element count, size, and text pattern limits or filters to forestall attempted buffer overflows or illegal command injections.
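The static checks described above can be sketched in a few lines of Python. This is a minimal illustration, not a production gateway: the size limit, element cap, and injection pattern are all hypothetical values chosen for the example.

```python
import json
import re

# Illustrative policy limits a gateway might enforce (assumed values).
MAX_PAYLOAD_BYTES = 4096       # reject oversized bodies (buffer-overflow style abuse)
MAX_ELEMENT_COUNT = 50         # cap the number of top-level JSON fields
SQLI_PATTERN = re.compile(r"(;|--|\b(UNION|DROP|INSERT)\b)", re.IGNORECASE)

def static_check(raw_body: str) -> bool:
    """Return True if the request body passes all static policy checks."""
    if len(raw_body.encode("utf-8")) > MAX_PAYLOAD_BYTES:
        return False                      # size limit exceeded
    try:
        payload = json.loads(raw_body)
    except json.JSONDecodeError:
        return False                      # malformed JSON fails validation
    if not isinstance(payload, dict) or len(payload) > MAX_ELEMENT_COUNT:
        return False                      # wrong shape or too many elements
    # Screen every string value for obvious injection fragments.
    return not any(
        isinstance(v, str) and SQLI_PATTERN.search(v)
        for v in payload.values()
    )
```

Because these rules depend only on the current request, they can run independently on each inbound call, which is what makes them cheap enough for a gateway to apply universally.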

Dynamic Security Checks and Policies

Dynamic security checks, as the name implies, work against inputs and behaviors that can change. Typically, this means that inputs must be validated against some notion of state or status, as defined by previous inputs and the data associated with them. Most often, dynamic checks reflect the volume or frequency of API traffic and requests at the gateway. For example, throttling techniques depend on tracking previous activity volume to limit access when the number of prior API requests exceeds some predetermined (but adjustable) threshold. Rate limiting works in similar fashion – by curbing the concurrent access allowed for some particular service or resource.
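A throttle of the kind described above can be sketched as a sliding window over recent request timestamps. This is an illustrative stand-in for a gateway's built-in throttling, with the limit and window values chosen arbitrarily for the example.

```python
from collections import defaultdict, deque

class Throttle:
    """Sliding-window throttle: allow at most `limit` requests per `window` seconds."""

    def __init__(self, limit: int, window: float):
        self.limit = limit
        self.window = window
        self.history = defaultdict(deque)   # client id -> timestamps of recent requests

    def allow(self, client: str, now: float) -> bool:
        q = self.history[client]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False                    # over the threshold: reject this request
        q.append(now)
        return True
```

Note that the decision depends on prior requests, not just the current one – the defining trait of a dynamic check, and the reason such checks must keep per-client state.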

While techniques based on authorization, authentication, throttling, and rate limiting can be helpful, they do not address all the ways in which APIs might be attacked. Because API gateways typically serve numerous web services, their attendant APIs may be called in large numbers of concurrent, ongoing sessions. This limits the depth of inspection and the complexity of rules and policies that API gateways can apply, based simply on available CPU cycles.

APIs also manifest typical access patterns. What’s normal for one API (to access a product search engine) – namely, a large number of search requests as a shopper homes in on a particular item to buy – might be malicious for another, related API (to complete the purchase, once a selection is made). The latter pattern (a large number of items chosen for purchase at the same time) could, in fact, signal use of a stolen credit card to quickly exhaust its credit limit before access is blocked. This means that API access patterns vary, and each must be analyzed independently to establish the best, most secure response.

Likewise, it’s well known that many attacks originate internally, and involve trusted parties (employees, contractors, service personnel, and so forth) taking advantage of valid credentials and access to mount their attacks. Policy-based authentication and authorization are generally not able to detect and prevent such attacks. And, in fact, API gateways may be limited in how much more overhead they can handle in terms of added rules and policies to enforce.

This is precisely where AI can help, even with AI programming itself. AI-based API security can build models of what’s normal, and be taught to recognize anomalous or unwanted patterns of activity that often signal attacks. Because such AI models are inherently adaptive, they work well in responding to dynamic attacks within the context of what’s expected (and not expected) within individual APIs. In the same vein, AI can be cognizant of threat and vulnerability intelligence on a per-API basis and respond to known threats and vulnerabilities while flagging unknown threats and vulnerabilities for investigation.
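A model of "what's normal" can be as simple as a per-API statistical baseline. The sketch below flags traffic that deviates sharply from observed history using a z-score; this is a deliberately simplified stand-in for a learned model, and the threshold value is an assumption for illustration.

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], current: float, threshold: float = 3.0) -> bool:
    """Flag `current` (e.g. requests per minute for one API) when it deviates
    more than `threshold` standard deviations from the observed baseline.

    Real AI-based security would model far richer features (payload shapes,
    call sequences, per-client behavior); the principle is the same.
    """
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return current != mu          # flat baseline: any change is notable
    return abs(current - mu) / sigma > threshold
```

Because the baseline is computed per API, the same traffic level can be normal for a search endpoint yet anomalous for a purchase endpoint – exactly the distinction that static, one-size-fits-all policies miss.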

By building out from predefined models based on observed access patterns, AI-based security permits detection of attacks as they start to unfold. The more frequently such attacks occur, the more the model learns about how to identify them and fend them off. Better yet, AI systems usually run on their own dedicated compute, storage, and networking infrastructures, while communicating their findings to API gateways. This means the API gateway can act on AI-based information without incurring the overhead needed to produce such data. AI-based security can extend what API gateways already do without affecting their runtime behavior.

AI developers and programming security analysts or experts have a lot to learn from each other. By putting AI to work on better understanding and securing APIs based on usage patterns and inputs, enterprises can improve their overall security posture, while building more intelligent and capable AI-based (and cognizant) applications and services.


Would you like to know more about implementing a secure application development solution in your company? Get in touch with our Kiuwan team! We love to talk about security.
