Evaluating Kong AI Gateway using the IGT-AI Model: Sensitive Data Disclosure
Issue #58
Contents
Evaluating Kong AI Gateway Against the IGT-AI Model: Sensitive Data Disclosure
Interesting content for the week
Tool updates
Feedback & share
Upcoming conferences
My services: API governance consulting
Evaluating Kong AI Gateway Against the IGT-AI Model: Sensitive Data Disclosure
The IGT-AI is the framework I use for evaluating AI gateways. One risk detailed in the IGT-AI risk model is sensitive data disclosure. In this post, I'll examine how the Kong AI Gateway mitigates this specific risk.
The Risk of Sensitive Data Disclosure
In the process of communicating with AI APIs, sensitive information may be inadvertently exposed via requests to a Large Language Model (LLM). Furthermore, malicious actors can exploit these systems by manipulating LLMs to reveal confidential data they have been trained on. These scenarios can result in significant security breaches and regulatory non-compliance, and collectively constitute what I term sensitive data disclosure.
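To make the inadvertent-exposure path concrete, here is a minimal sketch of how PII can reach a model provider when prompts are built from raw user data and no gateway-level control sits in between. The ticket text and API key are hypothetical; the endpoint shown is the standard OpenAI-compatible chat completions API.

```python
import requests

# Hypothetical support ticket pasted verbatim into a prompt.
ticket_text = (
    "Customer Jane Doe (card 4111 1111 1111 1111, jane@example.com) "
    "says her refund has not arrived."
)

# Without a gateway-level control, the PII travels to the provider as-is.
response = requests.post(
    "https://api.openai.com/v1/chat/completions",  # any OpenAI-compatible endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "gpt-4o",
        "messages": [
            {"role": "system", "content": "Summarise this support ticket."},
            {"role": "user", "content": ticket_text},
        ],
    },
)
print(response.status_code)
```

Whatever the customer typed, card number included, is shipped to the model provider verbatim. This is the gap the gateway controls below are meant to close.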
Kong AI Gateway Mitigation Controls
Kong AI Gateway functionality is enabled by applying the foundational AI Proxy plugin, which routes and manages LLM traffic, and then layering separate Guardrail plugins on top to enforce security policies such as the ones detailed below. At the time of writing, and as far as I could find, the Kong AI Gateway supports a collection of six primary guardrail plugins designed to mitigate sensitive data disclosure, starting with these built-in ones (a configuration sketch follows the list):
The AI Prompt Guard plugin: This blocks or allows queries based on regular expression (regex) matches against defined prompts, words, and phrases, providing deterministic allowlist/blocklist control.
The AI Semantic Prompt Guard plugin: This blocks or allows queries based on embedding-based similarity matching. Kong's embedding-based matching supports Redis and Pgvector vector databases.
The AI PII Sanitizer plugin: This prevents Personally Identifiable Information (PII) from reaching an LLM, and stops PII in an LLM's output from reaching the user. It works in conjunction with Kong's AI PII Anonymizer service, which is available as a Docker image in a private registry for Kong clients. The Sanitizer plugin delegates requests and responses (over HTTP/S) to an instance of the Anonymizer service, which detects PII and applies the chosen sanitisation methods to the message; I suspect this split exists so the Anonymizer service can scale independently. The Anonymizer service supports nineteen different field anonymisation options, including phone numbers, emails, credit card numbers, and more. Quite comprehensive.
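Here is that configuration sketch: it enables the AI Proxy plugin and the AI Prompt Guard plugin on a route via Kong's Admin API. The route name is made up, and the exact config field names (route_type, the model settings, deny_patterns) reflect my reading of Kong's plugin documentation, so verify them against your Kong version before use.

```python
import requests

ADMIN_API = "http://localhost:8001"  # Kong's default Admin API address
ROUTE = "llm-chat-route"             # hypothetical, pre-configured route

# 1. Attach the AI Proxy plugin so the route fronts an upstream LLM.
requests.post(
    f"{ADMIN_API}/routes/{ROUTE}/plugins",
    json={
        "name": "ai-proxy",
        "config": {
            "route_type": "llm/v1/chat",
            "auth": {
                "header_name": "Authorization",
                "header_value": "Bearer YOUR_OPENAI_API_KEY",
            },
            "model": {"provider": "openai", "name": "gpt-4o"},
        },
    },
).raise_for_status()

# 2. Layer the AI Prompt Guard plugin on top: a deterministic regex
#    deny-list evaluated before any tokens reach the model.
requests.post(
    f"{ADMIN_API}/routes/{ROUTE}/plugins",
    json={
        "name": "ai-prompt-guard",
        "config": {
            # Illustrative patterns: card-number-like digit runs and an
            # internal project codename.
            "deny_patterns": [
                r".*(?:\d[ -]?){13,16}.*",
                r".*project-aurora.*",
            ],
        },
    },
).raise_for_status()
```

With both plugins in place, a request matching a deny pattern is rejected at the gateway before it is ever forwarded to the provider.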
Note that I have not included Kong's AI Prompt Template plugin here, as I see it as primarily aimed at mitigating prompt injection, which is a separate security concern.
Third-Party Guardrail Integration
Should you choose not to use these built-in sensitive data disclosure guardrails, Kong also offers plugins that delegate to third-party guardrails (a sketch of the AWS integration follows the list):
AI AWS Guardrails plugin (for integration with Amazon Bedrock Guardrails)
AI Azure Content Safety plugin (for integration with Azure AI Content Safety)
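As a rough illustration, here is what delegating to Bedrock Guardrails might look like. I have modelled the config fields (guardrails_id, guardrails_version, aws_region) on Bedrock's ApplyGuardrail parameters rather than on confirmed plugin documentation, so treat every field name here as an assumption and check Kong's plugin reference.

```python
import requests

ADMIN_API = "http://localhost:8001"   # Kong's default Admin API address
ROUTE = "llm-chat-route"              # hypothetical route name

# Attach the AI AWS Guardrails plugin so prompts and responses are
# checked against a guardrail defined in Amazon Bedrock.
requests.post(
    f"{ADMIN_API}/routes/{ROUTE}/plugins",
    json={
        "name": "ai-aws-guardrails",
        "config": {
            # Field names assumed from Bedrock's ApplyGuardrail API.
            "guardrails_id": "abc123example",
            "guardrails_version": "1",
            "aws_region": "eu-west-1",
        },
    },
).raise_for_status()
```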
This collection of guardrails offers effective run-time protection for sensitive data disclosure in AI-API interactions. Kong's support for the guardrails provided by the hyperscalers—AWS, Azure, and GCP—is a significant advantage, as many enterprises are already comfortable using these cloud-native tools.
However, while Kong offers its own sensitive-data guardrail plugins and hyperscaler integrations, it does not currently offer the same wide range of third-party guardrail integrations as competitors like Portkey and LiteLLM. These platforms offer integration with providers such as Lakera AI, Lasso Security, Pangea, and Guardrails AI, among others.
The good news is that Kong supports the creation of custom plugins, meaning that users can always develop their own integrations to address any specific or missing requirements.
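As a sketch of what such a custom integration could look like, here is a minimal custom guardrail written against Kong's Python PDK (kong-python-pdk). The plugin name, priority, and regex are illustrative, and the PDK calls follow the project's published examples, so double-check them against your Kong version before relying on this.

```python
# sensitive_data_guard.py: a custom guardrail sketch for Kong's
# Python PDK (kong-python-pdk). PDK calls follow the project's
# published examples; verify against your Kong version.
import re

# Plugin config schema: a single regex the operator can override.
Schema = ({"deny_pattern": {"type": "string"}},)
version = "0.1.0"
priority = 900  # illustrative; chosen to run before proxying upstream


class Plugin:
    def __init__(self, config):
        default = r"(?:\d[ -]?){13,16}"  # card-number-like digit runs
        self.pattern = re.compile(config.get("deny_pattern", default))

    def access(self, kong):
        # Python PDK calls return (value, error) tuples.
        body, err = kong.request.get_raw_body()
        text = body.decode("utf-8", "ignore") if isinstance(body, bytes) else (body or "")
        if self.pattern.search(text):
            # Reject before the request ever reaches the LLM provider.
            kong.response.exit(400, "request blocked: possible sensitive data")


if __name__ == "__main__":
    # Run as a dedicated plugin server process.
    from kong_pdk.cli import start_dedicated_server
    start_dedicated_server("sensitive-data-guard", Plugin, version, priority)
```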
Overall, I think the Kong AI Gateway does a great job of providing comprehensive mitigation for sensitive data disclosure in AI-API interactions.
Interesting content for the week
Runtime AI Governance
The state of AI in 2025: Agents, innovation, and transformation: The latest McKinsey survey reveals a landscape of widespread AI adoption but uneven realisation of enterprise-level value.
Code execution with MCP: Building more efficient agents: Anthropic’s post argues that presenting MCP servers as code APIs and enabling agents to write code is a significantly more efficient and secure method for handling complex, large-scale tool use than traditional direct tool calling.
Dynamic MCPs with Docker: Stop Hardcoding Your Agents’ World: Jim Clark introduces Dynamic Model Context Protocols (MCPs), arguing that the practice of statically configuring AI agents with a fixed set of tools is outdated, inefficient, and limits the agent's potential.
5 Agentic AI Design Patterns Transforming Enterprise Operations in 2025: Shakudo argues that autonomous AI requires fundamental architectural shifts, highlighting five design patterns that enable systems to operate independently within enterprise boundaries.
Server Instructions: Giving LLMs a user manual for your server: Ola Hungerford writes about server instructions in the Model Context Protocol (MCP), a feature designed to give LLMs explicit, workflow-aware guidance on how to use a server's tools.
API Production Governance
API Gateway vs. AI Gateway: The Definitive Guide to Modern AI Infrastructure: Kong argues that while traditional API Gateways remain vital infrastructure, they are ill-suited for the unique demands of Large Language Model (LLM) workloads, necessitating a specialised AI Gateway.
The Hidden Trust Problem in API Formats: Bruno Pedro highlights the fundamental challenge emerging from the widespread use of the OpenAPI Specification (OAS) and other descriptive formats.
The Feature You Didn't Know You Needed: Multi-Layer Routing in Traefik: Immánuel Fodor discusses multi-layer routing in Traefik Proxy, explaining how it uses different levels of configuration to define traffic rules efficiently.
H2 2025 State of API Security: The report from Salt Security highlights a growing disconnect between rapid API adoption and outdated security practices, identifying this gap as a systemic risk threatening modern enterprise initiatives.
Tool Updates
GPT-5.1: A smarter, more conversational ChatGPT: OpenAI announces GPT-5.1, a significant upgrade to the GPT-5 series focused on making ChatGPT smarter, more conversational, and highly customisable.
Postman Product Update: November 2025: Postman showcases a suite of features focused on closing gaps in the API lifecycle to ensure API specifications, testing, and deployment are reliable for both humans and AI agents.
Introducing Solo Enterprise for agentgateway: Solo.io introduces its enterprise offering for agentgateway, a gateway designed to provide enterprise-grade security, governance, and observability for AI-driven, agentic workloads.
Feedback & share
What do you think of this newsletter issue?
Upcoming conferences
Apidays Paris: Apidays Paris sparks essential conversations on data security, digital sovereignty, and sustainable innovation in the age of intelligent systems. Date: 9-11 December 2025. Location: CNIT Forest, Paris.
My Services: API Governance Consulting
Is poor API governance slowing down your delivery? Do you experience API sprawl, API drift and poor API developer satisfaction? I'll provide expert guidance and a tailored roadmap to transform your API practices.
Ikenna® Delivery Assessment → Identify your biggest API delivery pain points.
Ikenna® Delivery Canvas (IDC) & API Transformation Plan → Get a unified, data-driven view of your API delivery and governance process.
Ikenna® Improvement Cycles → Instil a culture of scientific, measurable progress towards API governance.
Ikenna® Governance Team Model → Set up and improve your governance team to sustain progress.
Ikenna® Delivery Automation Guidance → Reduce lead time and improve API quality through automation.
Schedule a consultation by emailing: [email protected].