The 95% Problem: The Trust Trade-off in AI-Generated UIs

Exploring the Limits of Trust and Predictability in Generative UI APIs

Issue #56

Contents

  • The 95% Problem: The Trust Trade-off in AI-Generated UIs

  • Interesting content for the week

  • Tool updates

  • Podcast

  • Feedback & share

  • Upcoming conferences

  • My services: API governance consulting

The 95% Problem: The Trust Trade-off in AI-Generated UIs

Introduction

Imagine asking your application "Show me sales by region" and, instead of getting static text or navigating through multiple screens, a fully interactive chart appears instantly—complete with filters, hover states, and drill-down capabilities. This is the promise of generative UI APIs: artificial intelligence that doesn't just process your request, but builds the interface to display it.

While traditional applications rely on pre-built screens coded by developers, generative UI APIs create live, interactive components on demand. When a user types "Compare the performance of XLK and the S&P 500," the AI doesn't just fetch data—it generates a complete UI specification that renders as charts, tables, and controls tailored to that exact request.

This shift represents a fundamental change in how we think about user interfaces. Instead of anticipating every possible user need and pre-building screens for them, we let AI generate the perfect interface for each unique context. But this power comes with a critical limitation: you can achieve rapid adoption and user satisfaction up to a point, but trust plateaus at around 95%—and that final 5% might be impossible to bridge.

What Are Generative UI APIs?

Generative UI APIs are endpoints that return structured UI specifications instead of raw data [1, 2, 3]. Rather than sending JSON for a client to display in predetermined layouts, these APIs generate complete component definitions—specifying which charts, forms, buttons, and interactive elements should appear, along with their configurations and data [4, 5, 6]. These APIs enable developers to integrate LLMs that dynamically create, modify and render fully structured UI components (charts, forms, buttons, cards) in real time, based on user input and context. Applications range from analytics dashboards to conversational agents that can respond to a request like "Book me a trip to Berlin" with generated forms and confirmation screens. The key benefits include real-time UI adaptation to user needs, rich interactive experiences that go beyond text-only conversations, and reduced frontend development time—no need to build every possible screen or layout in advance.
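
To make "structured UI specification" concrete, here is a minimal TypeScript sketch of the kind of shape such an API might return. The type and field names are illustrative assumptions, not any particular vendor's schema:

// Illustrative only: field names are assumptions, not a vendor schema.
interface UISpec {
  component: string;                // e.g. "Card", "BarChart", "Form"
  props?: Record<string, unknown>;  // configuration and bound data
  children?: UISpec[];              // nested component tree
}

// A response to "Show me sales by region" might look like this:
const salesByRegion: UISpec = {
  component: "Card",
  props: { title: "Sales by Region" },
  children: [
    {
      component: "BarChart",
      props: {
        data: [
          { region: "EMEA", sales: 120000 },
          { region: "APAC", sales: 95000 },
        ],
        xKey: "region",
        yKey: "sales",
      },
    },
  ],
};

Note that the client never receives executable layout code, only a declarative tree of components it already knows how to render.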

The Trust Ceiling

These benefits, however, come with a critical limitation. At API Conference Berlin 2025, I heard Tobias Berei speak on 'Optimising APIs for AI agents' (great talk—you can catch up on it on the Devmio app https://devm.io/app/). In his talk, Tobias discussed how putting logic in the AI tier offers faster time-to-feature, but also introduces non-determinism that reduces trustworthiness. After catching up with him post-talk, we came up with the following chart that summarises this fundamental tradeoff based on our anecdotal experience:

This chart illustrates that while user trust rises rapidly at first as UI logic moves to the AI tier (up to about 50% trust), the cost and time required to reach higher trust levels increase exponentially. Trust maxes out around 90-95%, and we cannot achieve the same level of trust as with traditional, statically coded UIs. (Caution: these numbers and the chart itself are based on anecdotal experience, not on any hard experimental data, so treat this more like a hypothesis.)

This limitation becomes the central challenge for any organisation considering generative UIs: that final 5-10% of trust may be impossible to bridge, regardless of how much time and resources you invest.

How Generative UI APIs Work

The generative UI workflow transforms natural language requests into interactive interface components through a structured process. Here's how it works end-to-end:

The Request Flow

1. User Intent Capture
A user submits a natural language request through a chat interface or input field. For example: "Compare the performance of XLK and the S&P 500 over the last year."

2. Context Enrichment
The frontend application packages the user's prompt with additional context, such as user preferences and settings, historical interaction data, current application state, and authentication tokens and permissions.

3. API Processing
The backend forwards this enriched request to the Generative UI API. This enriched request includes the original user prompt, system instructions defining available UI components, data access permissions and constraints, and any brand guidelines and design system rules.

4. Intent Analysis and Tool Selection
The LLM analyses the request and determines what type of visualisation best serves the user's need, which data sources to query, and how to structure the response for optimal user experience. The LLM calls predefined functions or tools that correspond to specific UI components (e.g., createBarChart(data)).

5. Component Specification Generation
The API returns a structured JSON specification containing the component types and hierarchy, data binding configurations and styling, layout instructions, and interactive behaviour definitions.

6. Dynamic Rendering
The frontend receives this specification and uses a rendering SDK to parse the component definitions, fetch any required data, apply appropriate styling, render the interactive interface, and handle user interactions within the generated components.

This process typically completes in 1-3 seconds, creating the illusion of instant, intelligent interface generation.
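
To make the flow concrete, here is a minimal client-side sketch in TypeScript. The endpoint URL, payload shape, and helper functions are illustrative assumptions rather than any vendor's actual contract:

// Illustrative only: endpoint, payload shape, and helpers are assumptions.
interface EnrichedRequest {
  prompt: string;                       // step 1: the raw user intent
  preferences: Record<string, unknown>; // step 2: context enrichment
  appState: Record<string, unknown>;
}

// Stubs standing in for real application plumbing.
const loadPreferences = (): Record<string, unknown> => ({ locale: "en-GB" });
const snapshotState = (): Record<string, unknown> => ({ page: "dashboard" });
const renderUI = (spec: unknown): void => console.log("render", spec);

async function handleUserPrompt(prompt: string, authToken: string) {
  const enriched: EnrichedRequest = {
    prompt,
    preferences: loadPreferences(),
    appState: snapshotState(),
  };

  // Steps 3-5: forward the enriched request to the generative UI API,
  // which analyses intent, selects tools/components, and returns a
  // structured component specification.
  const res = await fetch("https://api.example.com/v1/generate-ui", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${authToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(enriched),
  });
  const spec = await res.json(); // component tree, data bindings, layout

  // Step 6: hand the specification to a rendering SDK.
  renderUI(spec);
}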

Example of a Generative UI API: Thesys C1

Thesys C1 is a generative UI API. Beyond interactive components, it also provides themeable, streaming UI features. Here is an example of using the Thesys C1 API. Imagine that as a user, I type the following prompt into a chat interface: "Compare the performance of the Technology Select Sector SPDR Fund (XLK) and the S&P 500." The backend application can make the following request to the Thesys C1 Generative UI API:

curl --request POST \
  --url https://api.thesys.dev/v1/embed/chat/completions \
  --header 'Authorization: Bearer sk-th-my-bearer-token' \
  --header 'content-type: application/json' \
  --data '{
    "model": "c1/anthropic/claude-sonnet-4/v-20250617",
    "messages": [
      {"role": "system", "content": "You generate UI widgets for a financial dashboard."},
      {"role": "user", "content": "Compare the performance of the Technology Select Sector SPDR Fund (XLK) and the S&P 500."}
    ]
  }'

The response contains the following UI specification, which can be rendered dynamically into React components in the frontend using the Thesys C1 SDK [7]:

{
  "id": "chatcmpl-1761646410913-mlbyjw9vlj",
  "object": "chat.completion",
  "created": 1761646410,
  "model": "c1/anthropic/claude-sonnet-4/v-20250617",
...
    "content":
        "<content>{
          component: {
            component: Card,
            props: {
              children: [
                {
                  component: CardHeader,
                  props: {
                    title: XLK vs S&P 500 Performance Comparison,
                    subtitle: Technology Select Sector SPDR Fund compared to broader market
                  }
                },
                {
                  component: Tabs,
                  props: {
                    children: [
                      {
                        value: overview,
                        trigger: {
                          text: Overview
                        },
                        content: [
...               
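
To give a feel for what a rendering SDK does with such a specification, here is a minimal, generic React renderer sketch. It is not the Thesys C1 SDK (see [7] for that); the spec shape and component registry are illustrative assumptions:

import React from "react";

// Illustrative spec shape; real vendor schemas differ.
interface ComponentSpec {
  component: string;
  props?: { children?: ComponentSpec[]; [key: string]: unknown };
}

// Registry mapping spec names to concrete React components. In a real
// SDK, this doubles as the design-system allowlist.
const registry: Record<string, React.ComponentType<any>> = {
  Card: ({ children }) => <div className="card">{children}</div>,
  CardHeader: ({ title, subtitle }) => (
    <header>
      <h2>{title}</h2>
      <p>{subtitle}</p>
    </header>
  ),
};

// Recursively walk the specification and instantiate registered
// components; unknown component names are dropped, not rendered.
function RenderSpec({ spec }: { spec: ComponentSpec }) {
  const Component = registry[spec.component];
  if (!Component) return null;
  const { children, ...rest } =
    spec.props ?? ({} as NonNullable<ComponentSpec["props"]>);
  return (
    <Component {...rest}>
      {children?.map((child, i) => (
        <RenderSpec key={i} spec={child} />
      ))}
    </Component>
  );
}

A registry like this doubles as a design-system allowlist: any component name the model emits that is not registered simply never renders.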

Apart from Thesys C1, other generative UI APIs to consider include the Vercel AI SDK and CopilotKit. However, I haven't played with these yet.

Tradeoffs to consider with Generative UI APIs

Moving to generative UI means the interface loses its predictable appearance. The same user prompt or context may produce different UI renderings at different times, so developers cannot reliably predict exactly what the user will see.

Development and Maintenance Challenges

Several operational issues emerge with AI-generated interfaces:

  • Testing complexity: Dynamic UI generation makes traditional testing approaches inadequate (one mitigation is sketched after this list).

  • Design system compliance: LLM hallucinations can produce components that violate brand guidelines, colour palettes, and typography standards.

  • Resource overhead: Sophisticated language models with autonomous reasoning consume significantly more compute resources than traditional API endpoints.
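
On the testing point, one pragmatic mitigation is to contract-test the generated specification rather than the rendered pixels: assert that every spec the API returns is structurally valid and only uses allowlisted components. Here is a minimal sketch using the zod validation library (the schema, allowlist, and spec shape are illustrative assumptions):

import { z } from "zod";

// Illustrative: the spec shape matches the sketches above; the
// allowlist would come from your design system.
type Spec = {
  component: string;
  props?: Record<string, unknown>;
  children?: Spec[];
};

// Recursive schema for the component tree.
const specSchema: z.ZodType<Spec> = z.lazy(() =>
  z.object({
    component: z.string(),
    props: z.record(z.unknown()).optional(),
    children: z.array(specSchema).optional(),
  })
);

const ALLOWED = new Set(["Card", "CardHeader", "Tabs", "BarChart"]);

// Throws if the spec is structurally invalid or uses a component
// outside the allowlist; run this against recorded prompts in CI.
function validateSpec(raw: unknown): Spec {
  const spec = specSchema.parse(raw);
  const walk = (s: Spec): void => {
    if (!ALLOWED.has(s.component)) {
      throw new Error(`Unknown component: ${s.component}`);
    }
    s.children?.forEach(walk);
  };
  walk(spec);
  return spec;
}

Because outputs are non-deterministic, assertions like these target the contract (structure and allowlist) rather than exact content, which is about as far as traditional-style tests can reach.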

Conclusion

Generative UI APIs represent a significant shift in how UIs are built and rendered. By leveraging AI to dynamically generate interactive components based on user context, they enable more adaptive and personalized experiences. However, this comes with tradeoffs around predictability, trust, testing complexity and cost. As with any emerging technology, it's important to carefully evaluate whether the benefits outweigh the challenges for your specific use case.

Interesting content for the week

Runtime AI Governance

The Agentic Sandbox: Infrastructure for Safe AI: Frank Kilcommins shares that safe, successful AI deployment requires a structured simulation layer to discover, validate, and govern autonomous actions before they enter production.

API Summit 2025 Recap: AI Connectivity and the Agentic Era: Recapping Kong's API Summit 2025, Augusto Marietti shares that the event was an unprecedented convergence of traditional API traffic with AI traffic, marking the arrival of the Agentic Era.

Prompting agents: What works and why: Nolan Sullivan explains that effective agentic prompting is a multi-layered architectural practice that requires influencing an agent's behaviour at every level, from system prompts to tool specifications.

How Atlassian Is Driving the AI Agent-Assisted Workflow: Atlassian is shifting its AI strategy to focus on the workflow context and connected data to deliver meaningful results for enterprise customers. Jennifer Riggins shares that successful AI adoption hinges on breaking down data and organisational silos by centralising knowledge into a "Teamwork Graph" that powers AI agents across a company’s entire workflow.

Agentic AI and MCP dominated Platform Summit 2025: Bill Doerrfeld, recapping Platform Summit 2025, shares that the rise of highly actionable AI agents is necessitating a fundamental shift in API security and governance standards, turning APIs into the core "neural pathways" of the new AI world.

Introducing any-guardrail: A common interface to test AI safety models: Daniel Nissani announces Any-Guardrail, a new open-source tool from Mozilla.ai designed to provide a unified interface for all AI safety models (guardrails).

How to Optimize API Documentation for AI Discoverability: J Simpson writes that to succeed in the agentic AI era, documentation must shift its focus from human readability to machine readability, ensuring that large language models (LLMs) can reliably understand, reason about, and use the exposed APIs as tools.

AI agents will succeed because one tool is better than ten: Ryan Donovan writes that AI agents will last because they can offer one chat interface that uses multiple tools.

API Production Governance

MCP vs. API Gateways: They’re Not Interchangeable: Christian Posta argues that Model Context Protocol (MCP) and API Gateways are not interchangeable technologies because they operate on fundamentally different paradigms.

Top Benefits of Unified APIs: Kateryna Poryvay discusses the problems caused by the proliferation of SaaS applications in modern business and presents Unified APIs as the core solution.

Apideck Joins the OpenAPI Initiative: Apideck joins the OpenAPI Initiative (OAI) to drive standardisation for unified APIs and support an open, reliable specification.

API Prototypes Need Persistent Operations: Bruno Pedro argues that API prototypes must incorporate data persistence to accurately simulate mutating operations, and gather meaningful feedback from stakeholders.

Five things we learned from API leaders in financial services: Budhaditya Bhattacharya shares that AI readiness in financial services is directly proportional to API management maturity and requires a governance-first approach to manage data sovereignty and API sprawl.

Tool Updates

Introducing the Volcano SDK to Build AI Agents in a Few Lines of Code: Marco Palladino announces the open-sourcing of the Volcano SDK, a TypeScript library designed by Kong to simplify building multi-step, multi-LLM (Large Language Model) AI agents.

Introducing New MCP Support Across the Entire Konnect Platform: Model Context Protocol (MCP) support is now fully integrated across Kong's Konnect platform to accelerate the transition to AI-powered developer workflows and automation, effectively making Konnect the unified API and AI connectivity platform.

Swagger Editor: A New Era Begins: Michał Krawczyk announces a major overhaul and redesign of the Swagger Editor, marking a new era focused on improving the developer experience for creating OpenAPI Specifications (OAS).

Podcast

Open source is giving you choices with your agent systems: The podcast recap, featuring John Dickerson, CEO of Mozilla.ai, discusses the evolving landscape of AI agents and the critical role of open source and open standards in ensuring a healthy, trustworthy ecosystem.

References

[1] J. Stevens, "Beyond Chatbots: A Vision of Context-Generated UI," Medium, [Online]. Available: https://medium.com/@jessestevens/beyond-chatbots-a-vision-of-context-generated-ui-f005a590f5ab.

[2] P. Deshmukh, "Beyond Chatbots: Why Interactive AI UIs Are the Next Frontier," Thesys, [Online]. Available: https://www.thesys.dev/blogs/beyond-chatbots-why-interactive-ai-uis-are-the-next-frontier.

[3] Z. Varnagy-Toth, "Will AI chat replace legacy UIs? Unlikely for editor UIs," UX Planet, [Online]. Available: https://uxplanet.org/will-ai-chat-replace-legacy-uis-unlikely-for-editor-uis-647074e549e9.

[4] "Generative UI," Thesys Documentation, [Online]. Available: https://docs.thesys.dev/guides/concepts#generative-ui.

[5] P. Deshmukh, "What Are Agentic UIs? A Beginner's Guide to AI-Powered Interfaces," Thesys, [Online]. Available: https://www.thesys.dev/blogs/what-are-agentic-uis-a-beginners-guide-to-ai-powered-interfaces.

[6] "Generative UI API," Thesys Documentation, [Online]. Available: https://docs.thesys.dev/guides/concepts#generative-ui-api.

[7] "Rendering C1 Responses into live UI," Thesys Documentation, [Online]. Available: https://docs.thesys.dev/guides/rendering-ui.

Feedback & Share

What do you think of this newsletter issue?


Upcoming Conferences

KubeCon + CloudNativeCon North America 2025: The Cloud Native Computing Foundation’s flagship conference brings together adopters and technologists from leading open source and cloud native communities in Atlanta, Georgia. Date: November 10-13, 2025.

Apidays Paris: Apidays Paris sparks essential conversations on data security, digital sovereignty, and sustainable innovation in the age of intelligent systems. Date: 9-11 December 2025. Location: CNIT Forest, Paris.

My Services: API Governance Consulting

Is poor API governance slowing down your delivery? Do you experience API sprawl, API drift and poor API developer satisfaction? I'll provide expert guidance and a tailored roadmap to transform your API practices.

Ikenna® Delivery Assessment → Identify your biggest API delivery pain points.

Ikenna® Delivery Canvas (IDC) & API Transformation Plan → Get a unified, data-driven view of your API delivery and governance process.

Ikenna® Improvement Cycles → Instil a culture of scientific, measurable progress towards API governance.

Ikenna® Governance Team Model → Set up and improve your governance team to sustain progress.

Ikenna® Delivery Automation Guidance → Reduce lead time and improve API quality through automation.

Schedule a consultation by emailing: [email protected].
