AI & ML · 3 min read

Building Agentic UI that adapts to your user's needs - AG-UI

AG-UI in plain language: capabilities vs static UI, tradeoffs, and where to start (repo, quickstart)

AG-UI · MCP · generative UI · CopilotKit · AI

Author


Tope Akinkuade

Results-driven Product Engineer with years of hands-on experience building and scaling web applications across fintech and logistics B2B platforms

Table of contents

Brief History · Ecosystem · Tradeoffs · Conclusion


If you've used MCPs and CLIs, or you've read about the A2A protocol, you'll eventually bump into AG-UI. I had to read their repo readme and docs quite a few times before it stuck, but the concept is nice. If you've built chat interfaces, especially agentic ones, you've definitely had to build around this already, whether or not you had a name for it.

Traditionally, we build static UIs. We predict every possible user path and hard-code a button or some other component for it. If a user wants to do something we didn't build for, or wants to tweak their UI in ways we didn't plan for that would actually help them, they're stuck.

AG-UI flips this. Instead of a library of fixed pages, you build a library of capabilities. When a user expresses an intent, an agent figures out which capability is needed and "renders" the specific UI components for that moment.
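To make "library of capabilities" concrete, here's a minimal sketch in TypeScript. Everything in it is illustrative, not the AG-UI API: `Capability`, `pickCapability`, and the component names are made up, and the intent matching is a stand-in for what the LLM would actually decide.

```typescript
// Hypothetical sketch of a capability catalog an agent picks from.
// In a real system the orchestrator (LLM) chooses the capability id;
// here we fake that step with string matching.

type Capability = {
  id: string; // what the agent asks for by name
  description: string; // fed to the model so it can choose
  render: (props: Record<string, unknown>) => string; // stand-in for a React component
};

const registry = new Map<string, Capability>([
  [
    "refund-form",
    {
      id: "refund-form",
      description: "Collects an order id and reason, then issues a refund",
      render: (props) => `<RefundForm order=${props.orderId} />`,
    },
  ],
  [
    "order-status",
    {
      id: "order-status",
      description: "Shows live status for a given order",
      render: (props) => `<OrderStatus order=${props.orderId} />`,
    },
  ],
]);

// Stand-in for the agent: map an expressed intent to a capability.
function pickCapability(intent: string): Capability | undefined {
  if (intent.includes("refund")) return registry.get("refund-form");
  if (intent.includes("where is my order")) return registry.get("order-status");
  return undefined;
}
```

The point of the shape: the registry is finite and yours, so the agent can only compose screens out of things you deliberately put in the catalog.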

Notice how I said you might have done something like this before? You've probably tweaked the UI based on the response from an agent more times than you can count. I know I did plenty of that at CompasAI.


Brief History

In early 2024, "AI in the UI" mostly meant a small chat bubble in the corner of a screen. It was disconnected from the rest of the application.

By late 2025, the chat box was a bottleneck. Users wanted to get things done rather than just chat all the time. We started seeing generative UI, but it was fragmented. Every team was building its own custom "streaming component" logic.

The AG-UI protocol came out of that mess. The docs line it up next to MCP: MCP standardized how agents talk to data; AG-UI is the push to standardize how agents talk to humans through interfaces (MCP, A2A, and AG-UI).

If a traditional UI is a fast food menu, AG-UI is closer to a private chef.

You say: "I'm hosting a Chinese dinner for four." The chef won’t hand you the same laminated list every time like a waiter. They lay out the ingredients, tools, and plates for that meal.

In AG-UI, the "ingredients" are your React / Shadcn components (or whatever you register), and the "chef" is the AI agent. The UI is generated just-in-time from what the agent is allowed to ask for.

Core pieces:

  1. Intent parser: figures out what the user is trying to do (voice, text, behavior).

  2. Orchestrator: the brain (LLM) that picks which capability fires.

  3. The registry: your catalog of headless UI pieces the agent can actually request.

  4. The transport: streaming, usually SSE or WebSockets, carrying UI instructions to the client.

The short version: you still own the design system; the agent only assembles from what you've registered.
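The transport piece can be sketched as a stream of UI instruction events that the client folds into the current screen. The event shapes below are assumptions for illustration, not the actual AG-UI wire format (the real protocol defines its own event types; check the repo).

```typescript
// Illustrative event shapes, not the AG-UI wire format.
type UiEvent =
  | { type: "component.open"; component: string; props: Record<string, unknown> }
  | { type: "component.update"; component: string; props: Record<string, unknown> }
  | { type: "component.close"; component: string };

// The client's current screen: component name -> its props.
type Screen = Map<string, Record<string, unknown>>;

// Fold one event (e.g. arriving over SSE) into the screen.
function reduce(screen: Screen, ev: UiEvent): Screen {
  const next = new Map(screen);
  switch (ev.type) {
    case "component.open":
    case "component.update":
      // Merge new props over whatever the component already had.
      next.set(ev.component, { ...(next.get(ev.component) ?? {}), ...ev.props });
      break;
    case "component.close":
      next.delete(ev.component);
      break;
  }
  return next;
}
```

A reducer like this is also where you'd hook in guardrails later: every instruction passes through one choke point before it touches the screen.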


Ecosystem

The ag-ui-protocol/ag-ui repo is the obvious home base. It's a wire protocol that's being standardized.

Quick start if you're ready to touch code: the "Build applications" guide in the docs. If you want demos with previews, there's the AG-UI Dojo.


Tradeoffs

Switching from traditional, deterministic UIs to AG-UI is a significant architectural decision. It unlocks a lot of flexibility, but it also introduces new challenges to address.

In a traditional UI, you know exactly what a user sees. In AG-UI, the agent can effectively "hallucinate" the interface: it might request components in a sequence that makes no sense to a human user (e.g., showing a "Confirm Purchase" button before the "Price Breakdown").

People rely on habits (sidebar on the left, checkout steps in order) to move without thinking. When the shell shifts every turn, those mental models crack. The user is not lost because they're stupid; the app UI just keeps moving.

Agents often stream tool outputs straight into generative components, so backend over-fetching is a real risk. You have to be strict about what leaves the server: PII and internal IDs should not hit the AG-UI layer unfiltered.
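One way to be strict about it is a scrub step on the server before tool output reaches the UI layer. This is a minimal sketch; the blocked field names are assumptions about your own schema, and a real deployment would likely work from an allowlist per tool rather than a global denylist.

```typescript
// Hypothetical server-side scrub applied before tool output is streamed
// to the generative UI layer. Field names are illustrative.
const BLOCKED_FIELDS = new Set(["email", "ssn", "internalId", "dbRowId"]);

function scrub<T extends Record<string, unknown>>(payload: T): Partial<T> {
  const safe: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(payload)) {
    if (BLOCKED_FIELDS.has(key)) continue; // never leaves the server
    // Recurse into nested objects so buried identifiers get caught too.
    safe[key] =
      value && typeof value === "object" && !Array.isArray(value)
        ? scrub(value as Record<string, unknown>)
        : value;
  }
  return safe as Partial<T>;
}
```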

The layout can change run to run. Your tests can't live on one happy-path snapshot anymore. Someone on the team has to own state machines, interrupts, and what happens when the stream stops mid-card. Nondeterministic UI, deterministic QA: same tension as always, just louder.
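One testing style that survives a moving layout: instead of snapshotting one happy path, assert invariants that must hold over any event sequence. A tiny sketch, with made-up component names echoing the earlier purchase example:

```typescript
// Property-style check over a trace of rendered components.
// Component names are illustrative.
type Trace = { component: string }[];

// Invariant: "ConfirmPurchase" must never appear before "PriceBreakdown".
function confirmAfterPrice(trace: Trace): boolean {
  let priceShown = false;
  for (const ev of trace) {
    if (ev.component === "PriceBreakdown") priceShown = true;
    if (ev.component === "ConfirmPurchase" && !priceShown) return false;
  }
  return true;
}
```

Run checks like this over many recorded or generated traces; the agent stays nondeterministic, but the properties you care about stay deterministic.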

Guardrails mitigate some of these issues: restrict what the agent can request, enforce ordering for money flows, require a human in the loop before irreversible actions, and validate payloads before rendering.
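The first and last of those can live in one small gate in front of the renderer. A sketch, assuming made-up component names and hand-rolled validators (in practice you'd likely reach for a schema library):

```typescript
// A minimal guardrail: allowlist plus per-component payload validation,
// applied before anything is rendered. Names and schemas are illustrative.
const ALLOWED = new Set(["PriceBreakdown", "ConfirmPurchase"]);

const validators: Record<string, (p: Record<string, unknown>) => boolean> = {
  PriceBreakdown: (p) => typeof p.total === "number" && p.total >= 0,
  ConfirmPurchase: (p) => typeof p.orderId === "string",
};

function admit(component: string, props: Record<string, unknown>): boolean {
  if (!ALLOWED.has(component)) return false; // agent asked for something off-catalog
  const valid = validators[component];
  return valid ? valid(props) : false; // reject malformed payloads outright
}
```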


Conclusion

I might have oversold the chef metaphor btw. AG-UI probably increases the number of ways you can cook beans. Still, if you're tired of bespoke streaming message types and you're building a new agentic application with standardized tooling, AG-UI is worth the random readme pass I did at 1 am last Sunday.
