# LLM-Driven UI: MCP-UI vs json-render
Research notes for building rich info cards (weather, stocks, flights, maps, etc.) rendered from MCP web search results.
## The Two Approaches
### MCP-UI (MCP-UI-Org/mcp-ui)
- Philosophy: "Tools should have UIs" — MCP servers push full HTML/JS interfaces to AI hosts
- Who builds the UI: The tool author writes HTML/JS, rendered in a sandboxed iframe
- Transport: Piggybacks on the MCP resource system (`_meta.ui.resourceUri`)
- Rendering: Sandboxed iframes (`srcDoc`/`src`) or Shopify Remote DOM
- Trust model: Host doesn't trust server code — everything sandboxed
- Stars: 4.4k, Apache 2.0
- SDKs: TypeScript, Python, Ruby
### json-render (vercel-labs/json-render)
- Philosophy: "AI should generate UIs" — LLMs output JSON specs mapped to a developer-curated component catalog
- Who builds the UI: AI generates JSON spec at runtime; developer only defines what's possible
- Transport: Framework-agnostic — LLM outputs JSON, no protocol dependency
- Rendering: React/React Native/Remotion components matched to a typed registry
- Trust model: Developer constrains AI output via Zod schemas — AI can only use pre-approved components
- License: Apache 2.0
## Decision: json-render
For our use case (info cards from search results), json-render wins because:
- Design system consistency — components are our own React, always match the app
- No iframe overhead — direct React rendering, not sandboxed iframes for tiny cards
- AI picks the right layout — LLM understands search result semantics and composes appropriately
- Streaming — SpecStream gives progressive rendering as LLM responds
- Primitive composition — give AI building blocks (Card, Row, Text, Icon), it composes card types itself. New data types (sports scores, crypto) work without new code
- Cross-platform — same catalog works on web and React Native
## Interactivity Comparison
| Capability | MCP-UI | json-render |
|---|---|---|
| UI calls tools | Built-in AppBridge | Wire `onAction` to MCP client yourself |
| Streaming updates | Built-in `toolInputPartial` | SpecStream for initial render; re-renders need new specs |
| State management | Full JS in iframe | Declarative JSON path expressions + `setState` action |
| Arbitrary JS logic | Yes (full webpage) | No (JSON spec only) |
| Follow-up prompts | Built-in `prompt` action | Wire through `onAction` callback |
For info cards with "search again" / "tell me more" actions, json-render's action system plus a thin `onAction` handler is sufficient.
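A sketch of such a thin handler, assuming an action payload of `{ name, payload }` (json-render's real action shape may differ; the action names are ours):

```typescript
// Illustrative action shape, not json-render's actual payload type.
type Action = { name: string; payload?: Record<string, unknown> };

// Bridge spec-declared actions to app behavior. The catalog constrains
// which action names the AI can emit, so unknown names are simply ignored.
function makeActionHandler(deps: {
  search: (query: string) => void;    // e.g. re-run the MCP web search
  followUp: (prompt: string) => void; // e.g. send a follow-up prompt to the LLM
}) {
  return (action: Action) => {
    switch (action.name) {
      case "search_again":
        deps.search(String(action.payload?.query ?? ""));
        break;
      case "tell_me_more":
        deps.followUp(String(action.payload?.topic ?? ""));
        break;
      default:
        break; // unknown actions: no-op
    }
  };
}
```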
## Architecture: json-render + LLM Backend
### Three-Layer System
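Condensing the sections that follow, a dependency-free sketch of the three layers (all names illustrative, not json-render's actual exports):

```typescript
// Layer 1 (Catalog): the set of components the developer allows.
const catalog = new Set(["Card", "Row", "Text", "Icon"]);

// Layer 2 (Spec): the JSON tree the LLM emits, constrained to the catalog.
type Spec = { type: string; props?: Record<string, unknown>; children?: Spec[] };

// Layer 3 (Renderer): walks the spec; anything outside the catalog is
// rejected, so the model can never introduce arbitrary code or components.
function isRenderable(spec: Spec): boolean {
  if (!catalog.has(spec.type)) return false;
  return (spec.children ?? []).every(isRenderable);
}
```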
### Catalog: Primitive Components Approach
Rather than high-level `WeatherCard`, `StockCard`, etc., define primitives and let the AI compose them.
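A dependency-free sketch of such a catalog. In practice each component gets a Zod schema and is registered through json-render's own API; the names and plain prop descriptors here are illustrative only:

```typescript
// Stand-in for a Zod-typed catalog: each primitive declares its props.
// "string?" / "number?" mark optional props in this toy notation.
type PropSpec = Record<string, "string" | "number" | "string?" | "number?">;

const catalog: Record<string, PropSpec> = {
  Card: { title: "string?" },
  Row:  { gap: "number?" },
  Text: { value: "string", variant: "string?" },
  Icon: { name: "string" },
};

// Minimal prop check standing in for schema.parse().
function checkProps(type: string, props: Record<string, unknown>): boolean {
  const spec = catalog[type];
  if (!spec) return false;
  return Object.entries(spec).every(([key, kind]) => {
    const optional = kind.endsWith("?");
    const base = optional ? kind.slice(0, -1) : kind;
    const value = props[key];
    if (value === undefined) return optional;
    return typeof value === base;
  });
}
```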
The AI then composes these primitives into a spec at runtime, e.g. a weather card.
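An example of what the LLM might emit for a weather result, assuming a `{ type, props, children }` node shape (json-render's exact spec format may differ):

```typescript
type Spec = { type: string; props?: Record<string, unknown>; children?: Spec[] };

// Hypothetical LLM output, composed only from catalog primitives.
const weatherSpec: Spec = {
  type: "Card",
  props: { title: "San Francisco" },
  children: [
    {
      type: "Row",
      props: { gap: 8 },
      children: [
        { type: "Icon", props: { name: "cloud-sun" } },
        { type: "Text", props: { value: "64°F", variant: "title" } },
      ],
    },
    { type: "Text", props: { value: "Partly cloudy, high 68°F", variant: "caption" } },
  ],
};
```

New data types (sports scores, crypto) are handled the same way: the model picks a different arrangement of the same primitives, with no new code on our side.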
### LLM Integration Pattern
From the json-render dashboard example (uses Vercel AI SDK + Anthropic):
#### API Route (Next.js example)
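A sketch of the route, assuming the Vercel AI SDK (`streamText` from `ai`, `@ai-sdk/anthropic`) as in the dashboard example. The `catalog` / `buildUserPrompt` imports are placeholders for our own modules, and the model id is illustrative:

```typescript
// app/api/generate/route.ts (sketch, not the dashboard example verbatim)
import { streamText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { catalog, buildUserPrompt } from "@/lib/catalog"; // hypothetical modules

export async function POST(req: Request) {
  const { query, searchResults } = await req.json();

  const result = streamText({
    model: anthropic("claude-sonnet-4-5"),       // placeholder model id
    system: catalog.prompt(),                    // generated from Zod definitions
    prompt: buildUserPrompt(query, searchResults),
  });

  // Stream the JSONL patches straight through to the client.
  return result.toTextStreamResponse();
}
```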
Key points:

- `catalog.prompt()` auto-generates the system prompt from your Zod component definitions
- The AI outputs JSONL (RFC 6902 JSON Patch format) for progressive streaming
- `buildUserPrompt()` merges user input + context data
#### Client-Side Streaming
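json-render's SpecStream handles this for you; to make the wire format concrete, here is a minimal JSONL framing helper of the kind it implies (illustrative, not the library's API). Each complete line is one JSON Patch operation to apply to the growing spec:

```typescript
// Feed network chunks in, get complete parsed JSON lines back.
// Partial trailing lines are buffered until the next chunk arrives.
function makeJsonlParser() {
  let buffer = "";
  return (chunk: string): unknown[] => {
    buffer += chunk;
    const lines = buffer.split("\n");
    buffer = lines.pop()!; // keep any incomplete trailing line
    return lines.filter((line) => line.trim()).map((line) => JSON.parse(line));
  };
}
```

Each parsed patch would then be applied to the current spec and the result handed to the renderer, giving progressive rendering as the model responds.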
#### Renderer Component
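A sketch of a registry-based renderer mapping spec nodes onto our own React primitives. The import path is ours, and json-render's real renderer API may differ:

```tsx
import * as React from "react";
import { Card, Row, Text, Icon } from "@/components/primitives"; // our design system

type Spec = { type: string; props?: Record<string, unknown>; children?: Spec[] };

// Only these components exist as far as the spec is concerned.
const registry: Record<string, React.ComponentType<any>> = { Card, Row, Text, Icon };

function RenderNode({ node, onAction }: { node: Spec; onAction: (a: unknown) => void }) {
  const Component = registry[node.type];
  if (!Component) return null; // unknown types are dropped, never executed
  return (
    <Component {...node.props} onAction={onAction}>
      {node.children?.map((child, i) => (
        <RenderNode key={i} node={child} onAction={onAction} />
      ))}
    </Component>
  );
}
```

Because rendering is plain React, the cards inherit the app's theme and styles directly, with no iframe boundary.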
### Integration with Our LLM Load Balancer
The LLM app (`apps/llm`, port 11114) already exposes OpenAI-compatible endpoints. Integration options:
- Direct: Point `streamText()` at `http://localhost:11114/v1/chat/completions` with the catalog system prompt
- Via MCP: The web search MCP tool returns raw data → a second LLM call with `catalog.prompt()` transforms it into a UI spec
- Hybrid: MCP search returns structured data; a lightweight LLM pass formats it as a json-render spec
The catalog system prompt (`catalog.prompt()`) instructs the LLM to output JSONL patches. The model just needs to be capable enough to follow the component schema — this works with Claude, GPT-4, or local models via LM Studio.
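For the direct option, a sketch of pointing the Vercel AI SDK at the local load balancer via `createOpenAI` (the endpoint is from these notes; the model id and `catalog` import are placeholders):

```typescript
import { streamText } from "ai";
import { createOpenAI } from "@ai-sdk/openai";
import { catalog } from "@/lib/catalog"; // hypothetical catalog module

// The load balancer speaks the OpenAI protocol, so it can stand in
// for the hosted provider with just a baseURL change.
const llm = createOpenAI({
  baseURL: "http://localhost:11114/v1",
  apiKey: "not-needed-locally", // placeholder; local endpoint may ignore it
});

const result = streamText({
  model: llm("my-routed-model"),   // placeholder model id
  system: catalog.prompt(),        // same JSONL-patch instructions as before
  prompt: "Render a weather card for: San Francisco",
});
```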
## Packages
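The scoped package names below follow the repo's convention but are unverified — check the json-render README before installing:

```shell
# Assumed package names; verify against vercel-labs/json-render
npm install @json-render/core @json-render/react zod
```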
## References
- json-render: https://github.com/vercel-labs/json-render
- MCP-UI: https://github.com/MCP-UI-Org/mcp-ui
- MCP Apps spec: `@modelcontextprotocol/ext-apps`