LLM-Driven UI: MCP-UI vs json-render

Research notes for building rich info cards (weather, stocks, flights, maps, etc.) rendered from MCP web search results.

The Two Approaches

MCP-UI (MCP-UI-Org/mcp-ui)

  • Philosophy: "Tools should have UIs" — MCP servers push full HTML/JS interfaces to AI hosts
  • Who builds the UI: The tool author writes HTML/JS, rendered in a sandboxed iframe
  • Transport: Piggybacks on MCP resource system (_meta.ui.resourceUri)
  • Rendering: Sandboxed iframes (srcDoc/src) or Shopify Remote DOM
  • Trust model: Host doesn't trust server code — everything sandboxed
  • Stars: 4.4k, Apache 2.0
  • SDKs: TypeScript, Python, Ruby

json-render (vercel-labs/json-render)

  • Philosophy: "AI should generate UIs" — LLMs output JSON specs mapped to a developer-curated component catalog
  • Who builds the UI: AI generates JSON spec at runtime; developer only defines what's possible
  • Transport: Framework-agnostic — LLM outputs JSON, no protocol dependency
  • Rendering: React/React Native/Remotion components matched to a typed registry
  • Trust model: Developer constrains AI output via Zod schemas — AI can only use pre-approved components
  • License: Apache 2.0

Decision: json-render

For our use case (info cards from search results), json-render wins because:

  1. Design system consistency — components are our own React, always match the app
  2. No iframe overhead — direct React rendering, not sandboxed iframes for tiny cards
  3. AI picks the right layout — LLM understands search result semantics and composes appropriately
  4. Streaming — SpecStream gives progressive rendering as LLM responds
  5. Primitive composition — give AI building blocks (Card, Row, Text, Icon), it composes card types itself. New data types (sports scores, crypto) work without new code
  6. Cross-platform — same catalog works on web and React Native

Interactivity Comparison

| Capability | MCP-UI | json-render |
| --- | --- | --- |
| UI calls tools | Built-in AppBridge | Wire onAction to MCP client yourself |
| Streaming updates | Built-in toolInputPartial | SpecStream for initial render; re-renders need new specs |
| State management | Full JS in iframe | Declarative JSON path expressions + setState action |
| Arbitrary JS logic | Yes (full webpage) | No (JSON spec only) |
| Follow-up prompts | Built-in prompt action | Wire through onAction callback |

For info cards with "search again" / "tell me more" actions, json-render's action system + a thin onAction handler is sufficient.
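Such a handler only needs to translate UI action names into MCP tool calls. A minimal sketch follows; the action names (`search_again`, `tell_me_more`), the payload fields, and the `web_search` tool name are all assumptions for illustration, not part of json-render's API:

```typescript
// Sketch: map a json-render UI action to an MCP tool-call description.
// Action names, payload fields, and the "web_search" tool are assumed.
interface UIAction {
  name: string;
  payload?: Record<string, unknown>;
}

function actionToToolCall(
  action: UIAction,
): { tool: string; args: Record<string, string> } | null {
  switch (action.name) {
    case "search_again":
      return { tool: "web_search", args: { query: String(action.payload?.query ?? "") } };
    case "tell_me_more":
      return { tool: "web_search", args: { query: `more about ${action.payload?.topic ?? ""}` } };
    default:
      return null; // unknown actions are ignored
  }
}

// Usage: const call = actionToToolCall(a);
//        if (call) await mcpClient.callTool(call.tool, call.args);
```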

Architecture: json-render + LLM Backend

Three-Layer System

  • Catalog (developer-defined): the set of allowed components, constrained with Zod schemas
  • Spec (AI-generated): a JSON tree that can only reference catalog components
  • Renderer (app-owned): React / React Native walks the spec and renders real components

Catalog: Primitive Components Approach

Rather than high-level WeatherCard, StockCard, etc., define primitives and let the AI compose them:

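A minimal sketch of such a primitive catalog. json-render constrains props with Zod schemas; the shapes below (`ComponentDef`, plain string prop hints) are simplified, dependency-free stand-ins rather than the library's actual API:

```typescript
// Illustrative primitive catalog. The real json-render catalog uses Zod
// schemas for props; plain strings stand in as type hints here.
interface ComponentDef {
  description: string;            // surfaced to the LLM via the system prompt
  props: Record<string, string>;  // prop name -> type hint ("?" = optional)
}

const catalog: Record<string, ComponentDef> = {
  Card: { description: "Container with padding, border, optional title", props: { title: "string?" } },
  Row:  { description: "Horizontal layout for children", props: { gap: "number?" } },
  Text: { description: "Styled text", props: { value: "string", variant: "'title' | 'body'" } },
  Icon: { description: "Named icon from the app's icon set", props: { name: "string" } },
};
```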

The AI then composes these primitives; e.g., a weather card:

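The exact node shape (`type` / `props` / `children`) below is an assumption standing in for json-render's actual spec format; it shows how the four primitives compose into a weather card:

```json
{
  "type": "Card",
  "props": { "title": "Berlin" },
  "children": [
    {
      "type": "Row",
      "children": [
        { "type": "Icon", "props": { "name": "cloud-rain" } },
        { "type": "Text", "props": { "value": "14°C, light rain", "variant": "title" } }
      ]
    },
    { "type": "Text", "props": { "value": "High 16° / Low 9°", "variant": "body" } }
  ]
}
```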

LLM Integration Pattern

From the json-render dashboard example (uses Vercel AI SDK + Anthropic):

API Route (Next.js example)

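A dependency-free sketch of the route's core pieces. In the dashboard example these sit inside a Next.js POST handler that calls the Vercel AI SDK's streamText with an Anthropic model and catalog.prompt() as the system prompt; `buildUserPrompt` is named in the notes, while `parsePatchLines` is an assumed helper name:

```typescript
// Sketch of the route's prompt assembly and JSONL handling. The real
// handler streams the model output; here only the pure logic is shown.
interface PatchOp {
  op: "add" | "replace" | "remove";
  path: string;
  value?: unknown;
}

// Merge user input with context data (e.g. raw web-search results).
function buildUserPrompt(userInput: string, context: unknown): string {
  return `${userInput}\n\nContext data:\n${JSON.stringify(context, null, 2)}`;
}

// The model emits JSONL: one RFC 6902 JSON Patch operation per line.
function parsePatchLines(raw: string): PatchOp[] {
  return raw
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as PatchOp);
}
```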

Key points:

  • catalog.prompt() auto-generates the system prompt from your Zod component definitions
  • AI outputs JSONL (one RFC 6902 JSON Patch operation per line) for progressive streaming
  • buildUserPrompt() merges user input + context data

Client-Side Streaming

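SpecStream applies those patches as they arrive so the card renders progressively. The sketch below reimplements the core idea (RFC 6902 `add`/`replace` on an object tree) without json-render, so all names are illustrative:

```typescript
type Json = null | boolean | number | string | Json[] | { [k: string]: Json };

// Apply one RFC 6902 "add"/"replace" op to an object tree (minimal:
// no array "-" append, no "remove", no validation).
function applyOp(doc: Json, op: { op: string; path: string; value?: Json }): Json {
  if (op.path === "") return op.value ?? doc;
  const keys = op.path
    .split("/")
    .slice(1)
    .map((k) => k.replace(/~1/g, "/").replace(/~0/g, "~")); // RFC 6901 unescape
  const root: any = structuredClone(doc);
  let node = root;
  for (const k of keys.slice(0, -1)) node = node[k];
  node[keys[keys.length - 1]] = op.value;
  return root;
}

// Feed JSONL lines as they stream in; each line refines the spec,
// so partially-built specs can be rendered immediately.
function streamSpec(lines: string[]): Json {
  return lines.reduce<Json>((doc, line) => applyOp(doc, JSON.parse(line)), {});
}
```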

Renderer Component

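The real renderer maps spec nodes to actual React components through the typed registry; this React-free sketch shows the same dispatch logic, rendering to strings so it stays self-contained (all names illustrative):

```typescript
// Toy registry-dispatch renderer. The real one returns React elements;
// strings are used here so the logic is visible without a React dependency.
interface SpecNode {
  type: string;
  props?: Record<string, unknown>;
  children?: SpecNode[];
}
type RenderFn = (props: Record<string, unknown>, children: string[]) => string;

const registry: Record<string, RenderFn> = {
  Card: (p, kids) => `<Card title=${JSON.stringify(p.title ?? "")}>${kids.join("")}</Card>`,
  Row:  (_p, kids) => `<Row>${kids.join("")}</Row>`,
  Text: (p) => String(p.value ?? ""),
  Icon: (p) => `[icon:${p.name}]`,
};

function render(node: SpecNode): string {
  const fn = registry[node.type];
  if (!fn) return ""; // types outside the registry are dropped, never executed
  return fn(node.props ?? {}, (node.children ?? []).map(render));
}
```

Dropping unknown node types is the trust model in miniature: the AI can only ever reach pre-approved components.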

Integration with Our LLM Load Balancer

The LLM app (apps/llm, port 11114) already exposes OpenAI-compatible endpoints. Integration options:

  1. Direct: Point streamText() at http://localhost:11114/v1/chat/completions with the catalog system prompt
  2. Via MCP: Web search MCP tool returns raw data → a second LLM call with catalog.prompt() transforms it into a UI spec
  3. Hybrid: MCP search returns structured data, a lightweight pass through the LLM formats it as a json-render spec

The catalog system prompt (catalog.prompt()) instructs the LLM to output JSONL patches. The LLM just needs to be capable enough to follow the component schema — works with Claude, GPT-4, or local models via LM Studio.
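Option 1 reduces to plain request assembly against the balancer's OpenAI-compatible endpoint. A sketch, where the `model` value and the idea that the balancer picks the backend are assumptions:

```typescript
// Build an OpenAI-compatible chat-completions request for the local
// balancer (apps/llm, port 11114). The catalog prompt becomes the
// system message; "default" as the model name is an assumption.
function buildChatRequest(catalogPrompt: string, userPrompt: string) {
  return {
    url: "http://localhost:11114/v1/chat/completions",
    body: {
      model: "default",
      stream: true, // stream JSONL patches back for progressive rendering
      messages: [
        { role: "system", content: catalogPrompt },
        { role: "user", content: userPrompt },
      ],
    },
  };
}

// Usage: const req = buildChatRequest(catalogPrompt, userPrompt);
//        fetch(req.url, { method: "POST",
//                         headers: { "Content-Type": "application/json" },
//                         body: JSON.stringify(req.body) });
```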

Packages

