React Defined the Web. The AI SDK Will Define AI

John Lindquist
Instructor

The Mental Model Shift

The tools we use dictate how we think. Before React, we thought about manipulating individual elements. React taught us to think in components and state. Today, we think about calling APIs and manually handling responses. The AI SDK is teaching us to think in streams and intelligent interfaces.


AI development today feels like JavaScript in 2012.

In 2012, we were using jQuery to imperatively manipulate the DOM. We focused on the how (updating elements) rather than the what (the desired UI state).

Today, we are imperatively manipulating the outputs of LLMs. We are focused on the mechanics of integration, not the intelligence itself.

React introduced a declarative model and the right abstractions (Virtual DOM, components). It defined the era. The Vercel AI SDK is doing the same for the intelligence layer.

The AI SDK Is Already Establishing Dominance

React won because it shifted developers from imperative DOM manipulation to declarative state management. Today's AI tools largely keep developers stuck in an imperative mindset, focusing on the how of integration rather than the what of the intelligent feature. The AI SDK is changing this paradigm.

1. Imperative Plumbing (Provider SDKs)

Using a provider's SDK directly (such as the official openai package).

The Mindset: This approach is purely imperative. You manually initiate the fetch, explicitly parse the stream chunks, handle connection errors, and write the logic to update your application state step-by-step. The focus is entirely on the mechanics of the connection.
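As a rough sketch of what that looks like with the official openai package (the model name, prompt, and the renderPartialResponse helper are illustrative placeholders), you open the stream yourself, read each chunk, accumulate the text, and push every partial update into your own state:

import OpenAI from 'openai';

const client = new OpenAI();

// You own the whole lifecycle: open the stream, read each chunk,
// accumulate the text, and update your app state yourself.
const stream = await client.chat.completions.create({
  model: 'gpt-4o', // illustrative model name
  messages: [{ role: 'user', content: 'Summarize the latest AI news.' }],
  stream: true,
});

let text = '';
for await (const chunk of stream) {
  const delta = chunk.choices[0]?.delta?.content ?? '';
  text += delta;
  renderPartialResponse(text); // stand-in for your own state/render plumbing
}

Every concern here (streaming, accumulation, error handling, UI updates) is yours to wire up by hand.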

2. Complex Configuration (LangChain JS)

Using comprehensive backend frameworks.

The Mindset: While aiming for higher-level outcomes, the implementation often involves complex chaining (like LCEL) and deep configuration. The steep learning curve means the developer's focus remains on configuring the framework rather than the application's core logic.
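As a rough sketch of that style (package names and options reflect recent LangChain JS releases; the prompt is illustrative), an LCEL pipeline composes a prompt template, a model, and a parser into a runnable before you ever touch your feature logic:

import { ChatOpenAI } from '@langchain/openai';
import { ChatPromptTemplate } from '@langchain/core/prompts';
import { StringOutputParser } from '@langchain/core/output_parsers';

// Compose the pipeline: prompt -> model -> parser.
const prompt = ChatPromptTemplate.fromMessages([
  ['system', 'You are a concise assistant.'],
  ['human', '{question}'],
]);
const model = new ChatOpenAI({ model: 'gpt-4o' });
const chain = prompt.pipe(model).pipe(new StringOutputParser());

// Invoking the chain means thinking in the framework's vocabulary
// (runnables, template variables), not your application's.
const answer = await chain.invoke({ question: 'What changed in AI this week?' });
console.log(answer);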

The AI SDK's advantage: It introduces a declarative model for the intelligence layer. Primitives like streamText (for backends/CLIs) and useChat (for frontends) abstract the entire imperative lifecycle of a streaming connection. Crucially, this declarative approach is universal. Whether you are rendering a UI, automating a CLI task, or coordinating backend agents, the mental model remains the same: define your inputs and tools, and let the SDK manage the execution flow.

The Declarative Shift

On the frontend, the AI SDK provides UI-native primitives.

useChat

The useChat hook abstracts the entire imperative lifecycle of a streaming connection.

'use client';

import { useState } from 'react';
import { useChat } from '@ai-sdk/react';

export default function Chat() {
  const { messages, sendMessage, status } = useChat();
  const [input, setInput] = useState('');

  return (
    <div>
      {/* You just map over the messages */}
      {messages.map(m => (
        <div key={m.id}>
          {m.parts?.map((part, i) =>
            part.type === 'text' ? <span key={i}>{part.text}</span> : null
          )}
        </div>
      ))}
      <form
        onSubmit={e => {
          e.preventDefault();
          if (input.trim()) {
            sendMessage({ text: input });
            setInput('');
          }
        }}
      >
        <input
          value={input}
          onChange={e => setInput(e.target.value)}
          disabled={status !== 'ready'}
        />
      </form>
    </div>
  );
}

The mechanics (fetch, SSE parsing, state updates, re-renders) are handled. You focus on rendering the messages.

The Unified Interface (The Virtual DOM for AI)

The model landscape is volatile. Vercel recently reported that 65% of developers switched LLM providers in the last six months. Lock-in is a liability.

The AI SDK provides a provider-agnostic API. It acts as a stable abstraction layer over disparate models.

// app/api/chat/route.ts

import { openai } from '@ai-sdk/openai';
// import { anthropic } from '@ai-sdk/anthropic';
import { streamText, convertToModelMessages } from 'ai';

export async function POST(req: Request) {
  const { messages: uiMessages } = await req.json();

  const result = streamText({
    // The model is just a configuration detail.
    model: openai('gpt-5'),
    // model: anthropic('claude-3-5-sonnet-20250113'),
    messages: convertToModelMessages(uiMessages),
  });

  return result.toUIMessageStreamResponse({ originalMessages: uiMessages });
}

Switching providers requires changing one line of code.

The Emerging Standard

The ecosystem is consolidating around the AI SDK.

By the Numbers:

  • ai (Vercel AI SDK): Over 3.6 million weekly downloads (October 2025)
  • The AI SDK has become the dominant choice for AI integration in JavaScript/TypeScript

Ecosystem Endorsement:

Provider Support: First-party providers for OpenAI, Anthropic, Google, Mistral, xAI, Groq, Cohere, Together AI, Amazon Bedrock, Replicate, Perplexity, and dozens more.

Beyond the Browser: Agents and Orchestration

React made UIs declarative. The AI SDK does the same for the intelligence layer and extends that model into a declarative runtime for backends, CLIs, and multi-agent systems. The same primitives that power useChat (streamText, tools, and the Agent class) run seamlessly in Node, enabling you to build everything from streaming CLIs to enterprise-scale agent orchestration—without heavy frameworks.

Why This Matters

One mental model, everywhere. The same streaming and tool-calling APIs work in Next.js routes and in Node/CLI scripts. No extra plumbing for SSE, retries, or tool schemas.

Agent loops without boilerplate. The Agent class encapsulates tool loops, stopping conditions, and step control. What used to require complex state machines becomes a few lines of declarative code.

Scale from personal tools to enterprise orchestration. Build a CLI tool for yourself, then use the exact same patterns to coordinate dozens of agents in production. The abstractions scale.
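To make the "agent loops without boilerplate" point concrete, here is a minimal single-agent sketch. The model choice, tool, and prompt are illustrative; the primitives (Experimental_Agent, tool, stepCountIs) are the same ones used in the orchestration example further down.

import { readdir } from 'node:fs/promises';
import { google } from '@ai-sdk/google';
import { Experimental_Agent as Agent, stepCountIs, tool } from 'ai';
import { z } from 'zod';

// One agent, one tool, one stopping condition: the SDK runs the loop.
const assistant = new Agent({
  model: google('gemini-2.5-flash'),
  system: 'You answer questions about the local filesystem.',
  tools: {
    listFiles: tool({
      description: 'List files in a directory',
      inputSchema: z.object({ dir: z.string() }),
      execute: async ({ dir }) => ({ files: await readdir(dir) }),
    }),
  },
  stopWhen: stepCountIs(5),
});

const { text } = await assistant.generate({
  prompt: 'What TypeScript files are in the current directory?',
});
console.log(text);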

A Personal CLI in 20 Lines

Here's a simple streaming CLI that searches and summarizes news. This is a complete, working tool you can build in minutes:

import { anthropic } from '@ai-sdk/anthropic';
import { streamText, stepCountIs, tool } from 'ai';
import { z } from 'zod';

const result = streamText({
  model: anthropic('claude-3-5-sonnet-20250113'),
  prompt: 'What are the latest developments in AI this week?',
  tools: {
    search: tool({
      description: 'Search the web for current information',
      inputSchema: z.object({
        query: z.string(),
      }),
      execute: async ({ query }) => {
        // Replace with actual search API
        return { results: `Mock results for: ${query}` };
      },
    }),
  },
  // Allow the model to call the tool and then summarize its results.
  stopWhen: stepCountIs(5),
});

// Stream tokens directly to terminal
for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}

No manual fetch calls. No SSE parsing. No state management. You define a tool, pass it to streamText, and watch the LLM use it automatically. The same pattern works with any model—swap anthropic() for openai('gpt-5') or google('gemini-2.5-flash') and it just works.

Multi-Agent Orchestration in ~80 Lines

The same primitives scale to complex multi-agent systems. Here's a complete orchestrator pattern: a main agent delegates to specialized workers (researcher, reviewer) through typed tools. This is the same architecture that scales to enterprise workflows—just replace the stubbed tools with calls to your SDKs, databases, and services.

import { z } from 'zod';
import { google } from '@ai-sdk/google';
import { Experimental_Agent as Agent, stepCountIs, tool } from 'ai';

// Specialized worker agents
const researcher = new Agent({
  model: google('gemini-2.5-flash'),
  system: 'You are a concise researcher. Return bullet points and cite sources.',
  stopWhen: stepCountIs(5),
});

const reviewer = new Agent({
  model: google('gemini-2.5-flash'),
  system: 'You are a strict reviewer. Produce a short checklist and verdict.',
  stopWhen: stepCountIs(4),
});

const workers = { researcher, reviewer } as const;

// Orchestrator that delegates via tools
const orchestrator = new Agent({
  model: google('gemini-2.5-pro'),
  system: 'Break the user goal into sub-tasks. Delegate to specialists via tools.',
  tools: {
    callAgent: tool({
      description: 'Delegate a sub-task to a specialized agent.',
      inputSchema: z.object({
        to: z.enum(['researcher', 'reviewer']),
        task: z.string(),
      }),
      execute: async ({ to, task }) => {
        const target = workers[to];
        const res = await target.generate({ prompt: task });
        return { from: to, output: res.text };
      },
    }),
  },
  stopWhen: stepCountIs(20),
});

// Stream results to the terminal
const stream = orchestrator.stream({
  prompt: 'Compare two logging libraries and recommend one.',
});

for await (const token of stream.textStream) {
  process.stdout.write(token);
}

No manual SSE parsing. No ad-hoc state management. The orchestrator delegates work through a typed tool interface—the same primitive you use in your web app.

Want to swap models? Change one line: anthropic('claude-3-5-sonnet-20250113') or openai('gpt-5'). The provider abstraction works identically in the backend.

From CLI to Enterprise

This pattern scales:

Personal CLI: A single agent with a few tools to automate your workflows.

Team Automation: Multiple specialized agents coordinating through an orchestrator.

Enterprise Orchestration: Dozens of agents managing complex workflows, each wrapped behind tool interfaces—search, vector DBs, billing systems, deployment pipelines, code execution (via Vercel Sandbox for safety).

The AI SDK provides stop conditions (stopWhen) to bound agent loops and structured outputs (generateObject) where determinism matters. You define tools once, and any model can call them.
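For example, here is a minimal generateObject sketch (the schema and prompt are illustrative): the model's output is validated against a Zod schema and comes back as a typed object rather than free-form text.

import { openai } from '@ai-sdk/openai';
import { generateObject } from 'ai';
import { z } from 'zod';

// The model's response must match this schema.
const { object } = await generateObject({
  model: openai('gpt-5'),
  schema: z.object({
    library: z.string(),
    verdict: z.enum(['adopt', 'hold', 'avoid']),
    reasons: z.array(z.string()).max(3),
  }),
  prompt: 'Evaluate the pino logging library for a Node.js backend.',
});

console.log(object.verdict, object.reasons);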

The headline: The AI SDK isn't "just a React hook." It's the interaction layer for your browser, your CLI, and your agentic backends—with a unified mental model for streams, tools, and multi-step loops.

The Next Paradigm: Generative UI

The SDK moves beyond streaming text. It enables streaming interfaces.

By integrating with React Server Components (RSC), the LLM doesn't just return text—it can dynamically decide which component to render based on the user's intent. The AI becomes the engine driving the UI.

Note: AI SDK RSC is currently experimental. For production chat applications, the AI SDK UI (useChat) is recommended.

From Static Responses to Dynamic Interfaces

Traditional approach: User asks "Show me AAPL stock performance." The LLM returns text describing the stock's current state.

Generative UI approach: The LLM analyzes the intent and streams back an interactive component:

// app/actions.tsx
'use server';

import { openai } from '@ai-sdk/openai';
import { streamUI } from '@ai-sdk/rsc';
import { z } from 'zod';
import { StockChart } from '@/components/stock-chart';
import { WeatherWidget } from '@/components/weather-widget';

export async function continueConversation(input: string) {
  const result = await streamUI({
    model: openai('gpt-5'),
    prompt: input,
    text: ({ content }) => <div>{content}</div>,
    tools: {
      showStockChart: {
        description: 'Display an interactive stock chart for a given symbol',
        parameters: z.object({
          symbol: z.string().describe('Stock ticker symbol'),
          period: z.enum(['1D', '1W', '1M', '1Y']),
        }),
        generate: async function* ({ symbol, period }) {
          // Stream loading state first
          yield <div>Loading {symbol} data...</div>;

          // Fetch data
          const data = await fetchStockData(symbol, period);

          // Stream the actual component
          return (
            <StockChart symbol={symbol} data={data} period={period} interactive />
          );
        },
      },
      showWeather: {
        description: 'Display current weather for a location',
        parameters: z.object({
          location: z.string(),
        }),
        generate: async function* ({ location }) {
          yield <div>Fetching weather for {location}...</div>;
          const weather = await fetchWeather(location);
          return <WeatherWidget location={location} data={weather} />;
        },
      },
    },
  });

  // result.value is the streamed React node; the client renders it as it arrives.
  return result.value;
}

On the client, components stream in seamlessly:

// app/page.tsx
'use client';

import { useState, type ReactNode } from 'react';
import { continueConversation } from './actions';

type Message = {
  id: number;
  role: 'user' | 'assistant';
  display: ReactNode;
};

export default function Page() {
  const [messages, setMessages] = useState<Message[]>([]);
  const [input, setInput] = useState('');

  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>{m.display}</div>
      ))}
      <form
        onSubmit={async e => {
          e.preventDefault();
          if (!input.trim()) return;
          const text = input;
          setInput('');

          // Show the user's message immediately.
          setMessages(current => [
            ...current,
            { id: Date.now(), role: 'user', display: text },
          ]);

          // The server action returns a React node that streams in as it renders.
          const display = await continueConversation(text);
          setMessages(current => [
            ...current,
            { id: Date.now() + 1, role: 'assistant', display },
          ]);
        }}
      >
        <input value={input} onChange={e => setInput(e.target.value)} />
      </form>
    </div>
  );
}

What happens:

  1. User types: "Show me AAPL stock performance for the past month"
  2. The LLM recognizes the intent and calls the showStockChart tool
  3. The tool streams a loading state, then the interactive <StockChart /> component
  4. The component appears in the chat—fully interactive, with real data

The interface is the response. No manual component selection logic. No conditional rendering. The AI decides the right interface for the context.

This is the foundation of products like Vercel's V0, where users describe what they want and receive not just code, but working, interactive prototypes.


React taught us to think in components and state.

The AI SDK is teaching us to think in streams and intelligent, dynamic interfaces—from the browser to the backend.

It provides the right abstractions at the right time. That is how you define an era.
