# Stream Protocols
AI SDK UI functions such as `useChat` and `useCompletion` support both text streams and data streams. The stream protocol defines how the data is streamed to the frontend on top of the HTTP protocol. This page describes both protocols and how to use them in the backend and frontend.

You can use this information to develop custom backends and frontends for your use case, e.g., to provide compatible API endpoints that are implemented in a different language such as Python (for instance, with FastAPI as the backend).
## Text Stream Protocol
A text stream contains chunks of plain text that are streamed to the frontend. The chunks are concatenated to form the full text response.

Text streams are supported by `useChat`, `useCompletion`, and `useObject`.
When you use `useChat`, you can enable text streaming by using the `TextStreamChatTransport` (as shown in the example below). When you use `useCompletion`, set the `streamProtocol` option to `text`.
You can generate text streams with `streamText` in the backend. When you call `toTextStreamResponse()` on the result object, a streaming HTTP response is returned.
Text streams only support basic text data. If you need to stream other types of data such as tool calls, use data streams.
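If you are not using the AI SDK UI hooks, a text stream can be consumed with standard Web APIs. Here is a minimal sketch (the endpoint path and request body are assumed to match the example below):

```ts
// Read a text stream with fetch and concatenate the chunks into the full response.
const response = await fetch('/api/chat', {
  method: 'POST',
  headers: { 'content-type': 'application/json' },
  body: JSON.stringify({
    messages: [
      { id: '1', role: 'user', parts: [{ type: 'text', text: 'Hello!' }] },
    ],
  }),
});

const reader = response.body!.getReader();
const decoder = new TextDecoder();
let text = '';

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  text += decoder.decode(value, { stream: true });
  // `text` now holds everything streamed so far.
}
```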
### Text Stream Example
Here is a Next.js example that uses the text stream protocol:
```tsx
'use client';

import { useChat } from '@ai-sdk/react';
import { TextStreamChatTransport } from 'ai';
import { useState } from 'react';

export default function Chat() {
  const [input, setInput] = useState('');
  const { messages, sendMessage } = useChat({
    transport: new TextStreamChatTransport({ api: '/api/chat' }),
  });

  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {messages.map(message => (
        <div key={message.id} className="whitespace-pre-wrap">
          {message.role === 'user' ? 'User: ' : 'AI: '}
          {message.parts.map((part, i) => {
            switch (part.type) {
              case 'text':
                return <div key={`${message.id}-${i}`}>{part.text}</div>;
            }
          })}
        </div>
      ))}

      <form
        onSubmit={e => {
          e.preventDefault();
          sendMessage({ text: input });
          setInput('');
        }}
      >
        <input
          className="fixed dark:bg-zinc-900 bottom-0 w-full max-w-md p-2 mb-8 border border-zinc-300 dark:border-zinc-800 rounded shadow-xl"
          value={input}
          placeholder="Say something..."
          onChange={e => setInput(e.currentTarget.value)}
        />
      </form>
    </div>
  );
}
```
And here is the corresponding API route (e.g., `app/api/chat/route.ts`):

```ts
import { streamText, UIMessage, convertToModelMessages } from 'ai';
import { openai } from '@ai-sdk/openai';

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages: convertToModelMessages(messages),
  });

  return result.toTextStreamResponse();
}
```
## Data Stream Protocol
A data stream follows a special protocol that the AI SDK provides to send information to the frontend. The data stream protocol uses the Server-Sent Events (SSE) format, which brings improved standardization, keep-alive through ping, reconnect capabilities, and better cache handling.

When you provide data streams from a custom backend, you need to set the `x-vercel-ai-ui-message-stream` header to `v1`.
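For illustration, here is a minimal sketch of a custom backend endpoint that speaks this protocol (the `send` helper is illustrative, not part of the AI SDK; the stream parts themselves are described below):

```ts
// Sketch of a custom data-stream endpoint (e.g., a Next.js route handler).
export async function POST() {
  const encoder = new TextEncoder();

  const stream = new ReadableStream({
    start(controller) {
      // Serialize a stream part as a Server-Sent Event.
      const send = (part: object) =>
        controller.enqueue(encoder.encode(`data: ${JSON.stringify(part)}\n\n`));

      send({ type: 'start', messageId: 'msg-1' });
      // ...emit stream parts here (see the part types below)...
      send({ type: 'finish' });
      controller.enqueue(encoder.encode('data: [DONE]\n\n'));
      controller.close();
    },
  });

  return new Response(stream, {
    headers: {
      'content-type': 'text/event-stream',
      'cache-control': 'no-cache',
      // Required when providing UI message streams from a custom backend:
      'x-vercel-ai-ui-message-stream': 'v1',
    },
  });
}
```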
The following stream parts are currently supported:
### Message Start Part

Indicates the beginning of a new message with metadata.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"start","messageId":"..."}
```
### Text Parts

Text content is streamed using a start/delta/end pattern with unique IDs for each text block.

#### Text Start Part

Indicates the beginning of a text block.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"text-start","id":"msg_68679a454370819ca74c8eb3d04379630dd1afb72306ca5d"}
```

#### Text Delta Part

Contains incremental text content for the text block.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"text-delta","id":"msg_68679a454370819ca74c8eb3d04379630dd1afb72306ca5d","delta":"Hello"}
```

#### Text End Part

Indicates the completion of a text block.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"text-end","id":"msg_68679a454370819ca74c8eb3d04379630dd1afb72306ca5d"}
```
### Reasoning Parts

Reasoning content is streamed using a start/delta/end pattern with unique IDs for each reasoning block.

#### Reasoning Start Part

Indicates the beginning of a reasoning block.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"reasoning-start","id":"reasoning_123"}
```

#### Reasoning Delta Part

Contains incremental reasoning content for the reasoning block.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"reasoning-delta","id":"reasoning_123","delta":"This is some reasoning"}
```

#### Reasoning End Part

Indicates the completion of a reasoning block.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"reasoning-end","id":"reasoning_123"}
```
### Source Parts

Source parts provide references to external content sources.

#### Source URL Part

References to external URLs.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"source-url","sourceId":"https://example.com","url":"https://example.com"}
```

#### Source Document Part

References to documents or files.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"source-document","sourceId":"https://example.com","mediaType":"file","title":"Title"}
```
### File Part

File parts contain references to files along with their media type.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"file","url":"https://example.com/file.png","mediaType":"image/png"}
```
### Data Parts

Custom data parts allow streaming of arbitrary structured data with type-specific handling.

Format: Server-Sent Event with JSON object where the type includes a custom suffix

Example:

```
data: {"type":"data-weather","data":{"location":"SF","temperature":100}}
```

The `data-*` type pattern allows you to define custom data types that your frontend can handle specifically.
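For example, a frontend could give `data-weather` parts their own rendering. This is a sketch; it assumes the custom part is surfaced on `message.parts` with its payload under `data` (typing of the payload is omitted for brevity):

```tsx
// Inside the message-rendering loop of a useChat page:
{message.parts.map((part, i) => {
  switch (part.type) {
    case 'text':
      return <div key={i}>{part.text}</div>;
    case 'data-weather':
      // Type-specific handling for the custom data part shown above.
      return (
        <div key={i}>
          Weather in {part.data.location}: {part.data.temperature}°
        </div>
      );
  }
})}
```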
### Error Part

The error parts are appended to the message as they are received.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"error","errorText":"error message"}
```
### Tool Input Start Part

Indicates the beginning of tool input streaming.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"tool-input-start","toolCallId":"call_fJdQDqnXeGxTmr4E3YPSR7Ar","toolName":"getWeatherInformation"}
```
### Tool Input Delta Part

Contains incremental chunks of tool input as it is being generated.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"tool-input-delta","toolCallId":"call_fJdQDqnXeGxTmr4E3YPSR7Ar","inputTextDelta":"San Francisco"}
```
### Tool Input Available Part

Indicates that tool input is complete and ready for execution.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"tool-input-available","toolCallId":"call_fJdQDqnXeGxTmr4E3YPSR7Ar","toolName":"getWeatherInformation","input":{"city":"San Francisco"}}
```
### Tool Output Available Part

Contains the result of tool execution.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"tool-output-available","toolCallId":"call_fJdQDqnXeGxTmr4E3YPSR7Ar","output":{"city":"San Francisco","weather":"sunny"}}
```
### Start Step Part

A part indicating the start of a step.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"start-step"}
```
### Finish Step Part

A part indicating that a step (i.e., one LLM API call in the backend) has been completed. This part is necessary to correctly process multiple stitched assistant calls, e.g., when calling tools in the backend and using steps in `useChat` at the same time; see the sketch below.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"finish-step"}
```
### Finish Message Part

A part indicating the completion of a message.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"finish"}
```
### Stream Termination

The stream ends with a special `[DONE]` marker.

Format: Server-Sent Event with literal `[DONE]`

Example:

```
data: [DONE]
```
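Putting the pieces together, a minimal single-step assistant message looks roughly like this on the wire (a sketch; IDs are arbitrary):

```
data: {"type":"start","messageId":"msg-1"}

data: {"type":"start-step"}

data: {"type":"text-start","id":"text-1"}

data: {"type":"text-delta","id":"text-1","delta":"Hello!"}

data: {"type":"text-end","id":"text-1"}

data: {"type":"finish-step"}

data: {"type":"finish"}

data: [DONE]
```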
The data stream protocol is supported by `useChat` and `useCompletion` on the frontend, and it is used by default. `useCompletion` only supports the `text` and `data` stream parts.
On the backend, you can use `toUIMessageStreamResponse()` from the `streamText` result object to return a streaming HTTP response.
### UI Message Stream Example
Here is a Next.js example that uses the UI message stream protocol:
```tsx
'use client';

import { useChat } from '@ai-sdk/react';
import { useState } from 'react';

export default function Chat() {
  const [input, setInput] = useState('');
  const { messages, sendMessage } = useChat();

  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {messages.map(message => (
        <div key={message.id} className="whitespace-pre-wrap">
          {message.role === 'user' ? 'User: ' : 'AI: '}
          {message.parts.map((part, i) => {
            switch (part.type) {
              case 'text':
                return <div key={`${message.id}-${i}`}>{part.text}</div>;
            }
          })}
        </div>
      ))}

      <form
        onSubmit={e => {
          e.preventDefault();
          sendMessage({ text: input });
          setInput('');
        }}
      >
        <input
          className="fixed dark:bg-zinc-900 bottom-0 w-full max-w-md p-2 mb-8 border border-zinc-300 dark:border-zinc-800 rounded shadow-xl"
          value={input}
          placeholder="Say something..."
          onChange={e => setInput(e.currentTarget.value)}
        />
      </form>
    </div>
  );
}
```
And here is the corresponding API route (e.g., `app/api/chat/route.ts`):

```ts
import { openai } from '@ai-sdk/openai';
import { streamText, UIMessage, convertToModelMessages } from 'ai';

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse();
}
```