# Migrate AI SDK 4.0 to 5.0 Beta
AI SDK 5.0 is currently in beta and introduces significant improvements in type safety, consistency, and developer experience. This guide will help you migrate from AI SDK 4.0 to 5.0. We are continuously improving our automated migration tools to make the upgrade process smoother. Note that you may want to wait for the stable (GA) release before migrating production projects; this beta is intended for early adopters who want to help us test and refine the SDK.
## Recommended Migration Process

- Back up your project. If you use a version control system, make sure all previous changes are committed.
- Upgrade to AI SDK 5.0.
- Follow the breaking changes guide below.
- Verify your project is working as expected.
- Commit your changes.
## AI SDK 5.0 Package Versions

You need to update the following packages to the following versions in your `package.json` file(s):

- `ai` package: `5.0.0-beta`
- `@ai-sdk/provider` package: `2.0.0-beta`
- `@ai-sdk/provider-utils` package: `3.0.0-beta`
- `@ai-sdk/*` packages: `2.0.0-beta` (other `@ai-sdk` packages)
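Assuming an npm-based project, the resulting `package.json` dependencies might look like the sketch below. The provider and framework package names shown here (`@ai-sdk/openai`, `@ai-sdk/react`) are illustrative examples; substitute whichever `@ai-sdk/*` packages your project actually uses.

```json
{
  "dependencies": {
    "ai": "5.0.0-beta",
    "@ai-sdk/provider": "2.0.0-beta",
    "@ai-sdk/provider-utils": "3.0.0-beta",
    "@ai-sdk/openai": "2.0.0-beta",
    "@ai-sdk/react": "2.0.0-beta"
  }
}
```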
## AI SDK Core Changes

### generateText and streamText Changes

#### Maximum Output Tokens

The `maxTokens` parameter has been renamed to `maxOutputTokens` for clarity.

```tsx
// AI SDK 4.0
const result = await generateText({
  model: openai('gpt-4.1'),
  maxTokens: 1024,
  prompt: 'Hello, world!',
});
```

```tsx
// AI SDK 5.0
const result = await generateText({
  model: openai('gpt-4.1'),
  maxOutputTokens: 1024,
  prompt: 'Hello, world!',
});
```
### Message and Type System Changes

#### Core Type Renames

`CoreMessage` → `ModelMessage`:

```tsx
// AI SDK 4.0
import { CoreMessage } from 'ai';
```

```tsx
// AI SDK 5.0
import { ModelMessage } from 'ai';
```

`Message` → `UIMessage`:

```tsx
// AI SDK 4.0
import { Message, CreateMessage } from 'ai';
```

```tsx
// AI SDK 5.0
import { UIMessage, CreateUIMessage } from 'ai';
```
### UIMessage Changes

#### Content → Parts Array

For `UIMessage`s (previously called `Message`), the `.content` property has been replaced with a `parts` array structure.

```tsx
// AI SDK 4.0
import { type Message } from 'ai'; // v4 Message type

// Messages (useChat) - had content property
const message: Message = {
  id: '1',
  role: 'user',
  content: 'Bonjour!',
};
```

```tsx
// AI SDK 5.0
import { type UIMessage, type ModelMessage } from 'ai';

// UIMessages (useChat) - now use parts array
const uiMessage: UIMessage = {
  id: '1',
  role: 'user',
  parts: [{ type: 'text', text: 'Bonjour!' }],
};
```
#### Data Role Removed

The `data` role has been removed from UI messages.

```tsx
// AI SDK 4.0
const message = {
  role: 'data',
  content: 'Some content',
  data: { customField: 'value' },
};
```

```tsx
// V5: Use UI message streams with custom data parts
const stream = createUIMessageStream({
  execute({ writer }) {
    // Write custom data instead of message annotations
    writer.write({
      type: 'data-custom',
      id: 'custom-1',
      data: { customField: 'value' },
    });
  },
});
```
#### UIMessage Reasoning Structure

The `reasoning` property on UI messages has been moved to `parts`.

```tsx
// AI SDK 4.0
const message: Message = {
  role: 'assistant',
  content: 'Hello',
  reasoning: 'I will greet the user',
};
```

```tsx
// AI SDK 5.0
const message: UIMessage = {
  role: 'assistant',
  parts: [
    {
      type: 'reasoning',
      text: 'I will greet the user',
    },
    {
      type: 'text',
      text: 'Hello',
    },
  ],
};
```
#### Reasoning Part Property Rename

The `reasoning` property on reasoning UI parts has been renamed to `text`.

```tsx
// AI SDK 4.0
{message.parts.map((part, index) => {
  if (part.type === 'reasoning') {
    return (
      <div key={index} className="reasoning-display">
        {part.reasoning}
      </div>
    );
  }
})}
```

```tsx
// AI SDK 5.0
{message.parts.map((part, index) => {
  if (part.type === 'reasoning') {
    return (
      <div key={index} className="reasoning-display">
        {part.text}
      </div>
    );
  }
})}
```
#### File Part Changes

File parts now use `.url` instead of `.data`, and `.mediaType` instead of `.mimeType`.

```tsx
// AI SDK 4.0
{messages.map(message => (
  <div key={message.id}>
    {message.parts.map((part, index) => {
      if (part.type === 'text') {
        return <div key={index}>{part.text}</div>;
      } else if (part.type === 'file' && part.mimeType.startsWith('image/')) {
        return (
          <img key={index} src={`data:${part.mimeType};base64,${part.data}`} />
        );
      }
    })}
  </div>
))}
```

```tsx
// AI SDK 5.0
{messages.map(message => (
  <div key={message.id}>
    {message.parts.map((part, index) => {
      if (part.type === 'text') {
        return <div key={index}>{part.text}</div>;
      } else if (
        part.type === 'file' &&
        part.mediaType.startsWith('image/')
      ) {
        return <img key={index} src={part.url} />;
      }
    })}
  </div>
))}
```
### Stream Data Removal

The `StreamData` class has been completely removed and replaced with UI message streams for custom data.

```tsx
// AI SDK 4.0
import { StreamData } from 'ai';

const streamData = new StreamData();
streamData.append('custom-data');
streamData.close();
```

```tsx
// AI SDK 5.0
import { createUIMessageStream, createUIMessageStreamResponse } from 'ai';

const stream = createUIMessageStream({
  execute({ writer }) {
    // Write custom data parts
    writer.write({
      type: 'data-custom',
      id: 'custom-1',
      data: 'custom-data',
    });

    // Can merge with LLM streams
    const result = streamText({
      model: openai('gpt-4.1'),
      messages,
    });

    writer.merge(result.toUIMessageStream());
  },
});

return createUIMessageStreamResponse({ stream });
```
### Provider Metadata → Provider Options

The `providerMetadata` input parameter has been renamed to `providerOptions`. Note that the returned metadata in results is still called `providerMetadata`.

```tsx
// AI SDK 4.0
const result = await generateText({
  model: openai('gpt-4'),
  prompt: 'Hello',
  providerMetadata: {
    openai: { store: false },
  },
});
```

```tsx
// AI SDK 5.0
const result = await generateText({
  model: openai('gpt-4'),
  prompt: 'Hello',
  providerOptions: {
    // Input parameter renamed
    openai: { store: false },
  },
});

// Returned metadata still uses providerMetadata:
console.log(result.providerMetadata?.openai);
```
### Tool Definition Changes (parameters → inputSchema)

Tool definitions have been updated to use `inputSchema` instead of `parameters`, and error classes have been renamed.

```tsx
// AI SDK 4.0
import { tool } from 'ai';
import { z } from 'zod';

const weatherTool = tool({
  description: 'Get the weather for a city',
  parameters: z.object({
    city: z.string(),
  }),
  execute: async ({ city }) => {
    return `Weather in ${city}`;
  },
});
```

```tsx
// AI SDK 5.0
import { tool } from 'ai';
import { z } from 'zod';

const weatherTool = tool({
  description: 'Get the weather for a city',
  inputSchema: z.object({
    city: z.string(),
  }),
  execute: async ({ city }) => {
    return `Weather in ${city}`;
  },
});
```
### Tool Property Changes (args/result → input/output)

Tool call and result properties have been renamed for better consistency with schemas.

```tsx
// Tool calls used "args" and "result"
for await (const part of result.fullStream) {
  switch (part.type) {
    case 'tool-call':
      console.log('Tool args:', part.args);
      break;
    case 'tool-result':
      console.log('Tool result:', part.result);
      break;
  }
}
```

```tsx
// Tool calls now use "input" and "output"
for await (const part of result.fullStream) {
  switch (part.type) {
    case 'tool-call':
      console.log('Tool input:', part.input);
      break;
    case 'tool-result':
      console.log('Tool output:', part.output);
      break;
  }
}
```
### Tool Part Type Changes (UIMessage)

In v5, UI tool parts use typed naming: `tool-${toolName}` instead of generic types.

```tsx
// Generic tool-invocation type
{message.parts.map(part => {
  if (part.type === 'tool-invocation') {
    return <div>{part.toolInvocation.toolName}</div>;
  }
})}
```

```tsx
// Type-safe tool parts with specific names
{message.parts.map(part => {
  switch (part.type) {
    case 'tool-getWeatherInformation':
      return <div>Getting weather...</div>;
    case 'tool-askForConfirmation':
      return <div>Asking for confirmation...</div>;
  }
})}
```
### Media Type Standardization

`mimeType` has been renamed to `mediaType` for consistency. Both image and file types are supported in model messages.

```tsx
// AI SDK 4.0
const result = await generateText({
  model: someModel,
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'What do you see?' },
        {
          type: 'image',
          image: new Uint8Array([0, 1, 2, 3]),
          mimeType: 'image/png',
        },
        {
          type: 'file',
          data: contents,
          mimeType: 'application/pdf',
        },
      ],
    },
  ],
});
```

```tsx
// AI SDK 5.0
const result = await generateText({
  model: someModel,
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'What do you see?' },
        {
          type: 'image',
          image: new Uint8Array([0, 1, 2, 3]),
          mediaType: 'image/png',
        },
        {
          type: 'file',
          data: contents,
          mediaType: 'application/pdf',
        },
      ],
    },
  ],
});
```
### Reasoning Support

#### Reasoning Text Property Rename

The `.reasoning` property has been renamed to `.reasoningText` for multi-step generations.

```tsx
// AI SDK 4.0
for (const step of steps) {
  console.log(step.reasoning);
}
```

```tsx
// AI SDK 5.0
for (const step of steps) {
  console.log(step.reasoningText);
}
```
#### Generate Text Reasoning Property Changes

In `generateText()` and `streamText()` results, reasoning properties have been renamed.

```tsx
// AI SDK 4.0
const result = await generateText({
  model: anthropic('claude-4-sonnet-20250514'),
  prompt: 'Explain your reasoning',
});

console.log(result.reasoning); // String reasoning text
console.log(result.reasoningDetails); // Array of reasoning details
```

```tsx
// AI SDK 5.0
const result = await generateText({
  model: anthropic('claude-4-sonnet-20250514'),
  prompt: 'Explain your reasoning',
});

console.log(result.reasoningText); // String reasoning text
console.log(result.reasoning); // Array of reasoning details
```
### Continuation Steps Removal

The `experimental_continueSteps` option has been removed from `generateText()`.

```tsx
// AI SDK 4.0
const result = await generateText({
  experimental_continueSteps: true,
  // ...
});
```

```tsx
// AI SDK 5.0
const result = await generateText({
  // experimental_continueSteps has been removed
  // Use newer models with higher output token limits instead
  // ...
});
```
### Image Generation Changes

Image model settings have been moved to `providerOptions`.

```tsx
// AI SDK 4.0
await generateImage({
  model: luma.image('photon-flash-1', {
    maxImagesPerCall: 5,
    pollIntervalMillis: 500,
  }),
  prompt,
  n: 10,
});
```

```tsx
// AI SDK 5.0
await generateImage({
  model: luma.image('photon-flash-1'),
  prompt,
  n: 10,
  maxImagesPerCall: 5,
  providerOptions: {
    luma: { pollIntervalMillis: 500 },
  },
});
```
### Step Result Changes

#### Step Type Removal

The `stepType` property has been removed from step results.

```tsx
// AI SDK 4.0
steps.forEach(step => {
  switch (step.stepType) {
    case 'initial':
      console.log('Initial step');
      break;
    case 'tool-result':
      console.log('Tool result step');
      break;
    case 'done':
      console.log('Final step');
      break;
  }
});
```

```tsx
// AI SDK 5.0
steps.forEach((step, index) => {
  if (index === 0) {
    console.log('Initial step');
  } else if (step.toolResults.length > 0) {
    console.log('Tool result step');
  } else {
    console.log('Final step');
  }
});
```
### Step Control: maxSteps → stopWhen

For core functions like `generateText` and `streamText`, the `maxSteps` parameter has been replaced with `stopWhen`, which provides more flexible control over multi-step execution. The `stopWhen` parameter defines conditions for stopping the generation when the last step contains tool results. When multiple conditions are provided as an array, the generation stops if any condition is met.

Recommended pattern:

- Use `stopWhen` in server-side functions (`generateText`/`streamText`) for your main stopping logic
- Use `maxSteps > 1` in `useChat` for client-side tool execution limits
```tsx
// V4: Simple numeric limit
const result = await generateText({
  model: openai('gpt-4'),
  messages,
  maxSteps: 5, // Stop after a maximum of 5 steps
});

// useChat with maxSteps
const { messages } = useChat({
  maxSteps: 3, // Stop after a maximum of 3 steps
});
```

```tsx
import { stepCountIs, hasToolCall } from 'ai';

// V5: Server-side - flexible stopping conditions with stopWhen
const result = await generateText({
  model: openai('gpt-4'),
  messages,
  // Only triggers when last step has tool results
  stopWhen: stepCountIs(5), // Stop at step 5 if tools were called
});

// Server-side - stop when specific tool is called
const result = await generateText({
  model: openai('gpt-4'),
  messages,
  stopWhen: hasToolCall('finalizeTask'), // Stop when finalizeTask tool is called
});

// Client-side - useChat still uses maxSteps for tool execution limits
const { messages } = useChat({
  maxSteps: 3, // Limit client-side tool execution rounds
});
```
Common stopping patterns:

```tsx
// Stop after N steps (equivalent to old maxSteps)
// Note: Only applies when the last step has tool results
stopWhen: stepCountIs(5);

// Stop when specific tool is called
stopWhen: hasToolCall('finalizeTask');

// Multiple conditions (stops if ANY condition is met)
stopWhen: [
  stepCountIs(10), // Maximum 10 steps
  hasToolCall('submitOrder'), // Or when order is submitted
];

// Custom condition based on step content
stopWhen: ({ steps }) => {
  const lastStep = steps[steps.length - 1];
  // Custom logic - only triggers if last step has tool results
  return lastStep?.text?.includes('COMPLETE');
};
```

Important: The `stopWhen` conditions are only evaluated when the last step contains tool results.
### Usage vs Total Usage

Usage properties now distinguish between single step and total usage.

```tsx
// usage contained total token usage across all steps
console.log(result.usage);
```

```tsx
// usage contains token usage from the final step only
console.log(result.usage);
// totalUsage contains total token usage across all steps
console.log(result.totalUsage);
```
## AI SDK UI Changes

### Package Structure Changes

#### @ai-sdk/rsc Package Extraction

The `ai/rsc` export has been extracted to a separate package `@ai-sdk/rsc`.

```tsx
// AI SDK 4.0
import { createStreamableValue } from 'ai/rsc';
```

```tsx
// AI SDK 5.0
import { createStreamableValue } from '@ai-sdk/rsc';
```

```bash
npm install @ai-sdk/rsc
```
#### React UI Hooks Moved to @ai-sdk/react

The deprecated `ai/react` export has been removed in favor of `@ai-sdk/react`.

```tsx
// AI SDK 4.0
import { useChat } from 'ai/react';
```

```tsx
// AI SDK 5.0
import { useChat } from '@ai-sdk/react';
```

Don't forget to install the new package:

```bash
npm install @ai-sdk/react@beta
```
### useChat Changes

The `useChat` hook has undergone significant changes in v5, with a new transport architecture, removal of managed input state, and more.

#### Chat Transport Architecture

Configuration is now handled through transport objects instead of direct API options.

```tsx
// AI SDK 4.0
import { useChat } from '@ai-sdk/react';

const { messages } = useChat({
  api: '/api/chat',
  credentials: 'include',
  headers: { 'Custom-Header': 'value' },
});
```

```tsx
// AI SDK 5.0
import { useChat } from '@ai-sdk/react';
import { DefaultChatTransport } from 'ai';

const { messages } = useChat({
  transport: new DefaultChatTransport({
    api: '/api/chat',
    credentials: 'include',
    headers: { 'Custom-Header': 'value' },
  }),
});
```
#### Removed Managed Input State

The `useChat` hook no longer manages input state internally. You must now manage input state manually.

```tsx
// AI SDK 4.0
import { useChat } from '@ai-sdk/react';

export default function Page() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: '/api/chat',
  });

  return (
    <form onSubmit={handleSubmit}>
      <input value={input} onChange={handleInputChange} />
      <button type="submit">Send</button>
    </form>
  );
}
```

```tsx
// AI SDK 5.0
import { useChat } from '@ai-sdk/react';
import { DefaultChatTransport } from 'ai';
import { useState } from 'react';

export default function Page() {
  const [input, setInput] = useState('');
  const { messages, sendMessage } = useChat({
    transport: new DefaultChatTransport({ api: '/api/chat' }),
  });

  const handleSubmit = e => {
    e.preventDefault();
    sendMessage({ text: input });
    setInput('');
  };

  return (
    <form onSubmit={handleSubmit}>
      <input value={input} onChange={e => setInput(e.target.value)} />
      <button type="submit">Send</button>
    </form>
  );
}
```
#### Message Sending: append → sendMessage

The `append` function has been replaced with `sendMessage`, which requires a structured message format.

```tsx
// AI SDK 4.0
const { append } = useChat();

// Simple text message
append({ role: 'user', content: 'Hello' });

// With custom body
append(
  {
    role: 'user',
    content: 'Hello',
  },
  { body: { imageUrl: 'https://...' } },
);
```

```tsx
// AI SDK 5.0
const { sendMessage } = useChat();

// Simple text message (most common usage)
sendMessage({ text: 'Hello' });

// Or with explicit parts array
sendMessage({
  parts: [{ type: 'text', text: 'Hello' }],
});

// With custom body (via request options)
sendMessage(
  { role: 'user', parts: [{ type: 'text', text: 'Hello' }] },
  { body: { imageUrl: 'https://...' } },
);
```
#### Message Regeneration: reload → regenerate

The `reload` function has been renamed to `regenerate` with enhanced functionality.

```tsx
// AI SDK 4.0
const { reload } = useChat();

// Regenerate last message
reload();
```

```tsx
// AI SDK 5.0
const { regenerate } = useChat();

// Regenerate last message
regenerate();

// Regenerate specific message
regenerate({ messageId: 'message-123' });
```
#### onResponse Removal

The `onResponse` callback has been removed from `useChat` and `useCompletion`.

```tsx
// AI SDK 4.0
const { messages } = useChat({
  onResponse(response) {
    // handle response
  },
});
```

```tsx
// AI SDK 5.0
const { messages } = useChat({
  // onResponse is no longer available
});
```
#### Send Extra Message Fields Default

The `sendExtraMessageFields` option has been removed; sending the extra message fields is now the default behavior.

```tsx
// AI SDK 4.0
const { messages } = useChat({
  sendExtraMessageFields: true,
});
```

```tsx
// AI SDK 5.0
const { messages } = useChat({
  // sendExtraMessageFields is now the default
});
```
#### Keep Last Message on Error Removal

The `keepLastMessageOnError` option has been removed as it's no longer needed.

```tsx
// AI SDK 4.0
const { messages } = useChat({
  keepLastMessageOnError: true,
});
```

```tsx
// AI SDK 5.0
const { messages } = useChat({
  // keepLastMessageOnError is no longer needed
});
```
#### Chat Request Options Changes

The `data` and `allowEmptySubmit` options have been removed from `ChatRequestOptions`.

```tsx
// AI SDK 4.0
handleSubmit(e, {
  data: { imageUrl: 'https://...' },
  body: { custom: 'value' },
  allowEmptySubmit: true,
});
```

```tsx
// AI SDK 5.0
sendMessage(
  {
    /* yourMessage */
  },
  {
    body: {
      custom: 'value',
      imageUrl: 'https://...', // Move data to body
    },
  },
);
```
#### Request Options Type Rename

`RequestOptions` has been renamed to `CompletionRequestOptions`.

```tsx
// AI SDK 4.0
import type { RequestOptions } from 'ai';
```

```tsx
// AI SDK 5.0
import type { CompletionRequestOptions } from 'ai';
```
#### Loading State Changes

The deprecated `isLoading` helper has been removed in favor of `status`.

```tsx
// AI SDK 4.0
const { isLoading } = useChat();
```

```tsx
// AI SDK 5.0
const { status } = useChat();
// Use status instead of isLoading for more granular control
```
#### Resume Stream Support

The resume functionality has been moved from `experimental_resume` to `resumeStream`.

```tsx
// Resume was experimental
const { messages } = useChat({
  experimental_resume: true,
});
```

```tsx
// AI SDK 5.0
const { messages } = useChat({
  resumeStream: true, // Resume interrupted streams
});
```
### @ai-sdk/vue Changes

The Vue.js integration has been completely restructured, replacing the `useChat` composable with a `Chat` class.

#### useChat Replaced with Chat Class

```vue
<!-- AI SDK 4.0 -->
<script setup>
import { useChat } from '@ai-sdk/vue';

const { messages, input, handleSubmit } = useChat({
  api: '/api/chat',
});
</script>
```

```vue
<!-- AI SDK 5.0 -->
<script setup>
import { Chat } from '@ai-sdk/vue';
import { DefaultChatTransport } from 'ai';
import { ref } from 'vue';

const input = ref('');
const chat = new Chat({
  transport: new DefaultChatTransport({ api: '/api/chat' }),
});

const handleSubmit = () => {
  chat.sendMessage({ text: input.value });
  input.value = '';
};
</script>
```
#### Message Structure Changes

Messages now use a `parts` array instead of a `content` string.

```vue
<!-- AI SDK 4.0 -->
<template>
  <div v-for="message in messages" :key="message.id">
    <div>{{ message.role }}: {{ message.content }}</div>
  </div>
</template>
```

```vue
<!-- AI SDK 5.0 -->
<template>
  <div v-for="message in chat.messages" :key="message.id">
    <div>{{ message.role }}:</div>
    <div v-for="part in message.parts" :key="part.type">
      <span v-if="part.type === 'text'">{{ part.text }}</span>
    </div>
  </div>
</template>
```
### @ai-sdk/svelte Changes

The Svelte integration has also been updated with new constructor patterns and readonly properties.

#### Constructor API Changes

```tsx
// AI SDK 4.0
import { chat } from '@ai-sdk/svelte';

const chatInstance = chat({
  api: '/api/chat',
});
```

```tsx
// AI SDK 5.0
import { chat } from '@ai-sdk/svelte';
import { DefaultChatTransport } from 'ai';

const chatInstance = chat(() => ({
  transport: new DefaultChatTransport({ api: '/api/chat' }),
}));
```
#### Properties Made Readonly

Properties are now readonly and must be updated using setter methods.

```tsx
// Direct property mutation was allowed
chatInstance.messages = [...chatInstance.messages, newMessage];
```

```tsx
// Must use setter methods
chatInstance.setMessages([...chatInstance.messages, newMessage]);
```
#### Removed Managed Input

Like React and Vue, input management has been removed from the Svelte integration.

```tsx
// Input was managed internally
const { messages, input, handleSubmit } = chatInstance;
```

```tsx
// Must manage input state manually
let input = '';
const { messages, sendMessage } = chatInstance;

const handleSubmit = () => {
  sendMessage({ text: input });
  input = '';
};
```
### @ai-sdk/ui-utils Package Removal

The `@ai-sdk/ui-utils` package has been removed and its exports moved to the main `ai` package.

```tsx
// AI SDK 4.0
import { getTextFromDataUrl } from '@ai-sdk/ui-utils';
```

```tsx
// AI SDK 5.0
import { getTextFromDataUrl } from 'ai';
```
### useCompletion Changes

The `data` property has been removed from the `useCompletion` hook.

```tsx
// AI SDK 4.0
const {
  completion,
  handleSubmit,
  data, // No longer available
} = useCompletion();
```

```tsx
// AI SDK 5.0
const {
  completion,
  handleSubmit,
  // data property removed entirely
} = useCompletion();
```
### useAssistant Removal

The `useAssistant` hook has been removed.

```tsx
// AI SDK 4.0
import { useAssistant } from '@ai-sdk/react';
```

```tsx
// useAssistant has been removed
// Use useChat with appropriate configuration instead
```

For an implementation of the assistant functionality with AI SDK v5, see this example repository.
### Attachments → File Parts

The `experimental_attachments` property has been replaced with the parts array.

```tsx
// AI SDK 4.0
{messages.map(message => (
  <div className="flex flex-col gap-2">
    {message.content}

    <div className="flex flex-row gap-2">
      {message.experimental_attachments?.map((attachment, index) =>
        attachment.contentType?.includes('image/') ? (
          <img src={attachment.url} alt={attachment.name} />
        ) : attachment.contentType?.includes('text/') ? (
          <div className="w-32 h-24 p-2 overflow-hidden text-xs border rounded-md ellipsis text-zinc-500">
            {getTextFromDataUrl(attachment.url)}
          </div>
        ) : null,
      )}
    </div>
  </div>
))}
```

```tsx
// AI SDK 5.0
{messages.map(message => (
  <div>
    {message.parts.map((part, index) => {
      if (part.type === 'text') {
        return <div key={index}>{part.text}</div>;
      }

      if (part.type === 'file' && part.mediaType?.startsWith('image/')) {
        return (
          <div key={index}>
            <img src={part.url} />
          </div>
        );
      }
    })}
  </div>
))}
```
## Embedding Changes

### Provider Options for Embeddings

Embedding model settings now use provider options instead of model parameters.

```tsx
// AI SDK 4.0
const { embedding } = await embed({
  model: openai('text-embedding-3-small', {
    dimensions: 10,
  }),
});
```

```tsx
// AI SDK 5.0
const { embedding } = await embed({
  model: openai('text-embedding-3-small'),
  providerOptions: {
    openai: {
      dimensions: 10,
    },
  },
});
```
### Raw Response → Response

The `rawResponse` property has been renamed to `response`.

```tsx
// AI SDK 4.0
const { rawResponse } = await embed(/* */);
```

```tsx
// AI SDK 5.0
const { response } = await embed(/* */);
```
### Parallel Requests in embedMany

`embedMany` now makes parallel requests, with a configurable `maxParallelCalls` option.

```tsx
const { embeddings, usage } = await embedMany({
  maxParallelCalls: 2, // Limit parallel requests
  model: openai.embedding('text-embedding-3-small'),
  values: [
    'sunny day at the beach',
    'rainy afternoon in the city',
    'snowy night in the mountains',
  ],
});
```
## LangChain Adapter Moved to @ai-sdk/langchain

The `LangChainAdapter` has been moved to `@ai-sdk/langchain` and the API has been updated to use UI message streams.

```tsx
// AI SDK 4.0
import { LangChainAdapter } from 'ai';

const response = LangChainAdapter.toDataStreamResponse(stream);
```

```tsx
// AI SDK 5.0
import { toUIMessageStream } from '@ai-sdk/langchain';
import { createUIMessageStreamResponse } from 'ai';

const response = createUIMessageStreamResponse({
  stream: toUIMessageStream(stream),
});
```

Don't forget to install the new package:

```bash
npm install @ai-sdk/langchain@beta
```
## LlamaIndex Adapter Moved to @ai-sdk/llamaindex

The `LlamaIndexAdapter` has been extracted to a separate package `@ai-sdk/llamaindex` and follows the same UI message stream pattern.

```tsx
// AI SDK 4.0
import { LlamaIndexAdapter } from 'ai';

const response = LlamaIndexAdapter.toDataStreamResponse(stream);
```

```tsx
// AI SDK 5.0
import { toUIMessageStream } from '@ai-sdk/llamaindex';
import { createUIMessageStreamResponse } from 'ai';

const response = createUIMessageStreamResponse({
  stream: toUIMessageStream(stream),
});
```

Don't forget to install the new package:

```bash
npm install @ai-sdk/llamaindex@beta
```
## Streaming Architecture

The streaming architecture has been completely redesigned in v5 to support better content differentiation, concurrent streaming of multiple parts, and improved real-time UX.

### Stream Protocol Changes

#### Stream Protocol: Single Chunks → Start/Delta/End Pattern

The fundamental streaming pattern has changed from single chunks to a three-phase pattern with unique IDs for each content block.

```tsx
// AI SDK 4.0
for await (const chunk of result.fullStream) {
  switch (chunk.type) {
    case 'text-delta': {
      process.stdout.write(chunk.textDelta);
      break;
    }
  }
}
```

```tsx
// AI SDK 5.0
for await (const chunk of result.fullStream) {
  switch (chunk.type) {
    case 'text-start': {
      // New: Initialize a text block with unique ID
      console.log(`Starting text block: ${chunk.id}`);
      break;
    }
    case 'text-delta': {
      // Changed: Now includes ID and uses 'delta' property
      process.stdout.write(chunk.delta); // Changed from 'textDelta'
      break;
    }
    case 'text-end': {
      // New: Finalize the text block
      console.log(`Completed text block: ${chunk.id}`);
      break;
    }
  }
}
```
#### Reasoning Streaming Pattern

Reasoning content now follows the same start/delta/end pattern:

```tsx
// AI SDK 4.0
for await (const chunk of result.fullStream) {
  switch (chunk.type) {
    case 'reasoning': {
      // Single chunk with full reasoning text
      console.log('Reasoning:', chunk.text);
      break;
    }
  }
}
```

```tsx
// AI SDK 5.0
for await (const chunk of result.fullStream) {
  switch (chunk.type) {
    case 'reasoning-start': {
      console.log(`Starting reasoning block: ${chunk.id}`);
      break;
    }
    case 'reasoning-delta': {
      process.stdout.write(chunk.delta);
      break;
    }
    case 'reasoning-end': {
      console.log(`Completed reasoning block: ${chunk.id}`);
      break;
    }
  }
}
```
#### Tool Input Streaming

Tool inputs can now be streamed as they're being generated:

```tsx
for await (const chunk of result.fullStream) {
  switch (chunk.type) {
    case 'tool-input-start': {
      console.log(`Starting tool input for ${chunk.toolName}: ${chunk.id}`);
      break;
    }
    case 'tool-input-delta': {
      // Stream the JSON input as it's being generated
      process.stdout.write(chunk.delta);
      break;
    }
    case 'tool-input-end': {
      console.log(`Completed tool input: ${chunk.id}`);
      break;
    }
    case 'tool-call': {
      // Final tool call with complete input
      console.log('Tool call:', chunk.toolName, chunk.input);
      break;
    }
  }
}
```
#### onChunk Callback Changes

The `onChunk` callback now receives the new streaming chunk types with IDs and the start/delta/end pattern.

```tsx
// AI SDK 4.0
const result = streamText({
  model: openai('gpt-4.1'),
  prompt: 'Write a story',
  onChunk({ chunk }) {
    switch (chunk.type) {
      case 'text-delta': {
        // Single property with text content
        console.log('Text delta:', chunk.textDelta);
        break;
      }
    }
  },
});
```

```tsx
// AI SDK 5.0
const result = streamText({
  model: openai('gpt-4.1'),
  prompt: 'Write a story',
  onChunk({ chunk }) {
    switch (chunk.type) {
      case 'text': {
        // Text chunks now use single 'text' type
        console.log('Text chunk:', chunk.text);
        break;
      }
      case 'reasoning': {
        // Reasoning chunks use single 'reasoning' type
        console.log('Reasoning chunk:', chunk.text);
        break;
      }
      case 'source': {
        console.log('Source chunk:', chunk);
        break;
      }
      case 'tool-call': {
        console.log('Tool call:', chunk.toolName, chunk.input);
        break;
      }
      case 'tool-input-start': {
        console.log(
          `Tool input started for ${chunk.toolName}:`,
          chunk.toolCallId,
        );
        break;
      }
      case 'tool-input-delta': {
        console.log(`Tool input delta for ${chunk.toolCallId}:`, chunk.delta);
        break;
      }
      case 'tool-result': {
        console.log('Tool result:', chunk.output);
        break;
      }
      case 'raw': {
        console.log('Raw chunk:', chunk);
        break;
      }
    }
  },
});
```
#### File Stream Parts Restructure

File parts in streams have been flattened.

```tsx
// AI SDK 4.0
for await (const chunk of result.fullStream) {
  switch (chunk.type) {
    case 'file': {
      console.log('Media type:', chunk.file.mediaType);
      console.log('File data:', chunk.file.data);
      break;
    }
  }
}
```

```tsx
// AI SDK 5.0
for await (const chunk of result.fullStream) {
  switch (chunk.type) {
    case 'file': {
      console.log('Media type:', chunk.mediaType);
      console.log('File data:', chunk.data);
      break;
    }
  }
}
```
#### Source Stream Parts Restructure

Source stream parts have been flattened.

```tsx
// AI SDK 4.0
for await (const part of result.fullStream) {
  if (part.type === 'source' && part.source.sourceType === 'url') {
    console.log('ID:', part.source.id);
    console.log('Title:', part.source.title);
    console.log('URL:', part.source.url);
  }
}
```

```tsx
// AI SDK 5.0
for await (const part of result.fullStream) {
  if (part.type === 'source' && part.sourceType === 'url') {
    console.log('ID:', part.id);
    console.log('Title:', part.title);
    console.log('URL:', part.url);
  }
}
```
#### Finish Event Changes

Stream finish events have been renamed for consistency.

```tsx
// AI SDK 4.0
for await (const part of result.fullStream) {
  switch (part.type) {
    case 'step-finish': {
      console.log('Step finished:', part.finishReason);
      break;
    }
    case 'finish': {
      console.log('Usage:', part.usage);
      break;
    }
  }
}
```

```tsx
// AI SDK 5.0
for await (const part of result.fullStream) {
  switch (part.type) {
    case 'finish-step': {
      // Renamed from 'step-finish'
      console.log('Step finished:', part.finishReason);
      break;
    }
    case 'finish': {
      console.log('Total Usage:', part.totalUsage); // Changed from 'usage'
      break;
    }
  }
}
```
### Stream Protocol Changes

#### Proprietary Protocol → Server-Sent Events

The data stream protocol has been updated to use Server-Sent Events.

```tsx
// AI SDK 4.0
import { createDataStream, formatDataStreamPart } from 'ai';

const dataStream = createDataStream({
  execute: writer => {
    writer.writeData('initialized call');
    writer.write(formatDataStreamPart('text', 'Hello'));
    writer.writeSource({
      type: 'source',
      sourceType: 'url',
      id: 'source-1',
      url: 'https://example.com',
      title: 'Example Source',
    });
  },
});
```

```tsx
// AI SDK 5.0
import { createUIMessageStream } from 'ai';

const stream = createUIMessageStream({
  execute: ({ writer }) => {
    writer.write({ type: 'data', value: ['initialized call'] });
    writer.write({ type: 'text', value: 'Hello' });
    writer.write({
      type: 'source-url',
      value: {
        type: 'source',
        id: 'source-1',
        url: 'https://example.com',
        title: 'Example Source',
      },
    });
  },
});
```
#### Data Stream Response Helper Functions Renamed

The streaming API has been completely restructured from data streams to UI message streams.

```tsx
// AI SDK 4.0
// Express/Node.js servers
app.post('/stream', async (req, res) => {
  const result = streamText({
    model: openai('gpt-4.1'),
    prompt: 'Generate content',
  });

  result.pipeDataStreamToResponse(res);
});

// Next.js API routes
const result = streamText({
  model: openai('gpt-4.1'),
  prompt: 'Generate content',
});

return result.toDataStreamResponse();
```

```tsx
// AI SDK 5.0
// Express/Node.js servers
app.post('/stream', async (req, res) => {
  const result = streamText({
    model: openai('gpt-4.1'),
    prompt: 'Generate content',
  });

  result.pipeUIMessageStreamToResponse(res);
});

// Next.js API routes
const result = streamText({
  model: openai('gpt-4.1'),
  prompt: 'Generate content',
});

return result.toUIMessageStreamResponse();
```
#### Stream Transform Function Renaming

Various stream-related functions have been renamed for consistency.

```tsx
// AI SDK 4.0
import { DataStreamToSSETransformStream } from 'ai';
```

```tsx
// AI SDK 5.0
import { JsonToSseTransformStream } from 'ai';
```
## Utility Changes

### ID Generation Changes

The `createIdGenerator()` function now takes the `size` argument at creation time instead of at call time.

```tsx
// AI SDK 4.0
const generator = createIdGenerator({ prefix: 'msg' });
const id = generator(16); // Custom size at call time
```

```tsx
// AI SDK 5.0
const generator = createIdGenerator({ prefix: 'msg', size: 16 });
const id = generator(); // Fixed size from creation
```
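To make the new behavior concrete, here is a simplified, hypothetical stand-in for `createIdGenerator` (not the SDK's actual implementation or alphabet) that shows how the size is now fixed when the generator is created:

```typescript
// Simplified stand-in: size is baked in at creation, so calls take no arguments.
const createIdGenerator = ({
  prefix,
  size,
}: {
  prefix: string;
  size: number;
}): (() => string) => {
  const alphabet = 'abcdefghijklmnopqrstuvwxyz0123456789';
  return () =>
    `${prefix}-` +
    Array.from(
      { length: size },
      () => alphabet[Math.floor(Math.random() * alphabet.length)],
    ).join('');
};

const generator = createIdGenerator({ prefix: 'msg', size: 16 });
console.log(generator()); // e.g. "msg-" followed by 16 random characters
```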
### IDGenerator → IdGenerator

The type name has been updated.

```tsx
// AI SDK 4.0
import { IDGenerator } from 'ai';
```

```tsx
// AI SDK 5.0
import { IdGenerator } from 'ai';
```
## Provider Interface Changes

### Language Model V2 Import

`LanguageModelV2` must now be imported from `@ai-sdk/provider`.

```tsx
// AI SDK 4.0
import { LanguageModelV2 } from 'ai';
```

```tsx
// AI SDK 5.0
import { LanguageModelV2 } from '@ai-sdk/provider';
```
### Middleware Rename

`LanguageModelV1Middleware` has been renamed and moved.

```tsx
// AI SDK 4.0
import { LanguageModelV1Middleware } from 'ai';
```

```tsx
// AI SDK 5.0
import { LanguageModelV2Middleware } from '@ai-sdk/provider';
```
### Usage Token Properties

Token usage properties have been renamed for consistency.

```tsx
// In language model implementations
{
  usage: {
    promptTokens: 10,
    completionTokens: 20,
  }
}
```

```tsx
// In language model implementations
{
  usage: {
    inputTokens: 10,
    outputTokens: 20,
    totalTokens: 30, // Now required
  }
}
```
Stream Part Type Changes
The LanguageModelV2StreamPart
type has been expanded to support the new streaming architecture with start/delta/end patterns and IDs.
```ts
// V4: Simple stream parts
type LanguageModelV2StreamPart =
  | { type: 'text-delta'; textDelta: string }
  | { type: 'reasoning'; text: string }
  | { type: 'tool-call'; toolCallId: string; toolName: string; input: string };
```

```ts
// V5: Enhanced stream parts with IDs and lifecycle events
type LanguageModelV2StreamPart =
  // Text blocks with start/delta/end pattern
  | { type: 'text-start'; id: string; providerMetadata?: SharedV2ProviderMetadata }
  | { type: 'text-delta'; id: string; delta: string; providerMetadata?: SharedV2ProviderMetadata }
  | { type: 'text-end'; id: string; providerMetadata?: SharedV2ProviderMetadata }
  // Reasoning blocks with start/delta/end pattern
  | { type: 'reasoning-start'; id: string; providerMetadata?: SharedV2ProviderMetadata }
  | { type: 'reasoning-delta'; id: string; delta: string; providerMetadata?: SharedV2ProviderMetadata }
  | { type: 'reasoning-end'; id: string; providerMetadata?: SharedV2ProviderMetadata }
  // Tool input streaming
  | { type: 'tool-input-start'; id: string; toolName: string; providerMetadata?: SharedV2ProviderMetadata }
  | { type: 'tool-input-delta'; id: string; delta: string; providerMetadata?: SharedV2ProviderMetadata }
  | { type: 'tool-input-end'; id: string; providerMetadata?: SharedV2ProviderMetadata }
  // Enhanced tool calls
  | { type: 'tool-call'; toolCallId: string; toolName: string; input: string; providerMetadata?: SharedV2ProviderMetadata }
  // Stream lifecycle events
  | { type: 'stream-start'; warnings: Array<LanguageModelV2CallWarning> }
  | { type: 'finish'; usage: LanguageModelV2Usage; finishReason: LanguageModelV2FinishReason; providerMetadata?: SharedV2ProviderMetadata };
```
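To make the start/delta/end lifecycle concrete, here is a minimal standalone sketch of a consumer. The types are simplified stand-ins defined locally (not the real `@ai-sdk/provider` exports, and metadata is omitted); it folds text parts back into complete blocks keyed by their `id`:

```typescript
// Simplified v5-style text stream parts (metadata omitted for brevity).
type TextStreamPart =
  | { type: 'text-start'; id: string }
  | { type: 'text-delta'; id: string; delta: string }
  | { type: 'text-end'; id: string };

// Folds start/delta/end parts into complete text blocks, keyed by block id.
function collectText(parts: TextStreamPart[]): Map<string, string> {
  const blocks = new Map<string, string>();
  for (const part of parts) {
    if (part.type === 'text-start') {
      blocks.set(part.id, ''); // open a new block
    } else if (part.type === 'text-delta') {
      blocks.set(part.id, (blocks.get(part.id) ?? '') + part.delta);
    }
    // 'text-end' marks the block complete; nothing left to accumulate.
  }
  return blocks;
}
```

Because deltas carry an `id`, multiple text or reasoning blocks can be interleaved in one stream and still be reassembled independently.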
Raw Response → Response
Provider response objects have been updated.
```ts
// AI SDK 4.0: in language model implementations
{
  rawResponse: { /* ... */ },
}

// AI SDK 5.0
{
  response: { /* ... */ },
}
```
wrapLanguageModel Now Stable
The `experimental_wrapLanguageModel` function is now stable and has lost its experimental prefix.
```ts
// AI SDK 4.0
import { experimental_wrapLanguageModel } from 'ai';

// AI SDK 5.0
import { wrapLanguageModel } from 'ai';
```
activeTools No Longer Experimental
The `experimental_activeTools` option has been renamed to `activeTools`.
```ts
// AI SDK 4.0
const result = await generateText({
  model: openai('gpt-4'),
  messages,
  tools: { weatherTool, locationTool },
  experimental_activeTools: ['weatherTool'],
});

// AI SDK 5.0
const result = await generateText({
  model: openai('gpt-4'),
  messages,
  tools: { weatherTool, locationTool },
  activeTools: ['weatherTool'], // No longer experimental
});
```
prepareStep No Longer Experimental
The `experimental_prepareStep` option has been promoted and no longer requires the experimental prefix.
```ts
// AI SDK 4.0
const result = await generateText({
  model: openai('gpt-4'),
  messages,
  tools: { weatherTool, locationTool },
  experimental_prepareStep: ({ steps, stepNumber, model }) => {
    console.log('Preparing step:', stepNumber);
    return {
      activeTools: ['weatherTool'],
      system: 'Be helpful and concise.',
    };
  },
});

// AI SDK 5.0
const result = await generateText({
  model: openai('gpt-4'),
  messages,
  tools: { weatherTool, locationTool },
  prepareStep: ({ steps, stepNumber, model }) => {
    console.log('Preparing step:', stepNumber);
    return {
      activeTools: ['weatherTool'],
      system: 'Be helpful and concise.',
      // Can also configure toolChoice, model, etc.
    };
  },
});
```
The `prepareStep` function receives `{ steps, stepNumber, model }` and can return:
- `model`: Different model for this step
- `activeTools`: Which tools to make available
- `toolChoice`: Tool selection strategy
- `system`: System message for this step
- `undefined`: Use default settings
Temperature Default Removal
Temperature is no longer set to `0` by default.
```ts
// AI SDK 4.0
await generateText({
  model: openai('gpt-4'),
  prompt: 'Write a creative story',
  // Implicitly temperature: 0
});

// AI SDK 5.0
await generateText({
  model: openai('gpt-4'),
  prompt: 'Write a creative story',
  temperature: 0, // Must be set explicitly
});
```
Provider & Model Changes
OpenAI
Structured Outputs Default
Structured outputs are now enabled by default for supported OpenAI models.
```ts
// AI SDK 4.0
const result = await generateText({
  model: openai('gpt-4.1-2024-08-06', { structuredOutputs: true }),
});

// AI SDK 5.0
const result = await generateText({
  model: openai('gpt-4.1-2024-08-06'),
  // structuredOutputs: true is now the default
});
```
Compatibility Option Removal
The `compatibility` option has been removed; strict mode is now the default.
```ts
// AI SDK 4.0
const openai = createOpenAI({
  compatibility: 'strict',
});

// AI SDK 5.0
const openai = createOpenAI({
  // strict compatibility is now the default
});
```
Legacy Function Calls Removal
The `useLegacyFunctionCalls` option has been removed.
```ts
// AI SDK 4.0
const result = streamText({
  model: openai('gpt-4.1', { useLegacyFunctionCalls: true }),
});

// AI SDK 5.0
const result = streamText({
  model: openai('gpt-4.1'),
});
```
Simulate Streaming
The `simulateStreaming` model option has been replaced with middleware.
```ts
// AI SDK 4.0
const result = generateText({
  model: openai('gpt-4.1', { simulateStreaming: true }),
  prompt: 'Hello, world!',
});

// AI SDK 5.0
import { simulateStreamingMiddleware, wrapLanguageModel } from 'ai';

const model = wrapLanguageModel({
  model: openai('gpt-4.1'),
  middleware: simulateStreamingMiddleware(),
});

const result = generateText({
  model,
  prompt: 'Hello, world!',
});
```
Amazon Bedrock
Snake Case → Camel Case
Provider options have been updated to use camelCase.
```ts
// AI SDK 4.0
const result = await generateText({
  model: bedrock('amazon.titan-tg1-large'),
  prompt: 'Hello, world!',
  providerOptions: {
    bedrock: {
      reasoning_config: { /* ... */ },
    },
  },
});

// AI SDK 5.0
const result = await generateText({
  model: bedrock('amazon.titan-tg1-large'),
  prompt: 'Hello, world!',
  providerOptions: {
    bedrock: {
      reasoningConfig: { /* ... */ },
    },
  },
});
```
Provider-Utils Changes
Deprecated `CoreTool*` types have been removed.
```ts
// AI SDK 4.0
import {
  CoreToolCall,
  CoreToolResult,
  CoreToolResultUnion,
  CoreToolCallUnion,
  CoreToolChoice,
} from '@ai-sdk/provider-utils';

// AI SDK 5.0
import {
  ToolCall,
  ToolResult,
  ToolResultUnion,
  ToolCallUnion,
  ToolChoice,
} from '@ai-sdk/provider-utils';
```