Migrate AI SDK 4.0 to 5.0 Beta

AI SDK 5.0 is currently in beta and introduces significant improvements in type safety, consistency, and developer experience. This guide will help you migrate from AI SDK 4.0 to 5.0. We are continuously improving our automated migration tools to make the upgrade process smoother. For production projects you may want to wait for the stable (GA) release; the beta is intended for early adopters who want to try 5.0 and provide feedback.

  1. Back up your project. If you use a version control system, make sure all your changes are committed.
  2. Upgrade to AI SDK 5.0.
  3. Follow the breaking changes guide below.
  4. Verify your project is working as expected.
  5. Commit your changes.

AI SDK 5.0 Package Versions

Update the following packages to the versions below in your package.json file(s):

  • ai package: 5.0.0-beta
  • @ai-sdk/provider package: 2.0.0-beta
  • @ai-sdk/provider-utils package: 3.0.0-beta
  • all other @ai-sdk/* packages: 2.0.0-beta
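
If you would rather let npm resolve the latest beta versions, an install command along these lines should work (the package list here is illustrative; include only the @ai-sdk/* provider and framework packages your project actually uses):

npm install ai@beta @ai-sdk/provider@beta @ai-sdk/provider-utils@beta @ai-sdk/react@beta @ai-sdk/openai@beta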

AI SDK Core Changes

generateText and streamText Changes

Maximum Output Tokens

The maxTokens parameter has been renamed to maxOutputTokens for clarity.

AI SDK 4.0
const result = await generateText({
model: openai('gpt-4.1'),
maxTokens: 1024,
prompt: 'Hello, world!',
});
AI SDK 5.0
const result = await generateText({
model: openai('gpt-4.1'),
maxOutputTokens: 1024,
prompt: 'Hello, world!',
});

Message and Type System Changes

Core Type Renames

CoreMessage → ModelMessage
AI SDK 4.0
import { CoreMessage } from 'ai';
AI SDK 5.0
import { ModelMessage } from 'ai';
Message → UIMessage
AI SDK 4.0
import { Message, CreateMessage } from 'ai';
AI SDK 5.0
import { UIMessage, CreateUIMessage } from 'ai';

UIMessage Changes

Content → Parts Array

For UIMessages (previously called Message), the .content property has been replaced with a parts array structure.

AI SDK 4.0
import { type Message } from 'ai'; // v4 Message type
// Messages (useChat) - had content property
const message: Message = {
id: '1',
role: 'user',
content: 'Bonjour!',
};
AI SDK 5.0
import { type UIMessage, type ModelMessage } from 'ai';
// UIMessages (useChat) - now use parts array
const uiMessage: UIMessage = {
id: '1',
role: 'user',
parts: [{ type: 'text', text: 'Bonjour!' }],
};
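
If parts of your code still expect a plain string, a small helper can flatten the text parts of a UIMessage back into one. This helper is our own sketch, not an SDK export:

import { type UIMessage } from 'ai';

// Sketch: join all text parts of a UIMessage into a single string
const textFromUIMessage = (message: UIMessage): string =>
  message.parts
    .map(part => (part.type === 'text' ? part.text : ''))
    .join('');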

Data Role Removed

The data role has been removed from UI messages.

AI SDK 4.0
const message = {
role: 'data',
content: 'Some content',
data: { customField: 'value' },
};
AI SDK 5.0
// V5: Use UI message streams with custom data parts
const stream = createUIMessageStream({
execute({ writer }) {
// Write custom data instead of message annotations
writer.write({
type: 'data-custom',
id: 'custom-1',
data: { customField: 'value' },
});
},
});
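
On the client, these custom data parts surface on the UIMessage parts array under their typed name. A rendering sketch, assuming the 'data-custom' part written above (the exact client-side part shape may differ in your beta version):

{
  message.parts.map((part, index) => {
    // Render the 'data-custom' parts written by the server above
    if (part.type === 'data-custom') {
      return <div key={index}>{JSON.stringify(part.data)}</div>;
    }
    return null;
  });
}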

UIMessage Reasoning Structure

The reasoning property on UI messages has been moved to parts.

AI SDK 4.0
const message: Message = {
role: 'assistant',
content: 'Hello',
reasoning: 'I will greet the user',
};
AI SDK 5.0
const message: UIMessage = {
role: 'assistant',
parts: [
{
type: 'reasoning',
text: 'I will greet the user',
},
{
type: 'text',
text: 'Hello',
},
],
};

Reasoning Part Property Rename

The reasoning property on reasoning UI parts has been renamed to text.

AI SDK 4.0
{
message.parts.map((part, index) => {
if (part.type === 'reasoning') {
return (
<div key={index} className="reasoning-display">
{part.reasoning}
</div>
);
}
});
}
AI SDK 5.0
{
message.parts.map((part, index) => {
if (part.type === 'reasoning') {
return (
<div key={index} className="reasoning-display">
{part.text}
</div>
);
}
});
}

File Part Changes

File parts now use .url instead of .data and .mimeType.

AI SDK 4.0
{
messages.map(message => (
<div key={message.id}>
{message.parts.map((part, index) => {
if (part.type === 'text') {
return <div key={index}>{part.text}</div>;
} else if (part.type === 'file' && part.mimeType.startsWith('image/')) {
return (
<img
key={index}
src={`data:${part.mimeType};base64,${part.data}`}
/>
);
}
})}
</div>
));
}
AI SDK 5.0
{
messages.map(message => (
<div key={message.id}>
{message.parts.map((part, index) => {
if (part.type === 'text') {
return <div key={index}>{part.text}</div>;
} else if (
part.type === 'file' &&
part.mediaType.startsWith('image/')
) {
return <img key={index} src={part.url} />;
}
})}
</div>
));
}

Stream Data Removal

The StreamData class has been completely removed and replaced with UI message streams for custom data.

AI SDK 4.0
import { StreamData } from 'ai';
const streamData = new StreamData();
streamData.append('custom-data');
streamData.close();
AI SDK 5.0
import { createUIMessageStream, createUIMessageStreamResponse } from 'ai';
const stream = createUIMessageStream({
execute({ writer }) {
// Write custom data parts
writer.write({
type: 'data-custom',
id: 'custom-1',
data: 'custom-data',
});
// Can merge with LLM streams
const result = streamText({
model: openai('gpt-4.1'),
messages,
});
writer.merge(result.toUIMessageStream());
},
});
return createUIMessageStreamResponse({ stream });
Provider Metadata → Provider Options

The providerMetadata input parameter has been renamed to providerOptions. Note that the returned metadata in results is still called providerMetadata.

AI SDK 4.0
const result = await generateText({
model: openai('gpt-4'),
prompt: 'Hello',
providerMetadata: {
openai: { store: false },
},
});
AI SDK 5.0
const result = await generateText({
model: openai('gpt-4'),
prompt: 'Hello',
providerOptions: {
// Input parameter renamed
openai: { store: false },
},
});
// Returned metadata still uses providerMetadata:
console.log(result.providerMetadata?.openai);

Tool Definition Changes (parameters → inputSchema)

Tool definitions now use inputSchema instead of parameters, and related error classes have been renamed accordingly.

AI SDK 4.0
import { tool } from 'ai';
const weatherTool = tool({
description: 'Get the weather for a city',
parameters: z.object({
city: z.string(),
}),
execute: async ({ city }) => {
return `Weather in ${city}`;
},
});
AI SDK 5.0
import { tool } from 'ai';
const weatherTool = tool({
description: 'Get the weather for a city',
inputSchema: z.object({
city: z.string(),
}),
execute: async ({ city }) => {
return `Weather in ${city}`;
},
});

Tool Property Changes (args/result → input/output)

Tool call and result properties have been renamed for better consistency with schemas.

AI SDK 4.0
// Tool calls used "args" and "result"
for await (const part of result.fullStream) {
switch (part.type) {
case 'tool-call':
console.log('Tool args:', part.args);
break;
case 'tool-result':
console.log('Tool result:', part.result);
break;
}
}
AI SDK 5.0
// Tool calls now use "input" and "output"
for await (const part of result.fullStream) {
switch (part.type) {
case 'tool-call':
console.log('Tool input:', part.input);
break;
case 'tool-result':
console.log('Tool output:', part.output);
break;
}
}

Tool Part Type Changes (UIMessage)

In v5, UI tool parts use typed naming: tool-${toolName} instead of generic types.

AI SDK 4.0
// Generic tool-invocation type
{
message.parts.map(part => {
if (part.type === 'tool-invocation') {
return <div>{part.toolInvocation.toolName}</div>;
}
});
}
AI SDK 5.0
// Type-safe tool parts with specific names
{
message.parts.map(part => {
switch (part.type) {
case 'tool-getWeatherInformation':
return <div>Getting weather...</div>;
case 'tool-askForConfirmation':
return <div>Asking for confirmation...</div>;
}
});
}
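
Because the part type now embeds the tool name, a generic fallback can still match any tool part by prefix. A sketch that relies only on the tool-${toolName} naming shown above:

{
  message.parts.map((part, index) => {
    // Catch-all rendering for any tool part, regardless of tool name
    if (part.type.startsWith('tool-')) {
      const toolName = part.type.slice('tool-'.length);
      return <div key={index}>Running {toolName}...</div>;
    }
    return null;
  });
}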

Media Type Standardization

mimeType has been renamed to mediaType for consistency. Both image and file types are supported in model messages.

AI SDK 4.0
const result = await generateText({
model: someModel,
messages: [
{
role: 'user',
content: [
{ type: 'text', text: 'What do you see?' },
{
type: 'image',
image: new Uint8Array([0, 1, 2, 3]),
mimeType: 'image/png',
},
{
type: 'file',
data: contents,
mimeType: 'application/pdf',
},
],
},
],
});
AI SDK 5.0
const result = await generateText({
model: someModel,
messages: [
{
role: 'user',
content: [
{ type: 'text', text: 'What do you see?' },
{
type: 'image',
image: new Uint8Array([0, 1, 2, 3]),
mediaType: 'image/png',
},
{
type: 'file',
data: contents,
mediaType: 'application/pdf',
},
],
},
],
});

Reasoning Support

Reasoning Text Property Rename

On step results of multi-step generations, the .reasoning property has been renamed to .reasoningText.

AI SDK 4.0
for (const step of steps) {
console.log(step.reasoning);
}
AI SDK 5.0
for (const step of steps) {
console.log(step.reasoningText);
}

Generate Text Reasoning Property Changes

In generateText() and streamText() results, reasoning properties have been renamed.

AI SDK 4.0
const result = await generateText({
model: anthropic('claude-4-sonnet-20250514'),
prompt: 'Explain your reasoning',
});
console.log(result.reasoning); // String reasoning text
console.log(result.reasoningDetails); // Array of reasoning details
AI SDK 5.0
const result = await generateText({
model: anthropic('claude-4-sonnet-20250514'),
prompt: 'Explain your reasoning',
});
console.log(result.reasoningText); // String reasoning text
console.log(result.reasoning); // Array of reasoning details

Continuation Steps Removal

The experimental_continueSteps option has been removed from generateText().

AI SDK 4.0
const result = await generateText({
experimental_continueSteps: true,
// ...
});
AI SDK 5.0
const result = await generateText({
// experimental_continueSteps has been removed
// Use newer models with higher output token limits instead
// ...
});

Image Generation Changes

Image model settings have been moved to providerOptions.

AI SDK 4.0
await generateImage({
model: luma.image('photon-flash-1', {
maxImagesPerCall: 5,
pollIntervalMillis: 500,
}),
prompt,
n: 10,
});
AI SDK 5.0
await generateImage({
model: luma.image('photon-flash-1'),
prompt,
n: 10,
maxImagesPerCall: 5,
providerOptions: {
luma: { pollIntervalMillis: 500 },
},
});

Step Result Changes

Step Type Removal

The stepType property has been removed from step results.

AI SDK 4.0
steps.forEach(step => {
switch (step.stepType) {
case 'initial':
console.log('Initial step');
break;
case 'tool-result':
console.log('Tool result step');
break;
case 'done':
console.log('Final step');
break;
}
});
AI SDK 5.0
steps.forEach((step, index) => {
if (index === 0) {
console.log('Initial step');
} else if (step.toolResults.length > 0) {
console.log('Tool result step');
} else {
console.log('Final step');
}
});

Step Control: maxSteps → stopWhen

For core functions like generateText and streamText, the maxSteps parameter has been replaced with stopWhen, which provides more flexible control over multi-step execution. The stopWhen parameter defines conditions for stopping the generation when the last step contains tool results. When multiple conditions are provided as an array, the generation stops if any condition is met.

Recommended Pattern:

  • Use stopWhen in server-side functions (generateText/streamText) for your main stopping logic
  • Use maxSteps > 1 in useChat for client-side tool execution limits
AI SDK 4.0
// V4: Simple numeric limit
const result = await generateText({
model: openai('gpt-4'),
messages,
maxSteps: 5, // Stop after a maximum of 5 steps
});
// useChat with maxSteps
const { messages } = useChat({
maxSteps: 3, // Stop after a maximum of 3 steps
});
AI SDK 5.0
import { stepCountIs, hasToolCall } from 'ai';
// V5: Server-side - flexible stopping conditions with stopWhen
const result = await generateText({
model: openai('gpt-4'),
messages,
// Only triggers when last step has tool results
stopWhen: stepCountIs(5), // Stop at step 5 if tools were called
});
// Server-side - stop when specific tool is called
const result = await generateText({
model: openai('gpt-4'),
messages,
stopWhen: hasToolCall('finalizeTask'), // Stop when finalizeTask tool is called
});
// Client-side - useChat still uses maxSteps for tool execution limits
const { messages } = useChat({
maxSteps: 3, // Limit client-side tool execution rounds
});

Common stopping patterns:

AI SDK 5.0
// Stop after N steps (equivalent to old maxSteps)
// Note: Only applies when the last step has tool results
stopWhen: stepCountIs(5);
// Stop when specific tool is called
stopWhen: hasToolCall('finalizeTask');
// Multiple conditions (stops if ANY condition is met)
stopWhen: [
stepCountIs(10), // Maximum 10 steps
hasToolCall('submitOrder'), // Or when order is submitted
];
// Custom condition based on step content
stopWhen: ({ steps }) => {
const lastStep = steps[steps.length - 1];
// Custom logic - only triggers if last step has tool results
return lastStep?.text?.includes('COMPLETE');
};

Important: The stopWhen conditions are only evaluated when the last step contains tool results.

Usage vs Total Usage

Usage properties now distinguish between single step and total usage.

AI SDK 4.0
// usage contained total token usage across all steps
console.log(result.usage);
AI SDK 5.0
// usage contains token usage from the final step only
console.log(result.usage);
// totalUsage contains total token usage across all steps
console.log(result.totalUsage);
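
If you still need per-step numbers, each entry in the steps array carries its own usage. A sketch based on the steps shape used elsewhere in this guide:

// Per-step usage plus the aggregate across all steps
result.steps.forEach((step, index) => {
  console.log(`step ${index} usage:`, step.usage);
});
console.log('total usage:', result.totalUsage);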

AI SDK UI Changes

Package Structure Changes

@ai-sdk/rsc Package Extraction

The ai/rsc export has been extracted to a separate package @ai-sdk/rsc.

AI SDK 4.0
import { createStreamableValue } from 'ai/rsc';
AI SDK 5.0
import { createStreamableValue } from '@ai-sdk/rsc';
Don't forget to install the new package: npm install @ai-sdk/rsc@beta

React UI Hooks Moved to @ai-sdk/react

The deprecated ai/react export has been removed in favor of @ai-sdk/react.

AI SDK 4.0
import { useChat } from 'ai/react';
AI SDK 5.0
import { useChat } from '@ai-sdk/react';

Don't forget to install the new package: npm install @ai-sdk/react@beta

useChat Changes

The useChat hook has undergone significant changes in v5, with new transport architecture, removal of managed input state, and more.

Chat Transport Architecture

Configuration is now handled through transport objects instead of direct API options.

AI SDK 4.0
import { useChat } from '@ai-sdk/react';
const { messages } = useChat({
api: '/api/chat',
credentials: 'include',
headers: { 'Custom-Header': 'value' },
});
AI SDK 5.0
import { useChat } from '@ai-sdk/react';
import { DefaultChatTransport } from 'ai';
const { messages } = useChat({
transport: new DefaultChatTransport({
api: '/api/chat',
credentials: 'include',
headers: { 'Custom-Header': 'value' },
}),
});

Removed Managed Input State

The useChat hook no longer manages input state internally. You must now manage input state manually.

AI SDK 4.0
import { useChat } from '@ai-sdk/react';
export default function Page() {
const { messages, input, handleInputChange, handleSubmit } = useChat({
api: '/api/chat',
});
return (
<form onSubmit={handleSubmit}>
<input value={input} onChange={handleInputChange} />
<button type="submit">Send</button>
</form>
);
}
AI SDK 5.0
import { useChat } from '@ai-sdk/react';
import { DefaultChatTransport } from 'ai';
import { useState } from 'react';
export default function Page() {
const [input, setInput] = useState('');
const { messages, sendMessage } = useChat({
transport: new DefaultChatTransport({ api: '/api/chat' }),
});
const handleSubmit = e => {
e.preventDefault();
sendMessage({ text: input });
setInput('');
};
return (
<form onSubmit={handleSubmit}>
<input value={input} onChange={e => setInput(e.target.value)} />
<button type="submit">Send</button>
</form>
);
}

Message Sending: append → sendMessage

The append function has been replaced with sendMessage, which accepts either a simple text payload or a structured parts array.

AI SDK 4.0
const { append } = useChat();
// Simple text message
append({ role: 'user', content: 'Hello' });
// With custom body
append(
{
role: 'user',
content: 'Hello',
},
{ body: { imageUrl: 'https://...' } },
);
AI SDK 5.0
const { sendMessage } = useChat();
// Simple text message (most common usage)
sendMessage({ text: 'Hello' });
// Or with explicit parts array
sendMessage({
parts: [{ type: 'text', text: 'Hello' }],
});
// With custom body (via request options)
sendMessage(
{ role: 'user', parts: [{ type: 'text', text: 'Hello' }] },
{ body: { imageUrl: 'https://...' } },
);

Message Regeneration: reload → regenerate

The reload function has been renamed to regenerate with enhanced functionality.

AI SDK 4.0
const { reload } = useChat();
// Regenerate last message
reload();
AI SDK 5.0
const { regenerate } = useChat();
// Regenerate last message
regenerate();
// Regenerate specific message
regenerate({ messageId: 'message-123' });

onResponse Removal

The onResponse callback has been removed from useChat and useCompletion.

AI SDK 4.0
const { messages } = useChat({
onResponse(response) {
// handle response
},
});
AI SDK 5.0
const { messages } = useChat({
// onResponse is no longer available
});
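
If you only used onResponse for error handling or post-stream work, the onError and onFinish callbacks remain available. A sketch; the exact callback signatures are assumptions here, so check the types shipped with your installed beta:

const { messages } = useChat({
  onError: error => {
    // Handle transport and streaming errors
    console.error(error);
  },
  onFinish: ({ message }) => {
    // Runs once the assistant message has finished streaming
    console.log('finished message:', message.id);
  },
});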

Send Extra Message Fields Default

The sendExtraMessageFields option has been removed; its behavior is now the default.

AI SDK 4.0
const { messages } = useChat({
sendExtraMessageFields: true,
});
AI SDK 5.0
const { messages } = useChat({
// sendExtraMessageFields is now the default
});

Keep Last Message on Error Removal

The keepLastMessageOnError option has been removed as it's no longer needed.

AI SDK 4.0
const { messages } = useChat({
keepLastMessageOnError: true,
});
AI SDK 5.0
const { messages } = useChat({
// keepLastMessageOnError is no longer needed
});

Chat Request Options Changes

The data and allowEmptySubmit options have been removed from ChatRequestOptions.

AI SDK 4.0
handleSubmit(e, {
data: { imageUrl: 'https://...' },
body: { custom: 'value' },
allowEmptySubmit: true,
});
AI SDK 5.0
sendMessage(
{
/* yourMessage */
},
{
body: {
custom: 'value',
imageUrl: 'https://...', // Move data to body
},
},
);

Request Options Type Rename

RequestOptions has been renamed to CompletionRequestOptions.

AI SDK 4.0
import type { RequestOptions } from 'ai';
AI SDK 5.0
import type { CompletionRequestOptions } from 'ai';

Loading State Changes

The deprecated isLoading helper has been removed in favor of status.

AI SDK 4.0
const { isLoading } = useChat();
AI SDK 5.0
const { status } = useChat();
// Use status instead of isLoading for more granular control
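
A minimal sketch of deriving the old boolean from status (the exact set of status values is an assumption here):

const { status } = useChat();
// 'submitted' and 'streaming' roughly correspond to the old isLoading === true
const isBusy = status === 'submitted' || status === 'streaming';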

Resume Stream Support

The resume functionality has been moved from experimental_resume to resumeStream.

AI SDK 4.0
// Resume was experimental
const { messages } = useChat({
experimental_resume: true,
});
AI SDK 5.0
const { messages } = useChat({
resumeStream: true, // Resume interrupted streams
});

@ai-sdk/vue Changes

The Vue.js integration has been completely restructured, replacing the useChat composable with a Chat class.

useChat Replaced with Chat Class

@ai-sdk/vue v1
<script setup>
import { useChat } from '@ai-sdk/vue';
const { messages, input, handleSubmit } = useChat({
api: '/api/chat',
});
</script>
@ai-sdk/vue v2
<script setup>
import { Chat } from '@ai-sdk/vue';
import { DefaultChatTransport } from 'ai';
import { ref } from 'vue';
const input = ref('');
const chat = new Chat({
transport: new DefaultChatTransport({ api: '/api/chat' }),
});
const handleSubmit = () => {
chat.sendMessage({ text: input.value });
input.value = '';
};
</script>

Message Structure Changes

Messages now use a parts array instead of a content string.

@ai-sdk/vue v1
<template>
<div v-for="message in messages" :key="message.id">
<div>{{ message.role }}: {{ message.content }}</div>
</div>
</template>
@ai-sdk/vue v2
<template>
<div v-for="message in chat.messages" :key="message.id">
<div>{{ message.role }}:</div>
<div v-for="part in message.parts" :key="part.type">
<span v-if="part.type === 'text'">{{ part.text }}</span>
</div>
</div>
</template>

@ai-sdk/svelte Changes

The Svelte integration has also been updated with new constructor patterns and readonly properties.

Constructor API Changes

@ai-sdk/svelte v1
import { chat } from '@ai-sdk/svelte';
const chatInstance = chat({
api: '/api/chat',
});
@ai-sdk/svelte v2
import { chat } from '@ai-sdk/svelte';
import { DefaultChatTransport } from 'ai';
const chatInstance = chat(() => ({
transport: new DefaultChatTransport({ api: '/api/chat' }),
}));
Properties Made Readonly

Properties are now readonly and must be updated using setter methods.

@ai-sdk/svelte v1
// Direct property mutation was allowed
chatInstance.messages = [...chatInstance.messages, newMessage];
@ai-sdk/svelte v2
// Must use setter methods
chatInstance.setMessages([...chatInstance.messages, newMessage]);
Removed Managed Input

Like React and Vue, input management has been removed from the Svelte integration.

@ai-sdk/svelte v1
// Input was managed internally
const { messages, input, handleSubmit } = chatInstance;
@ai-sdk/svelte v2
// Must manage input state manually
let input = '';
const { messages, sendMessage } = chatInstance;
const handleSubmit = () => {
sendMessage({ text: input });
input = '';
};

@ai-sdk/ui-utils Package Removal

The @ai-sdk/ui-utils package has been removed and its exports moved to the main ai package.

AI SDK 4.0
import { getTextFromDataUrl } from '@ai-sdk/ui-utils';
AI SDK 5.0
import { getTextFromDataUrl } from 'ai';

useCompletion Changes

The data property has been removed from the useCompletion hook.

AI SDK 4.0
const {
completion,
handleSubmit,
data, // No longer available
} = useCompletion();
AI SDK 5.0
const {
completion,
handleSubmit,
// data property removed entirely
} = useCompletion();

useAssistant Removal

The useAssistant hook has been removed.

AI SDK 4.0
import { useAssistant } from '@ai-sdk/react';
AI SDK 5.0
// useAssistant has been removed
// Use useChat with appropriate configuration instead

For an implementation of the assistant functionality with AI SDK v5, see this example repository.

Attachments → File Parts

The experimental_attachments property has been replaced with the parts array.

AI SDK 4.0
{
messages.map(message => (
<div className="flex flex-col gap-2">
{message.content}
<div className="flex flex-row gap-2">
{message.experimental_attachments?.map((attachment, index) =>
attachment.contentType?.includes('image/') ? (
<img src={attachment.url} alt={attachment.name} />
) : attachment.contentType?.includes('text/') ? (
<div className="w-32 h-24 p-2 overflow-hidden text-xs border rounded-md ellipsis text-zinc-500">
{getTextFromDataUrl(attachment.url)}
</div>
) : null,
)}
</div>
</div>
));
}
AI SDK 5.0
{
messages.map(message => (
<div>
{message.parts.map((part, index) => {
if (part.type === 'text') {
return <div key={index}>{part.text}</div>;
}
if (part.type === 'file' && part.mediaType?.startsWith('image/')) {
return (
<div key={index}>
<img src={part.url} />
</div>
);
}
})}
</div>
));
}
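
For sending attachments, the parts-based flow replaces experimental_attachments on the submit side as well. A sketch that assumes sendMessage accepts a files option alongside text, and reuses input/setInput and sendMessage from the useChat example earlier; verify against your installed beta:

const [files, setFiles] = useState<FileList | undefined>(undefined);

const handleSubmit = (event: React.FormEvent) => {
  event.preventDefault();
  sendMessage({
    text: input,
    files, // e.g. captured from an <input type="file" /> onChange handler
  });
  setInput('');
  setFiles(undefined);
};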

Embedding Changes

Provider Options for Embeddings

Embedding model settings now use provider options instead of model parameters.

AI SDK 4.0
const { embedding } = await embed({
model: openai('text-embedding-3-small', {
dimensions: 10,
}),
});
AI SDK 5.0
const { embedding } = await embed({
model: openai('text-embedding-3-small'),
providerOptions: {
openai: {
dimensions: 10,
},
},
});

Raw Response → Response

The rawResponse property has been renamed to response.

AI SDK 4.0
const { rawResponse } = await embed(/* */);
AI SDK 5.0
const { response } = await embed(/* */);

Parallel Requests in embedMany

embedMany now makes parallel requests with a configurable maxParallelCalls option.

AI SDK 5.0
const { embeddings, usage } = await embedMany({
maxParallelCalls: 2, // Limit parallel requests
model: openai.embedding('text-embedding-3-small'),
values: [
'sunny day at the beach',
'rainy afternoon in the city',
'snowy night in the mountains',
],
});

LangChain Adapter Moved to @ai-sdk/langchain

The LangChainAdapter has been moved to @ai-sdk/langchain and the API has been updated to use UI message streams.

AI SDK 4.0
import { LangChainAdapter } from 'ai';
const response = LangChainAdapter.toDataStreamResponse(stream);
AI SDK 5.0
import { toUIMessageStream } from '@ai-sdk/langchain';
import { createUIMessageStreamResponse } from 'ai';
const response = createUIMessageStreamResponse({
stream: toUIMessageStream(stream),
});

Don't forget to install the new package: npm install @ai-sdk/langchain@beta

LlamaIndex Adapter Moved to @ai-sdk/llamaindex

The LlamaIndexAdapter has been extracted to a separate package @ai-sdk/llamaindex and follows the same UI message stream pattern.

AI SDK 4.0
import { LlamaIndexAdapter } from 'ai';
const response = LlamaIndexAdapter.toDataStreamResponse(stream);
AI SDK 5.0
import { toUIMessageStream } from '@ai-sdk/llamaindex';
import { createUIMessageStreamResponse } from 'ai';
const response = createUIMessageStreamResponse({
stream: toUIMessageStream(stream),
});

Don't forget to install the new package: npm install @ai-sdk/llamaindex@beta

Streaming Architecture

The streaming architecture has been completely redesigned in v5 to support better content differentiation, concurrent streaming of multiple parts, and improved real-time UX.

Stream Protocol Changes

Stream Protocol: Single Chunks → Start/Delta/End Pattern

The fundamental streaming pattern has changed from single chunks to a three-phase pattern with unique IDs for each content block.

AI SDK 4.0
for await (const chunk of result.fullStream) {
switch (chunk.type) {
case 'text-delta': {
process.stdout.write(chunk.textDelta);
break;
}
}
}
AI SDK 5.0
for await (const chunk of result.fullStream) {
switch (chunk.type) {
case 'text-start': {
// New: Initialize a text block with unique ID
console.log(`Starting text block: ${chunk.id}`);
break;
}
case 'text-delta': {
// Changed: Now includes ID and uses 'delta' property
process.stdout.write(chunk.delta); // Changed from 'textDelta'
break;
}
case 'text-end': {
// New: Finalize the text block
console.log(`Completed text block: ${chunk.id}`);
break;
}
}
}

Reasoning Streaming Pattern

Reasoning content now follows the same start/delta/end pattern:

AI SDK 4.0
for await (const chunk of result.fullStream) {
switch (chunk.type) {
case 'reasoning': {
// Single chunk with full reasoning text
console.log('Reasoning:', chunk.text);
break;
}
}
}
AI SDK 5.0
for await (const chunk of result.fullStream) {
switch (chunk.type) {
case 'reasoning-start': {
console.log(`Starting reasoning block: ${chunk.id}`);
break;
}
case 'reasoning-delta': {
process.stdout.write(chunk.delta);
break;
}
case 'reasoning-end': {
console.log(`Completed reasoning block: ${chunk.id}`);
break;
}
}
}

Tool Input Streaming

Tool inputs can now be streamed as they're being generated:

AI SDK 5.0
for await (const chunk of result.fullStream) {
switch (chunk.type) {
case 'tool-input-start': {
console.log(`Starting tool input for ${chunk.toolName}: ${chunk.id}`);
break;
}
case 'tool-input-delta': {
// Stream the JSON input as it's being generated
process.stdout.write(chunk.delta);
break;
}
case 'tool-input-end': {
console.log(`Completed tool input: ${chunk.id}`);
break;
}
case 'tool-call': {
// Final tool call with complete input
console.log('Tool call:', chunk.toolName, chunk.input);
break;
}
}
}

onChunk Callback Changes

The onChunk callback now receives the new streaming chunk types with IDs and the start/delta/end pattern.

AI SDK 4.0
const result = streamText({
model: openai('gpt-4.1'),
prompt: 'Write a story',
onChunk({ chunk }) {
switch (chunk.type) {
case 'text-delta': {
// Single property with text content
console.log('Text delta:', chunk.textDelta);
break;
}
}
},
});
AI SDK 5.0
const result = streamText({
model: openai('gpt-4.1'),
prompt: 'Write a story',
onChunk({ chunk }) {
switch (chunk.type) {
case 'text': {
// Text chunks now use single 'text' type
console.log('Text chunk:', chunk.text);
break;
}
case 'reasoning': {
// Reasoning chunks use single 'reasoning' type
console.log('Reasoning chunk:', chunk.text);
break;
}
case 'source': {
console.log('Source chunk:', chunk);
break;
}
case 'tool-call': {
console.log('Tool call:', chunk.toolName, chunk.input);
break;
}
case 'tool-input-start': {
console.log(
`Tool input started for ${chunk.toolName}:`,
chunk.toolCallId,
);
break;
}
case 'tool-input-delta': {
console.log(`Tool input delta for ${chunk.toolCallId}:`, chunk.delta);
break;
}
case 'tool-result': {
console.log('Tool result:', chunk.output);
break;
}
case 'raw': {
console.log('Raw chunk:', chunk);
break;
}
}
},
});

File Stream Parts Restructure

File parts in streams have been flattened.

AI SDK 4.0
for await (const chunk of result.fullStream) {
switch (chunk.type) {
case 'file': {
console.log('Media type:', chunk.file.mediaType);
console.log('File data:', chunk.file.data);
break;
}
}
}
AI SDK 5.0
for await (const chunk of result.fullStream) {
switch (chunk.type) {
case 'file': {
console.log('Media type:', chunk.mediaType);
console.log('File data:', chunk.data);
break;
}
}
}

Source Stream Parts Restructure

Source stream parts have been flattened.

AI SDK 4.0
for await (const part of result.fullStream) {
if (part.type === 'source' && part.source.sourceType === 'url') {
console.log('ID:', part.source.id);
console.log('Title:', part.source.title);
console.log('URL:', part.source.url);
}
}
AI SDK 5.0
for await (const part of result.fullStream) {
if (part.type === 'source' && part.sourceType === 'url') {
console.log('ID:', part.id);
console.log('Title:', part.title);
console.log('URL:', part.url);
}
}

Finish Event Changes

Stream finish events have been renamed for consistency.

AI SDK 4.0
for await (const part of result.fullStream) {
switch (part.type) {
case 'step-finish': {
console.log('Step finished:', part.finishReason);
break;
}
case 'finish': {
console.log('Usage:', part.usage);
break;
}
}
}
AI SDK 5.0
for await (const part of result.fullStream) {
switch (part.type) {
case 'finish-step': {
// Renamed from 'step-finish'
console.log('Step finished:', part.finishReason);
break;
}
case 'finish': {
console.log('Total Usage:', part.totalUsage); // Changed from 'usage'
break;
}
}
}

Stream Protocol Changes

Proprietary Protocol → Server-Sent Events

The data stream protocol has been updated to use Server-Sent Events.

AI SDK 4.0
import { createDataStream, formatDataStreamPart } from 'ai';
const dataStream = createDataStream({
execute: writer => {
writer.writeData('initialized call');
writer.write(formatDataStreamPart('text', 'Hello'));
writer.writeSource({
type: 'source',
sourceType: 'url',
id: 'source-1',
url: 'https://example.com',
title: 'Example Source',
});
},
});
AI SDK 5.0
import { createUIMessageStream } from 'ai';
const stream = createUIMessageStream({
execute: ({ writer }) => {
writer.write({ type: 'data', value: ['initialized call'] });
writer.write({ type: 'text', value: 'Hello' });
writer.write({
type: 'source-url',
value: {
type: 'source',
id: 'source-1',
url: 'https://example.com',
title: 'Example Source',
},
});
},
});

Data Stream Response Helper Functions Renamed

The streaming API has been completely restructured from data streams to UI message streams.

AI SDK 4.0
// Express/Node.js servers
app.post('/stream', async (req, res) => {
const result = streamText({
model: openai('gpt-4.1'),
prompt: 'Generate content',
});
result.pipeDataStreamToResponse(res);
});
// Next.js API routes
const result = streamText({
model: openai('gpt-4.1'),
prompt: 'Generate content',
});
return result.toDataStreamResponse();
AI SDK 5.0
// Express/Node.js servers
app.post('/stream', async (req, res) => {
const result = streamText({
model: openai('gpt-4.1'),
prompt: 'Generate content',
});
result.pipeUIMessageStreamToResponse(res);
});
// Next.js API routes
const result = streamText({
model: openai('gpt-4.1'),
prompt: 'Generate content',
});
return result.toUIMessageStreamResponse();

Stream Transform Function Renaming

Various stream-related functions have been renamed for consistency.

AI SDK 4.0
import { DataStreamToSSETransformStream } from 'ai';
AI SDK 5.0
import { JsonToSseTransformStream } from 'ai';

Utility Changes

ID Generation Changes

The createIdGenerator() function now requires a size argument.

AI SDK 4.0
const generator = createIdGenerator({ prefix: 'msg' });
const id = generator(16); // Custom size at call time
AI SDK 5.0
const generator = createIdGenerator({ prefix: 'msg', size: 16 });
const id = generator(); // Fixed size from creation

IDGenerator → IdGenerator

The type name has been updated.

AI SDK 4.0
import { IDGenerator } from 'ai';
AI SDK 5.0
import { IdGenerator } from 'ai';

Provider Interface Changes

Language Model V2 Import

The LanguageModelV1 specification has been replaced by LanguageModelV2, which must be imported from @ai-sdk/provider.

AI SDK 4.0
import { LanguageModelV1 } from 'ai';
AI SDK 5.0
import { LanguageModelV2 } from '@ai-sdk/provider';

Middleware Rename

LanguageModelV1Middleware has been renamed and moved.

AI SDK 4.0
import { LanguageModelV1Middleware } from 'ai';
AI SDK 5.0
import { LanguageModelV2Middleware } from '@ai-sdk/provider';

Usage Token Properties

Token usage properties have been renamed for consistency.

AI SDK 4.0
// In language model implementations
{
usage: {
promptTokens: 10,
completionTokens: 20
}
}
AI SDK 5.0
// In language model implementations
{
usage: {
inputTokens: 10,
outputTokens: 20,
totalTokens: 30 // Now required
}
}
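
For custom provider implementations, this is typically a straightforward field mapping from the upstream API response. A sketch; the upstream field names (apiUsage.prompt_tokens etc.) are illustrative:

// Map an upstream usage payload onto the V2 usage shape
const usage = {
  inputTokens: apiUsage.prompt_tokens,
  outputTokens: apiUsage.completion_tokens,
  totalTokens: apiUsage.prompt_tokens + apiUsage.completion_tokens, // now required
};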

Stream Part Type Changes

The LanguageModelV2StreamPart type has been expanded to support the new streaming architecture with start/delta/end patterns and IDs.

AI SDK 4.0
// V4: Simple stream parts
type LanguageModelV2StreamPart =
| { type: 'text-delta'; textDelta: string }
| { type: 'reasoning'; text: string }
| { type: 'tool-call'; toolCallId: string; toolName: string; input: string };
AI SDK 5.0
// V5: Enhanced stream parts with IDs and lifecycle events
type LanguageModelV2StreamPart =
// Text blocks with start/delta/end pattern
| {
type: 'text-start';
id: string;
providerMetadata?: SharedV2ProviderMetadata;
}
| {
type: 'text-delta';
id: string;
delta: string;
providerMetadata?: SharedV2ProviderMetadata;
}
| {
type: 'text-end';
id: string;
providerMetadata?: SharedV2ProviderMetadata;
}
// Reasoning blocks with start/delta/end pattern
| {
type: 'reasoning-start';
id: string;
providerMetadata?: SharedV2ProviderMetadata;
}
| {
type: 'reasoning-delta';
id: string;
delta: string;
providerMetadata?: SharedV2ProviderMetadata;
}
| {
type: 'reasoning-end';
id: string;
providerMetadata?: SharedV2ProviderMetadata;
}
// Tool input streaming
| {
type: 'tool-input-start';
id: string;
toolName: string;
providerMetadata?: SharedV2ProviderMetadata;
}
| {
type: 'tool-input-delta';
id: string;
delta: string;
providerMetadata?: SharedV2ProviderMetadata;
}
| {
type: 'tool-input-end';
id: string;
providerMetadata?: SharedV2ProviderMetadata;
}
// Enhanced tool calls
| {
type: 'tool-call';
toolCallId: string;
toolName: string;
input: string;
providerMetadata?: SharedV2ProviderMetadata;
}
// Stream lifecycle events
| { type: 'stream-start'; warnings: Array<LanguageModelV2CallWarning> }
| {
type: 'finish';
usage: LanguageModelV2Usage;
finishReason: LanguageModelV2FinishReason;
providerMetadata?: SharedV2ProviderMetadata;
};

Raw Response → Response

Provider response objects have been updated.

AI SDK 4.0
// In language model implementations
{
rawResponse: {
/* ... */
}
}
AI SDK 5.0
// In language model implementations
{
response: {
/* ... */
}
}

wrapLanguageModel Now Stable

AI SDK 4.0
import { experimental_wrapLanguageModel } from 'ai';
AI SDK 5.0
import { wrapLanguageModel } from 'ai';

activeTools No Longer Experimental

AI SDK 4.0
const result = await generateText({
model: openai('gpt-4'),
messages,
tools: { weatherTool, locationTool },
experimental_activeTools: ['weatherTool'],
});
AI SDK 5.0
const result = await generateText({
model: openai('gpt-4'),
messages,
tools: { weatherTool, locationTool },
activeTools: ['weatherTool'], // No longer experimental
});

prepareStep No Longer Experimental

The experimental_prepareStep option has been promoted and no longer requires the experimental prefix.

AI SDK 4.0
const result = await generateText({
model: openai('gpt-4'),
messages,
tools: { weatherTool, locationTool },
experimental_prepareStep: ({ steps, stepNumber, model }) => {
console.log('Preparing step:', stepNumber);
return {
activeTools: ['weatherTool'],
system: 'Be helpful and concise.',
};
},
});
AI SDK 5.0
const result = await generateText({
model: openai('gpt-4'),
messages,
tools: { weatherTool, locationTool },
prepareStep: ({ steps, stepNumber, model }) => {
console.log('Preparing step:', stepNumber);
return {
activeTools: ['weatherTool'],
system: 'Be helpful and concise.',
// Can also configure toolChoice, model, etc.
};
},
});

The prepareStep function receives { steps, stepNumber, model } and can return any of the following (see the sketch after this list):

  • model: Different model for this step
  • activeTools: Which tools to make available
  • toolChoice: Tool selection strategy
  • system: System message for this step
  • undefined: Use default settings
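
For example, a sketch that swaps in a different model after the first step and otherwise keeps the defaults (the model id is illustrative):

prepareStep: ({ stepNumber }) => {
  if (stepNumber === 0) {
    // Use a smaller model for the initial tool-selection step
    return { model: openai('gpt-4.1-mini') };
  }
  return undefined; // keep the default settings for later steps
},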

Temperature Default Removal

Temperature is no longer set to 0 by default.

AI SDK 4.0
await generateText({
model: openai('gpt-4'),
prompt: 'Write a creative story',
// Implicitly temperature: 0
});
AI SDK 5.0
await generateText({
model: openai('gpt-4'),
prompt: 'Write a creative story',
temperature: 0, // Must explicitly set
});

Provider & Model Changes

OpenAI

Structured Outputs Default

Structured outputs are now enabled by default for supported OpenAI models.

AI SDK 4.0
const result = await generateText({
model: openai('gpt-4.1-2024-08-06', { structuredOutputs: true }),
});
AI SDK 5.0
const result = await generateText({
model: openai('gpt-4.1-2024-08-06'),
// structuredOutputs: true is now the default
});

Compatibility Option Removal

The compatibility option has been removed; strict mode is now the default.

AI SDK 4.0
const openai = createOpenAI({
compatibility: 'strict',
});
AI SDK 5.0
const openai = createOpenAI({
// strict compatibility is now the default
});

Legacy Function Calls Removal

The useLegacyFunctionCalls option has been removed.

AI SDK 4.0
const result = streamText({
model: openai('gpt-4.1', { useLegacyFunctionCalls: true }),
});
AI SDK 5.0
const result = streamText({
model: openai('gpt-4.1'),
});

Simulate Streaming

The simulateStreaming model option has been replaced with middleware.

AI SDK 4.0
const result = generateText({
model: openai('gpt-4.1', { simulateStreaming: true }),
prompt: 'Hello, world!',
});
AI SDK 5.0
import { simulateStreamingMiddleware, wrapLanguageModel } from 'ai';
const model = wrapLanguageModel({
model: openai('gpt-4.1'),
middleware: simulateStreamingMiddleware(),
});
const result = generateText({
model,
prompt: 'Hello, world!',
});

Amazon Bedrock

Snake Case → Camel Case

Provider options have been updated to use camelCase.

AI SDK 4.0
const result = await generateText({
model: bedrock('amazon.titan-tg1-large'),
prompt: 'Hello, world!',
providerOptions: {
bedrock: {
reasoning_config: {
/* ... */
},
},
},
});
AI SDK 5.0
const result = await generateText({
model: bedrock('amazon.titan-tg1-large'),
prompt: 'Hello, world!',
providerOptions: {
bedrock: {
reasoningConfig: {
/* ... */
},
},
},
});

Provider-Utils Changes

Deprecated CoreTool* types have been removed.

AI SDK 4.0
import {
CoreToolCall,
CoreToolResult,
CoreToolResultUnion,
CoreToolCallUnion,
CoreToolChoice,
} from '@ai-sdk/provider-utils';
AI SDK 5.0
import {
ToolCall,
ToolResult,
ToolResultUnion,
ToolCallUnion,
ToolChoice,
} from '@ai-sdk/provider-utils';