Stopping Streams

You often need to cancel an ongoing stream, for example when a user realizes mid-stream that the response is not what they want.

The different parts of the AI SDK support cancelling streams in different ways.

AI SDK Core

The AI SDK Core functions accept an abortSignal argument that you can use to cancel a stream. Use it to cancel the stream from the server side to the LLM API, e.g. by forwarding the abortSignal from the incoming request.

import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const result = streamText({
    model: openai('gpt-4.1'),
    prompt,
    // forward the abort signal:
    abortSignal: req.signal,
    onAbort: ({ steps }) => {
      // handle cleanup when the stream is aborted,
      // e.g. persist partial results to the database
      console.log('Stream aborted after', steps.length, 'steps');
    },
  });

  return result.toTextStreamResponse();
}
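On the client, aborting the HTTP request is what triggers req.signal in the route handler. Here is a minimal sketch of that, assuming a plain fetch against a hypothetical /api/completion route (the UI hooks in the next section handle this for you):

```typescript
// Illustrative client-side sketch: aborting the fetch cancels the
// HTTP request, which fires req.signal in the route handler.
const controller = new AbortController();

async function streamCompletion(prompt: string): Promise<string> {
  const response = await fetch('/api/completion', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt }),
    // tie the request's lifetime to the controller
    signal: controller.signal,
  });

  // read the streamed text body chunk by chunk
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  let text = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    text += decoder.decode(value, { stream: true });
  }
  return text;
}

// Wiring controller.abort() to a Stop button cancels the stream.
```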

AI SDK UI

The UI hooks, e.g. useChat and useCompletion, provide a stop helper function that cancels the stream from the client side to the server.

'use client';

import { useCompletion } from '@ai-sdk/react';

export default function Chat() {
  const { input, completion, stop, status, handleSubmit, handleInputChange } =
    useCompletion();

  return (
    <div>
      {(status === 'submitted' || status === 'streaming') && (
        <button type="button" onClick={() => stop()}>
          Stop
        </button>
      )}
      {completion}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
      </form>
    </div>
  );
}

Handling stream abort cleanup

When streams are aborted, you may need to perform cleanup operations such as persisting partial results or cleaning up resources. The onAbort callback provides a way to handle these scenarios on the server side.

Unlike onFinish, which is called when a stream completes normally, onAbort is specifically called when a stream is aborted via AbortSignal. This distinction allows you to handle normal completion and aborted streams differently.

For UI message streams (toUIMessageStreamResponse), the onFinish callback also receives an isAborted parameter that indicates whether the stream was aborted. This allows you to handle both completion and abort scenarios in a single callback.

import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

const controller = new AbortController();

const result = streamText({
  model: openai('gpt-4.1'),
  prompt: 'Write a long story...',
  abortSignal: controller.signal,
  onAbort: async ({ steps }) => {
    // called when the stream is aborted - persist partial results
    await savePartialResults(steps);
    await logAbortEvent(steps.length);
  },
  onFinish: async ({ steps, totalUsage }) => {
    // called when the stream completes normally
    await saveFinalResults(steps, totalUsage);
  },
});

The onAbort callback receives:

  • steps: Array of all completed steps before the abort occurred

This is particularly useful for:

  • Persisting partial conversation history to database
  • Saving partial progress for later continuation
  • Cleaning up server-side resources or connections
  • Logging abort events for analytics
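A helper like savePartialResults is application-specific. One way to sketch it, assuming each completed step exposes its generated text on a text field and that persistence is simulated with an in-memory store:

```typescript
// Hypothetical cleanup helper: joins the text of the completed steps so
// a partial response can be stored and resumed later.
type CompletedStep = { text: string };

// stand-in for a real database
const store: string[] = [];

async function savePartialResults(steps: CompletedStep[]): Promise<string> {
  const partialText = steps.map(step => step.text).join('');
  // in a real app, write to your database instead
  store.push(partialText);
  console.log(`Persisted ${steps.length} step(s), ${partialText.length} chars`);
  return partialText;
}
```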

You can also handle abort events directly in the stream using the abort stream part:

for await (const part of result.fullStream) {
  switch (part.type) {
    case 'text-delta':
      // handle text delta content
      break;
    case 'abort':
      // handle abort event directly in the stream
      console.log('Stream was aborted');
      break;
    // ... other cases
  }
}
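To make this concrete, here is a self-contained sketch that consumes a simulated stream shaped like fullStream (the part shapes and the generator are assumptions for illustration) and returns the partial text accumulated before the abort arrived:

```typescript
// Simulated stream parts, shaped like the AI SDK's fullStream parts
// (assuming each text delta carries its text in a `text` field).
type StreamPart =
  | { type: 'text-delta'; text: string }
  | { type: 'abort' };

// stand-in for a real stream that gets aborted mid-generation
async function* simulatedStream(): AsyncGenerator<StreamPart> {
  yield { type: 'text-delta', text: 'Once upon ' };
  yield { type: 'text-delta', text: 'a time' };
  yield { type: 'abort' };
}

// accumulate deltas so the partial text is available when abort arrives
async function collectUntilAbort(stream: AsyncIterable<StreamPart>) {
  let accumulated = '';
  for await (const part of stream) {
    switch (part.type) {
      case 'text-delta':
        accumulated += part.text;
        break;
      case 'abort':
        // persist `accumulated` here, then stop consuming
        return { aborted: true, text: accumulated };
    }
  }
  return { aborted: false, text: accumulated };
}
```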

UI Message Streams

When using toUIMessageStreamResponse, you need to handle stream abortion slightly differently. The onFinish callback receives an isAborted parameter, and you should pass the consumeStream function to ensure proper abort handling:

import { openai } from '@ai-sdk/openai';
import {
  consumeStream,
  convertToModelMessages,
  streamText,
  UIMessage,
} from 'ai';

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages: convertToModelMessages(messages),
    abortSignal: req.signal,
  });

  return result.toUIMessageStreamResponse({
    onFinish: async ({ isAborted }) => {
      if (isAborted) {
        console.log('Stream was aborted');
        // handle abort-specific cleanup
      } else {
        console.log('Stream completed normally');
        // handle normal completion
      }
    },
    consumeSseStream: consumeStream,
  });
}

The consumeStream function is necessary for proper abort handling in UI message streams. It ensures that the stream is properly consumed even when aborted, preventing potential memory leaks or hanging connections.

AI SDK RSC

The AI SDK RSC does not currently support stopping streams.