AI Gateway Provider
The AI Gateway provider connects you to models from multiple AI providers through a single interface. Instead of integrating with each provider separately, you can access models from OpenAI, Anthropic, Google, Meta, xAI, and other providers through one integration.
Features
- Access models from multiple providers without having to install additional provider modules/dependencies
- Use the same code structure across different AI providers
- Switch between models and providers easily
- Automatic authentication when deployed on Vercel
- View pricing information across providers
- Observability for AI model usage through the Vercel dashboard
Setup
The AI Gateway provider is available in the `@ai-sdk/gateway` module. You can install it with:

```bash
pnpm add @ai-sdk/gateway
```
Basic Usage
For most use cases, you can use the AI Gateway directly with a model string:
```ts
// use plain model string with global provider
import { generateText } from 'ai';

const { text } = await generateText({
  model: 'openai/gpt-4o',
  prompt: 'Hello world',
});
```

```ts
// use provider instance
import { generateText } from 'ai';
import { gateway } from '@ai-sdk/gateway';

const { text } = await generateText({
  model: gateway('openai/gpt-4o'),
  prompt: 'Hello world',
});
```

The AI SDK automatically uses the AI Gateway when you pass a model string in the `creator/model-name` format.
Provider Instance
You can also import the default provider instance `gateway` from `@ai-sdk/gateway`:

```ts
import { gateway } from '@ai-sdk/gateway';
```
You may want to create a custom provider instance when you need to:
- Set custom configuration options (API key, base URL, headers)
- Use the provider in a provider registry
- Wrap the provider with middleware
- Use different settings for different parts of your application
To create a custom provider instance, import `createGateway` from `@ai-sdk/gateway`:

```ts
import { createGateway } from '@ai-sdk/gateway';

const gateway = createGateway({
  apiKey: process.env.AI_GATEWAY_API_KEY ?? '',
});
```
You can use the following optional settings to customize the AI Gateway provider instance:
- `baseURL` *string*: Use a different URL prefix for API calls. The default prefix is `https://ai-gateway.vercel.sh/v1/ai`.
- `apiKey` *string*: API key that is sent using the `Authorization` header. It defaults to the `AI_GATEWAY_API_KEY` environment variable.
- `headers` *Record<string,string>*: Custom headers to include in the requests.
- `fetch` *(input: RequestInfo, init?: RequestInit) => Promise<Response>*: Custom fetch implementation. Defaults to the global `fetch` function. You can use it as a middleware to intercept requests, or to provide a custom fetch implementation for e.g. testing.
- `metadataCacheRefreshMillis` *number*: How frequently to refresh the metadata cache, in milliseconds. Defaults to 5 minutes (300,000 ms).
Authentication
The Gateway provider supports two authentication methods:
API Key Authentication
Set your API key via environment variable:
```bash
AI_GATEWAY_API_KEY=your_api_key_here
```
Or pass it directly to the provider:
```ts
import { createGateway } from '@ai-sdk/gateway';

const gateway = createGateway({
  apiKey: 'your_api_key_here',
});
```
OIDC Authentication (Vercel Deployments)
When deployed to Vercel, the AI Gateway provider supports authentication via OIDC (OpenID Connect) tokens, without API keys.
How OIDC Authentication Works
- In Production/Preview Deployments:
  - OIDC authentication is handled automatically
  - No manual configuration is needed
  - Tokens are automatically obtained and refreshed
- In Local Development:
  - First, install and authenticate with the Vercel CLI
  - Run `vercel env pull` to download your project's OIDC token locally
  - For automatic token management:
    - Use `vercel dev` to start your development server; it will handle token refreshing automatically
  - For manual token management:
    - If not using `vercel dev`, note that OIDC tokens expire after 12 hours
    - You'll need to run `vercel env pull` again to refresh the token before it expires
If an API key is present (either passed directly or via the environment), it is always used instead of OIDC authentication, even if the key is invalid.
Read more about using OIDC tokens in the Vercel AI Gateway docs.
Language Models
You can create language models using a provider instance. The first argument is the model ID in the `creator/model-name` format:

```ts
import { gateway } from '@ai-sdk/gateway';
import { generateText } from 'ai';

const { text } = await generateText({
  model: gateway('openai/gpt-4o'),
  prompt: 'Explain quantum computing in simple terms',
});
```
AI Gateway language models can also be used in the `streamText`, `generateObject`, and `streamObject` functions (see AI SDK Core).
Available Models
The AI Gateway supports models from OpenAI, Anthropic, Google, Meta, xAI, Mistral, DeepSeek, Amazon Bedrock, Cohere, Perplexity, Alibaba, and other providers.
For the complete list of available models, see the AI Gateway documentation.
Dynamic Model Discovery
You can discover available models programmatically:
```ts
import { gateway } from '@ai-sdk/gateway';
import { generateText } from 'ai';

const availableModels = await gateway.getAvailableModels();

// List all available models
availableModels.models.forEach(model => {
  console.log(`${model.id}: ${model.name}`);
  if (model.description) {
    console.log(`  Description: ${model.description}`);
  }
  if (model.pricing) {
    console.log(`  Input: $${model.pricing.input}/token`);
    console.log(`  Output: $${model.pricing.output}/token`);
  }
});

// Use any discovered model with a plain string
const { text } = await generateText({
  model: availableModels.models[0].id, // e.g., 'openai/gpt-4o'
  prompt: 'Hello world',
});
```
Examples
Basic Text Generation
```ts
import { gateway } from '@ai-sdk/gateway';
import { generateText } from 'ai';

const { text } = await generateText({
  model: gateway('anthropic/claude-3.5-sonnet'),
  prompt: 'Write a haiku about programming',
});

console.log(text);
```
Streaming
```ts
import { gateway } from '@ai-sdk/gateway';
import { streamText } from 'ai';

const { textStream } = await streamText({
  model: gateway('openai/gpt-4o'),
  prompt: 'Explain the benefits of serverless architecture',
});

for await (const textPart of textStream) {
  process.stdout.write(textPart);
}
```
Tool Usage
```ts
import { gateway } from '@ai-sdk/gateway';
import { generateText, tool } from 'ai';
import { z } from 'zod';

const { text } = await generateText({
  model: gateway('meta/llama-3.3-70b'),
  prompt: 'What is the weather like in San Francisco?',
  tools: {
    getWeather: tool({
      description: 'Get the current weather for a location',
      parameters: z.object({
        location: z.string().describe('The location to get weather for'),
      }),
      execute: async ({ location }) => {
        // Your weather API call here
        return `It's sunny in ${location}`;
      },
    }),
  },
});
```
Provider Options
When using provider-specific options, use the actual provider name (e.g. `anthropic`, not `gateway`) as the key:

```ts
// with model string
import { generateText } from 'ai';

const { text } = await generateText({
  model: 'anthropic/claude-3-5-sonnet',
  prompt: 'Explain quantum computing',
  providerOptions: {
    anthropic: {
      thinking: { type: 'enabled', budgetTokens: 12000 },
    },
  },
});
```

```ts
// with provider instance
import { generateText } from 'ai';
import { gateway } from '@ai-sdk/gateway';

const { text } = await generateText({
  model: gateway('anthropic/claude-3-5-sonnet'),
  prompt: 'Explain quantum computing',
  providerOptions: {
    anthropic: {
      thinking: { type: 'enabled', budgetTokens: 12000 },
    },
  },
});
```
The AI Gateway provider also accepts its own set of options. Refer to the AI Gateway provider options documentation.
Model Capabilities
Model capabilities depend on the specific provider and model you're using. For detailed capability information, see:
- AI Gateway provider options for an overview of available providers
- Individual AI SDK provider pages for specific model capabilities and features