Groq Provider

The Groq provider contains language model and transcription model support for the Groq API.

Setup

The Groq provider is available via the @ai-sdk/groq module. You can install it with:

pnpm add @ai-sdk/groq

Provider Instance

You can import the default provider instance groq from @ai-sdk/groq:

import { groq } from '@ai-sdk/groq';

If you need a customized setup, you can import createGroq from @ai-sdk/groq and create a provider instance with your settings:

import { createGroq } from '@ai-sdk/groq';

const groq = createGroq({
  // custom settings
});

You can use the following optional settings to customize the Groq provider instance:

  • baseURL string

    Use a different URL prefix for API calls, e.g. to use proxy servers. The default prefix is https://api.groq.com/openai/v1.

  • apiKey string

API key that is sent using the Authorization header. It defaults to the GROQ_API_KEY environment variable.

  • headers Record<string,string>

    Custom headers to include in the requests.

  • fetch (input: RequestInfo, init?: RequestInit) => Promise<Response>

Custom fetch implementation. Defaults to the global fetch function. You can use it as middleware to intercept requests, or to provide a custom fetch implementation, e.g. for testing.
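
For example, these settings can be combined when creating a customized provider instance. The sketch below uses placeholder values (the proxy URL and extra header are hypothetical) and wraps the global fetch to log outgoing requests:

import { createGroq } from '@ai-sdk/groq';

const groq = createGroq({
  // Hypothetical proxy URL; the default prefix is https://api.groq.com/openai/v1
  baseURL: 'https://my-groq-proxy.example.com/openai/v1',
  // Falls back to the GROQ_API_KEY environment variable if omitted
  apiKey: process.env.GROQ_API_KEY,
  // Hypothetical custom header added to every request
  headers: { 'x-request-source': 'my-app' },
  // Wrap the global fetch to inspect requests before sending them
  fetch: async (input, init) => {
    console.log('Calling Groq:', input);
    return fetch(input, init);
  },
});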

Language Models

You can create Groq models using a provider instance. The first argument is the model id, e.g. gemma2-9b-it.

const model = groq('gemma2-9b-it');

Reasoning Models

Groq offers several reasoning models such as qwen-qwq-32b and deepseek-r1-distill-llama-70b. You can configure how the reasoning is exposed in the generated text by using the reasoningFormat option. It supports the options parsed, hidden, and raw.

import { groq } from '@ai-sdk/groq';
import { generateText } from 'ai';
const result = await generateText({
  model: groq('qwen-qwq-32b'),
  providerOptions: {
    groq: {
      reasoningFormat: 'parsed',
      parallelToolCalls: true, // Enable parallel function calling (default: true)
      user: 'user-123', // Unique identifier for end-user (optional)
    },
  },
  prompt: 'How many "r"s are in the word "strawberry"?',
});

The following optional provider options are available for Groq language models:

  • reasoningFormat 'parsed' | 'raw' | 'hidden'

Controls how reasoning is exposed in the generated text. Only supported by reasoning models such as qwen-qwq-32b and the deepseek-r1-distill-* models.

    For a complete list of reasoning models and their capabilities, see Groq's reasoning models documentation.

  • structuredOutputs boolean

    Whether to use structured outputs.

    Defaults to true.

    When enabled, object generation will use the json_schema format instead of json_object format, providing more reliable structured outputs.

  • parallelToolCalls boolean

    Whether to enable parallel function calling during tool use. Defaults to true.

  • user string

    A unique identifier representing your end-user, which can help with monitoring and abuse detection.

Only Groq reasoning models support the reasoningFormat option.

Structured Outputs

Structured outputs are enabled by default for Groq models. You can disable them by setting the structuredOutputs option to false.

import { groq } from '@ai-sdk/groq';
import { generateObject } from 'ai';
import { z } from 'zod';
const result = await generateObject({
  model: groq('moonshotai/kimi-k2-instruct'),
  schema: z.object({
    recipe: z.object({
      name: z.string(),
      ingredients: z.array(z.string()),
      instructions: z.array(z.string()),
    }),
  }),
  prompt: 'Generate a simple pasta recipe.',
});
console.log(JSON.stringify(result.object, null, 2));

You can disable structured outputs for models that don't support them:

import { groq } from '@ai-sdk/groq';
import { generateObject } from 'ai';
import { z } from 'zod';
const result = await generateObject({
  model: groq('gemma2-9b-it'),
  providerOptions: {
    groq: {
      structuredOutputs: false,
    },
  },
  schema: z.object({
    recipe: z.object({
      name: z.string(),
      ingredients: z.array(z.string()),
      instructions: z.array(z.string()),
    }),
  }),
  prompt: 'Generate a simple pasta recipe in JSON format.',
});
console.log(JSON.stringify(result.object, null, 2));

Structured outputs are only supported by newer Groq models like moonshotai/kimi-k2-instruct. For unsupported models, you can disable structured outputs by setting structuredOutputs: false. When disabled, Groq uses the json_object format which requires the word "JSON" to be included in your messages.

Example

You can use Groq language models to generate text with the generateText function:

import { groq } from '@ai-sdk/groq';
import { generateText } from 'ai';
const { text } = await generateText({
  model: groq('gemma2-9b-it'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});

Image Input

Groq's multi-modal models like meta-llama/llama-4-scout-17b-16e-instruct support image inputs. You can include images in your messages using either URLs or base64-encoded data:

import { groq } from '@ai-sdk/groq';
import { generateText } from 'ai';
const { text } = await generateText({
  model: groq('meta-llama/llama-4-scout-17b-16e-instruct'),
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'What do you see in this image?' },
        {
          type: 'image',
          image: 'https://example.com/image.jpg',
        },
      ],
    },
  ],
});

You can also use base64-encoded images:

import { groq } from '@ai-sdk/groq';
import { generateText } from 'ai';
import { readFileSync } from 'fs';
const imageData = readFileSync('path/to/image.jpg', 'base64');
const { text } = await generateText({
  model: groq('meta-llama/llama-4-scout-17b-16e-instruct'),
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'Describe this image in detail.' },
        {
          type: 'image',
          image: `data:image/jpeg;base64,${imageData}`,
        },
      ],
    },
  ],
});

Model Capabilities

Model | Image Input | Object Generation | Tool Usage | Tool Streaming
gemma2-9b-it
llama-3.1-8b-instant
llama-3.3-70b-versatile
meta-llama/llama-guard-4-12b
deepseek-r1-distill-llama-70b
meta-llama/llama-4-maverick-17b-128e-instruct
meta-llama/llama-4-scout-17b-16e-instruct
meta-llama/llama-prompt-guard-2-22m
meta-llama/llama-prompt-guard-2-86m
mistral-saba-24b
moonshotai/kimi-k2-instruct
qwen/qwen3-32b
llama-guard-3-8b
llama3-70b-8192
llama3-8b-8192
mixtral-8x7b-32768
qwen-qwq-32b
qwen-2.5-32b
deepseek-r1-distill-qwen-32b

The table above lists the most commonly used models. Please see the Groq docs for a complete list of available models. You can also pass any available provider model ID as a string if needed.

Transcription Models

You can create models that call the Groq transcription API using the .transcription() factory method.

The first argument is the model id, e.g. whisper-large-v3.

const model = groq.transcription('whisper-large-v3');

You can also pass additional provider-specific options using the providerOptions argument. For example, supplying the input language in ISO-639-1 format (e.g. en) will improve accuracy and latency.

import { experimental_transcribe as transcribe } from 'ai';
import { groq } from '@ai-sdk/groq';
import { readFile } from 'fs/promises';
const result = await transcribe({
  model: groq.transcription('whisper-large-v3'),
  audio: await readFile('audio.mp3'),
  providerOptions: { groq: { language: 'en' } },
});

The following provider options are available:

  • timestampGranularities string[]

    The granularity of the timestamps in the transcription. Defaults to ['segment']. Possible values are ['word'], ['segment'], and ['word', 'segment']. Note: There is no additional latency for segment timestamps, but generating word timestamps incurs additional latency.

  • language string

    The language of the input audio. Supplying the input language in ISO-639-1 format (e.g. 'en') will improve accuracy and latency. Optional.

  • prompt string

    An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language. Optional.

  • temperature number

    The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit. Defaults to 0. Optional.
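
Below is a sketch that combines several of these options. The audio file name and the context prompt are illustrative, and the console.log calls assume the result exposes text and segments properties:

import { experimental_transcribe as transcribe } from 'ai';
import { groq } from '@ai-sdk/groq';
import { readFile } from 'fs/promises';

const result = await transcribe({
  model: groq.transcription('whisper-large-v3'),
  audio: await readFile('meeting.mp3'), // illustrative file name
  providerOptions: {
    groq: {
      language: 'en',
      // Word-level timestamps incur extra latency; segment-level timestamps do not
      timestampGranularities: ['word', 'segment'],
      prompt: 'Transcript of a product planning meeting.', // illustrative context prompt
      temperature: 0,
    },
  },
});

console.log(result.text);
console.log(result.segments);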

Model Capabilities

Model | Transcription | Duration | Segments | Language
whisper-large-v3
whisper-large-v3-turbo
distil-whisper-large-v3-en