Get started with Gemini 2.5
With the release of Gemini 2.5, there has never been a better time to start building AI applications, particularly those that require complex reasoning capabilities and advanced intelligence.
The AI SDK is a powerful TypeScript toolkit for building AI applications with large language models (LLMs) like Gemini 2.5 alongside popular frameworks like React, Next.js, Vue, Svelte, Node.js, and more.
Gemini 2.5
Gemini 2.5 is Google's most advanced model family to date, offering exceptional capabilities across reasoning, instruction following, coding, and knowledge tasks. The Gemini 2.5 model family consists of:
- Gemini 2.5 Pro: Best for coding and highly complex tasks
- Gemini 2.5 Flash: Fast performance on everyday tasks
- Gemini 2.5 Flash-Lite: Best for high-volume, cost-efficient tasks
Getting Started with the AI SDK
The AI SDK is the TypeScript toolkit designed to help developers build AI-powered applications with React, Next.js, Vue, Svelte, Node.js, and more. Integrating LLMs into applications is complicated and heavily dependent on the specific model provider you use.
The AI SDK abstracts away the differences between model providers, eliminates boilerplate code for building chatbots, and allows you to go beyond text output to generate rich, interactive components.
At the center of the AI SDK is AI SDK Core, which provides a unified API to call any LLM. The code snippet below is all you need to call Gemini 2.5 with the AI SDK:
```ts
import { google } from '@ai-sdk/google';
import { generateText } from 'ai';

const { text } = await generateText({
  model: google('gemini-2.5-flash'),
  prompt: 'Explain the concept of the Hilbert space.',
});
console.log(text);
```
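Because the model is just a parameter, the same call works across providers. As a minimal sketch (assuming the `@ai-sdk/openai` package is installed), switching providers is a one-line change:

```ts
// Minimal sketch of the unified API: only the model instance changes.
// Assumes the @ai-sdk/openai provider package is installed.
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

const { text } = await generateText({
  model: openai('gpt-4o'), // only this line differs from the Gemini call
  prompt: 'Explain the concept of the Hilbert space.',
});
console.log(text);
```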
Thinking Capability
The Gemini 2.5 series models use an internal "thinking process" that significantly improves their reasoning and multi-step planning abilities, making them highly effective for complex tasks such as coding, advanced mathematics, and data analysis.
You can control the amount of thinking using the `thinkingConfig` provider option and specifying a thinking budget in tokens. Additionally, you can request thinking summaries by setting `includeThoughts` to `true`.
```ts
import { google } from '@ai-sdk/google';
import { generateText } from 'ai';

const { text, reasoning } = await generateText({
  model: google('gemini-2.5-flash'),
  prompt: 'What is the sum of the first 10 prime numbers?',
  providerOptions: {
    google: {
      thinkingConfig: {
        thinkingBudget: 8192,
        includeThoughts: true,
      },
    },
  },
});

console.log(text); // text response
console.log(reasoning); // reasoning summary
```
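Conversely, for latency-sensitive tasks where deep reasoning is unnecessary, you can shrink the budget. As a minimal sketch, continuing from the snippet above (per Google's documented behavior, a budget of 0 disables thinking on Gemini 2.5 Flash, while 2.5 Pro cannot fully disable it):

```ts
// Minimal sketch: disable thinking for a quick, low-latency call.
// A thinkingBudget of 0 turns thinking off on Gemini 2.5 Flash.
const { text: quickAnswer } = await generateText({
  model: google('gemini-2.5-flash'),
  prompt: 'In one sentence, what is a prime number?',
  providerOptions: {
    google: {
      thinkingConfig: { thinkingBudget: 0 },
    },
  },
});
```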
Using Tools with the AI SDK
Gemini 2.5 supports tool calling, allowing it to interact with external systems and perform discrete tasks. Here's an example of using tool calling with the AI SDK:
```ts
import { z } from 'zod';
import { generateText, tool, stepCountIs } from 'ai';
import { google } from '@ai-sdk/google';

const result = await generateText({
  model: google('gemini-2.5-flash'),
  prompt: 'What is the weather in San Francisco?',
  tools: {
    weather: tool({
      description: 'Get the weather in a location',
      inputSchema: z.object({
        location: z.string().describe('The location to get the weather for'),
      }),
      execute: async ({ location }) => ({
        location,
        temperature: 72 + Math.floor(Math.random() * 21) - 10,
      }),
    }),
  },
  stopWhen: stepCountIs(5), // Optional, enables multi-step calling
});

console.log(result.text);
console.log(result.steps);
```
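The `steps` array records what happened at each step of the multi-step run. As a minimal sketch, continuing from the example above, you can pull the individual tool calls and results out of it:

```ts
// Minimal sketch: inspect what the model did at each step.
for (const step of result.steps) {
  for (const call of step.toolCalls) {
    console.log('tool call:', call.toolName, call.input);
  }
  for (const toolResult of step.toolResults) {
    console.log('tool result:', toolResult.output);
  }
}
```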
Using Google Search with Gemini
With search grounding, Gemini can access the latest information from Google Search. Here's an example of using Google Search with the AI SDK:
```ts
import { google } from '@ai-sdk/google';
import { GoogleGenerativeAIProviderMetadata } from '@ai-sdk/google';
import { generateText } from 'ai';

const { text, sources, providerMetadata } = await generateText({
  model: google('gemini-2.5-flash'),
  tools: {
    google_search: google.tools.googleSearch({}),
  },
  prompt:
    'List the top 5 San Francisco news from the past week. ' +
    'You must include the date of each article.',
});

// access the grounding metadata. Casting to the provider metadata type
// is optional but provides autocomplete and type safety.
const metadata = providerMetadata?.google as
  | GoogleGenerativeAIProviderMetadata
  | undefined;
const groundingMetadata = metadata?.groundingMetadata;
const safetyRatings = metadata?.safetyRatings;
```
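The returned `sources` array exposes the grounding sources alongside the generated text. As a minimal sketch, continuing from the example above (field availability can vary by response):

```ts
// Minimal sketch: list the grounding sources behind the answer.
// URL sources carry a url and, when available, a title.
for (const source of sources) {
  if (source.sourceType === 'url') {
    console.log(`${source.title ?? 'Untitled'} - ${source.url}`);
  }
}
```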
Building Interactive Interfaces
AI SDK Core can be paired with AI SDK UI, another powerful component of the AI SDK, to streamline the process of building chat, completion, and assistant interfaces with popular frameworks like Next.js, Nuxt, SvelteKit, and SolidStart.
AI SDK UI provides robust abstractions that simplify the complex tasks of managing chat streams and UI updates on the frontend, enabling you to develop dynamic AI-driven interfaces more efficiently.
With four main hooks (`useChat`, `useCompletion`, `useObject`, and `useAssistant`) you can incorporate real-time chat capabilities, text completions, streamed JSON, and interactive assistant features into your app.
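As a quick illustration of one of these hooks, here is a minimal `useCompletion` sketch (assuming a matching route handler exists at `app/api/completion/route.ts`, the hook's default endpoint):

```tsx
'use client';

import { useCompletion } from '@ai-sdk/react';

// Minimal sketch: streams a completion from the default
// /api/completion endpoint as the response is generated.
export default function Completion() {
  const { completion, input, handleInputChange, handleSubmit } =
    useCompletion();

  return (
    <form onSubmit={handleSubmit}>
      <input value={input} onChange={handleInputChange} />
      <div>{completion}</div>
    </form>
  );
}
```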
Let's explore building a chatbot with Next.js, the AI SDK, and Gemini 2.5 Flash:
In a new Next.js application, first install the AI SDK, its React hooks, and the Google Generative AI provider:

```bash
pnpm install ai @ai-sdk/react @ai-sdk/google
```
Then, create a route handler for the chat endpoint:
```ts
// app/api/chat/route.ts
import { google } from '@ai-sdk/google';
import { streamText, UIMessage, convertToModelMessages } from 'ai';

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: google('gemini-2.5-flash'),
    messages: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse();
}
```
Finally, update the root page (`app/page.tsx`) to use the `useChat` hook:
```tsx
'use client';

import { useChat } from '@ai-sdk/react';
import { useState } from 'react';

export default function Chat() {
  const [input, setInput] = useState('');
  const { messages, sendMessage } = useChat();
  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {messages.map(message => (
        <div key={message.id} className="whitespace-pre-wrap">
          {message.role === 'user' ? 'User: ' : 'Gemini: '}
          {message.parts.map((part, i) => {
            switch (part.type) {
              case 'text':
                return <div key={`${message.id}-${i}`}>{part.text}</div>;
            }
          })}
        </div>
      ))}

      <form
        onSubmit={e => {
          e.preventDefault();
          sendMessage({ text: input });
          setInput('');
        }}
      >
        <input
          className="fixed dark:bg-zinc-900 bottom-0 w-full max-w-md p-2 mb-8 border border-zinc-300 dark:border-zinc-800 rounded shadow-xl"
          value={input}
          placeholder="Say something..."
          onChange={e => setInput(e.currentTarget.value)}
        />
      </form>
    </div>
  );
}
```
The `useChat` hook on your root page (`app/page.tsx`) will make a request to your AI provider endpoint (`app/api/chat/route.ts`) whenever the user submits a message. The messages are then displayed in the chat UI.
Get Started
Ready to dive in? Here's how you can begin:
- Explore the documentation at ai-sdk.dev/docs to understand the capabilities of the AI SDK.
- Check out practical examples at ai-sdk.dev/examples to see the SDK in action.
- Dive deeper with advanced guides on topics like Retrieval-Augmented Generation (RAG) at ai-sdk.dev/docs/guides.
- Use ready-to-deploy AI templates at vercel.com/templates?type=ai.
- Read more about the Google Generative AI provider.