Vercel AI SDK
The @doubleword/vercel-ai package provides a Doubleword provider for the Vercel AI SDK, with automatic API key resolution and a pre-configured base URL.
Install
npm install @doubleword/vercel-ai ai
Chat / Text Generation
import { createDoubleword } from "@doubleword/vercel-ai";
import { generateText } from "ai";

const doubleword = createDoubleword({
  apiKey: "{{apiKey}}",
});

const result = await generateText({
  model: doubleword("{{selectedModel.id}}"),
  prompt: "Say hello.",
});

console.log(result.text);
Tool calling
The provider supports multi-step tool use via generateText. The model decides when to call tools, receives the results, and formulates a final answer:
import { createDoubleword } from "@doubleword/vercel-ai";
import { generateText, tool, jsonSchema, stepCountIs } from "ai";

const doubleword = createDoubleword({
  apiKey: "{{apiKey}}",
});

const result = await generateText({
  model: doubleword("{{selectedModel.id}}"),
  tools: {
    calculator: tool({
      description: "Evaluate a basic arithmetic expression",
      inputSchema: jsonSchema({
        type: "object",
        properties: {
          expression: { type: "string", description: "The expression to evaluate" },
        },
        required: ["expression"],
        additionalProperties: false,
      }),
      execute: async ({ expression }: { expression: string }) => {
        // Demo only: this evaluates arbitrary JavaScript, so never pass untrusted input.
        return String(new Function(`return (${expression})`)());
      },
    }),
  },
  stopWhen: stepCountIs(5),
  prompt: "What is 137 * 49?",
});

console.log(result.text);
stopWhen: stepCountIs(5) allows up to 5 model→tool→model round-trips before returning. Each step where the model calls a tool automatically feeds the result back for the next step.
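A stop condition like stepCountIs(5) is just a predicate over the steps taken so far. The sketch below illustrates the idea in isolation; StepLike is a simplified stand-in for the SDK's step result type, and stopAfterCalculatorUse is a hypothetical custom condition, not part of the package:

```typescript
// A stop condition receives the steps so far and returns true to stop.
// StepLike is a simplified stand-in for the SDK's step result type.
type StepLike = { toolCalls: { toolName: string }[] };

// Hypothetical condition: stop once the calculator tool has been used.
const stopAfterCalculatorUse = ({ steps }: { steps: StepLike[] }) =>
  steps.some((step) =>
    step.toolCalls.some((call) => call.toolName === "calculator"),
  );

// Simulated step history: the model called the calculator on step 2.
const steps: StepLike[] = [
  { toolCalls: [] },
  { toolCalls: [{ toolName: "calculator" }] },
];

console.log(stopAfterCalculatorUse({ steps })); // true
```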
Streaming
import { createDoubleword } from "@doubleword/vercel-ai";
import { streamText } from "ai";

const doubleword = createDoubleword({
  apiKey: "{{apiKey}}",
});

const stream = streamText({
  model: doubleword("{{selectedModel.id}}"),
  prompt: "Say hello.",
});

for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}
Embeddings
import { createDoubleword } from "@doubleword/vercel-ai";
import { embed } from "ai";

const doubleword = createDoubleword({
  apiKey: "{{apiKey}}",
});

const result = await embed({
  model: doubleword.embeddingModel("Qwen/Qwen3-Embedding-8B"),
  value: "Hello world",
});

console.log(result.embedding.length); // 4096
Default singleton
For convenience, a pre-configured singleton is also exported that reads DOUBLEWORD_API_KEY from the environment:
import { doubleword } from "@doubleword/vercel-ai";
import { generateText } from "ai";

const result = await generateText({
  model: doubleword("{{selectedModel.id}}"),
  prompt: "Say hello.",
});
Batch pricing
For background workloads where latency is not critical, use createDoublewordBatch to transparently route requests through Doubleword's Batch API, cutting inference costs by up to 90%. It is powered by autobatcher under the hood.
import { createDoublewordBatch } from "@doubleword/vercel-ai";
import { generateText } from "ai";

const doubleword = createDoublewordBatch({
  apiKey: "{{apiKey}}",
  batchWindowSeconds: 2.5,
});

const result = await generateText({
  model: doubleword("{{selectedModel.id}}"),
  prompt: "Summarize this document.",
});

console.log(result.text);
await doubleword.close();
Concurrent generateText calls are automatically collected into batch submissions. The interface is identical to the real-time provider; only streaming is not supported.
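The window-based collection can be pictured with a small toy batcher. This is a hypothetical sketch of the idea only, not the real autobatcher implementation: calls arriving within the same window are queued, then flushed as a single batch.

```typescript
// Toy batcher: calls made within the same window are collected and
// executed together. Hypothetical sketch, not the real autobatcher.
type Pending<T> = { input: T; resolve: (result: string) => void };

function makeBatcher<T>(windowMs: number, run: (inputs: T[]) => string[]) {
  let pending: Pending<T>[] = [];
  let timer: ReturnType<typeof setTimeout> | null = null;

  return (input: T): Promise<string> =>
    new Promise((resolve) => {
      pending.push({ input, resolve });
      if (!timer) {
        // First call in this window: schedule a single flush.
        timer = setTimeout(() => {
          const batch = pending;
          pending = [];
          timer = null;
          run(batch.map((p) => p.input)).forEach((result, i) =>
            batch[i].resolve(result),
          );
        }, windowMs);
      }
    });
}

// Two concurrent calls land in one batch of size 2.
const batchSizes: number[] = [];
const call = makeBatcher<string>(10, (inputs) => {
  batchSizes.push(inputs.length);
  return inputs.map((s) => s.toUpperCase());
});

const results = await Promise.all([call("a"), call("b")]);
console.log(results, batchSizes); // results: ["A", "B"], batch sizes: [2]
```

The real provider does the same collection transparently inside generateText, so caller code looks identical to the real-time version.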
Try it end-to-end
A full tool-calling example lives in the repo at examples/tool-calling/. It runs a calculator agent against concurrent arithmetic queries, demonstrating the multi-step agentic loop.