# Mastra
Mastra uses the Vercel AI SDK under the hood. Use the `@doubleword/vercel-ai` provider package to point it at the Doubleword API.
## Install

```shell
npm install mastra @mastra/core @doubleword/vercel-ai
```

## Configure
```typescript
import { Agent } from "@mastra/core/agent";
import { createDoubleword } from "@doubleword/vercel-ai";

const doubleword = createDoubleword({
  apiKey: "{{apiKey}}",
});

const agent = new Agent({
  name: "my-agent",
  instructions: "You are a helpful assistant.",
  model: doubleword("{{selectedModel.id}}"),
});

const result = await agent.generate("Say hello.");
console.log(result.text);
```

The `@doubleword/vercel-ai` package automatically configures the base URL and supports resolving credentials from the `DOUBLEWORD_API_KEY` environment variable.
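The environment-variable fallback described above follows a common provider pattern: an explicitly passed key wins, otherwise the environment is consulted. A minimal sketch of that resolution logic — the `resolveApiKey` helper below is illustrative, not part of the package:

```typescript
// Sketch of credential resolution: prefer an explicit apiKey, fall back to
// the DOUBLEWORD_API_KEY environment variable, and fail loudly if neither
// is present. Illustrative only; not the package's actual internals.
function resolveApiKey(explicit?: string): string {
  const key = explicit ?? process.env.DOUBLEWORD_API_KEY;
  if (!key) {
    throw new Error("Missing API key: pass apiKey or set DOUBLEWORD_API_KEY");
  }
  return key;
}

process.env.DOUBLEWORD_API_KEY = "dw-env-key";
console.log(resolveApiKey());              // → dw-env-key (from the environment)
console.log(resolveApiKey("dw-explicit")); // → dw-explicit (explicit value wins)
```

Failing fast on a missing key keeps misconfiguration errors close to startup rather than surfacing as authentication failures on the first request.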
## Batch pricing

For background workloads where latency is not critical, use `createDoublewordBatch` to transparently route requests through Doubleword's Batch API at reduced cost:

```shell
npm install mastra @mastra/core @doubleword/vercel-ai autobatcher
```
```typescript
import { Agent } from "@mastra/core/agent";
import { createDoublewordBatch } from "@doubleword/vercel-ai";

const doubleword = createDoublewordBatch({
  apiKey: "{{apiKey}}",
});

const agent = new Agent({
  name: "my-agent",
  instructions: "You are a helpful assistant.",
  model: doubleword("{{selectedModel.id}}"),
});

const result = await agent.generate("Summarize this document.");
console.log(result.text);

// Close the batch provider when done.
await doubleword.close();
```

Concurrent calls are automatically collected into batch submissions, cutting inference costs by up to 90%. The interface is identical to the real-time provider; only streaming is not supported.
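To make the batching behavior concrete, here is a minimal sketch of the micro-batching idea: concurrent `generate()` calls are buffered for a short window and sent as a single submission. The `Batcher` class and the fake `sendBatch` backend are illustrative, not the actual internals of `@doubleword/vercel-ai`:

```typescript
// Micro-batching sketch: calls arriving within the same short window are
// collected and flushed together as one backend submission.
class Batcher {
  private pending: { prompt: string; resolve: (v: string) => void }[] = [];
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private sendBatch: (prompts: string[]) => Promise<string[]>,
    private windowMs = 10,
  ) {}

  generate(prompt: string): Promise<string> {
    return new Promise((resolve) => {
      this.pending.push({ prompt, resolve });
      // First call in a window arms the flush timer; later calls piggyback.
      if (!this.timer) {
        this.timer = setTimeout(() => this.flush(), this.windowMs);
      }
    });
  }

  private async flush() {
    const batch = this.pending;
    this.pending = [];
    this.timer = null;
    // One submission for the whole window, fanned back out to each caller.
    const results = await this.sendBatch(batch.map((r) => r.prompt));
    batch.forEach((r, i) => r.resolve(results[i]));
  }
}

async function demo(): Promise<{ answers: string[]; submissions: number }> {
  let submissions = 0;
  const batcher = new Batcher(async (prompts) => {
    submissions += 1; // one round trip per flush, however many prompts
    return prompts.map((p) => `echo: ${p}`);
  });
  // Three concurrent calls land in the same window.
  const answers = await Promise.all([
    batcher.generate("one"),
    batcher.generate("two"),
    batcher.generate("three"),
  ]);
  return { answers, submissions };
}

demo().then(({ answers, submissions }) => {
  console.log(answers, submissions); // three results, a single submission
});
```

This is why `close()` matters in the example above: a batching provider may still be holding buffered requests, and shutting down without flushing them would leave callers waiting.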