Doubleword

OpenAI Agents SDK

The OpenAI Agents SDK can be pointed at any OpenAI-compatible endpoint by configuring a custom provider.

Install

pip install openai-agents

Configure

Create an OpenAIProvider that points at the Doubleword API and pass it via RunConfig:

import asyncio

from openai import AsyncOpenAI
from agents import Agent, Runner, RunConfig
from agents.models.openai_provider import OpenAIProvider

# Point the SDK at the Doubleword endpoint via a custom provider.
provider = OpenAIProvider(
    openai_client=AsyncOpenAI(
        base_url="https://api.doubleword.ai/v1",
        api_key="{{apiKey}}",
    ),
)

agent = Agent(
    name="my-agent",
    model="{{selectedModel.id}}",
    instructions="You are a helpful assistant.",
)

# Pass the provider per-run via RunConfig so the agent's model name
# is resolved against the Doubleword endpoint.
result = asyncio.run(
    Runner.run(
        agent,
        "Say hello.",
        run_config=RunConfig(model_provider=provider),
    )
)
print(result.final_output)

The Doubleword API supports both the Responses API and the Chat Completions API, so the SDK works with its default settings.
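If you want to pin the SDK to one of the two APIs explicitly, the provider's use_responses flag selects between them. A minimal sketch (same endpoint and key placeholders as above):

```python
from openai import AsyncOpenAI
from agents.models.openai_provider import OpenAIProvider

# use_responses=False forces the Chat Completions API;
# omit it (or pass True) to use the Responses API instead.
provider = OpenAIProvider(
    openai_client=AsyncOpenAI(
        base_url="https://api.doubleword.ai/v1",
        api_key="{{apiKey}}",
    ),
    use_responses=False,
)
```

Leaving the flag unset keeps the SDK's default behavior, which works against Doubleword either way.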

Batch pricing with Autobatcher

For background tasks where latency is not critical, use Autobatcher to transparently route requests through the Batch API at reduced cost:

pip install openai-agents autobatcher

import asyncio

from autobatcher import BatchOpenAI
from agents import Agent, Runner, RunConfig
from agents.models.openai_provider import OpenAIProvider

# BatchOpenAI is a drop-in replacement for AsyncOpenAI that routes
# requests through the Batch API behind the scenes.
client = BatchOpenAI(
    api_key="{{apiKey}}",
    base_url="https://api.doubleword.ai/v1",
)

provider = OpenAIProvider(
    openai_client=client,
    # Use the Chat Completions API rather than the Responses API.
    use_responses=False,
)

agent = Agent(
    name="my-agent",
    model="{{selectedModel.id}}",
    instructions="You are a helpful assistant.",
)

result = asyncio.run(
    Runner.run(
        agent,
        "Say hello.",
        run_config=RunConfig(model_provider=provider),
    )
)
print(result.final_output)

BatchOpenAI is a drop-in AsyncOpenAI subclass that collects requests and submits them as batch jobs automatically, cutting inference costs by up to 90%.

Why OpenAIProvider instead of set_default_openai_client

The Agents SDK also offers set_default_openai_client as a simpler global configuration. However, that path fails for model names containing a / whose prefix the SDK does not recognize as a known provider — for example Qwen/Qwen3-30B or meta-llama/Llama-3.1-8B. The OpenAIProvider approach bypasses the SDK's built-in provider routing and sends the model name directly to your endpoint.
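For reference, the global alternative looks like the sketch below. This is illustrative only: set_default_openai_client is the SDK's global configuration hook, but prefer the OpenAIProvider approach above if your model names contain a slash.

```python
from openai import AsyncOpenAI
from agents import set_default_openai_client

# Global client configuration: simpler, but model names such as
# "Qwen/Qwen3-30B" can be mis-routed by the SDK's built-in
# provider resolution before they reach your endpoint.
set_default_openai_client(
    AsyncOpenAI(
        base_url="https://api.doubleword.ai/v1",
        api_key="{{apiKey}}",
    )
)
```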