Getting Started
By the end of this tutorial, you'll have the Control Layer running locally and will have sent your first request to an LLM through it.
Prerequisites
- Docker and Docker Compose installed
- An API key from a model provider. If you don't have one:
  - Doubleword: app.doubleword.ai/api-keys
  - OpenAI: platform.openai.com/api-keys
  - Anthropic: console.anthropic.com/settings/keys
  - Or any OpenAI-compatible endpoint (Together, Groq, local vLLM, etc.)
Step 1: Start the Control Layer
Download the Docker Compose file and start the stack:
```
wget https://raw.githubusercontent.com/doublewordai/control-layer/refs/heads/main/docker-compose.yml
docker compose up -d
```

Wait about 30 seconds for the services to initialize.[^1]
Open http://localhost:3001 in your browser. You should see the login page.
Step 2: Log in to the dashboard
Sign in with the default admin credentials:
| Field | Value |
|---|---|
| Email | test@doubleword.ai |
| Password | hunter2 |
You should see the Control Layer dashboard, with Models, Endpoints, Playground, and other items in the sidebar.
Step 3: Add an endpoint
Click Endpoints in the sidebar, then click Add Endpoint.
In the dialog:
- Use the dropdown in the Base URL field to select a popular endpoint (OpenAI, Anthropic, Google), or enter a custom URL
- Paste your API key in the API Key field
- Click Discover Models
The Control Layer connects to your provider and fetches available models.
- Select the models you want to enable, then click Save
Go to Models in the sidebar. You should see your provider's models listed.
Step 4: Grant access to a group
Models must be added to a group before users can access them.
On any model card, click + Add groups in the top right corner. Select Everyone (the default group that includes all users), then click Done.
The model card now shows the group badge.
Step 5: Test in the Playground
On a model card, click Playground.
Type a message and press Enter.
You receive a response from the model.
Step 6: Send a request via the API
Create an API key
- Click API Keys in the sidebar
- Click Create API Key
- Enter a name (e.g., "test-key") and click Create Key
- Copy the key now - you won't see it again
Make a request
Using curl (replace YOUR_API_KEY with the key you copied):
```
curl http://localhost:3001/ai/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

Or using Python:
```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:3001/ai/v1",
    api_key="YOUR_API_KEY"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)
```

You receive a JSON response (curl) or printed output (Python) with the model's reply.
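Because the Control Layer exposes the standard OpenAI chat-completions wire format, any HTTP client works, not just the official SDK. As a rough sketch (the base URL and key below are placeholders for your own deployment), the same request can be built with only the Python standard library:

```python
import json
import urllib.request

# Placeholders: substitute your deployment's URL and the key from Step 6.
BASE_URL = "http://localhost:3001/ai/v1"
API_KEY = "YOUR_API_KEY"

# The same request body the curl example sends.
payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}],
}

request = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# urllib.request.urlopen(request) would send it; the response body is the
# provider's JSON, with the reply under choices[0].message.content.
print(request.full_url)
```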
Done!
You have a working Control Layer instance routing requests to your AI provider.
Next steps:
- Add more endpoints to access additional providers
- Set up users and groups to manage team access
- Configure for production
The default configuration is for local development only. Before exposing the Control Layer:
- Change the admin password - `hunter2` is not secure
- Set a secret key - generate with `openssl rand -base64 32` and set via the `SECRET_KEY` environment variable
- Use a production database - set `DATABASE_URL` to a real PostgreSQL instance
- Configure CORS - update `auth.security.cors.allowed_origins` for your domain
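One way to supply the environment variables above is a Compose override file. This is only a sketch: the service name `control-layer` and the PostgreSQL URL are assumptions, so check your downloaded docker-compose.yml and the Configuration Reference for the exact names.

```yaml
# docker-compose.override.yml (sketch - service name is assumed)
services:
  control-layer:
    environment:
      # Generate with: openssl rand -base64 32
      SECRET_KEY: ${SECRET_KEY}
      # Point at your production PostgreSQL instance.
      DATABASE_URL: postgres://user:password@db-host:5432/control_layer
```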
See Configuration Reference for details.
Footnotes

[^1]: On first run, Docker downloads the required images, which may take several minutes. Run `docker compose logs -f` to watch startup progress.