Stratus is a drop-in replacement for the OpenAI API. Change the baseURL, swap the model name, and your existing code gains prediction, planning, and semantic state understanding on every request.
// Before
baseURL: 'https://api.openai.com/v1'
// After — everything else stays the same
baseURL: 'https://api.stratus.run/v1'
That’s it. Pick your framework below.
Framework Integrations
OpenClaw Native plugin for OpenClaw’s SIGBART agent runtime. Drops the full model catalog, embeddings, and rollout tools directly into your agent.
OpenAI SDK The most common path. Works in TypeScript, Python, and anywhere the OpenAI SDK runs.
Anthropic SDK Use the Anthropic SDK with Stratus for Claude-backed X1 models via the /v1/messages endpoint.
LangChain Plug Stratus into LangChain agents, chains, and memory — no wrappers needed.
Vercel AI SDK Streaming UI in Next.js App Router with server actions and route handlers.
cURL Raw HTTP — for quick tests, CI pipelines, and shell scripts.
Migration Guide
Already on OpenAI or Anthropic? The switch takes seconds.
From OpenAI Direct
// Before
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
});

// After
const client = new OpenAI({
  baseURL: 'https://api.stratus.run/v1',
  apiKey: process.env.STRATUS_API_KEY
});
Model name mapping:
OpenAI        Stratus
gpt-4o        stratus-x1ac-base-gpt-4o
gpt-4o-mini   stratus-x1ac-small-gpt-4o-mini
OpenRouter models are also available directly via slash notation — stratus-x1ac-base-deepseek/deepseek-r1, stratus-x1ac-base-meta-llama/llama-3.3-70b-instruct, stratus-x1ac-base-google/gemini-2.5-pro, and more. See Models for the full list.
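Since the slash-notation IDs are just the OpenRouter slug behind a fixed prefix, you can build them programmatically. A sketch — `stratusModel` is a hypothetical helper, not part of any SDK; the prefixes come from the mappings above:

```typescript
// Prefix an OpenRouter model slug (or an upstream model name) with the
// Stratus X1 wrapper. Tiers shown in the mapping table: 'base' and 'small'.
function stratusModel(slug: string, tier: 'base' | 'small' = 'base'): string {
  return `stratus-x1ac-${tier}-${slug}`;
}

// stratusModel('deepseek/deepseek-r1') → 'stratus-x1ac-base-deepseek/deepseek-r1'
// stratusModel('gpt-4o-mini', 'small') → 'stratus-x1ac-small-gpt-4o-mini'
```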
From Anthropic Direct
// Before
const client = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY
});

// After
const client = new Anthropic({
  baseURL: 'https://api.stratus.run',
  apiKey: process.env.STRATUS_API_KEY
});
Model name mapping:
Anthropic                    Stratus
claude-sonnet-4-5-20250514   stratus-x1ac-base-claude-sonnet-4-5
OpenAI SDK
The most common integration path. Works in TypeScript, Node.js, Python, and anywhere the OpenAI SDK runs.
Installation
npm install openai dotenv
TypeScript / Node.js
import OpenAI from 'openai';
import 'dotenv/config';

const client = new OpenAI({
  baseURL: 'https://api.stratus.run/v1',
  apiKey: process.env.STRATUS_API_KEY
});

const response = await client.chat.completions.create({
  model: 'stratus-x1ac-base-gpt-4o',
  messages: [
    {
      role: 'system',
      content: 'Current state: User on homepage, search box visible.'
    },
    {
      role: 'user',
      content: 'Search for "best laptops 2024"'
    }
  ]
});

console.log(response.choices[0].message.content);
console.log(response.stratus.overall_confidence); // X1 confidence score
console.log(response.stratus.planning_time_ms);   // planning overhead
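The `stratus` metadata is useful beyond logging — for example, gating autonomous execution on the confidence score. A sketch: the `overall_confidence` field comes from the response above, but the 0.8 threshold and the gating policy are illustrative choices, not SDK behavior:

```typescript
// Decide whether an X1-planned action is confident enough to run unattended.
// shouldAutoExecute is a hypothetical helper; 0.8 is an illustrative default.
function shouldAutoExecute(
  stratus: { overall_confidence: number },
  threshold = 0.8
): boolean {
  return stratus.overall_confidence >= threshold;
}

// if (shouldAutoExecute(response.stratus)) { /* execute */ } else { /* ask the user */ }
```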
Streaming
const stream = await client.chat.completions.create({
  model: 'stratus-x1ac-base-gpt-4o',
  messages: [{ role: 'user', content: 'Navigate to checkout' }],
  stream: true
});

for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content;
  if (content) process.stdout.write(content);
}
Error Handling
try {
  const response = await client.chat.completions.create({
    model: 'stratus-x1ac-base-gpt-4o',
    messages: [/* ... */]
  });
} catch (error) {
  if (error.status === 401) console.error('Invalid API key');
  else if (error.status === 402) console.error('Insufficient credits');
  else if (error.status === 429) console.error('Rate limit exceeded');
  else console.error('API error:', error.message);
}
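For the 429 case, retrying with exponential backoff usually beats failing the whole task. A minimal sketch — `withRetry` and `backoffDelay` are hypothetical helpers, not part of the OpenAI SDK; the delays are illustrative:

```typescript
// Delay for a given 0-indexed attempt: 500ms, 1s, 2s, ...
function backoffDelay(attempt: number, baseMs = 500): number {
  return baseMs * 2 ** attempt;
}

// Retry only rate-limit (429) errors; rethrow auth/credit errors immediately.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error: any) {
      lastError = error;
      if (error?.status !== 429) throw error;
      await new Promise(resolve => setTimeout(resolve, backoffDelay(attempt, baseMs)));
    }
  }
  throw lastError;
}

// Usage: await withRetry(() => client.chat.completions.create({ /* ... */ }))
```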
Anthropic SDK
Use the Anthropic SDK to hit Stratus via the /v1/messages endpoint — useful when your codebase is already Anthropic-shaped.
Installation
npm install @anthropic-ai/sdk dotenv
Stratus supports both /v1/chat/completions (OpenAI format) and /v1/messages (Anthropic format). Use whichever SDK fits your codebase.
TypeScript / Node.js
import Anthropic from '@anthropic-ai/sdk';
import 'dotenv/config';

const client = new Anthropic({
  baseURL: 'https://api.stratus.run',
  apiKey: process.env.STRATUS_API_KEY
});

const response = await client.messages.create({
  model: 'stratus-x1ac-base-claude-sonnet-4-5',
  max_tokens: 1024,
  messages: [
    {
      role: 'user',
      content: 'Current state: Amazon product page. Goal: Add to cart and proceed to checkout.'
    }
  ]
});

console.log(response.content[0].text);
Streaming
const stream = await client.messages.create({
  model: 'stratus-x1ac-base-claude-sonnet-4-5',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Navigate to settings' }],
  stream: true
});

for await (const event of stream) {
  if (event.type === 'content_block_delta') {
    process.stdout.write(event.delta.text);
  }
}
LangChain
Plug Stratus into LangChain agents and chains without any custom wrappers.
Installation
npm install @langchain/openai dotenv
Basic Usage
import { ChatOpenAI } from '@langchain/openai';
import 'dotenv/config';

const model = new ChatOpenAI({
  openAIApiKey: process.env.STRATUS_API_KEY,
  configuration: { baseURL: 'https://api.stratus.run/v1' },
  modelName: 'stratus-x1ac-base-gpt-4o'
});

const response = await model.invoke([
  { role: 'system', content: 'Current state: Google search results for "restaurants near me"' },
  { role: 'user', content: 'Click on the first result' }
]);
console.log(response.content);
With Agents
import { ChatOpenAI } from '@langchain/openai';
import { AgentExecutor, createOpenAIFunctionsAgent } from 'langchain/agents';
import { pull } from 'langchain/hub';

const model = new ChatOpenAI({
  openAIApiKey: process.env.STRATUS_API_KEY,
  configuration: { baseURL: 'https://api.stratus.run/v1' },
  modelName: 'stratus-x1ac-base-gpt-4o'
});

const prompt = await pull('hwchase17/openai-functions-agent');
// yourTools: your own array of LangChain tools
const agent = await createOpenAIFunctionsAgent({ llm: model, tools: yourTools, prompt });
const executor = new AgentExecutor({ agent, tools: yourTools });

const result = await executor.invoke({
  input: 'Navigate to the checkout page and fill in shipping details'
});
console.log(result.output);
With Chains
import { PromptTemplate } from '@langchain/core/prompts';
import { RunnableSequence } from '@langchain/core/runnables';

const chain = RunnableSequence.from([
  PromptTemplate.fromTemplate('Current state: {state}\nGoal: {goal}\nWhat should I do next?'),
  model
]);

const result = await chain.invoke({
  state: 'Amazon homepage',
  goal: 'Find and purchase wireless headphones under $100'
});
console.log(result.content);
Vercel AI SDK
Streaming UI in Next.js with server actions and route handlers.
Installation
npm install ai @ai-sdk/openai dotenv
Server Action + Streaming UI
// app/actions.ts
'use server';
import { createOpenAI } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { createStreamableValue } from 'ai/rsc';

// Configure a provider instance — per-model settings don't take baseURL/apiKey.
const stratus = createOpenAI({
  baseURL: 'https://api.stratus.run/v1',
  apiKey: process.env.STRATUS_API_KEY
});
const model = stratus('stratus-x1ac-base-gpt-4o');

export async function navigateAction(state: string, goal: string) {
  const stream = createStreamableValue('');
  (async () => {
    const { textStream } = await streamText({
      model,
      messages: [
        { role: 'system', content: `Current state: ${state}` },
        { role: 'user', content: goal }
      ]
    });
    for await (const delta of textStream) stream.update(delta);
    stream.done();
  })();
  return { output: stream.value };
}
// app/page.tsx
'use client';
import { useState } from 'react';
import { readStreamableValue } from 'ai/rsc';
import { navigateAction } from './actions';

export default function Page() {
  const [response, setResponse] = useState('');
  const handleNavigate = async () => {
    const { output } = await navigateAction('Amazon homepage', 'Search for wireless headphones');
    for await (const delta of readStreamableValue(output)) {
      setResponse(prev => prev + delta);
    }
  };
  return (
    <div>
      <button onClick={handleNavigate}>Navigate</button>
      <div>{response}</div>
    </div>
  );
}
API Route Handler
// app/api/navigate/route.ts
import { createOpenAI } from '@ai-sdk/openai';
import { streamText } from 'ai';

// Configure a provider instance — per-model settings don't take baseURL/apiKey.
const stratus = createOpenAI({
  baseURL: 'https://api.stratus.run/v1',
  apiKey: process.env.STRATUS_API_KEY
});
const model = stratus('stratus-x1ac-base-gpt-4o');

export async function POST(req: Request) {
  const { state, goal } = await req.json();
  const result = await streamText({
    model,
    messages: [
      { role: 'system', content: `Current state: ${state}` },
      { role: 'user', content: goal }
    ]
  });
  return result.toDataStreamResponse();
}
cURL
For quick tests, CI pipelines, and shell scripts.
Basic Request
curl https://api.stratus.run/v1/chat/completions \
  -H "Authorization: Bearer $STRATUS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "stratus-x1ac-base-gpt-4o",
    "messages": [
      {"role": "system", "content": "Current state: Google homepage. Search box visible."},
      {"role": "user", "content": "Search for best laptops 2024"}
    ]
  }'
Streaming
curl https://api.stratus.run/v1/chat/completions \
  -H "Authorization: Bearer $STRATUS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"stratus-x1ac-base-gpt-4o","messages":[{"role":"user","content":"Navigate to checkout"}],"stream":true}'

# Just the response text
curl -s https://api.stratus.run/v1/chat/completions \
  -H "Authorization: Bearer $STRATUS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"stratus-x1ac-base-gpt-4o","messages":[{"role":"user","content":"Test"}]}' \
  | jq -r '.choices[0].message.content'

# X1 planning metadata
... | jq '.stratus'
Other Languages
Python
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.stratus.run/v1",
    api_key=os.environ["STRATUS_API_KEY"]
)

response = client.chat.completions.create(
    model="stratus-x1ac-base-gpt-4o",
    messages=[
        {"role": "system", "content": "Current state: E-commerce homepage"},
        {"role": "user", "content": "Search for wireless headphones"}
    ]
)

print(response.choices[0].message.content)
print(response.stratus)  # X1 metadata
Go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	body, _ := json.Marshal(map[string]any{
		"model": "stratus-x1ac-base-gpt-4o",
		"messages": []map[string]string{
			{"role": "system", "content": "Current state: Homepage"},
			{"role": "user", "content": "Navigate to products"},
		},
	})
	req, _ := http.NewRequest("POST", "https://api.stratus.run/v1/chat/completions", bytes.NewBuffer(body))
	req.Header.Set("Authorization", "Bearer "+os.Getenv("STRATUS_API_KEY"))
	req.Header.Set("Content-Type", "application/json")
	resp, _ := http.DefaultClient.Do(req)
	defer resp.Body.Close()
	result, _ := io.ReadAll(resp.Body)
	fmt.Println(string(result))
}
Rust
use reqwest::Client;
use serde_json::json;
use std::env;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = env::var("STRATUS_API_KEY")?;
    let response = Client::new()
        .post("https://api.stratus.run/v1/chat/completions")
        .header("Authorization", format!("Bearer {}", api_key))
        .json(&json!({
            "model": "stratus-x1ac-base-gpt-4o",
            "messages": [
                {"role": "system", "content": "Current state: Homepage"},
                {"role": "user", "content": "Navigate to products"}
            ]
        }))
        .send()
        .await?;
    println!("{}", response.text().await?);
    Ok(())
}
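If no SDK fits your stack, plain `fetch` works against the OpenAI-format endpoint from any modern runtime. A sketch in TypeScript — the request body mirrors the cURL examples above; `buildChatRequest` and `chat` are illustrative helpers, not library functions:

```typescript
// Build the /v1/chat/completions payload; keeping it separate makes it easy to test.
function buildChatRequest(state: string, goal: string) {
  return {
    model: 'stratus-x1ac-base-gpt-4o',
    messages: [
      { role: 'system', content: `Current state: ${state}` },
      { role: 'user', content: goal }
    ]
  };
}

// Send the request and return the first choice's text.
async function chat(state: string, goal: string): Promise<string> {
  const res = await fetch('https://api.stratus.run/v1/chat/completions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.STRATUS_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify(buildChatRequest(state, goal))
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```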
Troubleshooting
“Invalid API Key” — Confirm the key starts with stratus_sk_live_ and the env var is actually loaded (echo $STRATUS_API_KEY).
“Model Not Found” — Use the Stratus model name, not the upstream provider name. stratus-x1ac-base-gpt-4o ✅ · gpt-4o ❌
SSL errors — Always https://api.stratus.run/v1, never http://.
402 Insufficient Credits — Top up from the dashboard.
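The first check is easy to automate at startup. A sketch — `checkStratusKey` is a hypothetical helper; the `stratus_sk_live_` prefix comes from the note above:

```typescript
// Fail fast with a specific message instead of a generic 401 at request time.
function checkStratusKey(key: string | undefined): string {
  if (!key) {
    throw new Error('STRATUS_API_KEY is not set — is dotenv loaded?');
  }
  if (!key.startsWith('stratus_sk_live_')) {
    throw new Error('STRATUS_API_KEY does not start with stratus_sk_live_ — wrong key?');
  }
  return key;
}

// const client = new OpenAI({
//   baseURL: 'https://api.stratus.run/v1',
//   apiKey: checkStratusKey(process.env.STRATUS_API_KEY)
// });
```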
Next Steps
API Reference Full endpoint docs, parameters, and response shapes.
Models Every available model across all size tiers.
Authentication API keys, BYOK, and key resolution priority.