@formthefog/stratus-sdk-ts is the native TypeScript client for Stratus X1 — it exposes every endpoint, handles retries with exponential backoff, and ships compression utilities and vector DB adapters alongside the core API surface.

Installation

npm install @formthefog/stratus-sdk-ts
Import the client:
import { StratusClient } from '@formthefog/stratus-sdk-ts'

Quick Start

A minimal working example from zero to first response:
import { StratusClient } from '@formthefog/stratus-sdk-ts'

const client = new StratusClient({
  apiKey: process.env.STRATUS_API_KEY
})

const response = await client.chat.completions.create({
  model: 'stratus-x1ac-base-gpt-4o',
  messages: [
    { role: 'system', content: 'Current page: checkout form, 3 items in cart' },
    { role: 'user', content: 'Proceed to payment' }
  ]
})

console.log(response.choices[0].message.content)
// → "Click the Proceed to Payment button"

console.log(response.stratus?.action_sequence)
// → ['click', 'wait', 'verify']

console.log(response.stratus?.overall_confidence)
// → 0.94
STRATUS_API_KEY is the only required credential. Formation’s shared pool handles LLM calls automatically on day one — no provider key needed to get started. Add your own key at any time to remove the 25% pool markup. See Authentication for the full key resolution flow.

Constructor options

const client = new StratusClient({
  apiKey: string,                                          // required
  apiUrl?: string,                                         // default: 'https://api.stratus.run'
  timeout?: number,                                        // ms, default: 30000
  retries?: number,                                        // default: 3
  compressionProfile?: 'Low' | 'Medium' | 'High' | 'VeryHigh'  // default: 'Medium'
})

Chat Completions

client.chat.completions.create(request)

Sends a POST to /v1/chat/completions and returns a Promise<ChatCompletionResponse>. Retries up to config.retries (default 3) on transient failures with exponential backoff — 1s, 2s, 4s. Responses with status 400, 401, or 422 are never retried.
const response = await client.chat.completions.create({
  model: 'stratus-x1ac-base-gpt-4o',
  messages: [
    { role: 'system', content: 'Current state: product listing page, 24 results visible' },
    { role: 'user', content: 'Add the first result to cart' }
  ],
  max_tokens: 512,
  temperature: 0.7
})

console.log(response.choices[0].message.content)
console.log(response.stratus?.planning_time_ms)  // time spent in world model
console.log(response.stratus?.execution_time_ms) // time spent in LLM

Hybrid orchestration extensions

The stratus field on the request activates advanced planning modes:
const response = await client.chat.completions.create({
  model: 'stratus-x1ac-base-gpt-4o',
  messages: [...],
  stratus: {
    mode: 'hybrid',               // 'plan' | 'validate' | 'rank' | 'hybrid'
    validation_threshold: 0.85,   // minimum confidence to accept a plan
    max_validation_retries: 2,    // retry planning if below threshold
    num_candidates: 3,            // candidate sequences to evaluate
    return_action_sequence: true  // include planned actions in response
  }
})
The stratus.mode field controls how the X1 world model engages. hybrid combines planning and validation in a single pass — it plans a full action sequence, verifies the predicted outcome against your threshold, and re-plans automatically if confidence falls short.
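The same acceptance gate can be mirrored client-side when deciding whether to act on a returned plan. A minimal sketch, using a local type that mirrors the overall_confidence field on response.stratus — the 0.85 threshold is an illustrative value, not an SDK constant:

```typescript
// Sketch: mirror the server-side validation gate on the client.
// The type mirrors the relevant fields of response.stratus.
interface StratusPlanMeta {
  overall_confidence?: number
  action_sequence?: string[]
}

function shouldExecute(meta: StratusPlanMeta | undefined, threshold = 0.85): boolean {
  // Treat missing metadata as zero confidence: never act blindly.
  return (meta?.overall_confidence ?? 0) >= threshold
}

// Usage with a real response:
// if (shouldExecute(response.stratus, 0.85)) {
//   runActions(response.stratus!.action_sequence ?? [])
// }
```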

Inline LLM keys

Pass provider keys per-request rather than storing them in vault. Useful for CI, multi-tenant setups, or quick testing:
const response = await client.chat.completions.create({
  model: 'stratus-x1ac-base-gpt-4o',
  messages: [...],
  openai_key: process.env.OPENAI_API_KEY  // bypasses Formation pool immediately
})

// response.stratus.key_source === 'user'
// response.stratus.formation_markup_applied === null
Supported fields: openai_key, anthropic_key, gemini_key, openrouter_key.

client.chat.completions.stream(request)

Forces stream: true and returns AsyncGenerator<ChatCompletionChunk>. Planning metadata appears on the first chunk’s stratus field.
const stream = client.chat.completions.stream({
  model: 'stratus-x1ac-base-gpt-4o',
  messages: [
    { role: 'system', content: 'Current state: deployment pipeline, all tests passing' },
    { role: 'user', content: 'Describe the rollout steps' }
  ]
})

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '')
}
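Because planning metadata rides on the first chunk, you can capture it while forwarding deltas. A runnable sketch with a locally mocked stream — the chunk shape mirrors ChatCompletionChunk, and the mock generator stands in for the real client.chat.completions.stream(...) call:

```typescript
// Sketch: capture first-chunk stratus metadata while consuming a stream.
interface Chunk {
  stratus?: { action_sequence?: string[] }
  choices: Array<{ delta?: { content?: string } }>
}

async function consume(stream: AsyncGenerator<Chunk>) {
  let meta: Chunk['stratus']
  let text = ''
  for await (const chunk of stream) {
    meta ??= chunk.stratus // only the first chunk carries planning metadata
    text += chunk.choices[0]?.delta?.content ?? ''
  }
  return { meta, text }
}

// Mock stream for demonstration; replace with the real SDK call.
async function* mockStream(): AsyncGenerator<Chunk> {
  yield { stratus: { action_sequence: ['click', 'verify'] }, choices: [{ delta: { content: 'Step ' } }] }
  yield { choices: [{ delta: { content: '1' } }] }
}

const { meta, text } = await consume(mockStream())
console.log(text, meta?.action_sequence)
```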
The full request body accepted by both create() and stream():
{
  model: string
  messages: Message[]
  max_tokens?: number
  max_completion_tokens?: number       // newer OpenAI field — either accepted
  temperature?: number
  top_p?: number
  n?: number
  stream?: boolean                     // forced false by create(), forced true by stream()
  stop?: string[]
  presence_penalty?: number
  frequency_penalty?: number
  user?: string
  tools?: ToolDefinition[]
  tool_choice?: string | ToolChoiceObject
  stream_options?: { include_usage?: boolean }

  stratus?: {
    mode?: 'plan' | 'validate' | 'rank' | 'hybrid'
    validation_threshold?: number
    max_validation_retries?: number
    num_candidates?: number
    return_action_sequence?: boolean
  }

  openai_key?: string
  anthropic_key?: string
  gemini_key?: string
  openrouter_key?: string
}
Responses carry a stratus field with planning metadata (StratusMetadata):
{
  stratus_model: string                    // e.g., 'x1ac-base'
  execution_llm: string                    // e.g., 'gpt-4o'
  action_sequence?: string[]               // planned action chain
  predicted_state_changes?: number[]
  confidence_labels?: string[]             // 'High' | 'Medium' | 'Low' per step
  overall_confidence?: number              // 0–1
  steps_to_goal?: number
  planning_time_ms?: number
  execution_time_ms?: number
  total_steps_executed?: number
  execution_trace?: Array<{
    step: number
    action: string
    response_summary: string
  }>
  brain_signal?: {
    action_type: string
    confidence: number
    plan_ahead: string[]
    simulation_confirmed: boolean
    goal_proximity: number
  }
  key_source?: 'user' | 'formation'
  formation_markup_applied?: number        // 0.25 when pool used; null for BYOK
}
brain_signal surfaces the X1 brain’s internal read on the current state — it tells you which action type the planner committed to, whether a simulation confirmed that choice, and how far the predicted outcome sits from the goal. Use brain_signal.goal_proximity as a normalized progress indicator across a multi-step task.
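One way to surface that progress indicator, sketched with a local type whose field names come from the shape above. It assumes goal_proximity of 1 means "at the goal", which is how a normalized progress indicator is usually oriented — verify against real responses:

```typescript
// Sketch: treat goal_proximity as a normalized progress bar for a multi-step task.
interface BrainSignal {
  action_type: string
  confidence: number
  simulation_confirmed: boolean
  goal_proximity: number // assumed: 0 = far from goal, 1 = at goal
}

function progressReport(signal: BrainSignal): string {
  const pct = Math.round(signal.goal_proximity * 100)
  const confirmed = signal.simulation_confirmed ? 'confirmed' : 'unconfirmed'
  return `${signal.action_type} (${confirmed}): ${pct}% of the way to goal`
}

console.log(progressReport({
  action_type: 'click', confidence: 0.91, simulation_confirmed: true, goal_proximity: 0.75
}))
// → "click (confirmed): 75% of the way to goal"
```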

Messages (Anthropic Format)

client.messages(request) sends a POST to /v1/messages — the Anthropic-native endpoint. Use this if your codebase already speaks the Anthropic SDK format. The request takes an AnthropicRequest and the response is a standard AnthropicResponse extended with stratus?: StratusMetadata.
import { StratusClient } from '@formthefog/stratus-sdk-ts'

const client = new StratusClient({ apiKey: process.env.STRATUS_API_KEY })

const message = await client.messages({
  model: 'stratus-x1ac-base-gpt-4o',
  max_tokens: 1024,
  system: 'Current state: user is on the billing settings page, upgrade button visible',
  messages: [
    { role: 'user', content: 'What will happen if I click Upgrade?' }
  ]
})

console.log(message.content[0].text)
max_tokens is required in the Anthropic format — unlike the OpenAI endpoint where it is optional. The system parameter is the natural place to describe environment state for Stratus world model planning.
Alternatively, point the official Anthropic SDK at Stratus directly — no StratusClient required:
import Anthropic from '@anthropic-ai/sdk'

const client = new Anthropic({
  baseURL: 'https://api.stratus.run',
  apiKey: process.env.STRATUS_API_KEY
})

const message = await client.messages.create({
  model: 'stratus-x1ac-base-gpt-4o',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Plan the next steps.' }]
})

Embeddings

client.embeddings(request) sends a POST to /v1/embeddings. For embeddings, use the model name without an LLM suffix — stratus-x1ac-base (not stratus-x1ac-base-gpt-4o). The encoder produces 768-dimensional vectors for the base model.
const response = await client.embeddings({
  model: 'stratus-x1ac-base',
  input: 'Amazon product page, Add to Cart button visible, price $49.99'
})

const vector = response.data[0].embedding
// 768-dimensional float array
console.log(vector.length) // 768
Batch multiple texts in a single call:
const response = await client.embeddings({
  model: 'stratus-x1ac-base',
  input: [
    'Homepage with search box focused',
    'Search results for "NYC hotels", 24 listings',
    'Hotel detail page, booking form visible',
    'Confirmation page, booking reference #A4221'
  ]
})

response.data.forEach((item, i) => {
  console.log(`State ${i}:`, item.embedding.slice(0, 4))
})
Use encoding_format: 'base64' for high-throughput pipelines — the payload is ~25% smaller and decodes cleanly with Buffer.from(str, 'base64').
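Decoding the base64 payload back into floats is straightforward in Node. A sketch that assumes the payload is packed float32 (the layout Float32Array and Buffer share on typical platforms) — the round trip below encodes locally for demonstration:

```typescript
// Sketch: decode a base64 embedding payload into a float array.
// Assumes packed float32, matching Buffer.from(str, 'base64').
function decodeEmbedding(b64: string): number[] {
  const buf = Buffer.from(b64, 'base64')
  // Copy into a fresh buffer so the Float32Array view starts at offset 0
  // (Node's Buffer pool can hand back unaligned offsets).
  const bytes = Uint8Array.from(buf)
  return Array.from(new Float32Array(bytes.buffer))
}

// Demo: encode a known vector, then decode it.
const original = new Float32Array([0.25, -0.5, 1])
const b64 = Buffer.from(original.buffer).toString('base64')
console.log(decodeEmbedding(b64))
// → [ 0.25, -0.5, 1 ]
```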
Stratus embeddings are optimized for agent state and action semantics — not general-purpose text. They excel at state similarity, goal matching, and pattern retrieval from agent memory. For document search or FAQ matching, general-purpose embeddings (OpenAI text-embedding-3, Cohere embed-v3) are a better fit.

Rollout

client.rollout(request) sends a POST to /v1/rollout — pre-execution simulation. Give it a goal and an initial state description; it plans a full action sequence through the world model and returns predicted outcomes at each step before anything executes.
const plan = await client.rollout({
  goal: 'Complete the checkout flow and confirm the order',
  initial_state: 'Cart page, 3 items, coupon field visible, total $127.50',
  max_steps: 6,
  return_intermediate: true
})

// Check which planning path was used
console.log(plan.summary.planner)
// → 'brain' | 'action_planner'

// Walk the predicted sequence
for (const prediction of plan.predictions) {
  console.log(`Step ${prediction.step}: ${prediction.action.action_name}`)
  console.log(`  Confidence: ${prediction.current_state.confidence}`)
  console.log(`  State change: ${prediction.state_change}`)
}

// Summary
console.log(plan.summary.outcome)
// → 'Goal likely achieved (large cumulative change)'
console.log(plan.summary.action_path)
// → ['click', 'type', 'submit', 'wait', 'verify']
Use rollout as a pre-flight check before committing real actions:
const plan = await client.rollout({
  goal: 'Submit the support ticket form',
  initial_state: 'Support form, subject and body filled, attachment uploaded',
  max_steps: 4,
  return_intermediate: true
})

if (plan.summary.total_state_change < 10) {
  console.warn('Low confidence plan — refine state description or goal before executing')
} else {
  executePlan(plan.summary.action_path)
}
The summary.planner field tells you which path the X1 brain took: brain means the top-level policy head selected actions directly; action_planner means a forward search was run through the world model to find the best sequence. Either path produces valid plans — the distinction is useful for debugging and cost tracking since forward search is more compute-intensive.
// Request
{
  goal: string
  initial_state: string
  max_steps?: number           // default 5, 1–10 recommended
  return_intermediate?: boolean // must be true (known issue with false)
}

// RolloutSummary
{
  planner: 'brain' | 'action_planner'
  total_steps: number
  initial_magnitude: number
  final_magnitude: number
  total_state_change: number
  outcome: string
  action_path: string[]
}

// StatePrediction
{
  step: number
  action: {
    step: number
    action_id: number
    action_name: string
    action_category: 'retrieval' | 'navigation' | 'interaction' | 'system'
  }
  current_state: { step: number, magnitude: number, confidence: string }
  predicted_state: { step: number, magnitude: number, confidence: string }
  state_change: number
  interpretation: string
}

Models

client.listModels() calls GET /v1/models. No authentication required. Returns the full list of available stratus-x1ac-{size}-{llm} combinations — every planning model size paired with every supported downstream LLM.
const { data: models } = await client.listModels()

models.forEach(m => console.log(m.id))
// stratus-x1ac-small-gpt-4o-mini
// stratus-x1ac-small-gpt-4o
// stratus-x1ac-base-gpt-4o
// stratus-x1ac-large-gpt-4o
// stratus-x1ac-base-claude-sonnet-4-20250514
// stratus-x1ac-base-deepseek/deepseek-r1
// ... 2,050+ combinations
Start with stratus-x1ac-base-gpt-4o — it’s the production-tested default. Reach for small when latency is the constraint, large when accuracy on complex multi-step tasks isn’t sufficient with base. Do not default to large as a safety measure.
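Model ids follow the stratus-x1ac-{size}-{llm} pattern, so they can be split programmatically — useful for filtering listModels() output by planning size. A sketch, assuming the sizes are exactly 'small' | 'base' | 'large' as shown in the listing above:

```typescript
// Sketch: split a stratus-x1ac-{size}-{llm} id into its parts.
function parseModelId(id: string): { size: string; llm: string } | null {
  const m = /^stratus-x1ac-(small|base|large)-(.+)$/.exec(id)
  return m ? { size: m[1], llm: m[2] } : null
}

console.log(parseModelId('stratus-x1ac-base-gpt-4o'))
// → { size: 'base', llm: 'gpt-4o' }

// Pick all low-latency variants from listModels():
// const small = models.filter(m => parseModelId(m.id)?.size === 'small')
```

Note that the embeddings model name (stratus-x1ac-base, no LLM suffix) deliberately fails this parse — it is not a chat model id.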

LLM Key Management

Store your provider keys once in Stratus vault — encrypted at rest with AES-256-GCM — and every future request uses them automatically, bypassing the Formation pool markup entirely.

client.account.llmKeys.set(keys)

Sends a POST to /v1/account/llm-keys. All fields optional — provide any combination. Omitted keys remain unchanged.
await client.account.llmKeys.set({
  openai_key: process.env.OPENAI_API_KEY,         // sk-proj-...
  anthropic_key: process.env.ANTHROPIC_API_KEY,   // sk-ant-...
  google_key: process.env.GOOGLE_API_KEY,          // AIza...
  openrouter_key: process.env.OPENROUTER_API_KEY   // sk-or-...
})
Stratus validates each key against its provider before storing. If validation fails, the request returns a 400 with the provider name and rejection reason.

client.account.llmKeys.get()

Calls GET /v1/account/llm-keys. Returns presence and last-validated timestamps — never the raw key values.
const status = await client.account.llmKeys.get()

console.log(status.has_openai_key)              // true
console.log(status.openai_last_validated)       // '2026-03-01T14:22:00Z'
console.log(status.formation_keys_available)    // true — Formation pool is always available as fallback
formation_keys_available is true for all active accounts. Even after you store your own keys, Formation’s pool remains available as a fallback — it activates only when no stored key exists for a given provider. Storing a provider’s key removes the pool from that provider’s resolution path.

client.account.llmKeys.delete(provider?)

Calls DELETE /v1/account/llm-keys. Pass a provider name to remove a single key, or omit to delete all stored keys.
// Delete a specific provider
await client.account.llmKeys.delete('openai')

// Delete all stored keys — Formation pool becomes the only path
await client.account.llmKeys.delete()
Accepted values: 'openai', 'anthropic', 'google', 'openrouter'.

Credits

client.credits.packages()

Calls GET /v1/credits/packages. No authentication required. Lists available credit packages with current pricing.
const { packages } = await client.credits.packages()

// Available packages:
// starter:    1,000 credits  / $50 USDC
// pro:        5,000 credits  / $250 USDC
// enterprise: 25,000 credits / $1,250 USDC

client.credits.purchase(pkg, paymentHeader)

Sends a POST to /v1/credits/purchase/{pkg} with an X-PAYMENT header containing a signed x402 transaction. On success, returns a CreditPurchaseResponse — and on a first-ever purchase that creates your account, the response includes stratus_api_key with your new key.
const result = await client.credits.purchase('starter', signedPaymentHeader)

if (result.stratus_api_key) {
  // First purchase — account was created in this call
  console.log('New API key:', result.stratus_api_key)
}
See Credits & Billing for the full x402 payment flow and card purchase instructions.

Error Handling

The SDK throws StratusAPIError for any non-2xx response. Catch it and branch on errorType:
import { StratusClient, StratusAPIError } from '@formthefog/stratus-sdk-ts'

const client = new StratusClient({ apiKey: process.env.STRATUS_API_KEY })

try {
  const response = await client.chat.completions.create({
    model: 'stratus-x1ac-base-gpt-4o',
    messages: [{ role: 'user', content: 'Plan the next action' }]
  })
} catch (error) {
  if (error instanceof StratusAPIError) {
    switch (error.errorType) {
      case 'insufficient_credits':
        console.error('Balance too low — top up at stratus.run/dashboard')
        break
      case 'authentication_error':
        console.error('Invalid or missing API key')
        break
      case 'rate_limit':
        console.error('Rate limit hit — back off and retry')
        break
      default:
        console.error(`Stratus error ${error.status}: ${error.message}`)
    }
  }
}

StratusAPIError shape

class StratusAPIError extends Error {
  status: number
  errorType: StratusErrorType   // 'insufficient_credits' | 'authentication_error' | 'rate_limit' | ...
  param?: string
  code?: string
}

Retry behavior

The client retries automatically on any error that is not 400, 401, or 422. Backoff is exponential starting at 1s: 1s → 2s → 4s. The number of retries is controlled by config.retries (default 3). To disable retries entirely, set retries: 0 in the constructor.
authentication_error (401) and validation errors (400, 422) are never retried — retrying them without fixing the underlying problem burns your retry budget without any chance of success. Fix the request before re-sending.
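The schedule is easy to reproduce if you disable the built-in retries (retries: 0 in the constructor) and wrap the client yourself. A sketch of the same 1s → 2s → 4s progression:

```typescript
// Sketch: the exponential backoff schedule the client uses internally,
// reproduced for a custom retry wrapper. Delay doubles each attempt from 1s.
function backoffDelays(retries: number, baseMs = 1000): number[] {
  return Array.from({ length: retries }, (_, i) => baseMs * 2 ** i)
}

console.log(backoffDelays(3))
// → [ 1000, 2000, 4000 ]

// A wrapper would sleep each delay between attempts, skipping 400/401/422:
// for (const ms of backoffDelays(3)) {
//   try { return await call() } catch (e) { /* rethrow non-retryable */ await sleep(ms) }
// }
```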

Compression Utilities

@formthefog/stratus-sdk-ts ships a suite of vector compression and quality analysis utilities alongside the API client. Use them to reduce embedding storage costs by 10–20× with 99%+ retention quality.
import {
  compress,
  compressBatch,
  decompress,
  decompressBatch,
  analyzeQuality,
  cosineSimilarity
} from '@formthefog/stratus-sdk-ts'

// Single vector
const compressed = compress(embedding)
const restored = decompress(compressed)

// Batch
const compressedBatch = compressBatch(embeddings)
const restoredBatch = decompressBatch(compressedBatch)

// Verify before deploying to production
const report = analyzeQuality(embeddings, restoredBatch)
console.log(report.summary)
// → "GOOD (97.2%). Cosine: 99.23%, Recall@10: 95.8%"

// Compute similarity directly
const similarity = cosineSimilarity(vectorA, vectorB)

Compression profiles

Pre-tuned profiles for common embedding shapes:
import {
  OPENAI_HIGH_QUALITY,
  OPENAI_BALANCED,
  OPENAI_HIGH_COMPRESSION,
  OPENAI_ULTRA_COMPRESSION,
  MJEPA_768_BALANCED,
  MJEPA_512_BALANCED
} from '@formthefog/stratus-sdk-ts'

const compressed = compress(embedding, OPENAI_BALANCED)
Use MJEPA_768_* profiles for vectors produced by stratus-x1ac-base. Use OPENAI_* profiles for OpenAI text-embedding-3 vectors.
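A small helper can route vectors to a profile family by dimensionality. The dimension-to-family mapping below is inferred from the profile names (768-dim Stratus vectors → MJEPA_768_*, other sizes → OPENAI_*) and should be verified against your actual embedding models:

```typescript
// Sketch: choose a compression profile family from vector dimensionality.
// The mapping is an assumption based on the profile naming above.
function profileNameFor(dim: number): string {
  if (dim === 768) return 'MJEPA_768_BALANCED'   // stratus-x1ac-base vectors
  if (dim === 512) return 'MJEPA_512_BALANCED'
  return 'OPENAI_BALANCED'                        // e.g., 1536-dim text-embedding-3-small
}

console.log(profileNameFor(768))
// → 'MJEPA_768_BALANCED'
```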

Vector DB adapters

Drop-in adapters for Pinecone, Weaviate, and Qdrant that compress vectors before upsert and decompress on fetch — transparently:
import { StratusPinecone, StratusWeaviate, StratusQdrant } from '@formthefog/stratus-sdk-ts'

const index = new StratusPinecone(pineconeIndex, { profile: MJEPA_768_BALANCED })

await index.upsert([{ id: 'state-001', values: embedding, metadata: { page: 'checkout' } }])

const results = await index.query({ vector: queryEmbedding, topK: 5 })
// Returned vectors are already decompressed

Health Check

client.health() calls GET /health. No authentication required. Returns current system status — use it to check whether vault is available before calling account.llmKeys.set().
const health = await client.health()

console.log(health.status)               // 'healthy'
console.log(health.stratus_models_loaded) // ['base']
console.log(health.llm_providers)         // ['openai', 'anthropic', 'google', 'openrouter']
console.log(health.vault)                 // 'connected' | 'disabled'
console.log(health.brain.loaded)          // true
console.log(health.brain.num_actions)     // 903
console.log(health.version)              // '0.1.0'
LLM key vault storage (account.llmKeys.set) requires health.vault === 'connected'. If vault is disabled, stored-key calls fall back to the Formation pool automatically — but new keys cannot be persisted until the vault reconnects.
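That gate is a one-line predicate on the health payload. A sketch using a local type with the fields shown above:

```typescript
// Sketch: gate llmKeys.set() on vault availability, per the health shape above.
interface Health { status: string; vault: 'connected' | 'disabled' }

function canPersistKeys(health: Health): boolean {
  return health.vault === 'connected'
}

// Usage:
// const health = await client.health()
// if (canPersistKeys(health)) await client.account.llmKeys.set({ ... })
// else console.warn('Vault disabled: keys cannot be persisted right now')

console.log(canPersistKeys({ status: 'healthy', vault: 'connected' }))
// → true
```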

Next Steps

Authentication

Set up your Stratus API key, configure BYOK, and understand the three-tier key resolution flow.

API Reference

Full endpoint docs, request parameters, response shapes, and error codes.

Tutorials

Real-world agents — web navigation, cascade prediction, and concurrent task orchestration.