Resource Provisioning

Compute &
Provisioning at Scale.

Transparent access to model inference, scaling from individual research environments to federated enterprise compute grids.

Free

Local Runtime — Open-source experimentation & local model inference

Free
  • COMPUTE ALLOCATION
  • 50 Fast Requests / day
  • Unlimited Local Models (Ollama)
  • RUNTIME CAPABILITIES
  • Horizon Mode: DISABLED (Linear Inference Only)
  • Context: Standard Linear Window
  • State Virtualization: INACTIVE
  • COLLABORATION
  • 5 Active Share Links (24h TTL)
Recommended

Pro

Professional Engineer — Senior developers requiring deep reasoning and infinite context

$15
Billed Monthly
  • COMPUTE ALLOCATION
  • Unlimited Local & Fast Models
  • ~750 Frontier Requests (Claude 4.5 Sonnet, GPT 5.2)
  • HORIZON ARCHITECTURE
  • Scout Swarm: ACTIVE (Exploration Layer)
  • Adversarial Monitor: L1-L2 Verification
  • Impact Analysis Engine
  • MEMORY MANIFOLD
  • Advanced Contextual Understanding
  • Intelligent Learning System (Personal Weights)
  • COLLABORATION
  • Unlimited Share Links (30-day TTL)
  • Share Chat (Read/Fork Only)

Teams

Recursive Swarm — Engineering teams requiring shared state and full autonomous verification

$75
Billed Monthly
  • COMPUTE ALLOCATION
  • ~2,250 Frontier Requests (3x Allowance)
  • Pooled Team Credits
  • HORIZON FULL STACK
  • Flash Protocol: ACTIVE (Instant Pruning)
  • Negative Knowledge: ACTIVE (Shared Failure Vectors)
  • Verification: L3-L4 (Sandboxed Execution)
  • DISTRIBUTED STATE
  • Shared Workspace State (Vector Sync)
  • Distributed Knowledge Mesh
  • Real-time Collaborative Editing
  • ENTERPRISE GOVERNANCE
  • SSO / SAML Integration
  • Advanced Audit Logging