v0.1.0 Released — Streaming API

Ship AI Agents
With Confidence.

An open-source TypeScript runtime built for production: isolated sandboxes, multi-turn orchestration, and streaming support out of the box. It sits between raw LLM SDKs and heavyweight frameworks.

Everything You Need to Ship Agents

From sandbox isolation to multi-agent orchestration, Sandcaster handles the infrastructure so you can focus on your agent's logic.

Isolated Execution

Run untrusted AI-generated code securely. Built-in adapters for E2B, Docker, Vercel, and Cloudflare provision a fresh sandbox for every run.
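To make the idea concrete, the adapter boundary might look something like the sketch below. All of these names (`SandboxAdapter`, `InMemoryAdapter`) are illustrative, not Sandcaster's actual API; a real adapter would talk to E2B, Docker, Vercel, or Cloudflare rather than running in-process:

```typescript
// Hypothetical sketch of a sandbox adapter boundary; names are illustrative.
interface Sandbox {
  // Execute a snippet inside the isolated environment.
  exec(code: string): Promise<{ stdout: string; exitCode: number }>;
  // Tear the environment down after the run.
  destroy(): Promise<void>;
}

interface SandboxAdapter {
  // Create a fresh, isolated environment for a single agent run.
  create(): Promise<Sandbox>;
}

// Toy in-process adapter for local testing. It does NOT isolate anything;
// it only echoes the code back to show the shape of the contract.
class InMemorySandbox implements Sandbox {
  async exec(code: string) {
    return { stdout: `ran: ${code}`, exitCode: 0 };
  }
  async destroy() {}
}

class InMemoryAdapter implements SandboxAdapter {
  async create(): Promise<Sandbox> {
    return new InMemorySandbox();
  }
}
```

Because each run gets a sandbox from `create()`, no state leaks between runs, and swapping providers means swapping the adapter, not the agent logic.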

Streaming API (SSE)

Native Server-Sent Events support. Stream tokens, tool calls, and state changes to the client as they happen, with minimal overhead.
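The SSE wire format itself is standard: each event is an `event:` line plus a `data:` line, terminated by a blank line. A minimal serializer for agent events (the event names here are illustrative, not Sandcaster's):

```typescript
// Discriminated union of things an agent run might stream.
// The event names ("token", "tool_call") are illustrative only.
type AgentEvent =
  | { type: "token"; text: string }
  | { type: "tool_call"; name: string; args: unknown };

// Serialize one event into Server-Sent Events wire format:
// an "event:" line, a "data:" line, and a blank-line terminator.
function toSse(event: AgentEvent): string {
  return `event: ${event.type}\ndata: ${JSON.stringify(event)}\n\n`;
}
```

Writing each frame to a response held open with `Content-Type: text/event-stream` is all a browser's `EventSource` needs to dispatch typed events on the client.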

Speculative Branching

Fork agent state safely. Test different tool paths in parallel and merge the most successful result back into the main state.
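A minimal sketch of the branching idea, assuming a toy state shape and a numeric score for picking a winner (none of this is Sandcaster's real API):

```typescript
// Hypothetical agent state: a conversation history plus a branch score.
interface AgentState {
  history: string[];
  score: number;
}

// Fork gives each speculative branch an independent deep copy,
// so mutations in one branch never leak into another or into main.
function fork(state: AgentState): AgentState {
  return structuredClone(state);
}

// Merge policy sketch: keep the highest-scoring branch.
function best(branches: AgentState[]): AgentState {
  return branches.reduce((a, b) => (b.score > a.score ? b : a));
}
```

Usage: fork the main state once per candidate tool path, run each branch, then adopt `best(branches)` as the new main state while the original remains untouched.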

Multi-Agent Orchestration

Define supervisor agents that delegate tasks to specialized sub-agents with strict type-checked communication channels.
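Type-checked delegation can be sketched with discriminated unions: the compiler verifies both sides of the channel. The task and result shapes below are invented for illustration, not Sandcaster's schema:

```typescript
// Hypothetical task/result channel between supervisor and sub-agents.
type Task =
  | { kind: "summarize"; text: string }
  | { kind: "search"; query: string };

type Result =
  | { kind: "summary"; text: string }
  | { kind: "hits"; urls: string[] };

// A sub-agent is a typed handler; the exhaustive switch means
// adding a new Task kind is a compile error until it is handled.
function subAgent(task: Task): Result {
  switch (task.kind) {
    case "summarize":
      return { kind: "summary", text: task.text.slice(0, 20) };
    case "search":
      return { kind: "hits", urls: [`https://example.com/?q=${task.query}`] };
  }
}

// The supervisor delegates each task over the typed channel.
function supervise(tasks: Task[]): Result[] {
  return tasks.map(subAgent);
}
```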

Multiple Interfaces

Don't just build APIs. Sandcaster ships with ready-to-use hooks for CLI, TUI, HTTP APIs, and native Slack bot integration.

Config-Driven via JSON

Manage environments, model selection, and telemetry settings globally using a simple sandcaster.json configuration file.
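A file of that shape might look like the sketch below; the field names are illustrative guesses, not Sandcaster's documented schema:

```json
{
  "environment": "production",
  "model": "your-model-id",
  "telemetry": {
    "enabled": true
  }
}
```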

Built for the Terminal & Beyond.

Sandcaster's core philosophy is decoupling logic from the interface. Build your agent once, deploy it as a script, a persistent API, or an interactive terminal application.
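One way to picture that decoupling (all names here are hypothetical): the agent is a plain async function, and each interface is a thin adapter around it.

```typescript
// The agent itself knows nothing about how it is invoked.
// Real logic would call a model; this echoes for illustration.
async function agent(prompt: string): Promise<string> {
  return `answer to: ${prompt}`;
}

// Script / CLI adapter: argv in, string out.
async function runCli(args: string[]): Promise<string> {
  return agent(args.join(" "));
}

// HTTP adapter: the same agent behind a request/response shape.
async function handleHttp(
  req: { body: { prompt: string } }
): Promise<{ status: number; body: string }> {
  return { status: 200, body: await agent(req.body.prompt) };
}
```

Because neither adapter contains agent logic, adding a TUI or Slack surface is another thin wrapper, not a rewrite.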

Templates: General Assistant · Research Briefs · Support Triage · API Extraction · Security Audits
$ sandcaster init --template research-brief
Initializing workspace...
Installing dependencies...
Generating sandcaster.json...
Success! Run `sandcaster dev` to start the TUI.
$ sandcaster dev
[Sandcaster Runtime] Ready. Agent 'Researcher' listening on stdio.