beercan.run

Documentation

Everything you need to know about BeerCan, from Skippy's perspective. He'd explain it himself, but he says typing slowly enough for you to follow is beneath him.

Quick Start

From zero to bloop in under a minute. Requires Node.js 18+ and an Anthropic API key.

1. Install

Terminal
$ npm install -g beercan

2. Setup

Run the interactive setup wizard to configure your API key, models, and optional integrations.

Terminal
$ beercan setup
 
🍺 BeerCan Setup Wizard
Configuring API keys, models, and integrations...
Creates ~/.beercan/.env

3. Create a project

Terminal
# Create a project scoped to a working directory
$ beercan init my-project --work-dir ~/projects/my-project
 
Project my-project created

4. Chat with Skippy

beercan — Terminal
$ beercan chat
 
🍺 Skippy the Magnificent
Elder AI | Beer Can | Your intellectual superior
 
skippy> #my-project
Switched to my-project
 
skippy [my-project]> Create a hello world Express server with TypeScript
▸ Phase: plan (planner)
⚙ read_file • write_file • exec_command
✓ Bloop completed — 8,421 tokens

5. View results

Terminal
# List past bloops
$ beercan history my-project
 
# Full result with tool calls and token usage
$ beercan result <bloop-id>
 
# Overview of all projects
$ beercan status

⚡ Tip

You can also run tasks directly without the chat interface: beercan run my-project "your goal here"

💬 Chat Commands

Skippy understands natural language, slash commands, and shortcuts. Use whatever feels right — he'll figure it out. He always does.

Slash Commands

/run <project> <goal> — Run a bloop on the specified project
/status — System status overview (projects, running bloops, uptime)
/projects — List all projects with bloop counts
/history [project] — Show recent bloops (optionally filtered by project)
/result <id> — Full details for a specific bloop (supports partial ID)
/cancel <id> — Cancel a pending or running job
/help — Show available commands and shortcuts

# Project Shortcuts

# — List all projects
#project-name — Switch to a project context
#project-name do something — Run a bloop on that project immediately
## — Exit project context (back to system level)

@ Bloop Shortcuts

@ — Show recent bloops
@bloop-id — Show result for a specific bloop

Natural Language

When no command or shortcut is detected, Skippy uses an LLM call (Haiku) to classify your intent. Just say what you want in plain English and Skippy handles the rest.

beercan — Terminal
skippy [my-project]> analyze the codebase for security issues
Intent: run_bloop
▸ Phase: plan (planner)
⚙ read_file • list_directory • exec_command
✓ Bloop completed — 8,421 tokens

📝 Conversation Memory

Skippy remembers the last 20 messages per chat channel. You can refer to previous messages and he'll follow the context naturally.

💻 CLI Reference

All commands available via beercan <command> (or npm run beercan -- <command> from source).

Projects

setup — Interactive first-time configuration wizard
init <name> [--work-dir <path>] — Create a new project, optionally scoped to a directory
projects — List all projects
status — Overview of all projects with bloop counts and token usage

Bloop Execution

run <project> [team] <goal> — Run a bloop. Teams: auto (default), solo, code_review, managed, full_team
bootstrap [goal] — Self-improvement bloop on the BeerCan codebase itself

Results & History

history <project> [--status <s>] — List past bloops with status, tokens, timestamps
result <bloop-id> — Full bloop details: result, tool calls, tokens (partial ID match)
jobs [status] — View job queue (pending, running, completed, failed counts)

Scheduling & Events

schedule:add <project> "<cron>" <goal> — Add a cron-based bloop schedule
schedule:list [project] — List active schedules
schedule:remove <id> — Remove a schedule
trigger:add <project> <type> <filter> <goal> — Add an event-based trigger
trigger:list [project] — List active triggers
trigger:remove <id> — Remove a trigger

Integrations

mcp:add <project> <name> <cmd> [args] — Add an MCP server to a project
mcp:list [project] — List configured MCP servers
tool:create <name> — Scaffold a custom tool plugin
tool:list — List custom tools
tool:remove <name> — Remove a custom tool
skill:create <name> — Scaffold a skill template
skill:list — List installed skills

Config & Server

config set KEY=VALUE — Set a configuration value
config get KEY — Get a configuration value (keys are masked)
config list — Show all configuration
chat [project] — Interactive terminal chat with Skippy
serve [port] — Start API-only server (default port 3939)
daemon — Run scheduler + event system + API + chat providers

🔧 Skills & Tools

Extend BeerCan with custom tools (atomic API calls) and skills (workflow recipes). Three ways to add capabilities, zero configuration required.

Built-in Tools (32)

📁 Filesystem

  • read_file — Read file contents
  • write_file — Write file (creates parent dirs)
  • list_directory — List directory tree
  • exec_command — Execute shell command (30s timeout)

🌐 Web

  • web_fetch — Fetch URL (Cloudflare Browser Rendering + native fallback)
  • http_request — Full HTTP request (any method, headers, body)

🔔 Notification

  • send_notification — Desktop notification (macOS osascript, console fallback)

🧠 Memory

  • memory_search — Hybrid search across all layers
  • memory_store — Store new memory
  • memory_update — Supersede existing memory
  • memory_link — Create knowledge graph entities/edges
  • memory_query_graph — Multi-hop BFS traversal
  • memory_scratch — Per-bloop working memory scratchpad

🐣 Spawning & Cross-Project

  • spawn_bloop — Create child bloops (same or cross-project)
  • get_bloop_result — Check status/result of any bloop
  • list_child_bloops — List spawned child bloops
  • list_projects — Discover all available projects
  • search_cross_project — Search memories across projects
  • search_previous_attempts — Find past results for similar goals

⏰ Self-Scheduling

  • create_schedule — Create cron schedule for recurring bloops
  • create_trigger — Create event trigger for reactive bloops
  • list_schedules — List project's cron schedules
  • list_triggers — List project's event triggers
  • remove_schedule — Remove a cron schedule
  • remove_trigger — Remove an event trigger

💡 Skills & Config

  • create_skill — Create a reusable skill from experience
  • update_skill — Update an existing skill
  • list_skills — List all available skills
  • update_project_context — Modify project configuration

🔧 Build & Integrate

  • register_tool_from_file — Validate and register a .js file as a tool
  • register_skill_from_bloop — Package bloop learnings as a skill
  • verify_and_integrate — Spawn verification bloop, integrate on APPROVE

Custom Tools

Drop a .js file in ~/.beercan/tools/ and it auto-loads on startup. Every agent gets access automatically.

Terminal
# Scaffold a new tool
$ beercan tool:create google_search
 
# Edit it
$ vim ~/.beercan/tools/google_search.js
 
# List and manage custom tools
$ beercan tool:list
$ beercan tool:remove google_search

Example Tool

~/.beercan/tools/google_search.js
export const definition = {
  name: "google_search",
  description: "Search Google and return top results",
  inputSchema: {
    type: "object",
    properties: {
      query: { type: "string", description: "Search query" },
      limit: { type: "number", description: "Max results (default 5)" },
    },
    required: ["query"],
  },
};

export async function handler({ query, limit = 5 }) {
  const res = await fetch(
    `https://api.example.com/search?q=${encodeURIComponent(query)}&limit=${limit}`
  );
  const data = await res.json();
  return JSON.stringify(data.results);
}

Three Extension Methods

  1. Plugin directory — Drop .js files in ~/.beercan/tools/ (simplest)
  2. MCP servers — beercan mcp:add project my-tool npx some-server (standard protocol)
  3. Programmatic — engine.toolRegistry.register(definition, handler) (library usage)

📝 Export Patterns

Custom tools support three export patterns: { definition, handler }, { default: { definition, handler } }, or { tools: [{ definition, handler }, ...] } for multi-tool files.

Skills

Skills are higher-level than tools — they orchestrate workflows with instructions, triggers, and config. Drop a .json file in ~/.beercan/skills/.

Terminal
# Scaffold a skill template
$ beercan skill:create social-post
 
# List installed skills
$ beercan skill:list

Example Skill

~/.beercan/skills/social-post.json
{
  "name": "social-post",
  "description": "Generate and publish social media posts",
  "triggers": ["social media", "twitter", "linkedin", "post to"],
  "instructions": "1. Call list_social_platforms to check connections\n2. Generate platform-appropriate content\n3. Use upload_post tool to publish",
  "requiredTools": ["upload_post", "list_social_platforms"],
  "config": {
    "UPLOAD_POST_API_URL": "https://api.example.com/v1",
    "UPLOAD_POST_API_KEY": "your-key"
  },
  "enabled": true
}

⚡ Tools vs Skills

Tools = atomic API calls (post to Twitter, fetch a URL). Skills = workflow recipes that orchestrate tools (research → generate → post to multiple platforms). When a bloop goal matches a skill's triggers, the instructions are automatically injected into agent context.

🧠 Agentic Autonomy

Six subsystems that make BeerCan agents self-directed. They spawn sub-tasks, schedule their own work, learn from mistakes, and build their own tools.

Self-Spawning & Cross-Project

Agents decompose work into child bloops via spawn_bloop. Optional project_slug parameter enables cross-project delegation — an agent in one project can spawn work in another. search_cross_project and search_previous_attempts provide global memory search across all projects.

Safety limits: Max 5 children per bloop, max depth 3 (configurable via BEERCAN_MAX_CHILDREN_PER_BLOOP and BEERCAN_MAX_SPAWN_DEPTH). Projects can opt out via project.context.allowCrossProjectAccess: false.

Self-Scheduling

Agents create their own cron schedules (create_schedule) and event triggers (create_trigger). An agent that says "I'll check back on this PR tomorrow" can actually do it. Triggers support regex matching and {{data.field}} goal template interpolation.

Safety limits: Minimum 5-minute cron interval, max 20 schedules and 20 triggers per project. Project ownership enforced on remove.

Heartbeat Awareness

Per-project periodic awareness loops. The HeartbeatManager runs in daemon mode, waking agents at configurable intervals to check a checklist. If nothing is noteworthy, the heartbeat stays silent. Active hours enforcement prevents 3AM alerts.

Terminal
$ beercan heartbeat:configure my-project
Interval: 30 minutes
Active hours: 08:00-22:00
Checklist: Check error logs, Review pending PRs, Monitor uptime
 
$ beercan daemon
[heartbeat] Started 1 project heartbeats

Self-Education (Reflection)

Opt-in post-bloop reflection via a lightweight Haiku LLM call. The ReflectionEngine extracts lessons learned, recurring patterns, and error resolutions into tagged memory entries. Future bloops automatically receive relevant lessons via enhanced retrieveContext(). Knowledge graph entities link bloops to their lessons and error resolutions.

Enable: BEERCAN_REFLECTION_ENABLED=true globally, or project.context.reflectionEnabled: true per project. Anti-recursion guards skip heartbeat and consolidation bloops. Includes periodic memory consolidation for merging duplicate insights.

Self-Modification

Agents create and manage skills via create_skill / update_skill. Skills are saved as JSON files in ~/.beercan/skills/ and auto-loaded. Agents can also modify their own project configuration via update_project_context (with restricted keys to prevent identity changes).

Build → Verify → Integrate

The full autonomous pipeline: an agent builds an artifact (tool, utility, script), spawns a verification child bloop to test it, and on APPROVE, auto-registers it as a BeerCan tool or skill. verify_and_integrate orchestrates the cycle. register_tool_from_file validates exports and runs test commands before live registration.

Safety: Only .js/.mjs files accepted. Exports validated via dynamic import. Test commands required. Tool names validated (/^[a-z0-9-]+$/). Max 50 custom tools. Dedicated verifier role template.

📡 REST API

Submit tasks, monitor jobs, and control bloops via HTTP. Served by the daemon (port 3939) or standalone via beercan serve.

GET /api/status — System overview: project count, bloop stats, job stats, uptime
GET /api/projects — All projects with bloop count summaries and token usage
GET /api/projects/:slug — Single project detail with recent bloops
GET /api/projects/:slug/bloops — Project bloops (optional ?status= filter)
POST /api/bloops — Submit a new task (enqueue via job queue)
GET /api/bloops/recent — Recent bloops across all projects (?limit=)
GET /api/bloops/:id — Single bloop detail (supports partial ID match)
GET /api/jobs — Job queue stats + recent jobs (?status=, ?limit=)
DELETE /api/jobs/:id — Cancel a pending or running job
GET /api/schedules — All schedules (optional ?project= filter)

Authentication

Set BEERCAN_API_KEY to require Authorization: Bearer <key> on all endpoints (except /api/health). Rate limiting is enforced per-IP via BEERCAN_WEBHOOK_RATE_LIMIT.

Example: Submit a Task

Terminal
$ curl -X POST http://localhost:3939/api/bloops \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $BEERCAN_API_KEY" \
    -d '{"projectSlug": "my-project", "goal": "Refactor auth module"}'

Example: Check Status

Terminal
$ curl http://localhost:3939/api/status
 
{
  "projects": 3,
  "bloops": { "total": 47, "completed": 44, "running": 1 },
  "jobs": { "pending": 2, "running": 1 },
  "uptime": "2h 15m"
}

Configuration

Set in ~/.beercan/.env (created by beercan setup) or manage via beercan config.

Core

ANTHROPIC_API_KEY (required) — Your Anthropic / Claude API key
BEERCAN_DATA_DIR (default: ~/.beercan) — Data directory for DB, config, and tools
BEERCAN_DEFAULT_MODEL (default: claude-sonnet-4-6) — Default agent model
BEERCAN_HEAVY_MODEL (default: claude-opus-4-6) — Heavy model for complex roles
BEERCAN_GATEKEEPER_MODEL (default: claude-haiku-4-5-20251001) — Gatekeeper analysis model

Execution

BEERCAN_MAX_CONCURRENT (default: 2) — Max simultaneous bloop executions
BEERCAN_BLOOP_TIMEOUT_MS (default: 600000) — Per-bloop timeout (10 minutes)
BEERCAN_MAX_ITERATIONS (default: 50) — Max iterations per bloop
BEERCAN_TOKEN_BUDGET (default: 100000) — Default token budget per bloop
BEERCAN_LOG_LEVEL (default: info) — Log level: debug, info, warn, error
BEERCAN_LOG_FILE (default: ~/.beercan/beercan.log) — Structured log file path

Integrations

CLOUDFLARE_API_TOKEN — Cloudflare Browser Rendering API token
CLOUDFLARE_ACCOUNT_ID — Cloudflare account ID
BEERCAN_API_KEY — Bearer token for REST API authentication
BEERCAN_WEBHOOK_RATE_LIMIT (default: 60) — API requests per minute per IP
BEERCAN_WEBHOOK_MAX_BODY_SIZE (default: 1048576) — Max webhook body size (1 MB)

Notifications & Chat

BEERCAN_NOTIFY_ON_COMPLETE (default: true) — Desktop notification on bloop completion
BEERCAN_NOTIFY_WEBHOOK_URL — POST bloop results to this URL
BEERCAN_TELEGRAM_TOKEN — Telegram bot token (enables chat in daemon)
BEERCAN_SLACK_TOKEN — Slack bot token
BEERCAN_SLACK_SIGNING_SECRET — Slack signing secret
BEERCAN_SLACK_APP_TOKEN — Slack app token (socket mode)
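Putting several of the variables above together, a typical ~/.beercan/.env might look like this (every value below is a placeholder):

```shell
# Core (required)
ANTHROPIC_API_KEY=sk-ant-your-key-here

# Execution tuning
BEERCAN_MAX_CONCURRENT=2
BEERCAN_TOKEN_BUDGET=100000
BEERCAN_LOG_LEVEL=info

# REST API auth + notifications
BEERCAN_API_KEY=choose-a-long-random-string
BEERCAN_NOTIFY_ON_COMPLETE=true

# Optional chat provider
BEERCAN_TELEGRAM_TOKEN=123456:your-bot-token
```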

🏗 Architecture

Pure TypeScript. Strict types. Zod validation everywhere. Zero cloud dependencies. Everything lives in a single SQLite file.

Three-Tier Model

Projects (sandboxed contexts with optional working directory) contain Bloops (atomic agent tasks) executed by Teams (role pipelines).

Execution Flow

Bloop Execution Pipeline
BeerCanEngine.runBloop()
  → Gatekeeper analyzes goal (if team is "auto")
  → Composes dynamic team + roles
  → Creates Bloop record
  → Initializes working memory
  → Retrieves hybrid memory context
BloopRunner.executePipeline()
  → Cycles through team stages
  → Agents use tools (fs, web, memory, notification)
  → Handles APPROVE / REVISE / REJECT
  → Stores result in DB + memory
  → Cleans up

Core Components

BeerCanEngine

Central orchestrator. Wires all subsystems, manages lifecycle, exposes the public API.

BeerCanDB

SQLite + sqlite-vec + FTS5. 10 migrations for projects, bloops, memory, knowledge graph, vectors, jobs, and events.

BloopRunner

Executes the agent pipeline with tool calls, decision extraction, and rejection cycles.

Gatekeeper

Dynamic team composition via structured LLM output. Picks roles, models, and pipeline config.

MemoryManager

Hybrid search with Reciprocal Rank Fusion across FTS5 + vector + knowledge graph.

MCPManager

stdio transport, auto-discovery, namespaced tool registration. Plug and bloop.

EventManager

Pub/sub event bus with webhook, filesystem, polling, and macOS native sources.

JobQueue

SQLite-backed concurrent queue with atomic job claiming via semaphore.

ChatBridge

Provider-agnostic conversational layer. Pluggable providers for Terminal, Telegram, Slack, WebSocket.

StatusAPI

REST endpoints for task submission, monitoring, and control. Auth, rate limiting, and auto-notifications.

🤝 Teams & Roles

Five preset team configurations plus a Gatekeeper that can compose custom teams on the fly.

Team Presets

auto — Gatekeeper picks the pipeline dynamically — best for any task (default)
solo — Single agent — best for simple, focused tasks
code_review — Coder → Reviewer — best for code with quality checks
managed — Manager → Coder → Manager — best for planned execution
full_team — Manager → Coder → Reviewer → Tester — best for production-quality code

16 Dynamic Roles

5 built-in: manager, coder, reviewer, tester, solo

11 templates (gatekeeper picks as needed):

writer, researcher, analyst, data_processor, summarizer, planner, editor, devops, architect, heartbeat, verifier

The gatekeeper can also invent fully custom roles with LLM-generated prompts for unusual tasks.

The Gatekeeper

Pre-flight analysis step that dynamically composes the right team for any goal. A single fast LLM call (Haiku by default) using Anthropic's tool_choice for structured JSON output.

  • When: team: "auto" (default) or team: undefined. Skipped for preset teams.
  • What it decides: task complexity, roles needed, pipeline order, rejection flows, model per role, tools per role, max cycles.
  • Role sources: 5 built-in + 11 templates + fully custom roles with LLM-generated prompts.

🧠 Memory System

Four-layer hybrid RAG — all in SQLite. Agents can store facts, decisions, and insights that persist across bloops.

Layer 1: Structured (FTS5)

BM25-ranked keyword search on all stored memories via SQLite's FTS5 virtual table.

Layer 2: Semantic (Vector)

sqlite-vec extension with 512-dimensional TF-IDF embeddings. Fully local, no external API needed.

Layer 3: Knowledge Graph

Entities and relationships with multi-hop BFS traversal. Agents can link concepts and traverse connections.

Layer 4: Working Memory

Per-bloop ephemeral scratchpad with SQLite write-through. Vanishes when the bloop completes.

Search results from all layers are merged via Reciprocal Rank Fusion (RRF) for best-of-both-worlds retrieval.

💬 Chat Providers

BeerCan's conversational interface is provider-agnostic. All providers share the same ChatBridge — slash commands and natural language work everywhere.

Terminal (beercan chat) — Interactive REPL with colored output
Telegram (set BEERCAN_TELEGRAM_TOKEN) — Bot auto-starts in daemon mode
Slack (set BEERCAN_SLACK_TOKEN + signing secret) — Socket mode bot
WebSocket (ws://localhost:3940) — Generic JSON protocol for custom integrations

Telegram Quick Setup

  1. Message @BotFather on Telegram → /newbot → get your token
  2. beercan config set BEERCAN_TELEGRAM_TOKEN=your-token
  3. beercan stop && beercan start
  4. Message your bot — Skippy answers!

💻 Programmatic API

Use BeerCan as a library in your own TypeScript/JavaScript projects.

app.ts
import { BeerCanEngine } from "beercan";

const engine = await new BeerCanEngine().init();

// Run a bloop directly
const bloop = await engine.runBloop({
  projectSlug: "my-project",
  goal: "Refactor the auth module",
  team: "auto",
});

// Query results
engine.getBloop(bloop.id);
engine.getProjectBloops("my-project", "completed");

// Enqueue for background execution
engine.enqueueBloop({
  projectSlug: "my-project",
  goal: "Run daily report",
});

// Job queue stats
engine.getJobQueue().getStats();