# Flags
All command-line flags can be used with `cos start` or other commands. Flags override configuration file settings.
## Global flags

### Quick Reference

| Flag | Short | Description | Default |
|---|---|---|---|
| --api-key | -k | API token override | From config |
| --auto-accept | - | Auto-approve all tools | false |
| --auto-accept-plans | - | Auto-approve plan mode | false |
| --browser-agent-model | - | Model for browser subagents | gemini-3-flash-preview |
| --cdp-url | - | Chrome CDP URL | From config |
| --enable-agent-commits | - | Enable Agent Commits | true |
| --cwd | - | Working directory | . |
| --debug | - | Debug mode | false |
| --direct | - | Connect to remote-runtime | - |
| --forceUpdate | - | Force update on boot | false |
| --host | - | API host/base URL | From config |
| --inference | -i | Inference base URL | From config |
| --lsp-diagnostics-on-file-read | - | LSP diagnostics on read | false |
| --lsp-diagnostics-on-file-write | - | LSP diagnostics on write | false |
| --mcp-config | - | MCP config JSON path | - |
| --model | - | Model override | From config |
| --parallel-tool-calls | - | Allow parallel tools | true |
| --profile | - | Config profile | - |
| --prompt | -p | One-shot prompt | - |
| --reasoning | - | Reasoning level | From config |
| --response-format | - | Structured output schema | - |
| --system-prompt | -s | System prompt ID | lumen |
| --system-prompt-file | - | Custom prompt file | - |
| --title-model | - | Model for titles | From config |
| --tool-call-limit | - | Per-tool budgets | - |
| --workspace | - | Workspace mode | os |
| --worktree-name | - | Worktree name | - |
| --wsresponses | - | WebSocket mode | false |
### --api-key / -k

API token override for a single session.
```sh
# Override API key
cos start --api-key sk-abc123

# With one-shot mode for automation
cos start -k $OPENAI_API_KEY --prompt "Review code" --auto-accept
```
### --auto-accept

⚠️ Use with caution — puts the agent in fully autonomous mode without confirmations.
```sh
# Autonomous refactoring
cos start --auto-accept --prompt "Refactor auth to JWT"

# With Agent Commits enabled for tracking
cos start --auto-accept --enable-agent-commits=true

# CI/CD automation
cos start --auto-accept --prompt "Run tests and fix failures" --cwd /path/to/project
```
### --auto-accept-plans

Auto-approve plan mode transitions but review individual edits.
```sh
# Auto-approve plans only
cos start --auto-accept-plans

# With high reasoning for better planning
cos start --auto-accept-plans --reasoning high --prompt "Design database schema"
```
### --browser-agent-model

Model for browser automation subagents.
```sh
# Faster model for simple tasks
cos start --cdp-url http://localhost:9222 --browser-agent-model gpt-4o-mini

# More capable model for complex automation
cos start --cdp-url http://localhost:9222 --browser-agent-model claude-3-5-sonnet
```
### --cdp-url

Chrome DevTools Protocol URL for browser automation.
```sh
# Local Chrome with remote debugging
cos start --cdp-url http://localhost:9222

# Remote Chrome instance
cos start --cdp-url http://192.168.1.100:9222

# One-shot screenshot
cos start --cdp-url http://localhost:9222 --prompt "Screenshot example.com" --auto-accept
```

Start Chrome with remote debugging:

```sh
# macOS
open -n -a "Google Chrome" --args --remote-debugging-port=9222 --user-data-dir=/tmp/cosine-chrome

# Linux
google-chrome --remote-debugging-port=9222 --user-data-dir=/tmp/cosine-chrome
```
### --enable-agent-commits

Enable or disable automatic Agent Commits after each turn.

Aliases:

- `--enable-agent-commit`
- `--enable-commit`
- `--enable-commits`
- `--commit`
```sh
# Disable Agent Commits for this session
cos start --enable-agent-commits=false --prompt "Experimental changes"

# Explicitly enable (default)
cos start --enable-agent-commits=true

# Short alias
cos start --commit=false
```

### --cwd

Set working directory for the session.
```sh
# Work on specific project
cos start --cwd ~/projects/my-app

# With automation
cos start --cwd ~/projects/api --prompt "Generate docs" --auto-accept

# Batch script example
#!/bin/bash
for project in ~/projects/*/; do
  cos start --cwd "$project" --prompt "Update deps" --auto-accept
done
```
### --debug

Debug mode dumps the context window to `/tmp/cosine/window.json`.
```sh
# Enable debug logging
cos start --debug

# Debug with output to file
cos start --debug --prompt "Debug this" 2>&1 | tee debug.log
```
### --direct

Connect to a remote runtime TCP address.
```sh
# Local remote-runtime
cos start --direct localhost:9876

# Remote instance
cos start --direct 192.168.1.50:9876 --api-key sk-...

# Containerized environment
cos start --direct cos-runtime.internal:9876 --prompt "Heavy computation"
```
### --forceUpdate

Force an update check on startup (for testing).
```sh
# Force immediate update
cos start --forceUpdate
```
### --host

API host/base URL override.
```sh
# Custom Cosine instance
cos start --host https://cosine.mycompany.internal

# Development/testing
cos start --host http://localhost:8080 --debug

# Staging with API key
cos start --host https://api.staging.cosine.wtf --api-key staging-key-123
```
### --inference / -i

OpenAI-compatible inference endpoint.
```sh
# Use OpenAI directly
cos start --inference https://api.openai.com/v1 --api-key $OPENAI_API_KEY

# Local LLM server
cos start -i http://localhost:8000/v1

# Azure OpenAI
cos start -i https://my-resource.openai.azure.com/openai/deployments/my-deployment
```
### --lsp-diagnostics-on-file-read

Include LSP diagnostics in `read_file` output.
```sh
# Enable diagnostics on read
cos start --lsp-diagnostics-on-file-read

# Fix all TypeScript errors
cos start --lsp-diagnostics-on-file-read --prompt "Fix all TS errors"
```
### --lsp-diagnostics-on-file-write

Refresh LSP diagnostics after file writes.
```sh
# Enable on write
cos start --lsp-diagnostics-on-file-write

# Both read and write
cos start --lsp-diagnostics-on-file-read --lsp-diagnostics-on-file-write
```
### --mcp-config

Path to MCP server configuration JSON.
```sh
# Project-specific MCP config
cos start --mcp-config ./project-mcp.json

# Environment-specific config
cos start --mcp-config ~/.cosine/mcp-staging.json

# Disable MCP temporarily
cos start --mcp-config /dev/null
```
### --model

Override the inference model.
```sh
# Use specific model
cos start --model gpt-5

# Fast model for simple tasks
cos start --model gpt-4o-mini --prompt "Generate test data"

# Complex tasks with reasoning
cos start --model o3 --reasoning high --prompt "Design distributed system"

# One-shot PR review
cos start --model codex --prompt "Review this PR"
```
### --parallel-tool-calls

Enable or disable parallel tool execution.
```sh
# Force sequential for debugging
cos start --parallel-tool-calls=false

# Enable parallel (default)
cos start --parallel-tool-calls=true

# Debug race conditions
cos start --parallel-tool-calls=false --prompt "Debug file modification issues"
```
### --profile

Use a configuration profile.
```sh
# Work profile
cos start --profile work

# Personal profile
cos start --profile personal

# Profile with debug
cos start --profile staging --debug
```
### --prompt / -p

One-shot mode for automation.
```sh
# Quick question
cos start --prompt "What does this function do?"

# Batch review script
#!/bin/bash
for file in $(git diff --name-only HEAD~1); do
  cos start -p "Review $file for security" --cwd . --auto-accept
done

# CI/CD integration
cos start -p "Run lint and fix issues" --auto-accept --enable-agent-commits=false
```
### --reasoning

Set reasoning effort: `none`, `low`, `medium`, `high`, `xhigh`, or `adaptive`.
```sh
# Fast responses
cos start --reasoning low --prompt "List TODO comments"

# Balanced (default)
cos start --reasoning medium

# Deep analysis
cos start --reasoning high --prompt "Refactor monolith to microservices"

# Maximum depth (GPT/Codex only)
cos start --reasoning xhigh --model gpt-5 --prompt "Prove algorithm correctness"

# Let supported Claude 4.6 models choose how much thinking they need
cos start --reasoning adaptive --model claude-sonnet-4-6-1m --prompt "Investigate a performance regression"
```

For help choosing a level, see Reasoning.
### --response-format

Structured JSON output schema (headless mode).
```sh
# Extract structured data
cos start --prompt "Extract user info" \
  --response-format '{"name":"user_info","strict":true,"schema":{"type":"object","properties":{"name":{"type":"string"},"age":{"type":"integer"}}}}'

# Parse logs to JSON
cos start --prompt "Parse logs" \
  --response-format '{"name":"errors","schema":{"type":"array","items":{"type":"object","properties":{"ts":{"type":"string"},"msg":{"type":"string"}}}}}'

# Pipe to jq
cos start --prompt "Analyze deps" --response-format '...' | jq '.dependencies'
```
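Structured output can be consumed by any JSON-aware tool, not just jq. A minimal sketch in Python, assuming the CLI prints the structured object to stdout; the sample payload below is hypothetical, matching the `user_info` schema from the first example:

```python
import json

# Hypothetical payload matching the user_info schema; in practice this
# string would come from the CLI's stdout.
raw = '{"name": "Ada Lovelace", "age": 36}'

user = json.loads(raw)

# The schema's "properties" make these fields predictable to access.
print(user["name"])
print(user["age"])
```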
### --system-prompt / -s

System prompt ID: `lumen`, `lumenLegacy`, `judge`, or `orchestrator`.
```sh
# Code review
cos start --system-prompt judge --prompt "Review this PR"

# Complex orchestration
cos start --system-prompt orchestrator --prompt "Set up microservice"

# General coding (default)
cos start --system-prompt lumen
```
### --system-prompt-file

Path to a custom system prompt file.
```sh
# Custom prompt from file
cos start --system-prompt-file ./prompts/security-expert.txt

# Project AGENTS.md
cos start --system-prompt-file ./AGENTS.md

# With model selection
cos start --system-prompt-file ./custom-prompt.txt --model gpt-5
```
### --title-model

Model for generating session titles.
```sh
# Cheap model for titles
cos start --model gpt-5 --title-model gpt-4o-mini

# Default (uses main model)
cos start --model codex
```
### --tool-call-limit

Budget tool usage with comma-separated `tool=N` pairs.
```sh
# Limit expensive operations
cos start --tool-call-limit "web_search=5,web_fetch=10"

# Prevent excessive edits
cos start --tool-call-limit "edit=20,terminal=5,web_search=3"

# CI safety: no terminal
cos start --tool-call-limit "terminal=0" --prompt "Review only"
```
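To make the `tool=N` format concrete, the sketch below shows how such a budget string decomposes into per-tool limits. This is an illustration only, not Cosine's actual parser:

```python
def parse_tool_budgets(spec: str) -> dict[str, int]:
    """Split a comma-separated list of tool=N pairs into a budget map."""
    budgets: dict[str, int] = {}
    for pair in spec.split(","):
        tool, _, limit = pair.strip().partition("=")
        budgets[tool] = int(limit)
    return budgets

# terminal=0 blocks terminal use entirely, as in the CI safety example above.
print(parse_tool_budgets("edit=20,terminal=5,web_search=3"))
```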
### --workspace

Workspace mode: `os`, `vfs`, or `worktree`.
```sh
# Direct filesystem (default)
cos start --workspace os

# Isolated virtual filesystem
cos start --workspace vfs --prompt "Experiment safely"

# Git worktree
cos start --workspace worktree --worktree-name feature-x

# Named worktree branch
cos start --workspace worktree --worktree-name bugfix-123
```
### --worktree-name

Name for the git worktree (used with `--workspace worktree`).
```sh
# Named feature worktree
cos start --workspace worktree --worktree-name new-auth-system

# Parallel experiments
cos start --workspace worktree --worktree-name experiment-1 --prompt "Try approach A"
cos start --workspace worktree --worktree-name experiment-2 --prompt "Try approach B"

# Clean up
git worktree remove experiment-1
git worktree remove experiment-2
```
### --wsresponses

WebSocket mode for lower latency.
```sh
# Enable WebSocket
cos start --wsresponses

# With performance flags
cos start --wsresponses --reasoning low --model gpt-4o-mini

# Interactive sessions
cos start --wsresponses --prompt "Quick design session"
```
## Related reference

- Commands — Command overview
- Overview — Product overview and where to start
- Quickstart — Fastest path to first success
- Configuration — Config files and precedence
- MCP Configuration — Connect external tools