
# Flags

All command-line flags can be used with `cos start` or other commands. Flags override configuration-file settings.

| Flag | Short | Description | Default |
| --- | --- | --- | --- |
| `--api-key` | `-k` | API token override | From config |
| `--auto-accept` | - | Auto-approve all tools | `false` |
| `--auto-accept-plans` | - | Auto-approve plan mode | `false` |
| `--browser-agent-model` | - | Model for browser subagents | `gemini-3-flash-preview` |
| `--cdp-url` | - | Chrome CDP URL | From config |
| `--enable-agent-commits` | - | Enable Agent Commits | `true` |
| `--cwd` | - | Working directory | `.` |
| `--debug` | - | Debug mode | `false` |
| `--direct` | - | Connect to remote-runtime | - |
| `--forceUpdate` | - | Force update on boot | `false` |
| `--host` | - | API host/base URL | From config |
| `--inference` | `-i` | Inference base URL | From config |
| `--lsp-diagnostics-on-file-read` | - | LSP diagnostics on read | `false` |
| `--lsp-diagnostics-on-file-write` | - | LSP diagnostics on write | `false` |
| `--mcp-config` | - | MCP config JSON path | - |
| `--model` | - | Model override | From config |
| `--parallel-tool-calls` | - | Allow parallel tools | `true` |
| `--profile` | - | Config profile | - |
| `--prompt` | `-p` | One-shot prompt | - |
| `--reasoning` | - | Reasoning level | From config |
| `--response-format` | - | Structured output schema | - |
| `--system-prompt` | `-s` | System prompt ID | `lumen` |
| `--system-prompt-file` | - | Custom prompt file | - |
| `--title-model` | - | Model for titles | From config |
| `--tool-call-limit` | - | Per-tool budgets | - |
| `--workspace` | - | Workspace mode | `os` |
| `--worktree-name` | - | Worktree name | - |
| `--wsresponses` | - | WebSocket mode | `false` |
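
Flags can be combined freely, and since they take precedence over configuration-file settings, wrapper scripts can layer a shared base set with per-environment additions. A minimal bash sketch (the profile name and flag values here are illustrative, not prescribed defaults):

```sh
#!/bin/bash
# Build flag sets once in arrays, then extend them per environment.
COMMON_FLAGS=(--profile work --reasoning medium)
CI_FLAGS=("${COMMON_FLAGS[@]}" --auto-accept --enable-agent-commits=false)

# Dry run: print the final command line instead of executing it.
echo cos start "${CI_FLAGS[@]}" --prompt "Run tests"
```

Dropping the `echo` runs the composed command for real.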

## `--api-key`, `-k`

API token override for a single session.

```sh
# Override API key
cos start --api-key sk-abc123

# With one-shot mode for automation
cos start -k "$OPENAI_API_KEY" --prompt "Review code" --auto-accept
```

## `--auto-accept`

Auto-approve all tool calls. ⚠️ **Use with caution**: this puts the agent in fully autonomous mode without confirmations.

```sh
# Autonomous refactoring
cos start --auto-accept --prompt "Refactor auth to JWT"

# With Agent Commits enabled for tracking
cos start --auto-accept --enable-agent-commits=true

# CI/CD automation
cos start --auto-accept --prompt "Run tests and fix failures" --cwd /path/to/project
```

## `--auto-accept-plans`

Auto-approve plan mode transitions but review individual edits.

```sh
# Auto-approve plans only
cos start --auto-accept-plans

# With high reasoning for better planning
cos start --auto-accept-plans --reasoning high --prompt "Design database schema"
```

## `--browser-agent-model`

Model for browser automation subagents.

```sh
# Faster model for simple tasks
cos start --cdp-url http://localhost:9222 --browser-agent-model gpt-4o-mini

# More capable model for complex automation
cos start --cdp-url http://localhost:9222 --browser-agent-model claude-3-5-sonnet
```

## `--cdp-url`

Chrome DevTools Protocol (CDP) URL for browser automation.

```sh
# Local Chrome with remote debugging
cos start --cdp-url http://localhost:9222

# Remote Chrome instance
cos start --cdp-url http://192.168.1.100:9222

# One-shot screenshot
cos start --cdp-url http://localhost:9222 --prompt "Screenshot example.com" --auto-accept
```

Start Chrome with remote debugging:

```sh
# macOS
open -n -a "Google Chrome" --args --remote-debugging-port=9222 --user-data-dir=/tmp/cosine-chrome

# Linux
google-chrome --remote-debugging-port=9222 --user-data-dir=/tmp/cosine-chrome
```

## `--enable-agent-commits`

Enable or disable automatic Agent Commits after each turn.

Aliases:

- `--enable-agent-commit`
- `--enable-commit`
- `--enable-commits`
- `--commit`

```sh
# Disable Agent Commits for this session
cos start --enable-agent-commits=false --prompt "Experimental changes"

# Explicitly enable (default)
cos start --enable-agent-commits=true

# Short alias
cos start --commit=false
```

## `--cwd`

Set the working directory for the session.

```sh
# Work on a specific project
cos start --cwd ~/projects/my-app

# With automation
cos start --cwd ~/projects/api --prompt "Generate docs" --auto-accept
```

Batch script example:

```sh
#!/bin/bash
for project in ~/projects/*/; do
  cos start --cwd "$project" --prompt "Update deps" --auto-accept
done
```

## `--debug`

Debug mode; dumps the context window to `/tmp/cosine/window.json`.

```sh
# Enable debug logging
cos start --debug

# Debug with output to file
cos start --debug --prompt "Debug this" 2>&1 | tee debug.log
```

## `--direct`

Connect to a remote runtime at the given TCP address.

```sh
# Local remote-runtime
cos start --direct localhost:9876

# Remote instance
cos start --direct 192.168.1.50:9876 --api-key sk-...

# Containerized environment
cos start --direct cos-runtime.internal:9876 --prompt "Heavy computation"
```

## `--forceUpdate`

Force an update check on startup (for testing).

```sh
# Force immediate update
cos start --forceUpdate
```

## `--host`

API host/base URL override.

```sh
# Custom Cosine instance
cos start --host https://cosine.mycompany.internal

# Development/testing
cos start --host http://localhost:8080 --debug

# Staging with API key
cos start --host https://api.staging.cosine.wtf --api-key staging-key-123
```

## `--inference`, `-i`

OpenAI-compatible inference endpoint.

```sh
# Use OpenAI directly
cos start --inference https://api.openai.com/v1 --api-key "$OPENAI_API_KEY"

# Local LLM server
cos start -i http://localhost:8000/v1

# Azure OpenAI
cos start -i https://my-resource.openai.azure.com/openai/deployments/my-deployment
```

## `--lsp-diagnostics-on-file-read`

Include LSP diagnostics in `read_file` output.

```sh
# Enable diagnostics on read
cos start --lsp-diagnostics-on-file-read

# Fix all TypeScript errors
cos start --lsp-diagnostics-on-file-read --prompt "Fix all TS errors"
```

## `--lsp-diagnostics-on-file-write`

Refresh LSP diagnostics after file writes.

```sh
# Enable on write
cos start --lsp-diagnostics-on-file-write

# Both read and write
cos start --lsp-diagnostics-on-file-read --lsp-diagnostics-on-file-write
```

## `--mcp-config`

Path to an MCP server configuration JSON file.

```sh
# Project-specific MCP config
cos start --mcp-config ./project-mcp.json

# Environment-specific config
cos start --mcp-config ~/.cosine/mcp-staging.json

# Disable MCP temporarily
cos start --mcp-config /dev/null
```
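
The exact schema cos accepts is not documented here, but MCP configuration files conventionally use an `mcpServers` map, so a sketch might look like the following (the server name, command, and path are hypothetical):

```json
{
  "mcpServers": {
    "project-fs": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    }
  }
}
```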

## `--model`

Override the inference model.

```sh
# Use a specific model
cos start --model gpt-5

# Fast model for simple tasks
cos start --model gpt-4o-mini --prompt "Generate test data"

# Complex tasks with reasoning
cos start --model o3 --reasoning high --prompt "Design distributed system"

# One-shot PR review
cos start --model codex --prompt "Review this PR"
```

## `--parallel-tool-calls`

Enable or disable parallel tool execution.

```sh
# Force sequential for debugging
cos start --parallel-tool-calls=false

# Enable parallel (default)
cos start --parallel-tool-calls=true

# Debug race conditions
cos start --parallel-tool-calls=false --prompt "Debug file modification issues"
```

## `--profile`

Use a named configuration profile.

```sh
# Work profile
cos start --profile work

# Personal profile
cos start --profile personal

# Profile with debug
cos start --profile staging --debug
```

## `--prompt`, `-p`

One-shot mode for automation.

```sh
# Quick question
cos start --prompt "What does this function do?"
```

Batch review script:

```sh
#!/bin/bash
for file in $(git diff --name-only HEAD~1); do
  cos start -p "Review $file for security" --cwd . --auto-accept
done
```

CI/CD integration:

```sh
cos start -p "Run lint and fix issues" --auto-accept --enable-agent-commits=false
```

## `--reasoning`

Set reasoning effort: `none`, `low`, `medium`, `high`, `xhigh`, or `adaptive`.

```sh
# Fast responses
cos start --reasoning low --prompt "List TODO comments"

# Balanced (default)
cos start --reasoning medium

# Deep analysis
cos start --reasoning high --prompt "Refactor monolith to microservices"

# Maximum depth (GPT/Codex only)
cos start --reasoning xhigh --model gpt-5 --prompt "Prove algorithm correctness"

# Let supported Claude 4.6 models choose how much thinking they need
cos start --reasoning adaptive --model claude-sonnet-4-6-1m --prompt "Investigate a performance regression"
```

For help choosing a level, see Reasoning.


## `--response-format`

Structured JSON output schema (headless mode).

```sh
# Extract structured data
cos start --prompt "Extract user info" \
  --response-format '{"name":"user_info","strict":true,"schema":{"type":"object","properties":{"name":{"type":"string"},"age":{"type":"integer"}}}}'

# Parse logs to JSON
cos start --prompt "Parse logs" \
  --response-format '{"name":"errors","schema":{"type":"array","items":{"type":"object","properties":{"ts":{"type":"string"},"msg":{"type":"string"}}}}}'

# Pipe to jq
cos start --prompt "Analyze deps" --response-format '...' | jq '.dependencies'
```
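
Because the schema travels as one long single-quoted JSON string, a stray quote or comma can fail in ways that are hard to spot. It can help to validate the string as JSON before handing it to `--response-format`; this sketch uses `python3 -m json.tool` from the Python standard library, so it works even without jq installed:

```sh
# Validate the --response-format payload before using it.
schema='{"name":"user_info","strict":true,"schema":{"type":"object","properties":{"name":{"type":"string"}}}}'

if echo "$schema" | python3 -m json.tool >/dev/null 2>&1; then
  echo "schema is valid JSON"
else
  echo "schema is malformed" >&2
fi
```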

## `--system-prompt`, `-s`

System prompt ID: `lumen`, `lumenLegacy`, `judge`, or `orchestrator`.

```sh
# Code review
cos start --system-prompt judge --prompt "Review this PR"

# Complex orchestration
cos start --system-prompt orchestrator --prompt "Set up microservice"

# General coding (default)
cos start --system-prompt lumen
```

## `--system-prompt-file`

Path to a custom system prompt file.

```sh
# Custom prompt from file
cos start --system-prompt-file ./prompts/security-expert.txt

# Project AGENTS.md
cos start --system-prompt-file ./AGENTS.md

# With model selection
cos start --system-prompt-file ./custom-prompt.txt --model gpt-5
```

## `--title-model`

Model for generating session titles.

```sh
# Cheap model for titles
cos start --model gpt-5 --title-model gpt-4o-mini

# Default (uses main model)
cos start --model codex
```

## `--tool-call-limit`

Budget tool usage with comma-separated `tool=N` pairs.

```sh
# Limit expensive operations
cos start --tool-call-limit "web_search=5,web_fetch=10"

# Prevent excessive edits
cos start --tool-call-limit "edit=20,terminal=5,web_search=3"

# CI safety: no terminal
cos start --tool-call-limit "terminal=0" --prompt "Review only"
```
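
The limit value is a comma-separated list of `tool=N` pairs. Plain bash can show how such a value decomposes; this parser is purely illustrative, not part of cos:

```sh
# Split a tool-call-limit value into its tool=N pairs.
limits="edit=20,terminal=5,web_search=3"

IFS=',' read -ra pairs <<< "$limits"
for pair in "${pairs[@]}"; do
  echo "tool '${pair%%=*}' capped at ${pair#*=} calls"
done
```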

## `--workspace`

Workspace mode: `os`, `vfs`, or `worktree`.

```sh
# Direct filesystem (default)
cos start --workspace os

# Isolated virtual filesystem
cos start --workspace vfs --prompt "Experiment safely"

# Git worktree
cos start --workspace worktree --worktree-name feature-x

# Named worktree branch
cos start --workspace worktree --worktree-name bugfix-123
```

## `--worktree-name`

Name for the git worktree (used with `--workspace worktree`).

```sh
# Named feature worktree
cos start --workspace worktree --worktree-name new-auth-system

# Parallel experiments
cos start --workspace worktree --worktree-name experiment-1 --prompt "Try approach A"
cos start --workspace worktree --worktree-name experiment-2 --prompt "Try approach B"

# Clean up
git worktree remove experiment-1
git worktree remove experiment-2
```

## `--wsresponses`

WebSocket mode for lower latency.

```sh
# Enable WebSocket
cos start --wsresponses

# With performance flags
cos start --wsresponses --reasoning low --model gpt-4o-mini

# Interactive sessions
cos start --wsresponses --prompt "Quick design session"
```