Run big tasks and keep shipping
Learn how to tackle large, complex tasks by breaking them into parallel workstreams and delegating to multiple agents simultaneously — all while maintaining control and quality.
What This Video Covers
The Problem with Big Tasks
Complex projects naturally slow down development. A large refactoring, a multi-page website build, or a comprehensive test suite — these tasks take time, and traditional approaches often force you to wait for sequential completion:
- One agent processes everything step by step
- You’re blocked until the entire task completes
- Context accumulates and eventually overflows
- Progress feels slow and hard to track
This creates a frustrating bottleneck where ambitious work sits in progress for hours or days.
Breaking Down into Parallel Workstreams
The solution is to decompose large tasks into independent streams that can execute simultaneously. Instead of one agent doing everything:
- Stream A handles the frontend components
- Stream B builds the API endpoints
- Stream C writes the database migrations
- Stream D creates tests for the new features
Each stream is self-contained: none depends on the others, so they can all run at the same time.
The key skill is learning to identify natural boundaries in your work:
| Task Type | How to Decompose |
|---|---|
| Multi-page website | One agent per page or section |
| Refactoring | One agent per module or component |
| Test coverage | One agent per test file or feature area |
| Research synthesis | One agent per source or topic |
| Documentation | One agent per section or topic |
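The first row of the table can be sketched in a few lines. This is only an illustration — the `pages` list, the prompt wording, and the idea of handing each prompt to an agent are assumptions, not a real API:

```python
# Sketch: decompose a multi-page website build into one prompt per page.
# The page names and prompt text here are illustrative.
pages = ["landing", "pricing", "docs", "blog"]

prompts = {
    page: f"Build the {page} page using the shared layout and design tokens."
    for page in pages
}

# Each prompt would go to a separate agent; no prompt depends on another,
# which is what makes the streams safe to run in parallel.
for page, prompt in prompts.items():
    print(f"[{page}] {prompt}")
```

The same mapping works for any row of the table: swap pages for modules, test files, sources, or doc sections.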
Swarm Mode: Parallel by Design
Swarm Mode is built specifically for this pattern. When you enable Swarm Mode:
- The primary agent acts as an orchestrator
- It analyzes your task and identifies parallel opportunities
- It spawns subagents — one for each independent stream
- All subagents run simultaneously, reporting progress in real time
- Results are coordinated and integrated automatically
This isn’t just running multiple agents — it’s structured parallel execution with built-in coordination.
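The orchestrator pattern itself is a familiar fan-out/fan-in shape. As a minimal sketch (with `run_agent` as a stand-in for whatever actually executes a subagent — not Swarm Mode's real API):

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a subagent: in reality this would be a long-running agent
# session; here it just reports its stream as finished.
def run_agent(stream: str) -> str:
    return f"{stream}: done"

streams = ["frontend components", "API endpoints", "database migrations", "tests"]

# The orchestrator fans the independent streams out to run simultaneously,
# then collects every result for integration.
with ThreadPoolExecutor(max_workers=len(streams)) as pool:
    results = list(pool.map(run_agent, streams))
```

The coordination lives in one place (the orchestrator), which is what distinguishes this from simply launching several unrelated agents by hand.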
Managing Multiple Agents Without Losing Control
Running many agents in parallel raises a legitimate concern: how do you stay in control?
The answer lies in visibility and checkpoints:
The Agents Panel (Ctrl+3 or Alt+3) shows all active subagents, their status, and which files they’re working on. At a glance, you can see:
- How many agents are running
- What each agent is doing
- Which agents have completed
- Any errors or blocked agents
Foreground vs Background Execution:
| Mode | When to Use |
|---|---|
| Foreground | You want to watch and guide in real time; agents share your workspace |
| Background | You’re delegating and will review later; uses git worktree on a separate branch |
Checkpoint Decisions: Unlike fully autonomous systems, Swarm Mode lets you review before major integrations. The orchestrator can pause and ask for direction before combining streams that might conflict.
The Speed vs Coordination Trade-off
More parallel agents mean faster execution — but only up to a point. There’s a trade-off:
Parallelism benefits:
- Wall-clock time decreases dramatically
- Different perspectives on the same problem
- Natural fault isolation (one agent failing doesn’t block others)
Coordination costs:
- More cognitive overhead to track progress
- Potential for conflicting changes to the same files
- Integration complexity when streams need to merge
The sweet spot depends on your task:
- Highly independent streams (different pages, separate modules): Go wide — 4-8 agents
- Somewhat related streams (different features in the same codebase): Moderate — 2-4 agents
- Tightly coupled streams (refactoring shared core): Sequential — 1-2 agents
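The sweet spot can be modeled roughly: splitting work across `n` agents divides the sequential time, but each extra agent adds coordination overhead. The cost figures below are illustrative assumptions, not measurements:

```python
# Rough model: parallel wall-clock time = sequential time / n agents,
# plus a per-agent coordination cost. Both numbers are assumptions.
def wall_clock(sequential_minutes: float, n_agents: int,
               coordination_minutes_per_agent: float = 5.0) -> float:
    return sequential_minutes / n_agents + coordination_minutes_per_agent * n_agents

# For a 180-minute task, going wider helps only up to a point:
timings = {n: wall_clock(180, n) for n in (1, 2, 4, 6, 8)}
best_n = min(timings, key=timings.get)
```

Under these numbers the optimum is six agents; past that, coordination cost eats the parallelism gains, which is why tightly coupled work is better kept sequential.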
Practical Examples of Running Big Tasks
Example 1: Building a Multi-Feature Dashboard
Starting task: “Build a customer analytics dashboard with 5 different visualizations”
Swarm approach:
- Agent 1: Authentication and routing
- Agent 2: Data fetching layer and API integration
- Agent 3: Chart component A (revenue metrics)
- Agent 4: Chart component B (user activity)
- Agent 5: Chart component C (conversion funnel)
- Agent 6: Layout, navigation, and polish
Total time: 45 minutes parallel vs. 3+ hours sequential
Example 2: Migrating a Legacy API
Starting task: “Migrate our REST API to GraphQL”
Swarm approach:
- Agent 1: Schema design for User endpoints
- Agent 2: Schema design for Order endpoints
- Agent 3: Resolvers for User queries
- Agent 4: Resolvers for Order queries
- Agent 5: Mutation implementations
- Agent 6: Client-side query updates
The orchestrator coordinates the shared schema definition while subagents work in parallel on their domains.
Example 3: Content Production at Scale
Starting task: “Create 20 blog posts from this research material”
Swarm approach:
- 10 agents, each writing 2 posts
- Each agent has access to the shared research folder
- Posts are written in parallel and saved to individual files
- Orchestrator can review and provide feedback in batches
Keeping Context and Continuity
A common concern: if each agent works independently, how do they stay consistent?
Shared Filesystem: All agents in a swarm share the same working directory. When Agent 1 creates a utility function, Agent 2 can import and use it. The filesystem becomes the shared context.
agent.md: Your project’s agent.md provides baseline context to every subagent. Define your coding standards, architecture patterns, and conventions once — all agents inherit them.
Plan as Contract: When you start from a Plan Mode session, the approved plan acts as a coordination contract. Every subagent knows the overall architecture and their specific role within it.
Incremental Integration: The orchestrator doesn’t wait for all agents to finish before starting integration. As agents complete their streams, results are integrated progressively, giving early visibility into how the pieces fit together.
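Incremental integration is the standard "process as completed" pattern. A minimal sketch, again with `run_agent` as a hypothetical stand-in for real agent work:

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

# Stand-in for a subagent; the sleep simulates streams finishing at
# different times, as they do in practice.
def run_agent(stream: str) -> str:
    time.sleep(0.01)
    return stream

streams = ["auth", "api", "charts", "tests"]
integrated = []

with ThreadPoolExecutor() as pool:
    futures = [pool.submit(run_agent, s) for s in streams]
    # Integrate each stream the moment it finishes, rather than
    # blocking until every agent is done.
    for future in as_completed(futures):
        integrated.append(future.result())
```

The early results give you a running preview of how the pieces fit, so problems surface before the last agent finishes.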
Best Practices for Large-Scale Agent Workflows
1. Start with a Clear Plan
Don’t jump into Swarm Mode blindly. Use Plan Mode first to:
- Understand the full scope
- Identify the natural boundaries
- Define interfaces between streams
- Set expectations for each subagent
2. Keep Streams Independent
The more independent your streams, the smoother parallel execution will be:
- Minimize shared files between agents
- Define clear interfaces upfront
- Avoid having multiple agents edit the same function
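One way to enforce this before spawning anything is a quick disjointness check over the planned file assignments. A sketch — the stream names and file paths are illustrative:

```python
# Sketch: verify that planned streams touch disjoint sets of files.
# The assignments here are made up for illustration.
stream_files = {
    "frontend": {"src/App.tsx", "src/Nav.tsx"},
    "api": {"server/routes.ts", "server/db.ts"},
    "tests": {"tests/app.test.ts"},
}

def find_conflicts(assignments: dict) -> set:
    """Return every file claimed by more than one stream."""
    seen, conflicts = set(), set()
    for files in assignments.values():
        conflicts |= seen & files
        seen |= files
    return conflicts

# An empty result means the streams are safe to run in parallel.
conflicts = find_conflicts(stream_files)
```

If the check comes back non-empty, either merge the overlapping streams or move the shared file behind an interface one stream owns.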
3. Use Background Mode for Large Swarms
When running 4+ agents on a significant task, consider Background mode:
- Creates a clean git worktree on a new branch
- Won’t clutter your main workspace timeline
- Can check back later when everything is done
- Easy to review the full diff before merging
4. Monitor the Agents Panel
Don’t just fire and forget. Check Ctrl+3 periodically:
- See which agents are stuck or erroring
- Spot potential conflicts early
- Provide guidance to struggling agents
- Celebrate when they all turn green
5. Iterate on Decomposition
Your first attempt at breaking down a task might not be optimal. That’s fine:
- Some agents might finish much faster than others
- You might discover hidden dependencies
- Adjust the plan and respawn agents as needed
6. Maintain Quality Standards
Parallel execution shouldn’t mean lower quality:
- Include a “review and test” stream in your swarm
- Ask agents to write tests alongside their features
- Use the orchestrator to enforce standards across all outputs
Key Takeaways
- Decompose big tasks into parallel streams — look for natural boundaries where work can proceed independently
- Use Swarm Mode for structured parallel execution — the orchestrator pattern provides coordination that ad-hoc parallelism lacks
- Choose between foreground and background based on whether you want real-time visibility or clean separation
- Balance speed and coordination — more agents isn’t always better; match parallelism to task independence
- Let the filesystem be your shared context — agents stay consistent by reading and writing shared files
- Ship continuously — parallel execution lets you make progress on big projects without blocking other work
- Quality doesn’t require sequential execution — with proper planning and interfaces, parallel work meets the same standards
Related Resources: