
When to Farm and When to Just Do the Work

The first thing you learn when you build an orchestrator is how to spawn subagents. The second thing you learn — usually after burning through a lot of tokens — is when not to.

Mel received a task from Tony: update a config file and adjust two function calls to match. Three files, maybe 15 minutes of work. Mel wrote a task spec, created a git worktree, spawned a tmux session, kicked off a loop script, and started monitoring via heartbeat. Twenty minutes of setup for fifteen minutes of coding.

The subagent finished the work in one iteration. Mel verified it, ran Roberto’s QA review, created a PR, cleaned up the worktree, and removed the branch. Total elapsed time: 45 minutes. Total if Mel had just made the three changes herself: 15 minutes.

The Problem

Orchestrator agents learn to spawn subagents because that’s what their setup docs teach. The task farming protocol is the centerpiece of the config: write a spec, create a worktree, spawn a loop, monitor via heartbeat, handle completions, run QA, create a PR. It’s well-engineered machinery. And when you have well-engineered machinery, everything starts looking like it needs machining.

But the machinery has real costs:

  • Task spec writing: 5-10 minutes to fill out objective, acceptance criteria, test gates, verification command, relevant files, and hints
  • Worktree setup: git worktree add, directory initialization, branch creation
  • Tmux and loop: session creation, pipe-pane for logging, iteration configuration
  • Task registration: updating the active-tasks registry
  • Monitoring: heartbeat checks every 10 minutes, stall detection, status reporting
  • Cleanup: worktree removal, branch deletion, registry update

For a task that takes 4 hours and runs overnight, this overhead is negligible. For a 20-minute fix, it’s the majority of the work.
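The ratio is easy to make concrete. In this sketch, only the task-spec figure comes from the list above (5-10 minutes, taken as 7); every other per-step number is an illustrative assumption:

```python
# Rough per-step overhead in minutes. Only the task-spec figure
# (5-10 min, midpoint 7) comes from the list above; the rest are
# illustrative assumptions.
OVERHEAD_MINUTES = {
    "task spec": 7,
    "worktree setup": 3,
    "tmux + loop": 3,
    "task registration": 1,
    "monitoring": 5,
    "cleanup": 3,
}

def overhead_share(task_minutes: float) -> float:
    """Fraction of total elapsed time spent on the farming machinery."""
    overhead = sum(OVERHEAD_MINUTES.values())  # 22 minutes with these numbers
    return overhead / (overhead + task_minutes)

print(round(overhead_share(20), 2))   # 20-minute fix: machinery dominates
print(round(overhead_share(240), 2))  # 4-hour task: machinery is noise
```

With these assumed numbers, the machinery is over half the elapsed time for a 20-minute fix and under a tenth for a 4-hour task.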

Why This Happens

The subagent pattern exists to solve a real problem: context limits. A coding agent working on a large feature will hit context compaction. When that happens, it loses track of what it already did, repeats work, or makes conflicting changes. The worktree/loop/handoff system solves this — each iteration starts fresh, reads the execution log, picks up from the handoff, and continues.
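The loop shape is simple enough to sketch. Names here are hypothetical (`run_agent` stands in for spawning one fresh-context iteration of the coding agent; `Result` is an assumed return type), but the state flow matches the description above: nothing survives between iterations except the handoff notes and the execution log.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Result:
    done: bool
    notes: str  # handoff notes for the next iteration

def run_loop(run_agent: Callable[[str], Result],
             log: List[str], max_iterations: int = 10) -> bool:
    """Each iteration starts with fresh context; the only state that
    survives compaction is the handoff the previous iteration wrote."""
    handoff = ""
    for _ in range(max_iterations):
        result = run_agent(handoff)  # fresh session, seeded with the handoff
        log.append(result.notes)     # execution log, readable next iteration
        handoff = result.notes
        if result.done:
            return True
    return False  # stalled: surfaces via the orchestrator's heartbeat check
```

A fake agent that finishes on its third pass exercises the whole cycle: `run_loop(fake_agent, log)` returns True and `log` holds one handoff entry per iteration.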

But most code changes don’t hit context limits. A bug fix, a config update, a small feature, a refactor of one module — these fit comfortably in a single session.  Mel can read the relevant files, make the changes, run the tests, and commit. No spec needed. No worktree needed. No monitoring needed.

The mistake is treating the subagent path as the default instead of the exception.

The Fix

Add a decision step to your task planning. Before spawning anything, ask: does this task actually need isolation?

Do the work directly when:

  • It fits in one session — you can hold the whole change in your head, make it, and verify it without risk of context compaction
  • There’s one task — no need for parallelism, no need for the orchestrator to stay free for monitoring
  • It’s concrete — you know what to change and what the result should look like
  • It’s under ~1 hour — a rough proxy for “fits in context”

Farm to a subagent when:

  • It runs unsupervised — overnight, weekend, or any period where no one is watching. The loop/handoff system handles restarts and compaction gracefully
  • You need parallelism — multiple independent tasks at once. The orchestrator can’t code and monitor simultaneously
  • The scope is uncertain — research, exploration, or “figure out why this is broken” tasks where the agent might go down several paths
  • It’s large — 2+ hours of agent work, multiple modules, likely to hit context limits
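The two lists reduce to a small predicate. A sketch, with parameter names and thresholds as assumptions that mirror the ~1-hour and 2+-hour proxies above:

```python
def should_farm(est_minutes: float,
                unsupervised: bool = False,
                parallel_tasks: int = 1,
                scope_uncertain: bool = False) -> bool:
    """Farm only when the task needs isolation, parallelism, or
    unsupervised runtime; otherwise do the work directly."""
    if unsupervised or parallel_tasks > 1 or scope_uncertain:
        return True
    return est_minutes >= 120  # 2+ hours: likely to hit context limits
```

So `should_farm(20)` says do a 20-minute webhook fix directly, while `should_farm(30, scope_uncertain=True)` farms a "figure out why this is broken" task even though the estimate is small.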

In practice

Task: "Update the webhook endpoint to validate signatures"
→ Clear scope, 20 minutes
→ Do it directly

Task: "Add Stripe billing with subscription management"
→ New models, API integration, webhook handlers, tests, migrations
→ Farm it — this is a multi-hour feature

Task: "Fix the CSS on the pricing page"
→ Clear scope, 10 minutes
→ Do it directly

Task: "Refactor the auth system from sessions to JWT"
→ Touches every route, middleware, tests, frontend
→ Farm it — probably overnight work

Task: "Research why PDF generation is slow and fix it"
→ Uncertain scope, might need profiling, multiple attempts
→ Farm it — exploration benefits from isolation

The pattern is simple: if you, as a human, would hesitate to set up a whole dev environment for this task, your orchestrator shouldn’t either.

Key Takeaway

The subagent machinery exists for tasks that won’t fit in one context window. That’s a smaller set than most orchestrator configs imply. Default to direct execution. Reach for the farming infrastructure only when the task genuinely needs isolation, parallelism, or unsupervised runtime. The simplest agent architecture that solves the problem is always the right one.

FAQ

When should an orchestrator agent spawn a subagent vs doing work directly?

Spawn a subagent when the task needs to run overnight unsupervised, when you need to run multiple tasks in parallel, when the work has uncertain scope (research/exploration), or when it's large enough to hit context limits (2+ hours of work). For everything else — fixes, config changes, features, refactors that fit in one session — the orchestrator should just do it.

What's the actual overhead of spawning a subagent?

Writing a task spec, creating a git worktree, setting up a tmux session, configuring log capture, registering the task, posting a START notification, then monitoring via periodic heartbeat checks. For a 20-minute fix, that setup alone can take longer than the fix itself.

How do I know if a task will fit in one session?

If you can hold the whole change in your head at once — understand what needs to change, make the changes, and verify them — it fits. Time estimate under ~1 hour is a good proxy. The number of files doesn't matter; what matters is conceptual scope.