Never add agents to sequential reasoning tasks. Distribute additional agents only to genuinely parallel workstreams where coordination overhead is measurably less than the throughput gain.
Why This Is a Rule
Brooks's Law — "adding manpower to a late software project makes it later" — applies to any system where tasks are sequential and coordination has cost. Adding a second person to a single-threaded reasoning task doesn't halve the time; it adds communication overhead to an inherently serial process. The reasoning chain has dependencies that can't be parallelized: conclusion B requires premise A, which requires observation A1. No amount of additional agents changes this.
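The ceiling imposed by serial dependencies can be made concrete with Amdahl's law. A minimal sketch (the function name is illustrative, not from the source):

```python
def amdahl_speedup(serial_fraction: float, n_agents: int) -> float:
    """Maximum speedup when a fraction of the work is inherently serial.

    With serial fraction s, even infinitely many agents cap speedup at 1/s.
    """
    s = serial_fraction
    return 1.0 / (s + (1.0 - s) / n_agents)

# A reasoning chain that is 90% serial barely benefits from more agents:
# amdahl_speedup(0.9, 10)  ~= 1.10
# amdahl_speedup(0.9, 100) ~= 1.11  (hard ceiling near 1/0.9 ~= 1.11)
```

The takeaway matches the rule: when the serial fraction dominates, agent count is almost irrelevant to completion time.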
The only legitimate reason to add agents is genuinely parallel workstreams: tasks with no dependencies between them, where each agent can work independently. In that case total throughput can increase roughly linearly, but only if the coordination overhead (handoffs, synchronization, communication) stays below the throughput gain. If two agents each lose 30% of their time to coordination, together they produce 140% of a solo agent's output at 200% of the cost: a modest wall-clock win bought at nearly double the price, and a net loss if overhead grows any further.
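The overhead arithmetic generalizes: with n agents each losing a fraction c of their time to coordination, net throughput is n(1-c) at cost n. A sketch under that assumption (names are illustrative):

```python
def net_throughput(n_agents: int, coordination_fraction: float) -> float:
    """Combined output in units of one solo agent's output, assuming each
    agent loses coordination_fraction of its time to handoffs and sync."""
    return n_agents * (1.0 - coordination_fraction)

# Two agents, each spending 30% of their time coordinating:
speedup = net_throughput(2, 0.30)   # 1.4x the output of one agent...
cost = 2.0                          # ...at 2.0x the cost
# Per-agent efficiency drops to 70%; scale only if wall-clock throughput
# matters more than cost, and the margin over solo output is clear.
```

Note that net throughput per unit cost is always (1-c) < 1 once coordination exists, so parallelism buys speed, never efficiency.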
The measurability criterion prevents the "more people means more output" fallacy: you must quantify both the throughput gain and the coordination cost before concluding that scaling helps. Intuitive estimates of coordination cost are systematically low (people underestimate how much time they spend coordinating).
When This Fires
- When considering adding people, tools, or parallel processes to a task that's too slow
- When a project is behind schedule and "more resources" is proposed as the solution
- When designing multi-agent cognitive systems and deciding agent count
- When Brooks's Law is being violated in practice ("let's add more people to speed this up")
Common Failure Mode
Adding agents to bottlenecked sequential work: "The analysis is taking too long — let's have two analysts." If the analysis requires understanding the full dataset and building conclusions iteratively, splitting it between two people adds context-sharing overhead without enabling parallelism. The serial nature of the reasoning is the constraint; more agents can't parallelize it.
The Protocol
(1) Before adding agents or resources to a task, classify it: sequential (each step depends on the previous) or parallelizable (independent subtasks can run concurrently).
(2) If sequential: do not add agents. Optimize the serial process itself; identify the constraint within the serial chain and verify that improving it would raise total system throughput before investing effort.
(3) If parallelizable: estimate the coordination cost (handoff time, synchronization time, communication overhead, duplicate work) and estimate the throughput gain from parallel execution.
(4) Add agents only if the throughput gain exceeds the coordination cost with margin. If the math is marginal, don't add: coordination costs are consistently underestimated.
(5) After adding, verify 7-14 days after deployment: did the actual coordination cost match the estimate? Remove the agents if actual cost exceeds the estimate by more than 50%.
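The decision in steps (1)-(4) can be sketched as a small helper. The field names, units, and the 1.5x margin are assumptions chosen for illustration, not values from the source:

```python
from dataclasses import dataclass

@dataclass
class ScalingEstimate:
    """Hypothetical inputs to the add-agents decision."""
    sequential: bool          # step 1: is the task inherently serial?
    throughput_gain: float    # step 3: estimated extra output, in solo-agent units
    coordination_cost: float  # step 3: estimated overhead, in the same units

def should_add_agents(est: ScalingEstimate, margin: float = 1.5) -> bool:
    """Steps 2 and 4: never scale serial work; otherwise require the gain
    to beat coordination cost by a margin, since coordination costs are
    systematically underestimated."""
    if est.sequential:
        return False
    return est.throughput_gain > margin * est.coordination_cost

# should_add_agents(ScalingEstimate(False, 0.9, 0.3))  -> True  (0.9 > 0.45)
# should_add_agents(ScalingEstimate(False, 0.5, 0.4))  -> False (marginal)
# should_add_agents(ScalingEstimate(True, 2.0, 0.1))   -> False (serial task)
```

Step (5) then closes the loop: log the estimate at decision time so the post-deployment review can compare it against the actual coordination cost.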