Your system worked until you improved it
You had a system that functioned. Three tools, four habits, a handful of recurring processes — all coordinated, all producing value. Then you decided to improve it. You added a new app. You introduced a new weekly review. You subscribed to a new information source. You started a new practice. Each addition made perfect sense in isolation. Each one addressed a real gap. And within a month, the system that worked was buried under the system you built on top of it.
This is the most common way multi-agent systems fail. Not through neglect, but through enthusiastic addition. The previous lesson (L-0516) taught you to assess the health of your agent ecosystem as a whole. This lesson teaches you the discipline that keeps it healthy: adding new agents with the same deliberation you would use to introduce a new species into an ecosystem. Because the mathematics of interaction are unforgiving, and they do not care how good your intentions were.
The combinatorial problem: why one more is never just one more
When you add a new agent to a system of existing agents, you are not adding one thing. You are adding one agent plus n new interaction channels, where n is the number of agents already in the system.
Fred Brooks identified this pattern in 1975 in The Mythical Man-Month. His observation — that "adding manpower to a late software project makes it later" — was grounded in a mathematical reality that extends far beyond software. The number of communication channels in a group of n agents is n(n-1)/2. A team of three has three channels. A team of six has fifteen. A team of ten has forty-five. The work capacity grows linearly with each addition. The coordination cost grows quadratically.
Brooks called this the "intercommunication formula," and it explains why most system failures are not caused by bad agents. They are caused by too many agents. A team of three excellent engineers can coordinate through three channels. The same team expanded to eight has twenty-eight channels to manage — nearly ten times the coordination load for less than three times the headcount. The marginal productivity of each new addition decreases as the interaction surface expands, until eventually adding another agent produces negative net value. The system gets worse by getting bigger.
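The intercommunication formula is simple enough to verify directly. A minimal sketch (the function name is mine, not Brooks's):

```python
def channels(n: int) -> int:
    """Pairwise communication channels among n agents: n(n-1)/2."""
    return n * (n - 1) // 2

# The team sizes discussed above:
for team in (3, 6, 8, 10):
    print(f"{team} agents -> {channels(team)} channels")
# 3 agents -> 3 channels
# 6 agents -> 15 channels
# 8 agents -> 28 channels
# 10 agents -> 45 channels
```

Note the asymmetry: going from 3 to 8 agents multiplies headcount by about 2.7 but channels by about 9.3.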
This is not a software-specific problem. It is a universal property of interconnected systems. Every new habit you adopt interacts with every existing habit. Every new tool competes for the same attention, time, and cognitive resources as every other tool. Every new commitment creates coordination demands with every existing commitment. You are not managing a list of agents. You are managing a network — and networks punish naive addition.
Dunbar's limit: the biological ceiling on coordination
The combinatorial problem is not just mathematical. It is biological.
In 1992, anthropologist Robin Dunbar analyzed the relationship between neocortex size and social group size across primate species and extrapolated a cognitive limit for humans: approximately 150 stable relationships. This number — now called Dunbar's number — represents the maximum group size at which an individual can maintain the social knowledge required for coordination: who is connected to whom, who has which skills, who can be trusted with what.
The deeper insight is not the specific number. It is the mechanism. Dunbar's research showed that group cohesion requires cognitive overhead — each member must track relationships, roles, and interaction patterns with every other member. As group size increases, this overhead consumes a progressively larger share of available cognitive capacity. Dunbar estimated that a group of 150 would require approximately 42% of its time devoted to social maintenance — the primate equivalent of grooming — just to maintain cohesion.
Apply this to your personal agent ecosystem. Every agent you manage — every tool, habit, routine, process, or commitment — requires you to track its state, its interactions with other agents, and its coordination requirements. Your neocortex has the same finite capacity whether you are tracking social relationships or tracking the dependencies between your calendar system, your task manager, your note-taking app, your communication channels, and your daily practices. Dunbar's limit is not about people. It is about coordination complexity, and it applies to any system you manage.
Ecology's warning: the invasive species pattern
Ecologists have spent decades studying what happens when you add a new organism to an established ecosystem without understanding the interaction effects. The results are consistently catastrophic.
Research published in Proceedings of the National Academy of Sciences documented how the spiny water flea — a predatory zooplankton — invaded Lake Mendota in 2009 and triggered a trophic cascade that degraded the entire lake ecosystem. The invader did not simply compete with native species. It restructured the food web. Native grazers declined. Algae bloomed. Water clarity dropped by nearly a meter. A single addition — one new agent — propagated through every interaction channel in the system, producing effects that no analysis of the species in isolation could have predicted.
This is the invasive species pattern, and it maps precisely onto what happens when you carelessly add agents to your personal or professional systems. You add a new project management tool. It seems harmless — it solves a real problem. But it introduces a new information silo that competes with your existing note system. It creates a new notification stream that fragments the attention your deep-work practice depends on. It requires a new daily check-in that conflicts with the morning routine your journaling agent occupies. The tool is not bad. The interactions are bad. And you could not see them by evaluating the tool alone.
The ecological lesson is precise: you cannot understand the impact of an addition by studying the addition. You can only understand it by studying the system the addition enters.
Cognitive load: why your working memory is the bottleneck
John Sweller's cognitive load theory, developed through decades of research beginning in the 1980s, provides the psychological mechanism behind the coordination limit. Working memory — the cognitive workspace where you hold, manipulate, and coordinate active information — has a strict capacity of approximately four to seven items. This is not a soft guideline. It is a hardware constraint.
When you manage a system of agents, every active coordination demand occupies working memory. The planning agent needs input. The communication tool has notifications. The fitness routine requires a decision about timing. The meal plan requires a grocery list. Each of these is a claim on working memory, and once you exceed capacity, additional demands do not queue — they displace. You do not slow down gracefully. You drop things. You forget the journaling practice. You skip the review. You stop checking the dashboard. The agents are still running, but you have lost the ability to coordinate them.
This is why adding agents one at a time feels safe but is actually dangerous. Each individual agent seems manageable. "It's just one more thing." But working memory does not evaluate agents individually. It evaluates the total coordination load, and that load grows with every interaction channel — not with every agent. Adding the fifth agent to a system of four does not add one unit of cognitive load. It adds four new interaction channels, each requiring tracking and management. Your working memory does not expand to accommodate your ambitions.
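The claim that the fifth agent adds four channels rather than one falls directly out of the pairing arithmetic. A small illustration (names are mine):

```python
def marginal_channels(existing_agents: int) -> int:
    # Each new agent must pair with every agent already present,
    # so the k-th addition creates k-1 new channels, not one.
    return existing_agents

total_channels = 0
for existing in range(5):                # grow the system from 0 to 5 agents
    added = marginal_channels(existing)
    total_channels += added
    print(f"agent {existing + 1}: +{added} channels, {total_channels} total")
# The fifth agent alone adds 4 channels; the 5-agent system carries 10.
```

The per-agent cost of addition rises with every agent already in the system, which is exactly why "just one more thing" misprices the load.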
The AI parallel: why multi-agent scaling degrades performance
If you work with AI systems, you are watching the same pattern play out in engineering.
A 2025 research paper from Google and MIT — "Towards a Science of Scaling Agent Systems" — evaluated 180 multi-agent configurations across five coordination architectures and produced a finding that should alarm anyone who adds agents casually: on sequential reasoning tasks, every multi-agent variant tested degraded performance by 39-70% compared to a single agent. The coordination overhead — the cost of agents communicating, reconciling, and synchronizing — consumed so much of the available "cognitive budget" that the agents had insufficient capacity for the actual task.
Even on parallelizable tasks, where multi-agent systems showed improvement, the research revealed that independent agents working without coordination amplified errors by 17.2 times. Adding a centralized orchestrator reduced error amplification to 4.4 times — better, but still nearly five times worse than a single agent operating alone. The orchestrator helped, but the coordination cost was never zero.
The research yielded a predictive model that correctly identified the optimal coordination strategy for 87% of task configurations. The model's core finding aligns with Brooks's law: more agents are only better when the task structure permits parallel decomposition and the coordination architecture can contain the interaction effects. For everything else — for sequential work, for tasks requiring coherent reasoning, for work that demands sustained attention — fewer agents outperform more agents.
Your cognitive system is a sequential reasoner. You do not parallelize attention. You do not run concurrent threads of conscious thought. When you add agents to your personal system, you are adding them to a fundamentally sequential processor with a working memory of four to seven items. The AI research confirms what Brooks predicted and what Dunbar measured: coordination overhead scales faster than coordination capacity.
The addition protocol: how to add agents deliberately
Given the combinatorial cost of addition, you need a protocol — not willpower, not good judgment, but a structured process that forces you to account for interaction effects before they occur.
Step 1: Name the value. What specific output does this agent produce that no existing agent produces? If the answer is "it does what Agent X does but better," you are not adding an agent — you are replacing one. Do the replacement. Do not run both.
Step 2: Map the interactions. List every existing agent the new one will interact with. For each, specify the interaction channel: shared time, shared attention, shared data, shared goals, or shared resources. If you have n existing agents, you should identify up to n potential interactions. Any interaction you cannot name is an interaction you have not thought through.
Step 3: Estimate the coordination cost. For each interaction channel, estimate the recurring cost in time, attention, or cognitive load. Be specific. "The new morning meditation competes with the journaling agent for the 6:30-7:00 AM slot" is an estimate. "It might take some time" is not.
Step 4: Apply the subtraction test. If adding this agent requires removing or downgrading an existing agent to stay within your coordination capacity, name which agent you will remove. If you cannot name one, you are assuming your capacity will expand to accommodate the addition. It will not.
Step 5: Set a review date. Schedule a specific date — seven to fourteen days after addition — to evaluate whether the agent is producing the value you predicted at the coordination cost you estimated. If the cost exceeds the estimate by more than 50%, remove the agent. No negotiation. The protocol is the protocol.
This is not bureaucracy. This is the minimum viable discipline for managing a combinatorial system. Every agent you add without this protocol is an invasive species introduced without an environmental impact assessment. Some will integrate cleanly. Others will trigger cascades you did not predict. The protocol does not eliminate risk. It makes the risk visible before the damage occurs.
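The five steps above can be sketched as a simple checklist structure. This is an illustrative sketch, not a prescribed tool; every field and method name here is my own invention:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class AdditionProposal:
    # All field names are illustrative, not a fixed schema.
    name: str
    unique_value: str                     # Step 1: output no existing agent produces
    interactions: dict = field(default_factory=dict)  # Step 2: existing agent -> channel
    estimated_cost_min_per_week: int = 0  # Step 3: recurring coordination cost
    agent_to_remove: str = ""             # Step 4: what goes, if at capacity
    review_date: date = field(
        default_factory=lambda: date.today() + timedelta(days=14)  # Step 5: 7-14 days out
    )

    def approve(self, existing_agents, at_capacity: bool) -> bool:
        if not self.unique_value:
            return False  # Step 1 failed: this is a replacement, not an addition
        if set(existing_agents) - set(self.interactions):
            return False  # Step 2 failed: an unnamed interaction is an unexamined one
        if at_capacity and not self.agent_to_remove:
            return False  # Step 4 failed: capacity will not expand on its own
        return True

    def review(self, actual_cost_min_per_week: int) -> str:
        # Step 5: cost overrun beyond 50% of the estimate means removal.
        if actual_cost_min_per_week > 1.5 * self.estimated_cost_min_per_week:
            return "remove"
        return "keep"

# Example using the meditation scenario from Step 3:
proposal = AdditionProposal(
    name="morning meditation",
    unique_value="daily attention training",
    interactions={"journaling": "shared 6:30-7:00 slot"},
    estimated_cost_min_per_week=70,
)
print(proposal.approve(existing_agents=["journaling"], at_capacity=False))  # True
print(proposal.review(actual_cost_min_per_week=120))  # "remove" (120 > 1.5 * 70)
```

The point of encoding the protocol is not automation; it is that a structured record forces every interaction channel and every cost estimate to be written down before the agent enters the system.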
From addition to removal: the complementary discipline
Adding agents carefully is half of the equation. The other half — removing agents cleanly when they no longer serve the system — is the subject of the next lesson (L-0518). Together, these two disciplines form the lifecycle management of your agent ecosystem: deliberate addition and deliberate subtraction.
The core principle of this lesson is structural, not aspirational: every new agent interacts with all existing agents. The interaction surface grows quadratically. Your coordination capacity does not. This means that the default answer to "Should I add this agent?" is no — not because the agent lacks value, but because the interaction cost is almost always higher than it appears.
Brooks saw it in software teams. Dunbar measured it in primate brains. Ecologists documented it in lake ecosystems. AI researchers confirmed it in multi-agent architectures. The pattern is universal. Systems that add agents without accounting for interaction effects do not become more capable. They become more complex. And complexity without coordination is just noise.
The discipline is simple: before you add, count the channels. If the channels exceed your capacity to manage them, the right answer is not to add the agent. The right answer is to make your existing agents work better — or to remove one before adding another.
Sources:
- Brooks, F. P. (1975). The Mythical Man-Month: Essays on Software Engineering. Addison-Wesley.
- Dunbar, R. I. M. (1992). "Neocortex size as a constraint on group size in primates." Journal of Human Evolution, 22(6), 469-493.
- Walsh, J. R., et al. (2016). "Invasive species triggers a massive loss of ecosystem services through a trophic cascade." Proceedings of the National Academy of Sciences, 113(15), 4081-4085.
- Sweller, J. (1988). "Cognitive load during problem solving: Effects on learning." Cognitive Science, 12(2), 257-285.
- Guo, T., et al. (2025). "Towards a Science of Scaling Agent Systems: When and Why Agent Systems Work." arXiv:2512.08296.
- Meadows, D. H. (2008). Thinking in Systems: A Primer. Chelsea Green Publishing.
- Carver, C. S., & Scheier, M. F. (1998). On the Self-Regulation of Behavior. Cambridge University Press.