Your agents are already connected. You just cannot see how.
In L-0509, you learned that context must transfer cleanly between agents. But that lesson addressed a single hand-off — one agent finishing, another starting. The reality of your cognitive infrastructure is far more tangled. You do not run just two agents in sequence. You run dozens of agents with overlapping inputs, shared outputs, hidden prerequisites, and cascading failure modes. Some agents depend on others you have never explicitly connected. Some produce outputs that three other agents silently consume. Some sit on critical paths where a single failure propagates through your entire system.
You cannot coordinate what you cannot see. And right now, you cannot see the dependency structure of your own cognitive architecture. You have agents — routines, habits, processes, decision frameworks, communication protocols — and they interact. But the interaction pattern is invisible. It lives in your intuition, not in an explicit structure you can inspect, debug, and redesign.
This lesson gives you the tool to make that structure visible: the dependency map.
What a dependency is and why it matters
A dependency exists whenever one agent requires the output of another agent before it can operate effectively. This is not a preference or a suggestion. It is a structural constraint. If Agent B needs the output of Agent A, then Agent B cannot run correctly until Agent A has completed — or at minimum, until Agent A has produced the specific artifact that Agent B consumes.
Herbert Simon, in his landmark paper "The Architecture of Complexity" (1962), identified a property he called near-decomposability. Viable complex systems — whether biological, social, or artificial — organize into hierarchical layers where interactions within components are much stronger than interactions between components. The dependencies inside a subsystem are dense. The dependencies between subsystems are sparse. This is not accidental. It is how complex systems survive disruption. When a disturbance hits, it propagates within the affected subsystem but does not cascade across the entire architecture — because the cross-subsystem dependencies are weak and few.
Simon illustrated this with a parable of two watchmakers. One assembles watches from a thousand individual parts. Every interruption forces him to start over from scratch. The other organizes his watch into stable subassemblies of ten parts each, grouped into super-assemblies of ten subassemblies. When interrupted, he loses at most nine parts of work — the incomplete subassembly — not the entire watch. The second watchmaker's design acknowledges dependencies explicitly and groups tightly-coupled parts together.
Your cognitive agents have the same structure. Some are tightly coupled — your journaling agent and your planning agent may be nearly inseparable. Others are loosely coupled — your exercise routine and your email triage probably share no direct dependencies. The question is whether you have made this coupling structure explicit. If you have not, you are the first watchmaker: every disruption cascades unpredictably because you do not know which parts depend on which.
The directed acyclic graph: your primary tool
The formal structure for representing dependencies is a directed acyclic graph, or DAG. In a DAG, each node represents an agent, and each directed edge represents a dependency — an arrow from Agent A to Agent B means "Agent B depends on Agent A's output." The graph is directed because dependencies flow in one direction: A produces, B consumes. The graph is acyclic because circular dependencies — where A depends on B, which depends on C, which depends back on A — create deadlocks where no agent can start because every agent is waiting for another.
DAGs are not an obscure computer science abstraction. They are the backbone of every modern build system, workflow orchestrator, and pipeline manager. Apache Airflow, the industry-standard workflow orchestration tool, requires users to define their workflows as DAGs explicitly — you cannot schedule tasks without first declaring which tasks depend on which. Git, the version control system, represents its entire commit history as a DAG. Package managers like npm resolve installation order by constructing a DAG of library dependencies and performing a topological sort — processing each node only after all its dependencies have been processed.
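Python's standard library ships this exact operation in the `graphlib` module. A minimal sketch, using made-up cognitive-agent names as nodes:

```python
from graphlib import TopologicalSorter

# deps maps each agent to the set of agents it depends on.
# The agent names are illustrative placeholders, not prescriptive.
deps = {
    "journaling":     set(),
    "email_triage":   set(),
    "planning":       {"journaling"},
    "prioritization": {"planning"},
    "deep_work":      {"prioritization", "email_triage"},
}

# static_order() performs a topological sort: each agent appears
# only after every agent it depends on has already appeared.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

This is the same resolution step a package manager performs before installing libraries: no node is processed until all of its dependencies have been.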
The reason every serious coordination system uses DAGs is that dependencies are the fundamental constraint on execution order. You cannot build the roof before the walls. You cannot deploy code before it compiles. You cannot plan your day before you know your priorities. The DAG makes these constraints visible and computable. Without it, you are scheduling by intuition, and intuition does not scale.
Cognitive task analysis: the research foundation
The practice of mapping dependencies between cognitive tasks has deep roots in educational psychology and instructional design. Cognitive task analysis, developed by researchers including Richard Clark and Jeroen van Merriënboer, is a methodology for identifying the knowledge representations, mental strategies, and goal structures that underlie expert performance. A central output of cognitive task analysis is a prerequisite map — a structured representation of which knowledge and skills must be acquired before other knowledge and skills can be learned.
Robert Gagné's learning hierarchies formalize this further. Gagné demonstrated that complex skills decompose into prerequisite sub-skills arranged in a strict dependency order. You cannot learn long division until you have learned subtraction. You cannot learn subtraction until you have learned number comparison. The dependency is not pedagogical preference — it is cognitive architecture. The higher skill literally cannot execute without the lower skill's output as input.
This same principle applies to your cognitive agents. Your strategic-planning agent cannot produce meaningful output until your situation-assessment agent has produced a current-state analysis. Your creative agent cannot generate novel solutions until your problem-framing agent has defined the problem space. These are not scheduling preferences. They are structural dependencies. Map them or suffer the consequences of running agents out of order.
How to build your dependency map
Building a dependency map requires four steps. Do not skip any of them.
Step 1: Enumerate your agents. List every cognitive agent you operate. Include routines (morning review, weekly planning, daily shutdown), processes (how you triage email, how you prepare for meetings, how you make decisions), and habits (journaling, exercise, reading). Be exhaustive. Most people undercount their agents by half because they do not recognize automated habits as agents. If it runs on a recurring cycle and produces output that affects your behavior, it is an agent.
Step 2: Identify inputs and outputs. For each agent, answer two questions. What does this agent need before it can run? What does this agent produce that something else consumes? Be specific. "Clarity" is not an input. "A ranked list of today's three priorities" is an input. "Feeling good" is not an output. "A completed task log with time stamps" is an output. The more concrete your inputs and outputs, the more useful your dependency map.
Step 3: Draw the edges. For each input that one agent needs, identify which agent produces it. Draw a directed arrow from the producer to the consumer. If no agent produces a required input, you have found a gap — an unmet dependency that explains why that agent sometimes fails for no apparent reason. If an agent produces output that nothing consumes, you have found waste — an agent doing work that does not contribute to your system.
Step 4: Identify critical nodes. Count the incoming and outgoing edges for each agent. Agents with many outgoing edges are force multipliers — when they work well, everything downstream benefits. Agents with many incoming edges are fragility points — they fail whenever any upstream dependency fails. Agents with both high fan-in and high fan-out are your system's load-bearing walls. Protect them accordingly.
The AI parallel: orchestration graphs in multi-agent systems
If the dependency graph sounds like engineering rather than personal development, consider that every multi-agent AI system in production today is built on exactly this structure. Microsoft's AI agent orchestration patterns define three primary coordination architectures — sequential, parallel, and supervisor — and all three require explicit dependency declarations between agents.
In sequential orchestration, Agent B cannot start until Agent A completes. In parallel orchestration, Agents B and C can run simultaneously but both must complete before Agent D starts. In supervisor orchestration, a central coordinator decomposes a task, delegates subtasks to specialized agents, and synthesizes their outputs — but only after explicitly mapping which subtasks depend on which.
Research published in 2025 on multi-agent AI systems documents a consistent finding: error cascades occur when sub-agents produce outputs that downstream agents cannot consume. The failure is not in any individual agent. It is in the unmapped or incorrectly mapped dependency between agents. When the dependency graph is wrong — when Agent B assumes it will receive structured data from Agent A but actually receives unstructured text — the entire pipeline fails. The graph is not documentation. It is the coordination mechanism itself.
Your cognitive agents operate on the same principle. When your planning agent assumes your journaling agent has already produced a priority list, but you skipped journaling today, the planning agent runs on stale or missing input. It does not fail loudly. It produces a vague, unfocused plan — and you spend the rest of your day executing tasks that do not matter, never connecting the symptom (wasted afternoon) to the cause (skipped journaling, which broke the dependency chain).
What the map reveals that intuition hides
Once you have a dependency map, three patterns become immediately visible that intuition alone cannot detect.
Hidden critical paths. Your system has a longest dependency chain — a sequence of agents where each must complete before the next can start. This is your critical path. Any delay on this path delays everything downstream. Most people cannot identify their critical path by introspection because they do not see the full chain. The map shows it instantly.
Single points of failure. If one agent has five outgoing edges, its failure breaks five downstream agents. You may not realize that your morning review is a single point of failure for your entire day until you see five arrows emanating from it. The map converts "I had a bad day" into "my morning review failed, which broke planning, which broke prioritization, which broke deep work, which broke the project deadline."
Unnecessary dependencies. Some dependencies are real. Others are artifacts of habit. Your communication agent may depend on your planning agent only because you always do them in that order — not because communication actually requires planning's output. The map lets you distinguish structural dependencies (Agent B literally cannot run without Agent A's output) from accidental dependencies (Agent B runs after Agent A out of habit, not necessity). Removing accidental dependencies increases parallelism — you can run more agents simultaneously, which increases throughput.
From map to coordination protocol
The dependency map is not the destination. It is the diagnostic instrument. Once you can see your dependencies, you can make three kinds of improvements.
First, you can protect critical nodes. If your morning review has five downstream dependents, you do not skip it. You build redundancy around it. You create a fallback protocol for the days when it cannot run at full capacity. You treat it with the seriousness its position in the graph demands.
Second, you can break unnecessary chains. Every accidental dependency you remove is a constraint you eliminate. If your exercise routine does not actually depend on your journaling output, decouple them. Run them in parallel. Gain time.
Third, you can detect cycles. If Agent A depends on Agent B and Agent B depends on Agent A, you have a circular dependency — and circular dependencies produce deadlocks. Neither agent can start because each is waiting for the other's output. This is exactly what the next lesson, L-0511, addresses: deadlock prevention. But you cannot prevent deadlocks you cannot see, and the dependency map is how you see them.
The map is not a one-time exercise. Your agents evolve. You add new routines, retire old ones, restructure workflows. Every change potentially alters the dependency graph. Treat the map as a living artifact — update it when your system changes, and review it when your system breaks.
Your agents are already connected by dependencies. The only question is whether you see those connections or suffer from them blindly.
Sources:
- Simon, H. A. (1962). "The Architecture of Complexity." Proceedings of the American Philosophical Society, 106(6), 467-482.
- Clark, R. E., Feldon, D., van Merriënboer, J., Yates, K., & Early, S. (2008). "Cognitive Task Analysis." Handbook of Research on Educational Communications and Technology, 577-593.
- Gagné, R. M. (1968). "Learning Hierarchies." Educational Psychologist, 6(1), 1-9.
- Microsoft Azure. (2025). "AI Agent Orchestration Patterns." Azure Architecture Center.
- Apache Software Foundation. (2025). "DAGs." Apache Airflow Documentation.
- Meadows, D. H. (2008). Thinking in Systems: A Primer. Chelsea Green Publishing.
- Carver, C. S., & Scheier, M. F. (1998). On the Self-Regulation of Behavior. Cambridge University Press.