Order is not optional
In the previous lesson, you learned that priority ordering resolves conflicts between agents competing for the same cognitive territory. But priority tells you which agent wins when two collide. It does not tell you which agent should run first when one depends on another's output.
This is a different problem entirely. Conflict resolution is about dominance. Sequencing is about dependency. You can have agents that never conflict — they operate in completely separate domains — but that still must execute in a specific order because one produces the input the next one requires. Your energy-assessment agent and your priority-setting agent do not compete. They do not overlap in scope. But if you set priorities without first assessing energy, you will commit to a plan your body cannot execute. The dependency is invisible until you violate it, and then the failure looks like poor planning rather than what it actually is: agents running out of order.
Most people never define their cognitive sequences explicitly. They rely on intuition, habit, and whatever order feels natural in the moment. This works when cognitive load is low and the stakes are small. It fails precisely when it matters most — under pressure, under fatigue, when the cost of getting the order wrong compounds through every downstream step.
Herbert Simon and the sequential nature of attention
Herbert Simon spent decades studying how humans actually make decisions, as opposed to how economic models assumed they did. His central insight, which earned him the Nobel Prize in Economics in 1978, was bounded rationality: human decision-makers do not optimize across all variables simultaneously. They cannot. The cognitive architecture will not support it.
Simon's less famous but equally important contribution was his analysis of sequential attention. In Administrative Behavior (1947) and subsequent work, Simon demonstrated that human cognition is fundamentally serial at the level of deliberate reasoning. You can hold multiple items in working memory, but you can only reason about one thing at a time. This means that when you face a multi-step cognitive task, the order in which you attend to each step shapes the outcome as powerfully as the quality of reasoning you apply to any individual step.
Simon observed that "a wealth of information creates a poverty of attention," forcing decision-makers to allocate scarce attention deliberately. In a world where information is abundant and cognitive capacity is limited, the sequence in which you process information determines which information influences which decisions. Attend to budget constraints before creative options and you generate conservative solutions. Attend to creative options before budget constraints and you generate ambitious solutions that may need trimming. Same information, same reasoning ability, different sequence — different outcome.
This is not a flaw to correct. It is a structural feature of human cognition to engineer around. You cannot make attention parallel for deliberate reasoning. What you can do is define the sequence deliberately instead of letting it emerge from whichever stimulus grabs your attention first.
Dependency is the reason sequence exists
Not all cognitive agents need to run in sequence. Some are genuinely independent — your agent that tracks hydration has no dependency on your agent that reviews your calendar. But many agents that feel independent actually have hidden dependencies, and those dependencies are the reason sequencing matters.
A dependency exists when one agent requires the output of another as its input. This sounds obvious in the abstract, but in practice, most people have never mapped the actual dependency structure of their cognitive routines. Consider a common decision-making process: you evaluate the options, you assess the risks, you check alignment with your values, and you commit. Four agents. The natural assumption is that these can run in any order — or all at once. But examine the dependencies:
Risk assessment requires knowing what the options are. You cannot assess the risk of an option you have not yet identified. Values alignment requires both the options and the risk profile — you need to know what you are choosing between and what each choice costs. Commitment requires all three previous outputs. The dependency structure is not flat. It is layered, and the layers dictate a sequence: options first, then risks, then values alignment, then commitment.
In computer science, this dependency structure is formalized as a Directed Acyclic Graph, or DAG. Each node is a task. Each directed edge means "this task must complete before that task can start." The constraint that the graph must be acyclic — no circular dependencies — ensures that a valid execution order exists. The algorithm that computes this order is called topological sort, and it has been a foundational tool in computer science since the 1960s, used in everything from build systems to package managers to spreadsheet formula evaluation.
You do not need to know the algorithm. You need to know the principle: if Agent B requires the output of Agent A, then A must run before B. If Agent C requires the output of both A and B, then both must complete before C starts. This is not a suggestion. It is a logical constraint. Violating it does not produce suboptimal results — it produces incoherent ones, because the downstream agent is operating on inputs that do not yet exist.
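The principle can be made concrete with a short sketch of Kahn's topological-sort algorithm applied to the four decision-making agents above. This is a minimal illustration, not any particular framework's implementation, and the agent names are labels invented here:

```python
from collections import deque

# Dependency map for the decision-making example: each agent lists the
# agents whose output it requires. Names are illustrative.
requires = {
    "enumerate_options": [],
    "assess_risks": ["enumerate_options"],
    "check_values": ["enumerate_options", "assess_risks"],
    "commit": ["enumerate_options", "assess_risks", "check_values"],
}

def topological_order(requires):
    """Kahn's algorithm: repeatedly run any agent whose inputs are ready."""
    indegree = {agent: len(deps) for agent, deps in requires.items()}
    dependents = {agent: [] for agent in requires}
    for agent, deps in requires.items():
        for dep in deps:
            dependents[dep].append(agent)
    ready = deque(a for a, d in indegree.items() if d == 0)
    order = []
    while ready:
        agent = ready.popleft()
        order.append(agent)
        for nxt in dependents[agent]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(requires):
        raise ValueError("circular dependency: no valid execution order")
    return order

print(topological_order(requires))
# -> ['enumerate_options', 'assess_risks', 'check_values', 'commit']
```

Notice that the algorithm never chooses the order; the dependencies dictate it. If the graph admits no valid order, that is detected immediately rather than discovered mid-process.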
The cognitive cost of implicit sequencing
Research on task switching demonstrates exactly what happens when you fail to define sequence explicitly. Decades of experimental work, reviewed by Stephen Monsell (2003), document what cognitive psychologists call "switch cost": the measurable time and accuracy penalty the brain pays when it disengages from one task and engages with another. Frontal and parietal regions must reconfigure their processing context at each switch, and this reconfiguration is not free.
But switch cost is only the surface problem. The deeper cost of implicit sequencing is that you repeatedly attempt steps before their inputs are ready, discover the missing input mid-process, switch to the prerequisite task, then switch back — paying the switch cost twice and often losing the partial work from the first attempt. This is not multitasking. It is failed sequencing manifesting as thrashing.
You see this in meetings constantly. A team attempts to make a decision (agent 4) before someone realizes they have not assessed the risks (agent 2), which triggers a tangent about risk that surfaces an option nobody had considered (agent 1), which invalidates the risk assessment that was half-complete. Forty minutes of thrashing that a five-minute sequence definition — "first enumerate options, then assess risks, then check alignment, then decide" — would have prevented entirely.
The same pattern plays out in individual cognition. You sit down to write a project proposal. You start drafting (agent: compose) before clarifying the audience (agent: scope), realize mid-paragraph you do not know who will read this, switch to thinking about the audience, realize the audience question depends on the project's strategic positioning (agent: frame), switch again, and twenty minutes later you have three half-finished cognitive tasks and zero usable output. The agents were fine. The sequence was missing.
The AI parallel: sequential chains and DAG orchestration
The engineering of AI agent systems has made sequencing patterns explicit in ways that human cognitive design has not — yet.
In multi-agent AI architectures, the sequential chain is the most fundamental orchestration pattern. As described in AWS's agentic AI patterns documentation and Google's Agent Development Kit, a sequential chain works like an assembly line: Agent A completes its task and passes its output to Agent B, which completes and passes to Agent C. Each agent's output becomes the next agent's input. The sequence is defined before execution, not discovered during it.
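The assembly-line pattern can be sketched in plain Python with no framework at all. The three agents below borrow the proposal example discussed earlier (scope, frame, compose); the function names and payloads are purely illustrative assumptions:

```python
from functools import reduce

# Each "agent" is a plain function: one agent's output is the next one's input.
def scope(brief):
    return {**brief, "audience": f"readers of {brief['topic']}"}

def frame(brief):
    return {**brief, "angle": f"why {brief['topic']} matters to {brief['audience']}"}

def compose(brief):
    return f"Draft: {brief['angle']}"

def run_chain(agents, initial):
    """Sequential chain: agents execute in the declared order, assembly-line style."""
    return reduce(lambda state, agent: agent(state), agents, initial)

result = run_chain([scope, frame, compose], {"topic": "sequencing"})
print(result)
# -> Draft: why sequencing matters to readers of sequencing
```

The key design property is that the order is an argument to the runner, declared once up front, rather than something each agent negotiates at runtime.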
The more sophisticated version is DAG orchestration, where agents are arranged in a dependency graph rather than a simple linear chain. A DAG allows for both sequential dependencies (A must finish before B starts) and parallelism (C and D can run simultaneously if neither depends on the other). Frameworks like LangGraph and Apache Airflow implement this pattern, representing workflows as directed acyclic graphs where each node is an agent and each edge is a dependency.
The critical engineering insight is that the dependency graph must be defined explicitly. No production AI system discovers its execution order at runtime by trial and error — that would produce exactly the thrashing pattern described in the previous section. Instead, the developer specifies which agents depend on which outputs, and the orchestration framework computes a valid execution order using topological sort. The sequence is a first-class design artifact, not an emergent accident.
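A minimal sketch of how an orchestrator can derive both sequence and parallelism from an explicit dependency graph: group agents into "waves," where everything in a wave has all its inputs satisfied by earlier waves. The wave-grouping approach and the morning-routine agent names are assumptions for illustration, not any framework's actual API:

```python
def execution_waves(requires):
    """Group agents into waves: agents in the same wave share no unmet
    dependency, so they could run in parallel."""
    done, waves = set(), []
    remaining = dict(requires)
    while remaining:
        wave = sorted(a for a, deps in remaining.items() if set(deps) <= done)
        if not wave:
            raise ValueError("circular dependency: no valid execution order")
        waves.append(wave)
        done.update(wave)
        for a in wave:
            del remaining[a]
    return waves

# Hypothetical morning-routine agents: 'stretch' and 'hydrate' share no
# dependency, so the schedule places them in the same wave.
graph = {
    "wake": [],
    "stretch": ["wake"],
    "hydrate": ["wake"],
    "plan_day": ["stretch", "hydrate"],
}
print(execution_waves(graph))
# -> [['wake'], ['hydrate', 'stretch'], ['plan_day']]
```

The sequential chain from the previous section is just the degenerate case where every wave contains exactly one agent.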
This is the same principle applied to your cognitive agents. Your morning routine, your weekly review, your creative process, your decision-making protocol — each of these is a multi-agent workflow. Each has dependencies between steps. And each will produce better output when you define the sequence explicitly rather than letting it emerge from habit or mood.
How to define a cognitive sequence
Defining the sequence for your cognitive agents is a four-step process.
Step 1: List the agents. Identify every distinct cognitive operation in the process you are sequencing. Be specific. "Think about the project" is not an agent. "Identify the three highest-risk unknowns" is an agent. Each agent should have a clear input and a clear output.
Step 2: Map the dependencies. For each pair of agents, ask: does Agent B need the output of Agent A to function? If you removed Agent A's output entirely, could Agent B still produce a meaningful result? If not, there is a dependency. Draw an arrow from A to B.
Step 3: Check for cycles. If Agent A depends on B and B depends on A, you have a circular dependency. This means you have conflated two concerns or defined your agents at the wrong level of abstraction. Break the cycle by decomposing one of the agents into sub-agents, or by recognizing that what you thought was a dependency is actually a preference.
Step 4: Arrange and commit. Order the agents so that every dependency is respected — no agent runs before its inputs are available. Write the sequence down. Not in your head. On paper, in a checklist, in a template. The explicit artifact is the point. A sequence that lives only in memory is a sequence that will degrade under load, which is precisely when you need it most.
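The cycle check in Step 3 can also be sketched as code: repeatedly strip away any agent whose dependencies can all be ordered, and whatever remains is tangled in (or downstream of) a circular dependency. The helper and the agent names here are hypothetical, chosen only to illustrate the check:

```python
def find_cycle_members(requires):
    """Step 3 check: peel off orderable agents until nothing changes.
    Whatever remains participates in a circular dependency."""
    remaining = {a: set(d) for a, d in requires.items()}
    changed = True
    while changed:
        changed = False
        orderable = [a for a, deps in remaining.items()
                     if not deps & set(remaining)]
        for a in orderable:
            del remaining[a]
            changed = True
    return sorted(remaining)  # empty list means the sequence is valid

# 'draft' waits for 'outline', but 'outline' waits for 'draft': a cycle
# that usually signals two conflated concerns.
bad = {"research": [], "outline": ["research", "draft"], "draft": ["outline"]}
print(find_cycle_members(bad))
# -> ['draft', 'outline']
```

An empty result confirms the dependency map is a valid DAG; a non-empty result names exactly the agents you need to decompose or re-examine in Step 3.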
This is not overhead. It is infrastructure. The five minutes you spend defining a sequence saves you the twenty minutes of thrashing that an implicit sequence produces every time the process runs under pressure.
Sequences are hypotheses, not commandments
A defined sequence is not a permanent fixture. It is a hypothesis about the dependency structure of your cognitive agents, and like any hypothesis, it should be tested and revised.
After running your explicit sequence several times, you will notice things. You will discover that some dependencies you assumed were real are actually optional — Agent B works fine without Agent A's output, which means you may be able to run them in parallel (the subject of the next lesson, L-0505). You will discover new dependencies you missed — Agent C's output quality drops when Agent D has not run, even though you did not draw that arrow initially. You will discover that the sequence that works on a calm Monday fails on a stressed Friday, because cognitive load changes which dependencies are binding.
This is expected. The sequence is a living document. The value is not in getting the perfect sequence on the first attempt. The value is in having an explicit sequence at all — because an explicit sequence can be examined, tested, and improved. An implicit sequence cannot. It just runs, invisibly, and when it produces bad output, you have no way to diagnose which step failed or which dependency was violated.
Think of your cognitive sequences the way a software engineer thinks of a build pipeline. The first version is never the final version. But the first version that is written down is infinitely more valuable than the tenth version that exists only in someone's head, because the written version can be debugged.
From sequence to architecture
Defining the execution order of your cognitive agents is the bridge between having agents and having a system. Individual agents are capabilities. Priority ordering (L-0503) told you which capabilities take precedence when they conflict. Sequencing tells you which capabilities must execute in what order to produce coherent output.
But notice what you now have the tools to ask: if I have defined my dependencies, and some agents do not depend on each other at all, can they run simultaneously? This is exactly the question that the next lesson — parallel versus sequential agent execution (L-0505) — will answer. Sequencing gives you the map. The next lesson teaches you how to read that map for opportunities to run agents in parallel where no dependency prevents it.
The progression is deliberate: coordination requires priority, priority enables sequencing, sequencing reveals the possibility of parallelism. Each lesson builds the infrastructure for the next. And together, they transform a collection of independent cognitive agents into a coordinated system that produces results no single agent could achieve alone.
Sources:
- Simon, H. A. (1947). Administrative Behavior: A Study of Decision-Making Processes in Administrative Organizations. Macmillan.
- Simon, H. A. (1971). "Designing Organizations for an Information-Rich World." In Computers, Communications, and the Public Interest. Johns Hopkins Press.
- Kahn, A. B. (1962). "Topological sorting of large networks." Communications of the ACM, 5(11), 558-562.
- Monsell, S. (2003). "Task switching." Trends in Cognitive Sciences, 7(3), 134-140.
- AWS Prescriptive Guidance. (2025). "Workflow for Prompt Chaining." Agentic AI Patterns.
- Google Developers. (2025). "Developer's Guide to Multi-Agent Patterns in ADK." Google Developers Blog.
- Meadows, D. H. (2008). Thinking in Systems: A Primer. Chelsea Green Publishing.