Agents do not emerge. They are built.
In L-0581, you learned that every epistemic agent — every internalized pattern that perceives, decides, and acts on your behalf — has a lifecycle from creation to retirement. That lesson established the frame: agents are not permanent fixtures of your personality. They are constructed, deployed, maintained, and eventually replaced.
But knowing that agents have a lifecycle does not tell you how to create one. And the creation step is where most people fail, because they do not recognize it as a step at all. They treat agent creation as a moment of decision — "I will start exercising" — rather than as a deliberate design process with identifiable stages, each of which can succeed or fail independently.
The difference between people who successfully build new cognitive and behavioral agents and people who cycle through abandoned resolutions is not willpower, motivation, or character. It is design. The successful builders treat agent creation as an engineering problem. The unsuccessful ones treat it as a wish.
This lesson gives you the engineering process.
Herbert Simon: design is changing existing situations into preferred ones
The foundational insight comes from Herbert Simon, who in The Sciences of the Artificial (1969) defined design as "devising courses of action aimed at changing existing situations into preferred ones." This definition is deliberately broad. Simon argued that engineers, physicians, managers, architects, and composers are all doing the same thing when they design: they identify a gap between what exists and what they want to exist, then construct an artifact — a plan, a system, a structure, a process — that closes the gap.
Simon's key contribution was recognizing that design is not a natural science. Natural science studies what is. Design studies what could be. The sciences of the natural describe existing systems. The sciences of the artificial create new ones. Every agent you build is an artifact in Simon's sense — something that does not exist in nature, that you construct deliberately to serve a purpose.
This reframing matters because it immediately disqualifies the most common approach to behavior change: hoping. Hope is not design. A vague aspiration to "be more organized" or "think more clearly" is not a course of action aimed at changing an existing situation into a preferred one. It is an emotional state directed at a fuzzy target. Simon's framework demands specificity: what is the existing situation, precisely? What is the preferred situation, precisely? What artifact will close the gap? Until you answer these questions, you have not begun to design.
The means-end analysis that Simon described — find the difference between the current state and the desired state, then find the process that erases the difference — is the skeleton of every agent creation process. Everything that follows is a refinement of this core operation: specifying the difference with precision, designing the process with rigor, and testing the design against reality before committing to it.
Design thinking: empathize before you engineer
Simon gave us the logic of design. The design thinking movement, formalized by the Hasso Plattner Institute of Design at Stanford (the d.school) and refined through decades of practice, gave us the process.
Design thinking proceeds through five stages: Empathize, Define, Ideate, Prototype, and Test. What makes this framework relevant to agent creation is the first stage — because most people skip it entirely when designing their own cognitive agents.
Empathize means understanding the problem from the perspective of the person who experiences it. In product design, this means interviewing users. In agent creation, the user is you — but a specific version of you. Not the you sitting calmly at your desk drafting plans on a Sunday evening. The you who will encounter the trigger at 6:45 AM on a Tuesday when you slept poorly and the kids are late for school. The you who will face the decision point while tired, distracted, stressed, or bored.
Most agent designs fail not because the logic is wrong but because the designer did not empathize with the deployer. You design an elaborate morning routine from the comfort of your planning session, and it shatters on contact with Tuesday morning because you never asked: what is it actually like to be me at 6:45 AM? What are my real constraints, not my idealized ones? What is my actual energy level, not my aspirational one?
The Define stage translates empathy into a problem statement. Not "I want to be healthier" but "I consistently skip the planned workout at 6 AM because I choose fifteen more minutes of sleep, which means I lose the only available exercise window before my workday begins." That problem statement contains the existing situation, the preferred situation, and the specific mechanism of failure — which is precisely the information you need to design an agent that works.
Ideate generates candidate solutions. Prototype builds a minimal version. Test runs it against reality. This sequence — understand, specify, generate, build small, test fast — is the antidote to the all-or-nothing approach that characterizes most failed behavior change attempts.
Gollwitzer: the if-then plan that bridges intention and action
The research that most directly addresses agent creation comes from Peter Gollwitzer, whose work on implementation intentions has produced one of the most robust findings in the psychology of goal pursuit.
An implementation intention is a plan in the form: "When situation X arises, I will perform behavior Y." It is an if-then specification that links an anticipated trigger to a predetermined response. Where a goal intention says "I intend to exercise more," an implementation intention says "When I finish my morning coffee, I will put on my running shoes and walk to the front door."
The distinction sounds trivial. The effect is not. Gollwitzer and Sheeran's 2006 meta-analysis of 94 independent studies involving over 8,000 participants found that forming implementation intentions produced a medium-to-large effect on goal attainment (Cohen's d = .65). People who specified the when, where, and how of their intended behavior were substantially more likely to follow through than people who held the same goal intention without the implementation plan.
The mechanism Gollwitzer identified explains why this works. An implementation intention creates a mental link between a situational cue and a behavioral response. Once formed, this link operates with characteristics of automaticity: the cue triggers the response without requiring conscious deliberation. In cognitive terms, you are pre-loading a stimulus-response association so that when the situation arises, the behavior initiates immediately rather than waiting for a conscious decision — a decision that, in the moment, competes with fatigue, distraction, and the pull of whatever you were already doing.
This is, in the language of this curriculum, agent creation at its most precise. You are specifying the agent's activation condition (the situation), the agent's behavior (the response), and the cognitive mechanism by which they connect (automatic cue-response association). The implementation intention is the agent's blueprint.
But Gollwitzer's work also reveals why a blueprint alone is insufficient. The 2025 review by Gollwitzer and Sheeran in the Annual Review of Psychology emphasizes that implementation intentions work best when embedded in a broader self-regulatory framework — when the goal is clear, the commitment is genuine, and the if-then plan addresses the actual obstacles rather than imagined ones. An implementation intention for a goal you do not care about produces nothing. An if-then plan that targets the wrong obstacle is precise but useless. The specificity of the plan must be matched by the accuracy of the diagnosis.
BJ Fogg: design the behavior, not the aspiration
BJ Fogg's Behavior Design framework, developed at Stanford's Behavior Design Lab and published as Tiny Habits (2019), operationalizes agent creation into a repeatable method that aligns with Gollwitzer's research while adding critical design constraints.
Fogg's core model states that behavior occurs when three elements converge simultaneously: Motivation, Ability, and a Prompt. If any one is missing, the behavior does not happen. This is the Fogg Behavior Model: B = MAP. A behavior fires when you are sufficiently motivated, the behavior is sufficiently easy, and a prompt arrives at the right moment.
For agent creation, this model provides a diagnostic framework. When an agent fails to activate, the failure is in one of exactly three places: insufficient motivation at the moment of the prompt, insufficient ability (the behavior is too hard given current conditions), or a missing or poorly timed prompt. You do not need to guess why an agent failed. You check the three components.
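The three-way diagnostic can be sketched as a small check. This is a simplified illustration, not Fogg's formal model: in the full model, motivation and ability trade off along an action line (high motivation compensates for low ability), whereas the sketch below uses a flat threshold for each component, and all names and values are invented.

```python
# Hypothetical, simplified sketch of B = MAP as a failure diagnostic.
# Real model caveat: motivation and ability trade off; the flat
# per-component threshold here is an illustrative simplification.
from dataclasses import dataclass

@dataclass
class Moment:
    motivation: float  # 0.0-1.0: felt motivation when the prompt arrives
    ability: float     # 0.0-1.0: how easy the behavior is right now
    prompted: bool     # did a prompt occur at the right moment?

def diagnose(m: Moment, threshold: float = 0.5) -> list[str]:
    """Return the missing components; an empty list means the behavior fires."""
    failures = []
    if not m.prompted:
        failures.append("prompt: missing or poorly timed")
    if m.ability < threshold:
        failures.append("ability: behavior too hard under current conditions")
    if m.motivation < threshold:
        failures.append("motivation: insufficient at the moment of the prompt")
    return failures
```

The value of the diagnostic framing is that it replaces "why did I fail?" with a three-item checklist: when an agent does not activate, exactly one or more of these components was missing.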
Fogg's specific contribution to the creation process is the Tiny Habits Recipe: "After I [anchor moment], I will [tiny behavior]." The anchor moment is an existing behavior that reliably occurs in your routine — finishing your morning coffee, sitting down at your desk, closing your laptop at the end of the day. The tiny behavior is the target behavior scaled down to its smallest executable unit — not "meditate for twenty minutes" but "take one conscious breath." Not "write in my journal" but "open my journal to today's page."
The scaling-down is not a compromise. It is a design principle. Fogg's research shows that the critical variable in habit formation is not repetition count but emotional reinforcement — specifically, the feeling of success. A behavior that is tiny enough to always succeed generates a small positive emotion each time. That positive emotion is what wires the neural pathway. You are not building the full agent on day one. You are building the activation circuit — the trigger-response connection — and then expanding the behavior once the circuit is reliable.
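The recipe format can be written down as a minimal data structure. The class and example below are illustrative inventions following the "After I [anchor moment], I will [tiny behavior]" template from Tiny Habits; the specific anchor and behavior are assumptions for the sake of the example.

```python
# Hypothetical sketch of a Tiny Habits recipe:
# "After I [anchor moment], I will [tiny behavior]."
from dataclasses import dataclass

@dataclass
class TinyHabitRecipe:
    anchor: str         # an existing behavior that reliably occurs
    tiny_behavior: str  # the target behavior, scaled to its smallest executable unit

    def sentence(self) -> str:
        return f"After I {self.anchor}, I will {self.tiny_behavior}."

recipe = TinyHabitRecipe(
    anchor="finish my morning coffee",
    tiny_behavior="open my journal to today's page",
)
```

Forcing the design into this two-field structure is itself a useful constraint: if you cannot name a reliable anchor or cannot shrink the behavior to one executable unit, the recipe is not ready.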
This is agent creation as iterative engineering: build the minimal viable agent first, verify that it activates reliably, then increase its scope. The opposite approach — designing the full-complexity agent and deploying it at scale on day one — is the equivalent of shipping untested software to production. It fails for the same reason: the design has not been validated against real conditions.
Architecture Decision Records: documenting the why
In software engineering, the Architecture Decision Record (ADR) pattern addresses a problem that maps directly to agent creation: important design decisions get made, but the reasoning behind them is lost. Six months later, no one remembers why the system was built this way rather than that way — and when conditions change, there is no basis for evaluating whether the original decision still applies.
An ADR captures a single architectural decision along with its context, alternatives considered, and consequences accepted. The format is standardized: Title, Status, Context, Decision, Consequences. The discipline is in the Context section — the explicit statement of what problem this decision addresses, what constraints shaped it, and what alternatives were evaluated before this one was chosen.
The parallel to agent creation is direct. When you design a cognitive or behavioral agent, you are making an architectural decision about your own operating system. The agent you build embodies a specific theory about what problem you face, what solution will work, and what trade-offs you are willing to accept. If you do not document this reasoning, you will face the same problem that software teams face: when the agent stops working, you will not know whether to fix it, replace it, or retire it, because you will not remember what it was designed to solve.
The agent creation equivalent of an ADR is simple but powerful: a written record that states (1) what need this agent addresses, (2) what alternative designs you considered, (3) why you chose this design, (4) what trade-offs you accepted, and (5) what conditions would indicate that this agent needs revision. This document is not bureaucratic overhead. It is the institutional memory of your personal cognitive architecture. When you create agents deliberately, you can maintain them deliberately. When you create them impulsively, you can only abandon them when they stop working.
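The five-part record can be sketched as a small structure that renders to ADR-style text. The class name, field names, and example content below are illustrative assumptions modeled on the Nygard ADR format (Title, Status, Context, Decision, Consequences), not a standard of any kind.

```python
# Hypothetical sketch of an agent-design record, loosely mirroring
# the ADR format with the five fields described above.
from dataclasses import dataclass

@dataclass
class AgentDesignRecord:
    title: str
    need: str                        # (1) what need this agent addresses
    alternatives: list[str]          # (2) alternative designs considered
    rationale: str                   # (3) why this design was chosen
    tradeoffs: list[str]             # (4) trade-offs accepted
    revision_conditions: list[str]   # (5) conditions that would trigger revision
    status: str = "active"

    def render(self) -> str:
        """Render the record as plain text for a personal decision log."""
        return "\n".join([
            f"# {self.title}",
            f"Status: {self.status}",
            f"Need: {self.need}",
            "Alternatives considered: " + "; ".join(self.alternatives),
            f"Decision rationale: {self.rationale}",
            "Trade-offs accepted: " + "; ".join(self.tradeoffs),
            "Revise when: " + "; ".join(self.revision_conditions),
        ])
```

As with software ADRs, the record earns its keep later: when the agent stops working, the "Revise when" field tells you whether the failure was anticipated, and the alternatives list tells you what to try next.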
Ritual design: encoding the agent in structure
Anthropological and behavioral research on ritual provides a final critical insight into agent creation: the role of structure, symbolism, and environmental encoding in making designed behaviors persist.
Nicholas Hobson, Juliana Schroeder, and colleagues published an integrative review of the psychology of rituals in 2018 that identified three regulatory functions: emotion regulation, performance goal-state regulation, and social connection. Their key finding for agent creation is that rituals work not despite their structured rigidity but because of it. The fixed sequence of actions, the specific environmental cues, the predictable timing — these are not incidental features. They are the mechanism. The structure reduces the cognitive load of execution, the environmental cues serve as prompts, and the predictability allows the behavior to become automatic.
Kursat Ozenc and Margaret Hagan's work on Ritual Design at Stanford's d.school extended this research into deliberate practice. They demonstrated that rituals can be intentionally designed rather than organically evolved — that you can engineer a structured behavior sequence with symbolic elements, anchor it to specific times and environments, and achieve the persistence benefits that naturally evolved rituals display. Their framework for team ritual design includes elements directly applicable to individual agent creation: a clear purpose (why the ritual exists), a specific trigger (what initiates it), a defined sequence (what happens and in what order), and a symbolic marker (what signals completion).
The relevance to agent creation is that structure is not optional. An agent that exists only as an intention in your mind — "I will reflect on my decisions each evening" — competes with every other demand on your attention. An agent that is encoded in environmental structure — a specific chair, a specific notebook, a specific time, a specific opening action — has infrastructure. The structure serves as both prompt and scaffold, reducing the cognitive cost of activation and increasing the probability that the behavior executes even when motivation is low.
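Ozenc and Hagan's four elements — purpose, trigger, sequence, symbolic marker — can be captured in the same sketch style. The example ritual below is invented for illustration; only the four-element structure comes from their framework.

```python
# Hypothetical sketch of a designed ritual, using the four elements
# from Ozenc and Hagan's framework: purpose, trigger, sequence, marker.
from dataclasses import dataclass

@dataclass
class Ritual:
    purpose: str         # why the ritual exists
    trigger: str         # what initiates it
    sequence: list[str]  # what happens, and in what order
    marker: str          # the symbolic signal of completion

evening_review = Ritual(
    purpose="reflect on the day's decisions",
    trigger="closing the laptop at the end of the workday",
    sequence=["sit in the reading chair", "open the notebook", "write one entry"],
    marker="cap the pen and close the notebook",
)
```

Note that the sequence is an ordered list, not a set: per the Hobson et al. finding, the fixed order is part of the mechanism, not decoration.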
The five-stage agent creation process
Drawing from all six research domains, here is the integrated process for creating an epistemic agent.
Stage 1: Identify the need. State the gap between your current cognitive or behavioral state and your desired state, using Simon's means-end framework. Be specific. "I want to think more clearly" is not a need statement. "I make impulsive decisions when under time pressure because I skip my evaluation criteria" is a need statement. The need must be concrete enough to suggest what a solution would look like.
Stage 2: Design the agent. Specify the agent using Gollwitzer's implementation intention structure and Fogg's behavior design model. Define the trigger (when and where the agent activates), the behavior (what the agent does, scaled to a reliably executable size), the success criterion (how you know the agent completed its function), and the environmental support (what physical or digital infrastructure the agent requires). Use the design thinking empathy principle: design for the version of you who will actually encounter the trigger, not the idealized version.
Stage 3: Stress-test the design. Before deployment, identify the three most likely failure scenarios — the conditions under which the trigger fires but the agent does not execute. For each scenario, design a degraded-mode response: a simpler version of the behavior that the agent can still perform under adverse conditions. This is architectural redundancy. An agent that only works under ideal conditions is not robust enough for deployment.
Stage 4: Plan the deployment. Specify when the agent will first run, what the first seven days look like, and what environmental preparation is required before the first activation. Encode ritual structure: anchor the agent to existing behaviors, prepare the physical environment, establish the symbolic markers of completion. This is where Fogg's tiny-behavior principle applies — the initial deployment should be the minimal viable version, not the full-complexity target.
Stage 5: Define the review point. Set a specific date — seven days out is effective for most behavioral agents — at which you will evaluate the agent's performance. Define in advance what evidence constitutes success, what evidence constitutes failure, and what modifications you will consider. Create the agent equivalent of an Architecture Decision Record: document the need, the design, the alternatives you considered, and the conditions under which you would revise or retire this agent.
Why "just do it" fails and design succeeds
The reason most people skip the creation process is that it feels like overhead. Why spend thirty minutes designing a habit when you could just start doing it right now? The answer is in the data: unplanned behavior change has a failure rate so high that researchers treat it as the baseline against which interventions are measured. Gollwitzer and Sheeran's meta-analysis showed that the simple act of forming an implementation intention — specifying when, where, and how — substantially raises the probability of follow-through, a medium-to-large effect. Adding Fogg's behavior scaling, environmental design, and emotional reinforcement increases it further. Adding stress-testing and review protocols increases it further still.
Each stage of the creation process is not overhead. It is probability engineering. You are systematically increasing the likelihood that a desired cognitive or behavioral pattern will activate reliably under real-world conditions. Skipping the design phase does not save time. It guarantees that you will spend far more time failing and restarting than you would have spent designing and deploying correctly.
The creation process also gives you something that impulsive action never provides: a theory of the agent. You know what it is designed to do, why it is designed this way, what conditions would cause it to fail, and what you will do when those conditions arise. When the agent encounters difficulty — and it will — you have diagnostic information. You can identify which stage failed and address that stage specifically, rather than concluding that the entire endeavor was hopeless and abandoning it.
From creation to deployment
You now have a process for creating agents deliberately. You understand that agent creation is a design act, not a decision. You have a five-stage framework grounded in research from design science, design thinking, cognitive psychology, behavior design, software architecture, and ritual studies.
But creating an agent and deploying an agent are different operations. In L-0583, you will confront a reality that surprises most people: deployment is not instant. A newly created agent does not become automatic the moment you finish designing it. There is a gap — sometimes days, sometimes weeks — between the agent's first activation and the point where it runs without conscious effort. That gap has its own dynamics, its own failure modes, and its own management requirements. Designing the agent is stage one. Surviving the deployment period is stage two.
Sources:
- Simon, H. A. (1969). The Sciences of the Artificial. MIT Press. (3rd edition, 1996.)
- Gollwitzer, P. M., & Sheeran, P. (2006). "Implementation Intentions and Goal Achievement: A Meta-Analysis of Effects and Processes." Advances in Experimental Social Psychology, 38, 69-119.
- Gollwitzer, P. M., & Sheeran, P. (2025). "Psychology of Planning." Annual Review of Psychology.
- Fogg, B. J. (2019). Tiny Habits: The Small Changes That Change Everything. Houghton Mifflin Harcourt.
- Hobson, N. M., Schroeder, J., Risen, J. L., Xygalatas, D., & Inzlicht, M. (2018). "The Psychology of Rituals: An Integrative Review and Process-Based Framework." Personality and Social Psychology Review, 22(3), 260-284.
- Ozenc, K., & Hagan, M. (2017). "Ritual Design: Crafting Team Rituals for Meaningful Organizational Change." Stanford d.school.
- Nygard, M., et al. (2011-present). Architecture Decision Records. Documented across AWS Prescriptive Guidance, Microsoft Azure Well-Architected Framework, and adr.github.io.