You cannot manage what you cannot see
You have agents running. Some you created last week. Some have been operating for years without a conscious thought from you. Some are thriving. Some are quietly degrading. And unless you have a practice of tracking where each one sits in its lifecycle, you are allocating your most scarce resource — attention — based on habit, recency, or anxiety rather than actual need.
The previous lesson addressed agent sprawl: the cost of accumulating too many agents without pruning. This lesson addresses a subtler problem. Even when you have the right number of agents, you can still mismanage them by failing to recognize that each one is at a different stage and therefore requires a different kind of attention.
A newly deployed agent needs calibration and close monitoring. A mature agent needs to be left alone. A declining agent needs evaluation and possible retirement. If you treat them all the same — either hovering over everything or neglecting everything — your cognitive infrastructure degrades not from any single failure but from a systemic mismatch between attention and need.
Situational awareness is a three-level skill
Mica Endsley, a human factors engineer, developed the most widely cited model of situational awareness in 1995. Originally designed for fighter pilots and air traffic controllers, the model describes three levels that apply directly to managing cognitive agents.
Level 1 — Perception. You perceive the current state of each element in your environment. For agents, this means you can answer the basic question: what agents do I have, and what is each one doing right now? Endsley's research in aviation found that 76% of situational awareness errors were Level 1 failures — people simply failed to perceive information that was available to them. The equivalent for cognitive agents: you forget that an agent exists, or you cannot recall when you last evaluated it. The information is there. You just stopped looking.
Level 2 — Comprehension. You synthesize what you perceive into meaning. This is where you recognize that your decision-making agent has been producing reliable outputs for months (it is in maturity) while your emotional regulation agent is still misfiring under stress (it is still in deployment). Perception without comprehension produces raw data. Comprehension produces a picture of relative health across your portfolio.
Level 3 — Projection. You extrapolate current trajectories into the near future. If your conflict-resolution agent has been declining in effectiveness over the past three months, where will it be in three more? If a new agent is showing rapid improvement during calibration, when will it reach maturity and require less oversight? Projection is what turns awareness into strategy — it lets you allocate attention today based on where each agent will be, not just where it is now.
Most people operating cognitive infrastructure are stuck at Level 1, if they are even there. They can name some of their agents if prompted. They cannot tell you the lifecycle stage of each one, and they certainly cannot project forward. This lesson exists to move you from unconscious management to deliberate situational awareness.
Four stages that every agent moves through
Wardley mapping, a strategy framework created by Simon Wardley, identifies four evolution stages that all components in a value chain traverse: genesis, custom-built, product, and commodity. Each stage has different characteristics, different competitive dynamics, and crucially, different management requirements. The same pattern applies to cognitive agents, with adapted terminology.
Genesis. The agent is newly conceived. You have identified a need and designed a response pattern, but it has not been tested against reality. A genesis agent is all intention and no track record. It requires your most creative attention — you are defining what this agent even is, what triggers it, and what success looks like. Examples: you have just designed a pre-meeting preparation protocol, or a new framework for evaluating whether to say yes to commitments. The agent exists on paper but not yet in practice.
Deployment. The agent is active but unstable. You are running it in real situations, and it is producing a mix of useful outputs and misfires. This stage demands the highest monitoring intensity of any stage — not creative design work, but calibration work. You are adjusting trigger conditions, refining the response pattern, and building the reps that move the agent from "I have to think hard to use this" toward "this is becoming automatic." Most agents that fail, fail here — not because they were badly designed, but because they did not receive enough monitoring during the fragile deployment window.
Maturity. The agent runs reliably with minimal conscious intervention. It fires when it should. Its outputs are consistently useful. You trust it. This is where most people make their subtlest error: they keep monitoring a mature agent out of habit, spending attention where it is not needed. A mature agent should be reviewed periodically — perhaps monthly — but it should not consume weekly cognitive budget. The whole point of building agents is to free up attention. If you over-maintain mature agents, you never actually reclaim that attention.
Decline. The agent's context has shifted. The situations that made it valuable occur less frequently, or you have outgrown the response pattern. A declining agent is not broken — it is obsolete. It still functions, but it no longer serves the purpose it was designed for. Without lifecycle awareness, declining agents persist indefinitely, consuming residual attention and occasionally producing outputs that are subtly misaligned with your current reality. This is the connection to the sprawl problem from L-0598: agents you never retire because you never noticed they were declining.
Maturity models: the principle of stage-appropriate management
The Capability Maturity Model Integration (CMMI), developed at Carnegie Mellon University, formalized a principle that applies well beyond software engineering: the management approach that works at one maturity level is wrong at another. At CMMI Level 1 (Initial), processes are ad hoc and success depends on individual heroics. At Level 3 (Defined), standardized processes exist across the organization. At Level 5 (Optimizing), the organization continuously improves based on quantitative feedback.
The insight is not just that maturity exists on a spectrum. It is that each level requires fundamentally different management behavior. You do not optimize a Level 1 process — you stabilize it. You do not heroically manage a Level 5 process — you measure it and let the system self-correct.
The same principle governs your agents. A genesis-stage agent needs design attention: define its triggers, clarify its outputs, articulate what success looks like. A deployment-stage agent needs calibration attention: run it, observe it, adjust it. A mature agent needs audit attention: check in periodically, confirm it is still aligned with your goals, and otherwise leave it alone. A declining agent needs evaluation attention: decide whether to evolve it, retire it, or replace it.
When you apply deployment-stage attention to a mature agent, you waste time. When you apply audit-stage attention to a deployment-stage agent, you let a fragile system break. Lifecycle awareness is knowing which kind of attention each agent needs right now.
Metacognitive monitoring: the skill underneath the skill
John Flavell's foundational research on metacognition (1979) identified a distinction that is essential here: the difference between cognition and knowing about your cognition. You can have agents running — that is cognition. You can also monitor those agents, assess their effectiveness, and regulate how you interact with them — that is metacognition.
Flavell identified four components of metacognitive ability: knowledge (what you know about your cognitive processes), experiences (the felt sense of how a cognitive event is going), goals (what you are trying to accomplish), and strategies (how you plan to accomplish it). Lifecycle awareness engages all four. You know what stage each agent is in (knowledge). You notice when an agent feels effortful versus automatic (experience). You understand what each agent is supposed to produce (goals). And you choose different management approaches based on stage (strategies).
What Flavell's research showed about children applies to adults managing cognitive infrastructure: the skill of monitoring is separate from the skill being monitored, and it develops independently. You can have sophisticated agents and terrible awareness of their lifecycle status. The monitoring ability does not come for free — it is a distinct skill that must be built through deliberate practice.
This is why a simple weekly review that asks "what stage is each agent in?" produces outsized returns. You are not doing the work of the agents themselves. You are doing the metacognitive work of monitoring the agents — which is how you catch the deployment-stage agent that needs more reps, the mature agent you should stop fussing over, and the declining agent that is quietly producing outdated outputs.
The MLOps parallel: why production systems need monitoring
Machine learning operations (MLOps) provides a concrete parallel from engineering. When an ML model is deployed to production, it does not stay accurate forever. The data it was trained on drifts. User behavior changes. The world moves on. Without continuous monitoring, a model that was 95% accurate at deployment can degrade to 70% accuracy over months — and nobody notices because the model still produces outputs that look plausible.
The MLOps lifecycle addresses this through explicit monitoring stages: tracking performance metrics after deployment, detecting data drift and model drift, setting automated alerts when performance degrades past thresholds, and triggering retraining or rollback when needed. The entire practice is built on a single assumption: production systems degrade unless you watch them, and watching them requires knowing what healthy looks like at each stage.
Your cognitive agents have no automated monitoring dashboard. You are the monitoring system. And without a practice of checking lifecycle stage — is this agent still in deployment, has it reached maturity, is it starting to decline? — you are running production-grade cognitive infrastructure with no observability layer. The outputs look plausible. You do not notice the drift. And over time, agents that once served you well are quietly producing misaligned results.
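The monitoring loop described above can be sketched in a few lines. This is a minimal illustration, not any particular MLOps tool: a rolling accuracy window compared against the accuracy at deployment, with an alert when the drop exceeds a tolerated threshold. All names and numbers here are illustrative assumptions.

```python
from collections import deque

class DriftMonitor:
    """Tracks a rolling accuracy window and alerts when it degrades
    past a tolerated drop relative to accuracy at deployment."""

    def __init__(self, baseline_accuracy: float, threshold: float = 0.10, window: int = 100):
        self.baseline = baseline_accuracy
        self.threshold = threshold          # tolerated drop, e.g. 0.10 = 10 points
        self.recent = deque(maxlen=window)  # rolling record of correct/incorrect outputs

    def record(self, correct: bool) -> None:
        self.recent.append(1 if correct else 0)

    def current_accuracy(self) -> float:
        return sum(self.recent) / len(self.recent) if self.recent else self.baseline

    def drifted(self) -> bool:
        # Alert when rolling accuracy falls more than `threshold` below baseline
        return self.baseline - self.current_accuracy() > self.threshold

monitor = DriftMonitor(baseline_accuracy=0.95)
for _ in range(80):
    monitor.record(True)   # model still mostly right...
for _ in range(20):
    monitor.record(False)  # ...then the world drifts
print(monitor.drifted())   # prints True: a 15-point drop exceeds the 10-point threshold
```

The key design point carries over to agents: drift is invisible without a recorded baseline. The monitor does not ask "do the outputs look plausible?" but "how far have we moved from what healthy looked like at deployment?"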
Attention is the scarce resource lifecycle awareness protects
The product lifecycle model, one of the oldest frameworks in business strategy, divides a product's life into four stages: introduction, growth, maturity, and decline. Each stage demands a different allocation of resources. You invest heavily during introduction and growth. You optimize for efficiency during maturity. You make hard choices during decline — reinvest, pivot, or sunset.
The resource being allocated in product lifecycle management is money. The resource being allocated in agent lifecycle management is attention. And attention, as Nelson Cowan's working memory research established, is far more constrained than money — you have roughly 3 to 5 slots of active cognitive processing at any moment. Every unit of attention spent monitoring a mature agent that does not need it is a unit unavailable for calibrating the deployment-stage agent that does.
This is why lifecycle awareness is not a nice-to-have organizational practice. It is the mechanism that prevents your most constrained resource from being allocated by default rather than by design. Without it, attention flows to whatever agent is most salient — the one that fired most recently, the one attached to the strongest emotion, the one you built most recently. With lifecycle awareness, attention flows to whatever agent most needs it based on its current stage.
The awareness practice
Lifecycle awareness is not complex. It is a monitoring habit applied to your agent portfolio. Here is what it requires:
A registry. You need a list of your active agents. This can be a document, a spreadsheet, a set of notes — the format does not matter. What matters is that every agent has a name and a lifecycle stage recorded in one place. If you completed the agent sprawl audit from L-0598, you already have the raw material.
A stage assessment. For each agent, determine its current lifecycle stage: genesis, deployment, maturity, or decline. Be honest. An agent you built three months ago that you have barely used is still in genesis, not maturity. An agent you have been running for years that no longer matches your circumstances is in decline, not maturity.
An attention allocation. Based on the stage distribution, decide where your attention goes this week. Deployment-stage agents get the most monitoring time. Genesis-stage agents get design time if you have capacity. Mature agents get a brief check-in. Declining agents get an evaluation: evolve, retire, or replace.
A cadence. Weekly is enough for most people. The assessment does not need to be long — a sentence per agent, updated during your existing review process. The point is not to create a new system. The point is to add a single metacognitive layer to whatever review practice you already have.
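The four elements above fit in a sketch small enough to live in a note or a script. This is one illustrative way to structure the registry, not a prescribed format; the agent names and wording are placeholders.

```python
from enum import Enum

class Stage(Enum):
    GENESIS = "genesis"
    DEPLOYMENT = "deployment"
    MATURITY = "maturity"
    DECLINE = "decline"

# What each stage warrants during a weekly review (wording from the steps above)
ATTENTION = {
    Stage.GENESIS: "design time, if capacity allows",
    Stage.DEPLOYMENT: "most monitoring time",
    Stage.MATURITY: "brief check-in",
    Stage.DECLINE: "evaluate: evolve, retire, or replace",
}

# The registry: every agent gets a name and a recorded stage (example agents)
registry = {
    "pre-meeting preparation": Stage.DEPLOYMENT,
    "decision-making protocol": Stage.MATURITY,
    "conflict resolution": Stage.DECLINE,
}

def weekly_review(registry: dict) -> list[str]:
    """One sentence per agent: its stage and the attention it needs now."""
    return [f"{name}: {stage.value} -> {ATTENTION[stage]}"
            for name, stage in registry.items()]

for line in weekly_review(registry):
    print(line)
```

The format genuinely does not matter — a spreadsheet with two columns does the same work. What matters is that the stage is recorded explicitly rather than held in memory, so the weekly review reads the registry instead of relying on recall.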
What this makes possible
When you know where each agent is in its lifecycle, three things change.
First, your attention becomes strategic. You stop spending equal time on agents at different stages, and you start spending proportional time based on actual need. Deployment-stage agents get stabilized faster. Mature agents stop consuming attention they do not need. Declining agents get noticed before they produce months of subtly misaligned outputs.
Second, you can project forward. Instead of reacting to agent failures after they happen, you can anticipate transitions. A deployment-stage agent that has been running well for six weeks is approaching maturity — you can start reducing your monitoring cadence. A mature agent whose context is shifting may be entering decline — you can begin planning its successor before it breaks.
Third, you see your portfolio as a portfolio. Not a collection of independent agents, but a system with a distribution of stages that shifts over time. If all your agents are in maturity, you are not growing — you are coasting. If all your agents are in genesis or deployment, you are overextended. A healthy portfolio has agents at multiple stages, with a deliberate flow from genesis through maturity, and a willingness to recognize and act on decline.
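The portfolio-level view can itself be sketched as a small heuristic. The classification labels and the all-or-nothing thresholds below are illustrative assumptions, not calibrated rules; the point is only that the health signal comes from the distribution of stages, not from any single agent.

```python
from collections import Counter

def portfolio_health(stages: list[str]) -> str:
    """Classify a portfolio by its stage distribution (illustrative heuristics)."""
    counts = Counter(stages)
    total = len(stages)
    early = counts["genesis"] + counts["deployment"]
    if counts["maturity"] == total:
        return "coasting: all agents mature, nothing new in the pipeline"
    if early == total:
        return "overextended: every agent still needs heavy attention"
    return "healthy: agents distributed across stages"

print(portfolio_health(["maturity"] * 4))
print(portfolio_health(["genesis", "deployment", "maturity", "decline"]))
```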
This is the penultimate lesson in the Agent Lifecycle phase because it synthesizes everything that came before: creation, deployment, maintenance, evolution, retirement, portfolios, documentation, and sprawl all become manageable when you have one meta-skill — the ability to see each agent's lifecycle stage and allocate attention accordingly.
The final lesson, L-0600, will take this one step further: the lifecycle you apply to your agents mirrors the lifecycle of your own learning. How you create, maintain, and retire agents reflects how you grow as a person. Lifecycle awareness of your agents is, ultimately, lifecycle awareness of yourself.