Cognitive agents are repeatable processes you design to handle recurring decisions.
Your habits and automatic reactions are agents that were installed without your conscious input.
Every deliberate agent you create replaces an unconscious default.
Every agent has a trigger that activates it, a condition that validates it, and an action it takes.
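The trigger–condition–action structure can be sketched as a small data type. This is a minimal illustration, not a prescribed implementation; the agent and event names are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class CognitiveAgent:
    """An agent: a trigger activates it, a condition validates it, an action executes."""
    trigger: Callable[[str], bool]   # does this event activate the agent?
    condition: Callable[[], bool]    # is the context right to act?
    action: Callable[[], str]        # what the agent does

    def run(self, event: str) -> Optional[str]:
        if self.trigger(event) and self.condition():
            return self.action()
        return None

# Hypothetical example: an inbox agent that fires when you open email.
inbox_agent = CognitiveAgent(
    trigger=lambda event: event == "opened_email",
    condition=lambda: True,          # no qualifying condition yet
    action=lambda: "handle the oldest message first",
)
```

Note that the agent does nothing on any other event: a trigger that does not fire means no action at all, which is why trigger design gets its own section below.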
When an agent handles a recurring decision, you preserve energy for novel ones.
Vague agents do not fire reliably — specificity is required.
Internal agents run in your mind while external agents are embedded in tools and systems.
Inventory your existing agents, both designed and default, to understand what is running.
A simple agent that fires consistently beats a complex agent that fires intermittently.
Each agent should handle one specific situation — multi-purpose agents are fragile.
Written agent descriptions can be reviewed, refined, and shared.
Run through scenarios mentally or in low-stakes situations before relying on a new agent.
When an agent fails to fire or produces bad results, you learn how to improve it.
Every agent embeds assumptions about the world — the schema it uses must be accurate.
Agents for how to respond in social situations, like receiving criticism or giving feedback.
Agents for recurring decision types like buy-versus-build or accept-versus-decline.
Agents for how to structure emails, presentations, and difficult conversations.
Agents for sleep, exercise, nutrition, and stress-management decisions.
Agents for spending, saving, and investment decisions.
Designing agents for your own cognition is applying systems design to the most important system you manage.
Without a clear trigger, an agent never activates, no matter how well it is designed.
Internal triggers are thoughts and feelings — external triggers are events and cues.
A trigger must be something you can detect consistently.
Physical cues in your environment trigger more reliably than mental intentions.
Using specific times or time intervals as triggers leverages your existing time awareness.
Linking an agent to a specific event like arriving at work or opening your laptop.
Using specific emotional states as activation signals for pre-designed responses.
The completion of one agent becomes the trigger for the next.
Too sensitive and the agent fires too often — too insensitive and it never fires.
When a trigger fires in the wrong context, you need to add qualifying conditions.
When you fail to notice a trigger, you need to make it more salient.
Combining multiple trigger conditions for higher-specificity activation.
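Combining trigger conditions can be sketched as a simple combinator; the sub-triggers here are hypothetical examples for a weekly-review agent.

```python
from typing import Callable, Dict

def all_of(*triggers: Callable[[Dict], bool]) -> Callable[[Dict], bool]:
    """Compound trigger: fires only when every sub-trigger fires (higher specificity)."""
    return lambda event: all(t(event) for t in triggers)

# Hypothetical sub-triggers for a "weekly review" agent.
at_desk = lambda e: e.get("location") == "desk"
on_friday = lambda e: e.get("day") == "friday"
review_trigger = all_of(at_desk, on_friday)
```

Each added condition narrows activation, which is exactly the calibration move for a trigger that fires too often.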
Position trigger cues where you will encounter them at the right moment.
Alarms, notifications, and calendar events as systematic trigger mechanisms.
Other people can serve as triggers — asking someone to remind you is a social trigger.
Too many triggers overwhelm your attention — curate ruthlessly.
Regularly review your triggers to ensure they are still relevant and well-calibrated.
You are designing the user experience of your own cognitive systems.
Start with broad triggers and narrow them as you learn what works.
A complete set of well-tuned triggers means you respond appropriately to everything that matters.
Every decision costs attention and energy — systematic frameworks reduce this cost.
Most decisions you face are variations of types you have encountered before.
Weight your criteria and score options systematically when multiple factors matter.
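Weighted scoring can be sketched in a few lines. The criteria, weights, and option scores below are purely illustrative, not recommended values.

```python
def weighted_score(option, weights):
    """Weighted sum of an option's scores across criteria."""
    return sum(weights[criterion] * option[criterion] for criterion in weights)

# Hypothetical buy-versus-build comparison (weights sum to 1.0).
weights = {"cost": 0.5, "speed": 0.3, "quality": 0.2}
options = {
    "buy":   {"cost": 3, "speed": 9, "quality": 7},
    "build": {"cost": 8, "speed": 4, "quality": 8},
}
scores = {name: weighted_score(option, weights) for name, option in options.items()}
best = max(scores, key=scores.get)
```

The value of the exercise is less the final number than being forced to make your criteria and their weights explicit.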
Spend minimal time on easily reversible decisions and maximum time on irreversible ones.
Deciding in advance what you will do in a specific situation removes in-the-moment temptation.
Record decisions, their reasoning, and their outcomes to improve future decision-making.
Setting deadlines for decisions prevents analysis paralysis.
Know which decisions you must make yourself and which can be delegated.
Different frameworks for decisions made alone versus with others.
Sometimes deciding fast is more important than deciding optimally.
After a decision plays out, review whether your framework served you well.
Choosing which framework to apply requires a meta-framework.
When routine decisions are systematized, your creative energy is preserved for novel problems.
Any system that cannot observe its own output cannot improve.
Action, observation, evaluation, and adjustment form the basic feedback cycle.
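The four-step cycle can be written as a loop. This is a toy numerical sketch, assuming a simple error-driven adjustment; the target and step size are arbitrary.

```python
def feedback_cycle(act, observe, evaluate, adjust, state, rounds=3):
    """One pass per round: act, observe the result, evaluate it, adjust the state."""
    for _ in range(rounds):
        outcome = act(state)
        observation = observe(outcome)
        error = evaluate(observation)
        state = adjust(state, error)
    return state

# Toy example: nudge an estimate toward a target of 10.
final = feedback_cycle(
    act=lambda s: s,                  # act on the current estimate
    observe=lambda o: o,              # observe the outcome directly
    evaluate=lambda obs: 10 - obs,    # error relative to the target
    adjust=lambda s, e: s + 0.5 * e,  # move halfway toward the target
    state=0.0,
)
```

Three rounds move the estimate from 0 to 8.75: each cycle halves the remaining error, which is why faster loops converge sooner.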
The faster you get feedback on an action the faster you can adjust.
When feedback is delayed, you may persist with ineffective behavior for too long.
Some loops reinforce themselves — success breeds more success or failure breeds more failure.
Self-correcting loops maintain balance by countering deviations.
Measure things that predict outcomes rather than waiting for outcomes themselves.
Your emotions create self-reinforcing cycles — anxiety begets more anxiety.
Habits persist because they create their own reinforcing feedback.
What you read shapes what you think which shapes what you seek out to read.
When a beneficial loop exists, invest in making it stronger and faster.
Real situations often involve several interacting feedback loops simultaneously.
Do not wait for feedback to arrive naturally — engineer feedback into your systems.
Regularly check that your feedback loops are still connected to meaningful outcomes.
The ability to build and tune feedback loops is the ability to continuously improve.
No process works perfectly every time — error correction must be built in from the start.
You cannot fix what you cannot detect — invest in error detection mechanisms.
Execution errors, knowledge errors, and judgment errors require different correction approaches.
Design systems that surface errors early when they are easiest and cheapest to correct.
Accept that some error rate is normal and define how much error is tolerable.
When the same error happens repeatedly fix the root cause not just the symptom.
Asking "why" five times in succession usually reaches the root cause of a problem.
A checklist is an error prevention agent that catches predictable mistakes.
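A checklist agent reduces to "run every check, report what failed." The email checks below are hypothetical stand-ins for whatever predictable mistakes your task invites.

```python
def failed_checks(checks):
    """Checklist agent: return the names of checks that did not pass."""
    return [name for name, passed in checks.items() if not passed()]

# Hypothetical pre-send email checklist.
checks = {
    "subject line present": lambda: True,
    "attachment included": lambda: False,   # the predictable mistake
    "right recipient": lambda: True,
}
problems = failed_checks(checks)
```

Run before the irreversible step (sending), this is also a pre-condition review: it catches the error before it propagates.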
Reviewing key conditions before starting a task catches errors before they propagate.
Reviewing what happened after completing a task surfaces errors for future correction.
Small uncorrected errors can trigger chains of increasingly large errors.
Design your systems to fail partially rather than completely.
For every important process have a documented way to recover from common failures.
Focusing on who caused an error prevents understanding why it happened.
Recurring errors point to structural problems not personal failures.
Use tools and systems to catch errors that manual vigilance misses.
Every correction takes time and energy — reduce the error rate rather than just correcting faster.
Errors teach you more about your systems than successes do.
Expecting perfection creates fragility — expecting and handling errors creates resilience.
The best systems detect and correct their own errors without manual intervention.
When you run several cognitive agents, they need to work together, not interfere with each other.
When two agents try to handle the same situation, they may give conflicting instructions.
When agents conflict, the higher-priority agent wins.
Some agents must run in a specific order — define the sequence explicitly.
Some agents can run simultaneously while others must wait for previous results.
When agents need to share information, define clearly how that information flows.
Define how the output of one agent becomes the input of another.
A meta-agent that coordinates other agents by deciding which should run when.
When one agent finishes and another starts, the relevant context must transfer cleanly.
Draw the dependencies between your agents to see the full coordination picture.
When two agents each wait for the other, neither can proceed — design to prevent this.
When multiple agents need the same scarce resource, like your attention, define allocation rules.
Common patterns like pipeline, fan-out, and consensus for coordinating multiple agents.
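The three coordination patterns can be sketched as higher-order functions; the agents passed in below are trivial placeholders.

```python
def pipeline(*stages):
    """Pipeline: each agent's output feeds the next agent's input."""
    def run(value):
        for stage in stages:
            value = stage(value)
        return value
    return run

def fan_out(*agents):
    """Fan-out: one input is handed to several agents independently."""
    return lambda value: [agent(value) for agent in agents]

def consensus(*agents):
    """Consensus: proceed only if a majority of agents agree."""
    def run(value):
        votes = [agent(value) for agent in agents]
        return sum(votes) > len(votes) / 2
    return run
```

A pipeline makes the sequencing and handoff rules explicit; fan-out parallelizes; consensus trades speed for robustness against any single agent being wrong.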
Coordination itself costs effort — keep the coordination cost proportional to the benefit.
Sometimes combined agent behavior produces results none of the individual agents intended.
Your set of agents is an ecosystem — it needs balance and periodic assessment.
Every new agent interacts with all existing agents — add new agents deliberately.
When retiring an agent, update everything that depended on it.
Periodically assess how well your agents work together as a system.
When your agents work together smoothly, the result looks like natural ability to others.
Effective delegation frees your highest-value attention for your highest-value work.
Tools, checklists, and automated processes are delegation targets.
Use clear criteria to decide what to delegate, what to automate, and what to keep.
Some decisions and responsibilities must remain with you — knowing which ones is a meta-skill.
Vague delegation produces vague results. Specify the outcome, constraints, and success criteria before handing anything off.
Specify the result you want, not the exact steps to get there. This preserves autonomy and invites better solutions.
Delegation without verification is abdication. Build lightweight checks to ensure delegated work meets your standards.
Trust your agents and systems — but build verification into the process, not as an afterthought.
A tool is a delegated capability — it does something you could do, but faster, more reliably, or at greater scale.
A well-designed habit is delegation to your future automatic self.
Your environment can enforce behaviors that willpower alone cannot sustain.
Delegating too much creates disconnection from the work that matters and atrophies critical skills.
Holding too much yourself creates bottlenecks, burnout, and prevents others (and systems) from developing capability.
Delegation ranges from "do exactly this" to "handle it entirely" — know which level you are using.
Delegation is a skill you build over time — each successful delegation increases your capacity for the next one.
True control comes from building systems you trust to operate without your constant oversight.
Every effective delegation multiplies your capacity — the cumulative effect is exponential leverage.
Effective delegation means your results exceed what your personal effort alone could produce.
Agent monitoring provides the data you need to optimize your cognitive systems.
Every agent needs a clear definition of what success looks like in measurable terms. Without operational metrics, monitoring produces noise instead of signal.
Monitor too rarely and you miss problems; monitor too often and you create noise. Find the right cadence.
A dashboard gives you a single view of all your agents' health and performance.
Track how often each agent fires when it should and does not fire when it should not.
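Firing accuracy can be tracked with a simple log of (should have fired, did fire) pairs. The observation log below is hypothetical.

```python
def firing_stats(log):
    """log: list of (should_have_fired, did_fire) observations for one agent."""
    hits = sum(1 for should, did in log if should and did)
    misses = sum(1 for should, did in log if should and not did)       # silent failures
    false_alarms = sum(1 for should, did in log if not should and did)  # wasted attention
    hit_rate = hits / (hits + misses) if (hits + misses) else 1.0
    return {"hit_rate": hit_rate, "false_alarms": false_alarms}

# Hypothetical week of observations.
log = [(True, True), (True, False), (False, True), (True, True)]
stats = firing_stats(log)
```

Misses and false alarms are the two failure modes the next lessons address, and they call for opposite calibrations: misses want a more salient trigger, false alarms want a tighter qualifying condition.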
Effectiveness means your agent produces the intended outcome, not just that it runs.
Track how quickly each agent responds to its trigger.
An agent that fires when it shouldn't wastes your attention and erodes trust.
An agent that fails to fire when it should leaves you exposed to undetected problems — the silence feels like safety, but it is blindness.
Agents degrade over time unless actively maintained — monitoring catches drift before it becomes failure.
Monitoring itself costs attention and energy — the overhead must be justified by the value it provides.
Automate monitoring wherever possible to reduce overhead while maintaining visibility.
Written reflection is the oldest and most versatile form of self-monitoring.
The act of measuring creates a commitment loop — what you track, you take responsibility for.
Define clear thresholds that distinguish normal operation from problems requiring your attention.
A single measurement tells you where you are; a trend tells you where you are heading.
Too much monitoring data overwhelms attention and leads to ignoring signals that matter. The solution is not more data — it is fewer, sharper signals routed to the right layer of attention.
Compare agents against each other and against baselines to identify relative performance.
Monitoring without action is observation theater — data must drive decisions.
Monitoring completes the feedback loop — observation enables adjustment enables improvement.
Use monitoring data to make targeted improvements to your agents.
Improving anything other than the bottleneck is wasted effort.
Consistent 1% improvements produce transformative results over time.
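The arithmetic behind compounding improvement is worth seeing once: a 1% gain applied daily multiplies, not adds.

```python
# Compounding a 1% daily improvement over a year: (1.01)^365 ≈ 37.8x.
yearly_growth = 1.01 ** 365
```

The same arithmetic cuts both ways: a 1% daily decline (0.99 ** 365) shrinks capability to about 3% of where it started.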
Each improvement gets harder and smaller — know when further optimization is not worth the cost.
The optimal amount of optimization is not infinite — there is a point where you should stop and move on.
Run two versions of an agent simultaneously and let the data tell you which performs better.
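A comparison harness can be sketched as follows. For determinism, this sketch runs both versions on identical trials rather than randomizing assignment as a real A/B test would; the scoring function and estimators are illustrative.

```python
def compare_versions(version_a, version_b, trials, score):
    """Run both agent versions on the same trials and return mean scores."""
    means = {}
    for name, agent in (("A", version_a), ("B", version_b)):
        results = [score(trial, agent(trial)) for trial in trials]
        means[name] = sum(results) / len(results)
    return means

# Toy comparison: which estimator stays closer to the true value?
trials = [1.2, 2.7, 3.6]
closeness = lambda truth, estimate: -abs(truth - estimate)   # higher is better
version_a = lambda x: round(x)    # round to nearest
version_b = lambda x: int(x)      # truncate
result = compare_versions(version_a, version_b, trials, closeness)
```

Because both versions see the same trials, any score difference is attributable to the versions themselves — the single-variable-change principle in miniature.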
Change one thing at a time so you can attribute improvements to specific changes.
Optimization improves within a framework; innovation replaces the framework. Know which you need.
Making an agent faster means it can serve you more often with less friction.
An agent that acts fast but wrong is worse than one that acts slowly but right.
A reliable agent works every time, not just when conditions are perfect.
An agent that tries to do too much does nothing well. Optimize by narrowing scope to what matters.
An efficient agent achieves results with minimal energy expenditure — cognitive, emotional, or physical.
Optimize how agents connect and hand off to each other, not just how each agent performs in isolation.
The most powerful optimization is often subtraction — removing steps that add cost without adding value.
Dedicate focused time blocks to optimizing specific agents rather than trying to optimize everything continuously.
Without a baseline measurement, you cannot know whether your optimization actually improved anything.
Record what you changed, why, and what happened — optimization without documentation is gambling.
Optimizing before you understand the system is the root of much wasted effort.
Optimization is not something you do once — it is an ongoing relationship with your systems.
Every agent is created, deployed, maintained, and eventually retired.
Creating an agent is a deliberate design act — not something that just happens.
Moving an agent from design to daily operation takes time and deliberate effort.
New agents are most fragile in their first month — they need extra attention and support to survive.
Agents need regular maintenance — scheduled reviews prevent gradual degradation.
Sometimes you should improve an existing agent; sometimes you should replace it entirely.
Track versions of your agents so you can compare, rollback, and learn from changes.
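Version tracking for a written agent description needs only commit and rollback. This is a minimal sketch; the descriptions are hypothetical.

```python
class AgentVersions:
    """Keep every revision of an agent's description for comparison and rollback."""
    def __init__(self):
        self._versions = []

    def commit(self, description):
        """Record a new revision and return its version id."""
        self._versions.append(description)
        return len(self._versions) - 1

    def rollback(self, version_id):
        """Retrieve an earlier revision by id."""
        return self._versions[version_id]

versions = AgentVersions()
v0 = versions.commit("on opening laptop: review calendar")
v1 = versions.commit("on opening laptop: review calendar, then inbox")
```

Keeping old revisions makes replacement reversible, which lowers the perceived risk of retiring or rewriting an agent.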
Define clear criteria for when an agent should be retired rather than maintained. Without explicit retirement criteria set in advance, you will hold onto agents long past the point where they serve you — because the sunk cost of building them, the identity you attached to them, and the absence of a forcing function all conspire to keep dead agents on life support.
Retire agents gracefully — document what they did, why they're being retired, and what replaces them.
When retiring an agent, ensure its responsibilities transfer to a new agent or are consciously dropped.
Understanding your past agents — even failed ones — reveals patterns in how you build cognitive systems.
Your full set of active agents is a portfolio that should be balanced and diversified.
Periodically review and rebalance your agent portfolio — retire underperformers, invest in high-value agents.
New agents can inherit properties and patterns from existing successful agents rather than being built from scratch.
Create reusable templates for common agent patterns to accelerate creation of new agents.
Some agents outlive their usefulness but persist because removing them feels risky or costly. Legacy agents consume resources, create confusion, and block the deployment of better alternatives. Identifying them is the first step toward a clean epistemic portfolio.
Documentation should evolve with the agent — outdated docs are worse than no docs.
Too many agents create coordination overhead that can exceed their collective value.
Knowing where each of your agents is in its lifecycle helps you allocate attention appropriately.
The way you create, maintain, and retire agents mirrors how you learn, practice, and let go of knowledge. Recognizing this parallel turns agent management into a form of self-directed development.