Frequently asked questions about thinking, epistemology, and cognitive tools.
Each agent should handle one specific situation — multi-purpose agents are fragile.
Pick one agent you currently run (or want to run) that handles more than one situation. Split it into two or three narrower agents, each with a single trigger condition and a single action. Write each one on a separate card or line. Test them independently for three days and notice which ones fire reliably and which don't.
Building a 'morning routine mega-agent' that tries to sequence seven behaviors. It works on day one when you have full motivation. By day four, one disruption cascades through the whole chain and the entire agent collapses. The failure isn't willpower — it's architectural. You coupled seven behaviors into a single point of failure.
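The single-trigger structure above can be sketched in code. This is a hypothetical illustration, not a prescribed implementation — the agent names and context keys are invented for the example:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    """One agent: one trigger condition, one action."""
    name: str
    trigger: Callable[[dict], bool]  # when does it fire?
    action: Callable[[], str]        # the single step it performs

# Instead of one mega-agent sequencing seven behaviors, each behavior
# gets its own independently testable agent (names are illustrative).
agents = [
    Agent("hydrate", lambda ctx: ctx["just_woke_up"], lambda: "drink water"),
    Agent("review",  lambda ctx: ctx["at_desk"],      lambda: "review today's plan"),
]

def run(agents, ctx):
    # A trigger that doesn't fire skips one agent; it cannot cascade
    # through the rest, because the agents are not chained together.
    return [a.action() for a in agents if a.trigger(ctx)]

print(run(agents, {"just_woke_up": True, "at_desk": False}))
# only the 'hydrate' agent fires
```

The design point is decoupling: each agent can fail, be tested, or be revised without touching the others.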
Written agent descriptions can be reviewed, refined, and shared.
Pick one agent you already run — a decision rule, a recurring process, a behavioral protocol. Write it down in this format: (1) Name, (2) Trigger — what activates it, (3) Conditions — when it applies and when it doesn't, (4) Actions — the specific steps, in order, (5) Success criteria — how you know it worked.
Writing a description so vague it could mean anything. 'Be more intentional with my mornings' is not a documented agent — it's an aspiration. A documented agent specifies: when X happens, if Y is true, do Z. If your document doesn't have that structure, you haven't documented the agent. You've written down an aspiration.
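The five-part format from the exercise can be expressed as a small schema, which also makes the vague-versus-documented distinction checkable. The field names follow the exercise; the example agents are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class AgentSpec:
    """Name, trigger, conditions, actions, success criteria."""
    name: str
    trigger: str                                    # what activates it
    conditions: list = field(default_factory=list)  # when it applies / doesn't
    actions: list = field(default_factory=list)     # specific steps, in order
    success_criteria: str = ""                      # how you know it worked

    def is_documented(self) -> bool:
        # 'when X happens, if Y is true, do Z' — all three parts must exist.
        return bool(self.trigger and self.conditions and self.actions)

# An aspiration, not an agent: no trigger, no conditions, no actions.
vague = AgentSpec("be more intentional with mornings", trigger="")

# A documented agent (hypothetical example).
concrete = AgentSpec(
    "inbox triage",
    trigger="opening email for the first time each day",
    conditions=["only before 10:00", "skip on vacation"],
    actions=["archive newsletters", "flag anything needing a reply", "close the tab"],
    success_criteria="inbox reviewed in under 10 minutes",
)

print(vague.is_documented(), concrete.is_documented())  # False True
```

Because the spec is written down, it can be diffed, reviewed, and handed to someone else — the point of documentation over habit.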
Run through scenarios mentally or in low-stakes situations before relying on a new agent.
Pick one agent (behavioral routine, decision rule, or AI workflow) you want to deploy. Before using it in a real situation, run a pre-mortem: imagine it is six weeks from now and the agent has completely failed. Write down three specific reasons it failed. Then run the agent in a low-stakes situation before trusting it with anything that matters.
Skipping the test because you are excited about the new agent and confident it will work. Overconfidence is the specific failure mode Klein's pre-mortem was designed to counter. You deploy untested, something breaks under real conditions, and instead of learning from a controlled failure you are debugging in the middle of a real one.
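The low-stakes test above amounts to running the agent against a handful of scenarios before trusting it. A minimal sketch, assuming an agent can be modeled as a function of a context — the toy agent and scenarios are invented for the example:

```python
def dry_run(agent_fn, scenarios):
    """Run an agent against low-stakes scenarios; collect every failure."""
    failures = []
    for name, inputs, expected in scenarios:
        try:
            result = agent_fn(inputs)
        except Exception as exc:
            failures.append((name, f"raised {exc!r}"))
            continue
        if result != expected:
            failures.append((name, f"got {result!r}, wanted {expected!r}"))
    return failures

# Toy agent: fires only on weekday mornings.
agent = lambda ctx: "run" if ctx["hour"] < 9 and ctx["weekday"] else "skip"

scenarios = [
    ("weekday early", {"hour": 7,  "weekday": True},  "run"),
    ("weekend",       {"hour": 7,  "weekday": False}, "skip"),
    ("weekday late",  {"hour": 11, "weekday": True},  "skip"),
]

print(dry_run(agent, scenarios))  # [] — every scenario passed
```

An empty failure list is the signal to deploy; a non-empty one is a controlled failure you got to see before it cost anything.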
When an agent fails to fire or produces bad results, you learn how to improve it.
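Learning from failures requires capturing them when they happen. One hedged sketch of such a log — the fields and the example entry are hypothetical:

```python
import datetime

failure_log = []

def log_failure(agent_name, situation, what_happened):
    """Record a misfire so the agent's spec can be revised later."""
    failure_log.append({
        "agent": agent_name,
        "when": datetime.date.today().isoformat(),
        "situation": situation,
        "result": what_happened,
    })

# Hypothetical entry: the agent's trigger assumed a normal morning.
log_failure("inbox triage", "Monday after vacation",
            "did not fire: trigger assumed a normal morning")

print(len(failure_log))  # 1
```

Reviewing the log shows which trigger or condition to revise — the failure becomes input to the next version of the agent rather than a reason to abandon it.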
Every agent embeds assumptions about the world — the schema it uses must be accurate.
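One way to keep an agent's embedded assumptions honest is to state them explicitly and compare them against what you actually observe. A minimal sketch, with invented assumption keys:

```python
def check_assumptions(assumed, observed):
    """Return every assumption the world no longer matches,
    as {key: (assumed_value, observed_value)}."""
    return {
        key: (value, observed.get(key))
        for key, value in assumed.items()
        if observed.get(key) != value
    }

# Hypothetical schema behind a morning-workout agent.
assumptions = {"gym_opens_at": 6, "commute_minutes": 20}
observed    = {"gym_opens_at": 8, "commute_minutes": 20}

print(check_assumptions(assumptions, observed))
# {'gym_opens_at': (6, 8)} — the agent's schema is stale here
```

A non-empty result means the agent's schema has drifted from reality, and the agent will misfire until the schema is corrected.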