Frequently asked questions about thinking, epistemology, and cognitive tools.
Each agent should handle one specific situation — multi-purpose agents are fragile.
Written agent descriptions can be reviewed, refined, and shared.
Pick one agent you already run — a decision rule, a recurring process, a behavioral protocol. Write it down in this format: (1) Name, (2) Trigger — what activates it, (3) Conditions — when it applies and when it doesn't, (4) Actions — the specific steps, in order, (5) Success criteria — how you know it worked.
Writing a description so vague it could mean anything. 'Be more intentional with my mornings' is not a documented agent — it's an aspiration. A documented agent specifies: when X happens, if Y is true, do Z. If your document doesn't have that structure, you haven't documented the agent. You've written down an aspiration.
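The five-part format above can be sketched as a data structure. This is a minimal illustration, not a prescribed tool; the class name, fields, and the `is_documented` check are all hypothetical, chosen to show that "when X happens, if Y is true, do Z" requires a concrete trigger, conditions, and actions.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """One documented agent: name, trigger, conditions, actions, success criteria."""
    name: str
    trigger: str              # what activates it
    conditions: list[str]     # when it applies (and, by omission, when it doesn't)
    actions: list[str]        # the specific steps, in order
    success_criteria: str     # how you know it worked

    def is_documented(self) -> bool:
        # A vague aspiration fails this check: all three parts must be filled in.
        return bool(self.trigger and self.conditions and self.actions)

morning = Agent(
    name="Morning review",
    trigger="Sitting down at desk",
    conditions=["It is a workday", "Calendar not yet checked"],
    actions=["Open calendar", "List top three tasks", "Block focus time"],
    success_criteria="First focus block starts within 30 minutes",
)
```

An "agent" with an empty trigger or empty action list fails `is_documented()`, which is the structural test the answer above describes.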
Run through scenarios mentally or in low-stakes situations before relying on a new agent.
Pick one agent (behavioral routine, decision rule, or AI workflow) you want to deploy. Before using it in a real situation, run a pre-mortem: imagine it is six weeks from now and the agent has completely failed. Write down three specific reasons it failed. Then run the agent in a low-stakes situation, watching for the failure modes you predicted.
Skipping the test because you are excited about the new agent and confident it will work. Overconfidence is the specific failure mode Klein's pre-mortem was designed to counter. You deploy untested, something breaks under real conditions, and instead of learning from a controlled failure you are paying for an uncontrolled one.
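The pre-mortem step can be expressed as a gate that refuses deployment until failure reasons are written down. This is a toy sketch, not Klein's method itself: the function name, the four-word threshold for "specific enough," and the count of three are illustrative assumptions.

```python
def premortem_gate(failure_reasons: list[str], min_reasons: int = 3) -> bool:
    """Allow deployment only after enough specific failure reasons are recorded.

    A reason counts as specific here if it has at least four words;
    this is a crude proxy for concreteness, not a real test of it.
    """
    concrete = [r for r in failure_reasons if len(r.split()) >= 4]
    return len(concrete) >= min_reasons
```

One-word answers like "overconfidence" do not pass the gate; three concrete sentences do, which mirrors the exercise's demand for specific reasons rather than vague worry.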
When an agent fails to fire or produces bad results, you learn how to improve it.
Every agent embeds assumptions about the world — the schema it uses must be accurate.
Pick one agent you already run — a repeatable behavior triggered by a specific situation. Write down the schema it operates on: what does this agent assume about the world? Then ask three questions. First, where did this assumption come from? Second, when was the last time I tested it? Third, what evidence would show it is wrong?
Building sophisticated agents on top of unexamined schemas. You get faster at producing the wrong outputs. The agent fires with perfect reliability, but the underlying model of reality is distorted — so every reliable action takes you further from where you actually want to go. Efficiency without accuracy just gets you to the wrong place faster.