Frequently asked questions about thinking, epistemology, and cognitive tools. 1675 answers
Pick one active goal or recurring commitment — a fitness routine, a creative practice, a work deliverable cadence. Write down the current expectation you hold for it. Now rewrite that expectation with an explicit error budget: how many misses, delays, or quality drops per month or quarter are acceptable before you intervene.
Confusing error tolerance with lowered standards. Error tolerance does not mean accepting mediocrity. It means pre-authorizing a specific, bounded amount of deviation so that inevitable errors do not cascade into system collapse. The person who says 'I guess missing workouts is fine' has lowered the standard; the person who says 'two missed workouts a month is within budget' has engineered for error.
Expecting perfection creates fragility — expecting and handling errors creates resilience.
A meta-agent that coordinates other agents by deciding which should run when.
List the 3-5 cognitive agents (habits, routines, mental processes) you run most frequently in a single context — your morning, your workday start, your creative sessions. Write them down. Now ask: who decides the order? If the answer is 'habit' or 'whatever I feel like,' you have no orchestrator.
Turning the orchestrator into a bottleneck by making it deliberate over every micro-decision. The orchestrator agent should activate only at transition points and sequence boundaries — not supervise every action within each sub-agent. If you find yourself spending ten minutes deciding whether to begin, the orchestrator has become the bottleneck it was meant to remove.
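The orchestrator idea above can be sketched in code — a minimal, hypothetical Python sketch (all names invented here) in which the orchestrator fixes the sequence at a transition point and leaves each agent's internal work alone:

```python
# Hypothetical sketch: an orchestrator that sequences agents at transition
# points instead of supervising every action inside them.
class Orchestrator:
    def __init__(self, agents):
        # agents: ordered list of (name, run_fn); run_fn(context) -> bool
        self.agents = agents

    def run_transition(self, context):
        """Invoked once at a sequence boundary, e.g. 'workday start'."""
        ran = []
        for name, run_fn in self.agents:
            if run_fn(context):  # each agent decides internally whether to act
                ran.append(name)
        return ran

# Usage: the orchestrator owns the order; each agent stays single-purpose.
morning = Orchestrator([
    ("plan-day",    lambda ctx: True),
    ("clear-inbox", lambda ctx: ctx.get("workday", False)),
])
print(morning.run_transition({"workday": True}))  # ['plan-day', 'clear-inbox']
```

Note the division of labor: the orchestrator only answers "which agent, in what order, at this boundary" — it never looks inside an agent.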
Agents degrade over time unless actively maintained — monitoring catches drift before it becomes failure.
Holding too much yourself creates bottlenecks, burnout, and prevents others (and systems) from developing capability.
An agent that fires when it shouldn't wastes your attention and erodes trust.
Measure things that predict outcomes rather than waiting for outcomes themselves.
Every agent has a trigger that activates it, a condition that validates it, and an action it takes.
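The trigger/condition/action structure can be made concrete with a short sketch — hypothetical Python, with invented names, under the assumption that context is passed in as a plain dict:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of the trigger / condition / action anatomy.
@dataclass
class Agent:
    name: str
    trigger: Callable[[dict], bool]    # activates the agent on an event
    condition: Callable[[dict], bool]  # validates that the context is right
    action: Callable[[dict], None]     # what the agent actually does

    def run(self, context: dict) -> bool:
        """Act only if both the trigger and the validating condition hold."""
        if self.trigger(context) and self.condition(context):
            self.action(context)
            return True
        return False

# Usage: a 'review inbox' agent that fires on laptop-open, but only on workdays.
log = []
inbox_agent = Agent(
    name="review-inbox",
    trigger=lambda ctx: ctx.get("event") == "laptop_open",
    condition=lambda ctx: ctx.get("workday", False),
    action=lambda ctx: log.append("reviewed inbox"),
)
inbox_agent.run({"event": "laptop_open", "workday": True})   # fires
inbox_agent.run({"event": "laptop_open", "workday": False})  # blocked by condition
```

Separating the validating condition from the trigger is what lets you later tune sensitivity without rewriting the action.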
Inventory your existing agents, both designed and default, to understand what is running.
Each agent should handle one specific situation — multi-purpose agents are fragile.
When an agent fails to fire or produces bad results you learn how to improve it.
Every agent embeds assumptions about the world — the schema it uses must be accurate.
Agents for recurring decision types like buy-versus-build or accept-versus-decline.
Agents for spending, saving, and investment decisions.
Linking an agent to a specific event like arriving at work or opening your laptop.
Too sensitive and the agent fires too often — too insensitive and it never fires.
When a trigger fires in the wrong context you need to add qualifying conditions.
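Adding qualifying conditions can be sketched as follows — a hypothetical Python example (names invented) showing an over-broad trigger narrowed by a qualifier rather than discarded:

```python
# Hypothetical sketch: when a trigger fires in the wrong context,
# narrow it with a qualifying condition instead of abandoning it.
def base_trigger(ctx):
    # Fires on the event alone — too broad: also fires on meeting-first days.
    return ctx.get("event") == "arrive_at_work"

def qualified_trigger(ctx):
    # Same event, plus a qualifier that excludes the wrong context.
    return base_trigger(ctx) and not ctx.get("morning_meeting", False)

print(qualified_trigger({"event": "arrive_at_work"}))                           # True
print(qualified_trigger({"event": "arrive_at_work", "morning_meeting": True}))  # False
```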
Position trigger cues where you will encounter them at the right moment.
One-way doors deserve careful analysis — two-way doors should be walked through quickly.
Define good defaults so that the do-nothing option is acceptable.