Frequently asked questions about thinking, epistemology, and cognitive tools.
Confusing the feeling of having a plan with the reality of having a specific one. You say 'I have an agent for that' and feel the relief of having addressed the problem. But the agent is vague — 'When I feel stressed, I will take care of myself' — and because it lacks specificity, it never fires.
Treating internal agents as inherently superior because they feel more 'authentic' or 'natural.' This bias causes you to resist externalizing critical processes — like checklists for high-stakes procedures or automated reminders for recurring commitments — because relying on tools feels like a crutch.
Running the audit in your head instead of on paper. You'll think you already know what your defaults are — and you'll be wrong, because the whole point of default agents is that they operate below conscious awareness. The other failure mode is self-judgment: treating the audit as a scorecard.
Treating this lesson as permission to stay shallow. The point is not that simple agents are better forever — it's that a simple agent that runs is the prerequisite for a complex agent that runs. People skip the prerequisite. They design elaborate systems, watch them fail, and conclude they lack discipline, when what they actually lacked was a simple agent that runs.
Building a 'morning routine mega-agent' that tries to sequence seven behaviors. It works on day one when you have full motivation. By day four, one disruption cascades through the whole chain and the entire agent collapses. The failure isn't willpower — it's architectural. You coupled seven behaviors into a single failure domain.
Writing a description so vague it could mean anything. 'Be more intentional with my mornings' is not a documented agent — it's an aspiration. A documented agent specifies: when X happens, if Y is true, do Z. If your document doesn't have that structure, you haven't documented the agent. You've written down an aspiration.
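The when-X-if-Y-do-Z structure is concrete enough to sketch in code. A minimal sketch in Python, with hypothetical names — the point is only that all three slots must contain something observable, which is exactly what an aspiration lacks:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """A documented agent: when X happens, if Y is true, do Z."""
    trigger: str     # X: a discrete, observable event
    condition: str   # Y: a checkable precondition
    action: str      # Z: a specific behavior

    def is_documented(self) -> bool:
        # A crude check: an aspiration leaves at least one slot empty.
        # (Code can't detect vagueness, only missing structure.)
        return all(slot.strip() for slot in (self.trigger, self.condition, self.action))

aspiration = Agent(trigger="", condition="", action="be more intentional with my mornings")
documented = Agent(
    trigger="when I pour my first coffee",
    condition="if it is a weekday",
    action="write down the day's top three tasks",
)
print(aspiration.is_documented())  # False
print(documented.is_documented())  # True
```

The check is deliberately crude: it catches missing structure, not vague wording, which is why the document itself — not a test — has to specify X, Y, and Z concretely.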
Skipping the test because you are excited about the new agent and confident it will work. Overconfidence is the specific failure mode Klein's pre-mortem was designed to counter. You deploy untested, something breaks under real conditions, and instead of learning from a controlled failure you are forced to learn from an uncontrolled one.
Building sophisticated agents on top of unexamined schemas. You get faster at producing the wrong outputs. The agent fires with perfect reliability, but the underlying model of reality is distorted — so every reliable action takes you further from where you actually want to go. Efficiency without accuracy just compounds the error.
Designing a social agent that sounds good on paper but ignores your actual emotional state in the moment. The most common failure is skipping emotion regulation and jumping straight to the "correct" response — which produces wooden, inauthentic interactions that feel performative to both parties.
Designing decision agents for situations that are genuinely novel and then following them rigidly. Not every decision is recurring. If you apply a buy-versus-build checklist to a once-in-a-career strategic pivot, the checklist will produce an answer — but the answer will be wrong, because the checklist encodes patterns from decisions that recur, and this one doesn't.
Applying communication agents mechanically without reading the situation. BLUF works for status updates to your manager; it can feel cold and transactional in a message to a grieving colleague. The Pyramid Principle structures a board presentation beautifully; it strips the narrative arc out of a story that needs one.
Designing health agents that are too ambitious or too numerous for your current stage of readiness. You write agents for perfect sleep hygiene, daily intense exercise, pristine nutrition, and elaborate stress protocols — all at once. None of them survive first contact with your actual life. The sustainable path is one agent at a time, matched to your current stage of readiness.
The most common failure is designing financial agents that are too ambitious — 'save 50% of every paycheck' — which triggers loss aversion and gets overridden within weeks. Effective financial agents start below your pain threshold and escalate gradually, exactly as the Save More Tomorrow research demonstrates.
Treating agent design as a one-time intellectual exercise rather than an ongoing systems practice. You design five agents, feel satisfied, and never revisit them. Without feedback loops — without monitoring whether agents fire, whether they produce good outcomes, whether conditions have changed — the system silently decays.
Designing triggers that depend on motivation or memory rather than environmental cues. You tell yourself 'I'll do my weekly review when I feel like it' or 'I'll remember to journal before bed.' Motivation fluctuates. Memory is unreliable. Effective triggers are externally anchored — they fire whether or not you feel motivated or happen to remember.
Designing elaborate environmental triggers that require their own maintenance. If your trigger system needs a trigger to maintain it, you've added complexity instead of removing it. The best environmental cues are static objects that persist without upkeep — a hook by the door, a notebook on the desk.
Assigning a vague time window instead of a precise moment. 'Sometime in the morning' is not a trigger — it is a wish. The specificity is load-bearing. Without a fixed time, you rely on self-initiated retrieval, which is the most cognitively expensive form of prospective memory. You will remember inconsistently at best.
Choosing events that are not actually discrete or observable. 'When I feel settled in at work' is not an event — it is a subjective state with no clear boundary. 'When I am done with morning tasks' is ambiguous — done according to what criteria? The failure mode is building event-based triggers on states that are not discrete, observable events.
Trying to use emotions as triggers before you can reliably detect them. If you cannot notice frustration until you are already shouting, frustration is not yet a usable trigger for you — it fires too late. The prerequisite for emotional triggers is emotional awareness, and awareness is a trainable skill — train it first.
Building chains that are too long before any single link is solid. A five-step chain where link two is unreliable means links three through five never fire. The other failure is invisible chains — sequences you run on autopilot that end somewhere you didn't choose. Chaining is powerful in both directions: it can stack deliberate behaviors, or it can lock in sequences you never chose.
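The arithmetic behind this failure is worth making explicit: a chain fires only if every link fires, so per-link reliabilities multiply. A back-of-envelope sketch with illustrative numbers, not measurements:

```python
# Illustrative assumption: each link fires 90% of the time, independently.
link_reliability = 0.9

for length in (1, 3, 5, 7):
    # The whole chain completes only if every link fires in sequence.
    chain = link_reliability ** length
    print(f"{length}-step chain fires about {chain:.0%} of the time")
```

At 90% per link, a five-step chain completes only about 59% of the time — long chains are far less reliable than any individual link feels.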
Treating sensitivity as a fixed setting rather than an ongoing calibration process. You pick a threshold once, it works for a week, then your context changes — new job, new schedule, new stressors — and the old threshold is suddenly wrong. The second failure mode is binary thinking: assuming the trigger either works or it doesn't, when sensitivity is a dial to adjust, not a switch to set.
Adding so many qualifying conditions that the trigger never fires at all. This is the overcorrection — you swing from false positives to false negatives. The goal is not zero false positives. The goal is a false positive rate low enough that you still trust the trigger. If your guard clauses make the trigger effectively silent, you have overcorrected.
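Guard clauses work like early returns: each one filters false positives, but each one also blocks some real firings. A minimal sketch with hypothetical condition names (none of these come from the original text):

```python
from collections import namedtuple

# Hypothetical context for a 'weekly review' trigger.
Context = namedtuple("Context", ["is_workday", "already_reviewed", "in_meeting"])

def should_fire(ctx):
    """Each guard clause trades false positives for missed firings."""
    if not ctx.is_workday:
        return False        # guard 1: skip weekends
    if ctx.already_reviewed:
        return False        # guard 2: don't double-fire
    if ctx.in_meeting:
        return False        # guard 3: don't interrupt
    # Every additional guard pushes the firing rate toward zero.
    # Stop adding them while the trigger still fires often enough to trust.
    return True

print(should_fire(Context(True, False, False)))   # True
print(should_fire(Context(False, False, False)))  # False
```

The design question is not "can I imagine an edge case this guard would catch" but "after adding it, does the trigger still fire often enough that I believe in it."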
Assuming you missed the trigger because you lack discipline. Missed triggers are almost never motivation failures — they are detection failures. If you respond to a miss by trying harder to remember, you are solving the wrong problem. Solve the perceptual problem instead.
Placing triggers where you think you should encounter them rather than where you actually move. If your trigger is on the kitchen counter but you enter through the garage and go straight to the office, you designed for an ideal path, not your real one. Audit your actual movement patterns before placing triggers.