Frequently asked questions about thinking, epistemology, and cognitive tools.
Creating atomic notes and filing them into folders by topic, then never linking them to anything. The notes are technically self-contained, but they function as isolated fragments because nothing connects them. You end up with a well-organized graveyard: everything is in its place, and nothing is in use.
Creating a tag taxonomy before you have enough atoms to need one. You design a careful hierarchy — #work/meetings/retrospectives — and then spend more energy maintaining the structure than writing the notes. The system collapses under its own organizational weight. The opposite failure is never imposing any structure at all.
Confusing system completeness with system trust. You build an elaborate capture infrastructure — multiple apps, complex workflows, automated integrations — and assume that because the system is comprehensive, you must trust it. But trust is not a feature of the system. Trust is a psychological state, earned by consistently reviewing and acting on what you capture.
Confusing a task list with an intention. A list of twelve things to do is not an intention — it is a menu that forces you to make a decision at the moment you should already be executing. The failure looks productive because you have a plan, but you still face the same attention-scattering choice at execution time.
Two opposite traps. First: treating shallow work as the enemy and trying to eliminate it entirely, which causes administrative debt to pile up until it becomes an emergency that destroys an entire deep work day. Second: letting shallow work colonize your peak hours because it feels productive, trading your best cognitive hours for easy, visible wins.
The most common failure is not refusing to document decisions — it is documenting the decision without documenting the context. People write "We chose React" without writing "because our team had three React developers and zero Angular developers, and we had a six-week deadline."
Assuming alignment exists because the words sound the same. Two people can say 'we need better testing' and mean completely different things — one means more unit tests, the other means more user research. Shared vocabulary without shared schema is the most common collaboration failure, and it is invisible precisely because the words match.
The most common failure mode is waiting until a schema is so obviously broken that only a complete overhaul seems adequate. You tolerate small inaccuracies for months or years, ignoring the accumulating drift between your model and reality, until the gap becomes a crisis. Then you panic-revise.
Treating your worldview as a finished product rather than a living system. The moment you declare 'this is how the world works' and stop integrating new schemas, your worldview calcifies into ideology. You stop noticing evidence that doesn't fit. You stop updating. The worldview that once sharpened your perception now blinds it.
Treating delegation as binary — either you do it yourself or you hand it off completely. This collapses a seven-level spectrum into two positions and guarantees one of two failures: micromanagement (everything stays at Level 1) or abandonment (everything jumps to Level 7). Both destroy trust.
Using writing only to record what you already know — meeting notes that replay the meeting, journal entries that narrate the day, documentation that restates the obvious. This treats writing as a tape recorder. You get the comfort of having written without any of the cognitive benefit. The tell: nothing you wrote surprised you.
Treating distraction as a character flaw rather than a biological default. When you frame distraction as weakness — 'I just need more discipline' or 'I need to try harder to focus' — you misdiagnose the problem and prescribe the wrong treatment. Willpower is a finite resource, and deploying it against a biological default is a losing strategy; change the environment that produces the distraction instead.
Misdiagnosing deadlock as a motivation or willpower problem. When you feel paralyzed between two competing priorities and neither moves forward, the instinct is to push harder — more discipline, more effort, more guilt. But if the structure is a circular dependency, no amount of force will break it.
Delegating once, getting a mediocre result, and concluding that delegation doesn't work for your context. This is like going to the gym once, being sore the next day, and deciding exercise is counterproductive. The mediocre result IS the training signal. The discomfort of imperfect output is the cost of building the capability.
Avoiding writing about topics you 'already understand' — which protects the illusion of understanding from ever being tested. The most dangerous knowledge gaps are in subjects you feel confident about, because confidence removes the motivation to verify. You will selectively write about things you already doubt, and the beliefs you are most sure of will never be tested.
Knowing about attention residue but treating it as trivia rather than an operating constraint. You nod at the concept, then context-switch twelve times before lunch and wonder why your deep work feels shallow. The failure is not ignorance — it is refusing to change behavior once you understand the constraint.
Confusing reliability with effectiveness. Your agent fires every time it should — perfect reliability score — so you assume it's working. But firing is not the same as producing the intended result. A smoke detector that sounds every time there's smoke is reliable. A smoke detector that sounds early enough for everyone to get out is effective.
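For the software-agent version of this distinction, the two scores can be measured separately. A minimal sketch in Python; the event fields and function names are illustrative assumptions, not from the original answer:

```python
# Hypothetical event log for an agent. Each event records whether the agent
# should have fired, whether it did fire, and whether firing produced the
# intended result. All field names here are illustrative.

def reliability(events):
    """Fraction of should-fire events where the agent actually fired."""
    should = [e for e in events if e["should_fire"]]
    if not should:
        return 1.0
    return sum(e["fired"] for e in should) / len(should)

def effectiveness(events):
    """Fraction of firings that produced the intended outcome."""
    fired = [e for e in events if e["fired"]]
    if not fired:
        return 0.0
    return sum(e["intended_result"] for e in fired) / len(fired)

events = [
    {"should_fire": True,  "fired": True,  "intended_result": True},
    {"should_fire": True,  "fired": True,  "intended_result": False},
    {"should_fire": True,  "fired": True,  "intended_result": False},
    {"should_fire": False, "fired": False, "intended_result": False},
]

# Perfect reliability, poor effectiveness: the failure mode described above.
print(reliability(events))    # 1.0
print(effectiveness(events))
```

Tracking the two numbers separately is the point: a dashboard that reports only the first will never show you the smoke detector that sounds too late.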
Documenting only the conclusion without the context. Writing 'We chose React' tells future-you nothing. Writing 'We chose React because the team already knows it, the timeline was six weeks, and we valued shipping speed over long-term performance' tells future-you everything. The decision without its context cannot be re-evaluated when the context changes.
Treating a pre-flight check as a formality rather than a genuine verification. The most dangerous version of this is 'flow-through checking' — running your eyes down the checklist and marking each item complete without actually testing the condition: looking without seeing.
Using the two-minute rule as a license for reactivity — doing every small thing that crosses your path all day long, instead of applying the rule during dedicated processing sessions. David Allen was explicit: the rule applies when you are processing your inbox, not when you are doing focused work.
Mixing hot and cold in one container. Your permanent notes become polluted with half-formed fragments. Your inbox accumulates hundreds of items that were supposed to be temporary but became permanent by neglect. You stop trusting your system because you cannot tell what has been processed and what has not.
Adopting a new mental model that explains the anomaly that triggered the change but quietly drops coverage of situations the old model handled well. You feel enlightened because you solved the puzzle that was bothering you, but you've introduced silent regressions — areas of life where your old model worked and the new one silently does not.
Stopping at the first answer that feels emotionally satisfying rather than continuing to the structural cause. The Five Whys fails most often not because people ask too few questions, but because the third or fourth answer lands on something that confirms an existing belief — 'the vendor is unreliable.' The questioning stops there, one level short of the process that let the vendor's failure matter.
Trying to change the behavior without identifying the trigger first. You white-knuckle through willpower for a week, then the trigger fires when you're tired and the pattern returns at full strength. The pattern isn't the enemy. The unidentified trigger is.