Frequently asked questions about thinking, epistemology, and cognitive tools (200 answers).
Treating distribution as an afterthought — finishing the work, feeling accomplished, and then vaguely hoping someone stumbles upon it in a shared drive or feed.
Copying the same content verbatim into every format instead of adapting it to each medium, which produces outputs that feel lazy and fail to serve any audience well.
Measuring only vanity metrics like views and likes, which feel rewarding but tell you nothing about whether your outputs actually changed anyone's thinking or behavior — optimizing for applause instead of impact.
Reviewing outputs without changing anything afterward — treating the review as a reflective ritual that feels productive but produces no behavioral adjustment, turning insight into self-congratulation.
Collaborative outputs fail most often from ambiguous ownership — when everyone is vaguely responsible, no one drives the work forward, and the result is a patchwork of conflicting voices that satisfies nobody.
Archiving everything with no metadata — dumping finished work into a folder called "Done" with original file names like "Final_v3_REAL_final.docx" — creating a graveyard instead of an archive.
Waiting to produce output until you feel ready, which means the compounding clock never starts and you accumulate zero surface area for luck to find you.
The capstone failure is treating the output system as a project to complete rather than an infrastructure to maintain. You finish this phase, feel the satisfaction of having a system, and then gradually stop using it. The pipeline board gathers dust. The templates go unused. The quality standards slip. Infrastructure that goes unmaintained stops being infrastructure.
Treating reflection as journaling, venting, or storytelling rather than structured extraction of lessons from experience.
Turning the daily review into a journaling marathon. You sit down for five minutes and emerge ninety minutes later having written three pages of emotional processing and existential reflection. The review was supposed to capture lessons; instead it became therapy. This is not inherently bad, but it is not a review: emotional processing has its place, and that place is not the five-minute lesson capture.
Treating the weekly review as a task-list audit — checking off what you did and did not do — instead of a pattern-detection session that changes how you plan the next week.
Treating the monthly review as a guilt session where you catalogue failures rather than a diagnostic session where you identify structural patterns and recalibrate commitments.
The most common failure is conducting a quarterly review that is just a bigger monthly review — checking metrics without questioning whether those metrics still measure the right things.
The most common failure is treating the annual review as a bigger quarterly review — evaluating projects and goals without stepping back to examine the life those projects and goals are embedded in. You emerge with a sharper strategy for the same trajectory, when the real question was whether the trajectory itself is the right one.
The most common failure is conducting AARs only for failures. When a project succeeds, most people move on without examining why it succeeded — which means they cannot reliably reproduce the conditions that led to success. A proper AAR covers both positive and negative outcomes, because repeating success requires understanding it just as much as avoiding failure does.
The most common failure is asking questions that are too vague to generate specific answers. "How am I doing?" produces "Fine." Every time. Vague questions invite vague answers because they do not constrain the search space — your brain has no idea what aspect of your experience to examine, so it returns the cached, socially acceptable answer. A narrow question, like "What decision did I avoid this week, and why?", forces a real search.
The primary failure is editing while writing. You write a sentence, decide it sounds wrong, delete it, and try again. This converts reflective writing into performance writing — you are now optimizing for how the words sound rather than discovering what you think. The entire mechanism of reflective writing depends on a first pass made without judgment; editing can come later, or not at all.
The primary failure mode is narrative imposition — seeing patterns that are not actually there. Nassim Taleb calls this the narrative fallacy: the human compulsion to weave disconnected events into a coherent story. You review three weeks of reflections, find two instances of frustration after similar events, and build a theory from what may be coincidence. Two data points are an anecdote, not a pattern.
The most common failure mode is performing reflection rather than doing it. You sit down, open your review document, and write honest-sounding sentences that never actually touch the uncomfortable truth. The review is thorough, well-structured, and entirely safe. You identify lessons learned that cost nothing to admit, while the lesson that would actually change your behavior stays unwritten.
The most common failure is conducting a success review that produces only feel-good platitudes rather than actionable insight. "We succeeded because we had a great team" is not a finding — it is a congratulation. The review must push past emotional satisfaction to structural understanding. What specifically made the team effective, and which of those conditions can you deliberately recreate?
The most common failure is treating energy and emotion tracking as a mood diary rather than as operational data. You note that you felt tired on Tuesday and anxious on Thursday, but you never analyze why or change anything structural in response. The tracking becomes a ritual of self-reporting rather than an input to structural change.
The most common failure is performing a systems review that is actually just action review in disguise. You write "my system for morning exercise is broken" when what you mean is "I did not exercise this morning." The distinction matters: a genuine systems review examines the structural elements — the triggers, the environment, the defaults — and asks which one failed, not which person did.
The primary failure mode is toxic positivity masquerading as gratitude — using the gratitude section to avoid or minimize genuine problems. If your project failed because you did not manage dependencies, writing "I am grateful for the learning opportunity" without also writing "I failed to track dependencies" is avoidance, not gratitude.
The most dangerous failure mode is sharing with the wrong person. You open a vulnerable reflection to someone who lacks psychological safety — someone who judges, competes, advises prematurely, or later uses what you shared against you. One bad sharing experience can shut down the practice.