Frequently asked questions about thinking, epistemology, and cognitive tools.
Writing documents nobody reads — either because they are too long, too disorganized, or stored where nobody can find them. The most common version: a 40-page wiki that technically contains the answer but requires 30 minutes of reading to extract it. Delegation to documents fails when the document is harder to consult than the person it was supposed to replace.
Treating rule creation as a one-time event and never revising. A rule that was right six months ago may be wrong now because the context shifted. The deeper failure is confusing the comfort of not deciding with the quality of the decisions being made on your behalf. If you never audit your rules, you are not delegating decisions; you are abandoning them.
Confusing efficiency with competence. Over-delegation feels like progress because your calendar clears up. But the emptiness in your calendar can mask an emptiness in your capability. The warning signs are subtle: you stop asking sharp questions because you no longer know enough to formulate them.
Recognizing these warning signs intellectually while rationalizing each specific instance. 'Yes, I know I should delegate more, but THIS task really does require me.' The failure mode is not ignorance — it's exemption. You will agree with every word of this lesson and then exempt every item on your own list.
Confusing the feeling of control with actual control. You attend every meeting, review every document, approve every decision — and mistake the exhaustion for effectiveness. Meanwhile, the system depends entirely on your presence. If you got sick for two weeks, everything would stop. That is not control; it is a single point of failure.
Treating delegation as a way to be lazy rather than a way to be leveraged. The person who delegates everything and monitors nothing isn't creating leverage — they're creating drift. Leverage requires the initial investment of building clear specifications, selecting the right delegate, and verifying the results.
Equating delegation with abdication. The master delegator who 'does less' is not doing nothing — they are doing different work. They are designing systems, selecting agents, defining outcomes, verifying results, and refining the delegation architecture itself. When you see someone delegate effortlessly, you are seeing the product of that work, not the absence of it.
Monitoring everything. You build a 47-metric dashboard for your morning routine and spend more time tracking than doing. Monitoring becomes the work instead of supporting it. The antidote is ruthless selectivity: monitor the minimum number of signals that tell you whether an agent is working. If a signal would not change what you do next, stop tracking it.
Defining metrics that are easy to count rather than meaningful to track. You measure 'number of journal entries per week' instead of 'percentage of entries that surface an actionable insight.' The easy metric gives you a green dashboard while the agent silently underperforms. This is Goodhart's Law in action: the measure became the target and stopped measuring what matters.
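To make the gap concrete, here is a minimal sketch in Python; the entry data and the 'actionable' flag are illustrative assumptions, not a prescribed schema:

    # Hypothetical week of journal entries, each tagged with whether it
    # surfaced an actionable insight.
    entries = [
        {"date": "2024-05-01", "actionable": True},
        {"date": "2024-05-02", "actionable": False},
        {"date": "2024-05-03", "actionable": False},
        {"date": "2024-05-04", "actionable": True},
    ]

    easy_metric = len(entries)  # counts activity, not value
    meaningful_metric = sum(e["actionable"] for e in entries) / len(entries)

    print(f"entries this week: {easy_metric}")                  # 4 -- dashboard is green
    print(f"actionable-insight rate: {meaningful_metric:.0%}")  # 50% -- the real signal

The easy metric can stay green indefinitely while the meaningful one drifts toward zero.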
Building a dashboard you never look at. The most common failure is not bad design — it is abandonment. You spend an hour creating a beautiful tracker, review it twice, then forget it exists. The dashboard rots while you return to operating without visibility. The antidote is making the review itself a scheduled, recurring commitment rather than an act of willpower.
Treating reliability as a binary — the agent either 'works' or 'doesn't work.' This collapses a rich, multi-dimensional signal into a useless bit. An agent with 95% reliability and a 30% false-fire rate has a completely different failure profile than an agent with 70% reliability and a 0% false-fire rate, and each profile calls for a different fix.
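A small sketch of what keeping the dimensions separate looks like, using the two profiles from the text; the class and its fields are hypothetical, not a standard structure:

    from dataclasses import dataclass

    @dataclass
    class AgentProfile:
        name: str
        reliability: float      # fraction of needed activations that happened
        false_fire_rate: float  # fraction of activations that were spurious

        def miss_rate(self) -> float:
            return 1.0 - self.reliability

        def describe(self) -> str:
            # Two different failure profiles demand two different fixes:
            # spurious fires call for filtering; missed fires for stronger triggers.
            if self.false_fire_rate > self.miss_rate():
                return f"{self.name}: noisy -- tighten the trigger condition"
            return f"{self.name}: quiet -- broaden or strengthen the trigger"

    for agent in (AgentProfile("A", 0.95, 0.30), AgentProfile("B", 0.70, 0.00)):
        print(agent.describe())

A single pass/fail bit would rate A above B and hide the fact that A needs the opposite intervention.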
Measuring only whether an agent fires, not how quickly. This is the binary trap: you treat activation as a yes-or-no event and declare success whenever the agent eventually engages. But an agent that fires correctly after the critical window has closed is functionally equivalent to an agent that never fired at all.
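A minimal sketch of counting late fires as misses; the activation log and the ten-minute window are assumed values for illustration:

    from datetime import timedelta

    # Hypothetical log: time from trigger event to agent firing.
    # None means the agent never fired.
    CRITICAL_WINDOW = timedelta(minutes=10)
    latencies = [timedelta(minutes=2), timedelta(minutes=45), None,
                 timedelta(minutes=8)]

    def fires_in_time(latencies, window):
        # A fire after the window closes counts as a miss: functionally
        # it is the same as never firing.
        return sum(1 for t in latencies if t is not None and t <= window)

    binary_rate = sum(t is not None for t in latencies) / len(latencies)
    timely_rate = fires_in_time(latencies, CRITICAL_WINDOW) / len(latencies)
    print(f"fired at all: {binary_rate:.0%}")   # 75% -- the binary trap
    print(f"fired in time: {timely_rate:.0%}")  # 50% -- the real number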
The subtlest failure is mistaking the feeling of monitoring for the value of monitoring. Checking your analytics dashboard, reviewing your habit tracker, scrolling your fitness stats — these activities feel productive because they involve data about your performance. But feeling informed is not the same as acting on information, and the value of monitoring lies entirely in the actions it changes.
Automating monitoring without defining what constitutes a meaningful signal. This produces the alert fatigue problem: the system generates so many notifications — most of them irrelevant or low-severity — that you begin ignoring all of them. The monitoring is technically automated, but it has stopped carrying signal.
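One way to picture the fix is a severity gate; the alert stream and the threshold below are made-up examples, and the point is the shape, not the values:

    # Hypothetical alert stream. Automation is only useful if it suppresses
    # what you would otherwise ignore.
    ALERTS = [
        {"msg": "disk 41% full", "severity": 1},
        {"msg": "agent missed morning trigger", "severity": 3},
        {"msg": "dashboard refreshed", "severity": 0},
        {"msg": "3-day decline in completion rate", "severity": 4},
    ]

    NOTIFY_THRESHOLD = 3  # tune until most alerts you receive change your behavior

    for alert in ALERTS:
        if alert["severity"] >= NOTIFY_THRESHOLD:
            print("NOTIFY:", alert["msg"])  # rare enough to actually be read
        # everything else is logged, not pushed: visible on demand, silent by default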
Treating the journal as a diary rather than a monitoring instrument. The most common failure is writing narrative entries about how you feel without structured observation of specific agents and their performance metrics. A diary says 'Today was stressful and I did not get much done.' A monitoring entry says which agent underperformed, by how much, and under what conditions.
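A sketch of the structural difference; the schema below is one hypothetical way to make entries aggregable, not a required format:

    from dataclasses import dataclass

    # A monitoring-style entry: structured observations about a named agent,
    # not narrative about mood.
    @dataclass
    class JournalEntry:
        date: str
        agent: str         # which system is being observed
        fired: bool        # did it activate when it should have?
        outcome_met: bool  # did the activation produce the intended result?
        note: str          # one specific, checkable observation

    diary_style = "Today was stressful and I did not get much done."
    monitoring_style = JournalEntry(
        date="2024-05-03",
        agent="morning-routine",
        fired=True,
        outcome_met=False,
        note="Fired on time but ran 40 min over; the reading step absorbed the slack.",
    )
    print(diary_style)       # a feeling; nothing to act on
    print(monitoring_style)  # a data point you can aggregate and trend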
Confusing accountability with punishment. The monitoring-accountability loop works because measurement creates ownership — you see the data, you feel responsible, you adjust. But many people corrupt this loop by treating monitoring data as evidence for self-prosecution. A missed day becomes proof of personal failure rather than a data point about the system, and the loop breaks.
Checking current status and calling it monitoring. You open the dashboard, see that today's number looks fine, and close the dashboard satisfied. You have committed the point-in-time fallacy: treating a single observation as evidence that the system is healthy. A patient whose blood pressure reads normal at a single visit is not thereby healthy; health, like system health, lives in the trend.
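The trend-over-snapshot distinction fits in a few lines; the weekly readings and the 0.75 pass mark are assumed numbers for illustration:

    from statistics import mean

    readings = [0.92, 0.90, 0.86, 0.83, 0.81, 0.78]  # completion rate by week

    snapshot_ok = readings[-1] > 0.75                 # today's reading looks "fine"
    trend = mean(readings[-3:]) - mean(readings[:3])  # recent vs earlier average

    print(f"snapshot healthy: {snapshot_ok}")  # True -- the point-in-time fallacy
    print(f"trend: {trend:+.2f}")              # -0.09 -- the system is degrading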
Adding more monitoring to fix missed signals. When you notice that something slipped through your monitoring, the instinct is to add another dashboard, another notification, another daily check. But the reason you missed the signal was not insufficient data — it was attentional saturation. Adding more inputs to an already saturated attention budget makes the next miss more likely, not less.
Comparing agents on a single metric and declaring a winner. One agent may score higher on throughput but lower on sustainability. Another may look worse this week but was operating under unusual conditions. The failure is premature convergence — collapsing a multi-dimensional comparison into a single ranking before you understand what each dimension is telling you.
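A sketch of keeping the comparison multi-dimensional, using a standard dominance check; the agents, metrics, and scores are invented for the example:

    # Declare a winner only if one agent is at least as good on every
    # dimension and strictly better on at least one (dominance).
    agents = {
        "A": {"throughput": 0.9, "sustainability": 0.4, "accuracy": 0.7},
        "B": {"throughput": 0.6, "sustainability": 0.8, "accuracy": 0.8},
    }

    def dominates(x, y):
        return all(x[k] >= y[k] for k in x) and any(x[k] > y[k] for k in x)

    a, b = agents["A"], agents["B"]
    if dominates(a, b) or dominates(b, a):
        print("clear winner")
    else:
        print("no dominant agent -- a single-metric ranking would be premature")

Here A wins on throughput and B on everything else, so any single ranking is a choice of weights, not a fact about the agents.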
Optimizing the metric instead of optimizing the system. Goodhart's Law warns that when a measure becomes a target, it ceases to be a good measure. If your morning-routine agent is measured by 'number of tasks completed before 9 AM,' you can optimize that number by splitting large tasks into trivial fragments: the count rises while the routine itself gets no better.
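The gaming is plain arithmetic; the task lists below are invented to show the same work recounted:

    real_tasks = ["plan day", "write report", "review inbox"]
    gamed_tasks = ["plan day", "outline report", "draft report",
                   "proofread report", "open inbox", "read inbox", "sort inbox"]

    print(len(real_tasks), "tasks before 9 AM")   # 3 -- honest count
    print(len(gamed_tasks), "tasks before 9 AM")  # 7 -- same work, better-looking metric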
Treating monitoring as a passive observation activity rather than an active component of a feedback loop. You collect data, review dashboards, notice trends — and then do nothing differently. This is surveillance, not monitoring. True monitoring feeds back: the data changes behavior, the behavior changes the data, and the system converges on the outcome you actually want.
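A minimal closed-loop sketch, with invented numbers and a simple proportional adjustment standing in for whatever correction you would actually make:

    target = 0.8
    behavior = 0.5  # e.g., fraction of mornings the routine completes

    for week in range(1, 5):
        measured = behavior              # observe (this is where surveillance stops)
        error = target - measured        # compare against the outcome you want
        behavior += 0.5 * error          # adjust: the data changes behavior
        print(f"week {week}: measured {measured:.2f} -> adjusted to {behavior:.2f}")
    # Monitoring closes the loop with the adjust step; without it,
    # the same data just gets observed again next week.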
Optimizing without data — making changes based on how a system feels rather than how it measurably performs. This is the most common and most destructive optimization failure. It looks like productivity because you are making changes and feeling proactive. But without data, you are not optimizing; you are guessing.
The most common failure is optimizing what is visible rather than what is constraining. The step that annoys you most, the step that feels slowest, the step where you have the most expertise — these are the steps that attract optimization effort. But annoyance, subjective slowness, and expertise are not evidence of constraint; only measurement reveals which step actually limits the whole system.
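A sketch of letting measurement, not feeling, pick the target; the steps, times, and annoyance scores are hypothetical:

    # The most annoying step and the slowest step are often different steps.
    steps = [
        {"name": "email triage",  "measured_min": 12, "annoyance": 9},
        {"name": "deep work",     "measured_min": 95, "annoyance": 2},
        {"name": "status update", "measured_min": 18, "annoyance": 8},
    ]

    by_feel = max(steps, key=lambda s: s["annoyance"])
    by_data = max(steps, key=lambda s: s["measured_min"])
    print("optimization effort gravitates to:", by_feel["name"])  # email triage
    print("the actual constraint is:", by_data["name"])           # deep work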
Mistaking motion for improvement. The compounding effect depends on each change being a genuine improvement — a measurable reduction in friction, error, time, or effort. If your daily changes are lateral moves rather than upward moves — reorganizing without simplifying, changing without measuring — then there is nothing to compound, and a year of daily effort multiplies out to no improvement at all.
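The arithmetic behind the compounding claim, using the familiar 1%-a-day figure as an illustrative assumption:

    # A genuine 1% daily improvement compounds; lateral motion does not.
    genuine = 1.01 ** 365  # each change is a real, measured improvement
    lateral = 1.00 ** 365  # each change is motion without improvement

    print(f"1% better daily for a year: {genuine:.1f}x")  # ~37.8x
    print(f"lateral moves daily:        {lateral:.1f}x")  # 1.0x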