The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
When analyzing successes, classify each contributing factor as replicable (within your control) versus contingent (dependent on external circumstances), then extract operational principles only from the replicable factors to avoid attributing controllable outcomes to luck.
Treat energy depletion patterns as leading indicators of burnout rather than productivity decline, because emotional exhaustion precedes output degradation and is only visible through energy/emotion tracking, not task completion metrics.
Design review environments to ensure privacy from external observers, because the knowledge that others might see your reflections activates social self-presentation concerns that systematically distort the honesty of self-assessment toward socially desirable narratives.
Match vulnerable reflections to appropriate containers — share with trusted partners who have psychological safety, relevant context, no competing interests, and proven trustworthiness.
Treat avoidance in reflection as diagnostic data pointing toward the topics most important to your development, not as personal failure to overcome.
Categorize tracked activities by cognitive demand type rather than clock time or task label to reveal attention allocation patterns invisible to narrative memory.
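A minimal sketch of this regrouping, assuming a hypothetical mapping from task labels to demand types (the `DEMAND_TYPE` table and its categories are illustrative, not from the source):

```python
from collections import Counter

# Illustrative, assumed mapping from task labels to cognitive demand types.
DEMAND_TYPE = {
    "code review": "analytical",
    "email": "reactive",
    "design doc": "generative",
    "standup": "coordination",
}

def attention_profile(log: list[tuple[str, int]]) -> Counter:
    """Sum minutes per cognitive demand type instead of per task label."""
    profile = Counter()
    for activity, minutes in log:
        profile[DEMAND_TYPE.get(activity, "uncategorized")] += minutes
    return profile
```

The same time log, summed by demand type rather than task name, surfaces patterns (e.g. a day dominated by reactive work) that narrative memory misses.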
Make undiscussable topics discussable by naming the avoidance explicitly before attempting to engage with the avoided content.
Shift focus from event reflection to system reflection when reviews become repetitive — when current questions produce no new insight, question the level of analysis rather than abandoning the practice.
Focus quarterly reviews on systems and assumptions rather than actions — examine the mental models, beliefs, and structural patterns that generate behavior, not just the behavior itself.
Design tool stack architecture with explicit data flows, defined boundaries, and clear integration points rather than accumulating tools as isolated utilities.
Choose tools whose interface complexity matches the complexity of your need — tools that are too simple force workarounds while tools that are too complex impose unnecessary cognitive overhead.
Designate exactly one authoritative write location for each information type to prevent sync drift and the cognitive overhead of reconciling contradictory copies.
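One way to make the single-write-location rule mechanical is a small registry that refuses writes for any information type without a declared home. This is a hypothetical sketch; the type names and paths are assumptions:

```python
# Assumed registry: each information type gets exactly one write location.
AUTHORITATIVE = {
    "tasks": "todo.md",
    "contacts": "contacts.csv",
    "meeting-notes": "notes/",
}

def write_target(info_type: str) -> str:
    """Return the single location this information type may be written to."""
    try:
        return AUTHORITATIVE[info_type]
    except KeyError:
        raise ValueError(f"no authoritative location declared for {info_type!r}")
```

Routing every write through one lookup means a second copy can never silently become a competing source of truth.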
Evaluate tools primarily on whether they reliably perform your specific job-to-be-done, not on feature count or theoretical capabilities you may never use.
Choose tools that store data in open formats (Markdown, CSV, JSON) rather than proprietary formats to preserve data portability and reduce migration risk.
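The portability claim can be checked directly with the standard library: data held as JSON or CSV round-trips without the original tool. A minimal sketch with illustrative records:

```python
import csv
import io
import json

# Illustrative records; the fields are assumptions for the example.
notes = [{"date": "2024-01-05", "topic": "review", "energy": "3"}]

# JSON round-trip: any future tool that reads JSON can consume this.
blob = json.dumps(notes)
restored = json.loads(blob)

# The same records as CSV, for tools that prefer tabular input.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["date", "topic", "energy"])
writer.writeheader()
writer.writerows(notes)
```

A proprietary format offers no equivalent guarantee: reading it back requires the vendor's software to still exist and still open your files.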
Select tools based on how well they integrate with your existing stack rather than evaluating each tool in isolation, because integration quality determines system-level effectiveness more than individual tool quality.
Push past initial tool competence into proficiency by deliberately practicing capabilities that would expand your effectiveness rather than repeatedly using only the subset you already know.
Train attentional control through repeated detection-and-return cycles rather than attempting to prevent mind-wandering entirely.
Maintain exportable backups of your data in portable formats independent of whether you plan to migrate, to reduce future switching costs.
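A backup routine of this kind can be a few lines; the sketch below copies a notes directory to a dated archive using only the standard library (the directory names are hypothetical):

```python
import datetime
import pathlib
import shutil

def export_backup(source: str, backup_root: str) -> pathlib.Path:
    """Copy a data directory to a dated archive under backup_root."""
    stamp = datetime.date.today().isoformat()
    dest = pathlib.Path(backup_root) / f"notes-{stamp}"
    shutil.copytree(source, dest)  # fails loudly if today's archive exists
    return dest
```

Run on a schedule, this keeps a migration-ready copy of your data even when no migration is planned, which is exactly what makes future switching cheap.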
Evaluate tool interoperability as a primary selection criterion rather than feature richness alone, as disconnected best-in-class tools require you to be the integration layer.
Consolidate functions into existing tools at 80% effectiveness rather than adding specialized tools at 100% effectiveness when the switching cost exceeds the capability gap.
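The decision rule above reduces to one comparison. A sketch, assuming effectiveness and switching cost are scored on the same 0-to-1 scale (the scale is an assumption for illustration):

```python
def should_consolidate(existing_effectiveness: float,
                       specialist_effectiveness: float,
                       switching_cost: float) -> bool:
    """Prefer the existing tool when switching cost exceeds the
    capability gap; all inputs on a common 0-1 scale (assumed)."""
    capability_gap = specialist_effectiveness - existing_effectiveness
    return switching_cost > capability_gap
```

With an existing tool at 0.8 and a specialist at 1.0, any switching cost above 0.2 argues for consolidation.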
Set defined trial periods with explicit success criteria for new tools, including a scheduled exit date, to prevent permanent accumulation of trial tools.
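The trial structure can be made explicit as a record with a scheduled exit date and named criteria. A hypothetical sketch (field names and verdict strings are assumptions):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ToolTrial:
    name: str
    exit_date: date
    success_criteria: list[str]
    criteria_met: set[str]

    def verdict(self, today: date) -> str:
        """Before the exit date the trial continues; on or after it,
        the tool is adopted only if every criterion was met."""
        if today < self.exit_date:
            return "in trial"
        met_all = set(self.success_criteria) <= self.criteria_met
        return "adopt" if met_all else "drop"
```

Because the exit date is fixed when the trial starts, a tool that never proves itself gets dropped by default instead of lingering in the stack.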
Build custom tools when usage frequency is high, requirements are highly specific to your workflow, and your skills permit maintenance you will not resent.
Buy commercial tools when requirements are standard across thousands of users, even if they serve your needs imperfectly, unless the imperfection creates friction in high-frequency workflows.
Treat the act of building a tool as a form of externalization that reveals implicit steps in your workflow, making the tool itself secondary to the understanding gained.