The irreducible epistemic atoms underlying the curriculum: 2,888 atoms across 3 types and 2 molecules.
Favor verb-based and pattern-based tags (#deciding, #recurring-blocker) over abstract category tags (#productivity, #management) to capture actionable relationships.
When a tag appears on only one note, delete it during review; when a tag connects five notes from three different months, keep it: it has earned its maintenance cost.
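The pruning rule above reduces to two thresholds. A minimal sketch, assuming each tag's usage is available as (note_id, date) pairs; the 5-note and 3-month cutoffs come from the rule itself:

```python
from datetime import date

def tag_verdict(notes):
    """Decide whether a tag earns its keep.

    `notes` is the list of (note_id, date) pairs the tag appears on.
    Returns "delete" for single-note tags, "preserve" when the tag
    connects at least five notes spanning at least three distinct
    months, and "review" for everything in between.
    """
    if len(notes) <= 1:
        return "delete"
    months = {(d.year, d.month) for _, d in notes}
    if len(notes) >= 5 and len(months) >= 3:
        return "preserve"
    return "review"
```

Running this over every tag during review turns the judgment call into a short worklist: only the "review" cases need human attention.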
When attempting to structure an argument or presentation, gather existing atomic notes on the topic first, then arrange them into a sequence that produces a natural train of thought, rather than starting with an outline.
Present AI systems with your atomic notes and ask for multiple possible sequences (chronological, causal, problem-solution) rather than asking for a single best structure, using AI to discover sequences rather than impose them.
When a note exceeds 800 words or covers three distinct topics, decompose it into 2-4 separate atomic notes and rewrite the connections between them to reveal causal chains invisible in the original structure.
When splitting a compound note during refactoring, make explicit decisions about which idea is the core claim, what was supporting evidence versus separate argument, and how the pieces causally relate before completing the split.
Use AI to audit your knowledge base for structural debt (compound notes, duplicates, orphans, broken connections) but perform the actual refactoring decisions yourself to gain the cognitive benefit.
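The audit side of this split is mechanical and easy to delegate. A sketch of the detector half, assuming notes are a dict of id to body text and links are (src, dst) pairs; the 800-word compound threshold is the one stated above, and the refactoring decisions stay with the human:

```python
def audit(notes, links):
    """Flag structural debt; do not fix it.

    Reports compound notes (over 800 words), orphans (no links in
    or out), and broken links (destination note missing).
    """
    report = {"compound": [], "orphan": [], "broken": []}
    linked = set()
    for src, dst in links:
        linked.update((src, dst))
        if dst not in notes:
            report["broken"].append((src, dst))
    for nid, body in notes.items():
        if len(body.split()) > 800:
            report["compound"].append(nid)
        if nid not in linked:
            report["orphan"].append(nid)
    return report
```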
When refactoring reveals that notes in a sequence jump or break, treat those gaps as specifications for new atoms to write rather than as sequence failures.
When unable to determine if a note contains one idea or two, write it as-is during capture, then return during a dedicated review session to attempt decomposition without the pressure of real-time capture.
Each time you review or link a note, make one small improvement (sharpen title, add missing context, split tangled claim) rather than scheduling separate cleanup sessions.
Use voice capture for thoughts occurring during movement, driving, or exercise, as speaking is 3-4x faster than mobile typing and preserves complete thought structure before decay.
Design capture tools to minimize decisions between intent and recording, targeting one gesture or keystroke from any application context.
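The whole capture path can be this small. A sketch, assuming a single append-only inbox file (the `inbox.txt` path is illustrative); bind the function to a global hotkey or shell alias so capture is one gesture, with no filing decisions at capture time:

```python
import time
import pathlib

def capture(thought, inbox=pathlib.Path("inbox.txt")):
    """Append a timestamped thought to the inbox and return.

    No tags, no folders, no prompts: every decision deferred
    to the review session keeps the path from intent to
    recording at one step.
    """
    stamp = time.strftime("%Y-%m-%d %H:%M")
    with open(inbox, "a", encoding="utf-8") as f:
        f.write(f"{stamp}\t{thought}\n")
```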
Start with one capture trigger anchored to your most natural existing habit, and run it for at least six weeks before adding a second.
Track which inbox items actually required real-time response rather than batch-window response to gather evidence about whether continuous processing is truly necessary for your role.
When a thought triggers resistance to capture (a 'flinch' away from writing it down), use that resistance feeling as the capture trigger rather than a reason to skip—thoughts that produce hesitation are the highest-value capture targets.
During conversations where power dynamics make visible note-taking signal service role rather than equal participation, defer capture until immediately after the conversation ends—step outside within 2 minutes and externalize the three most important points while still in short-term memory.
Index only processed permanent notes in AI-searchable systems while keeping unprocessed inbox captures outside retrieval scope, because AI systems cannot distinguish epistemic status and will retrieve raw captures with equal confidence to verified knowledge.
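Enforcing this boundary is a one-line filter applied before anything reaches the embedding or indexing step. A sketch, assuming each note carries a `status` field (the status names are a hypothetical schema, not from the source):

```python
def indexable(notes):
    """Return only notes whose epistemic status marks them as processed.

    Anything still in "inbox" or "draft" stays out of the retrieval
    corpus, so the AI never quotes raw captures with the same
    confidence as verified knowledge.
    """
    PROCESSED = {"permanent", "evergreen"}
    return [n for n in notes if n.get("status") in PROCESSED]
```

The design choice is to filter at indexing time rather than at query time: a raw capture that never enters the corpus can never be retrieved by mistake.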
During weekly reviews, ask four metacognitive questions—what did I capture well, what did I almost lose, where did I over-capture noise, and what am I avoiding—to monitor system health rather than just processing lists.
For analog captures intended for long-term use, implement a pipeline that photographs or transcribes key entries into digital storage during weekly review—preserving handwriting's cognitive benefits during capture while enabling digital searchability and AI-readability for retrieval.
When using AI meeting transcription, continue taking personal compressed notes during the conversation rather than relying solely on transcripts—then review AI summary against your notes afterward to identify gaps and improve real-time capture calibration.
Classify each skipped capture by checking whether failure frequency is random across topics (friction problem requiring tool improvement) or clustered in specific domains (resistance problem revealing avoidance patterns).
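The friction-versus-resistance diagnosis above is a question about how concentrated the failures are. A minimal sketch, assuming each skipped capture is logged with a topic label; the 50% concentration threshold is an assumed heuristic, not from the source:

```python
from collections import Counter

def diagnose_skips(skips, threshold=0.5):
    """Classify capture failures as friction vs resistance.

    `skips` is a list of topic labels, one per skipped capture.
    If more than `threshold` of the skips fall in a single topic,
    the failures cluster (an avoidance pattern); otherwise treat
    them as random friction to fix with better tooling.
    """
    if not skips:
        return "no data"
    counts = Counter(skips)
    top_share = counts.most_common(1)[0][1] / len(skips)
    return "resistance" if top_share > threshold else "friction"
```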
For each thought you resist capturing, immediately ask and write down 'What would become true if I had written this down?'—usually revealing that externalization would trigger a downstream commitment chain you're avoiding.
Defer emotional interpretation to review sessions when multiple entries enable pattern recognition, rather than explaining emotions during initial capture.
For each captured surprise, write one sentence answering 'What did I apparently believe that turned out to be wrong?' to convert observations into explicit model gaps.