The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
Feed AI only your signal-tagged thoughts rather than your unfiltered mental stream, because amplifying a noise-plus-signal stream with compute yields amplified noise rather than useful pattern detection.
When a note contains multiple ideas connected by 'and' or 'also,' create separate notes—one per idea—with explicit links between them, rather than allowing compound ideas to remain fused in a single container.
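A minimal sketch of this splitting move, assuming notes are plain strings and `n42` is a hypothetical note ID. The regex on connectives is a crude stand-in for judging where one idea ends and another begins:

```python
import re

def split_compound_note(note_id: str, text: str):
    """Split a note on coordinating connectives into separate,
    mutually linked notes -- one idea per note."""
    # ' and ' / ' also ' as rough compound-idea markers (a heuristic only).
    parts = [p.strip() for p in re.split(r"\band\b|\balso\b", text) if p.strip()]
    notes = {f"{note_id}.{i}": part for i, part in enumerate(parts, 1)}
    # Explicitly link every split-off note to its siblings.
    links = {nid: [other for other in notes if other != nid] for nid in notes}
    return notes, links

notes, links = split_compound_note(
    "n42", "Atomic notes improve retrieval and they reduce context confusion"
)
```

The point is the output shape: each idea gets its own container, and the relationship that used to be implicit in "and" becomes an explicit link.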
Assign a unique identifier to every note before writing any content, treating the addressing decision as the first step that enables all subsequent linking and referencing.
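One way to sketch ID-first creation, assuming a timestamp-plus-counter scheme (the `NoteStore` class and its ID format are illustrative, not prescribed by any particular tool):

```python
from datetime import datetime, timezone

class NoteStore:
    """ID-first note creation: the identifier is minted before any
    content exists, so links can target the note immediately."""
    def __init__(self):
        self.notes = {}
        self._counter = 0

    def mint_id(self) -> str:
        # Timestamp plus counter keeps IDs unique and sortable.
        self._counter += 1
        stamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
        return f"{stamp}-{self._counter:04d}"

    def create(self, content: str = "") -> str:
        note_id = self.mint_id()        # step 1: the address
        self.notes[note_id] = content   # step 2: content (may still be empty)
        return note_id

store = NoteStore()
nid = store.create()                    # addressable before it has content
store.notes[nid] = "Content arrives after the address."
```

The design choice worth noticing: `create` can succeed with empty content, because the address, not the text, is what everything else depends on.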
Apply the 'link test' by checking whether all links from a note feel relevant to the entire note—if some links connect only to parts, the note contains multiple units requiring separation.
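The link test can be made mechanical if you record, per link, which parts of the note it actually touches. A sketch under that assumption (the link names and part indices are hypothetical):

```python
def link_test(num_parts: int, link_relevance: dict) -> list:
    """Return links that connect to only part of the note.
    Any hit means the note fuses multiple units and should be split."""
    whole = set(range(num_parts))
    return [link for link, parts in link_relevance.items() if parts != whole]

partial = link_test(
    num_parts=2,
    link_relevance={
        "spaced-repetition": {0, 1},  # relevant to the entire note
        "gpu-pricing": {1},           # only touches the second idea
    },
)
```

A non-empty result is the signal: the note contains more than one unit.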
Store evidence with full methodological metadata (sample size, control conditions, limitations) as independent nodes rather than as decorative citations on claims, to enable proportionality assessment and multi-argument reuse.
Before forcing resolution of contradictory observations or beliefs, accumulate multiple instances in a contradiction log to enable pattern detection impossible from individual contradictions.
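A contradiction log could be as simple as an append-only list with a recurrence query; the threshold of three and the example topics below are assumptions for illustration:

```python
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class Contradiction:
    topic: str
    observation_a: str
    observation_b: str
    context: str

@dataclass
class ContradictionLog:
    """Accumulate contradictions instead of resolving each one;
    patterns only appear once several instances share a topic."""
    entries: list = field(default_factory=list)

    def record(self, c: Contradiction):
        self.entries.append(c)

    def recurring_topics(self, min_count: int = 3) -> list:
        counts = Counter(e.topic for e in self.entries)
        return [t for t, n in counts.items() if n >= min_count]

log = ContradictionLog()
for ctx in ("standup", "retro", "1:1"):
    log.record(Contradiction("estimates", "team says 2 weeks",
                             "history says 6 weeks", ctx))
```

A single entry tells you nothing; three entries sharing a topic are a pattern no individual resolution would have surfaced.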
When AI retrieval quality degrades despite good source material, diagnose whether notes are self-contained units or fragments requiring external context, because fragmentation produces context confusion that corrupts AI reasoning.
Match note granularity to retrieval frequency and question complexity: create fine-grained atomic notes (single claims) for domains where you need precise retrieval, and coarser aggregated notes for domains where you need high-level orientation.
When using AI systems with your knowledge base, maintain multiple granularity levels of the same material (fine-grained for precise retrieval, coarse-grained for contextual reasoning) rather than forcing a single chunk size, because different query types require different resolutions.
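Maintaining both resolutions can be sketched as indexing the same source twice with different chunk sizes. The sentence-splitting chunker below is a naive stand-in for whatever splitter your retrieval pipeline actually uses:

```python
def chunk(text: str, sentences_per_chunk: int) -> list:
    """Naive sentence-based chunker -- illustrative only."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return [
        ". ".join(sentences[i:i + sentences_per_chunk]) + "."
        for i in range(0, len(sentences), sentences_per_chunk)
    ]

source = ("Atomic notes aid retrieval. Coarse notes aid orientation. "
          "Different queries need different resolutions. Keep both levels.")

index = {
    "fine": chunk(source, 1),    # one claim per chunk: precise retrieval
    "coarse": chunk(source, 4),  # whole passage: contextual reasoning
}
```

A precise factual query searches the fine index; a "what do I think about X" query searches the coarse one.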
Store well-formed questions as first-class atoms in your knowledge system with the same structural treatment (unique identifiers, bidirectional links, metadata) as claims and answers, because questions organize attention and generate persistent search filters.
When a question receives a partial answer, preserve the original question as a persistent atom and link the answer to it rather than replacing the question, creating a visible record of how understanding evolves from open inquiry to accumulated evidence.
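Both principles reduce to one structural rule: a question is an `Atom` like any other, and answers link to it rather than overwrite it. A sketch with hypothetical IDs (`q7`, `a19`):

```python
from dataclasses import dataclass, field

@dataclass
class Atom:
    """Questions get the same structure as claims: an ID,
    bidirectional links, and metadata."""
    atom_id: str
    kind: str                 # "question", "claim", "answer", ...
    text: str
    links: list = field(default_factory=list)
    metadata: dict = field(default_factory=dict)

def attach_answer(question: Atom, answer: Atom):
    # The question persists; the answer links to it, never replaces it.
    question.links.append(answer.atom_id)
    answer.links.append(question.atom_id)

q = Atom("q7", "question", "Why does retrieval degrade on long notes?")
a1 = Atom("a19", "answer", "Fragments lose the context they depend on.")
attach_answer(q, a1)
```

Each new partial answer appends another link, so the question's link list becomes the visible record of how understanding accumulated.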
When using AI to analyze accumulated evidence around an open question, provide the constellation of linked notes (question + partial answers + contradictions + gaps) as context rather than asking the AI to answer from scratch, because the accumulated context enables pattern recognition your cold query cannot access.
For every high-stakes term in your reasoning (quality, success, productive, fair), write an operational definition specifying observable conditions that must be true for the term to apply, then store that definition as a canonical reference atom in your knowledge system.
Feed your operational definitions to AI systems as explicit context before generating analysis or recommendations, treating your personal glossary as the translation layer between the model's probability-weighted semantics and your specific conceptual framework.
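The translation layer can be as literal as prepending definitions to the prompt. The glossary entries and prompt shape below are illustrative assumptions, not a fixed schema:

```python
GLOSSARY = {
    # Operational definitions: observable conditions, not vibes.
    "quality": "all acceptance tests pass AND no open P1 bugs",
    "productive": ">= 2 hours of uninterrupted deep work logged",
}

def build_prompt(task: str, terms: list) -> str:
    """Prepend canonical definitions so the model reasons with your
    semantics rather than its probability-weighted defaults."""
    defs = "\n".join(f"- {t}: {GLOSSARY[t]}" for t in terms if t in GLOSSARY)
    return f"Definitions (use these exactly):\n{defs}\n\nTask: {task}"

prompt = build_prompt("Assess last sprint's quality.", ["quality"])
```

The definitions travel with every query, so "quality" means the same observable thing across sessions and models.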
When encountering the same insight expressed in three or more separate notes across different contexts, extract the shared structural pattern into a single canonical note with a precise name, then replace the duplicate instances with links to the canonical abstraction.
When considering whether to merge two similar notes, test whether the underlying structure is identical (same entities, same relationships, same claims) rather than whether the vocabulary overlaps, because structural identity warrants abstraction while surface similarity does not.
Run semantic similarity searches against your existing notes when creating new notes to detect conceptual duplication hidden behind different vocabulary, treating AI-surfaced matches as candidates for potential abstraction or cross-linking.
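A structural sketch of the search step. Note the big caveat: real duplicate detection across different vocabulary needs an embedding model; the bag-of-words cosine below only catches shared wording and stands in for the embedding call:

```python
import math
from collections import Counter

def cosine(a: str, b: str) -> float:
    """Bag-of-words cosine similarity -- a crude stand-in for the
    embedding similarity a real semantic search would compute."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

def duplicate_candidates(new_note: str, existing: dict,
                         threshold: float = 0.5) -> list:
    """Surface existing notes similar enough to be merge/link candidates."""
    return [nid for nid, text in existing.items()
            if cosine(new_note, text) >= threshold]

hits = duplicate_candidates(
    "compound notes should be split into atomic notes",
    {"n1": "split compound notes into atomic notes",
     "n2": "tag notes with verbs"},
)
```

The matches are candidates only: each hit still needs the human judgment of whether to abstract, cross-link, or leave separate.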
Create cross-domain links between notes from different topic clusters rather than only within-cluster links, because weak ties that bridge disparate domains generate more surprising insights than strong ties that reinforce existing knowledge clusters.
When a note has accumulated multiple backlinks from different contexts, review those backlinks as a discovery mechanism to identify emergent patterns and connections your original authorship did not anticipate, treating the backlink panel as a serendipity engine.
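The backlink panel is just the inverted forward-link map, which makes the serendipity mechanic easy to see in code (the note names below are hypothetical):

```python
from collections import defaultdict

def backlink_index(forward_links: dict) -> dict:
    """Invert the forward-link map: for each note, who points at it.
    Notes with backlinks from many unrelated contexts are pattern
    candidates the original author never planned."""
    back = defaultdict(list)
    for src, targets in forward_links.items():
        for tgt in targets:
            back[tgt].append(src)
    return dict(back)

back = backlink_index({
    "habits": ["feedback-loops"],
    "pricing": ["feedback-loops"],
    "thermostats": ["feedback-loops"],
})
# "feedback-loops" has accumulated backlinks from three unrelated clusters --
# exactly the emergent convergence the review is meant to catch.
```

None of the three source notes knows about the others; only the inverted view reveals that they converge.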
When building knowledge systems that will interface with AI, treat every link you create as infrastructure that future graph traversal algorithms will follow, prioritizing explicit relationship encoding over implicit semantic similarity because GraphRAG systems require edges to perform multi-hop reasoning.
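Why explicit edges matter becomes concrete when you write the traversal: a GraphRAG-style multi-hop walk is just BFS over the edges you encoded, and nothing else. A minimal sketch with hypothetical note names:

```python
from collections import deque

def multi_hop(edges: dict, start: str, max_hops: int) -> set:
    """BFS over explicit edges. Without encoded edges there is
    nothing for a multi-hop retrieval step to follow."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return seen - {start}

reachable = multi_hop(
    {"spacing-effect": ["retrieval-practice"],
     "retrieval-practice": ["testing-effect"]},
    "spacing-effect", max_hops=2,
)
```

Two notes that are semantically similar but unlinked are invisible to this walk; the link you typed is the only road.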
Tag notes with 1-3 keywords answering 'If I had this insight again in a different context, what word would I search for?' rather than building taxonomies before you have enough atoms.
Favor verb-based and pattern-based tags (#deciding, #recurring-blocker) over abstract category tags (#productivity, #management) to capture actionable relationships.
When a tag appears on only one note, delete it during review; when a tag connects five notes from three different months, preserve it as earning its maintenance cost.
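The review rule is precise enough to run as a script. A sketch assuming tag usage is recorded as (note ID, date) pairs; the middle "review" verdict for tags that fit neither rule is my addition:

```python
from datetime import date

def tag_review(tag_usage: dict) -> dict:
    """Apply the review rule: singleton tags are deleted; tags linking
    5+ notes across 3+ distinct months have earned their keep."""
    verdicts = {}
    for tag, uses in tag_usage.items():
        months = {(d.year, d.month) for _, d in uses}
        if len(uses) == 1:
            verdicts[tag] = "delete"
        elif len(uses) >= 5 and len(months) >= 3:
            verdicts[tag] = "keep"
        else:
            verdicts[tag] = "review"
    return verdicts

verdicts = tag_review({
    "#one-off": [("n1", date(2024, 1, 5))],
    "#recurring-blocker": [("n2", date(2024, 1, 9)), ("n3", date(2024, 2, 2)),
                           ("n4", date(2024, 2, 20)), ("n5", date(2024, 3, 1)),
                           ("n6", date(2024, 3, 14))],
})
```

The month-spread check is what distinguishes a genuinely recurring pattern from a burst of enthusiasm in a single week.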
When attempting to structure an argument or presentation, gather existing atomic notes on the topic first, then arrange them into a sequence that produces a natural train of thought, rather than starting with an outline.