The irreducible epistemic atoms underlying the curriculum: 2,888 atoms across 3 types and 2 molecules.
When someone challenges one part of your compound plan and you defend the whole thing, treat this as a diagnostic signal that you're still operating on fused ideas rather than independent assumptions.
When presenting compound statements to AI systems, explicitly ask for assumption enumeration rather than direct answers, then critically verify the decomposition's completeness since the AI may introduce its own hidden assumptions.
When unpacking task estimates, decompose complex tasks into component steps before estimating duration; unpacking improves accuracy by forcing visibility of dependencies and transitions that holistic estimation skips.
When you have difficulty naming a concept precisely, treat that difficulty as a diagnostic signal of incomplete understanding that needs further processing, not as a mere labeling problem.
Store evidence with full methodological metadata (sample size, control conditions, limitations) as independent nodes rather than as decorative citations on claims, to enable proportionality assessment and multi-argument reuse.
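A minimal sketch of an evidence node that carries its own methodological metadata and can be reused across arguments. The schema, field names, and the example study are all illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class EvidenceNode:
    """An independent evidence atom, not a decorative citation on a claim."""
    node_id: str
    finding: str
    sample_size: int            # enables proportionality assessment later
    control_condition: str
    limitations: list = field(default_factory=list)
    cited_by: list = field(default_factory=list)  # claim IDs that reuse this node

# Hypothetical evidence entry for illustration only:
ev = EvidenceNode(
    node_id="ev-001",
    finding="Unpacking tasks into steps reduced estimation error",
    sample_size=120,
    control_condition="holistic estimation group",
    limitations=["student sample", "lab tasks only"],
)

# The same node can now back multiple arguments without duplication:
ev.cited_by += ["claim-estimation", "claim-planning"]
```

Because the node stores sample size and limitations with the finding itself, any claim linking to it inherits the material needed to judge how much weight the evidence deserves.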
Before forcing resolution of contradictory observations or beliefs, accumulate multiple instances in a contradiction log to enable pattern detection impossible from individual contradictions.
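A sketch of the contradiction-log idea: append conflicting observations without resolving them, then surface topics where contradictions recur. The entry shape, topics, and threshold are illustrative assumptions.

```python
from collections import defaultdict

contradiction_log = []

def log_contradiction(topic, claim_a, claim_b):
    """Record a conflict instead of forcing immediate resolution."""
    contradiction_log.append({"topic": topic, "a": claim_a, "b": claim_b})

def recurring_topics(min_count=2):
    """Pattern detection only possible once instances accumulate."""
    counts = defaultdict(int)
    for entry in contradiction_log:
        counts[entry["topic"]] += 1
    return [t for t, c in counts.items() if c >= min_count]

# Hypothetical entries:
log_contradiction("sleep", "naps help focus", "naps disrupt night sleep")
log_contradiction("sleep", "8h is optimal", "polyphasic works for some")
log_contradiction("diet", "fasting aids focus", "fasting impairs memory")

hotspots = recurring_topics()  # only 'sleep' has accumulated enough instances
```

No single entry reveals the pattern; the recurrence across entries is what the log makes visible.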
When AI retrieval quality degrades despite good source material, diagnose whether notes are self-contained units or fragments requiring external context, because fragmentation produces context confusion that corrupts AI reasoning.
Match note granularity to retrieval frequency and question complexity: create fine-grained atomic notes (single claims) for domains where you need precise retrieval, and coarser aggregated notes for domains where you need high-level orientation.
When using AI systems with your knowledge base, maintain multiple granularity levels of the same material (fine-grained for precise retrieval, coarse-grained for contextual reasoning) rather than forcing a single chunk size, because different query types require different resolutions.
Store well-formed questions as first-class atoms in your knowledge system with the same structural treatment (unique identifiers, bidirectional links, metadata) as claims and answers, because questions organize attention and generate persistent search filters.
When a question receives a partial answer, preserve the original question as a persistent atom and link the answer to it rather than replacing the question, creating a visible record of how understanding evolves from open inquiry to accumulated evidence.
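The two atoms above can be sketched as a data model in which questions get the same structural treatment as answers, and partial answers link back rather than overwrite. All identifiers and the schema are hypothetical.

```python
atoms = {}

def add_atom(atom_id, kind, text, links=()):
    """Questions, claims, and answers all share one structural treatment."""
    atoms[atom_id] = {"kind": kind, "text": text, "links": list(links)}

add_atom("q-17", "question", "Why does unpacking improve task estimates?")
add_atom("a-03", "answer",
         "Unpacking exposes transitions holistic estimates skip.",
         links=["q-17"])  # the partial answer points at the question

# The question persists after being partially answered, and keeps
# accumulating evidence via incoming links:
answers_to_q17 = [a for a, v in atoms.items() if "q-17" in v["links"]]
```

Because `q-17` is never deleted or replaced, the record of how the inquiry evolved stays queryable.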
When using AI to analyze accumulated evidence around an open question, provide the constellation of linked notes (question + partial answers + contradictions + gaps) as context rather than asking the AI to answer from scratch, because the accumulated context enables pattern recognition your cold query cannot access.
For every high-stakes term in your reasoning (quality, success, productive, fair), write an operational definition specifying observable conditions that must be true for the term to apply, then store that definition as a canonical reference atom in your knowledge system.
When two people, or two parts of your own thinking, use the same term with persistent conflict, pause the debate and conduct a definition audit: have each party write their operational definition independently, then compare. If the definitions diverge, the conflict is definitional, not factual, and should be resolved at the definition level.
Feed your operational definitions to AI systems as explicit context before generating analysis or recommendations, treating your personal glossary as the translation layer between the model's probability-weighted semantics and your specific conceptual framework.
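A minimal sketch of the glossary-as-translation-layer idea: operational definitions are prepended verbatim to the task so the model reasons with your terms rather than its defaults. The glossary entries and prompt format are assumptions; adapt them to your actual AI interface.

```python
# Hypothetical personal glossary of operational definitions:
glossary = {
    "productive": "at least one planned atom created or revised per session",
    "quality": "claim is linked to evidence with stated sample size and limits",
}

def with_definitions(task):
    """Prefix a task with the canonical definitions it depends on."""
    defs = "\n".join(f"- '{term}' means: {d}" for term, d in glossary.items())
    return f"Use these operational definitions:\n{defs}\n\nTask: {task}"

prompt = with_definitions("Rate last week's notes for quality.")
```

The definitions travel with every request, so "quality" in the model's answer is anchored to your observable conditions, not to its probability-weighted average meaning.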
When encountering the same insight expressed in three or more separate notes across different contexts, extract the shared structural pattern into a single canonical note with a precise name, then replace the duplicate instances with links to the canonical abstraction.
When considering whether to merge two similar notes, test whether the underlying structure is identical (same entities, same relationships, same claims) rather than whether the vocabulary overlaps, because structural identity warrants abstraction while surface similarity does not.
Run semantic similarity searches against your existing notes when creating new notes to detect conceptual duplication hidden behind different vocabulary, treating AI-surfaced matches as candidates for potential abstraction or cross-linking.
Create cross-domain links between notes from different topic clusters rather than only within-cluster links, because weak ties that bridge disparate domains generate more surprising insights than strong ties that reinforce existing knowledge clusters.
When a note has accumulated multiple backlinks from different contexts, review those backlinks as a discovery mechanism to identify emergent patterns and connections your original authorship did not anticipate, treating the backlink panel as a serendipity engine.
When building knowledge systems that will interface with AI, treat every link you create as infrastructure that future graph traversal algorithms will follow, prioritizing explicit relationship encoding over implicit semantic similarity because GraphRAG systems require edges to perform multi-hop reasoning.
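A small sketch of why explicit edges matter: multi-hop traversal, of the kind GraphRAG-style retrieval performs, can only follow relationships that were actually encoded. The note IDs and relation labels are illustrative assumptions.

```python
from collections import deque

# Explicitly encoded edges: note_id -> [(relation, target_id)]
edges = {
    "unpacking": [("improves", "estimates")],
    "estimates": [("feeds", "planning")],
    "planning": [],
}

def reachable(start, max_hops=2):
    """Breadth-first traversal over explicit edges, bounded by hop count."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, hops = queue.popleft()
        if hops == max_hops:
            continue
        for _relation, target in edges.get(node, []):
            if target not in seen:
                seen.add(target)
                queue.append((target, hops + 1))
    return seen

# Two encoded hops connect 'unpacking' to 'planning'; if either edge were
# left implicit, no similarity score would make that path traversable.
hop2 = reachable("unpacking")
```

Every link you record becomes an edge this kind of traversal can follow; links you leave implicit simply do not exist to it.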
When you revise a belief three or more times in a short period without converging, treat this as a diagnostic signal that you are reacting to surface events rather than updating a deeper model.
When two people hold diverging schemas of the same situation, treat the divergence itself as information: the territory contains complexity that neither schema fully captured.
Tag notes with 1-3 keywords answering 'If I had this insight again in a different context, what word would I search for?' rather than building taxonomies before you have enough atoms.