The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
When reviewing AI-generated text, verify whether you could reconstruct the reasoning independently; if not, you have received polish without cognitive gain and should write your own version first.
Feed AI only your signal-tagged thoughts rather than your unfiltered mental stream, because AI amplification of noise-plus-signal produces noise-amplified-by-compute rather than useful pattern detection.
When presenting compound statements to AI systems, explicitly ask for assumption enumeration rather than direct answers, then critically verify the decomposition's completeness since the AI may introduce its own hidden assumptions.
When AI retrieval quality degrades despite good source material, diagnose whether notes are self-contained units or fragments requiring external context, because fragmentation produces context confusion that corrupts AI reasoning.
When using AI systems with your knowledge base, maintain multiple granularity levels of the same material (fine-grained for precise retrieval, coarse-grained for contextual reasoning) rather than forcing a single chunk size, because different query types require different resolutions.
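A minimal sketch of keeping both granularities of the same material, assuming notes are plain text with blank-line paragraph breaks; the chunk schema and `ref` naming are illustrative, not any specific tool's format.

```python
# Index the same note at two granularities: coarse paragraphs for
# contextual reasoning, fine sentences for precise retrieval.
import re

def chunk_note(note_id: str, text: str):
    chunks = []
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    for p_idx, para in enumerate(paragraphs):
        # Coarse chunk: the whole paragraph, preserving surrounding context.
        chunks.append({"note": note_id, "level": "coarse",
                       "ref": f"{note_id}:p{p_idx}", "text": para})
        # Fine chunks: individual sentences, for pinpoint retrieval.
        for s_idx, sent in enumerate(re.split(r"(?<=[.!?])\s+", para)):
            chunks.append({"note": note_id, "level": "fine",
                           "ref": f"{note_id}:p{p_idx}s{s_idx}", "text": sent})
    return chunks

chunks = chunk_note("zk-142", "First claim. Supporting detail.\n\nSecond claim.")
coarse = [c for c in chunks if c["level"] == "coarse"]
fine = [c for c in chunks if c["level"] == "fine"]
```

Both levels carry a reference back to the same source note, so a retriever can answer a precise query from a fine chunk and still hand the surrounding coarse chunk to the model as context.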
When using AI to analyze accumulated evidence around an open question, provide the constellation of linked notes (question + partial answers + contradictions + gaps) as context rather than asking the AI to answer from scratch, because the accumulated context enables pattern recognition your cold query cannot access.
Feed your operational definitions to AI systems as explicit context before generating analysis or recommendations, treating your personal glossary as the translation layer between the model's probability-weighted semantics and your specific conceptual framework.
Run semantic similarity searches against your existing notes when creating new notes to detect conceptual duplication hidden behind different vocabulary, treating AI-surfaced matches as candidates for potential abstraction or cross-linking.
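A sketch of the duplicate-detection pass. Real systems would compute similarity with a learned sentence-embedding model; a bag-of-words cosine is used here only as a self-contained stand-in, and the threshold is an arbitrary illustrative value.

```python
# Surface existing notes similar enough to a new note to be hidden
# duplicates, even when the vocabulary partially differs.
import math
from collections import Counter

def cosine(a: str, b: str) -> float:
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def duplicate_candidates(new_note: str, existing: dict, threshold: float = 0.5):
    scored = ((nid, cosine(new_note, text)) for nid, text in existing.items())
    return sorted((pair for pair in scored if pair[1] >= threshold),
                  key=lambda pair: -pair[1])

notes = {"a1": "spaced repetition strengthens memory",
         "b2": "travel budget for the quarter"}
hits = duplicate_candidates("spaced repetition strengthens recall", notes)
```

Each surfaced candidate is a prompt for a human decision, cross-link or merge, not an automatic deletion.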
When building knowledge systems that will interface with AI, treat every link you create as infrastructure that future graph traversal algorithms will follow, prioritizing explicit relationship encoding over implicit semantic similarity because GraphRAG systems require edges to perform multi-hop reasoning.
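The multi-hop point can be made concrete with a toy link structure: explicit typed edges let a traversal reach notes that pure semantic similarity to the query would never surface. The note IDs and relation names below are invented for illustration, and the traversal is a plain breadth-first search rather than any particular GraphRAG implementation.

```python
# Typed, explicit edges between notes; multi-hop retrieval is a BFS
# over this structure, bounded by a hop budget.
from collections import deque

edges = {  # note_id -> list of (relation, target) pairs
    "q-open": [("partial-answer", "n-17"), ("contradicts", "n-23")],
    "n-17":   [("evidence-for", "n-40")],
    "n-23":   [],
    "n-40":   [],
}

def reachable(start: str, max_hops: int):
    # All notes within max_hops explicit links of the start note.
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for _, target in edges.get(node, []):
            if target not in seen:
                seen.add(target)
                frontier.append((target, depth + 1))
    return seen

two_hops = reachable("q-open", 2)
```

Note that `n-40` is only reachable because `n-17` carries an explicit edge to it; if that link existed only as implicit semantic similarity, the second hop would be invisible to the traversal.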
Present AI systems with your atomic notes and ask for multiple possible sequences (chronological, causal, problem-solution) rather than asking for a single best structure, using AI to discover sequences rather than impose them.
Use AI to audit your knowledge base for structural debt (compound notes, duplicates, orphans, broken connections) but perform the actual refactoring decisions yourself to gain the cognitive benefit.
Index only processed permanent notes in AI-searchable systems while keeping unprocessed inbox captures outside retrieval scope, because AI systems cannot distinguish epistemic status and will retrieve raw captures with equal confidence to verified knowledge.
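Operationally this is a filter at index-build time, assuming each note carries an explicit status field; the field and status names are illustrative.

```python
# Only notes with verified epistemic status enter the retrieval index;
# raw inbox captures are excluded so the AI cannot retrieve them with
# the same confidence as processed knowledge.
notes = [
    {"id": "n1", "status": "permanent", "text": "Verified claim about X."},
    {"id": "n2", "status": "inbox",     "text": "Raw capture, unchecked."},
]

def indexable(all_notes):
    return [n for n in all_notes if n["status"] == "permanent"]

index = indexable(notes)
```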
When using AI meeting transcription, continue taking personal compressed notes during the conversation rather than relying solely on transcripts; then review the AI summary against your notes afterward to identify gaps and improve real-time capture calibration.
When using AI to draft difficult communications, compare your reactive draft against the AI-generated neutral version to measure where emotions are distorting your message, rather than sending the AI version directly.
Before prompting AI to analyze meeting transcripts or documents, explicitly request separated outputs: a first section listing only observable facts without interpretation, and a second section offering interpretations of those observations.
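One way to enforce the separation is to bake it into a reusable prompt scaffold. The wording below is illustrative, not a tested or recommended prompt.

```python
# A prompt template that forces observation/interpretation separation
# at the output level rather than hoping the model volunteers it.
def observation_first_prompt(transcript: str) -> str:
    return (
        "Analyze the transcript below in two strictly separated sections.\n"
        "Section 1 - Observations: only facts a camera or recorder could "
        "have captured, with no interpretation.\n"
        "Section 2 - Interpretations: possible readings of those "
        "observations, each tied to a specific item from Section 1.\n\n"
        f"Transcript:\n{transcript}"
    )

prompt = observation_first_prompt("Alice: we missed the deadline.")
```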
When using AI to practice observation skills, provide it with your written accounts of charged situations and explicitly request separation of observational statements from evaluative statements, using the AI's output as immediate feedback on which judgments you embedded without noticing.
Before using AI for pattern analysis on observational data, ensure your input consists of descriptive observations rather than evaluative conclusions by applying the camera test to each input statement, because AI analyzing your conclusions produces confirmation of your biases rather than structural insights.
Use AI to scan peripheral domains weekly and deliver filtered summaries, while reserving human attention for 2-3 deep engagements per week in your core signal domains.
When using AI for signal detection, provide explicit goals and evaluation criteria first, then use AI to scale pattern recognition, because AI without human-defined goals produces generic output from noisy channels.
When using AI during high stress, prompt with 'I am stressed and may be experiencing tunnel vision—what am I likely not seeing?' rather than 'prove my interpretation is right.'
When an AI system makes consequential decisions about people (hiring, performance evaluation, resource allocation), audit what organizational context and metrics trained the system before evaluating algorithm quality, because AI inherits and amplifies the biases of the measurement system.
Before finalizing significant decision records, have an AI argue against your reasoning and append the strongest objection to your record, preserving the full deliberation rather than only your preferred conclusion.
Use AI to analyze patterns across multiple emotional externalization entries (recurring emotions, triggers, trends) rather than to label emotions for you, because the regulatory benefit comes from the act of labeling, not from being labeled.
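The division of labor, you label, the machine detects trends, can be sketched with a simple frequency pass over self-labeled entries; the entry schema and field names are invented for illustration.

```python
# Pattern analysis over entries the human has already labeled:
# the labels come from you, only the trend detection is automated.
from collections import Counter

entries = [  # illustrative self-labeled journal entries
    {"date": "2024-05-01", "emotion": "frustration", "trigger": "interruptions"},
    {"date": "2024-05-03", "emotion": "frustration", "trigger": "scope change"},
    {"date": "2024-05-04", "emotion": "calm",        "trigger": "deep work"},
]

def recurring(entries, field: str, min_count: int = 2):
    counts = Counter(e[field] for e in entries)
    return [(value, n) for value, n in counts.most_common() if n >= min_count]

patterns = recurring(entries, "emotion")
```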
When using AI for learning, write your own explanation first, then use AI interrogation to find gaps, then revise; never let AI write the initial explanation, because reading AI output does not produce the generation effect.