The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
Deliberately link contradicting ideas in your knowledge graph rather than keeping them in separate domains, because spatial proximity forces the cognitive confrontation that compartmentalization prevents.
Apply the cascade test to contradictions by asking 'If I resolved this, what else would have to change?' to distinguish surface contradictions (low dependency count) from deep contradictions (high dependency count).
Set holding periods for contradictions based on cascade depth: one week for low cascade, two to four weeks for medium cascade, one month or longer for high cascade.
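A minimal sketch of the cascade test and holding-period schedule above, assuming ideas are stored as nodes with explicit dependency links. All class names, thresholds, and numbers here are illustrative, not prescribed by the curriculum.

```python
# Hypothetical sketch: model claims as graph nodes, record explicit "contradicts"
# links, and estimate cascade depth by counting everything downstream of a claim.
from collections import defaultdict, deque

class KnowledgeGraph:
    def __init__(self):
        self.depends_on = defaultdict(set)   # claim -> claims that depend on it
        self.contradictions = []             # (claim_a, claim_b) pairs held open

    def add_dependency(self, claim, dependent):
        self.depends_on[claim].add(dependent)

    def link_contradiction(self, claim_a, claim_b):
        # Deliberately keep the contradiction in view instead of filing the
        # two claims under separate domains.
        self.contradictions.append((claim_a, claim_b))

    def cascade_count(self, claim):
        # "If I resolved this, what else would have to change?" --
        # a breadth-first walk over everything downstream of the claim.
        seen, queue = set(), deque([claim])
        while queue:
            current = queue.popleft()
            for dependent in self.depends_on[current]:
                if dependent not in seen:
                    seen.add(dependent)
                    queue.append(dependent)
        return len(seen)

def holding_period(cascade_count: int) -> str:
    # Illustrative cutoffs for low / medium / high cascade depth.
    if cascade_count <= 2:
        return "1 week"
    if cascade_count <= 6:
        return "2-4 weeks"
    return "1 month or longer"
```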
During a contradiction holding period, write brief notes whenever the contradiction surfaces capturing what triggered it and what you noticed, without attempting resolution.
When the holding period ends, if you cannot yet articulate a missing variable or synthesis, extend the hold rather than forcing resolution, because premature resolution defeats the purpose of incubation.
Test whether a resolved contradiction is genuine innovation rather than compromise by verifying that both original requirements are fully satisfied, not partially abandoned.
When two credentialed experts contradict each other on the same question, treat their disagreement as a map of genuine uncertainty in the evidence base rather than as a problem requiring you to pick a winner.
When experts disagree, ask 'Why do they disagree?' rather than 'Who is right?' to identify structural sources of disagreement such as different methodologies, populations, or outcome measures.
When steel-manning an opposing position, verify adequacy by checking whether advocates of that position would say 'Yes, that is exactly what I mean' before proceeding to critique.
For each schema, list assumptions it makes—things it takes for granted without defining—then compare assumption lists across schemas to find shared dependency gaps where both schemas assume the same foundational concept but neither defines it.
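A small sketch of that comparison step, assuming each schema's assumptions and defined concepts have already been captured as sets of strings. The function name and the example schemas are hypothetical.

```python
# Shared dependency gaps: concepts every schema assumes but no schema defines.
def shared_dependency_gaps(assumptions_by_schema: dict[str, set[str]],
                           defined_by_schema: dict[str, set[str]]) -> set[str]:
    assumed_everywhere = set.intersection(*assumptions_by_schema.values())
    defined_somewhere = set.union(*defined_by_schema.values())
    return assumed_everywhere - defined_somewhere

# Illustrative data: two schemas, their assumptions, and what each defines.
gaps = shared_dependency_gaps(
    {"schema_a": {"agency", "feedback", "incentives"},
     "schema_b": {"agency", "feedback", "equilibrium"}},
    {"schema_a": {"incentives"},
     "schema_b": {"equilibrium"}},
)
print(gaps)  # {'agency', 'feedback'} -- assumed by both, defined by neither
```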
When a cross-domain mapping breaks down or fails, investigate the mismatch systematically rather than forcing the analogy—mapping failures reveal domain-specific structural features that successful mappings cannot expose.
When two schemas appear to share a concept or principle, test whether the connection is genuine by attempting to scramble the specifics—if the 'connection' would work equally well between any two randomly selected schemas, you've found semantic coincidence rather than structural isomorphism.
After experiencing what feels like an insight or integration moment, verify whether it represents genuine integration by testing whether you can now do something you could not do before—if the click produced no new capability, inference, or prediction, you experienced fluency or familiarity rather than structural integration.
When an attempted integration between two schemas forces you to reshape one schema to fit the other rather than discovering a higher-order structure that accommodates both unchanged, you are executing Procrustean integration—abandon the attempt and either maintain the schemas separately or search for a genuinely encompassing framework.
Do not automate decisions where the outcome is genuinely different each instance even if the category recurs (interpersonal conflicts, creative problems, novel diagnoses), because automating decisions with genuine novelty produces rigidity disguised as efficiency.
Before building any agent, explicitly name the schema it operates on by writing what the agent assumes about how the world works, because unexamined schemas produce systematically wrong outputs despite reliable execution.
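One way to make that naming step unavoidable in practice, sketched under the assumption that agents are declared in code. The `AgentSpec` class and its example assumptions are hypothetical, not a prescribed interface.

```python
# Illustrative pattern: force the schema into writing before the agent runs.
from dataclasses import dataclass, field

@dataclass
class AgentSpec:
    name: str
    # What the agent assumes about how the world works; reviewed before any run.
    schema_assumptions: list[str] = field(default_factory=list)

    def validate(self):
        if not self.schema_assumptions:
            raise ValueError(
                f"{self.name}: state the schema this agent operates on before building it."
            )

# Hypothetical example: the assumptions are exactly the claims a reviewer can challenge.
triage_agent = AgentSpec(
    name="inbox-triage",
    schema_assumptions=[
        "Every incoming message has exactly one owner.",
        "Urgency can be inferred from sender and subject line alone.",
    ],
)
triage_agent.validate()
```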
When someone shares expertise or makes a recommendation, separate your evaluation of their reasoning from your evaluation of their credentials by asking 'Would I find this compelling if it came from a low-status source?'
Before acting on AI-generated conclusions, apply the defense test: 'Could I defend this conclusion without the AI's output? Do I understand the reasoning well enough to identify where it might be wrong?'—if not, do the cognitive work before proceeding.
After encountering AI recommendations that contradict your careful analysis, apply three filters in sequence: Does this present unconsidered evidence? Does this identify verifiable reasoning errors? Does it merely state a different conclusion without showing work? Only the first two warrant revision.
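The three filters can be read as a short decision rule. The sketch below only structures the self-assessment, it does not automate the judgments themselves; the function name and parameters are illustrative.

```python
# Apply the three filters in order: only the first two warrant revising your analysis.
def should_revise(presents_new_evidence: bool,
                  identifies_reasoning_error: bool,
                  merely_different_conclusion: bool) -> bool:
    if presents_new_evidence:          # filter 1: unconsidered evidence
        return True
    if identifies_reasoning_error:     # filter 2: verifiable reasoning error
        return True
    if merely_different_conclusion:    # filter 3: different conclusion, no work shown
        return False
    return False

# Example: the AI disagrees but offers no new evidence and no reasoning error.
print(should_revise(False, False, True))  # False -- hold your own analysis
```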