The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
Expertise develops through deliberate practice that builds sophisticated mental representations and enables perceptual differentiation of domain-specific features that untrained observers cannot detect.
Meaning is constructed by receivers using their own mental models rather than transmitted intact from senders, because information contains no inherent meaning—meaning emerges from the interaction between information and context.
Schemas are cognitive structures that organize knowledge at all levels of abstraction by specifying relationships between concepts and guiding information processing.
Mental models have structural correspondence (isomorphism) to the situations they represent, with relationships between model elements mapping onto relationships between real-world elements.
People typically construct a single mental model of a situation and reason from it as if it were complete, without spontaneously generating alternative models.
When confusion or disagreement persists despite shared facts, externalize all mental models spatially (whiteboard, diagram, parallel columns) before continuing verbal discussion, because visual comparison reveals structural misalignment that sequential verbal exchange cannot surface.
When two schemas of the same situation diverge between people, treat the divergence itself as information about complexity the territory contains that neither schema fully captured.
For each captured surprise, write one sentence answering 'What did I apparently believe that turned out to be wrong?' to convert observations into explicit model gaps.
Validate cross-domain pattern candidates by verifying that the relational structure (not surface similarity) matches across domains—two patterns share structure when the causal relationships between elements are preserved even when the elements themselves differ completely.
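A minimal Python sketch of that validation step, assuming each pattern has been reduced to a set of directed (cause, effect) edges; the two example domains (a thermostat loop and a predator-prey loop) and the function name are illustrative, not from the source:

```python
from itertools import permutations

def same_relational_structure(edges_a, nodes_a, edges_b, nodes_b):
    """True if some one-to-one mapping of elements preserves every
    causal edge: structure matches even when elements differ."""
    if len(nodes_a) != len(nodes_b):
        return False
    for perm in permutations(nodes_b):
        mapping = dict(zip(nodes_a, perm))
        if {(mapping[s], mapping[t]) for s, t in edges_a} == set(edges_b):
            return True
    return False

# Each edge reads (cause, effect). No element names overlap across
# domains, yet the pattern of causal edges is the same three-step loop.
thermostat = {("heat", "temperature"), ("temperature", "thermostat"),
              ("thermostat", "heat")}
predation = {("prey", "predators"), ("predators", "prey_deaths"),
             ("prey_deaths", "prey")}
print(same_relational_structure(
    thermostat, ["heat", "temperature", "thermostat"],
    predation, ["prey", "predators", "prey_deaths"]))  # True
```

The check succeeds despite zero surface similarity, which is exactly the criterion the atom states: only the causal relationships between elements have to be preserved.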
Draw mental models as diagrams with boxes for entities and labeled arrows for relationships within ten minutes, because spatial layout forces explicit specification of what connects to what and reveals gaps that prose automatically conceals.
When externalizing mental models, label every arrow with a specific verb describing the relationship mechanism (causes, enables, blocks, amplifies) rather than vague connectors like 'affects' or 'relates to', because an arrow you cannot label with a specific mechanism marks an unexamined assumption.
After drawing a mental model, audit it for missing feedback loops by tracing whether any effects circle back to influence their own causes, because circular causation governs most complex systems but is invisible to linear thinking.
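The three practices above compose naturally once the diagram is treated as data. A hedged Python sketch with an invented example model: entities become nodes, each arrow carries a verb, vague connectors are flagged, and a depth-first trace finds effects that circle back to their own causes:

```python
# A mental model as labeled directed edges: (source, verb, target).
MODEL = [
    ("hiring", "enables", "capacity"),
    ("capacity", "amplifies", "output"),
    ("output", "causes", "revenue"),
    ("revenue", "enables", "hiring"),   # effect feeds back into its cause
    ("morale", "affects", "output"),    # vague verb: no mechanism stated
]

VAGUE_VERBS = {"affects", "relates to", "influences"}

def audit(model):
    # Flag arrows whose verb names no mechanism.
    for src, verb, dst in model:
        if verb in VAGUE_VERBS:
            print(f"vague arrow: {src} -{verb}-> {dst}")
    # Trace feedback loops: any path that returns to its start node.
    adj = {}
    for src, _, dst in model:
        adj.setdefault(src, []).append(dst)
    def reaches(start, node, seen):
        for nxt in adj.get(node, []):
            if nxt == start:
                return True
            if nxt not in seen:
                seen.add(nxt)
                if reaches(start, nxt, seen):
                    return True
        return False
    for src in adj:
        if reaches(src, src, set()):
            print(f"feedback loop passes through: {src}")

audit(MODEL)
```

Running this flags the 'affects' arrow and reports that hiring, capacity, output, and revenue all sit on a loop, the circular causation a purely linear reading of the same model would miss.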
Before attempting to design better schemas, inventory your current operating schemas by writing what you actually do (not what you should do) across professional, relational, and self-concept domains.
Perform schema inspection through a five-step audit: (1) list actual operating rules in a domain, (2) source each rule's origin, (3) identify one success and one failure case per rule, (4) rate confidence in each rule, (5) compare confidence to evidence quality.
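A possible shape for one entry in that audit, sketched in Python; the field names and the 1-5 scales are assumptions, not prescribed by the source:

```python
from dataclasses import dataclass

@dataclass
class SchemaAuditEntry:
    rule: str           # step 1: the operating rule as actually applied
    origin: str         # step 2: where the rule came from
    success_case: str   # step 3: one concrete case where it worked
    failure_case: str   # step 3: one concrete case where it failed
    confidence: int     # step 4: subjective confidence, 1-5
    evidence: int       # step 5 input: quality of the evidence, 1-5

    def overconfident(self) -> bool:
        # Step 5: flag rules held more firmly than the evidence warrants.
        return self.confidence > self.evidence

entry = SchemaAuditEntry(
    rule="Never ship on Fridays",
    origin="One bad incident in 2019",
    success_case="Avoided a weekend outage in Q3",
    failure_case="Delayed a critical fix by three days",
    confidence=5,
    evidence=2,
)
print(entry.overconfident())  # True: audit this rule first
```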
When using AI to assist schema inspection, first externalize your thinking in writing, then request assumption extraction, pattern detection across entries, or adversarial questioning—because AI can only inspect articulated schemas, not internal ones.
When you update a belief, write an explicit update statement in the format 'Based on [specific evidence], I am updating my model from [old version] to [new version]' to reframe revision as calibration rather than defeat.
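The template is mechanical enough to encode directly; a trivial Python helper, with invented example content, that keeps every revision in the prescribed format:

```python
def update_statement(evidence: str, old: str, new: str) -> str:
    # Fills the 'Based on [evidence], I am updating my model
    # from [old] to [new]' template.
    return (f"Based on {evidence}, I am updating my model "
            f"from {old} to {new}.")

print(update_statement(
    "three churned accounts citing onboarding",
    "'churn is a pricing problem'",
    "'churn is primarily an onboarding problem'"))
```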
For each schema driving consequential decisions, document: (1) the schema as a sentence, (2) when you adopted it, (3) supporting evidence, and (4) what would falsify it — if you cannot articulate falsification conditions, treat the schema as dogma requiring immediate audit.
Store each schema with explicit scope documentation specifying the domain where it was built and the structural conditions it assumes, treating scope as mandatory metadata rather than optional annotation.
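The two atoms above suggest a record shape in which falsification conditions and scope are required fields rather than afterthoughts. A sketch in Python; all field names and the example schema are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class SchemaRecord:
    statement: str                 # (1) the schema as one sentence
    adopted: str                   # (2) when you adopted it
    supporting_evidence: list[str] # (3) what supports it
    falsifiers: list[str]          # (4) what would disprove it
    # Scope is mandatory metadata, not an optional annotation:
    source_domain: str             # domain where the schema was built
    assumed_conditions: list[str]  # structural conditions it presumes

    def is_dogma(self) -> bool:
        # No articulated falsification conditions: audit immediately.
        return not self.falsifiers

record = SchemaRecord(
    statement="Small teams ship faster than large ones",
    adopted="2021, after the platform rewrite",
    supporting_evidence=["rewrite shipped in 6 weeks with 4 people"],
    falsifiers=["a 4-person team misses deadlines a 12-person team hits"],
    source_domain="early-stage product engineering",
    assumed_conditions=["low coordination overhead", "well-scoped work"],
)
print(record.is_dogma())  # False
```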
When someone proposes a different categorization and your first reaction is irritation that they are 'wrong,' treat this as a signal that you have mistaken a constructed category for an objective feature of reality.
For each candidate enabling relationship, articulate the specific mechanism through which one condition creates another; if you can only state correlation ('they go together') rather than mechanism, treat it as association not enabling.
After drawing a complete relationship map, write three to five sentences describing the structural story—focusing specifically on what was invisible before you drew the map rather than summarizing what you already knew.
When a relationship type is mathematically transitive (like 'is greater than' or 'is ancestor of'), you can safely infer the endpoint connection from a chain; when it is not transitive (like 'is friend of' or 'is close to'), you must verify endpoint relationships directly rather than inferring them.
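A small Python illustration of the inference rule; the relation names are the source's own examples, everything else is invented:

```python
# Whether a chain licenses an endpoint inference depends on the relation.
TRANSITIVE = {"is greater than", "is ancestor of"}

def endpoint_inference(relation: str, chain: list) -> str:
    a, z = chain[0], chain[-1]
    if relation in TRANSITIVE:
        return f"inferred: {a} {relation} {z}"
    return f"verify directly: {a} {relation} {z}? (chain does not license it)"

print(endpoint_inference("is ancestor of", ["Ada", "Ben", "Cal"]))
# inferred: Ada is ancestor of Cal
print(endpoint_inference("is friend of", ["Ada", "Ben", "Cal"]))
# verify directly: Ada is friend of Cal? (chain does not license it)
```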
For each major domain where you make decisions, explicitly write down the single deepest assumption everything else depends on, then list 5-10 decisions that would change if that root were different to verify you've found an actual root.
State the negation of any root concept you identify and ask what you would do differently if the opposite were true, not to believe the negation but to break the structural lock that makes the original feel inevitable.