The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
When expert pattern recognition locks onto a familiar solution, force consideration of alternatives by asking 'I see X—but what if it is not X? What would I expect if it were Y?' before committing.
Write explicit tripwire conditions specifying what observations would resolve ambiguity in each direction before entering any strategic waiting period.
When searching for disconfirming evidence, if your search could not have actually changed your mind, you performed a ritual, not genuine disconfirmation—redesign the search until failure is possible.
When new evidence arrives, classify it by diagnostic value before updating—ask whether you would see this evidence regardless of whether the belief is true, or only if it were true (or only if it were false).
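The diagnostic-value test has a standard quantitative form: the likelihood ratio. A minimal sketch (the probability values are illustrative assumptions, not from the source): evidence you would see regardless of the belief's truth has a ratio near 1 and barely moves the belief, while evidence far likelier under one hypothesis shifts it strongly.

```python
def likelihood_ratio(p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Diagnostic value of evidence E for hypothesis H.
    Near 1.0 means non-diagnostic: you'd see E either way."""
    return p_e_given_h / p_e_given_not_h

def update(prior: float, lr: float) -> float:
    """Bayes update in odds form: posterior odds = prior odds * LR."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * lr
    return post_odds / (1 + post_odds)

# Evidence you'd expect either way (LR ~ 1) barely moves a 50% prior:
print(round(update(0.5, likelihood_ratio(0.90, 0.85)), 3))  # 0.514
# Evidence far likelier if the belief is true shifts it strongly:
print(round(update(0.5, likelihood_ratio(0.90, 0.20)), 3))  # 0.818
```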
When importing best practices or frameworks from another era or scale, explicitly verify that the contextual conditions (organizational size, technological infrastructure, market maturity) that made the practice optimal still hold before adopting it.
When algorithmic feeds or social media constitute a primary information source, deliberately rotate information inputs across epistemic communities outside your filter bubble on a scheduled basis to counteract echo chamber effects.
Before removing any inherited system, process, or organizational structure, document why it was originally created and what problem it solved—if this context cannot be reconstructed, you lack sufficient information to safely remove it.
For information arriving through multiple transmission steps (forwarded quotes, summarized studies, dashboard metrics), multiply the confidence value at each transmission step rather than treating endpoint confidence as equal to source confidence.
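The multiplication rule above can be sketched directly; the step confidences below are illustrative assumptions. Because each step can only degrade fidelity, the product is never higher than the weakest link, and it decays with chain length.

```python
from functools import reduce

def chained_confidence(step_confidences):
    """Confidence after a chain of transmission steps: each step can
    only degrade fidelity, so per-step confidences multiply."""
    return reduce(lambda acc, c: acc * c, step_confidences, 1.0)

# A study (0.9) summarized by a journalist (0.8), then forwarded in a
# chat message (0.9): endpoint confidence sits well below the source's.
print(round(chained_confidence([0.9, 0.8, 0.9]), 3))  # 0.648
```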
Before finalizing significant decision records, have an AI argue against your reasoning and append the strongest objection to your record, preserving the full deliberation rather than only your preferred conclusion.
Externalize reasoning chains by writing numbered steps where each step connects to the next through an explicit warrant stating why step N leads to step N+1, marking any transition that relies on unstated assumptions.
When a reasoning chain contains no surprises or pauses during construction—no moments where the next link was weaker than expected—you have transcribed conclusions rather than constructed reasoning and should restart with genuine step-by-step building.
Maintain an assumption register with five components for each assumption: the specific testable claim, what would change if false, current evidence for/against it, validation status, and next action to test it—reviewing weekly for active projects.
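The five-component register maps cleanly onto a small record type. A minimal sketch (field names and the sample entry are illustrative, not from the source):

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    claim: str                 # the specific testable claim
    if_false: str              # what would change if it turned out false
    evidence: str              # current evidence for/against
    status: str = "untested"   # validation status: untested / validated / falsified
    next_action: str = ""      # next concrete test

def weekly_review(register):
    """Return assumptions still awaiting a test, for the weekly review."""
    return [a for a in register if a.status == "untested"]

register = [
    Assumption(
        claim="Users churn mainly because of onboarding friction",
        if_false="Redirect effort from onboarding to pricing",
        evidence="3 of 10 exit interviews mention onboarding",
        next_action="Interview 10 more churned users",
    ),
]
print(len(weekly_review(register)))  # 1
```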
Draw mental models as diagrams with boxes for entities and labeled arrows for relationships within ten minutes, because spatial layout forces explicit specification of what connects to what and reveals gaps that prose automatically conceals.
When externalizing mental models, label every arrow with a specific verb describing the relationship mechanism (causes, enables, blocks, amplifies) rather than vague connectors like 'affects' or 'relates to', because an arrow you cannot label precisely marks an unexamined assumption.
Categorize each failure as preventable (process deviation), complex (novel factor interaction), or intelligent (frontier experiment) before analysis, because different failure types require different questions.
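The routing of failure type to analysis questions can be made explicit in a lookup table. A minimal sketch, with illustrative questions of my own devising for each type:

```python
# Each failure type warrants a different line of questioning.
FAILURE_QUESTIONS = {
    "preventable": [
        "Which step of the process was skipped or deviated from?",
        "What check would have caught the deviation earlier?",
    ],
    "complex": [
        "Which factors interacted in a way no single owner could see?",
        "What early signal of the interaction existed?",
    ],
    "intelligent": [
        "What hypothesis was this experiment testing?",
        "What did the result rule in or out?",
    ],
}

def analysis_questions(failure_type: str):
    """Route a failure to its type's questions; an unrecognized type is
    itself a signal that classification hasn't happened yet."""
    return FAILURE_QUESTIONS.get(failure_type, ["Classify the failure first."])

print(analysis_questions("intelligent")[0])
```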
When formal and intuitive schemas disagree on a decision, investigate the disagreement for thirty minutes rather than defaulting to either—write what your gut is reacting to and test whether it reveals a pattern your formal criteria missed or a bias you haven't examined.
For each schema driving consequential decisions, document: (1) the schema as a sentence, (2) when you adopted it, (3) supporting evidence, and (4) what would falsify it — if you cannot articulate falsification conditions, treat the schema as dogma requiring immediate audit.
Document the purpose each category serves by completing the sentence 'this category exists to [do what] for [whom]' to distinguish functional infrastructure from inherited furniture.
When someone proposes a different categorization and your first reaction is irritation that they are 'wrong,' treat this as a signal that you have mistaken a constructed category for an objective feature of reality.
When a binary classification hides multiple distinct failure modes or reasons within a single bucket, decompose it into separate dimensions that can be evaluated independently.
For every pair of categories in a classification system, verify that no item can legitimately belong to both (mutual exclusivity test), and verify that no domain item falls outside all categories (collective exhaustiveness test).
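Both tests amount to a mechanical check over item-to-category assignments. A minimal sketch, with a hypothetical bug-triage taxonomy as the example:

```python
def mece_check(assignments, categories):
    """assignments: dict mapping each item to the set of categories it fits.
    Returns violations of mutual exclusivity (items in more than one
    category) and collective exhaustiveness (items in none)."""
    overlaps = {item for item, cats in assignments.items() if len(cats) > 1}
    orphans = {item for item, cats in assignments.items() if not cats}
    used = {c for cats in assignments.values() for c in cats}
    unused = set(categories) - used
    return overlaps, orphans, unused

bugs = {
    "timeout on login": {"backend", "infra"},  # fits two buckets
    "typo in footer": {"frontend"},
    "slow CSV export": set(),                  # fits none
}
overlaps, orphans, unused = mece_check(bugs, ["frontend", "backend", "infra"])
print(sorted(overlaps))  # ['timeout on login']
print(sorted(orphans))   # ['slow CSV export']
```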
Before committing to a category assignment in high-stakes decisions, explicitly name what actions that category triggers and what you would do differently if the item belonged to an adjacent category.
Treat a validation log with no disconfirmations as a warning signal of selective documentation rather than validation success, because unbiased testing inevitably produces some surprises.
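The "inevitably produces some surprises" claim is just the arithmetic of independent tests: even a modest per-test chance of disconfirmation makes a long run of clean results improbable. A minimal sketch (the 15% surprise rate is an illustrative assumption):

```python
def p_zero_surprises(n_tests: int, p_surprise: float) -> float:
    """Probability that n independent honest tests all confirm, given
    each has probability p_surprise of producing a disconfirmation."""
    return (1 - p_surprise) ** n_tests

# With only a 15% chance of surprise per test, 20 clean tests in a row
# is under a 4% event if the testing was honest:
print(round(p_zero_surprises(20, 0.15), 4))  # 0.0388
```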
For each candidate enabling relationship, articulate the specific mechanism through which one condition creates another; if you can only state correlation ('they go together') rather than mechanism, treat it as association not enabling.