The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
Document the specific conditions under which a schema was validated, as the boundary of tested conditions defines the boundary of warranted confidence.
Reformulate schemas with explicit boundary clauses that separate validated conditions from untested extrapolations.
When transferring a schema to a new domain, treat the transfer as a new hypothesis requiring independent validation rather than an extension of existing validation.
Treat scale transitions as requiring revalidation, as schemas validated at one scale often encounter emergent properties at different scales that invalidate their predictions.
Distinguish epistemic confidence (grounded in evidence) from psychological confidence (grounded in identity and familiarity) by requiring an external validation trail for the former.
Test schemas where you feel most certain first, as high confidence without testing history signals the highest risk of unwarranted certainty.
Design tests that can genuinely fail by specifying in advance what observations would falsify the schema, not merely what would confirm it.
Document falsifications more thoroughly than confirmations, as falsification provides boundary information, revision constraints, and exposure of hidden assumptions that confirmation cannot reveal.
Reframe invalidation as the highest-value epistemic event rather than as personal failure, treating moments when schemas break as the richest learning opportunities.
Match schema review cadence to the actual rate of change in each domain rather than applying uniform revision intervals across all knowledge areas.
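The cadence-matching idea above can be sketched as a simple inverse relationship: the faster a domain changes, the shorter the review interval. This is a minimal illustration; the base period and the rate scale are assumptions, not prescribed values.

```python
# Illustrative sketch: review interval shrinks as a domain's rate of
# change grows, instead of applying one uniform interval everywhere.
# base_days and the change-rate units are assumptions for illustration.
def review_interval_days(change_rate_per_year: float,
                         base_days: float = 365.0) -> float:
    """Faster-moving domains get shorter review intervals."""
    return base_days / max(change_rate_per_year, 1e-9)

print(review_interval_days(12.0))  # fast-moving domain: roughly monthly
print(review_interval_days(1.0))   # slow-moving domain: roughly yearly
```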
Treat unexpected outcomes as automatic triggers for schema re-validation, as surprise signals that at least one schema in your stack has drifted out of calibration.
Identify beliefs where being wrong would have the greatest impact and prioritize validating those schemas first, as high-stakes beliefs with thin evidence represent maximum epistemic risk.
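One way to make this prioritization concrete is to score each belief by the impact of being wrong weighted by how thin its evidence is, then validate in descending order of risk. The fields and the scoring formula below are illustrative assumptions, not a prescribed method.

```python
# Hypothetical sketch: rank beliefs by epistemic risk, defined here as
# impact-if-wrong times evidence thinness. Scores on 0..1 are assumed.
from dataclasses import dataclass

@dataclass
class Belief:
    name: str
    impact_if_wrong: float    # 0..1, cost of acting on a false schema
    evidence_strength: float  # 0..1, weight of independent validation

def epistemic_risk(b: Belief) -> float:
    # High-stakes beliefs with thin evidence score highest.
    return b.impact_if_wrong * (1.0 - b.evidence_strength)

beliefs = [
    Belief("market assumption", impact_if_wrong=0.9, evidence_strength=0.2),
    Belief("tooling choice", impact_if_wrong=0.3, evidence_strength=0.8),
]
for b in sorted(beliefs, key=epistemic_risk, reverse=True):
    print(f"{b.name}: risk={epistemic_risk(b):.2f}")
```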
Schedule regular schema audits to detect drift before catastrophic failure, because unmonitored schemas degrade without signaling their obsolescence.
Frame belief revision as model calibration rather than personal failure to reduce identity-threat responses that block updating.
Limit revision scope to single-schema updates rather than wholesale worldview overhauls to stay within working memory capacity and enable accurate evaluation.
Maintain a revision queue where you capture schema anomalies as they occur rather than trying to fix everything immediately, enabling prioritized processing.
Log schema revisions in real-time rather than retrospectively, because delayed documentation becomes narrative construction rather than accurate provenance.
Categorize revision triggers by type (data, experience, authority, social proof, emotion) to build meta-knowledge about your own epistemic patterns.
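The trigger taxonomy above can be captured as a small enumeration, with counts accumulated over a revision log to surface one's dominant epistemic patterns. The log structure is an assumption; only the five categories come from the principle itself.

```python
# Illustrative sketch: tag each schema revision with its trigger type,
# then count frequencies to build meta-knowledge about revision patterns.
from enum import Enum
from collections import Counter

class Trigger(Enum):
    DATA = "data"
    EXPERIENCE = "experience"
    AUTHORITY = "authority"
    SOCIAL_PROOF = "social proof"
    EMOTION = "emotion"

# A hypothetical revision log; in practice each entry would also carry
# the schema name and a timestamp.
revision_log = [Trigger.DATA, Trigger.EMOTION, Trigger.DATA]
pattern = Counter(t.value for t in revision_log)
print(pattern.most_common(1))  # dominant trigger type
```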
Archive old schema versions rather than deleting them, preserving the complete trajectory of your thinking for comparison and potential rollback.
When a schema stops fitting reality, formally mark it as deprecated with full documentation rather than patching it indefinitely with exceptions and workarounds.
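The two principles above (archiving versions, marking deprecation explicitly) suggest an append-only history in which revisions add versions, nothing is deleted, and deprecation is a documented state. The field names below are assumptions made for illustration.

```python
# Minimal sketch of an append-only schema archive: each revision appends
# a new version, old versions are retained for comparison and rollback,
# and deprecation is an explicit, documented state rather than deletion.
from dataclasses import dataclass, field

@dataclass
class SchemaVersion:
    statement: str
    status: str = "active"   # "active" or "deprecated"
    reason: str = ""

@dataclass
class SchemaHistory:
    versions: list = field(default_factory=list)

    def revise(self, statement: str) -> None:
        self.versions.append(SchemaVersion(statement))

    def deprecate(self, reason: str) -> None:
        # Mark the current version deprecated with full documentation.
        self.versions[-1].status = "deprecated"
        self.versions[-1].reason = reason

h = SchemaHistory()
h.revise("All feedback should be immediate")
h.revise("Feedback timing should match task complexity")
h.deprecate("Failed predictions on long-horizon projects")
print([v.status for v in h.versions])  # full trajectory preserved
```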
Every deferred schema update accumulates interest in the form of suboptimal decisions, failed predictions, and wasted effort, compounding until the schema is corrected.
Build schema update triggers based on surprise and prediction failure rather than calendar schedules, because surprise is the natural signal of model-reality divergence.
When updating a schema, catalog every downstream dependency—habits, commitments, tools, relationships, routines—and migrate them systematically rather than expecting behavior to automatically align with the updated belief.
Execute schema migrations in order of friction—start with dependencies that generate the most daily conflict between old infrastructure and new understanding to reduce the highest-cost misalignments first.
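Friction-ordered migration amounts to sorting the dependency catalog by daily conflict and working from the top. The dependency names and friction scores below are illustrative assumptions.

```python
# Hypothetical sketch: after a schema update, sort downstream
# dependencies by daily friction and migrate the highest-friction
# items first to retire the most costly misalignments early.
dependencies = [
    {"name": "morning routine", "friction_per_day": 3},
    {"name": "project tooling", "friction_per_day": 7},
    {"name": "standing commitment", "friction_per_day": 1},
]
migration_order = sorted(dependencies,
                         key=lambda d: d["friction_per_day"],
                         reverse=True)
for dep in migration_order:
    print(dep["name"])
```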