The irreducible epistemic atoms underlying the curriculum: 2,888 atoms across 3 types and 2 molecules.
Measure personal development quality by asking whether any practice changed a schema's structure or merely added information to existing schemas - structural change is genuine growth, information addition is not.
For each evolved schema, document not just old and new versions but also what the old belief was protecting or enabling, because identifying the lost function reveals emotional costs of revision and increases honesty about unrevised beliefs.
When a schema cannot specify any observation that would falsify it, classify it as a belief system rather than a testable model and flag it for replacement or constraint.
In your schema inventory, require behavioral proof by identifying three decisions from the last month that each schema governed—if you cannot find three, reclassify the schema as aspirational rather than operational.
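A minimal sketch of the behavioral-proof check, assuming a toy inventory where each schema carries the list of last-month decisions it governed (all schema names and decisions here are hypothetical):

```python
# Hypothetical inventory: schema -> decisions it governed last month.
schemas = {
    "sunk-cost awareness": [
        "cancelled a stalled side project",
        "kept the gym membership after re-evaluating",
        "dropped a book halfway through",
    ],
    "networking compounds": [],  # no behavioral evidence found
}

def classify(decisions, required=3):
    """Operational only with >= `required` governed decisions to show."""
    return "operational" if len(decisions) >= required else "aspirational"

for name, decisions in schemas.items():
    print(f"{name}: {classify(decisions)}")
```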
For each important schema, map both its prerequisites (what it depends on) and its dependents (what depends on it), then flag schemas appearing most frequently as dependencies for regular review.
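A sketch of the dependency mapping, assuming schemas and their prerequisites live in a plain dict (schema names hypothetical); inverting the map yields dependents, and the most frequently depended-on schemas go on the review list:

```python
from collections import Counter

# Hypothetical map from each schema to the schemas it presupposes.
prerequisites = {
    "career capital": ["compound interest", "skill transfer"],
    "compound interest": ["exponential growth"],
    "deliberate practice": ["feedback loops", "skill transfer"],
    "spaced repetition": ["forgetting curve"],
}

def dependents(prereqs):
    """Invert the map: schema -> the schemas that depend on it."""
    inverted = {}
    for schema, deps in prereqs.items():
        for dep in deps:
            inverted.setdefault(dep, []).append(schema)
    return inverted

# The most frequently depended-on schemas carry the most load.
inv = dependents(prerequisites)
load = Counter(dep for deps in prerequisites.values() for dep in deps)
for schema, count in load.most_common(3):
    print(f"review regularly: {schema} <- {inv[schema]}")
```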
When schema conflict persists after examining evidence, build a conditional routing rule specifying the exact conditions under which each schema applies rather than attempting to pick a universal winner.
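One way to encode such a conditional routing rule is an ordered list of (condition, schema) pairs; the conflicting schemas and context keys below are hypothetical:

```python
# Hypothetical conflict between a "decide fast" schema and a
# "deliberate first" schema, resolved by explicit routing conditions.
rules = [
    (lambda ctx: ctx["reversible"] and ctx["stakes"] == "low",
     "decide fast: act quickly, correct course later"),
    (lambda ctx: not ctx["reversible"] or ctx["stakes"] == "high",
     "deliberate first: gather more evidence before committing"),
]

def route(context):
    """Return the first schema whose condition matches the context."""
    for applies, schema in rules:
        if applies(context):
            return schema
    return "no rule matched: the conflict conditions need refining"

print(route({"reversible": True, "stakes": "low"}))
print(route({"reversible": False, "stakes": "high"}))
```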
When planning task duration, deliberately switch from inside-view scenario construction to outside-view base-rate consultation by asking 'how long have similar tasks taken?' instead of 'how long will this take?'
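A sketch of the outside-view correction, assuming a hypothetical log of past estimates versus actuals; the inside-view estimate is scaled by the historical median overrun ratio (reference-class forecasting in miniature):

```python
from statistics import median

# Hypothetical log of similar past tasks: estimated vs. actual hours.
past_tasks = [
    {"estimate": 4, "actual": 9},
    {"estimate": 6, "actual": 10},
    {"estimate": 3, "actual": 7},
    {"estimate": 8, "actual": 12},
]

def outside_view(inside_estimate, history):
    """Scale the inside-view estimate by the historical overrun ratio."""
    ratio = median(t["actual"] / t["estimate"] for t in history)
    return inside_estimate * ratio

print(f"inside view: 5h -> outside view: {outside_view(5, past_tasks):.1f}h")
```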
For each schema you operate on, document source provenance in a single field—specific person, book, cultural norm, direct experience, or unknown—then prioritize verification effort by source weakness.
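A sketch of provenance-driven prioritization, using a hypothetical weakness ranking over the five source types named above:

```python
# Hypothetical weakness ranking: higher = weaker evidence, verify sooner.
SOURCE_WEAKNESS = {
    "direct experience": 1,
    "book": 2,
    "specific person": 3,
    "cultural norm": 4,
    "unknown": 5,
}

schemas = [
    {"name": "sleep drives consolidation", "source": "book"},
    {"name": "never split the bill", "source": "cultural norm"},
    {"name": "cold outreach works", "source": "unknown"},
]

# Verification queue, weakest-sourced schemas first.
for s in sorted(schemas, key=lambda s: SOURCE_WEAKNESS[s["source"]], reverse=True):
    print(f"verify: {s['name']} (source: {s['source']})")
```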
Apply lateral reading by immediately opening new tabs to search for independent information about a source rather than evaluating the source by reading the source itself, because external assessment outperforms internal coherence checking.
When AI assistants suggest frameworks or schemas, respond by asking for original research sources, boundary conditions, and strongest counterarguments rather than accepting or rejecting the claim directly.
During real-time execution of high-stakes tasks, defer metacognitive recursion beyond two levels to avoid working-memory saturation—externalize the deeper levels (write them down) so they can be inspected after execution.
When your explanation of your own behavior differs from an external observer's explanation by more than surface framing, treat the divergence as high-confidence evidence of a metacognitive blind spot requiring investigation.
When building knowledge graphs, limit relationship type taxonomies to 5-7 types rather than attempting comprehensive ontological coverage, because classification overhead beyond this threshold produces diminishing informational returns while increasing maintenance cost.
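A closed taxonomy can be enforced in code, for example with an enum of six types (the type names below are illustrative, not prescribed):

```python
from enum import Enum

class Rel(Enum):
    """A deliberately closed set of edge types; illustrative names."""
    SUPPORTS = "supports"
    CONTRADICTS = "contradicts"
    EXAMPLE_OF = "example_of"
    CAUSES = "causes"
    PART_OF = "part_of"
    GENERALIZES = "generalizes"

def add_edge(graph, src, rel, dst):
    """Reject edge types outside the taxonomy instead of growing it."""
    if not isinstance(rel, Rel):
        raise ValueError(f"unknown relationship type: {rel!r}")
    graph.append((src, rel, dst))

graph = []
add_edge(graph, "spaced repetition", Rel.EXAMPLE_OF, "desirable difficulty")
```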
When creating bridge nodes between domains, link to structural patterns (diminishing returns, feedback delays, threshold effects) rather than surface metaphors (companies as bodies), because only structural correspondence enables valid inference transfer across contexts.
When AI systems traverse your knowledge graph, maintain typed relationship labels with explicit predicates rather than relying on semantic similarity alone, because typed edges enable logical reasoning while embeddings only surface associative proximity.
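A toy illustration of why typed edges matter: with explicit predicates, a two-step rule can derive a new fact that a similarity edge never could (the triples and the chaining rule are invented for illustration):

```python
# Typed edges as explicit (subject, predicate, object) triples.
triples = {
    ("caffeine", "inhibits", "adenosine receptors"),
    ("adenosine receptors", "promote", "sleep pressure"),
    ("caffeine", "similar_to", "theobromine"),  # associative only
}

def chain(triples):
    """If X inhibits Y and Y promotes Z, derive that X suppresses Z."""
    return {(x, "suppresses", z)
            for x, p1, y in triples
            for y2, p2, z in triples
            if p1 == "inhibits" and p2 == "promote" and y == y2}

print(chain(triples))  # {('caffeine', 'suppresses', 'sleep pressure')}
# The similar_to edge supports no such inference: proximity is not logic.
```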
When opening a hub note (one you reference frequently), immediately check its backlinks panel and spend two minutes reading the incoming references to surface connections you had forgotten.
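The backlinks panel is just the link index inverted; a short sketch over hypothetical links:

```python
# Backlinks: every note that links *to* a target note.
links = [("habits", "compounding"), ("investing", "compounding"),
         ("compounding", "learning")]

def backlinks(target, links):
    """The incoming references a backlinks panel displays."""
    return [src for src, dst in links if dst == target]

print(backlinks("compounding", links))  # ['habits', 'investing']
```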
When building connections between notes, test each link by asking whether you can articulate the relationship in a complete sentence—if you cannot, delete the link rather than inflating density metrics artificially.
Identify your top 5% of notes by connection count and schedule quarterly reviews where you verify each hub note is current, accurate, and well-linked, investing maintenance effort proportional to structural importance.
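A sketch of the hub-selection step, assuming links are stored as (source, target) pairs (all note names hypothetical):

```python
from collections import Counter

# Hypothetical link list as (source, target) pairs.
links = [
    ("compounding", "habits"), ("compounding", "investing"),
    ("compounding", "learning"), ("habits", "identity"),
    ("investing", "risk"), ("learning", "compounding"),
]

degree = Counter()
for src, dst in links:  # undirected degree: in-links plus out-links
    degree[src] += 1
    degree[dst] += 1

ranked = sorted(degree, key=degree.get, reverse=True)
top_5_percent = ranked[:max(1, len(ranked) // 20)]
print("quarterly review queue:", top_5_percent)
```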
Distinguish real hubs (concepts earning centrality through genuine cross-domain relationships) from artificial hubs (index pages linking to everything in a category) by testing whether removing the hub would sever meaningful conceptual pathways or merely convenience pathways.
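One crude proxy for this test is graph connectivity: if deleting the hub fragments the graph into more components, its pathways were load-bearing; if not, they were redundant conveniences. A stdlib-only sketch over a hypothetical graph:

```python
def components(nodes, edges):
    """Connected components via iterative depth-first search."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, comps = set(), []
    for n in nodes:
        if n in seen:
            continue
        stack, comp = [n], set()
        while stack:
            cur = stack.pop()
            if cur not in seen:
                seen.add(cur)
                comp.add(cur)
                stack.extend(adj[cur] - seen)
        comps.append(comp)
    return comps

def is_real_hub(hub, nodes, edges):
    """Real hub: removing it fragments the graph into more pieces."""
    before = len(components(nodes, edges))
    kept = [(a, b) for a, b in edges if hub not in (a, b)]
    after = len(components([n for n in nodes if n != hub], kept))
    return after > before

nodes = ["predator-prey", "thermostats", "feedback loops",
         "recipes index", "bread", "soup"]
edges = [("predator-prey", "feedback loops"), ("thermostats", "feedback loops"),
         ("recipes index", "bread"), ("recipes index", "soup"), ("bread", "soup")]
print(is_real_hub("feedback loops", nodes, edges))  # True: removal severs pathways
print(is_real_hub("recipes index", nodes, edges))   # False: bread-soup link survives
```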
Test each bridge node by verifying it generates novel predictions or actionable insights in both connected domains—if it only produces a sense of similarity without bidirectional inference, demote it to metaphor status or delete it.
Identify your three densest knowledge clusters by examining which groups of notes link heavily to each other but sparsely to the rest of the graph, then label each cluster based on observed structure rather than imposed categories.
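A deterministic sketch of finding dense clusters without imposing categories: keep only edges that sit inside a triangle (one simple density filter among many possible), then read off connected components (all note names hypothetical):

```python
from collections import defaultdict

# Hypothetical note graph as (note, note) link pairs.
links = [
    ("habit loops", "cue-routine-reward"), ("habit loops", "identity"),
    ("cue-routine-reward", "identity"),
    ("supply curves", "price signals"), ("price signals", "incentives"),
    ("supply curves", "incentives"),
    ("identity", "incentives"),  # lone edge between the two dense groups
]

adj = defaultdict(set)
for a, b in links:
    adj[a].add(b)
    adj[b].add(a)

# Keep only edges whose endpoints share a neighbor (i.e., edges in a
# triangle): dense clusters survive, lone cross-links fall away.
dense = [(a, b) for a, b in links if adj[a] & adj[b]]

def components(nodes, edges):
    """Connected components over the surviving dense edges."""
    nbrs = {n: set() for n in nodes}
    for a, b in edges:
        nbrs[a].add(b)
        nbrs[b].add(a)
    seen, out = set(), []
    for n in nodes:
        if n in seen:
            continue
        stack, comp = [n], set()
        while stack:
            cur = stack.pop()
            if cur not in seen:
                seen.add(cur)
                comp.add(cur)
                stack.extend(nbrs[cur] - seen)
        out.append(comp)
    return out

clusters = sorted(components(list(adj), dense), key=len, reverse=True)
for c in clusters[:3]:  # label each by inspection, not imposed category
    print(sorted(c))
```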
When a cluster appears only after you force-link unrelated notes to create it, delete those artificial connections because imposed clusters destroy the diagnostic value of emergent structure.
When your graph's cluster structure contradicts your self-narrative about expertise, trust the cluster structure because it reflects actual linking behavior while self-assessment is systematically distorted.
List your major clusters and for each pair ask what connects them; when the answer is 'nothing' or 'one weak edge,' write down the bridge concept that should connect them as your next learning target.
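A sketch of the pairwise audit, counting cross-cluster edges over hypothetical cluster assignments and flagging pairs with at most one connecting edge:

```python
from itertools import combinations

# Hypothetical cluster assignments and inter-note links.
cluster_of = {
    "supply and demand": "economics", "incentives": "economics",
    "natural selection": "biology", "homeostasis": "biology",
    "gradient descent": "machine learning", "overfitting": "machine learning",
}
links = [
    ("supply and demand", "incentives"),
    ("gradient descent", "overfitting"),
    ("incentives", "natural selection"),  # the lone cross-cluster edge
]

# Count edges between each pair of distinct clusters.
between = {}
for a, b in links:
    pair = tuple(sorted((cluster_of[a], cluster_of[b])))
    if pair[0] != pair[1]:
        between[pair] = between.get(pair, 0) + 1

# Pairs joined by zero or one edge name the next learning targets.
for pair in combinations(sorted(set(cluster_of.values())), 2):
    n = between.get(pair, 0)
    if n <= 1:
        print(f"learning target: a bridge concept between "
              f"{pair[0]} and {pair[1]} ({n} connecting edges)")
```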