The irreducible epistemic atoms underlying the curriculum: 2,888 atoms across 3 types and 2 molecules.
Write blockers in the form 'I cannot [specific action] because [specific obstacle]' immediately upon noticing friction to convert ill-structured problems into solvable ones.
Decompose compound blockers into separate obstacles with independent owners and solutions before attempting resolution, because monolithic blockers resist action through perceived complexity.
Write learning in the structure: claim (one sentence, your words), evidence (why believe it), connection (how it relates to what you already know), question (what's unresolved) to force generation rather than transcription.
When using AI for learning, write your own explanation first, then use AI interrogation to find gaps, then revise—never let AI write the initial explanation because reading AI output does not produce the generation effect.
Capture feedback within 60 minutes of receiving it using structured fields (date, source, verbatim content, emotional reaction, specific behavior) before memory reconstruction distorts the signal.
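The capture structure above can be sketched as a minimal record type. This is an illustrative sketch, not a prescribed implementation; the field and class names are assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class FeedbackCapture:
    """One feedback event, to be captured within 60 minutes of receipt."""
    received_at: datetime      # date/time the feedback was given
    source: str                # who gave it
    verbatim: str              # exact words, not a paraphrase
    emotional_reaction: str    # your immediate felt response
    specific_behavior: str     # the concrete behavior it refers to

    def captured_in_time(self, captured_at: datetime) -> bool:
        """True if the capture falls inside the 60-minute window."""
        return captured_at - self.received_at <= timedelta(minutes=60)
```

Forcing the verbatim content and the emotional reaction into separate fields is what guards against the memory reconstruction the rule warns about.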
Categorize each failure as preventable (process deviation), complex (novel factor interaction), or intelligent (frontier experiment) before analysis, because different failure types require different questions.
Audit thinking environments weekly by comparing actual conditions against documented specifications to detect entropy, because environmental decay through accumulated objects, browser tabs, and permission drift is constant and unnoticed without structured review.
Document system operations in five components—capture rules, processing workflow, retrieval method, review protocol, and evolution history—because each component addresses a distinct failure mode in knowledge system sustainability.
During weekly reviews, cross-reference externalized domains to detect contradictions—compare stated priorities against time allocation, goals against commitments, assumptions against failure analyses—because isolated review of each domain misses the conflicts that degrade decision quality.
Test AI integration by verifying whether interactions increase your independent understanding—if you cannot reconstruct the reasoning without the AI, the tool is replacing cognition rather than extending it.
Feed complete externalized system context to AI assistants rather than isolated queries, because AI reasoning quality scales with the completeness and structure of the personal knowledge base it can traverse.
Before attempting to design better schemas, inventory your current operating schemas by writing what you actually do (not what you should do) across professional, relational, and self-concept domains.
Perform schema inspection through a five-step audit: (1) list actual operating rules in a domain, (2) source each rule's origin, (3) identify one success and one failure case per rule, (4) rate confidence in each rule, (5) compare confidence to evidence quality.
When using AI to assist schema inspection, first externalize your thinking in writing, then request assumption extraction, pattern detection across entries, or adversarial questioning—because AI can only inspect articulated schemas, not internal ones.
When you update a belief, write an explicit update statement in the format 'Based on [specific evidence], I am updating my model from [old version] to [new version]' to reframe revision as calibration rather than defeat.
When formal and intuitive schemas disagree on a decision, investigate the disagreement for thirty minutes rather than defaulting to either—write what your gut is reacting to and test whether it reveals a pattern your formal criteria missed or a bias you haven't examined.
For each schema driving consequential decisions, document: (1) the schema as a sentence, (2) when you adopted it, (3) supporting evidence, and (4) what would falsify it — if you cannot articulate falsification conditions, treat the schema as dogma requiring immediate audit.
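The four-field schema record, with the dogma check made explicit, might look like the following sketch (names and structure are assumptions for illustration):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SchemaRecord:
    statement: str   # (1) the schema as a sentence
    adopted: str     # (2) when you adopted it
    evidence: List[str]  # (3) supporting evidence
    falsifiers: List[str] = field(default_factory=list)  # (4) what would falsify it

    def is_dogma(self) -> bool:
        """No articulated falsification conditions -> treat as dogma, audit now."""
        return len(self.falsifiers) == 0
```

The point of making `falsifiers` a first-class field rather than a freeform note is that its absence becomes machine-checkable across the whole schema inventory.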
Store each schema with explicit scope documentation specifying the domain where it was built and the structural conditions it assumes, treating scope as mandatory metadata rather than optional annotation.
Document the purpose each category serves by completing the sentence 'this category exists to [do what] for [whom]' to distinguish functional infrastructure from inherited furniture.
When someone proposes a different categorization and your first reaction is irritation that they are 'wrong,' treat this as a signal that you have mistaken a constructed category for an objective feature of reality.
When a binary classification hides multiple distinct failure modes or reasons within a single bucket, decompose it into separate dimensions that can be evaluated independently.
When forced to make a binary decision after spectrum-based deliberation, document the richer multi-dimensional signal alongside the binary outcome so future analysis can recover what the compression discarded.
For every pair of categories in a classification system, verify that no item can legitimately belong to both (mutual exclusivity test), and verify that no domain item falls outside all categories (collective exhaustiveness test).
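Both tests can be expressed directly over sets. A minimal sketch, assuming categories are modeled as named sets of items; the example data reuses the three failure types from the earlier atom purely for illustration:

```python
from itertools import combinations

def mutually_exclusive(categories: dict[str, set]) -> bool:
    """No item may legitimately belong to two categories at once."""
    return all(a.isdisjoint(b) for a, b in combinations(categories.values(), 2))

def collectively_exhaustive(categories: dict[str, set], domain: set) -> bool:
    """No domain item may fall outside all categories."""
    covered = set().union(*categories.values()) if categories else set()
    return domain <= covered

cats = {"preventable": {"missed step"}, "complex": {"novel interaction"}}
domain = {"missed step", "novel interaction", "frontier experiment"}
# mutually_exclusive(cats) -> True
# collectively_exhaustive(cats, domain) -> False:
# "frontier experiment" is uncovered, so the system needs a third category.
```

Running both checks together is what distinguishes a clean partition from a classification that merely looks tidy.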
Design multi-class classification systems with mutually exclusive categories when items can only be one type, and multi-label systems when items can legitimately belong to multiple categories simultaneously.