The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
Expertise fundamentally changes the size and nature of perceptual chunks—experts automatically perceive larger meaningful patterns as single units, enabling them to work with more complex information within the same working memory constraints.
The human brain automatically generates and perceives patterns, relationships, and regularities in sensory input prior to, and often independently of, conscious verification. This process is subject to systematic biases such as confirmation bias, which preferentially encodes pattern-consistent information while filtering out contradictory evidence.
When a behavior, reaction, or outcome recurs three or more times across a 30-day period, classify it as a pattern candidate requiring structural analysis rather than treating it as coincidence.
Record each occurrence of a suspected pattern with date and context in a dedicated log before drawing conclusions, because memory-based frequency assessment systematically overestimates recurrence through the frequency illusion.
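The two rules above (three-plus recurrences inside a 30-day window, each logged with date and context before drawing conclusions) can be sketched in Python; the pattern name and log entries below are invented examples:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class PatternLog:
    """Dated occurrence log for one suspected pattern."""
    name: str
    occurrences: list = field(default_factory=list)  # (date, context) pairs

    def record(self, when: date, context: str) -> None:
        self.occurrences.append((when, context))

    def is_candidate(self, window_days: int = 30, threshold: int = 3) -> bool:
        """True if at least `threshold` occurrences fall inside any `window_days` span."""
        dates = sorted(d for d, _ in self.occurrences)
        for i, start in enumerate(dates):
            in_window = [d for d in dates[i:] if (d - start).days <= window_days]
            if len(in_window) >= threshold:
                return True
        return False

log = PatternLog("snapping after late meetings")
log.record(date(2024, 3, 1), "budget review ran long")
log.record(date(2024, 3, 9), "vendor call overran")
log.record(date(2024, 3, 20), "late standup")
```

Logging before judging keeps frequency assessment out of memory; `is_candidate` only flags a candidate for structural analysis, it draws no conclusion.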
When the same symptom triad precedes system failures across three independent incidents, document it as a named detection pattern and build an automated alert triggered by that specific combination.
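The combination-triggered alert can be sketched as a set-containment test; the symptom names here are hypothetical placeholders for whatever triad your incident reviews actually documented:

```python
# Hypothetical symptom names; the real triad comes from your incident documentation.
TRIAD = frozenset({"latency_spike", "queue_depth_growth", "retry_storm"})

def triad_alert(active_symptoms: set) -> bool:
    """Fire only on the full documented combination, not on any single symptom."""
    return TRIAD <= active_symptoms
```

A partial match such as `{"latency_spike", "queue_depth_growth"}` stays quiet; only the specific three-symptom combination fires, which is what distinguishes a named detection pattern from generic per-symptom alerting.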
Name behavioral patterns using 2-4 word descriptors that compress the full trigger-response sequence into a recognizable label, enabling real-time pattern recognition under cognitive load.
For each named pattern in your Pattern Dictionary, document three required fields: the pattern name, its observable trigger conditions, and the default behavioral response it produces.
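A minimal sketch of a Pattern Dictionary entry with the three required fields; the example pattern, trigger, and response are invented for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PatternEntry:
    """One Pattern Dictionary entry: the three required fields."""
    name: str              # 2-4 word label compressing trigger -> response
    trigger: str           # observable conditions that start the sequence
    default_response: str  # the behavior the pattern produces when unchecked

pattern_dictionary = {
    entry.name: entry
    for entry in [
        PatternEntry(
            name="deadline doom-scroll",
            trigger="vaguely scoped task plus an open browser tab",
            default_response="switch to feeds instead of starting",
        ),
    ]
}
```

Keying the dictionary by name keeps the short label as the lookup handle, which matches its role as the compression used for recognition under cognitive load.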
Do not expect pattern recognition alone to eliminate the pattern—track the ratio of pattern-following to pattern-breaking instances over weeks rather than demanding immediate control, because automaticity requires repeated override practice to weaken.
Validate cross-domain pattern candidates by verifying that the relational structure (not surface similarity) matches across domains—two patterns share structure when the causal relationships between elements are preserved even when the elements themselves differ completely.
Before optimizing around a perceived positive pattern, verify through deliberate removal tests whether the pattern persists when suspected causal factors are absent.
When a pattern appears to reverse across subgroups in your data, disaggregate by relevant context variables (sleep, stress, social setting) before drawing conclusions from the aggregate pattern.
When identifying meta-patterns, require each second-order claim to ground in at least three documented first-order pattern instances to distinguish genuine meta-patterns from intellectual speculation.
When multiple relationships produce the same tension pattern despite different people, map your own contribution to the dynamic before attributing the pattern to others' behavior.
For each avoided task that persists beyond 48 hours, log the emotion triggered, the substitute activity performed, and the rationalization constructed, as these three elements constitute the replicable structure of personal avoidance patterns.
For each genuine success in the past two years, document conditions present, behaviors that differed from defaults, people involved, internal state, and 48-hour setup—then surface elements appearing in three or more instances as your replicable success pattern.
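Surfacing elements that appear in three or more documented successes is a frequency count over sets; the success elements below are invented examples:

```python
from collections import Counter

# Each documented success reduced to its set of elements
# (conditions, behaviors, people, internal state, setup) -- invented examples.
successes = [
    {"morning start", "single clear goal", "collaborator A", "prior-day prep"},
    {"morning start", "single clear goal", "quiet room"},
    {"morning start", "single clear goal", "prior-day prep"},
]

counts = Counter(element for success in successes for element in success)
replicable_pattern = {element for element, n in counts.items() if n >= 3}
```

Using sets per success means an element is counted once per instance, so the threshold measures how many *independent* successes share the element, not how often it was mentioned.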
When a problem persists across multiple attempts, identify exceptions where the problem was absent or reduced rather than analyzing why the problem occurs, as exception conditions are more directly actionable than problem mechanisms.
When reviewing notes for patterns, read through all entries without editing or organizing first, then extract recurring themes on a separate page, to prevent premature categorization from filtering out emergent structures.
Before concluding a pattern is meaningful, verify it survives three independent filters: sample size check (occurrences vs. opportunities), base rate comparison (frequency vs. background rate), and alternative explanation generation (minimum two alternatives).
Maintain two separate lists—'Pattern Candidates' and 'Confirmed Patterns'—promoting candidates only after they survive three independent observations, one alternative-explanation check, and one successful prediction.
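The promotion gate from 'Pattern Candidates' to 'Confirmed Patterns' can be sketched as explicit evidence counters; the candidate names and numbers here are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """Evidence counters gating promotion from candidate to confirmed pattern."""
    name: str
    independent_observations: int = 0
    alternative_explanation_checks: int = 0
    successful_predictions: int = 0

    def promotable(self) -> bool:
        return (self.independent_observations >= 3
                and self.alternative_explanation_checks >= 1
                and self.successful_predictions >= 1)

candidates = [
    Candidate("post-lunch slump", 3, 1, 1),
    Candidate("Sunday dread", 2, 0, 0),
]
confirmed = [c.name for c in candidates if c.promotable()]
```

Making each requirement a separate counter forces you to notice which kind of evidence is missing, rather than promoting on an overall impression of confidence.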
After accumulating multiple validation records, analyze them as a set to identify recurring patterns in how your schemas fail—same blind spot, same overconfidence direction, same boundary condition—that no single test reveals.
Accumulate anomalies (observations that don't fit the schema) in a running list and trigger full schema review when the count reaches a pre-defined threshold, rather than treating each anomaly as requiring immediate action.
Create a dedicated anomaly log separate from regular notes where each entry records what you expected, what happened, and which schema generated the expectation.
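The threshold-triggered anomaly log from the two atoms above can be sketched as follows; the default threshold value is an assumption to tune, not a prescription:

```python
from dataclasses import dataclass, field

@dataclass
class AnomalyLog:
    """Expectation-violation log kept apart from regular notes."""
    review_threshold: int = 5  # pre-defined count; the value here is an assumption
    entries: list = field(default_factory=list)

    def record(self, expected: str, observed: str, schema: str) -> bool:
        """Log one anomaly; return True when a full schema review is now due."""
        self.entries.append(
            {"expected": expected, "observed": observed, "schema": schema}
        )
        return len(self.entries) >= self.review_threshold
```

Returning a boolean from `record` keeps the decision rule in one place: individual anomalies are logged and left alone, and only crossing the pre-set count escalates to schema review.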
When error budget exhaustion occurs in a tracked system, conduct root cause analysis of the pattern rather than investigating individual deviations, because budget exhaustion signals structural problems while individual errors within budget represent normal variance.
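The within-budget vs. exhausted distinction can be sketched with an SLO-style budget calculation; the SLO framing and numbers are an assumption for illustration, since the atom only specifies a tracked budget:

```python
def error_budget_remaining(slo: float, total_events: int, failures: int) -> float:
    """Fraction of error budget left; <= 0 means exhausted, so run pattern-level RCA."""
    allowed_failures = (1 - slo) * total_events
    return (allowed_failures - failures) / allowed_failures
```

With a 99.9% SLO over 100,000 events the budget is 100 failures: 120 failures leaves a negative budget (exhausted, investigate the structural pattern), while 50 leaves half the budget (normal variance, leave individual errors uninvestigated).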
After accumulating 10+ post-action reviews, analyze them in aggregate to identify structural causes appearing across multiple unrelated tasks—these recurring patterns indicate systemic tendencies requiring architectural fixes, not isolated corrections.