The irreducible epistemic atoms underlying the curriculum: 4,828 atoms across 8 types and 2 molecules.
Design replacement behaviors using differential reinforcement strategies (DRA, DRI, DRO: reinforcement of an alternative, incompatible, or other behavior, respectively) that serve the same function as the unwanted behavior through a more adaptive channel, scoring candidates on functional match, temporal match, and sustainability before implementation.
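A minimal sketch of the candidate-scoring step, assuming 0-to-1 scales and equal weights (both are illustrative choices, not part of the atom):

```python
from dataclasses import dataclass

@dataclass
class ReplacementCandidate:
    """One candidate replacement behavior and its three scores (0.0-1.0)."""
    name: str
    functional_match: float   # does it deliver the same payoff as the unwanted behavior?
    temporal_match: float     # can it fire in the same moment/context as the trigger?
    sustainability: float     # can it be maintained indefinitely at realistic effort?

def score(c: ReplacementCandidate) -> float:
    # Equal weighting is an assumption; adjust if one dimension dominates.
    return (c.functional_match + c.temporal_match + c.sustainability) / 3

candidates = [
    ReplacementCandidate("walk around the block", 0.7, 0.9, 0.8),
    ReplacementCandidate("breathing exercise", 0.6, 1.0, 0.9),
]
best = max(candidates, key=score)
print(f"Implement first: {best.name} (score {score(best):.2f})")
```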
Establish explicit, falsifiable hypotheses with measurable outcomes and proposed mechanisms before attempting behavior change to enable genuine learning through prediction error.
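One way to make the registration concrete is a frozen record written before the experiment starts; the field names here are illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Hypothesis:
    """Registered before the experiment starts so the prediction can be
    wrong; prediction error is where the learning comes from."""
    statement: str            # explicit and falsifiable
    outcome_metric: str       # what gets measured, and how
    predicted_change: str     # direction and size, stated in advance
    proposed_mechanism: str   # why the change should occur

h = Hypothesis(
    statement="No screens after 21:00 shortens sleep onset",
    outcome_metric="minutes to fall asleep (nightly log)",
    predicted_change="decrease of >= 10 minutes vs baseline within 2 weeks",
    proposed_mechanism="reduced evening blue-light exposure preserves melatonin rise",
)
```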
Measure baseline behavior before implementing interventions to distinguish genuine effects from normal variation and regression to the mean.
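A minimal sketch of the baseline comparison, assuming a conventional two-standard-deviation band; observations inside the band are indistinguishable from normal variation:

```python
import statistics

def outside_baseline_band(baseline: list[float], observed: float, k: float = 2.0) -> bool:
    """True if `observed` falls outside baseline mean +/- k standard deviations.

    k=2 is a conventional but arbitrary threshold (an assumption here);
    values inside the band cannot be distinguished from normal variation
    or regression to the mean.
    """
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(observed - mu) > k * sigma

baseline_sleep_hours = [6.8, 7.1, 6.5, 7.0, 6.9, 6.6, 7.2]  # one week pre-intervention
print(outside_baseline_band(baseline_sleep_hours, observed=7.3))  # False: within normal variation
```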
Conduct structured evaluations at time-box endpoints using pre-specified criteria and tracked data rather than subjective impressions to make evidence-based continuation decisions.
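A sketch of the endpoint decision, assuming criteria are expressed as metric floors fixed at the start of the time-box:

```python
def timebox_decision(criteria: dict[str, float], observed: dict[str, float]) -> str:
    """Compare tracked data against criteria fixed *before* the experiment started.

    `criteria` maps metric name -> minimum acceptable value (pre-specified);
    `observed` maps metric name -> value computed from the tracking log.
    """
    misses = [m for m, floor in criteria.items() if observed.get(m, float("-inf")) < floor]
    if not misses:
        return "continue"
    return f"stop or revise (missed: {', '.join(misses)})"

# Criteria were written down at the start of the 30-day time-box, not at the end.
print(timebox_decision({"adherence_pct": 80, "avg_mood": 3.5},
                       {"adherence_pct": 86, "avg_mood": 3.2}))
# -> "stop or revise (missed: avg_mood)"
```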
Isolate variables in behavioral experiments by changing only one factor at a time while holding all other conditions deliberately constant.
Identify and test the behavioral kernel—the irreducible core action without which the behavior does not exist—stripped of duration, intensity, and context complexity.
Use pre-mortem techniques by imagining failure has occurred and working backward to identify what you didn't see, rather than projecting forward from current knowledge.
Reduce restraining forces rather than increasing driving forces when initiating behavioral change, as removing obstacles creates movement with less resistance than pushing harder against barriers.
Distinguish between hypothesis failure (wrong theory), execution failure (inconsistent implementation), and measurement failure (inadequate detection) when experiments produce negative results, as each type requires different corrective action.
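The triage order below is an assumption (rule out execution and measurement problems before blaming the theory), but it makes the three-way distinction mechanical:

```python
def triage_negative_result(executed_as_planned: bool, measurement_reliable: bool) -> str:
    """Map a negative result to the failure type and its corrective action.

    Cheaper explanations are checked first; only if the experiment was run
    consistently and measured adequately does the theory itself take the blame.
    """
    if not executed_as_planned:
        return "execution failure: improve consistency, re-run the same hypothesis"
    if not measurement_reliable:
        return "measurement failure: fix instrumentation, re-run the same hypothesis"
    return "hypothesis failure: revise the theory before designing a new test"

print(triage_negative_result(executed_as_planned=True, measurement_reliable=False))
# -> "measurement failure: fix instrumentation, re-run the same hypothesis"
```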
Design experiments to produce intelligent failures—small, fast, deliberate tests at the frontier of knowledge where negative results generate information unavailable through other means.
Use population-level research to set informed priors while recognizing that individual responses may diverge significantly from average effects without invalidating either the research or your personal data.
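One standard way to formalize "research sets the prior, personal data updates it" is a conjugate normal-normal update; the numbers below are invented for illustration:

```python
def posterior_mean_var(prior_mu: float, prior_var: float,
                       data: list[float], noise_var: float) -> tuple[float, float]:
    """Conjugate normal-normal update: population research supplies the prior,
    personal measurements supply the likelihood."""
    n = len(data)
    xbar = sum(data) / n
    post_precision = 1 / prior_var + n / noise_var
    post_mu = (prior_mu / prior_var + n * xbar / noise_var) / post_precision
    return post_mu, 1 / post_precision

# Research: average effect +10 min of sleep (sd 5). My 6 nights average +25.
mu, var = posterior_mean_var(10.0, 25.0, [22, 30, 18, 27, 24, 29], noise_var=100.0)
print(round(mu, 1))  # 19.0: pulled from the prior 10 toward the personal mean 25
```

Neither number is "wrong": the divergence between 10 and 19 is exactly the individual-versus-average gap the atom describes.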
Use reversal designs (ABA or ABAB patterns) to strengthen causal claims in single-subject experiments by demonstrating that outcomes track the intervention across multiple on-off cycles.
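A crude sketch of the reversal-design check for an ABAB sequence, assuming the outcome should be higher during intervention phases:

```python
def tracks_intervention(phases: list[tuple[str, list[float]]]) -> bool:
    """Reversal-design sanity check over an ordered ABAB sequence.

    Returns True only if every B-phase mean exceeds the means of its
    neighboring A phases, i.e. the outcome turns on and off with the
    intervention across all cycles. (Flip the comparisons for outcomes
    you want to decrease.)
    """
    means = [sum(vals) / len(vals) for _, vals in phases]
    for i, (label, _) in enumerate(phases):
        if label == "B":
            if i > 0 and not means[i] > means[i - 1]:
                return False
            if i + 1 < len(phases) and not means[i] > means[i + 1]:
                return False
    return True

abab = [
    ("A", [4, 5, 4]),   # baseline
    ("B", [7, 8, 7]),   # intervention on
    ("A", [5, 4, 5]),   # withdrawn
    ("B", [8, 7, 8]),   # reinstated
]
print(tracks_intervention(abab))  # True: the outcome follows the on-off cycles
```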
Replicate promising pilot experiments in different temporal contexts before adopting them permanently, as single successes may reflect novelty effects or favorable circumstances rather than robust causal relationships.
Maintain a separate backlog for experimental ideas distinct from active experiments, using five structured fields (hypothesis, expected impact, estimated effort, domain, dependencies) to separate idea generation from execution decisions.
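A sketch of a backlog entry carrying exactly those five fields; the 1-to-5 scales and the impact-per-effort review order are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class BacklogIdea:
    """One entry in the experiment backlog, separate from active experiments."""
    hypothesis: str
    expected_impact: int      # 1 (marginal) .. 5 (transformative)
    estimated_effort: int     # 1 (trivial) .. 5 (major commitment)
    domain: str
    dependencies: list[str] = field(default_factory=list)

backlog = [
    BacklogIdea("Caffeine cutoff at 14:00 improves sleep onset", 4, 1, "sleep"),
    BacklogIdea("Standing desk reduces afternoon slump", 2, 3, "energy", ["desk delivery"]),
]
# Promotion to an active experiment is a separate decision; a simple default
# is to review ideas in descending impact-per-effort order.
for idea in sorted(backlog, key=lambda i: i.expected_impact / i.estimated_effort, reverse=True):
    print(idea.hypothesis)
```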
Run experiments sequentially when they share outcome variables or operate through overlapping mechanisms; run them in parallel only when they are independent across all three dimensions: outcome, mechanism, and timing.
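The three-way independence test is mechanical once each experiment declares what it touches; the field names here are illustrative:

```python
def can_run_in_parallel(exp_a: dict, exp_b: dict) -> bool:
    """Parallel only if independent on all three dimensions named in the atom.

    Each experiment dict carries sets: the outcome variables it affects,
    the mechanisms it operates through, and the time slots it occupies.
    """
    return (
        not (exp_a["outcomes"] & exp_b["outcomes"])          # no shared outcome variable
        and not (exp_a["mechanisms"] & exp_b["mechanisms"])  # no overlapping mechanism
        and not (exp_a["time_slots"] & exp_b["time_slots"])  # no temporal collision
    )

morning_light = {"outcomes": {"sleep_onset"}, "mechanisms": {"circadian"}, "time_slots": {"morning"}}
magnesium = {"outcomes": {"sleep_onset"}, "mechanisms": {"gaba"}, "time_slots": {"evening"}}
print(can_run_in_parallel(morning_light, magnesium))  # False: they share an outcome variable
```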
Accept bundled interventions when all components are low-cost to maintain and the value of knowing precise attribution is less than the cost of additional sequential testing to isolate contributions.
Define routine chains at explicit trigger-action granularity where each link specifies both the completion signal from the previous link and the specific next action, as unspecified transitions are the primary failure point in chain execution.
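A sketch of a chain at that granularity, where every transition names the completion signal that triggers the next action:

```python
from dataclasses import dataclass

@dataclass
class ChainLink:
    action: str
    completion_signal: str  # the observable event that marks this link done
                            # and triggers the next one

# A morning routine at trigger-action granularity: every transition is
# explicit, so no link relies on an unspecified "and then somehow...".
morning_chain = [
    ChainLink("fill kettle and switch it on", "kettle clicks off"),
    ChainLink("pour coffee and sit at desk", "first sip taken"),
    ChainLink("open journal to today's page", "pen touches paper"),
]

def audit(chain: list[ChainLink]) -> None:
    """Print each trigger -> action transition so gaps are visible on review."""
    for prev, nxt in zip(chain, chain[1:]):
        print(f"when [{prev.completion_signal}] -> do [{nxt.action}]")

audit(morning_chain)
```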
Train interoceptive accuracy through body-scanning practices that correlate physical sensations with decision outcomes, calibrating which somatic signals are reliable in which contexts.
Design routine pilots to deliberately include context variations (weekends, travel, stress) within the fourteen-day window, as context-dependent failure is the primary threat to routine sustainability.
Schedule explicit transition checkpoints at seasonal boundaries (equinoxes and solstices) to catch environmental drift before accumulated failures produce self-blame.
When scaling a successful small experiment, change one dimension at a time (duration, frequency, scope, context, or integration) to preserve interpretability.
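A sketch of the one-dimension rule, treating the routine as a config whose successive versions differ in exactly one key:

```python
BASE = {"duration_min": 10, "frequency_per_week": 3, "scope": "work tasks",
        "context": "home office", "integration": "standalone"}

def scale_one(config: dict, dimension: str, new_value) -> dict:
    """Return a copy of the config with exactly one dimension changed,
    so any change in outcomes is attributable to that dimension."""
    assert dimension in config, f"unknown dimension: {dimension}"
    return {**config, dimension: new_value}

step_1 = scale_one(BASE, "duration_min", 20)         # scale duration first...
step_2 = scale_one(step_1, "frequency_per_week", 5)  # ...then frequency, in a later cycle
```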
Identify and restructure balancing loops that will resist scaling before attempting to scale, rather than fighting them with willpower after they activate.
Treat small-scale success as proof of concept rather than proof of scalability — larger scales encounter forces that small scales never test.
Conduct regular cross-experiment reviews to extract meta-patterns that individual experiment evaluations cannot reveal.