Definition v1
Bus factor: the number of people who would need to be suddenly unavailable for a project or system to stall, used as a measure of system resilience and individual dependency risk
Why This Is a Definition
This definition establishes 'bus factor' by naming the term, identifying its genus (a measure of dependency), and specifying its differentia (the number of people whose unavailability would halt progress). It distinguishes the concept from other measures of team capability and makes explicit how it functions as an indicator of system vulnerability.
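As a rough illustration (not part of the original note), the measure can be computed once each critical task or system is mapped to the set of people able to cover it: the project stalls as soon as every person behind some task is unavailable, so the bus factor is the size of the smallest such coverage set. The sketch below uses a hypothetical `bus_factor` helper and made-up project data.

```python
# Minimal sketch (assumed example, not from the source note): estimate a
# project's bus factor from a mapping of tasks to the people able to do them.
# The project stalls when every person capable of some task is unavailable,
# so the bus factor is the minimum capable-set size across tasks.

def bus_factor(task_owners: dict[str, set[str]]) -> int:
    """Return the minimum number of people whose loss would stall the project."""
    if not task_owners:
        return 0
    return min(len(people) for people in task_owners.values())

if __name__ == "__main__":
    project = {
        "deploy pipeline": {"ana"},             # single point of failure
        "billing service": {"ana", "bo"},
        "data backups": {"bo", "cai", "dee"},
    }
    print(bus_factor(project))  # -> 1: losing Ana alone stalls deployment
```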
Connections
Defines (60)
Axiom: Working Memory Capacity Limit
Axiom: Exponential Information Decay
Axiom: Cognitive Defusion: Thoughts Are Objects
Axiom: Extended Cognition Thesis
Axiom: Directed Attention as Depletable Resource
Axiom: Hindsight Bias and Calibration Necessity
Axiom: Habits as Context-Response Associations
Axiom: Two-Level Metacognitive Architecture
Axiom: Illusion of Explanatory Depth
Axiom: Expertise Transforms Perceptual Chunking
Axiom: Dual Coding Theory: Verbal and Visual Channels
Axiom: Conversational Memory Asymmetry From Production Planning
Axiom: Ultradian and Circadian Cognitive Rhythms
Axiom: Systematic Overconfidence Taxonomy
Axiom: Emotion as Systematic Cognitive Modulator
Axiom: Glucose-Cognition Dependency Threshold
Axiom: Meaning as Receiver Construction
Axiom: Bias Blind Spot Asymmetry
Axiom: Cognition Operates Through Dual Processing Systems
Axiom: Cognitive and Affective Empathy Are Distinct
Axiom: Looping Effects of Human Classification
Axiom: Automatic Pattern Perception
Axiom: Hierarchical Chunking Expands Capacity
Axiom: Construal Level Effects on Perception
Axiom: Piagetian Equilibration Through Schema Dynamics
Axiom: People interpret failure as either evidence about their
Axiom: Humans have a fundamental drive to evaluate their own
Axiom: When estimating future task duration, people naturally adopt
Axiom: Human beings make decisions under conditions of incomplete
Axiom: Humans exhibit automation complacency — reducing monitoring
Principle: Process inbox items in two distinct passes—first clarifying
Principle: When delegating to AI systems, maintain human capability to
Principle: Identify tasks where you are a single point of failure (bus
Principle: Delegate tasks at 70% of your quality level when adequate
Principle: Consolidate all agent status information onto a single
Principle: Benchmark efficiency, accuracy, and quality dimensions
Principle: When designing cognitive agents, examine the full pattern
Principle: Make context switching costs visible through deliberate
Principle: Block your measured peak attention hours on your calendar as
Principle: Conduct periodic authority audits by listing every source
Principle: Use the 'five whys' technique on any significant energy
Principle: Build energy systems through sequential single-component
Principle: Treat energy depletion patterns as leading indicators of
Principle: Conduct functional analysis before attempting extinction by
Principle: Use external systems (AI, writing, trusted others) to assess
Principle: Respond to emotional bids (small implicit requests for
Principle: Attribute success to process rather than talent to preserve
Principle: Embed learning capacity into the system itself rather than
Principle: Periodically surface process schemas by extracting embedded
Principle: Assign decision authority to the lowest level with
Principle: Leverage tacit team knowledge about individual strengths,
Principle: Aggregate predictions by confidence level and compare stated
Principle: Cognitive offloading must become an automatic daily habit
Principle: Document not only what tools you use but the complete
Principle: Test AI interactions for cognitive extension versus
Principle: Test each candidate classification dimension by asking
Principle: Allocate attention only to tasks where your unique judgment,
Principle: Score tasks on three dimensions — irreversibility (cost of
Principle: Specify every delegation with five components: concrete
Principle: Verify habit automaticity by checking whether the behavior