Your agent is reliable. Now ask: is it doing the right things?
In L-0571, you optimized for reliability — making your agent fire more consistently in response to its triggers. But a perfectly reliable agent that does too many things is still a failing agent. It fires every time, yes, but each firing drains more resources than necessary, takes longer than necessary, and produces diluted results because its attention is split across tasks that do not all belong together. Reliability tells you the agent works. Scope tells you the agent works on the right things.
Scope optimization is the practice of expanding or narrowing the situations and actions an agent handles until the agent's boundary matches its purpose. Too broad, and the agent becomes bloated — slow, fragile, and expensive. Too narrow, and the agent becomes useless — technically functional but unable to accomplish its mission. The optimum is precise alignment between what the agent does and what the agent is for.
This is not a new idea. It is one of the most validated principles in systems design, software engineering, military strategy, economics, and cognitive science. Every discipline that builds functional agents — whether those agents are software modules, military units, business strategies, or personal habits — has independently discovered that scope discipline is the single most reliable predictor of agent effectiveness.
The Unix philosophy: do one thing and do it well
The most famous articulation of scope optimization in computing history came from Doug McIlroy, head of the Computing Techniques Research Department at Bell Labs and the inventor of Unix pipes. In 1978, McIlroy documented the design philosophy that had emerged organically among the developers of the Unix operating system: "Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface."
The history of how this principle crystallized is instructive. McIlroy had proposed the pipe mechanism — a way to feed the output of one program directly into the input of another — over a period from 1970 to 1972. Ken Thompson, the co-creator of Unix, implemented it overnight. The next morning, as McIlroy later recalled, "we had this orgy of one-liners. Everybody had a one-liner... Everybody started putting forth the UNIX philosophy."
What made the pipe mechanism revolutionary was that it turned scope limitation into a strength rather than a weakness. Before pipes, a program that needed to sort data, filter it, count occurrences, and format the output had to do all four things itself. After pipes, four small programs — each doing one thing well — could be chained together: sort | grep | wc | fmt. Each program was radically narrow in scope. The composition of narrow programs produced capabilities that no single broad program could match.
The lesson is not that narrow scope is inherently good. It is that narrow scope combined with composability is more powerful than broad scope alone. A program that sorts, filters, counts, and formats will always be less flexible than four programs that each do one of those things, because the monolithic program can only perform those four operations in the sequence its designer imagined. The composed pipeline can be rearranged, extended, and recombined in ways the original designers never anticipated.
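The pipe principle can be sketched in a few lines of Python: small single-purpose functions, composed left to right like a shell pipeline. The function names and the log data here are illustrative, not part of any real tool.

```python
from functools import reduce

def pipe(*stages):
    """Compose single-purpose functions left to right, like a shell pipeline."""
    return lambda data: reduce(lambda acc, stage: stage(acc), stages, data)

# Three narrow "programs", each doing one thing to a stream of lines,
# loosely analogous to sort | grep | wc.
sort_lines = sorted
match_error = lambda lines: [line for line in lines if "error" in line]
count = len

pipeline = pipe(sort_lines, match_error, count)

logs = ["error: disk full", "ok: boot", "error: timeout", "ok: login"]
print(pipeline(logs))  # → 2
```

Because each stage is narrow, the same stages can be recombined into pipelines the original author never anticipated, which is exactly the flexibility the monolithic program lacks.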
Eric Raymond, in The Art of Unix Programming (2003), codified this into what he called the Rule of Modularity: "Write simple parts connected by clean interfaces." The emphasis is on the word "simple." Not simplistic — simple. A module with a clear, narrow scope is easier to understand, easier to test, easier to debug, and easier to compose with other modules. A module that tries to handle everything becomes impenetrable, untestable, and irreplaceable.
The single responsibility principle: scope optimization in software architecture
Robert C. Martin — known in the software industry as Uncle Bob — formalized scope optimization in his 2000 paper "Design Principles and Design Patterns," as the first of the five design principles later grouped under the SOLID acronym. The Single Responsibility Principle states: a class should have one, and only one, reason to change.
This definition is subtler than it appears. Martin is not saying a class should do one thing. He is saying a class should serve one stakeholder, one axis of change. A class that generates a report and formats it for printing has two reasons to change: the report content might change (business logic) and the print format might change (presentation logic). Even though both feel like "reporting," they are two distinct scopes that change for different reasons, at different times, initiated by different people.
When those two scopes are entangled in a single class, a change to the print format risks breaking the report logic, and a change to the report logic risks breaking the formatting. The class is harder to test because you cannot test the report generation without also exercising the formatting code. The class is harder to reuse because any system that needs the report logic also inherits the formatting code, whether it wants it or not.
Separating the scopes — one class for report generation, another for format rendering — produces two components that are each easier to understand, test, modify, and reuse. The total amount of code may increase slightly. The total amount of complexity decreases dramatically.
The principle extends far beyond software. Any agent — behavioral, organizational, or cognitive — that serves two masters will eventually be pulled apart by their conflicting demands. A morning routine that tries to serve both "prepare for the day" and "maintain the household" will be torn between productivity preparation and domestic maintenance every time those priorities conflict. Separate the scopes into two agents, and each one can be optimized independently.
Adam Smith's pin factory: scope optimization as economic law
The economic argument for scope optimization is older than computing by two centuries. In 1776, Adam Smith opened The Wealth of Nations with a detailed analysis of a pin factory, arguing that "the greatest improvement in the productive powers of labour, and the greater part of the skill, dexterity, and judgement with which it is anywhere directed, or applied, seem to have been the effects of the division of labour."
Smith observed that one worker performing all the operations required to make a pin — drawing out the wire, straightening it, cutting it, pointing it, grinding the head — could produce perhaps twenty pins in a day. But a factory of ten workers, each specializing in one or two operations, could produce 48,000 pins per day. That is a 240-fold increase in per-worker productivity, achieved entirely through scope narrowing.
Smith identified three mechanisms by which narrow scope increases effectiveness. First, increased dexterity: a worker who performs one operation repeatedly develops a speed and precision that a generalist cannot match. Second, time savings: eliminating the transition cost of switching between different tasks, tools, and mental models. Third, innovation opportunity: a worker focused on a single operation is more likely to notice improvements specific to that operation, because their attention is concentrated rather than diffused.
The third mechanism is the most important for personal agents. When your morning routine agent has twelve steps, you are unlikely to notice that step seven could be done in half the time if you changed the sequence, because your attention is spread across twelve competing demands. When the same agent has five steps, you notice inefficiencies within each step because your cognitive resources are not being consumed by the sheer complexity of coordinating a dozen tasks.
The second-system effect: what happens when scope expands unchecked
Fred Brooks, in The Mythical Man-Month (1975), identified the pathology that scope optimization is designed to prevent. He called it the second-system effect: the tendency for a successful first system — small, focused, elegant — to be followed by a second system that collapses under the weight of its own ambition.
The mechanism is psychological. The designer of the first system was constrained — by time, by resources, by ignorance. Those constraints forced narrow scope. The system worked well precisely because it did not try to do too much. When the designer begins the second system, the constraints are relaxed. They are more confident, more experienced, and more aware of features they could not include in the first version. They add all the deferred ideas. They generalize every mechanism. They accommodate every edge case. The result is a bloated, fragile system that is worse than the original in every dimension that matters.
Brooks was describing IBM's transition from relatively simple operating systems for the 700/7000 series to the notoriously complex OS/360. But the pattern is universal. It applies to software, to organizations, to personal systems, and to cognitive agents. Your first version of a habit is usually focused because you designed it when you did not know any better. Over time, you learn more, think of more things the habit could do, and gradually expand its scope until the habit that once took ten minutes now takes forty and fails half the time.
Brooks' prescription was explicit discipline: resist "functional ornamentation," make resource costs visible for every addition, and ensure that architectural leadership has the authority to say no. The same prescription applies to personal agents. Every time you consider adding a step to a routine, a condition to a rule, or a task to a workflow, ask: what does this addition cost, and does it serve the agent's core mission? If the answer to the second question is no, the answer to the first question does not matter.
Scope creep in project management: the empirical evidence
The project management literature provides the most robust quantitative evidence for the cost of unchecked scope expansion. Scope creep — the gradual, uncontrolled expansion of project scope beyond its original boundaries — is consistently identified as one of the primary causes of project failure.
The data is stark. Research published in 2024 and 2025 reports that project costs typically rise by 15 percent due to scope creep, with IT and construction projects experiencing increases of 20 percent or more. A 2025 comprehensive analysis found that scope creep can cost up to four times the initially expected development cost when left entirely uncontrolled. Sixty-two percent of projects experience budget overruns primarily attributable to uncontrolled scope expansion.
Beyond cost, scope creep produces timeline delays in 25 percent of projects — an average delay of twelve weeks — and quality degradation in 14 percent of projects. Projects without formal change management processes are 35 percent more likely to exceed costs or miss deadlines. And scope creep remains one of the top reasons why 52 percent of all projects fail to meet their objectives.
The root causes are consistently the same across studies: unclear initial objectives, evolving stakeholder needs, inadequate requirements gathering, and — critically — the absence of a formal process for evaluating whether a proposed scope change serves the project's core mission or merely satisfies someone's wish list. The project management literature's answer is change control: a deliberate process for evaluating every proposed addition against defined criteria before accepting it. This is scope optimization applied to organizational agents.
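A change-control gate of the kind the literature recommends can be sketched as a simple decision function. The criteria, field names, and thresholds below are illustrative assumptions, not drawn from any specific methodology.

```python
from dataclasses import dataclass

@dataclass
class ChangeRequest:
    description: str
    serves_core_mission: bool    # does it advance the stated objective?
    estimated_cost_days: float
    budget_remaining_days: float

def evaluate(req: ChangeRequest) -> str:
    """Every proposed addition passes through the same explicit criteria."""
    if not req.serves_core_mission:
        return "reject: wish-list item, not mission-critical"
    if req.estimated_cost_days > req.budget_remaining_days:
        return "defer: mission-relevant but unaffordable this cycle"
    return "accept"

print(evaluate(ChangeRequest("add export button", False, 3, 10)))
```

The value of the gate is not the logic, which is trivial, but the fact that it exists: no addition enters the scope without being asked the mission question first.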
The hedgehog concept: scope as strategic advantage
Jim Collins, in Good to Great (2001), studied companies that made the transition from average performance to sustained greatness and identified a pattern he called the Hedgehog Concept. Drawing on a parable from the ancient Greek poet Archilochus — "The fox knows many things, but the hedgehog knows one big thing" — Collins found that companies that achieved sustained greatness were hedgehogs. They identified the intersection of three circles (what they were deeply passionate about, what they could be the best in the world at, and what drove their economic engine) and then disciplined themselves to operate exclusively within that intersection.
The comparison companies — the ones that remained merely good — tended to be foxes. They pursued multiple strategies simultaneously, responded to every market opportunity, and spread their resources across a diffuse set of initiatives. They were more active, more ambitious, and more responsive. They were also less effective.
Collins' key distinction: a Hedgehog Concept is not a goal to be the best. It is not a strategy or a plan. It is an understanding of what you can be the best at — and, equally important, what you cannot be the best at and therefore must stop doing. Scope optimization for Collins was not about doing fewer things. It was about achieving clarity on which things to do and having the discipline to refuse everything else.
The personal application is direct. Your cognitive agents — habits, routines, decision rules, behavioral patterns — are more effective when they operate within a clearly defined scope than when they attempt to serve every possible need. The morning routine agent that tries to cover fitness, meditation, meal prep, household maintenance, email triage, and goal review is a fox. The morning routine agent that does exactly three things — physical activation, mental centering, and daily orientation — is a hedgehog. The hedgehog finishes. The fox is still running when the day begins.
The AI parallel: narrow models outperform broad ones
The artificial intelligence research of the past two years provides striking empirical confirmation of the scope optimization principle. Across multiple benchmarks and deployment contexts, specialized AI models consistently outperform general-purpose models within their domain of focus.
The mechanism mirrors Smith's pin factory exactly. A specialized model trained on domain-specific data develops deeper pattern recognition within its scope than a general model trained on everything. Focused models can lower hallucination rates by 70 to 85 percent compared to general-purpose systems operating in the same domain, because the narrower training distribution means fewer opportunities for the model to generate plausible-sounding nonsense from out-of-domain patterns.
The economic evidence is equally clear. Enterprise generative AI investments reached $4.6 billion in 2024, an almost eightfold increase from the previous year, with the fastest growth occurring in domain-specific applications rather than general-purpose deployments. Industry analysts project that more than 50 percent of enterprise AI deployments will rely on specialized models by 2028. The market is voting with billions of dollars for scope optimization.
The most telling finding comes from multi-agent architectures. When AI systems are designed as compositions of narrow, specialized agents — each handling one well-defined task — rather than as a single general-purpose agent, overall system performance improves across accuracy, reliability, and cost efficiency. This is the Unix pipe principle applied to artificial intelligence: narrow agents, clean interfaces, composed capabilities.
The lesson for your cognitive agents is the same lesson every engineering discipline has independently discovered: an agent that tries to do everything does nothing well. The path to optimization runs through scope, not through adding more capacity to a bloated architecture.
The scope optimization protocol
Moving from principle to practice requires a systematic method for evaluating and adjusting the scope of any agent.
Step 1: State the agent's mission in one sentence. If you cannot complete this step, the agent's scope is undefined, and you are optimizing in the dark. The morning routine's mission might be: "Transition from sleep to focused readiness for the day's most important work." Every element of the agent's scope must serve this sentence.
Step 2: List every action the agent currently performs. Be exhaustive. Include the steps you have added gradually, the exceptions you accommodate, the edge cases you handle. Scope bloat is almost always incremental — each individual addition seemed reasonable, and the cumulative effect was never evaluated.
Step 3: Test each action against the mission. For every item on the list, ask: if I removed this, would the agent's mission fail? Not "would it be slightly less complete" or "would I miss it" — would the mission fail? Items that pass this test are core scope. Items that fail it are candidates for removal or migration to a different agent.
Step 4: Identify composition opportunities. Some items you remove from this agent's scope still need to happen — they just do not belong here. Can they be handled by a different agent with a different mission? The household maintenance tasks removed from the morning routine do not vanish. They become a separate "weekly maintenance" agent with its own trigger, scope, and optimization cycle. This is the Unix pipe principle applied to personal systems: narrow agents, composed into a larger workflow.
Step 5: Test the narrowed scope. Run the agent at its new, reduced scope for a defined period. Measure completion rate, time to execute, subjective satisfaction, and whether the mission is actually being served. If the narrowed agent accomplishes its mission more reliably and at lower cost, the optimization succeeded. If the mission is no longer being served, you cut something load-bearing and need to restore it.
The discipline of refusal
Scope optimization is ultimately a discipline of refusal. It requires saying no to additions that feel useful, that someone requested, that you could accommodate if you just expanded the agent a little. Every yes is a cost — not just the cost of the new element, but the cost of increased complexity, increased execution time, increased cognitive load, and increased failure probability for every other element in the agent.
Brooks called this discipline "resisting functional ornamentation." McIlroy encoded it in the Unix philosophy. Collins called it "the discipline to stay within the three circles." Martin encoded it in the Single Responsibility Principle. Smith demonstrated it in the pin factory. The vocabulary varies. The principle is identical: an agent achieves greatness not by adding capabilities but by ruthlessly aligning its scope with its mission and refusing everything else.
Your agents now fire reliably from L-0571. They now have the right scope from this lesson. But a reliable, well-scoped agent can still be expensive to run — consuming more time, attention, and willpower than necessary. In L-0573, you will learn energy optimization: reducing the cognitive cost of executing an agent that already does exactly the right things. Scope tells you what. Energy tells you how cheaply.
Sources:
- McIlroy, M. D. (1978). Foreword to the special Unix issue, Bell System Technical Journal, 57(6). Historical context: Raymond, E. S. (2003). The Art of Unix Programming. Addison-Wesley.
- Martin, R. C. (2000). "Design Principles and Design Patterns." Subsequently formalized in Martin, R. C. (2003). Agile Software Development: Principles, Patterns, and Practices. Prentice Hall.
- Smith, A. (1776). An Inquiry into the Nature and Causes of the Wealth of Nations. W. Strahan and T. Cadell.
- Brooks, F. P. (1975). The Mythical Man-Month: Essays on Software Engineering. Addison-Wesley.
- Collins, J. (2001). Good to Great: Why Some Companies Make the Leap... and Others Don't. HarperBusiness.
- "Understanding and Mitigating Scope Creep in Project Management: A Comprehensive Analysis of Causes and Solutions." (2025). ResearchGate.
- "Why Specialized SLMs are Outperforming General-Purpose LLMs." (2025). OneReach AI Research.