Core Primitive
Automate every operational step that does not require human judgment.
The twelve minutes you will never get back
You wake up. You open your task manager. You scan yesterday's incomplete items and manually copy them into today's list. You open your bank app and check whether the electricity bill cleared. You open your calendar and manually create the same three recurring blocks you create every Monday. You open your email and drag the same five newsletters into the same reading folder. None of these actions require thought. They require only execution — the mechanical movement of information from one known location to another according to a fixed rule. And yet you perform them by hand, every day, burning cognitive fuel on tasks that a script, a filter, or a recurring event could handle while you sleep.
This is the automation problem. Not that you lack tools. That you spend human attention on inhuman work.
The hierarchy you must follow
The previous lesson established a critical sequence: simplify before you optimize. Automation extends that sequence into a three-step hierarchy that every operations engineer learns and most individuals ignore. The hierarchy is: eliminate, simplify, automate. In that order. Always.
The logic is straightforward. If a step produces no value, eliminating it costs nothing and saves everything that step consumed. If a step produces value but contains unnecessary complexity, simplifying it reduces the surface area you must maintain. Only after a step has survived elimination and simplification does it become a candidate for automation. Automating a wasteful step does not remove the waste — it entrenches it. Automating a needlessly complex step does not reduce the complexity — it hides it behind a trigger that will eventually fail in ways you cannot quickly diagnose.
Taiichi Ohno, the architect of the Toyota Production System, was relentless on this point. He argued that automating waste is worse than doing waste manually, because manual waste is visible. You feel the friction. You notice the time. Automated waste disappears from view while continuing to consume resources — server costs, API calls, storage, and most importantly, the accumulated fragility of systems that grow without pruning. Ohno called this "autonomation" — automation with a human touch, where machines stopped themselves when they detected a defect rather than continuing to produce defective output at high speed. The principle translates directly to personal systems: your automations should have built-in checks, not blind execution.
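The built-in-check principle can be sketched in a few lines of Python. This is an illustrative example, not a prescription: the expected amount, the tolerance, and the function name are all hypothetical, and a real bill-pay automation would plug in its own values.

```python
# Illustrative "autonomation" check: the automation halts and escalates
# when it detects an abnormal input, instead of executing blindly.
EXPECTED_BILL = 120.00   # typical monthly amount (hypothetical)
TOLERANCE = 0.25         # halt if the amount deviates more than 25%

def pay_if_normal(amount: float) -> str:
    """Pay automatically only when the amount looks routine;
    otherwise stop and hand the decision back to a human."""
    deviation = abs(amount - EXPECTED_BILL) / EXPECTED_BILL
    if deviation > TOLERANCE:
        return f"HALT: {amount:.2f} deviates {deviation:.0%} from the expected bill"
    return f"PAID: {amount:.2f} is within the expected range"
```

The point is the shape, not the numbers: the machine stops itself at the boundary of its fixed rule rather than producing defective output at high speed.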
The hierarchy is not a suggestion. It is a dependency chain. If you skip elimination and jump straight to automation, you will build elaborate machinery that does the wrong thing efficiently. If you skip simplification and automate a complex process, you will create a brittle system that breaks at every edge case. The hierarchy forces you to clean before you build.
What machines do well and what humans do well
In 1951, Paul Fitts published a deceptively simple analysis that engineers now call Fitts' list, or the MABA-MABA framework (Men Are Better At / Machines Are Better At). Fitts identified the task characteristics where machines outperform humans and vice versa. Machines excel at repetitive execution, precise timing, sustained vigilance over long periods, rapid computation, simultaneous monitoring of multiple channels, and consistent application of fixed rules. Humans excel at pattern recognition in novel situations, flexible response to unexpected events, inductive reasoning from incomplete data, judgment under ambiguity, and the ability to improvise when conditions change.
This list has been refined over seventy years but its core insight remains intact. When you are deciding what to automate in your personal operations, Fitts' list provides the filter. Ask of each step: does this require the application of a fixed rule to known inputs? If yes, it is a machine task. Does this require judgment, interpretation, or response to novelty? If yes, it is a human task. The primitive of this lesson — automate every operational step that does not require human judgment — is Fitts' list distilled to a single directive.
Raja Parasuraman and colleagues formalized this further with their model of levels of automation, published across several influential papers in the 1990s and 2000s. They described ten levels ranging from fully manual (the human does everything) to fully autonomous (the machine does everything, including deciding what to do). Between these extremes sit hybrid levels: the machine offers suggestions, the machine executes unless the human vetoes, the machine executes and informs the human afterward. Most personal automation operates at the middle levels — your email filter sorts messages, but you review the sorted folders. Your calendar blocks recurring time, but you decide whether to honor the block on any given day. Understanding that automation is a spectrum, not a binary, prevents the all-or-nothing thinking that leads people to either automate nothing or automate recklessly.
The ironies that automation creates
In 1983, Lisanne Bainbridge published a paper called "Ironies of Automation" that every person who builds automated systems should read. Bainbridge identified a paradox: the more you automate a system, the harder you make the remaining human tasks. Here is why.
When a system runs automatically, the human operator monitors rather than acts. Monitoring is cognitively demanding in a specific way — it requires sustained attention to a process that rarely fails. Humans are poor at sustained vigilance. Decades of research on signal detection, from Mackworth's clock test in the 1940s onward, confirm that human vigilance degrades rapidly when events are rare. So the automation removes the routine tasks that kept the human engaged and leaves only the rare, difficult interventions — precisely the tasks that require the most skill and the freshest attention, delivered to an operator whose skills have atrophied from disuse and whose attention has degraded from monitoring a system that almost never fails.
This is not an abstract concern reserved for pilots and air traffic controllers. It applies directly to your automated bill payments, your automated backups, your automated email filters. When your bill pay runs correctly for eighteen months, you stop checking it. When it miscalculates — a rate change, a duplicate charge, a payment to a closed account — you discover the error only when the damage has accumulated. When your backup system runs silently for a year, you stop verifying that the backups are actually restorable. When it fails, you discover the failure at the moment you need the backup most. Bainbridge's irony is personal: the more reliably your automation works, the less prepared you are for the moment it does not.
The practical implication is that every automation you build requires a monitoring protocol. Not "I will check it when I remember." A scheduled, recurring review with a specific checklist. This is the cost of automation that most people fail to budget. The automation itself is cheap. The ongoing monitoring is the real expense, and if you do not pay it upfront, you will pay it in crisis when something breaks silently.
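A monitoring check should exercise the automation's output, not merely confirm the automation ran. For backups, that means verifying a restore. The sketch below assumes backups are plain file copies and uses a copy as a stand-in for the real restore step; the function name and paths are hypothetical.

```python
# Verify a backup by restoring it and comparing contents, rather than
# trusting that the backup job ran. A checksum match confirms the
# restored file is byte-for-byte identical to the original.
import hashlib
import os
import shutil
import tempfile

def restore_check(original: str, backup: str) -> bool:
    """Restore the backup to a scratch location and confirm the
    restored file matches the original exactly."""
    with tempfile.TemporaryDirectory() as scratch:
        restored = os.path.join(scratch, "restored")
        shutil.copy(backup, restored)  # stand-in for the real restore step
        def digest(path: str) -> str:
            with open(path, "rb") as f:
                return hashlib.sha256(f.read()).hexdigest()
        return digest(original) == digest(restored)
```

Running a check like this on a schedule is the "monitoring protocol" in concrete form: it converts a silent failure into a visible one.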
Automation debt
Software engineers track a liability they call technical debt; personal operations accumulate a parallel one. Call it automation debt. It accumulates every time you build an automation and fail to document it, fail to monitor it, or fail to update it when the underlying process changes.
Automation debt compounds. Each unmonitored automation is a silent assumption embedded in your operational system. When those assumptions drift from reality — and they will, because reality changes — the automation produces incorrect outputs that propagate through your system before anyone notices. A filter that sorted vendor emails correctly last year now misroutes messages because the vendor changed their sending domain. A recurring purchase that was efficient at its original quantity now over-orders because your consumption pattern shifted. A script that processed files from one folder now fails because you reorganized your directory structure three months ago.
The insidious quality of automation debt is invisibility. Manual processes fail visibly — you forget a step, you notice the gap, you fix it. Automated processes fail silently. The step executes. It just executes wrong. And because you delegated it to automation precisely so you would not have to think about it, you do not think about it until the consequences become impossible to ignore.
Managing automation debt requires the same discipline as managing any other form of operational debt. You maintain an automation registry — a simple list of every automated process you run, what it does, when it was last verified, and what would break if it failed. You schedule periodic reviews. You build alerts where possible. You treat your automations not as permanent fixtures but as living systems that require maintenance.
Building your personal automation layer
With the hierarchy, the human-machine distinction, the ironies, and the debt model in place, you can now build a personal automation layer that is powerful without being fragile. The approach has four steps.
Step one: inventory your mechanical tasks. Go through a typical week and flag every task that follows a fixed rule applied to known inputs. Bill payments. File organization. Calendar blocking. Data entry. Status updates. Recurring communications with identical content. Backup execution. These are your automation candidates.
Step two: rank by frequency times duration. A task you perform daily for five minutes (35 minutes per week) is a higher-value automation target than a task you perform monthly for thirty minutes. Multiply frequency by duration to calculate weekly time cost. Automate the highest-cost mechanical tasks first.
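The ranking in step two is simple enough to do on paper, but a short sketch makes the arithmetic explicit. The task names and numbers below are invented for illustration; frequency is expressed as occurrences per week, so a monthly task counts as roughly 0.25.

```python
# Rank automation candidates by weekly time cost
# (frequency per week x minutes per occurrence).
tasks = [
    ("copy incomplete items forward", 7, 5),     # daily, 5 min each
    ("reconcile monthly statement",   0.25, 30), # monthly, 30 min
    ("file newsletters",              7, 2),     # daily, 2 min each
]

ranked = sorted(tasks, key=lambda t: t[1] * t[2], reverse=True)
for name, freq, mins in ranked:
    print(f"{name}: {freq * mins:.1f} min/week")
```

The daily five-minute task tops the list at 35 minutes per week, while the monthly half-hour task costs only about 7.5, which is why frequency times duration, not raw duration, drives the ranking.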
Step three: choose the simplest automation tool that works. This is where most people over-engineer. The automation hierarchy for personal systems runs from low-tech to high-tech: checklists and templates (zero technology — a reusable document you copy), calendar events and reminders (built into every phone), email filters and rules (built into every email client), saved text expansions and keyboard shortcuts, spreadsheet formulas and conditional formatting, dedicated automation platforms like IFTTT or Zapier, and custom scripts. Start at the lowest level that solves the problem. A saved email template you paste into a reply is automation. A checklist you print every Monday is automation. You do not need code to automate. You need a fixed rule applied consistently.
Step four: build the monitoring layer. For each automation, define a verification cadence and a failure indicator. How often will you check that this automation is still running correctly? What signal would tell you it has broken? Write these down in your automation registry. A simple table works: automation name, what it does, tool used, last verified date, failure indicator, verification cadence. Review the registry weekly until your automations have proven stable, then shift to monthly.
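The registry described in step four needs no special tooling; a spreadsheet works. For those who prefer code, here is a minimal sketch with the same fields, plus a check that surfaces overdue verifications. The entries, dates, and function name are illustrative assumptions.

```python
# A toy automation registry: one record per automation, with the
# fields from step four, and a helper that flags stale entries.
from datetime import date, timedelta

registry = [
    {"name": "email filters", "tool": "mail rules",
     "last_verified": date(2024, 1, 5), "cadence_days": 30,
     "failure_indicator": "vendor mail appearing in inbox"},
    {"name": "nightly backup", "tool": "cron + rsync",
     "last_verified": date(2023, 10, 1), "cadence_days": 30,
     "failure_indicator": "backup folder timestamp older than 24h"},
]

def flag_stale(registry: list, today: date) -> list:
    """Return the names of automations whose last verification
    is older than their review cadence allows."""
    return [a["name"] for a in registry
            if today - a["last_verified"] > timedelta(days=a["cadence_days"])]
```

Run against a date in late January 2024, the helper flags only the backup, whose verification has lapsed by months, which is exactly the silent drift the registry exists to catch.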
The Third Brain
AI tools extend your automation capacity into territory that was previously reserved for human judgment. Language models can now draft routine communications, summarize meeting notes, categorize incoming information, and generate first passes on repetitive analytical tasks. This does not mean they replace judgment — it means they push the automation boundary further up Parasuraman's scale.
The key discipline is the same one that governs all automation: define the rule, verify the output, monitor for drift. Use an AI tool to draft your weekly status update from your task completion log — but review the draft before sending. Use it to categorize your reading list by topic — but spot-check the categories monthly. Use it to generate checklists from your process documentation — but validate the checklists against reality. AI automation is still automation. Bainbridge's ironies still apply. The tool is powerful precisely to the degree that you maintain the monitoring discipline that prevents silent failure.
When automation meets disruption
You now have a framework for identifying what to automate, a hierarchy for doing it safely, and a monitoring protocol for keeping it reliable. But reliability under stable conditions is only half the challenge. Your systems will face disruptions — travel, illness, role changes, life transitions — that test whether your automations survive contact with reality. An automation that works perfectly in your home office may break completely when you travel. A monitoring cadence that works during a calm quarter may collapse during a crisis.
The next lesson addresses this directly. Operational resilience is the discipline of designing your systems — including your automations — to survive disruptions without catastrophic failure. Where this lesson taught you to delegate mechanical work to machines, the next teaches you to ensure those machines keep working when the ground shifts beneath them.
Frequently Asked Questions