Core Primitive
Tools that work without internet are more reliable for critical work.
The moment your tools vanish
On November 25, 2020, Amazon Web Services experienced a multi-hour outage in its US-EAST-1 region. The ripple effects were staggering. Roku streaming devices stopped working. Ring doorbells went dark. Adobe's Creative Cloud services became unavailable. Flickr went down. The Washington Post's website struggled. And across millions of desks, knowledge workers discovered that the documents, notes, project boards, and communication tools they relied on every day had a single point of failure they had never considered: someone else's computer, connected to them by a wire they could not see and did not control.
This was not a freak event. In June 2022, Cloudflare experienced an outage that took down Discord, Shopify, Fitbit, and thousands of other services. In January 2023, Microsoft 365 went down globally, leaving organizations unable to access email, Teams, or SharePoint for hours. Downdetector, the outage tracking service, logs hundreds of significant service disruptions every month across major platforms. The internet is not a utility like electricity with 99.99% uptime. It is a complex, fragile system that fails regularly, unpredictably, and — from the perspective of the person staring at a loading spinner — completely.
And yet most knowledge workers have built their entire cognitive infrastructure on the assumption that the network will always be there. Every note in Notion. Every document in Google Docs. Every task in Asana. Every file in Dropbox. Every thought captured in a tool that is, at its foundation, a browser tab connected to someone else's server. When the connection holds, these tools are magnificent. When it does not, your ability to think — to access what you know, to continue what you started, to create anything new — evaporates. This lesson is about ensuring it does not.
The reliability argument for local-first
The primitive here is deceptively simple: tools that work without internet are more reliable for critical work. But the implications run deep, because reliability is not a feature you appreciate when everything is working. Reliability is the feature that matters precisely when everything else fails. And in the context of your cognitive infrastructure — the tools that hold your notes, your drafts, your knowledge graph, your thinking — a failure at the wrong moment does not just delay your work. It severs you from your extended mind.
Nassim Nicholas Taleb, in his 2012 book "Antifragile," draws a fundamental distinction between systems that are fragile (break under stress), robust (resist stress), and antifragile (improve under stress). A cognitive infrastructure built entirely on cloud-dependent tools is fragile in Taleb's precise sense. It has a hidden weakness — network dependency — that is invisible during normal operation and catastrophic during disruption. The system does not degrade gracefully. It does not offer a diminished version of itself. It simply stops. Your note-taking application does not show you a read-only version of your notes when the server is unreachable. It shows you a login screen, or a loading animation, or nothing at all. The gap between "full functionality" and "zero functionality" contains no intermediate state.
A robust cognitive infrastructure, by contrast, maintains its core capabilities regardless of network conditions. You can read your notes, write new ones, edit your documents, manage your tasks, and continue your creative work whether you are connected to the internet, sitting on an airplane, working in a cabin with no reception, or enduring the third AWS outage this year. The tools still function because the data lives on your machine first and synchronizes to the cloud second. The network is a convenience for collaboration and backup, not a prerequisite for operation.
This is not a theoretical distinction. It is the difference between a knowledge worker who loses an afternoon of productive work every time the hotel Wi-Fi drops and one who does not even notice the outage until they try to check email.
The local-first movement and the architecture of independence
In 2019, Martin Kleppmann, Adam Wiggins, Peter van Hardenberg, and Mark McGranaghan published a landmark paper through the Ink & Switch research lab titled "Local-First Software: You Own Your Data, in Spite of the Cloud." The paper articulated seven ideals for software that respects user agency: your work should be at your fingertips, with no loading spinners; your work should not be trapped on a single device; the network should be optional; collaboration with colleagues should be seamless; your data should have longevity, outliving the software that created it; security and privacy should be the default; and you should retain ultimate ownership and control of your data.
The technical foundation for much of this vision is the Conflict-free Replicated Data Type, or CRDT — a data structure that allows multiple copies of the same data to be edited independently and then merged without conflicts. CRDTs make it possible for two people to edit the same document on different devices, without internet, and reconcile their changes when they reconnect. The mathematics guarantee that the merged state will be the same regardless of the order in which edits arrive.

Nor is local-first speculative technology. Git, the version control system used by virtually every software development team on earth, embodies the local-first philosophy (though, unlike a CRDT, it asks humans to resolve conflicting edits by hand). Linus Torvalds designed Git in 2005 explicitly for distributed, disconnected operation. Every developer has a complete copy of the entire repository on their local machine. They can commit, branch, diff, log, and revert without touching a network. The remote server is a coordination point, not a dependency. You push when you are ready. You pull when you want to. The work itself happens locally, and it always has.
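The merge guarantee described above can be illustrated with the simplest CRDT, a grow-only set, whose merge operation is plain set union. This is a minimal sketch for intuition, not a production CRDT:

```python
# Minimal CRDT sketch: a grow-only set (G-Set).
# Each replica adds elements independently while offline; merge is
# set union, which is commutative, associative, and idempotent,
# so replicas converge no matter the order in which merges happen.

class GSet:
    def __init__(self, elements=None):
        self.elements = set(elements or [])

    def add(self, item):
        self.elements.add(item)

    def merge(self, other):
        # Union never loses data and never conflicts.
        return GSet(self.elements | other.elements)

# Two replicas edited on different devices, without internet:
laptop = GSet()
laptop.add("note: local-first reading list")
phone = GSet()
phone.add("note: flight itinerary")

# Merging in either order yields the same state.
assert laptop.merge(phone).elements == phone.merge(laptop).elements
```

Real document CRDTs (for ordered text rather than sets) are far more intricate, but they rest on the same property: a merge function whose result does not depend on the order of arrival.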
The Ink & Switch team's insight was that what Git provides for source code, other tools should provide for every kind of knowledge work. Your notes should be local files that sync when convenient. Your documents should be editable offline with changes merged on reconnect. Your task lists should live on your device and replicate to others when the network allows. The cloud should be a replication layer, not the primary storage layer.
This philosophy already has real-world embodiments that you can adopt today. Obsidian, the knowledge management tool, stores everything as plain Markdown files in a folder on your local filesystem. You own those files. You can open them with any text editor. They work without internet, without an account, without a subscription. If Obsidian disappears tomorrow, your files remain. Contrast this with Notion, a tool with beautiful design and powerful collaboration features, but one that stores your data on Notion's servers. If Notion's servers go down, your data is inaccessible. If Notion shuts down, you must export — and hope the export is faithful. The trade-off is real: Notion's collaboration features are genuinely superior for team work. But for your personal cognitive infrastructure — the notes and ideas that constitute your extended mind — the reliability argument overwhelmingly favors the local-first approach.
The deep work dimension
The case for offline capability extends beyond disaster preparedness. There is a second, subtler argument: disconnection is not just a contingency plan. It is a performance strategy.
Cal Newport, in his 2016 book "Deep Work," argues that the ability to focus without distraction on a cognitively demanding task is becoming simultaneously more rare and more valuable. Newport identifies network connectivity as one of the primary destroyers of deep work, not because the internet is inherently bad, but because connected tools create a constant low-grade pull toward shallow work. Every connected application is a portal to distraction — a notification, a message, an update, a reason to context-switch away from the demanding cognitive task in front of you. When your writing tool is a browser tab, the entire internet is one click away.
Mihaly Csikszentmihalyi's research on flow states, published in his 1990 book "Flow: The Psychology of Optimal Experience," reinforces this. Flow — the state of complete absorption in a challenging task — requires uninterrupted concentration. Research on workplace interruptions suggests that once deep concentration is broken, it can take fifteen to twenty-five minutes to fully re-enter the task. A single notification, a single loading spinner, a single "reconnecting..." banner at the top of your document, can eject you from flow and cost you the most productive quarter-hour of your session.
Tools that work offline eliminate an entire category of disruption. When you write in a local editor with the Wi-Fi off, there are no notifications. There are no chat pings. There is no temptation to "quickly check" something, because there is nothing to check. The tool becomes a sealed chamber for thought. Many writers and developers have discovered this accidentally — the "airplane mode productivity phenomenon," where people report extraordinary productivity on flights precisely because they are forced into a disconnected state. The insight is not that airplanes are magical productivity environments. It is that disconnection removes the friction of constant micro-interruptions that connected tools impose.
This is not an argument against the internet. It is an argument for tools that give you the choice. A tool that requires the internet to function denies you the option of deliberate disconnection. A tool that works offline gives you that option whenever you want it — and functions seamlessly when you are connected as well. The offline-capable tool is strictly more flexible than the cloud-dependent tool. It can do everything the cloud tool can do (when connected), plus everything it cannot (when disconnected).
Building your offline layer
The practical work here is architectural, not aspirational. You are not trying to go off-grid. You are trying to ensure that your critical thinking capabilities survive network interruptions and that you can choose disconnection when deep work demands it.
Start by distinguishing between critical and convenient. Your note-taking system is critical — it holds your extended mind, and you need it for any serious thinking work. Your email is convenient — you want access to it, but a few hours without it will not destroy your cognitive capacity. Your writing tool is critical. Your social media dashboards are convenient. Your task management system falls somewhere in between, depending on how you use it. The offline layer needs to cover your critical tools. The convenient ones can remain cloud-dependent without risk to your core capabilities.
For each critical capability, identify or adopt a tool that stores data locally and functions without a network connection. For note-taking, this means local Markdown files — Obsidian, Logseq, or even a plain text editor with a well-organized folder structure. For writing, this means a native application like iA Writer, Ulysses, Typora, or VS Code, not Google Docs or a browser-based editor. For task management, this means a local system — a plain text file using a format like todo.txt, a local Kanban app, or a native application like Things or Todoist (which caches locally and syncs when connected). For reference material, this means maintaining a local library of your most-used PDFs, bookmarks saved as files, and critical web pages archived offline.
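One appeal of a plain-text format like todo.txt is that the data is trivially machine-readable with no application at all. As a sketch, a few lines of Python can parse the core conventions of the format (an optional "x " completion marker, an optional "(A)" priority, and inline +project and @context tags):

```python
import re

def parse_todo_line(line):
    """Parse one line in the todo.txt style into its parts.
    Handles the core conventions: 'x ' marks completion, '(A) '
    is a priority, and +project / @context tags appear inline."""
    m = re.match(r"^(x )?(\(([A-Z])\) )?(.*)$", line)
    text = m.group(4)
    return {
        "done": bool(m.group(1)),
        "priority": m.group(3),          # None if no priority given
        "text": text,
        "projects": re.findall(r"\+(\w+)", text),
        "contexts": re.findall(r"@(\w+)", text),
    }

task = parse_todo_line("(A) Draft outage postmortem +infra @writing")
```

The point is not this particular parser; it is that a format you can process with a dozen lines of code will still be usable decades from now, with or without any particular application.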
The synchronization layer sits on top of the local layer, not underneath it. Use Syncthing, iCloud, Dropbox, or Git to replicate your local files across devices and to the cloud as a backup. But the principle is inviolable: the local copy is primary. The cloud copy is the replica. When you sit down to work, you are working with files on your machine, not with files on a server that your machine is merely displaying. The distinction sounds pedantic, but it is the distinction between a cognitive infrastructure that works everywhere, always, under any conditions — and one that works only when the Wi-Fi does.
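The "local primary, cloud replica" principle can be made concrete with Git. The sketch below (a hypothetical wrapper, assuming your notes folder is already a Git repository) snapshots local changes first and treats the push to a remote as a strictly optional last step; a failed push loses nothing:

```python
import subprocess

def snapshot_and_replicate(repo_dir, message="sync notes"):
    """Commit any local changes, then try to replicate to the remote.
    The commit (local, primary) always happens; the push (network,
    replica) is allowed to fail without losing anything."""
    subprocess.run(["git", "add", "-A"], cwd=repo_dir, check=True)
    # 'git diff --cached --quiet' exits non-zero when changes are staged.
    staged = subprocess.run(["git", "diff", "--cached", "--quiet"],
                            cwd=repo_dir)
    if staged.returncode != 0:
        subprocess.run(["git", "commit", "-m", message],
                       cwd=repo_dir, check=True)
    try:
        # The network step comes last and is optional.
        subprocess.run(["git", "push"], cwd=repo_dir,
                       check=True, timeout=30)
        return "replicated"
    except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
        return "local snapshot kept; will replicate when online"
```

Notice the ordering: the irreversible, valuable step (recording your work) depends only on the local disk, and the network step can fail, time out, or be skipped entirely.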
The Third Brain
AI tools introduce an interesting tension in the offline capability conversation. Most AI assistants — ChatGPT, Claude, Gemini — are inherently cloud-dependent. They require a network connection to function, because the models run on remote servers. This means your AI-augmented workflows have a network dependency baked in, even if everything else in your stack is local-first.
The pragmatic response is to design your AI workflows with graceful degradation in mind. The AI is an accelerator, not a dependency. You should be able to do every critical cognitive task without AI assistance — it just takes longer. When you are offline, you write the first draft without AI feedback. You organize your notes manually instead of using AI-assisted clustering. You do the synthesis yourself instead of asking for pattern recognition. When you reconnect, you can run those tasks through the AI for refinement. The work does not stop because the AI is unavailable. It slows down. That is graceful degradation — the hallmark of a resilient system. The emerging category of local AI models (running directly on your hardware) may eventually eliminate even this dependency, but for now, the principle is clear: use AI as a layer you can peel off, not as a foundation you cannot remove.
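Graceful degradation is a structural property you can build into a workflow deliberately. In the sketch below, cloud_summarize is a hypothetical stand-in for any remote model API (not a real library); the point is that the pipeline still produces a result when that call fails:

```python
# Sketch of graceful degradation: the AI call is an accelerator
# layered on top of a workflow that still completes offline.

def cloud_summarize(text):
    # Placeholder for a remote model API call (hypothetical);
    # here it simulates being offline.
    raise ConnectionError("model endpoint unreachable")

def manual_summarize(text, max_words=12):
    # Crude local fallback: first words only. Worse, but never unavailable.
    return " ".join(text.split()[:max_words])

def summarize(text):
    try:
        return cloud_summarize(text)    # fast path when connected
    except ConnectionError:
        return manual_summarize(text)   # slow path; the work continues

summary = summarize("Local-first tools keep working when the network does not.")
```

The shape matters more than the specifics: every AI-dependent step has a defined, purely local fallback, so the difference between online and offline is quality and speed, never a hard stop.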
The bridge to recovery
Offline capability is your first line of defense against tool failure — but it is not your only line. Tools can fail in ways that go beyond network interruptions. A hard drive crashes. A software update corrupts your data. A synchronization conflict silently overwrites hours of work. An account gets locked. A service shuts down entirely.
Offline-capable tools reduce your exposure to one category of failure — the network outage — but your cognitive infrastructure needs a broader resilience strategy. That strategy is the subject of the next lesson: tool backup and recovery, where you will learn to ensure that no single failure — hardware, software, network, or service — can permanently destroy the data that constitutes your extended mind.
Sources:
- Kleppmann, M., Wiggins, A., van Hardenberg, P., & McGranaghan, M. (2019). "Local-First Software: You Own Your Data, in Spite of the Cloud." Ink & Switch Research Lab.
- Taleb, N. N. (2012). Antifragile: Things That Gain from Disorder. Random House.
- Newport, C. (2016). Deep Work: Rules for Focused Success in a Distracted World. Grand Central Publishing.
- Csikszentmihalyi, M. (1990). Flow: The Psychology of Optimal Experience. Harper & Row.
- Torvalds, L., & Hamano, J. (2005). Git: Distributed Version Control System. Designed for offline-first, distributed operation.
- Amazon Web Services US-EAST-1 Outage (November 25, 2020). AWS Post-Event Summary.
- Cloudflare Outage (June 21, 2022). Cloudflare Incident Report.
- Microsoft 365 Global Outage (January 25, 2023). Microsoft Service Health Status.
Frequently Asked Questions