Core Primitive
Do not experiment with behaviors that could cause serious harm.
The line between courage and recklessness is thinner than you think
In 1929, Werner Forssmann, a twenty-five-year-old surgical resident in Germany, threaded a catheter into his own heart. He had been told the procedure was too dangerous to attempt. His supervising physicians refused to authorize it. A nurse tried to stop him by offering herself as the subject, and he pretended to catheterize her arm while actually inserting the catheter into his own. He then walked to the X-ray department — catheter dangling from his arm, tube advancing toward his right atrium — and had a technician photograph the proof. The image showed the catheter tip sitting in his cardiac chamber. He had demonstrated that cardiac catheterization was survivable, opening the door to modern interventional cardiology. He eventually won the Nobel Prize.
In 1984, Barry Marshall, an Australian physician frustrated by the medical establishment's refusal to accept that bacteria could survive in stomach acid, drank a broth culture of Helicobacter pylori. He developed severe gastritis within days, demonstrating that the bacterium could colonize and inflame the human stomach — powerful evidence for his hypothesis that it caused peptic ulcers. He too eventually won the Nobel Prize.
These stories are celebrated as acts of scientific courage, and they were. But they are also terrible models for your behavioral experiments. Forssmann could have died on the walk to the radiology suite — a cardiac arrhythmia triggered by the catheter would have killed him in a hallway with no resuscitation equipment. Marshall gambled that the infection would respond to antibiotics, which it did, but an antibiotic-resistant strain would have left him with chronic gastritis or worse. Both men were physicians who understood the specific risks they were taking within their domain of expertise, and both were still operating at the ragged edge of what could be considered ethically defensible. The Forssmann experiment was so controversial that he was fired from his hospital.
The lesson from these stories is not "be bold enough to experiment on yourself." It is that even when you have deep domain expertise, even when you are a physician experimenting within your own specialty, self-experimentation can cross a line from courageous to reckless — and the line is far harder to see from the inside than from the outside.
You are running behavioral experiments on yourself. You learned in N-of-one experiments that you are the scientist, the subject, and the instrument of measurement — a sample size of one. That creates unique methodological challenges, which that lesson addressed. But it also creates unique ethical challenges, and those are what this lesson is about. When you experiment on yourself, who protects the subject? In a clinical trial, an ethics board reviews the protocol, ensures informed consent, evaluates the risk-benefit ratio, and has the authority to stop the experiment if something goes wrong. In your behavioral experiments, you are the ethics board. And unlike an external board, you have a conflict of interest: you want the experiment to happen, which means you are systematically biased toward underestimating its risks.
Bioethics adapted for the self-experimenter
Tom Beauchamp and James Childress, in their foundational text Principles of Biomedical Ethics, established four principles that have governed medical ethics for decades: autonomy, beneficence, non-maleficence, and justice. These principles were designed for cases where one person (a physician or researcher) makes decisions that affect another person (a patient or subject). When you experiment on yourself, the two roles collapse into one — but the principles do not disappear. They transform.
Autonomy, in the clinical context, means the subject has the right to make informed decisions about their own participation. When you are both experimenter and subject, autonomy seems automatic — of course you consent to your own experiment. But there is a subtlety that most self-experimenters miss. The "you" who designs the experiment on Monday is not the same "you" who lives through the consequences on Thursday. Your Monday self is enthusiastic, motivated, optimistic about the potential benefits, and abstractly aware of the risks. Your Thursday self — sleep-deprived because the experiment involved a radical schedule change, or anxious because the experiment involved stopping a coping mechanism — is a different person experiencing concrete consequences that Monday-you underestimated. Genuine informed consent in self-experimentation means your present self must advocate for your future self, imagining the worst realistic outcome with enough vividness that the consent is real rather than performative.
Beneficence — the obligation to act in the subject's best interest — requires you to honestly assess whether the experiment is designed to benefit yourself or to satisfy your curiosity. These are not the same thing. An experiment designed to benefit you will have clear criteria for what "benefit" looks like, a defined threshold for success, and a plan for what to do if the results are ambiguous. An experiment driven by curiosity alone is more likely to be open-ended, poorly measured, and continued past the point where a neutral observer would say "this is not working." Curiosity is a fine motivator for experiments, but it is not sufficient as an ethical justification when the experiment carries real risk.
Non-maleficence — "first, do no harm" — is the principle most directly relevant to this lesson's primitive. In clinical ethics, non-maleficence means the expected harm of an intervention must be proportional to the expected benefit, and all reasonable steps must be taken to minimize harm. For your self-experiments, this translates into a blunt question: what is the worst realistic thing that could happen, and can you live with it? Not the worst conceivable thing — catastrophic thinking would paralyze all experimentation. The worst realistic thing: the outcome that a reasonable, informed person would identify as the plausible downside. If the worst realistic outcome is that you waste a week, proceed freely. If the worst realistic outcome is that you damage a relationship, compromise your health, or destabilize your psychological equilibrium, you need either a much more careful protocol or professional guidance.
Justice, in the clinical context, concerns the fair distribution of risks and benefits across subjects. In self-experimentation, there are no other subjects — but there are other stakeholders. Your experiments do not happen in isolation. They affect your partner, your children, your colleagues, your friends. An experiment that requires you to be unavailable for two weeks affects everyone who depends on your availability. An experiment that makes you irritable, distracted, or emotionally volatile imposes costs on the people around you. Justice in self-experimentation means accounting for these externalities and either designing the experiment to minimize them or obtaining something like consent from the people affected.
The harm spectrum: what is safe to experiment with
Not all behavioral experiments carry the same risk. Part of experimental ethics is developing an accurate sense of where different experiments fall on the harm spectrum, so that you apply the right level of scrutiny to the right experiments rather than either screening everything with paranoid caution or screening nothing with reckless optimism.
At the low end of the spectrum are experiments on productivity, learning, and routine. Testing whether you write better in the morning or evening. Trying a new note-taking method for two weeks. Experimenting with different exercise times. Adjusting your commute. These experiments have minimal downside — the worst realistic outcome is that you lose some time or productivity, and the reversion path is clear. You can and should experiment freely in this zone, applying the standard experimental design principles from this phase without additional ethical screening.
In the middle of the spectrum are experiments that touch your relationships, your professional reputation, or your daily functioning in ways that affect others. Experimenting with radical honesty in your communication. Testing a policy of saying no to all non-essential requests for a month. Trying a dramatically different parenting approach. Adopting a new leadership style at work. These experiments are not inherently dangerous, but they have an impact radius that extends beyond you, and the consequences may not be fully reversible. A month of being aggressively honest might damage a friendship in ways that cannot be repaired by reverting to your previous communication style. In this zone, you need to think carefully about the people affected, communicate your experiment to them when appropriate, and set clear criteria for when to stop if the impact on others becomes too high.
At the high end of the spectrum are experiments that touch your physical health, your psychological stability, your financial security, or domains that require professional expertise. Changing your medication regimen. Adopting extreme dietary protocols. Deliberately depriving yourself of sleep to test your functioning threshold. Using psychological techniques to process trauma without therapeutic support. Investing a significant portion of your savings in an unfamiliar asset class as a "learning experiment." These are the experiments where self-experimentation is most dangerous, because the downside is potentially severe, the effects may be irreversible on any practical timeline, and your ability to assess risk accurately is compromised by the fact that you are not an expert in the relevant domain.
The critical insight is that the harm spectrum does not correlate neatly with how exciting or valuable the experiment feels. Some of the most transformative experiments are low-risk: changing when you do your deep work, restructuring your information diet, experimenting with different reflection practices. And some of the most dangerous experiments feel modest until they go wrong: adjusting your sleep by just an hour each night seems minor until cumulative sleep deprivation impairs your judgment, your driving, and your emotional regulation over the course of weeks.
The reversibility test
Nassim Taleb, in Antifragile, makes a structural argument about risk that applies directly to self-experimentation. The key distinction is not between risky and safe but between bounded and unbounded downside. An experiment with bounded downside — where the worst case is losing some time, some comfort, or some convenience — can be run freely, because the cost of failure is absorbable. An experiment with unbounded or catastrophic downside — where the worst case includes permanent damage, loss of relationships, or health consequences that compound over time — requires a fundamentally different approach.
The reversibility test operationalizes Taleb's principle for behavioral experiments. Before running any experiment that falls in the middle or high end of the harm spectrum, ask yourself: if this experiment produces the worst realistic outcome, can I return to my previous state? How long would the reversal take? What would be permanently changed?
Some experiments are trivially reversible. You experiment with waking up at 5 AM for a week and it makes you miserable. You go back to your normal schedule. The reversal is complete within a day. Some experiments are reversible but with friction. You experiment with being dramatically more assertive at work. Your colleagues form impressions during the experiment that will take time to modify even after you adjust your behavior. The reversal takes weeks or months. And some experiments are effectively irreversible. You experiment with stopping medication without medical supervision and trigger a withdrawal-induced episode. The episode may resolve, but the neurological disruption and the damage to your confidence in your own stability do not simply reset when you resume the medication.
The reversibility test is not binary — it is a spectrum from "instantly reversible" to "effectively permanent." Your ethical obligation to your future self scales with that spectrum. Instantly reversible experiments need minimal ethical screening. Slowly reversible experiments need careful impact assessment and clear stop-criteria. Effectively irreversible experiments should not be self-experiments at all. They belong in a professional context where someone with relevant expertise can help you design a safer protocol and monitor for adverse effects.
Your experiments affect more people than you think
One of the most common blind spots in self-experimentation is treating yourself as an isolated system. You are not. You are embedded in a network of relationships, responsibilities, and interdependencies, and your behavioral experiments send ripples through that network whether you intend them to or not.
Consider a straightforward experiment: you decide to test the hypothesis that eliminating all social media for thirty days will improve your focus and reduce your anxiety. From your perspective, this is a personal experiment with clear benefits and minimal downside. But your partner notices that you have stopped responding to messages in the family group chat, which is hosted on a social platform. Your colleague wonders why you have not liked or commented on the project update they posted, which they interpret as disengagement. A friend who primarily communicates through social media direct messages feels ghosted. None of these consequences are catastrophic, but they are real, and they were not part of your experimental design. You were measuring your focus and anxiety. You were not measuring the relational cost.
The impact-on-others evaluation is not about seeking permission for every experiment you run. It is about honest accounting. When you design an experiment, ask: who will experience the effects of this experiment besides me? What will those effects be? Are there ways to achieve the same experimental goal while reducing the impact on others? And for experiments with significant relational impact, have you communicated what you are doing and why?
This last point — communication — is the practical equivalent of informed consent extended to your social network. You do not need your partner's permission to experiment with your morning routine, but if the experiment involves waking at 4:30 AM and the alarm disturbs their sleep, telling them what you are doing and why, and asking whether they have concerns, is both ethically appropriate and practically useful. They might identify risks you have not considered. They might suggest modifications that achieve your goal with less disruption. Or they might simply appreciate being treated as a stakeholder rather than a bystander in your self-improvement project.
The professional boundary: when to experiment and when to consult
The Nuremberg Code, developed in response to the horrific medical experiments conducted in Nazi concentration camps, established ten principles for ethical human experimentation. While the Code was designed to protect research subjects from abusive researchers, several of its principles translate directly to the ethics of experimenting on yourself. The most relevant is Principle Four: "The experiment should be so conducted as to avoid all unnecessary physical and mental suffering and injury." And Principle Seven: "Proper preparations should be made and adequate facilities provided to protect the experimental subject against even remote possibilities of injury, disability, or death."
These principles point to a boundary that many enthusiastic self-experimenters resist: there are domains where self-experimentation is simply inappropriate without professional involvement. This is not because you lack the right to experiment on yourself. It is because certain domains require expertise that you do not have, and operating without that expertise turns experiments into gambles.
The professional boundary is clearest in medical and psychological domains. Adjusting psychiatric medication is a medical decision that requires understanding pharmacokinetics, withdrawal effects, drug interactions, and the specific neurological mechanisms of the medication. No amount of self-experimentation enthusiasm compensates for the lack of this knowledge. Similarly, using psychological techniques to process serious trauma — exposure therapy, EMDR, somatic experiencing — can retraumatize you if applied without proper training and therapeutic support. These are not areas where "move fast and learn from failure" is an appropriate philosophy, because the failures can be severe and the learning comes at too high a cost.
The professional boundary also applies, less obviously, to extreme physical interventions. Extended fasting protocols, radical elimination diets, deliberate cold or heat exposure at extreme intensities, and sleep deprivation experiments all carry physiological risks that are difficult to self-monitor. Your subjective sense of "I feel fine" is a poor measure of what is happening to your cortisol levels, your cardiac rhythm, or your immune function during these interventions. A professional — a physician, a registered dietitian, a certified coach with relevant training — can provide objective measurement, identify warning signs you would miss, and intervene before subjective symptoms appear.
The principle is not "never experiment in these domains." It is "do not experiment alone in these domains." Marshall did not just drink H. pylori on a whim. He was a physician who understood gastric pathology, had access to endoscopy to monitor his stomach lining, and had antibiotics ready for treatment. His self-experiment was reckless by modern ethical standards, but it was at least reckless within his domain of expertise. When you experiment outside your domain of expertise, you lack even that minimal safety net.
A useful heuristic: if you cannot articulate the three most likely adverse outcomes of your experiment and the specific mechanism by which each one would occur, you do not understand the domain well enough to self-experiment safely. The solution is not to abandon the experiment but to involve someone who can articulate those risks and help you design around them.
Informed consent from your future self
The deepest ethical challenge of self-experimentation is temporal. You are making decisions now that your future self will live with, and your present self is a biased advocate. Present-you overweights the potential benefits because you are excited about the experiment. Present-you underweights the costs because you have not experienced them yet. Present-you discounts future suffering because of temporal distance — the discomfort is hypothetical now and will only be concrete later. This is not a character flaw. It is a well-documented feature of human cognition called temporal discounting, and it operates in every decision you make about your future.
Informed consent from your future self is a thought experiment that counteracts this bias. Before committing to any experiment in the middle or high range of the harm spectrum, pause and construct a vivid mental model of the worst realistic outcome. Not a vague acknowledgment that "things could go wrong." A specific, concrete, sensory imagining of what the bad outcome would feel like to experience. If the experiment involves stopping a medication, imagine the specific symptoms of withdrawal — the anxiety, the insomnia, the cognitive fog — and ask yourself whether you would consent to experiencing those symptoms for the potential benefit. If the experiment involves a radical dietary change, imagine the specific experience of the worst plausible outcome — the fatigue, the irritability, the social friction of explaining your diet to everyone you eat with — and ask whether the trade-off is worth it.
This practice is not designed to make you fearful. It is designed to make your consent real. In clinical research, informed consent requires that the subject understands the risks in concrete, specific terms — not just "there may be side effects" but "possible side effects include nausea lasting two to four days, headaches, and in rare cases, liver inflammation requiring hospitalization." Your self-experiments deserve the same standard. When you consent to an experiment with full, vivid awareness of the worst realistic downside, and you still choose to proceed, that is genuine autonomy. When you consent in a haze of optimism because you have not bothered to imagine the downside concretely, that is not autonomy. It is negligence toward yourself.
The four-gate ethical screen
Synthesizing the principles above, you can construct a practical ethical screen for any behavioral experiment that falls above the trivial threshold. The screen has four gates, and an experiment must pass all four to proceed as a self-experiment.
Gate one is reversibility. If this experiment produces the worst realistic outcome, can you return to your previous state within a reasonable timeframe? Experiments that are easily reversible pass this gate freely. Experiments that are slowly reversible pass with the condition that you have clear stop-criteria and a reversion plan. Experiments that are effectively irreversible do not pass — they need to be redesigned or conducted under professional supervision.
Gate two is impact radius. Who besides you is affected by this experiment, and have you accounted for their interests? Experiments with zero external impact pass freely. Experiments that affect others pass if you have communicated your intentions and considered their concerns. Experiments with significant relational or professional impact need explicit stakeholder awareness, if not buy-in.
Gate three is domain competence. Do you have sufficient expertise to anticipate the range of possible outcomes, including the adverse ones? In domains where you have deep personal experience or professional knowledge, you can self-experiment with reasonable confidence in your risk assessment. In domains where you are a novice — medical, psychological, financial, legal — your risk assessment is unreliable, and the experiment should involve a professional.
Gate four is informed future-self consent. Having vividly imagined the worst realistic outcome, do you still choose to proceed? This gate is a check on temporal discounting. It forces you to make the costs concrete before you commit, ensuring that your consent is informed rather than optimistic.
An experiment that passes all four gates is ethically sound for self-experimentation. An experiment that fails one gate needs modification. An experiment that fails multiple gates should not be a self-experiment — it should either be abandoned, radically redesigned, or conducted with professional support.
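The four-gate screen can be expressed as a simple checklist. The following is a minimal sketch, not a tool from any real library — the `Experiment` class, its fields, and the `verdict` function are all hypothetical names chosen for illustration, and the boolean gates deliberately flatten the judgment calls the lesson describes:

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    """A hypothetical record of one proposed self-experiment."""
    name: str
    reversible: bool             # Gate 1: worst case undoable in reasonable time
    stakeholders_informed: bool  # Gate 2: affected people know and were heard
    domain_competence: bool      # Gate 3: you can name the likely adverse outcomes
    future_self_consents: bool   # Gate 4: consent after vividly imagining the worst case

def failed_gates(exp: Experiment) -> list:
    """Return the names of the gates this experiment fails."""
    gates = {
        "reversibility": exp.reversible,
        "impact radius": exp.stakeholders_informed,
        "domain competence": exp.domain_competence,
        "future-self consent": exp.future_self_consents,
    }
    return [name for name, passed in gates.items() if not passed]

def verdict(exp: Experiment) -> str:
    """Map gate failures to the lesson's three outcomes."""
    fails = failed_gates(exp)
    if not fails:
        return "proceed as self-experiment"
    if len(fails) == 1:
        return f"modify before proceeding (failed: {fails[0]})"
    return "abandon, redesign, or seek professional support"
```

For example, a thirty-day social-media fast that has not been communicated to the people it affects fails a single gate:

```python
plan = Experiment("30-day social media fast", True, False, True, True)
print(verdict(plan))  # → modify before proceeding (failed: impact radius)
```

The real screen is a judgment exercise, not a boolean form; the sketch only makes the structure of the decision explicit.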
The Third Brain
Your AI assistant can serve as an ethics reviewer for your experimental designs — a role for which it is unusually well-suited, precisely because it does not share your excitement about the experiment. When you are enthusiastic about a new behavioral experiment, describe the protocol to the AI and ask it to run the four-gate ethical screen. Ask it to identify risks you might be underweighting, stakeholders you might be overlooking, and domains where your expertise might be insufficient for safe self-experimentation.
The AI is particularly valuable for the informed-future-self-consent gate, because it can help you construct the vivid worst-case scenario that temporal discounting makes hard to generate on your own. Tell the AI your experimental plan and ask: "Describe, in concrete and specific terms, what the worst realistic outcome of this experiment would look and feel like. Do not catastrophize, but do not minimize." The AI can produce a realistic depiction of the downside that your optimistic planning brain would rather not construct, giving you the information you need to make your consent genuine.
You can also use the AI to help determine where the professional boundary falls for a specific experiment. Describe your planned intervention and ask: "Does this experiment fall in a domain where self-experimentation without professional guidance is inappropriate? What specific risks would a professional in this domain identify that I might miss?" The AI cannot replace a medical professional or therapist, but it can help you recognize when you need one — which is the most important ethical judgment in self-experimentation.
From ethical screening to the experiment backlog
You now have a framework for determining which experiments are safe to run on yourself, which ones need modification, and which ones require professional collaboration. You understand that self-experimentation carries genuine ethical obligations — to your future self, to the people around you, and to the domains of expertise you are operating within. You know how to apply the reversibility test, the impact radius evaluation, the domain competence check, and the informed future-self consent practice.
This ethical framework is not a constraint on your experimentation. It is what makes sustained, ambitious experimentation possible. The person who experiments without ethical guardrails eventually runs an experiment that causes serious harm — to their health, their relationships, or their psychological stability — and the resulting damage makes them cautious about all experimentation, including the low-risk experiments they should be running freely. Ethical screening protects your experimental practice from the single catastrophic failure that would shut it down.
With the ethical framework in place, you are ready to address a practical challenge: you likely have far more experiments you want to run than time to run them. The next lesson, The experiment backlog, introduces a systematic way to capture, prioritize, and sequence your experimental ideas so that you run the most valuable experiments first and never lose a promising idea to the limits of memory. The ethical screen from this lesson becomes the first filter in that backlog system: every proposed experiment passes through the four gates before it enters the queue.
Frequently Asked Questions