Trolley Problem & Modern Ethics

5 min briefing · March 23, 2026 · 14 sources

Transcript

Picture this: a runaway trolley is hurtling down the tracks toward five people who will certainly die if nothing happens. You're standing beside the track, and you can pull a lever to divert the trolley onto a side track. There's just one problem: one person is on that side track, and diverting the trolley will kill them instead. [1] Do you pull the lever? This scenario, the canonical form of the trolley problem, has become a central battleground of modern moral philosophy. [2]

For utilitarians, the answer is straightforward: pull the lever. [3] The only thing that matters morally is the outcome, and five lives saved outweigh one life lost. It's mathematics applied to ethics; the numbers dictate your duty. But deontologists see it differently. They argue that actively killing someone is fundamentally wrong, regardless of how many others you save by doing it. [4] There's a crucial moral distinction, they say, between allowing harm to happen and directly causing it yourself. Pulling that lever makes you responsible for that one person's death.

Now here's where the thought experiment gets genuinely unsettling. Imagine a variation called the footbridge dilemma. [5] You're on a bridge above the tracks, and again five people face an oncoming trolley. This time, the only way to stop it is to push a large person off the footbridge onto the tracks below, using their body to halt the trolley. The outcome is identical: one dies, five live. Yet something about this scenario feels profoundly different to most people. [5] The distinction between direct harm and passive choice becomes visceral.

This tension reveals something deeper. Immanuel Kant's categorical imperative holds that actions should be judged by whether they follow universal moral laws. [1] Under Kant's framework, directly causing someone's death to achieve a good outcome would violate those universal principles, even if the mathematics favor it.

The trolley problem didn't emerge from nowhere. Philippa Foot introduced it in a 1967 essay, using it to explore the doctrine of double effect: the idea that there's a moral difference between intended consequences and foreseen side effects. [6] She also examined the distinction between positive duties, like actively helping others, and negative duties, like refraining from harm. [6] Years later, Judith Jarvis Thomson expanded and adapted these scenarios, deepening the philosophical investigation. [1]

What makes the trolley problem so enduring isn't that it settles anything. [6] Rather, it exposes the fault lines between two major ethical frameworks: utilitarians maximize outcomes, while deontologists honor rules and the integrity of individual agency. Most of us, it turns out, are inconsistent. We pull the lever in one scenario and refuse in another. That inconsistency isn't a bug; it's the whole point of the puzzle.

When philosophers debate the trolley problem today, the abstract puzzle carries an urgent practical question: what does an autonomous vehicle actually do when a crash becomes unavoidable? The trolley problem has moved from the seminar room into the algorithm. That's why researchers at MIT launched the Moral Machine experiment, which collected 40 million decisions in ten languages from millions of people across 233 countries and territories to explore the moral dilemmas faced by autonomous vehicles. [7] And here's what made the findings especially revealing: the experiment identified three major clusters of countries with distinct ethical decision-making preferences, differences that correlate with modern institutions and deep cultural traits. [7] In other words, there's no universal answer to how an AV should choose. Your country, your culture, your institutions shape what you think is right. So how do you build that into code?
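One way to picture the first step is a minimal sketch in which each preference cluster becomes a profile of outcome weights. Everything below is a hypothetical illustration: the profile names, attributes, and numbers are assumptions, not values taken from the Moral Machine dataset.

```python
# Hypothetical sketch: region-specific ethical preference profiles.
# Names, attributes, and weights are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class PreferenceProfile:
    """Relative weights for crash-outcome attributes."""
    spare_more_lives: float   # weight on the raw number of lives saved
    spare_the_young: float    # weight on prioritizing younger people
    spare_pedestrians: float  # weight on pedestrians over passengers

# Illustrative profiles loosely echoing the finding that preferences
# cluster differently across groups of countries.
PROFILES = {
    "cluster_a": PreferenceProfile(1.0, 0.6, 0.5),
    "cluster_b": PreferenceProfile(1.0, 0.2, 0.4),
    "cluster_c": PreferenceProfile(1.0, 0.8, 0.6),
}

def score_outcome(lives_saved: int, mean_age: float,
                  pedestrians_spared: int,
                  profile: PreferenceProfile) -> float:
    """Higher score = more preferred outcome under this profile."""
    youth_bonus = max(0.0, (80.0 - mean_age) / 80.0)  # crude youth proxy
    return (profile.spare_more_lives * lives_saved
            + profile.spare_the_young * youth_bonus
            + profile.spare_pedestrians * pedestrians_spared)
```

The sketch makes the experiment's uncomfortable question concrete: should a car's ethics module really load a different profile depending on the jurisdiction it drives in?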

One study proposed an ethical decision-making framework for autonomous vehicles built on public moral preferences. [8] But translating what people say they want into what a machine actually does runs into a wall immediately. A key challenge is quantifying harm and valuing lives: how do you assign a number to a human life? Should an AV prioritize younger passengers over older pedestrians? Should it protect passengers over strangers? [9]
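To see why that wall is real, consider a minimal expected-harm sketch. Every constant here, and the severity scale itself, is an assumption; picking that scale is exactly the contested ethical step the sources point to.

```python
# Hypothetical sketch of the "quantifying harm" step.
# All probabilities and severities are illustrative assumptions.
from typing import NamedTuple

class Casualty(NamedTuple):
    probability_of_injury: float  # 0..1, from perception/physics models
    severity: float               # 0 (unharmed) .. 1 (fatal), also estimated

def expected_harm(casualties: list[Casualty]) -> float:
    """Probability-weighted severity, summed over everyone a maneuver endangers."""
    return sum(c.probability_of_injury * c.severity for c in casualties)

# Two hypothetical maneuvers in an unavoidable-crash scenario:
swerve = [Casualty(0.9, 1.0)]                     # likely kills one pedestrian
brake = [Casualty(0.4, 0.7), Casualty(0.4, 0.7)]  # may badly injure two people

best = min([swerve, brake], key=expected_harm)    # brake: 0.56 < 0.90
```

The arithmetic is trivial; the ethics lives in the inputs. Nothing in this model says whether age or passenger status should change a severity score, and that is precisely where public preferences diverge.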

Researchers tested one approach: an AVWEWM (Attribute Value Weighted EWM) decision-making model matched what people actually wanted the car to do better than the standard EWM model across 40 ethical dilemmas. [10] Yet even this promising framework raises a deeper question. Some research doubts the suitability of data-driven, sub-symbolic deep-learning AI for embodying societal values in AVs, suggesting that symbolic, model-based approaches offer a more structured framework for encoding ethical goals. [11] The implication is significant: maybe machine learning alone isn't the right tool for encoding ethics.
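Assuming EWM here denotes the entropy weight method, the standard reading of that acronym in multi-criteria decision-making, the sketch below shows the textbook version of the underlying idea: criteria that discriminate more sharply between alternatives earn larger weights. It is not a reproduction of the paper's AVWEWM variant, and the criteria and sample matrix are illustrative assumptions.

```python
import numpy as np

def entropy_weights(decision_matrix: np.ndarray) -> np.ndarray:
    """Textbook entropy weight method (EWM).

    Rows are alternatives (candidate maneuvers), columns are criteria
    (e.g., lives at risk, injury severity). Assumes a nonnegative
    matrix with nonzero column sums.
    """
    m = decision_matrix.shape[0]
    # Normalize each criterion column into a probability distribution.
    p = decision_matrix / decision_matrix.sum(axis=0, keepdims=True)
    # Shannon entropy per criterion, treating 0 * log(0) as 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(p > 0, np.log(p), 0.0)
    entropy = -(p * logs).sum(axis=0) / np.log(m)
    # Higher divergence (lower entropy) -> larger normalized weight.
    divergence = 1.0 - entropy
    return divergence / divergence.sum()

# Three hypothetical maneuvers scored on three hypothetical criteria:
X = np.array([[5.0, 0.2, 1.0],
              [1.0, 0.9, 0.0],
              [3.0, 0.5, 2.0]])
print(entropy_weights(X))  # three weights summing to 1.0
```

Note what this buys and what it doesn't: the weights fall out of the data's dispersion rather than anyone's moral judgment, which is exactly why critics in [11] argue a purely data-driven pipeline may fail to encode the values society actually wants enforced.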

Meanwhile, regulators are trying to keep pace. The German Ethics Commission on Automated Driving has published guidelines and frameworks influencing the deployment of AVs, with a focus on ethical considerations [12], and the UNECE World Forum WP.29 is shaping the regulatory landscape and ethical guidelines for autonomous vehicles. [13] In the United States, the US Department of Transportation released a 15-point policy requiring manufacturers to explain how their AVs will handle "ethical considerations." But the absence of a robust, pluralistic meta-ethical framework remains a significant challenge for guiding AV decision-making and ensuring public acceptance. [14]
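A minimal sketch of how such guidelines might meet the symbolic, rule-based direction suggested in [11]: hard rule filters applied before any harm arithmetic. The rule paraphrases the German commission's widely reported position that personal features must not determine who is harmed; the data structures and names are assumptions, not any regulator's specification.

```python
# Hypothetical sketch: regulatory guidelines as hard symbolic constraints
# checked before utility scoring. Structures and names are assumptions.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_harm: float          # e.g., from a model like the earlier sketch
    uses_personal_features: bool  # would rank victims by age, gender, etc.

def permissible(m: Maneuver) -> bool:
    """Hard symbolic filter applied before any utility arithmetic."""
    # Paraphrased guideline: personal features must never decide outcomes.
    return not m.uses_personal_features

def choose(candidates: list[Maneuver]) -> Maneuver:
    allowed = [m for m in candidates if permissible(m)]
    # Harm minimization applies only among rule-compliant maneuvers;
    # fall back to all candidates if none pass the filter.
    return min(allowed or candidates, key=lambda m: m.expected_harm)
```

The design choice is the point: rules act as filters that outrank the numbers, rather than as just another weighted term, which is one concrete reading of what a "structured framework for encoding ethical goals" could mean.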

Thanks for listening to this VocaCast briefing. Until next time.

Sources

[1] The Trolley Problem: A Philosophical Thought Experiment
[2] The Great Divide: Consequentialism vs. Deontology in Moral Philosophy | HEGELCOURSES
[3] Trolley problem: preference utilitarians vs. classical utilitarians vs. Kant
[4] The Trolly Problem: Utilitarianism vs Deontology | by Ashwinjit Singh
[5] Chapter 1: The Trolley Problem (PDF)
[6] Trolley problem | Definition, Variations, Arguments, Solutions, & Facts
[7] The Moral Machine experiment - MIT Media Lab
[8] An ethical decision-making framework for autonomous vehicles ...
[9] Autonomous Accidents: The Ethics of Self-Driving Car Crashes - Viterbi Conversations in Ethics
[10] Applying AVWEWM to ethical decision-making during ...
[11] Addressing ethical challenges in automated vehicles: bridging the ...
[12] Ethical Considerations and Autonomous Vehicles - TÜV SÜD (PDF)
[13] Ethical decision system for autonomous vehicles in ...
[14] Perception of Moral Judgment Made by Machines - MIT Media Lab