Trolley Problem & Modern Ethics

5 min briefing · April 03, 2026 · 13 sources

Transcript

When a self-driving car must choose between hitting a pedestrian or swerving into its own passengers, who should it kill? That's the question researchers put to millions of people across the globe. The MIT Moral Machine experiment collected 40 million decisions in ten languages from millions of people in 233 countries and territories to explore this moral dilemma. [1] What emerged was striking: people around the world do not agree on how machines should handle life-and-death choices. The experiment identified three major clusters of countries with distinct ethical preferences for autonomous vehicles, and correlated those differences with modern institutions and deep cultural traits. [2] In other words, the ethics of the trolley problem aren't universal.

They're shaped by where you live, what your society values, and how your culture has historically solved difficult choices.

This discovery created an urgent problem for engineers and policymakers. If we can't agree on what's right, how do we program it into a car? Researchers have proposed an ethical decision-making framework for autonomous vehicles that builds public moral preferences directly into the AV's decision logic. [3] To test whether this approach actually works, scientists evaluated the AVWEWM model (the Attribute Value Weighted EWM decision-making model) against the standard EWM model on 40 ethical dilemmas for autonomous vehicles. [3] The AVWEWM performed better at matching what people actually wanted.
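To make the mechanics concrete, here is a minimal sketch of the general idea, assuming EWM refers to the entropy weight method and that public preferences enter as attribute weights. The maneuvers, attribute scores, and survey vector below are invented for illustration; this is not the published AVWEWM model.

```python
import numpy as np

# Hypothetical decision matrix: rows are candidate maneuvers, columns are
# attributes such as pedestrian safety, occupant safety, and legality.
# Higher is better; all numbers here are invented for illustration.
options = ["swerve_left", "brake_straight", "swerve_right"]
scores = np.array([
    [0.2, 0.9, 0.6],
    [0.7, 0.5, 0.8],
    [0.4, 0.6, 0.3],
])

def entropy_weights(m: np.ndarray) -> np.ndarray:
    """Entropy weight method: attributes that vary more across the
    options are more informative and receive larger weights."""
    p = m / m.sum(axis=0)                       # column-wise proportions
    k = 1.0 / np.log(len(m))                    # normalization constant
    entropy = -k * (p * np.log(p)).sum(axis=0)  # per-attribute entropy
    diversity = 1.0 - entropy
    return diversity / diversity.sum()

# Plain EWM ranking vs. a ranking whose weights are rescaled by a
# (made-up) vector of surveyed public priorities.
w_ewm = entropy_weights(scores)
survey = np.array([0.5, 0.3, 0.2])   # hypothetical moral preferences
w_pref = w_ewm * survey / (w_ewm * survey).sum()

for label, w in (("EWM", w_ewm), ("preference-weighted", w_pref)):
    best = options[int(np.argmax(scores @ w))]
    print(f"{label} picks: {best}")
```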

But simply knowing what people prefer doesn't solve the deeper problem of actually building the logic that embodies those preferences in a machine.

Governments began stepping in. The German Ethics Commission on Automated Driving has issued guidelines that shape how AVs are deployed, with a focus on ethical considerations. [4] The UNECE World Forum for Harmonization of Vehicle Regulations (WP.29) is developing the regulatory landscape and ethical guidelines for autonomous vehicles. [5] Yet here's where things get genuinely thorny: some research questions whether data-driven, sub-symbolic deep-learning AI is suited to embodying societal values in AVs, suggesting that symbolic, model-based approaches offer a more structured framework for encoding ethical goals. [6] The absence of a robust, pluralistic meta-ethical framework remains a significant obstacle to guiding AV decision-making and ensuring public acceptance.
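In miniature, a symbolic, model-based encoding might look like the sketch below: an ethical goal written as an explicit, auditable rule that vetoes candidate maneuvers before any optimization runs, rather than living implicitly in a trained network's weights. The rule, maneuvers, and harm estimates are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    crosses_sidewalk: bool
    expected_harm: float  # hypothetical unitless estimate

# Symbolic approach: the ethical goal is an explicit, inspectable rule.
RULES = [
    ("never leave the roadway toward pedestrians",
     lambda m: not m.crosses_sidewalk),
]

def permitted(m: Maneuver) -> bool:
    return all(check(m) for _, check in RULES)

candidates = [
    Maneuver("brake_straight", crosses_sidewalk=False, expected_harm=0.4),
    Maneuver("swerve_onto_sidewalk", crosses_sidewalk=True, expected_harm=0.2),
]

# Apply hard rules first, then minimize estimated harm among what remains.
legal = [m for m in candidates if permitted(m)]
choice = min(legal, key=lambda m: m.expected_harm)
print("chosen:", choice.name)  # brake_straight: the rule vetoes the swerve
```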

A new algorithm incorporating Robert Alexy's Weight Formula and Rawls' Maximin Principle showed potential to reduce loss of life by approximately 24 percent in simulation tests. [3] [7] Meanwhile, the EU AI Act imposes risk-based obligations such as traceability, transparency, and human oversight, but refrains from mandating specific moral principles for AV decision-making in harm scenarios, opting for procedural safeguards instead. [8]
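Here is a hedged sketch of the maximin half of that idea: among candidate maneuvers, pick the one whose worst-affected person fares best. The survival probabilities are invented, and Alexy's Weight Formula, which balances competing principles by their intensity, abstract weight, and evidential reliability, is omitted for brevity.

```python
# Hypothetical per-person survival probabilities under each maneuver.
outcomes = {
    "brake_straight": {"pedestrian": 0.30, "occupant_1": 0.95, "occupant_2": 0.95},
    "swerve_left":    {"pedestrian": 0.90, "occupant_1": 0.60, "occupant_2": 0.55},
}

def maximin(options: dict) -> str:
    # Each option's "floor" is its worst-off person's survival probability;
    # Rawls' maximin chooses the option with the highest floor.
    return max(options, key=lambda name: min(options[name].values()))

print(maximin(outcomes))  # swerve_left: its floor (0.55) beats 0.30
```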

But the real challenge lurking beneath every algorithm is more granular than it sounds. A key difficulty in programming autonomous vehicles is quantifying harm and valuing lives: distinguishing between pedestrians and occupants, factoring in age, and deciding how to minimize negative outcomes once an accident becomes unavoidable. These aren't abstract philosophical questions anymore. They're parameters that engineers must write into software, decisions that shape whether a vehicle prioritizes one life over another in a split-second scenario no simulation can fully predict. The trolley problem, once a thought experiment for philosophy classrooms, has become a technical and legal challenge reshaping how we build machines that make choices about human lives.
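Here is what such a parameter can literally look like in code: a hypothetical expected-harm function whose coefficients are policy choices, not physical constants. Nothing below comes from a real AV stack; every name and number is an assumption for illustration.

```python
# Hypothetical harm function showing how ethical 'parameters' become code:
# each coefficient is a policy decision, not a measured quantity.
OCCUPANT_WEIGHT = 1.0    # relative priority of people inside the vehicle
PEDESTRIAN_WEIGHT = 1.0  # equal by default; a regulator could mandate this

def expected_harm(p_occupant_injury: float, n_occupants: int,
                  p_pedestrian_injury: float, n_pedestrians: int) -> float:
    """Expected weighted injuries for one candidate maneuver."""
    return (OCCUPANT_WEIGHT * p_occupant_injury * n_occupants
            + PEDESTRIAN_WEIGHT * p_pedestrian_injury * n_pedestrians)

# Two invented maneuvers for a two-occupant car and one pedestrian.
straight = expected_harm(0.05, 2, 0.80, 1)  # likely hits the pedestrian
swerve   = expected_harm(0.40, 2, 0.05, 1)  # shifts risk to the occupants
print("straight:", straight, "swerve:", swerve)
```

Changing either weight changes which maneuver the function prefers, which is exactly why these coefficients are contested.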

The trolley problem wasn't dreamed up in a vacuum. It emerged from real philosophical debates about how we should act when faced with impossible choices. What began as a way to test philosophical intuitions has become something far more urgent: a framework for thinking about what happens when we ask machines to choose who lives and who dies. The trolley problem was first proposed by British philosopher Philippa Foot in 1967, in an essay examining the doctrine of double effect.

Years later, American philosopher Judith Jarvis Thomson coined the term "Trolley Problem" in her 1976 essay "Killing, Letting Die, and the Trolley Problem." [9] That shift from classroom to code matters because AI systems are increasingly making decisions in fields like business, healthcare, and manufacturing, raising questions about their capacity for independent life-or-death choices. [9] [10]

There is widespread concern about AI's ability to assess ethical challenges and respect societal values in its decision-making. [11] Autonomous vehicles bring this into sharp focus, prompting ethical questions such as whether an AI should prioritize the safety of passengers or of those on the ground during a catastrophic failure. [12] Imagine the car's brakes fail. The algorithm must decide in milliseconds: swerve and hit the pedestrian, or brake harder and crash into the guardrail, potentially killing everyone inside. Autonomous machines such as self-driving cars and industrial robots could make errors that cause deaths humans would have avoided, even though responsibly deployed robotics could save more lives overall.

That paradox cuts to the heart of the dilemma: machines might actually prevent more deaths than they cause. [13] But that doesn't make the individual choice easier. The trolley problem serves as a conceptual tool to explore the ethical dimensions of AI decision-making, particularly in critical areas like autonomous vehicles, highlighting difficulties in encoding ethical principles into technology.

Thanks for listening to this VocaCast briefing. Until next time.

Sources

[1] The Moral Machine experiment (MIT Open Access Articles)
[2] A Deeper Look at Autonomous Vehicle Ethics: An Integrative Ethical Decision-Making Framework to Explain Moral Pluralism (Frontiers)
[3] An ethical decision-making framework for autonomous vehicles ...
[4] Ethical frameworks for automated vehicles (Springer Nature)
[5] Ethical decision system for autonomous vehicles in ...
[6] An ethical decision-making framework for autonomous ...
[7] An Ethical Decision Making Algorithm for Autonomous Vehicles During an Inevitable Collision (Proceedings of the 2024 4th International Conference on Big Data, Artificial Intelligence and Risk Management)
[8] According to Whose Morals? The Decision-Making Algorithms of Self-Driving Cars and the Limits of the Law
[9] The Trolley Problem – An Ethical Conundrum That Persists Through the Years (Answers In Reason)
[10] The Trolley Problem: AI Revived This Moral Dilemma (Medium)
[11] Artificial Intelligence - What Is The Trolley Problem?
[12] The Trolley Problem and autonomous flight
[13] AI and machine learning has its own trolley problem debate