Somewhere out in the quiet countryside, where the technology and infrastructure have yet to catch up with the modernised cities miles away, you stand alone in a tower, in front of the control panel of a railroad. After a long while with little happening, a group of young people approach the tracks. One stops to smoke a cigarette, whilst the others stand in a group slightly ahead, talking amongst themselves. By the time you wake up from your daydream, it’s already too late. A Class 4 freight train comes barrelling down the tracks, less than a minute away. You have a matter of seconds to make a decision. If you do nothing, you’ll be forced to watch a group of teenagers be mown down on the spot. Your hand reaches for the controls. If you pull the lever, the train will switch tracks and only the loner will die, but their blood shall be on your hands. What do you do?
* * * * * *
Right now, as you read this article, a very similar decision is being made. An estimated 3,400 people die every day as a result of fatal car crashes, totalling around 1.25 million a year and placing road injuries amongst the top ten causes of death globally (Wegman, 2017). This could all change within the next decade with the development of self-driving cars, programmed so sophisticatedly that the number of deaths attributed to road accidents could be cut in half. Yet, with Elaine Herzberg’s death in 2018 marking the first case of a pedestrian killed by a self-driving car, one can’t help but wonder what the future entails.
The above scenario is an example of the famous “trolley problem”, a term coined by Judith Jarvis Thomson in 1976, and it is currently being applied to the ethics of self-driving cars and their programming. In this sense, “self-driving” is something of a misnomer. They are, in reality, pre-driven cars: when the vehicle faces such a situation, the decision has already been made by the programmer. It’s easy to see where the ethical issues arise: if we programme the “logical” solution, the car deliberately kills one person to save the lives of a group; yet if we programme it to do nothing, many more lives may be needlessly lost. Either way, instead of having a human make this life-or-death decision based on the surrounding contextual factors, we have an algorithm making it based on the data it has been fed.
The solution might seem obvious to us. In an ideal society, surely we would want the algorithms that make such life-changing decisions to be based upon the beliefs and values we follow? Recent research into exactly what these values are has exposed the dark side of society’s value-judgements. In 2016, MIT launched the “Moral Machine”, a survey in which participants are presented with a series of scenarios similar to the trolley problem (Awad et al., 2018). The data was drawn from the responses of around two million people from 133 different countries, and gathering this information from such a huge number of people produced some rather disturbing results. On average, most people would rather hit a dog than a criminal. In fact, criminals were close to the bottom of the ranking of the value of human life, with doctors and athletes close to the top. Amongst the other factors determining the value of a life were age, weight and profession. Holding these beliefs personally is vastly different from encoding them into a system like this. Imagine a system in which the value of your life is determined by society’s judgement of you: who you worked for, how old you were or how much you weighed.
Uploading the values of neoliberal capitalist society onto this new “smart” network of infrastructure and transport would only lead to the further entrenchment of the tiered structure of society, in which the socio-political reasons a person may come to fit one of these categories are forgotten. Criminality, for example, will be seen less as a result of systemic inequality and more as a feature of someone’s identity that determines the value of their life. The seemingly noble effort to reduce fatalities may result in handing over power to an increasingly unaccountable section of society. As Kahneman (2011) points out, people tend to prefer negative outcomes to be the result of human error, as opposed to a miscalculation by an algorithm or machine, because that leaves someone accountable for the decision. Yet it is likely that the companies behind the new self-driving technology will push for laws dictating that they cannot be sued for potential crashes. If this happens, the implication is that a large corporation will be able to judge the value of your life based on factors out of your control, and could not be held accountable for taking your life as a result.
This is the neoliberal dream: to reduce every decision to the responsibility of the individual (Rennix and Robinson, 2017). It suits the existing power structures to have us ignore why human life should be ranked by sociocultural labels at all, and the reasons people end up with such labels, and instead simply ask us to choose who lives and who dies. This way, the totality of the responsibility for fatal car crashes, the reasons they happen, and even the political causes of inequality, is shifted from the state and corporations onto you, the individual. Of course, it’ll all be sold to us under the guise of saving lives.
Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., … & Rahwan, I. (2018). The Moral Machine experiment. Nature, 563(7729), 59-64.
Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
Rennix, B., & Robinson, N. J. (2017). The Trolley Problem Will Tell You Nothing Useful about Morality. Current Affairs.
Thomson, J. J. (1976). Killing, letting die, and the trolley problem. The Monist, 59(2), 204-217.
Wegman, F. (2017). The future of road safety: A worldwide perspective. IATSS Research, 40(2), 66-71.