The Trolley, And It’s Not A Problem

The “Trolley Problem” has been buzzing around for a while now, so much so that it has become the subject of large empirical studies aimed at finding a solution as close to “our values” as possible and, more casually, of an episode of The Good Place.

Could it be, however, that the trolley problem isn’t one? In a recent article, the EU Observer, an investigative not-for-profit outlet based in Brussels, lashed out at the European Commission for its “tunnel vision” with regard to connected and automated vehicles (CAVs) and for seeming to embrace the benefits of this technological and social change without an ounce of doubt or skepticism. While there are certainly things to be worried about when it comes to CAV deployment (see previous posts on this very blog by fellow bloggers here and here), the famed trolley might not be one of them.

The trolley problem seeks to illustrate one of the choices that a self-driving algorithm must – allegedly – make. Faced with a situation where the only alternative to killing is killing, the trolley problem asks who is to be killed: the young? The old? The pedestrian? The foreigner? Those who put forward the trolley problem usually do so to show that, as humans, we are faced with morally untenable alternatives when coding such algorithms, like deciding who is to be saved in an unavoidable crash.

The trolley problem is not a problem, however, because it makes a number of assumptions – too many. The result is a hypothetical scenario which is simple, almost elegant, but mostly blatantly wrong. One such assumption is the rails. Not necessarily the physical ones, like those of actual trolleys, but the ones on which the whole problem is cast. CAVs are not on rails, in any sense of the word, and their algorithms will include the opportunity to go “off-rails” when needed – such as getting onto the shoulder or the sidewalk. The rules of the road already incorporate a certain amount of flexibility, and such flexibility will be built into the algorithms, as the sketch below illustrates.
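To make that point concrete, here is a minimal, purely illustrative sketch – not any manufacturer’s actual planner – of how a motion planner might score many candidate paths, including “off-rails” options such as the shoulder, rather than choosing between two fixed tracks. All names, weights and numbers are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical sketch: score many candidate paths, including "off-rails"
# ones, instead of choosing between two fixed tracks. Weights are invented.

@dataclass
class Candidate:
    label: str             # e.g. "stay in lane", "move onto shoulder"
    collision_risk: float  # estimated probability of any collision (0..1)
    rule_deviation: float  # how far the path departs from normal road rules
    comfort_cost: float    # penalty for harsh braking or steering

def score(c: Candidate) -> float:
    # Avoiding a collision dominates; bending the rules of the road
    # (shoulder, hard swerve) is permitted but mildly penalized.
    return 100.0 * c.collision_risk + 1.0 * c.rule_deviation + 0.1 * c.comfort_cost

def choose(candidates: list[Candidate]) -> Candidate:
    return min(candidates, key=score)

if __name__ == "__main__":
    options = [
        Candidate("stay in lane",       collision_risk=0.90, rule_deviation=0.0, comfort_cost=0.2),
        Candidate("brake hard in lane", collision_risk=0.40, rule_deviation=0.0, comfort_cost=0.8),
        Candidate("move onto shoulder", collision_risk=0.05, rule_deviation=0.6, comfort_cost=0.5),
    ]
    print(choose(options).label)  # here the "off-rails" option wins
```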

Moreover, the very purpose of the constant sensor input processed by the driving algorithm is precisely to avoid putting the CAV in a situation where the only remaining options are collision or collision.

But what if? What if a collision is truly unavoidable? Even then, it is highly misleading to portray CAV algorithm design as a job where one has to write a piece of code specific to every single decision to be made in the course of driving. The CAV will never be faced with an input of the kind in which we all too often present the trolley problem: go left and kill this old woman, go right and kill this baby. The driving algorithm will certainly not understand the situation as one in which it would kill someone; it may understand that a collision is imminent and that multiple paths are closed. What would it do, then? Brake, I guess, and steer to try to avoid a collision, like the rest of us would.
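For the same reason, an unavoidable-collision fallback is more plausibly a generic “brake and steer toward the lowest-risk gap” routine than a victim-selection routine. Here is a hedged sketch of what such a fallback might look like; the function names, the deceleration figure and the sensor snapshot are entirely assumed for illustration.

```python
# Illustrative only: an emergency fallback that knows nothing about who or
# what the obstacles are. It brakes as hard as possible and steers toward
# whichever reachable heading leaves the most free space.

def emergency_maneuver(obstacle_distances_m: dict[float, float],
                       speed_mps: float) -> tuple[float, float]:
    """
    obstacle_distances_m maps a steering angle (radians, relative to the
    current heading) to the free distance before the nearest obstacle.
    Returns (brake_command, steering_angle): full braking plus the heading
    with the most free space, i.e. the lowest expected impact speed.
    """
    max_decel = 8.0  # m/s^2, roughly a hard brake on dry asphalt (assumed)
    best_angle = max(obstacle_distances_m, key=obstacle_distances_m.get)

    free_distance = obstacle_distances_m[best_angle]
    # Speed remaining at the obstacle under constant full braking, if any.
    residual = max(speed_mps**2 - 2.0 * max_decel * free_distance, 0.0) ** 0.5
    print(f"steer {best_angle:+.2f} rad, impact speed ~{residual:.1f} m/s")

    return 1.0, best_angle  # full brake, steer toward the largest gap

if __name__ == "__main__":
    # Hypothetical sensor snapshot: straight ahead is blocked at 8 m,
    # a slight right swerve has 25 m of free space.
    emergency_maneuver({0.0: 8.0, -0.3: 12.0, +0.3: 25.0}, speed_mps=15.0)
```

Nothing in such a routine identifies, let alone ranks, the people it might hit; it only reasons about free space and speed, which is the whole point of the argument above.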

Maybe what the trolley problem truly reveals is that we are uneasy with automated cars causing accidents – that, because they are machines, we are much more comfortable with the idea that they will be perfect and coded so that no accident may ever happen. If, as a first milestone, CAVs are as safe as human drivers, that would certainly be a great scientific achievement. I recognize, however, that it might not be enough for public perception, but that speaks more of our relationship to machines than of any truth behind the murderous trolley. All in all, it is unfortunate that such a problem continues to keep brains busy while there are more tangible problems (such as what to do with all those batteries) which deserve research, media attention and political action.
