Should Automated Vehicles Break The Law?
Earlier this month, the Journal of Law and Mobility hosted our first annual conference at the University of Michigan Law School. The event provided a great opportunity to convene some of the top minds working at the intersection of law and automated vehicles. What struck me most about the conference, put on by an organization dedicated to law and mobility, was how few of the big questions related to automated vehicles are actually legal questions at this point in their development.
The afternoon panel, which asked whether AVs should always follow the rules of the road as written, was emblematic of that disconnect. Should an automated vehicle be capable of running a red light, or of swerving across a double yellow line? Should it always obey the posted speed limit?
The knee-jerk reaction of most people would probably be something along the lines of, “of course you shouldn’t program a car that can break the law.” After all, human drivers are supposed to follow the law. So why should an automated vehicle, which is programmed in advance by a human making a sober, conscious choice, be allowed to do otherwise?
Once you scratch the surface, though, the question becomes much more nuanced. Human drivers break the law in all kinds of minor ways to maintain safety, or in response to the circumstances of the moment. A human driver will run a red light if there is no cross-traffic and the car bearing down from behind shows no sign of slowing. A human will drive into the wrong lane or onto the shoulder to avoid a downed tree branch, or a child rushing into the street. A human driver may speed away from a nearby car that is behaving erratically. All of these actions violate the law, yet in the right circumstances each may be taken in the interest of safety. Even knowing they had broken the law, a human driver ticketed in such circumstances would feel the penalty was unjust.
If automated vehicles should be able to break the law in at least some circumstances, the question shifts: which circumstances? Answering that question is beyond the scope of this post, and at the moment I don’t think anyone has the right answer. Instead, the point of this post is to highlight the kind of moment-to-moment decisions every driver makes, every day, to keep themselves and those around them safe. The rules of the road provide a rough cut, codifying what will be best for most people most of the time. They cannot possibly anticipate every situation and create a special legal rule for each one. If they tried, the traffic laws would quickly grow to fill several libraries.
In my view, the question of whether an AV should be able to break the law is only tangentially a legal question. After arriving at an answer of “probably sometimes,” the question quickly shifts to when, under what circumstances, and whether the law needs to adapt to make such maneuvers legal. These questions have legal aspects, but they are also moral and ethical questions weighted with the full range of human driving experience. Answering them will be among the most important and difficult challenges for the AV industry in the coming years.