The Problem of Algorithmic Bias in Autonomous Vehicles

The common story of automated vehicle safety is that by eliminating human error from the driving equation, cars will act more predictably, fewer crashes will occur, and lives will be saved. That future is far from guaranteed, though. Questions remain about whether connected and automated vehicles (CAVs) will truly drive more safely than humans in practice, and for whom they will be safer. In the remainder of this post, I will address this “for whom” question.

A recent study from Benjamin Wilson, Judy Hoffman, and Jamie Morgenstern at Georgia Tech found that state-of-the-art object detection systems – the type used in autonomous vehicles – show higher error rates when detecting darker-skinned pedestrians than lighter-skinned pedestrians. Controlling for factors such as time of day and obstructed views, the technology was five percentage points less accurate at detecting people with darker skin tones.
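
To make the idea of a disaggregated error rate concrete, here is a minimal sketch of how one might compare a pedestrian detector’s miss rate across skin-tone groups. This is an illustration only, not the Georgia Tech study’s actual pipeline; the group labels, field names, and sample data are hypothetical.

```python
# Hypothetical sketch: comparing pedestrian-detection miss rates across
# skin-tone groups. Field names and example data are illustrative.

from dataclasses import dataclass

@dataclass
class Annotation:
    skin_tone_group: str   # e.g. "lighter" or "darker" (a Fitzpatrick-style grouping)
    detected: bool         # did the detector find this annotated pedestrian?

def miss_rate_by_group(annotations):
    """Return the fraction of annotated pedestrians the detector missed, per group."""
    totals, misses = {}, {}
    for a in annotations:
        totals[a.skin_tone_group] = totals.get(a.skin_tone_group, 0) + 1
        if not a.detected:
            misses[a.skin_tone_group] = misses.get(a.skin_tone_group, 0) + 1
    return {group: misses.get(group, 0) / totals[group] for group in totals}

# Toy example of a detector that misses darker-skinned pedestrians more often.
sample = [
    Annotation("lighter", True), Annotation("lighter", True), Annotation("lighter", False),
    Annotation("darker", True), Annotation("darker", False), Annotation("darker", False),
]
print(miss_rate_by_group(sample))
# {'lighter': 0.333..., 'darker': 0.666...}
```

A gap between the per-group numbers, after controlling for confounds like lighting and occlusion, is the kind of disparity the study reports.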

The Georgia Tech study is far from the first report of algorithmic bias. In 2015, Google found itself at the center of controversy when the algorithm behind Google Photos incorrectly classified some black people as gorillas. More than two years later, Google’s temporary fix – removing the label “gorilla” from the program entirely – was still in place. The company says it is working on a long-term fix to its image recognition software. The persistence of that stopgap several years after the initial firestorm, however, suggests either the difficulty of achieving a real solution or the lack of any serious, coordinated response across the tech industry.

Algorithmic bias is a serious problem that must be tackled with a serious investment of resources across the industry. In the case of autonomous vehicles, the problem could be literally life and death. The potential for bias in automated systems raises hard moral and legal questions that demand answers. If a car is safer overall but more likely to run over a black or brown pedestrian than a white one, should that car be allowed on the road? What is the safety baseline against which such a vehicle should be judged: that an AV should be equally (and, one hopes, rarely) likely to hit any given pedestrian, or that it should hit any given pedestrian less often than a human-driven vehicle would? Given our knowledge of algorithmic bias, should an automaker face greater damages when its vehicle hits a black or brown pedestrian than when it hits a white one? Do tort law claims, like design defect or negligence, provide adequate incentive for automakers to address algorithmic bias in their systems? Or should the government set up a uniform system of regulation and testing around the detection of algorithmic bias in autonomous vehicles and other advanced, potentially dangerous technologies?

These are questions that I cannot answer today. But as the Georgia Tech study and the Google Photos scandal demonstrate, they are questions that the AV industry, government, and society as a whole will need to address in the coming years.
