An Introduction to Facial Recognition Software and Transportation

This blog post is the first in a series about facial recognition software in public and private transportation, as well as the broader public policy concerns that facial recognition tools raise.

Volume I: An Introduction

What is the point of facial recognition software in vehicles? For that matter, what is facial recognition software? We use it to unlock our iPhones and tag our loved ones in photos on social media, but facial recognition software can also serve a variety of functional purposes throughout society, such as identifying suspects of violent crime or survivors of modern slavery. One of the most pressing questions surrounding facial recognition software in vehicles is how we can protect individuals’ privacy.

Facial recognition software is a form of biometric security. It is used to identify people by their faces in photos, in videos, or in real time. Put simply, a camera scans a scene and, once a face is detected, the software analyzes the image. The analysis converts the image into data based on the person’s facial features; think of this as turning the face into a kind of numerical signature. The software then tries to find a match by comparing that signature against images in a database.
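For readers who want to see the moving parts, below is a minimal sketch of that detect-analyze-match pipeline in Python, using the open-source face_recognition library. The image filenames and the tiny “database” are hypothetical placeholders, and real deployments are far more elaborate.

```python
# A minimal sketch of the detect -> analyze -> match pipeline using the
# open-source face_recognition library. Filenames and the "database" below
# are hypothetical placeholders for illustration only.
import face_recognition

# 1. Detection: load a photo and locate any faces in it.
probe_image = face_recognition.load_image_file("camera_frame.jpg")  # hypothetical frame
face_locations = face_recognition.face_locations(probe_image)

# 2. Analysis: convert each detected face into a 128-number encoding --
#    the numerical signature the software actually compares.
probe_encodings = face_recognition.face_encodings(probe_image, face_locations)

# 3. Matching: compare each encoding against known encodings.
#    Here the "database" is just a dict of name -> encoding, built from
#    a hypothetical enrollment photo that contains one face.
known_people = {
    "Alice Example": face_recognition.face_encodings(
        face_recognition.load_image_file("alice.jpg")
    )[0],
}

names = list(known_people.keys())
known_encodings = list(known_people.values())

for encoding in probe_encodings:
    distances = face_recognition.face_distance(known_encodings, encoding)
    best = distances.argmin()
    # Smaller distance means more similar; 0.6 is the library's conventional cutoff.
    if distances[best] < 0.6:
        print(f"Match: {names[best]} (distance {distances[best]:.2f})")
    else:
        print("No match in the database")
```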

Though the technology may make people uneasy, it is important to recognize that it has been publicly available for some time. Mercedes-Benz introduced the Mercedes-Benz User Experience (MBUX) at the beginning of 2018, and it is considered one of the most comprehensive automaker-built infotainment systems to date. Among other features that enhance the driver experience, MBUX includes facial recognition that senses the driver’s fatigue or discomfort and responds by adjusting the music or climate controls to help keep the driver from falling asleep behind the wheel.

MBUX also has a voice recognition feature called “Hey Mercedes,” which works much like in-home voice assistants such as Amazon’s Alexa and lets drivers place a phone call or enter a destination into the navigation system without taking their hands off the steering wheel. Jaguar Land Rover has been using facial recognition since 2019 to similarly read driver fatigue, with the ultimate goal of using machine learning to track the driver’s alertness and patterns of discomfort.

In addition to personal vehicles, facial recognition software may be used to monitor rideshare vehicles to offer a safer experience. Uber has already used facial recognition during the pandemic to determine whether drivers were wearing masks. The Uber app previously used facial recognition to verify the driver’s identity before each ride began, and the mask verification was an extension of that feature. Experts have also talked extensively about the potential safety benefits of facial recognition on public transportation. Companies such as FaceFirst aim to use facial recognition on or around public transit to identify domestic terrorists, child abductors, missing persons, survivors of modern slavery and human trafficking, and people on ‘Do Not Fly’ lists, as well as to deter petty theft.

All of this is to say that facial recognition software in vehicles may be used to create a safer and more comfortable transportation experience. But what are some of the potential concerns about facial recognition technology, and how do we reconcile those concerns with its potential benefits?

It is impossible to catalog the full range of concerns people have about expanding facial recognition, but one major concern is data breaches. Clearview AI, for example, is a facial recognition company that touts itself as “the world’s largest facial network” and sells its product to law enforcement agencies to “generate high-quality investigative leads.” Clearview’s database of roughly 3 billion images was scraped largely from social media, a practice that violates most social media platforms’ terms of service, and last year the company was breached by hackers who gained access to its client list. The incident has raised serious questions about how individuals’ photos are obtained, how the images are stored, and who has access to them.

A second concern is racism. Tech racism is a large topic that deserves its own blog post. As an overview, it is important to note that facial surveillance is largely used by law enforcement agencies, a dynamic already loaded with racial bias. Joy Buolamwini and Timnit Gebru’s 2018 research found that commercial facial recognition systems misclassified darker-skinned women up to roughly 35% of the time, while the error rate for lighter-skinned men was no more than 0.8%. A false match can result in a wrongful arrest, a wrongful conviction, or even violence. Moreover, the facial recognition systems used by police rely on mugshot databases for identification, which exacerbates racist policing patterns of the past: because of over-policing, Black people are the most likely to be arrested and therefore disproportionately likely to appear in those databases, so the software is more likely to return a match for Black faces. Coupled with the fact that the software is less likely to correctly identify Black features, the disproportionately high number of Black mugshots in the databases ensures that Black people are matched more frequently and less accurately. This is an especially serious problem in contexts such as traffic stops and crowd control.
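To make that compounding effect concrete, the sketch below walks through the arithmetic with purely hypothetical numbers; the search volume, database shares, and error rates are invented for illustration and are not drawn from any study or real system.

```python
# A back-of-the-envelope illustration of how database over-representation and
# higher error rates compound. Every number here is hypothetical.
def expected_false_matches(searches, share_of_database, false_match_rate):
    """Rough expected count of wrongful 'hits' for one demographic group."""
    return searches * share_of_database * false_match_rate

searches_per_year = 10_000  # hypothetical number of police searches

# Hypothetical: group A is over-represented in the mugshot database AND the
# software has a higher false-match rate on group A's faces.
group_a = expected_false_matches(searches_per_year, share_of_database=0.50, false_match_rate=0.05)
group_b = expected_false_matches(searches_per_year, share_of_database=0.25, false_match_rate=0.01)

print(f"Group A expected false matches: {group_a:.0f}")  # 250
print(f"Group B expected false matches: {group_b:.0f}")  # 25
# A 2x over-representation multiplied by a 5x error rate yields roughly
# 10x the expected wrongful matches for group A in this toy example.
```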

Finally, there are many unanswered constitutional questions about the right to privacy and freedom of assembly. A handful of police departments, including Pittsburgh law enforcement, have used facial recognition software to scan crowds of protesters, identify anyone with outstanding warrants, and arrest them accordingly. On Last Week Tonight, John Oliver described this practice as “the most insidious way” to prevent people from exercising their First Amendment right to assemble freely. Moreover, countless privacy questions about facial recognition in public spaces, including in rideshare vehicles and on public transit, remain unanswered. Ultimately, people may not want to be under surveillance when using public or private transportation for a variety of reasons, and public policy will have to strike a balance between intelligence gathering and privacy.

The uses and concerns outlined above are far too numerous to address in a single blog post. As such, this post is the first in a series that considers the role of facial recognition software in transportation technology and how to reconcile its potential as a safety tool with the civil rights issues it raises. In this series, we will parse out some of the nuances of facial recognition software in privately owned vehicles, rideshares, and public transportation, and consider how best to implement the technology widely.
