Technology Racism and Facial Recognition Software in Transportation

This blog post is the third in a series about facial recognition software (FRS) in various forms of public and private transportation, as well as the broader public policy concerns raised by facial recognition tools. More posts about the relationship between transportation technology, FRS, and modern slavery will follow.

Racism has been ingrained in America for centuries. Though we have come a long way, our history of racism is still very much present in the inner workings of our society, especially when it comes to technology and transportation. So how has the history of our country encouraged tech racism in transportation? In her book Dark Matters: On the Surveillance of Blackness, Professor Simone Browne demonstrates that we can trace the emergence of surveillance technologies and practices back to the trans-Atlantic slave trade.

Early surveillance in this country began in the 18th century with the “lantern laws.” Simply put, Black, mixed-race, and Indigenous people were required to carry candle lanterns when walking in the streets after dark without a white person in their company, so that enslaved people could be easily identified. The “lantern laws” were a prime example of early supervisory technology, and breaking them carried punishments. Not only were they a form of early surveillance, they were a form of control: the “lantern laws” made it possible for the Black body to be controlled and helped to maintain racial boundaries.

In the 1950s and 60s, government surveillance programs like the FBI’s “COINTELPRO” targeted Black people in a systematic attempt to spy on and disrupt activists in the name of “national security.” More recently, we have learned that FBI surveillance programs target so-called “Black Identity Extremists.” Put simply, race plays a major role in a policy term such as “Black Identity Extremist” because the FBI attempts to define a movement where none exists. Essentially, a group of Black individuals connecting ideologically is considered a threat because they are Black.

We can see how past laws and practices connect to the present. Today, police surveillance cameras are disproportionately installed in Black and Brown neighborhoods to keep a constant watch. Beyond the disproportionate rate at which Black and Brown communities are watched, the ACLU notes additional ways the government could misuse cameras, including voyeurism that has targeted women, spying on and harassing political activists, and even outright criminal purposes. Governmental surveillance programs have been the most recent in a string of periodic public debates around domestic spying.

Racial bias is a significant factor when it comes to facial recognition technology in transportation, especially when it is used by law enforcement agencies. Black people are incarcerated at more than five times the rate of white people. Black people receive harsher prison sentences, are more likely to be held on bail during pretrial proceedings, and are dying disproportionately at the hands of the police. Racial biases are still very much present in the technology that law enforcement agencies use to aid in arrests.

To start, technology itself can be racially biased. In their 2018 study, Joy Buolamwini and Timnit Gebru brought to the forefront how algorithms can be racist. For example, law enforcement uses digital technology for surveillance and crime prediction on the theory that it will make policing more accurate, efficient, and effective. But digital technology such as facial recognition can become a tool for racial bias rather than effective policing.

This technology can be beneficial in theory, but when people of color are misidentified at disproportionate rates, we must reconsider the algorithms and the purpose behind facial recognition. People of color are misclassified over a third of the time, while white people rarely suffer from these mistakes. For example, Joy Buolamwini and Timnit Gebru’s 2018 study found that the datasets used to identify people were overwhelmingly composed of lighter-skinned faces. Black women are misidentified approximately 35% of the time, versus roughly 0.8% for white men. Additionally, in 2019, a national study of over 100 facial recognition algorithms found that they did not work well on Black and Asian faces.
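Disparities like these surface when evaluation results are broken out by demographic group rather than averaged together. The sketch below is a minimal illustration of that kind of disaggregated audit; the group labels and records are hypothetical placeholders, not data from the studies cited above.

```python
from collections import defaultdict

# Hypothetical evaluation records: (demographic_group, was_misidentified).
# These entries are illustrative placeholders, not results from any real audit.
results = [
    ("darker-skinned women", True),
    ("darker-skinned women", False),
    ("lighter-skinned men", False),
    ("lighter-skinned men", False),
    # ... a real evaluation would include thousands of records per group
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, misidentified in results:
    totals[group] += 1
    if misidentified:
        errors[group] += 1

# A single overall accuracy number hides the gap; per-group rates reveal it.
for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group}: {rate:.1%} misidentification rate over {totals[group]} samples")
```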

With many software business models increasingly relying on facial recognition tech, these error-prone algorithms exacerbate the already-pervasive racial biases against people of color. Moreover, false matches feed larger problems, such as mass incarceration. All it takes is one false match, which can lead to lengthy interrogations, placement on a police watch list, dangerous police encounters, false arrest, or worse, wrongful conviction. A false match can come from nearly anything. For example, in New Jersey, Nijeer Parks was arrested for a crime he did not commit based on a bad face recognition match. The bad match came from police comparing Mr. Parks’s New Jersey state ID with a fake Tennessee driver’s license left behind by the perpetrator.

The risk is greater for people like Mr. Parks, who have a prior criminal record, because facial recognition software is often tied into mugshot databases. This amplifies racism further: when a person is arrested and their mugshot is taken by law enforcement, it is saved in the database. Since people of color are arrested at higher rates for minor crimes, their faces are more likely to be stored in those databases, which increases the odds of misidentification and other errors.
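As a rough illustration of why simply being enrolled in a database raises risk, consider how the chance of at least one false match grows with the number of searches run against a stored face. The sketch below assumes a purely hypothetical per-search false-match rate; it is not a measured property of any real system.

```python
# The false-match rate here is an assumed, illustrative figure only.
ASSUMED_FALSE_MATCH_RATE = 0.001  # chance a single search wrongly matches a stored face

def prob_at_least_one_false_match(num_searches: int,
                                  fmr: float = ASSUMED_FALSE_MATCH_RATE) -> float:
    """Probability that at least one of `num_searches` independent searches
    incorrectly matches an enrolled face, under the assumed rate."""
    return 1 - (1 - fmr) ** num_searches

for searches in (100, 1_000, 10_000):
    chance = prob_at_least_one_false_match(searches)
    print(f"{searches:>6} searches -> {chance:.1%} chance of at least one false match")
```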

Law enforcement agencies and the justice system across the board need to consider that machines can be wrong. Just like humans, algorithms are fallible. For example, recent studies have documented subjective flaws in eyewitness identification of suspects, and those same weaknesses in human judgment can affect the use of facial recognition technologies. Both human and algorithmic error exist, but algorithmic error slips in during the design and “training” process and only surfaces when the algorithms are tested. Simply put, NIST tests for differential error rates across different parts of the population show substantial error-rate variation by race. As mentioned before, if there are millions of examples of white men in a database and only two Black women, the algorithms will have difficulty distinguishing the faces of Black women. It is not only a lack of training data; the software is also less likely to identify features from certain kinds of faces.
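One way to make the data-imbalance point concrete is to check how a training set’s images are distributed across demographic groups before any model is trained. The sketch below uses hypothetical labels and counts; a real audit would draw on the dataset’s own metadata or skin-tone annotations.

```python
from collections import Counter

# Hypothetical demographic labels for a training set; counts are illustrative only.
training_labels = (
    ["lighter-skinned man"] * 5_000
    + ["lighter-skinned woman"] * 2_000
    + ["darker-skinned man"] * 800
    + ["darker-skinned woman"] * 200
)

counts = Counter(training_labels)
total = sum(counts.values())

# A heavily skewed distribution is an early warning sign, before error rates are
# ever measured, that the model will see far fewer examples of some faces.
for group, n in counts.most_common():
    print(f"{group}: {n} images ({n / total:.1%} of the training set)")
```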

Although groups are trying to make surveillance technology better for people of color, we must look at our history as a country, especially regarding tech racism in transportation. From the “lantern laws” to facial recognition today, government agencies such as police departments and the FBI have been allowed to deploy invasive face surveillance technologies against Black and Brown communities merely for existing. Additionally, racial bias in law enforcement agencies can inform emerging technologies and carry over into the transportation sector. This intersection may be most obvious when we think of interactions such as traffic stops.

There are also less obvious connections between systemic racism and FRS in transportation, including access to transportation and failures to recognize pedestrians or riders. Racial disparities within FRS used in personal vehicles, rideshares, buses, or trains are not only unfair and unequal, they are also unsafe. Tech racism could mean that nonwhite people (namely Black people) are locked out of their vehicles, unable to start their vehicles, hit by buses, unidentified by automatic train doors, or unnoticed by safety features such as fatigue prevention at higher rates than white people.

The Transportation Security Administration has been testing facial recognition technology at airports across the country and expects it to become a preferred method of verifying a passenger’s identity. However, according to the National Institute of Standards and Technology, facial recognition software shows a higher rate of incorrect matches for Asian and Black people than for white people, and that disparity extends to airport surveillance. The research clearly shows that technology in transportation has had its most significant impact on people of color who are already dealing with transportation disadvantages. If the technology used in transportation continues to reinforce human biases, it will perpetuate inequality.

Facial recognition is a powerful technology. It can have significant implications in criminal justice and everyday life, but we must build a more equitable face recognition landscape. The inequities are beginning to be addressed: algorithms can be trained on diverse and representative datasets, the photos within databases can be made more equitable, and regular, ethical auditing is possible, especially when it comes to skin tone. Yet even as racial bias in facial recognition technology is addressed, the question remains: should facial recognition technology be banned? There is a historical precedent for technology being used to surveil the movements of the Black population, and facial recognition relies on the data that developers, who are disproportionately white, feed it.
