A Texas Crash Highlights Issues With Tesla’s Autopilot

Earlier this month, two Texas men died when the Tesla Model S they were traveling in crashed into a tree. However, just what led to the crash remains a point of contention between authorities and Tesla. Police have said that one passenger was found in the front passenger-side seat and the other in the back, meaning that at the time of the accident no one was in the driver's seat. That would suggest the vehicle's "Autopilot" driver-assistance system was active, though last week Tesla CEO Elon Musk claimed the company had data indicating the system was not in use at the time of the accident. The investigation is ongoing, and local police have said they will subpoena Tesla to obtain the vehicle data in question.

If the Texas case does turn out to have involved Autopilot, it will be far from the first. In May 2016, a Florida driver was killed when his Tesla, operating in Autopilot mode, crashed into the side of a semi-truck. The National Highway Traffic Safety Administration (NHTSA) investigation into that incident found no evidence of defects in the Tesla system, placing responsibility primarily on the driver, who wasn't paying attention to the road while the vehicle operated under Autopilot. The National Transportation Safety Board (NTSB), on the other hand, put more of the blame on Tesla for allowing the driver to misuse Autopilot; that is, the system didn't disengage when used outside of its recommended limits. Then-Secretary of Transportation Anthony Foxx echoed this when he made a point of saying that automakers like Tesla had a responsibility to ensure consumers understand the limits of driver-assistance systems. In March 2018, a California man was killed when his Tesla Model X SUV crashed into a highway safety barrier, leading to an NTSB investigation and a lawsuit from the driver's family. A third driver died in a 2019 crash, again in Florida, while Autopilot was engaged.

At issue here is not only the safety of the Autopilot technology, but also the way it has been marketed and the willingness of drivers to push the system beyond its capabilities. At its core, Autopilot is an advanced driver-assistance system (ADAS): it can take over a number of driving tasks and help protect drivers, but a human is supposed to remain focused on driving. Over the years Tesla has upgraded its vehicles' software to recognize things like stoplights and stop signs, starting with beta tests and then rolling the features out to every Tesla on the road capable of supporting the update (though there have been humorous issues with these rollouts, like vehicles confusing Burger King signs for stop signs). In late 2020, Tesla rolled out a "Full Self-Driving" update to select vehicles, which expanded Autopilot's operational domain to local streets (previously it was only usable on highways).

The NTSB has taken Tesla to task over Autopilot not only for the aforementioned 2016 crash, but also for a 2018 crash in which a Tesla ran into the back of a stopped fire truck (no one was hurt). In that incident, Autopilot had been engaged for 13 minutes before the crash, and the driver had been distracted by their breakfast coffee and bagel. In its investigation of the 2019 Florida crash, the NTSB again cited Tesla's failure to ensure Autopilot couldn't be used in situations outside of its designed domain, and pointed to NHTSA's failure to set clear safety rules for ADAS technologies. In other cases, Autopilot has continued to operate while a driver slept or was passed out from drinking (requiring police officers to use their cars to force the vehicle to a stop).

What remains in question is the ability of Tesla vehicles to monitor human drivers and keep them engaged in the driving process. A recent Consumer Reports test illustrates how easy it is to trick the existing monitoring system, even allowing a driver to slip into the passenger seat while the car is in motion. Tesla checks driver engagement through input on the steering wheel, while some other automakers' systems, like GM's Super Cruise, use more direct observation via eye-tracking cameras (a rough sketch of why a wheel-based check is so easy to fool appears at the end of this post). It's clear there is an issue with Autopilot that needs further investigation, but what have governments done in reaction to these issues, beyond the NTSB reports noted above? And what issues are raised by the way Tesla has marketed Autopilot to consumers? I'll explore both of those questions in my next post.
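As a footnote for the technically curious, here is a minimal, purely illustrative Python sketch of how a hands-on-wheel timeout might work in principle. The thresholds, timings, and class names are my own assumptions for illustration, not Tesla's actual logic; the point is simply that any check which only looks for steering torque can be satisfied by anything that applies torque, which is why these systems are easy to defeat.

# Illustrative sketch only: a simplified hands-on-wheel monitor of the kind
# torque-based ADAS driver monitoring is often described as using.
# Thresholds, timings, and names are assumptions, not Tesla's implementation.

from dataclasses import dataclass


@dataclass
class MonitorConfig:
    torque_threshold_nm: float = 0.5   # assumed minimum torque to count as "hands on"
    warn_after_s: float = 15.0         # assumed time with no input before a warning
    disengage_after_s: float = 45.0    # assumed time with no input before disengaging


class HandsOnWheelMonitor:
    """Tracks time since the last detected steering input and escalates."""

    def __init__(self, config: MonitorConfig):
        self.config = config
        self.seconds_since_input = 0.0

    def update(self, steering_torque_nm: float, dt_s: float) -> str:
        # Any torque above the threshold resets the timer, which is exactly
        # why a small weight hung on the wheel can defeat this style of check.
        if abs(steering_torque_nm) >= self.config.torque_threshold_nm:
            self.seconds_since_input = 0.0
            return "ok"
        self.seconds_since_input += dt_s
        if self.seconds_since_input >= self.config.disengage_after_s:
            return "disengage"
        if self.seconds_since_input >= self.config.warn_after_s:
            return "warn"
        return "ok"


if __name__ == "__main__":
    monitor = HandsOnWheelMonitor(MonitorConfig())
    # Simulate 60 seconds with no hands on the wheel, sampled once per second.
    for second in range(60):
        state = monitor.update(steering_torque_nm=0.0, dt_s=1.0)
        if state != "ok":
            print(f"t={second + 1:>2}s  state={state}")

Camera-based systems like Super Cruise escalate on a different signal, namely whether the driver's eyes are on the road, which is much harder to spoof with a steering-wheel weight.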
