Human Masters/Robot Servants: Highly Automated Vehicle Design, Intoxicated Drivers & Vicarious Liability

A traditional engineering role is to design a safe product. Safety engineering is an exercise in harm avoidance ex ante. In contrast, liability attribution is an exercise to compensate for loss post hoc—traditionally viewed as a legal matter. We observe that, when a natural person incurs liability for a loss that exceeds insurance coverage, economic ruin can follow. Neither engineering nor law focuses on the loss suffered by defendants, and neither treats the law itself as a “safety risk.” The highly automated vehicle (HAV) design space, however, provides an opportunity to prevent this kind of economic harm from occurring ex ante just as attention to safe design can prevent loss from physical injury and property damage. Including legal outcomes as design specifications allows engineers to create a product with physical features that achieve legal outcomes for consumers. It also allows for identification of legal risks that corporate management can target for law reform, leading to changes in the design of the legal system. Importantly, the legal system is malleable in ways that physical systems are not.

This Article explains why HAV manufacturers and developers should consider law during the design process for an AV intended as “fit-for-purpose” to transport intoxicated persons. It suggests that management, marketing, engineering, and legal functions collaborate to develop product requirements and specifications that shield owner/occupants from criminal liability for DUI manslaughter, negligent homicide, and similar charges, as well as guard against civil liability. This collaboration should occur for HAV deployments in any state of the United States.

Beyond addressing HAV feature design, this Article recommends that the HAV industry pursue a legislative agenda to expressly provide legal protection from liability in various scenarios in which the mere capability to control the HAV, or mere ownership of it, can result in liability without fault on the part of an occupant or owner. The specific recommendation consists of a series of amendments to federal law to protect owner/occupants of HAVs, much as the Graves Amendment protects car rental companies from liability for accidents caused by their customers.

Introduction

This Article explores the relationship among (i) the design process for automated vehicles (AVs) using automated driving systems (ADS) with automation features classified as Levels 4 or 5 under SAE J30161 See SAE International, Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles J3016_202104 4 (2021), https://www.sae.org/standards/content/j3016_202104 [hereinafter J3016].
(often described as “highly” or “fully” automated vehicles) (HAVs),2Notice that this excludes AVs with Level 3 automation features, which are “automated vehicles” but not “fully” or “highly” automated vehicles. An AV with Level 3 features is not fully or highly automated because part of its design concept relies on a fallback-ready human driver able to assume control of the dynamic driving task (DDT) to allow for safe operation.
(ii) the legal liability that voluntary users and owners of these HAVs may incur for accidents occurring during operation in autonomous mode, and (iii) the need for legal reform.3Our investigation of design features relevant to liability attribution for HAVs illuminates the even more challenging design issues for developers of AVs with only Level 3 features. See William H. Widen & Philip Koopman, The Awkward Middle for Automated Vehicles: Liability Attribution Rules When Humans and Computers Share Driving Responsibilities, 64 Jurimetrics J. 41 (2023) (discussing legal complexities associated with deployment of AVs with Level 3 features). See also Cassandra Burke Robertson, Litigating Partial Autonomy, 109 Iowa L. Rev. 1655 (2024).

We build on the idea of co-design familiar to engineers in the development of hardware and software, expanding it to include design features that produce legal outcomes. Design for legal outcomes can take two forms. First, it can focus on features of HAVs that create a liability risk under existing law—a product-oriented solution. Second, it may focus on law reform that can replace the need for an HAV feature—a solution focused on social constructions in law.

Engineers usually design products where the relevant domain of use is simply our material world with unchanging physical laws. For HAVs, we suggest that consideration of a dual domain of use is needed to create a product that is fit for purpose. The dual domain of use is the material world with unchanging physical laws and the social world governed by man-made laws, which are malleable.

The approach to HAV design needs a new paradigm because HAV deployment counts social goals among its objectives. Autonomous systems challenge and disrupt the conventional notions of agency that underlie the attribution of liability on which our social structure is built. Design for the dual domain of use cannot be confined to engineering departments alone. It requires a whole-of-enterprise effort involving engineering departments, legal departments, marketing departments, management, and the boards of AV manufacturers and ADS designers. It should also involve industry associations and lobbying efforts.

We argue that the law should include an explicit “Shield Function” to protect voluntary users of HAVs at the consumer level. Ideally, this consumer protection should take the form of a federal statute to provide a uniform result throughout the U.S. The model for such a federal statute is the Graves Amendment which protects rental car companies from liability for the torts committed by their customers.4 49 U.S.C. § 30106 (2005).

Our recommended law reform aims to prevent unfair and unjust adverse outcomes for voluntary users who might incur liability for accidents that were in no way caused by their own negligent or reckless operation or use of an HAV. HAV feature design shares this goal (and is particularly important in the absence of law reform).

As a byproduct, the proposed legislation (or feature design) will limit unnecessary adverse publicity about vehicle automation technology based on an unwelcome allocation of liability that the public would view as surprising, unfair and unjust. It also should make HAVs more attractive to potential customers by offering consumer protection for three primary use cases for HAVs. Limiting adverse publicity has value because adverse publicity could slow the implementation of vehicle automation technology. This could delay the realization of potential benefits from HAV deployments such as reduced fatalities from drunk driving5AVIA Statement on NHTSA’s 2022 Crash Report, Autonomous Vehicle Indus. Ass’n (Apr. 21, 2023), https://theavindustry.org/newsroom/press-releases/avia-statement-on-nhtsas-2022-crash-report.
and lower carbon emissions.6 National Academy of Sciences, Engineering, and Medicine, Global Pathways to Net-Zero: Behavioral, Social, and Technological Research and Innovation Strategies for Transportation Decarbonization: Summary of the Seventh EU-U.S. Transportation Research Symposium: Proceedings of a Symposium 5 (2024) (noting that automated vehicle technology may play a role in reducing carbon emissions).
We assume, for purposes of this Article, that Congress has determined that benefits such as these will follow from deployment of AVs (an issue about which there is current debate but which is beyond the scope of our project in this Article).7The United States Department of Transportation (U.S. DOT) identifies the benefits of automated vehicles as “safety, economic and societal benefits, efficiency and convenience, and mobility.” Nat’l Sci. & Tech. Council & U.S. Dep’t Transp., Ensuring American Leadership in Automated Vehicle Technologies: Automated Vehicles 4.0 2 (2020); Automated Vehicles for Safety, Nat’l Highway Traffic Safety Admin., https://www.nhtsa.gov/vehicle-safety/automated-vehicles-safety#topic-benefits (last visited Mar. 28, 2025). NHTSA identifies increased safety as a “paramount” benefit of automated vehicle technology. See id. But see David Zipper, The Deadly Myth That Human Error Causes Most Car Crashes, Atlantic (Nov. 26, 2021) (questioning the premise of most commentators that autonomous vehicles will be safer because human error causes 94% of accidents) https://www.theatlantic.com/ideas/archive/2021/11/deadly-myth-human-error-causes-most-car-crashes/620808; Alexander B. Lemann, Autonomous Vehicles, Technological Progress, and the Scope Problem in Products Liability, 12 J. Tort L. 157, 159 (2019) (pointing out that a lot more “miles logged” will be necessary before valid comparisons can be made between the safety records of autonomous vehicles and human operators). We note that AV STEP, the NHTSA 2025 Notice of Proposed Rulemaking, infra note 46, expresses caution about the ability of AVs to achieve these social goals.

Our primary concern is with the liability of voluntary users of HAVs for accidents that occur while an owner/occupant is intoxicated.8This is not a novel concern. It has been discussed for at least 15 years, but lawmakers have not seen fit to address it. See, e.g., Pam Baker, The Law and Your Robot Chauffeur, Technewsworld (Nov. 2, 2010, 5:00 AM), https://www.technewsworld.com/story/The-Law-and-Your-Robot-Chauffeur-71155.html.
We also worry about owners who loan their private production HAVs to others. This class of “voluntary users” includes passengers who hire a robotaxi for a trip (Scenario 1), an owner/operator who purchases an HAV for personal use (Scenario 2), and a private HAV owner who periodically loans or leases her HAV to a robotaxi operator or delivery service (Scenario 3).

Because the pace and trajectory of law reform are uncertain, this Article also describes the self-help steps that HAV manufacturers and developers might take in the design process to guard against adverse outcomes for their customers and users. It further explains why, if management elects not to address a risk directly in the design process by including or disabling certain features of an HAV, it nevertheless needs to consider legal outcomes as part of the design process so that proper disclosure of risks can prevent potential manufacturer liability for breach of a duty to warn found in strict products liability law (whether expressed in common law or statutory form)9The details of potential liability for breach of a duty to warn are discussed infra Part II.
or liability based on similar disclosure-based theories.10The presence of disclosure figured prominently in a recent Florida appellate court decision, which overruled a trial court’s decision to allow the possibility of punitive damages in a case involving Tesla’s “Autopilot” feature. See Tesla, Inc. v. Banner, — So. 3d —, 2025 WL 61081, at *4 (Fla. 4th DCA Feb. 26, 2025).

Part I explains why we need law reform to protect consumers sooner rather than later. Part II describes the details of existing laws that create unwelcome risks for consumers.11We discussed some of these laws in a presentation to the Design, Automation and Test Europe 2025 Conference, March 31-April 2, 2025, in Lyon, France.
Part III describes the specifics of our recommended law reform, including suggested statutory language for consideration by Congress. The Article closes with the hope that people from across the political spectrum can support our proposals, including people who generally favor a reduction in regulatory requirements and oversight. Indeed, every advocate for vehicle automation technology should welcome reform to shape our laws to match the changing nature of drivers and the act of driving.

Though this Article refers to the extensive literature related to attribution of liability for AV accidents, the purpose of this Article is not to provide a comprehensive survey of relevant publications. Rather, it provides a reasoned argument for law reform and motivations for a board, managers, and engineers to adopt a procedure in the design process that takes account of legal categories as part of requirements specifications and product disclosure. This design process differs in focus from the process used to ethically develop systems using artificial intelligence, such as that detailed in IEEE 7000.12See IEEE Standards Association, IEEE 7000-2021, IEEE Standard Model Process for Addressing Ethical Concerns During System Design (Sept. 15, 2021) (approved June 16, 2021) [hereafter IEEE 7000] (available via purchase or subscription, on file with the authors).
It also differs from an attempt to give a philosophical justification for, or criticism of, the imposition of liability without fault.13See, e.g., Filippo Santoni de Sio & Giulio Mecacci, Four Responsibility Gaps with Artificial Intelligence: Why they Matter and How to Address them, 34 Phil. & Tech 1057 (2021) (discussing the difference between culpability and accountability).

This Article has relevance to a broad audience, including directors, senior management and marketing departments of HAV manufacturers and developers, engineers working on HAV design, HAV product managers, legal advisors to HAV manufacturers and developers, and legislators committed to the long-term success of vehicle automation technology.

I. Making the Case for Law Reform

A. Three Cases of Concern

We characterize the three cases that concern us as different flavors of “vicarious liability” because they are not cases in which we find fault with the voluntary user for active performance of the dynamic driving task (DDT). As explained in detail in Part II, this liability without fault can occur when the owner/occupant of an AV exceeds the threshold level of “meaningful human control” that the law recognizes as giving rise to liability (a threshold which often is surprisingly low).14We borrow the term “meaningful human control” from the literature discussing attribution of liability for automated weapons systems. See, e.g., Filippo Santoni de Sio & Jeroen van der Hoven, Meaningful Human Control over Autonomous Systems: A Philosophical Account, 5 Ethics in Robotics and Artificial Intelligence (2018). We believe the general concept of meaningful human control is useful in discussing liability in the legal setting of highway driving, though we do not adopt the various nuances specific to automated weapons systems. But see Filippo Santoni de Sio et al., Realising Meaningful Human Control Over Automated Driving Systems: A Multidisciplinary Approach, Minds Mach 1-25 (Dordr July 28, 2022), https://pubmed.ncbi.nlm.nih.gov/35915817 (applying notions of tracking and tracing to driving in mixed traffic). One commentator distinguishes between “operating a vehicle” and “operating a vehicle in a meaningful way.” See Atilla Kasap, States’ Approaches to Autonomous Vehicle Technology in Light of Federal Law, 19 Ohio St. Tech. L.J. 315 (2023).
Even if a court applies a legal theory for liability based on an attenuated concept of actual control (in Scenarios 1 and 2), we believe the term “vicarious liability” best captures the essence of this liability because the ADS performs the DDT, not the voluntary user, yet the voluntary user may be found liable. Significantly, an HAV does not require any human supervision as part of the safety design concept for Level 4 and Level 5 features. This is why one can sleep in an HAV during a trip but cannot sleep in a mere Level 3 AV.

Each scenario resembles an employer/employee relationship in which a human master may have liability for the actions of a robot servant that is “driving” an HAV, should those robot actions fall short of the minimum performance the law requires from a human driver. The law traditionally describes such a relationship as one of “master/servant.”

Three examples illustrate the general nature of the problem. We base each scenario on a real-world situation or a reasonable extrapolation from an actual accident case.

Scenario 1

In Scenario 1, a robotaxi has a “big red button” or urgent egress feature which gives a rider the power to terminate an itinerary, thus causing the robotaxi to initiate a minimal risk condition (MRC) maneuver. Such a “panic button” is a feature incorporated in some actual robotaxis today.15Actual robotaxis have included this feature. See Lora Kelley, A Robotaxi Experiment, The Atlantic (August 29, 2023), https://www.theatlantic.com/newsletters/archive/2023/08/robotaxis-self-driving-cars-san-francisco/675170/ (noting that the robotaxi had a panic button that said “END RIDE” in all caps).
Proper use of such a feature ranges from emergencies, such as the passenger cabin filling with smoke, to routine needs, such as stopping for a restroom break.

While traveling one evening, a passenger terminates her itinerary using this feature out of an excess of caution because she notices an accident ahead. The robotaxi initiates an MRC maneuver but fails to identify parked emergency vehicles with flashing lights on the side of the highway.16Accidents have occurred when the flashing lights of an emergency vehicle disrupted the accurate object and event detection and response (OEDR) function. See Aarian Marshall, Emergency Vehicle Lights Can Screw Up a Car’s Automated Driving System, Wired.com (Nov. 26, 2024), https://www.wired.com/story/emergency-vehicle-lights-can-screw-up-a-cars-automated-driving-system/ (citing research by Elad Feldman, et al., Securing the Perception of Advanced Driving Assistance Systems Against Digital Epileptic Seizures Resulting from Emergency Vehicle Lighting, https://sites.google.com/view/epilepticar).
The robotaxi hits and kills a first responder. In version A, the passenger is sober. In version B, the passenger is intoxicated.

Though robotaxi use is already widespread in some communities, the need to address Scenario 1 becomes more urgent with each deployment of robotaxis in a new city.17For example, Waymo recently announced testing in San Diego and has plans for expansion of its services to other cities. See Jasmine Ramirez, Autonomous Waymo cars test-driving in San Diego, KFMB San Diego (Jan. 30, 2025), https://www.msn.com/en-us/autos/news/autonomous-waymo-cars-test-driving-in-san-diego/ar-AA1y6oCb.
Using a robotaxi as a chauffeur after drinking alcoholic beverages is a core use case. As noted in a recent Washington Post editorial:

Immense resources are expended on managing auto crashes and their consequences: traffic cops, liability insurance, car repairs, personal injury lawsuits, DUI prosecutions and, of course, health care for the injured.18Editorial, Your robot chauffeur is on its way. Welcome it., Wash. Post (Jan. 30, 2025), https://www.washingtonpost.com/opinions/interactive/2025/self-driving-cars-regulation-autonomous-vehicles.

The law should clearly provide a Shield Function for the robotaxi passenger in both version A and version B. If use of a robotaxi does not provide a Shield Function, then resources will continue to be spent on personal injury lawsuits and DUI prosecutions even though a rider has made a prudent decision (and would have faced no liability risk had she taken a human-driven taxi).

Scenario 2

In Scenario 2, an individual owns an HAV with a feature that allows the owner/occupant to change operation from fully autonomous mode to manual mode in the middle of an itinerary. A use case desired by the owner is to direct the HAV to transport her safely home after drinking alcoholic beverages at a party. While en route home, the ADS in the HAV fails to identify a stop sign, causing an accident with another vehicle in cross traffic, killing a motorist. Though HAVs with Level 4 features are not currently available to consumers,19“There is no car that you can buy from a dealership today that’s fully self-driving.” See Rebecca Ruiz, Robot, take the wheel: What you need to know about autonomous vehicles rolling out across the U.S., Mashable.com (Feb. 20, 2025), https://mashable.com/article/everything-know-robotaxi-driverless-vehicles (quoting Jeff Farrah, CEO of the Autonomous Vehicle Industry Association).
AVs with Level 3 features have already arrived on the market.20See, e.g., Rob Stumpf, Mercedes-Benz Gets Approval to Deploy Level 3 Driving Tech in Nevada, The Drive.com (Jan. 6, 2023, 4:36 pm), https://www.thedrive.com/news/mercedes-benz-gets-approval-to-deploy-level-3-driving-tech-in-nevada; Dan Mihalascu, Mercedes Drive Pilot Level 3 ADAS Approved for Use in California, InsideEVs.com (June 9, 2023, 6:37 AM), https://insideevs.com/news/671349/mercedes-drive-pilot-level-3-adas-approved-use-california. For a listing of companies with deployed Level 3 AVs, or plans to deploy, as of March 2025, see Augustin Friedel, ADAS Stacks by Selected Automotive Brands Per Region, LinkedIn (Mar. 15, 2025), https://www.linkedin.com/posts/friedel_ai-automotive-adas-activity-7291394885222334464-siA5.
We thus consider it prudent to plan for consumer HAVs with Level 4 features now, rather than later.

We note that some existing accident cases involve owners of vehicles with ADAS features who mistakenly used the vehicle to drive home while the owner was intoxicated.21See Julius Whigham II, Boca-area attorney faces DUI manslaughter charge after 2022 crash killed motorcyclist, The Palm Beach Post (Aug. 28, 2023 5:10 AM), https://eu.palmbeachpost.com/story/news/crime/2023/08/28/fatal-crash-has-boca-raton-area-lawyer-facing-dui-manslaughter-charge/70658879007. The consequences of conviction on a DUI manslaughter case can be severe. See Florida Bar v. Dorfman, 2024 WL 5116382 (Fla. Dec. 16, 2024) (illustrating loss of bar admission as a consequence).
The law does not recognize a defense to the effect that “the computer was driving, not me”22See, e.g., Julia Doskoch, “Your Honor, the Car Crashed Itself”: Navigating Autonomous Vehicle Liability in an Age of Innovation, 1 Boston Coll. Int. Prop. & Tech Forum (2023).
because ADAS technology (like an AV with Level 3 features) requires human supervision as part of the safety design concept. Consumers often wrongly assume that their vehicle automation technology is more capable than it really is, sometimes with fatal results. Consumer misperceptions can occur because of marketing efforts by companies selling the technology.23Liza Dixon, Autonowashing: The Greenwashing of Vehicle Automation, 5 Trans. Res. Interdisciplinary Perspectives 5 (2020).
Nevertheless, when privately owned HAVs with Level 4 features arrive, one of the advertised benefits almost certainly will be the use case of transporting impaired persons who are unable to drive themselves or assume the role of a fallback-ready user.

We note that members of the Texas legislature recognize this serious problem of consumer confusion.24 Tex. H.R. Comm. on Transportation, Interim Report 2024 (Nov. 2024) identifies an uninformed public that does not understand the differences between Level 2 & 3 features and Level 4 & 5 features as the largest threat to public safety and public trust in the AV industry. A bill introduced in the Texas House of Representatives on March 5, 2025, by Rep. Canales recognizes the importance of disclosure to build trust in vehicle automation technology. H.B. 3837, 89th Leg. Sess. (Tex. 2025). It would amend Section 17.26(b)(36) of the Texas Business and Commerce Code to make it a false, misleading, or deceptive act or practice to advertise or represent a motor vehicle as autonomous or self-driving unless it has a Level 4 or Level 5 feature. This legislation addresses material misstatements in advertising. Our proposal addresses material omissions in advertising. We are grateful to Susanna Gallun for calling our attention to legislative developments in Texas.

Scenario 3

In Scenario 3, an individual owns an HAV with a Level 4 feature which the owner wishes to enroll in a robotaxi ride hailing program. The program allows the owner to “loan” her HAV to the ride hailing program so the HAV can earn a return for the owner when she is not using it. While engaged for hire, the HAV hits and kills a pedestrian in a crosswalk when the ADS fails to identify the pedestrian as a person.

Outside the HAV context, cases involving vicarious liability for an owner who loans her vehicle to another person are numerous.25Though not a case of commercial letting, a recent Florida Supreme Court opinion discusses vicarious liability law in depth. See Emerson v. Lambert, 374 So.3d 756 (Fla. 2023) (describing Florida’s common law implementation of liability for dangerous instrumentalities).
We believe it prudent to plan for the risk of vicarious liability in an HAV context because ride hailing companies, such as Lyft, already plan for the day when new HAVs sold to consumers come pre-equipped with features that streamline enrollment of the HAV into their ride hailing programs.26See David Risher, Motoring down the AV superhighway, Lyft.com (Feb. 18, 2025), https://www.lyft.com/blog/posts/av-transportation-revolution-david-risher (noting plans to enroll private AVs in Lyft’s ride hailing programs).
The law already protects rental car companies from liability for torts committed by their customers, so the law should make clear that private owners of HAVs with Level 4 features have equivalent protection.27Existing federal law does not make clear whether a consumer who occasionally lets her vehicle to a ride hailing company would be engaged in the business of renting cars and thus come under the protection of the Graves Amendment.

In our three identified scenarios, we believe reasonable consumers would not expect liability to attach because, in the words of a 1956 advertising campaign, the AV manufacturers have urged consumers “to leave the driving to us.”28The Greyhound Bus Company started a long-running advertising campaign in 1956 with the slogan “Go Greyhound and leave the driving to us.” See Judson Knight, Greyhound Lines, Inc.: Leave The Driving To Us Campaign, WARC (last accessed June 6, 2025), https://www.warc.com/content/paywall/article/gale-emmc/greyhound-lines-inc-leave-the-driving-to-us-campaign/en-gb/84425.
The concern is that imposition of liability on voluntary users and owners in these cases not only fails to promote justice and fairness, but it also could have the unwelcome byproduct of turning public opinion against vehicle automation technology because liability for accidents during use conflicts with reasonable consumer expectations.29Though beyond the scope of the advocacy in this Article, we note that the Federal Trade Commission may take action for engaging in deception. An action for deception requires a material representation, omission, or practice that is likely to mislead consumers, who are acting reasonably under the circumstances. See Federal Trade Commission, Policy Statement on Deception (Oct. 14, 1983), https://www.ftc.gov/public-statements/1983/10/ftc-policy-statement-deception.

A finding of fault-based liability unrelated to HAV performance of the DDT is another matter altogether (e.g., an owner’s failure to maintain the AV or ignoring a recall, or an occupant’s maliciously hitting the panic button). We assume that, in each of our scenarios, we cannot find direct fault on the part of the human master for the details of the lateral and longitudinal movements of the AV and the trajectory that led to the accident. We also assume no direct fault based on a failure to maintain the AV or update its systems.30The Restatement (Third) of Torts does not approve of rules which impose liability without fault, but the comments make clear that this “does not preclude proving that the owner of the motor vehicle was independently negligent, such as by negligently entrusting the vehicle to the operator.” See Restatement (Third) of Torts: Apportionment of Liability § 5 cmt. c. (Am. L. Inst. 2000).

B. Existing Laws and Reform Efforts Do Not Address the Needs of Consumers

Addressing the future safety profile of HAVs as cyber-physical systems from an engineering standpoint—essentially, as articles of equipment—is not the same exercise as addressing liability for accidents caused by an unsafe product when efforts to design for future safety have failed (i.e., for a design defect rather than a manufacturing defect).

On the one hand, the traditional role of engineering is to design a safe product to prevent an accident from occurring—namely, it is an exercise in avoidance of physical harm ex ante. On the other hand, liability attribution addresses the consequences of distributing an unsafe product in the market—it is an exercise in compensation for loss post hoc. Someone must pay that compensation, and when that someone is a natural person and the liability exceeds available insurance, economic ruin can follow. In the HAV space, product design can, in some cases, prevent this harm from occurring ex ante just as attention to safe design can prevent physical harm and property damage from occurring.

Our suggested reform comes from a recognition that, with HAVs, the post hoc liability incurred by those not at fault (and perhaps suffering no physical injury) creates an unsafe financial situation that can have profound consequences for an owner/occupant of an HAV, and that this financial risk can be addressed in the design process. This design process also protects AV manufacturers by giving them a procedure to identify the type of disclosure that will insulate them from liability.

To date, state level laws and regulations relating to AVs31The National Conference of State Legislatures maintains a database devoted to autonomous vehicles. NCSL, Autonomous Vehicles Legislation Database (updated Mar. 10, 2025), https://www.ncsl.org/transportation/autonomous-vehicles-legislation-database.
mostly focus on “enabling” legislation (i.e., allowing testing and deployment of AVs on public roads),32For example, Vermont allows testing of automated vehicles with a permit. Vt. Stat. Ann. § 4203. Testing of automated vehicles on public highways (West 2019) (current through Mar. 5, 2025).
pre-emption (i.e., prohibiting local governments from regulating AVs),33See, e.g., Pa. C.S.A. § 8510. Local Governance (Purdon’s 2023) (pre-empting ordinances, policies and rules of local authority but not eliminating the general police power).
and operation of HAVs without a human driver in the vehicle.34See, e.g., Texas Stat. & Codes Ann., Trans. Code § 545.453. Operator of Automated Motor Vehicle (Vernon’s 2017) (current on West through Nov. 7, 2023) (providing that “[n]otwithstanding any other law, a licensed human operator is not required to operate a motor vehicle if an automated driving system installed on the vehicle is engaged”). See also Fla. Stat. § 316.85(2).
State legislatures have spilled little ink on liability attribution, but some of those limited efforts are troubling for consumers.35Laws that protect manufacturers against defects in the autonomous vehicle technology caused by third-party modifications are not of particular concern. See, e.g., Fla. Stat. Ann. § 316.86 (West 2016).

The states have not attempted comprehensive liability reform such as that begun in Europe,36The European Commission proposed liability rules for artificial intelligence. See, e.g., European Commission, Commission Staff Working Document, Impact Assessment Report, accompanying Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (Sept. 28, 2022), https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52022SC0319.
an effort that itself seems to have recently stalled.37See Annex, Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, Commission work programme 2025, Annex IV, item 32, at p. 26 (Feb. 11, 2025), https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:52025DC0045.
But state level laws need clarification because they place voluntary users at risk. Such laws include back-handed attempts to impose liability on the AV itself by defining the AV or ADS as the “operator.”38See, e.g., Fla. Stat. Ann. §316.85(3)(a) (West 2019) (covering autonomous vehicles; operation; compliance with traffic and motor vehicle laws; and testing). See also S.B. 1541, 2022 Leg., Reg. Sess. (Ok. 2022) at 7 (making the ADS responsible for complying with traffic and motor vehicle laws). If a state legislature desires to attribute liability to an HAV or ADS, that attribution should be direct because a cyber-physical system is not a legal person. Europe has expressly identified this problem. See, e.g., Eur. Parl., Brief., Artificial intelligence liability directive (Feb. 2023) (legislation postponed), https://www.europarl.europa.eu/RegData/etudes/BRIE/2023/739342/EPRS_BRI(2023)739342_EN.pdf.
This is a particularly problematic approach because blaming “human error” is all too common to absolve a company of liability.39See Madeleine Clare Elish, Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction, 5 Engaging Sci., Tech., and Soc. 40 (2019).
Without a clear directive, a court is unlikely to be satisfied that a legislature intended that liability attach to a cyber-physical system—a mere “thing”—without legal personhood. In such a case, the law will seek out a legal person to bear the liability—and that very well could be a voluntary user—even one without fault.

Other state laws pose a more direct threat to consumers who own HAVs by defining the owner of the HAV as being the “operator” for purposes of assessing compliance with applicable traffic and motor vehicle laws.40See Texas Stat. & Codes Ann., Trans. Code § 545.453. Operator of Automated Motor Vehicle (Vernon’s 2017) (current on West through Nov. 7, 2023).
It is unclear what the statutory limitation “for purposes of assessing compliance” is supposed to cover. One interpretation would make the owner/occupant in Scenario 2 responsible but not the occupant of a robotaxi in Scenario 1. These are plainly contradictory results for essentially the same activity, undertaken for the same purpose. The private HAV owner in Scenario 3 would appear responsible, for example in Texas and Florida, because she is the “owner,” but this result is plainly at odds with the favorable treatment given to rental car companies, which have federal protection against attribution to them of the torts of their customers.

At the federal level, other than regulating through recall41See Matthew Wansley, Regulating Driving Automation Safety, 73 Emory L.J. 505, 537 (2024).
(supplemented by requirements to report accidents involving vehicle automation),42For example, the National Highway Traffic Safety Administration (NHTSA) issued Standing General Order 2021-01 on June 29, 2021, and amended it on April 5, 2023; the order requires identified vehicle manufacturers and operators to report to NHTSA crashes involving vehicles equipped with ADS or certain advanced driver assistance systems. U.S. DOT, NHTSA, Second Amended Standing General Order 2021-01 (April 5, 2023).
the federal government largely has been absent. The need to modernize federal law is illustrated by the lack of regulations addressing the treatment of a motor vehicle without a brake pedal or mirrors (which federal law requires in a conventional vehicle).43See, e.g., Ian Duncan and Trisha Thadani, Robotaxis without a brake pedal or mirrors? Not so fast, feds say, Wash. Post (Mar. 11, 2025) https://www.msn.com/en-us/news/us/robotaxis-without-a-brake-pedal-or-mirrors-not-so-fast-feds-say/ar-AA1AGhhw?ocid=BingNewsVerp.
Prior notices of proposed rulemaking have not advanced,44The proposed standards are primarily ISO 26262, ISO 21448 and ANSI/UL 4600. See NHTSA Framework for Automated Driving System Safety, 85 F.R. 78058, 78065-66 (proposed Dec. 3, 2020) (to be codified at 49 C.F.R. pt. 571), https://www.federalregister.gov/documents/2020/12/03/2020-25930/framework-for-automated-driving-system-safety.
and an AV bill passed the House but did not get traction in the Senate.45SELF DRIVE Act, H.R. 3388, 115th Cong., (1st Sess. 2017).

What we do find is a new notice of proposed rulemaking suggesting a voluntary framework for evaluation and oversight of AVs.46U.S. DOT, NHTSA, Notice of proposed rulemaking, ADS-equipped Vehicle Safety, Transparency, and Evaluation Program, Regulations.gov (Jan. 15, 2025), https://www.federalregister.gov/documents/2025/01/15/2024-30854/ads-equipped-vehicle-safety-transparency-and-evaluation-program (announcing the “AV STEP” initiative) [hereinafter AV STEP].
AV STEP identifies vehicle manufacturers, ADS developers, fleet operators, and system integrators of ADS-equipped vehicles seeking to operate on public roadways in the United States as eligible to enroll in this voluntary program. Under the proposed program, an applicant would provide NHTSA with information and data related to the safety of the design, development, and operations of ADS-equipped vehicles for their intended deployment under the program. AV STEP recognizes these parties as the relevant persons to participate in the program because they can control the performance of AVs.

Our concern about the potential liability of voluntary users and private owners differs in focus from the mere identification of those legal persons to whom the tort system should attribute responsibility for losses suffered by involuntary creditors following an accident involving an AV (e.g., vulnerable road users with potential tort claims for injury suffered when an ADS fails to adequately perform the DDT). Involuntary creditors are not the only parties who suffer a loss: voluntary users and private owners also suffer a loss if the law identifies one of them as the party responsible to compensate involuntary creditors. While a fair and just tort system must compensate involuntary creditors in appropriate cases, that system also should not place financial burdens or criminal penalties on the wrong parties.

C. The Importance of Public Acceptance

For advanced vehicle automation technology to succeed, senior management and directors of AV manufacturers and developers should consider the conditions necessary for the long-term success of the technology and work to promote those conditions in the design of their products. A broad conception of product design should include design to address legal risks. These considerations also should inform the direction of company-sponsored efforts at legal reform because such reform can support public acceptance in the face of continuing doubts.

A recent survey confirms that the public continues to fear self-driving vehicles.47Brittany Moye, AAA: Fear in Self-Driving Vehicles Persists, AAA.com (Feb. 25, 2025), https://newsroom.aaa.com/2025/02/aaa-fear-in-self-driving-vehicles-persists.
Only “13% of U.S. drivers would trust riding in self-driving vehicles—albeit an increase from last year, when this number was 9%.”48Id. Compare Lee Rainie et al., Report: Americans cautious about the deployment of driverless cars, Pewresearch.org (Mar. 17, 2022), https://www.pewresearch.org/internet/2022/03/17/americans-cautious-about-the-deployment-of-driverless-cars (discussing AI and Human Enhancement: Americans’ Openness Is Tempered by a Range of Concerns, Pewresearch.org (March 17, 2022), https://www.pewresearch.org/science/2022/03/17/ai-and-human-enhancement-americans-openness-is-tempered-by-a-range-of-concerns) with Aaron Smith & Monica Anderson, Report: Americans’ attitudes toward driverless vehicles, Pewresearch.org (Oct. 4, 2017), https://www.pewresearch.org/internet/2017/10/04/americans-attitudes-toward-driverless-vehicles (discussing Press Release, Pew Research Center, Automation in Every Day Life, (October 4, 2017) https://www.pewresearch.org/internet/wp-content/uploads/sites/9/2017/10/PI_2017.10.04_Automation_FINAL.pdf).
Continuing instances of vandalism to robotaxis reflect even stronger sentiments against the technology in some circles.49Alex Wigglesworth, Group vandalizes Waymo robotaxi in Los Angeles’ Beverly Grove neighborhood, police say, LA Times (Jan. 25, 2025 4:27 PM), https://www.latimes.com/california/story/2025-01-25/group-vandalizes-waymo-robotaxi-in-beverly-grove-police-say.

To be sure, public acceptance requires a perception that AVs operate safely (i.e., at least as safe as human drivers and perhaps much more safely).50Tim Adams, Daniel Kahneman: ‘Clearly AI is Going to Win. How People Are Going to Adjust Is a Fascinating Problem’, The Guardian (May 16, 2021), https://www.theguardian.com/books/2021/may/16/daniel-kahneman-clearly-ai-is-going-to-win-how-people-are-going-to-adjust-is-a-fascinating-problem-thinking-fast-and-slow (noting Kahneman’s view that “[b]eing a lot safer than people is not going to be enough”).
But it also requires that AVs deliver advertised benefits without unwelcome byproducts—such as liability without fault. Importantly, AV deployment is widely seen and advertised as a solution to the unacceptable level of drunk driving fatalities.51See supra note 5.

It is reasonable to conclude that public acceptance also requires that AV deployments occur within a legal environment in which liability for accidents involving AVs is clearly understood and allocated justly and fairly so the public perceives that “appropriate” persons bear financial losses for accidents. One of us has written about how the law might justly and fairly allocate the risk of loss sustained by involuntary creditors through legal reform by statute.52See William H. Widen & Philip Koopman, Liability Rules for Automated Vehicles: Definitions and Details, 27 SMU Sci. & Tech. L. Rev. 77 (2024).

Significantly, the withdrawal of Cruise from the robotaxi business53Cruise initially suspended its uncrewed robotaxi operations nationwide in response to the incident in which its robotaxi hit a pedestrian (who was initially struck by another vehicle) but later expanded the pause to include all supervised and manual trips. Compare David Shepardson, GM Cruise Unit Suspends All Driverless Operations After California Ban, Reuters.com (Oct. 27, 2023, 12:59 AM), https://www.reuters.com/business/autos-transportation/us-auto-safety-agency-investigating-two-new-gm-cruise-crash-reports-2023-10-26 with David Shepardson & Ben Klayman, GM’s Cruise Suspends Supervised and Manual Car Trips, Expands Probes, Reuters.com (Nov. 14, 2023, 7:43 PM), https://www.reuters.com/business/autos-transportation/gms-cruise-suspends-supervised-manual-car-trips-expands-probes-2023-11-15.
illustrates the importance of public acceptance and how a single adverse instance, if not well managed, can spiral out of control with severe financial consequences.54Tyson Fisher, Self-driving vehicles not ready for prime time, study suggests, LandLine.media (Mar. 11, 2025), https://landline.media/self-driving-vehicles-not-ready-for-prime-time-study-suggests (noting that “[j]ust one fatal crash involving a self-driving vehicle could send a company – and the entire autonomous vehicle industry – backward”); cf. Mary Missy Cummings, Benjamin Wheeler & John Kliem, A Root Cause Analysis of a Self Driving Car Dragging a Pedestrian, Computer (July 2024), https://www.computer.org/csdl/magazine/co/2024/11/10720344/215PD0vqgTe (warning of public backlash if safety critical systems are not held to high safety standards).
In that case, Cruise first suspended operations, and then ceased robotaxi operations altogether, following a single, non-fatal accident.55See supra note 53.

The AV industry’s approach to fostering public acceptance has, to date, concentrated on a public relations campaign to convince potential AV customers and third-party vulnerable road users of the safety benefits AV technology provides. Significantly, however, these safety claims are not supported by meaningful statistical comparisons.56See, e.g., Missy Cummings & Ben Bauchwitz, Identifying Research Gaps through Self-Driving Car Data Analysis, IEEE Transactions on Intelligent Vehicles (Nov. 2024) (noting that “the industry is very far from making meaningful statistical comparisons with human drivers”).
The public might view an express lobbying effort to protect consumers as a positive, image-enhancing step for the industry.

To the extent that federal regulatory efforts in the current administration do not focus on the safety of ADS design,57We think a focus on safe design for vehicle automation technology unlikely in the absence of data collection. There are indications that data collection efforts may cease in the current administration. See, e.g., Jarrett Renshaw et al., Exclusive: Trump team wants to scrap car-crash reporting rule that Tesla opposes, Reuters (Dec. 17, 2024, 8:11 PM), https://www.reuters.com/business/autos-transportation/trump-transition-recommends-scrapping-car-crash-reporting-requirement-opposed-by-2024-12-13.
it becomes even more important for management of AV manufacturers to responsibly manage perceptions because manufacturers will become the only parties providing the public with assurances that the technology is acceptably safe.58Government regulation can convey information about safety and quality. For example, establishment of federal government standards for participants in Medicare and Medicaid provides some assurance of health care quality in those programs. See 42 C.F.R. The failure of the AV industry to build trust creates an obstacle to advancement of the technology. See William H. Widen & Philip Koopman, Autonomous Vehicle Regulation and Trust, 27 UCLA J.L. & Tech. 169 (2022).
This is particularly true if meaningful affirmative federal regulation of safety features never occurs.59Prior notices of proposed rulemaking have not advanced. See NHTSA Framework for Automated Driving System Safety, 85 Fed. Reg. 78058, 78065-66 (proposed Dec. 3, 2020) (to be codified at 49 C.F.R. pt. 571). For a description of regulatory frameworks and the “absence of more robust federal action,” see Olivia Dworkin, Jorge Ortiz & Nicholas Xenakis, Regulatory Frameworks for Smart Mobility: Current U.S. Regulation of Connected and Automated Vehicles And The Road Ahead, 2023 J. L. & Mob. 1 (2023).
And doubly so if the efforts at “regulation by recall” become less active, or cease altogether, based on staff reductions at NHTSA or the NTSB.60It appears that the NHTSA requirement of reporting on AV incidents may cease, and that the staff devoted to accident investigations has been, or will be, reduced. See Ian Duncan, DOGE employee cuts fall heavily on agency that regulates Musk’s Tesla, Wash. Post (Feb. 20, 2025), https://www.washingtonpost.com/business/2025/02/21/musk-doge-tesla-autonomous-vehicles-nhtsa; Magdalena Del Valle, The Trump Administration Targets Transportation Research, Bloomberg (Feb. 26, 2025, 12:56 PM), https://www.bloomberg.com/news/newsletters/2025-02-26/the-trump-administration-targets-transportation-research.

D. Promotion of Efficient Legal Rules

Placing vicarious liability on an owner/occupant of an HAV provides no incentive in the law for safety improvements.61One theoretical justification for tort liability is that imposing financial responsibility for accidents creates an incentive to act with greater care. See, e.g., Herring v. United States, 555 U.S. 135, 153 (2009) (Ginsburg, J., dissenting). If anything, imposing vicarious liability exposure on the HAV owner would incentivize disabling the ADS, which runs counter to industry and government-stated goals of encouraging use of the technology to improve road safety.
The HAV manufacturer and ADS developer can have a positive impact on safety whereas the owner and the occupant cannot. One of the prime theoretical justifications for the imposition of tort liability is to motivate a party to take cost-effective safety measures. For tort law to serve its traditional purpose and role, and to promote the use of AV technology (rather than avoid it), the voluntary users in our scenarios need a clear Shield Function in the law.62In this Article, we do not engage with suggestions that an insurance pool or other novel approach should be created to address automated vehicle accidents and how claims are paid. See, e.g., Steven Shavell, On the Redesign of Accident Liability for the World of Autonomous Vehicles, Nat’l Bureau of Econ. Rsch., Working Paper No. w26220 (last revised Feb. 22, 2023). In our view, the law needs to get the attribution and allocation of liability correct before considering overhauls to the insurance system—and a key part of that reform involves passing laws which perform the Shield Function.

There is reason to believe that protecting voluntary AV users from legal liability for accidents has the side benefit of promoting efficient legal rules, allowing the market to produce the optimal level of investment in safety. For example, a recent article presents an economic model which explains how the law should assign responsibility for accidents in various scenarios (including with application to automation technology).63Barbara Luppi & Francesco Parisi, The Multiplication Effect of Third-Party Liability, Minnesota Legal Studies Research Paper No. 25-06 (Jan. 21, 2025), https://ssrn.com/abstract=5106423.
Importantly, it shows that a legal regime which provides for “third party residual liability” produces the optimal investment in safety.64Id.
This model also shows that efficiency is not promoted by attributing liability to actors who do not behave in a negligent or reckless manner. Thus, economic theory suggests we should not want a legal system which places liability on those voluntary users who are not responsible for the DDT at the time of an accident, and certainly not on those who do not have an element of meaningful human control over the technology (whether in the operation or the design phase for an HAV). We note that NHTSA focuses on meaningful elements of control in its recent Notice of Proposed Rulemaking by identifying potential participants in AV STEP who have some element of control over development of the technology.

II. The Law’s Broad Reach for Intoxicated Driving & Vicarious Liability

In its details, how should management structure a top-down, bottom-up design process for a self-driving vehicle that is fit-for-purpose to transport potentially intoxicated passengers from a bar, restaurant or social event safely home? The concern is broader than simply achieving a safe journey without incident. Automation technology in action will fall short if a fatal accident en route exposes the owner/operator to criminal liability for DUI manslaughter or vehicular homicide.

To meet consumer expectations and the promise of expanding mobility options, a truly successful design for a private self-driving vehicle will perform what we called a “Shield Function” in Part I by protecting an intoxicated owner/occupant from criminal and civil liability during operation.65Frank Douma & Sarah Aue Palodichuk, Criminal Liability Issues Created by Autonomous Vehicles, 52 Santa Clara L. Rev. 1157, 1163 (2012) [hereinafter Criminal Liability Issues].
To promote economically efficient liability rules, the Shield Function should extend protection to an owner who leases or loans her AV to a robotaxi company or delivery service because, in those cases, the owner has no involvement with the DDT and no ability to revise or improve performance of the ADS.

The Shield Function is needed because the law often has a broad reach for finding a case of intoxicated driving. Moreover, vicarious liability rules, which hold an owner responsible for accidents caused after loaning a motor vehicle to an intoxicated person or other driver with diminished capacity, have no place when leasing or loaning an AV because the owner has no role to play in promoting safety by avoiding negligent or unwise entrustments—the ADS is supposed to eliminate the kind of human errors that create the possibility for a negligent entrustment in the first place.

A. Terminology

In the following discussion, terminology matters. The term “automated vehicle” or “AV” applies to vehicles with Level 3, 4 or 5 features as specified in SAE J3016.66J3016, supra note 1.
For ease of reference, the Article sometimes refers to an L3, L4 or L5 vehicle even though the SAE levels describe features that vehicles possess. A fully or highly automated vehicle is a vehicle with L4 or L5 features. The fully automated vehicle can transport passengers without any human intervention because the system itself is designed to transition to a minimal risk condition (MRC) if the AV encounters an environment or situation that it is unable to navigate safely.

B. Reducing Drunk Driving Fatalities as a Core Benefit

Advocates for the autonomous vehicle industry cite reduction in drunk driving deaths as a significant benefit flowing from deployment of vehicle automation technology.67Autonomous Vehicle Industry Association, AVIA Statement on NHTSA’s 2022 Crash Report (April 21, 2023), https://theavindustry.org/newsroom/press-releases/avia-statement-on-nhtsas-2022-crash-report.
A voluntary user of an HAV might reasonably assume that use of any HAV will perform the Shield Function as a simple byproduct of its automation level. But the wording and interpretation of statutes prohibiting “driving under the influence” or “driving while intoxicated” in different states (collectively, “DUI Statutes”) paint a more complicated picture because case law in some states interprets DUI Statutes to apply when a defendant is not actively driving a vehicle, but merely has the power to control it.68Criminal Liability Issues, supra note 65, at 1163.
Thus, a privately owned L4 vehicle with a control feature, such as the ability to change from fully autonomous mode to manual mode “on-the-fly” mid-itinerary, may fail to perform the Shield Function.69Id. (suggesting an “I’m drunk, take me home” button).

Though we focus on interpretive details of United States law (particularly state law in Florida), we note that the same type of interpretive issue arises under some European laws (which we illustrate with two Dutch examples).70See infra notes 77-80 and accompanying text.
We recommend that HAV manufacturers and ADS developers incorporate legal considerations into product requirements and specifications for deployments of HAVs in any state of the United States and in any European country to confirm that a mismatch does not result between physical performance characteristics of the HAV and legal sufficiency to perform the Shield Function. We also recommend treating each jurisdiction as a potentially separate operational design domain (ODD) when an AV manufacturer seeks to expand the deployment area of its HAVs.71Careful consideration of ODD expansion already figures in safety engineering processes. See Daniel Hillen & Jan Reich, Cross-Domain Safety Analysis to Support ODD Expansion for Autonomous Systems, Fraunhofer.de (Jan. 13, 2025), https://www.iese.fraunhofer.de/blog/cross-domain-safety-analysis-to-support-odd-expansion-for-autonomous-systems.

C. Differences in Functional Requirements

The use case of transporting an intoxicated person without risk of criminal liability is a functional requirement but, perhaps, an unusual one in an engineering context because functionality for this use case is not measured purely against physical performance characteristics. Engineers are familiar with laws and regulations that specify a functional physical requirement (such as limiting permitted levels of emissions of RF energy by a transmitting device).7247 C.F.R. § 1.1310 (Radiofrequency radiation exposure limits).
Rather, functionality for our subject use case depends on the interaction between design requirements and specifications, on the one hand, and an applicable legal system’s likely treatment of the HAV considering its features, on the other hand.

Satisfaction of the Shield Function requirement does not relate directly to the process used to create an ethical design (such as the model process found in IEEE 7000)73IEEE 7000, supra note 12.
nor is it measured by a test in a laboratory.74A third party might certify compliance as occurs with the FCC-recognized Telecommunications Certification Bodies for devices that emit RF energy.
Rather, we suggest that satisfaction of the Shield Function be measured by receipt of a favorable opinion from counsel that operation of the vehicle will perform the Shield Function under applicable law. Failure to obtain such an opinion should require a specific product warning to avoid false advertising claims and the imposition of product liability for failure to warn.

We recommend these disclosure steps as a precaution that promotes judicial economy. We are less focused on the ultimate likelihood of the success of such claims. The history of tort law is filled with examples of extension of liability to novel situations.75See, e.g., Tarasoff v. Regents of the University of California, 17 Cal.3d 425 (Cal. 1976) (establishing a duty to warn for mental health workers) superseded on other grounds by Cal. Civ. Code § 43.92 (changing the appellation “duty to warn” to a “duty to protect”).
We know from recent cases that the presence of disclosure has been important to protecting an AV manufacturer from liability76See, e.g., Tesla, Inc. v. Banner, — So. 3d —, 2025 WL 610816 (Fla. Dist. Ct. App. Feb. 26, 2025) (reversing a circuit court order allowing a plaintiff to seek punitive damages).
and thus emphasize identification of appropriate disclosure as part of the design process.77We note the increased focus on disclosure relating to AVs as evidenced by recent activity in the Texas legislature. See supra note 24.

D. Prosecutions Motivate the Need for Design to Account for Legal Categories

The question of appropriate design process is timely against the backdrop of DUI manslaughter78Julius Whigham II, Boca-area attorney faces DUI manslaughter charge after 2022 crash killed motorcyclist, The Palm Beach Post (Aug. 28, 2023 5:10 AM), https://www.palmbeachpost.com/story/news/crime/2023/08/28/fatal-crash-has-boca-raton-area-lawyer-facing-dui-manslaughter-charge/70658879007.
and vehicular homicide charges79Tom Krisher & Stefanie Dazio, Felony charges are 1st in a fatal crash involving Autopilot, Assoc. Press (Jan. 18, 2022, 2:42 PM), https://apnews.com/article/tesla-autopilot-fatal-crash-charges-91b4a0341e07244f3f03051b5c2462ae.
brought against users of Tesla automation systems. Fatal accidents have occurred, with prosecutors filing charges, when owner/occupants traveled with a Tesla automation feature engaged. For ease of reference, we refer to all current Tesla consumer automation features (e.g., Autopilot and Full Self-Driving) as “Autopilot.”

The issue of proper attribution of criminal liability arises when, as a defense, the owner/operator claims that he or she was not “driving” or “operating” the vehicle at the time of the accident because Autopilot was engaged. This defense asserts that the Autopilot feature was, as a legal matter, the driver or operator at the relevant time, and not the owner/operator. Identity of the driver or operator is central because, to get a conviction, the general rule in the United States requires a prosecutor to prove beyond a reasonable doubt that the defendant was driving or operating the vehicle while intoxicated. The devil is in the details of state law because “driving” and “operating” come in different flavors based on statutory language, judicial interpretation and model jury instructions.

A defendant’s attempt to substitute Autopilot for the owner/occupant generally has failed in the United States because the Tesla Autopilot design concept requires the human owner/occupant to always monitor the on-road performance of the vehicle. An intoxicated person cannot safely do so.

Court proceedings in the Netherlands produced similar outcomes under European law when defendants attempted to substitute Autopilot for themselves in distracted driving cases, showing the widespread importance of the concept of “driver” across legal systems. In one case involving an administrative sanction, the defendant had been speaking while holding a telephone in violation of a provision of the Road Traffic Act:

The driver of a 2017 Tesla Model X was fined € 230 in an administrative sanction for using his mobile phone while driving. Before the county court, he claimed that because the autopilot was activated, he could no longer be considered the driver, and therefore the acts of driving and using a hands-on phone did not constitute the simultaneous act prohibited [by law]  . . . This narrative did not save the day.80Jeanne Gaakeer, The Knowledge of Causes and the Secret Motions of Things, The Interdisciplinary and Doctrinal Challenges of Automated Driving Systems and Criminal Law, in III – Human-Robot Interaction in Law and Its Narratives, Legal Blame, Procedure and Criminal Law 335, 340 (S. Gless & H. Whalen-Bridge eds., 2024) [hereinafter Gaakeer, Knowledge of Causes].

In a 2019 Dutch criminal case, the driver had focused attention elsewhere (and not on the road) for 4 or 5 seconds.

“ . . . the defendant’s vehicle had swerved from its lane and collided head-on with an oncoming car.  . . . The defendant pleaded not guilty, arguing that the threshold test for recklessness and/or carelessness had not been met, as he had taken his eye off the road for only a few seconds because he had assumed that the Autosteer System of his Tesla was activated. This position was not given any weight by the court.”81Id. at 356.

As noted, “[l]ike the Netherlands, many legal systems lack a codified definition of the term ‘driver,’ which leads courts to define the term in context.”82Id. at 345.
Particularly in the absence of codified definitions of terms such as ‘driver’ or ‘operator’, a counsel opinion addressing the legal outcome in the context of accidents involving AVs with different features becomes important as a matter of consumer disclosure.83It is not uncommon for a regulator to require a legal opinion as part of a requirement for taking some action, such as the SEC’s requirement of a legal opinion in connection with offering a security to the public. Rating agencies require legal opinions as a condition to receiving a high credit rating for various types of structured financings.

E. Relevance of Automation Levels

For legal purposes, determining whether a person is “driving” or “operating” a motor vehicle can depend on the level of control the owner/occupant has over the vehicle’s operation. In many jurisdictions, the answer to this question depends on a person’s ability to control the vehicle, regardless of whether the person is exercising control at the time of an accident.

Tesla classifies Autopilot as a Level 2 feature. The feature’s design concept requires that a human driver always remain attentive with a hand on the steering wheel, able to assume complete control of the vehicle at a moment’s notice. The Tesla owner’s manual discloses that Autopilot is an advanced driver assistance system (ADAS) and not an automated driving system (ADS)—technically, not an automated vehicle at all. This partial automation design concept does not allow the owner/operator to engage the ADAS and then read a book, watch a movie or take a nap.

Autopilot is not fit-for-purpose to safely transport an intoxicated person from point A to point B because the design concept requires that the owner/operator remain vigilant with the power to assume complete operation of the dynamic driving task (DDT) on a moment’s notice to ensure safe operation. An intoxicated driver cannot safely perform the task of a fallback-ready user, let alone instantly respond to unsafe conditions.

Despite Autopilot’s L2 design concept, on November 5, 2024, NHTSA requested information from Tesla based on concerns that Tesla conveyed mixed messages to consumers about the capabilities and proper use cases for Autopilot.84Information Request ID PE24031-01, NHTSA (Nov. 5, 2024).
Potentially exaggerated performance claims and worrisome use case suggestions endorsed by Tesla on social media included mention that Autopilot might replace a human designated driver to take an intoxicated person home.85Email from Gregory Magno (NHTSA) to Eddie Gates, et al. (Tesla) (May 14, 2024, 5:56:00 PM), attached to Information Request ID PE24031-01, supra note 84.
NHTSA also observed that Tesla’s own literature suggests in places that Autopilot provides full, rather than partial, automation—perhaps leading some owner/operators wrongly to conclude that Autopilot is a suitable replacement for a designated driver.

Though we cite cases involving Tesla systems, the same analysis applies to Ford’s BlueCruise and GM’s hands-free OnStar Super Cruise because they also are L2 features.86Maryclaire Dale, Philadelphia woman who was driving a partially automated Mustang Mach-E charged with DUI homicide, Assoc. Press (Sept. 3, 2024, updated 3:16 PM), https://apnews.com/article/mach-e-fatal-crash-philadelphia-driver-charged-2c66cb74d08201cd3f3673cf3dc6bf1b.
No L2 vehicle is an “automated” vehicle within J3016 terminology because features at that level are not designed to perform the entire DDT for sustained periods of time. A similar analysis applies to the Mercedes-Benz DrivePilot feature, though Mercedes classifies that feature as L3. An L3 feature is an ADS (not an ADAS) because its design intent contemplates performing the entire DDT for sustained periods of time. However, for safe operation, the L3 design concept requires the presence of an owner/operator to respond to a “takeover” request to assume control of the vehicle if required for safety (i.e., function as a fallback-ready user). The L3 ADS is designed to issue a takeover request to the human owner/operator when it encounters environments or situations for which the ADS has not been “trained” to navigate within its operational design domain (ODD). The ADS also may initiate a takeover request upon an impending exit from its ODD.

Unlike an L2 feature, an L3 feature is designed to give owner/operators some of their time back because the L3 design concept allows a person to attend to other tasks during an itinerary, such as reading, watching a movie or playing a video game so long as the owner/occupant remains seated behind the steering wheel, able to promptly respond to a takeover request from the ADS. The L3 use case does not, however, include taking a nap in the back seat of the vehicle.

Just as an intoxicated person cannot safely assume control of an L2 vehicle on the spur of the moment, an intoxicated person cannot reliably and safely respond promptly to a takeover request from the ADS. For this reason, the L3 vehicle is not fit for purpose to transport intoxicated persons safely home—just as the L2 vehicle is not fit.

The analysis placing liability on the intoxicated driver in an L2 or L3 vehicle operating with the automated driving feature engaged is consistent with prior U.S. case law. The auto industry’s introduction of cruise control in the 1970s provided an opportunity for drivers to argue that they should not be responsible for speeding tickets by placing blame on the cruise control feature—a defense rejected by courts.87See State v. Packin, 257 A.2d 120, 121 (N.J. Super. Ct. App. Div. 1969); State v. Baker, 571 P.2d 65, 69 (Kan. Ct. App. 1977).

A motorist who entrusts his car to the control of an automatic device is driving the vehicle and is no less responsible for its operation if the device fails to perform a function which under the law [they are] required to perform. The safety of the public requires that the obligation of a motorist to operate his vehicle in accordance with the Traffic Act may not be avoided by delegating a task he normally would perform to a mechanical device.88State v. Packin, 257 A.2d at 121.

Case law has similarly concluded that autopilot systems in aircraft do not absolve pilots of responsibility.89Brouse v. United States, 83 F. Supp. 373 (N.D. Ohio 1949).
The negotiated pleas in recent cases involving Tesla support this analysis. Other legal observers note that this level of responsibility creates risks for owner/operators of AVs.90See, e.g., Mbilike M. Mwafulirwa, The Common Law and the Self-Driving Car, 56 Univ. S.F. L. Rev. 395 (2022); Jonathan Layton, Proof That Driver Was “Operating” Motor Vehicle While Intoxicated, 61 Am. Jur. Proof of Facts 3d 115 (2001; updated June 2025).
One law review article suggests AI-Chaperone Liability if a human chaperone “has either supervisory authority over or may override the AI device/system.”91Peter Y. Kim, Where We’re Going, We Don’t Need Drivers: Autonomous Vehicles and AI-Chaperone Liability, 69 Cath. U. L. Rev. 341, 366 (2020).

NHTSA expressed concern that Tesla’s online behavior gave the false impression to consumers that Autopilot functioned like a chauffeur or robotaxi. Vigilance by both L2 and L3 owner/operators is critically important for safety because neither design concept requires that the vehicle achieve an MRC solely by operation of the ADS.

Absence of a requirement to achieve an MRC without human intervention prevents the vehicle from performing like a chauffeur or robotaxi. Only an L4 or L5 vehicle must achieve an MRC without human operator involvement. The requirement that the vehicle achieve an MRC without human intervention is the feature that allows a person to take a nap in the back seat of the vehicle while the L4 feature is engaged. Arguably, ability to achieve an MRC is the feature which relieves the owner/operator of supervisory responsibility once the ADS is engaged. (Importantly, achieving an MRC does not equate with safety under SAE J3016 terminology, despite legislation which often makes that implicit assumption.)92J3016 is not a safety standard. It is a taxonomy and satisfaction of a definition does not imply any judgment in terms of system performance. See J3016, supra note 1, at 8.1.

Though their vehicles are not available for purchase by consumers, Cruise (formerly) and Waymo (currently) have operated robotaxis with L4 features. Just as we would consider an intoxicated person prudent if he or she took a conventional taxi home after a party (rather than drive), so too should we approve of an intoxicated person taking a robotaxi home instead. One would expect a similar result if an intoxicated person activated the L4 feature in his or her privately owned AV rather than using a commercial robotaxi to go home. But the legal status of private L4 vehicles is not so clear, for the reasons explained below.

Uncertainty over the ability of some L4 vehicles to perform the Shield Function explains why the question of “fit for purpose” cannot be answered solely by evaluation of the functional capabilities of the ADS in an AV. Per one recent observer:

If manufacturers focus on the development of new technologies rather than on the legal frameworks within which their products are going to be handled, any opacity as far as product information is concerned can lead to someone, somewhere, avoiding compliance with the law.93Gaaker, Knowledge of Causes, supra note 82.

The law in a state may render an AV not fit-for-purpose based on legal categories and definitions rather than the functional capabilities of the vehicle because motor vehicle laws can take different approaches to the assignment of liability for an accident, both within a single state across categories of crimes and among the several states. Differing language used in statutes and model jury instructions might assign liability to the human owner/occupant, to the AV manufacturer or even, potentially, to the AV itself (a problematic result because an AV is not a legal person).94 Fla. Stat. Ann. § 316.85 (West 2019). See infra notes 125-28 and accompanying text.

Assignment of liability may depend on the capabilities and level of autonomy as well as the driver’s actions. The automobile manufacturer’s choice of driving automation level, and capabilities within a level, thus may depend on a combination of legal, marketing, and engineering considerations.

F. Relevant Legal Liability Categories

Case law in the U.S. generally interprets “drive” and “driving” more narrowly than “operate” or “operating”—with “drive” and its cognates requiring motion of some sort, while “operate” and its cognates do not typically require motion. Case law also suggests that the facts required to satisfy either category may be the mere capability to drive or operate the vehicle even if that capability is not exercised. For example, if an intoxicated person enters her vehicle and starts the engine, a conviction for intoxicated operation of a motor vehicle may be upheld in the United States.

1. DUI Manslaughter

In general, an owner/operator has liability for actions taken while “driving” or “operating” a vehicle. When an ADS is performing the DDT, it is tempting to believe that the owner/operator is not driving or operating the vehicle. This view is bolstered by state statutes which provide that the ADS is the “operator” of the vehicle when it is engaged. Florida law provides an example:

316.85 Autonomous vehicles; operation; compliance with traffic and motor vehicle laws; testing.—

(3)(a) For purposes of this chapter, unless the context otherwise requires, the automated driving system, when engaged, shall be deemed to be the operator of an autonomous vehicle, regardless of whether a person is physically present in the vehicle while the vehicle is operating with the automated driving system engaged.95 Fla. Stat. Ann. § 316.85(3)(a) (West 2019) (emphasis added).

Notice that an L3 feature, such as Mercedes’ DrivePilot, is an “automated driving system” and the vehicle in which it is installed is an “automated vehicle” within J3016. Yet the “context otherwise requires” when an owner/operator is intoxicated because no intoxicated person can responsibly serve as a fallback-ready user for an L3 feature. Moreover, despite statutes such as this, engaging the ADS does not always insulate an owner/operator from liability. This is the case with DUI Manslaughter in Florida. The Florida Statute for DUI Manslaughter provides:

316.193 Driving under the influence; penalties.–
(1) A person is guilty of the offense of driving under the influence and is subject to punishment as provided in subsection (2) if the person is driving or in actual physical control of a vehicle within this state and:
(a) The person is under the influence of alcoholic beverages, any chemical substance set forth in s. 877.111, or any substance controlled under chapter 893, when affected to the extent that the person’s normal faculties are impaired…96 Fla. Stat. Ann. § 316.193 (West 2019) (emphasis added).

The Florida Standard Jury Instruction for DUI Manslaughter approved by the Florida Supreme Court has broad application, including situations in which an operator is not performing the dynamic driving task:

Actual physical control of a vehicle means the defendant must be physically in [or on] the vehicle and have the capability to operate the vehicle, regardless of whether [he] [she] is actually operating the vehicle at the time.97Florida Standard Jury Instructions in Criminal Cases, 7.8 Driving Under the Influence Manslaughter, at 217 (updated Aug. 2, 2024) (emphasis added).The emphasized language was introduced in 2005 by the Supreme Court Committee on Standard Jury Instructions in Criminal Cases. Amendments to standard jury instructions for criminal cases, Florida Bar News (Dec. 1, 2005), https://www.floridabar.org/the-florida-bar-news/amendments-to-standard-jury-instructions-for-criminal-cases.

From the statutory language, as interpreted by the jury instruction, it follows that an operator of an L2 Tesla (Autopilot) and an L3 Mercedes (DrivePilot) can be guilty of DUI Manslaughter even if, at the time of the fatal collision, the ADAS (Tesla) or the ADS (Mercedes) is engaged.

Though the ADS is “driving”—by performing the entire DDT in the Mercedes—as a legal matter, the operator is in “actual physical control” of the vehicle because, per the jury instruction, physical control includes the “capability” to operate the vehicle regardless of whether the operator is operating the vehicle at the time. Indeed, the design concepts of both Tesla and Mercedes automation systems require that the operator be prepared to assume control of the vehicle at any time during any itinerary.

Even though an owner/operator should not use an L3 vehicle while intoxicated because of the need to assume control of the vehicle per the L3 design concept, the owner/operator would have liability even if an accident occurred that was unrelated to the intoxicated status of the owner/occupant (for example, because the accident occurred before the AV initiated a takeover request).

Thus, L2 and L3 vehicles are not fit-for-purpose to transport intoxicated owner/operators for legal as well as engineering reasons based on the design concepts of the ADAS and ADS.98Discussions often equate Level 2 with “ADAS” but, just as J3016 does not sanction references to fractional levels such as Level 2+, treating ADAS as equivalent to Level 2 is not a J3016 supported usage. See Philip Koopman, SAE J3016 User Guide, Carnegie Mellon University, https://users.ece.cmu.edu/~koopman/j3016/index.html#myth20 (last updated Sep. 4, 2021).
What may surprise some, however, is that an HAV similarly may not be fit-for-purpose either—but entirely for legal reasons. This depends on the context presented by particular features of the L4 vehicle.

In the United States, courts likely will interpret the scope of DUI Statutes against the backdrop of a concern about sanctioning behavior that poses an unreasonable risk to public safety. Intoxicated persons often make bad choices—and a decision by an intoxicated person to switch from automated mode to manual mode mid-itinerary is a signature example of a bad choice that risks public safety during operation of an HAV that permits this flexibility.

The biggest issue for HAVs is a consumer-oriented model (not a robotaxi) which allows the owner/operator to disengage the ADS during a trip, reverting to manual control. This flexibility may be a critical marketing feature for potential purchasers. To keep the HAV fit-for-purpose, a design team might consider an “impaired” or “chauffeur” mode which, when activated, prevents an owner/operator from assuming control mid-itinerary.99Frank Douma & Sarah Aue Palodichuk, Criminal Liability Issues Created by Autonomous Vehicles, 52 Santa Clara L. Rev. 1157, 1163 (2012) (suggesting an “I’m drunk, take me home” button).

A borderline case might be an HAV that contained no steering wheel or gas pedal. This vehicle would appear fit for the purpose of transporting intoxicated persons, just as a robotaxi is. However, if this HAV had an emergency feature, such as a panic button, that allowed an occupant to terminate an itinerary, causing the vehicle to maneuver into an MRC, it would be for the courts to decide whether this modest level of vehicle control amounted to “capability to operate the vehicle,” thus exposing the occupant to potential liability for DUI manslaughter.100A state law might consider a human occupant of an AV to be a “driver” even though that occupant only had partial control over the DDT. See, e.g., Pa. Chap. 85, § 8508 (Operation requirements), providing:

(c) Operation with driver —A highly automated vehicle may operate on a highway with a highly automated vehicle driver, subject to the following:

(1) A highly automated vehicle driver may control all or part of a highly automated vehicle’s DDT.

(2) If a failure of an ADS renders the ADS unable to perform the entire DDT within the intended ODD, the highly automated vehicle or the highly automated vehicle driver must achieve a minimal risk condition.

Id. (emphasis added). A human occupant with a panic button has partial control over the DDT.

A design team might consider eliminating the panic button for this reason—but there could be a trade-off with safety if the panic button provides a final layer of protection to avoid a dangerous situation for which the ADS is unprepared. There may be a difference of opinion on whether a panic button feature adds to safety.101Compare NHTSA, Interpretation Letter to Chris Urmson, Interpretation ID: Google–compiledresponseto12Nov15interprequest–4Feb16final, 2016 WL 11897773 (Feb. 4, 2016) (stating that an AV is safer if there is no possibility of human intervention) with Podcast, Missy Cummings: Insights into Self-Driving Tech and Safety, The Center for Auto Safety at 7:00 min. (Dec. 30, 2024), https://www.autosafety.org/podcast/missy-cummings-insights-into-self-driving-tech-and-safety (identifying a “big red button” as an essential safety feature) (transcript provided on website). Including a “big red button” feature available only to a remote operator is not a reliable safety feature because of signal latency according to Dr. Cummings. Id.
As an alternative, the AV manufacturer might seek an opinion from the attorney general of a state (in this example, Florida) seeking clarification if the design team concludes that, on balance, the panic button feature mitigates harm, and its retention creates both a positive risk balance and a marketing benefit.

2. Reckless Driving and Vehicular Homicide

A United States prosecutor will resort to a vehicular homicide charge in cases of distracted driving and cases in which evidence of intoxication may be successfully challenged.

The Florida statutes that govern reckless driving and vehicular homicide differ in structure from DUI manslaughter because they do not contain language referencing “actual physical control” nor a jury instruction which indicates that actual physical control does not require present control of the vehicle. An argument can be made, based on this statutory construction, that an accident which occurred while an ADS was engaged did not create vehicular homicide liability for an owner/operator because the relevant statutes seem to require a finding that the owner/operator actually drove and operated the vehicle.

316.192 Reckless driving.—

(1)(a) Any person who drives any vehicle in willful or wanton disregard for the safety of persons or property is guilty of reckless driving. (emphasis added)

782.071 Vehicular homicide.—”Vehicular homicide” is the killing of a human being, or the killing of an unborn child by any injury to the mother, caused by the operation of a motor vehicle by another in a reckless manner likely to cause the death of, or great bodily harm to, another.102 Fla. Stat. Ann. §§ 316.192, 782.071 (West 2019) (emphasis added).

The applicable Florida model jury instruction contains no definition of “drive” for purposes of the reckless driving statute or of “operation of a motor vehicle” for purposes of the vehicular homicide statute. The suggestion that a person should not be convicted of vehicular homicide while traveling with the L4 ADS engaged may be bolstered by comparison with the definition of “operate” applicable to the crime of boating while intoxicated (as opposed to driving while intoxicated). The definition of “operate” for boating states:

§ 327.02(33), Fla. Stat. Applicable only to Vessel Homicide.

“Operate” means to be in charge of, in command of, or in actual physical control of a vessel upon the waters of this state, to exercise control over or to have responsibility for a vessel’s navigation or safety while the vessel is underway upon the waters of this state, or to control or steer a vessel being towed by another vessel upon the waters of the state. 103 Fla. Stat. Ann. § 327.02 (West 2019) (emphasis added).

The argument against liability observes that the definition of “operate” with respect to boating appears intended to have broader scope than the mere use of the word “operate” in the case of a motor vehicle. In the case of boating, mere responsibility for navigation or safety suffices. An owner/operator of an L2 or L3 vehicle has responsibility for safety. A safety driver in a prototype or test L4 vehicle similarly has responsibility for safety even when the ADS, rather than the safety driver, is performing the DDT. In the private L4 production vehicle, however, the design concept does not assign responsibility for navigation or safety to the owner/occupant while the ADS is engaged because of its ability to achieve an MRC without human involvement.

The legal outcome in vehicular homicide (in contrast to the engineering design concept and performance-based outcome) turns on the question of whether an owner/operator may legally delegate responsibility for the dynamic driving task to a highly automated vehicle (and thereby relieve himself of liability). Importantly, engineering design concept responsibility may not track legal responsibility.

Case law suggests that a pilot remains responsible for the safe operation of an aircraft when autopilot is engaged.104Brouse v. United States, 83 F. Supp. 373, 374 (N.D. Ohio 1949).
And, drivers have been found responsible for speeding tickets when cruise control was set because the driver remained responsible for operation of the vehicle within the speed limit.105See State v. Packin, 257 A.2d 120, 121 (N.J. Super. Ct. App. Div. 1969); State v. Baker, 571 P.2d 65, 69 (Kan. Ct. App. 1977).
The delegation of the task to automation in those cases did not simultaneously create a release for the aircraft pilot or absolve the driver using cruise control of liability.106Cf. Dixie Farms, Inc. v. Timmons, 323 So. 2d 637, 639 (Fla. Dist. Ct. App. 1975) (noting that “[a] driver does not have an absolute right to rely upon the judgment of a third person concerning whether he should proceed into a dangerous area with his automobile.”)
The case of the boat captain may be similar insofar as the captain may use automation as a tool, but the captain retains ultimate responsibility for the voyage if that tool fails.

This reasoning explains liability of the safety driver in the 2018 Uber fatality in Arizona.107Assoc. Press, Backup driver of an autonomous Uber pleads guilty to endangerment in pedestrian death, NPR (July 28, 2023, 5:50 PM), https://www.npr.org/2023/07/28/1190866476/autonomous-uber-backup-driver-pleads-guilty-death.
Although the vehicle involved had an engaged L4 feature, it was a prototype vehicle and, as such, the safety driver had responsibility for the operation of the vehicle just like the captain of a vessel or the pilot of an aircraft retains responsibility. The safety driver owed a duty of care to other road users.

An owner/operator of a private L4 vehicle would have a strong argument that he or she could effectively delegate driving responsibility to the ADS (and release his or her responsibility) if the law provided that the ADS itself owed a duty of care to other road users—a point conceded in a responsive pleading by GM in a non-fatal collision of its AV with a motorcycle.108Nilsson v. Gen. Motors LLC, No. 18-471 (N.D. Cal. Jan. 22, 2018) (Answer and Demand for Jury Trial filed 3/30/18, Defenses and Affirmative Defenses. at 7, ¶ 2) (case settled before verdict).
One of us has argued that legislatures should confirm this legal status by statute.109William H. Widen & Philip Koopman, Winning the Imitation Game, 25 Minn. J.L. Sci. & Tech. 113 (2023).
Once a duty of care has been established for the ADS, the task remains to assign responsibility for breach of that duty of care. That article suggested that responsibility for a breach of this duty of care should fall upon the AV manufacturer, rather than the owner/operator.110Id. at 134-38.
Placing liability on the owner/operator based simply on ownership status limits the value of the Shield Function if a large civil penalty can be assessed for negligent driving even if criminal liability is avoided. And, such a liability attribution rule is inconsistent with the structure of rules which economic theory describes as producing the optimal level of safety.111See Parisi, supra note 65, at 12-14.

3. Liability Based on Vehicle Ownership

Management should initiate a risk analysis at the start of the design process, whether they contemplate using a traditional “V” model or other methodology. Legal officers should identify how the applicable law allocates residual liability for accidents to determine whether the owner/operator can avoid civil liability if the vehicle can perform the Shield Function under criminal law. It will be cold comfort to the owner/operator of a private L4 vehicle if the law absolves him of responsibility to oversee safety during ADS operation, but civil liability nevertheless attaches through the back door by assigning residual liability for accidents to the owner of the vehicle. This question arises because neither an L4 vehicle nor an ADS has status as a legal person. The law will seek to place liability on a legal person rather than allowing liability to evaporate.112See Elish supra note 41, at 52-55.

To ease administration and regulation of automated driving, it may make policy sense to require the owner of the AV to pay liability insurance premiums. But a rule which creates strict liability or vicarious liability for the owner of the AV in excess of policy limits should the ADS fail to perform as intended (i.e., by violating a duty of care to other road users) does not fully achieve the purpose of the Shield Function. Even if the owner/operator cannot be convicted of DUI Manslaughter, the law needs to be clear that the owner does not retain vicarious liability—otherwise the intoxicated owner/operator is at risk for civil liability (if not some lesser flavor of vehicular homicide) by mere ownership. This possibility creates an uneasy journey home for the intoxicated owner/operator of a fully featured private L4 production vehicle. (The problem is even more acute for an owner/operator of an L3 production vehicle.)

G. Design Requirements to Avoid Legal Liability

AV manufacturers cannot passively assume that any L4 or L5 vehicle will perform the Shield Function because the Shield Function is not a mere byproduct of the automation level. Rather, performing the Shield Function must be a conscious design requirement specified for a project with a view to legal categories, requirements and outcomes.

Design risk is one category of consideration in requirements analysis. Design time, non-recurring engineering (NRE) cost, and manufacturing cost are all instances of design risk for management to address early in the design process. Conceptually, legal costs should be bundled with NRE cost because the identification of design features that simultaneously satisfy marketing, engineering and legal considerations should be incorporated into the requirements analysis. If management determines that law reform should be pursued (or clarification sought from state authorities) to expand the scope of available features, design time risk will increase. By its nature, successful design requires iterative collaboration among management, marketing, engineering and legal staff.

First, management and marketing must confirm that the vehicle model under design is intended to perform the Shield Function. Second, they must identify those additional features desired in the model. Third, management and marketing must specify the target jurisdictions for deployment of the new model of vehicle (whether one state or multiple states). Management might make the business decision to produce a model which can perform the Shield Function across several jurisdictions or adopt a strategy which makes specific models tailored for each state. The legal officers must then compare the list of desired features to the applicable laws in the target jurisdictions and identify those features that are inconsistent with the Shield Function.
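The screening step performed by the legal officers can be pictured as a simple cross-check of desired features against per-state legal constraints. The sketch below is purely illustrative; the feature names, the per-state rule sets and the `screen` helper are hypothetical placeholders for the legal analysis, not actual statutory categories:

```python
# Hypothetical sketch: screen desired vehicle features against per-state
# legal constraints on the Shield Function. Feature names and rule sets
# are illustrative assumptions only, not actual statutory categories.

DESIRED_FEATURES = {"mid_trip_manual_takeover", "panic_button", "voice_commands"}

# Features assumed (for illustration) to defeat the Shield Function in a state,
# as determined by legal review of that state's statutes and jury instructions.
STATE_DISQUALIFIERS = {
    "FL": {"mid_trip_manual_takeover", "panic_button"},
    "PA": {"mid_trip_manual_takeover"},
}

def screen(features, states):
    """Return, for each target state, the desired features legal review flagged."""
    return {s: sorted(features & STATE_DISQUALIFIERS.get(s, set())) for s in states}

if __name__ == "__main__":
    for state, flagged in screen(DESIRED_FEATURES, ["FL", "PA"]).items():
        print(state, flagged)
```

The point of the sketch is only that the cross-check is jurisdiction-specific: the same feature list can pass in one deployment state and fail in another, which is what drives the one-model-per-state versus one-model-for-all-states business decision described above.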

Suppose one desired feature is the ability of the owner/occupant to switch from autonomous mode to manual mode in the middle of a trip, but the legal officers determine this feature is inconsistent with the Shield Function because it gives the owner/occupant too much control. Management and marketing must then decide whether to pursue a design “work around” to retain some portion of this flexibility. The engineers will consider the feasibility of any proposed work-around using traditional design considerations. Design risk, including cost considerations, will factor in any decision.

For example, a possible solution might be to create a “chauffeur” mode which an owner/operator might select for a trip home from a social function. The chauffeur mode would lock the human controls for the trip—making the private L4 AV function like a robotaxi or a private AV without human controls. If the legal officers confirmed that a “chauffeur mode” would solve the control problem under state law, then the engineers would determine how to implement this feature. For example, steering by a human driver might be disabled whether the steering is electronic (steer by wire) or via a physical steering column using the existing anti-theft lock included in conventional vehicles (typically engaged when the vehicle is parked).

If the team decides to include a chauffeur mode, then they must decide whether to include a “panic” button that allows termination of the itinerary.
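The interlock logic of a chauffeur mode with an optional panic button can be sketched in a few lines. Everything here is a hypothetical illustration (the class and method names are not any manufacturer's API); the design intent it encodes is the one described above: once engaged, the mode refuses manual takeover for the whole itinerary, and a panic button, if fitted, only terminates the trip by directing the ADS to an MRC rather than returning the DDT to the occupant:

```python
# Hypothetical sketch of a "chauffeur mode" interlock. Names are
# illustrative assumptions, not any manufacturer's actual interface.

class ChauffeurMode:
    def __init__(self, panic_button_enabled=False):
        self.engaged = False
        self.panic_button_enabled = panic_button_enabled

    def start_itinerary(self):
        self.engaged = True  # lock the human controls for the trip

    def request_manual_takeover(self):
        # Mid-itinerary takeover is refused while chauffeur mode is engaged,
        # so the occupant lacks the "capability to operate the vehicle."
        return not self.engaged

    def press_panic_button(self):
        # If fitted, the panic button only terminates the itinerary by
        # directing the ADS to achieve a minimal risk condition (MRC);
        # it never hands the DDT back to the occupant.
        if self.engaged and self.panic_button_enabled:
            self.engaged = False
            return "achieving_MRC"
        return "ignored"
```

Whether even this limited panic-button authority amounts to legal “capability to operate the vehicle” is, as the text notes, a question for the courts, which is why the design team may instead omit the button entirely.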

The process must be repeated each time a feature is added to or removed from the product requirements. The following is a non-exclusive list of factors for consideration.

Depending on the nature of the design solution, an AV manufacturer should consult legal counsel to determine whether including a “chauffeur” feature or “drunk driving” mode conflicts with the requirement of the U.S. Code that a manufacturer may not knowingly make inoperative a device or element of design installed in the vehicle to comply with a motor vehicle safety standard.11349 U.S. Code § 30122 (Making safety devices and elements inoperative).
The first step is to decide whether the proposed feature makes a device or design element “inoperative” and, if so, the second step is to seek an exemption from the Secretary of Transportation.

1. Absence of Control

The presence or absence of the ability to control the AV will be central. This requires the legal system to decide what constitutes meaningful human control in a motor vehicle context. Elements of control should be considered broadly. Termination of autonomous mode mid-itinerary with a shift to manual mode, termination of a trip mid-itinerary via an emergency panic button, the ability to honk a horn, the ability of the occupant to issue voice commands—all may be relevant under state law. The engineers must evaluate the impact on safety of eliminating a feature, such as an emergency panic button, and how best to maximize a positive risk balance by consultation with legal advisors.

2. Nature of Data Recorded

The event data recorder (EDR) required in conventional vehicles records limited information. The data required by regulation was specified before the arrival and deployment of automation technology. The nature of the information recorded, together with the frequency of recording, should be updated to reflect the new technology, and specified with a view to helping an owner/operator avoid legal liability (both for impaired driving and otherwise). AV manufacturers should advocate for more robust recording (whether in statute or regulation) and avoid an impulse to limit data and frequency of collection to hinder proof of a design defect. As a practical matter, the AV manufacturer will not have any liability for a design defect proved by examination of computer code and the like.114See infra note 130.
In any case, the continuing engagement of the ADS should be recorded in narrow increments. Moreover, the ADS should not disengage immediately prior to an accident (as has been reported with respect to Tesla’s automation systems)115NHTSA, ODI Resume, Investigation EA 22-002, U.S. DOT (June 8, 2022), https://static.nhtsa.gov/odi/inv/2022/INOA-EA22002-3184.PDF (noting that Autopilot aborted vehicle control less than one second prior to the first impact).
when ADS engagement limits liability.
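The recording requirement sketched above can be illustrated in code. The following is a minimal, hypothetical sketch (all class and function names are our own invention, not any manufacturer's actual implementation) of an append-only log that samples ADS engagement state at fixed, narrow increments, so that continuous engagement in the moments before an impact can later be demonstrated:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class EngagementRecord:
    timestamp_ms: int   # milliseconds since trip start
    ads_engaged: bool   # True while the ADS is performing the driving task

class EngagementLog:
    """Append-only log of ADS engagement state, sampled at a fixed interval."""

    def __init__(self, interval_ms: int = 100):
        self.interval_ms = interval_ms
        self.records: List[EngagementRecord] = []

    def sample(self, timestamp_ms: int, ads_engaged: bool) -> None:
        # Each sample is retained; nothing is overwritten or discarded.
        self.records.append(EngagementRecord(timestamp_ms, ads_engaged))

    def engaged_during(self, start_ms: int, end_ms: int) -> bool:
        """True only if the ADS was engaged at every sample in [start_ms, end_ms]."""
        window = [r for r in self.records if start_ms <= r.timestamp_ms <= end_ms]
        return bool(window) and all(r.ads_engaged for r in window)

# Example: engagement recorded every 100 ms up to an impact at t = 1000 ms.
log = EngagementLog()
for t in range(0, 1100, 100):
    log.sample(t, ads_engaged=True)
print(log.engaged_during(0, 1000))  # True: no disengagement before impact
```

A record of this kind would allow an occupant to show that the ADS, not a human, was performing the driving task at the moment of an accident, rather than leaving the question to inference.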

3. Maintenance Data

Even if an owner/occupant has no control over longitudinal and lateral vehicle movement, the owner/occupant may have liability for failure to maintain various systems on the AV, including failure to keep sensors both clean and unobstructed. Failures of system maintenance in an AV provide an analog to impaired driving in a conventional vehicle. The design team should consider how to measure and record maintenance data for the AV, and whether to prevent operation of the AV altogether in the absence of required scheduled maintenance, as well as maintenance needed in response to warning lights or indicators.
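The operation lockout contemplated above can be sketched as a simple gate. This is an illustrative sketch only, with hypothetical function and parameter names and assumed thresholds, not a proposal for actual vehicle software:

```python
from typing import List

def may_start_itinerary(odometer_km: float,
                        last_service_km: float,
                        service_interval_km: float,
                        active_warnings: List[str]) -> bool:
    """Refuse to begin an autonomous itinerary when required scheduled
    maintenance is overdue or any warning indicator remains active."""
    overdue = odometer_km - last_service_km > service_interval_km
    return not overdue and not active_warnings

print(may_start_itinerary(12000, 10000, 5000, []))          # True
print(may_start_itinerary(20000, 10000, 5000, []))          # False (service overdue)
print(may_start_itinerary(12000, 10000, 5000, ["sensor"]))  # False (active warning)
```

A lockout of this kind, paired with recorded maintenance data, would both reduce accident risk and document that the owner/occupant had no opportunity to operate an improperly maintained vehicle.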

4. Operational Design Domain

Marketing must identify states in which the model under design can perform the Shield Function to facilitate accurate consumer advertising. Some commentators have noted the importance of legal considerations during design:

Suppose an ADS is of US American design. Surely the designer has US American law at the back of his mind during construction? Does such a vehicle fully comply with the demands of civil-law European systems . . . ?116Gaaker, Knowledge of Causes, supra note 82, at 345 (emphasis added).

The conscious focus required goes beyond merely having “law at the back of [one’s] mind” during design.

5. Relevance of Design Specifications to Advertising

To avoid allegations that a manufacturer has engaged in false advertising and misled the public, any instructions for vehicle use should indicate whether the model is fit for the purpose of performing the role of “designated driver.” Drafting disclosure after careful consideration of the design requirements and specifications, and comparison of those against applicable legal requirements, would complement ethical design under IEEE 7000 because it treats users as stakeholders and considers their interests, needs and well-being.

6. The Specific Liability Risk Based on a Dangerous Instrumentality

Scenario 3 differs from Scenarios 1 and 2 because there is no apparent design specification available to engineers that might perform the Shield Function for vicarious liability. The law in many states treats a motor vehicle as a “dangerous instrumentality” and provides strict liability for an owner (titleholder) for accidents caused by any person to whom an owner loans a vehicle.117See Emerson v. Lambert, 374 So.3d at 761 (Fla. 2023).
In some cases, this status-based liability attaches even if the entrustment of the vehicle is not negligent or reckless.118Florida law provides no such defense: an owner is liable for any entrustment, not merely a negligent one. Other states do have a defense. For example, the Alabama Supreme Court has adopted Restatement (Second) of Torts § 390 (1965), which sets forth liability for negligent entrustment. See Edwards v. Valentine, 926 So.2d 315, 320 (Ala. 2005).

Despite the absence of an engineered workaround, part of the design process should nevertheless identify these risks in the jurisdictions targeted for deployment to facilitate disclosure of potential liability to owner/operators. Our concern here is forward-looking: to prevent product liability for a breach of the duty to warn or a similar theory of liability.119Young v. Tesla, Inc., No. 1:21-cv-00917-JB-SCY, 2022 WL 3355832 at *2 (D.N.M. Aug. 15, 2022) (describing Tesla’s self-driving car as being designed so as not to require any driver action).
We want to foreclose this kind of potential liability even though some might view a duty to warn about legal liability as an extension of existing law.120For an example of a plaintiff asserting a duty to warn claim in a vehicle automation context, see Jing Wang v. Tesla, Inc., No. 20-CV-3040, 2021 WL 3023088 (E.D.N.Y. 2021).
In the past, courts have extended a duty to warn based on common law thinking.121Tarasoff v. Regents of the University of California, 17 Cal.3d 425 (Cal. 1976) (establishing a duty to warn for mental health workers) superseded on other grounds by Cal. Civ. Code § 43.92 (changing the appellation “duty to warn” to a “duty to protect”).

III. Why Law Reform is Needed

The replacement of human agency by a cyber-physical system creates uncertainty in the application of current laws because those laws were structured around legal categories developed before the arrival of advanced vehicle automation technology. The regulatory environment for motor vehicles is further complicated in the United States by the existence of multiple regulatory regimes, unlike aviation, where national authorities regulate design, manufacturing and operations.

The use case of transporting intoxicated persons illustrates the uneasy fit that results from divided regulatory responsibility—with the federal government regulating equipment and individual states licensing human drivers and sanctioning improper operation of vehicles.122 National Academies of Sciences, Engineering, and Medicine, Multistate Coordination and Harmonization for AV Legislation 24 (National Academies Press, 2024).
An ADS, as both equipment and operator, clouds the proper jurisdiction to identify the party responsible for improper AV operation.123Matthew T. Wansley, Regulating Automated Driving, 73 Emory L. J. 505, 540-545 (2024) (describing allocation of regulatory responsibility between federal and state regulators).
To date, federal leadership to create uniform standards across jurisdictions has stalled. Moreover, we consider it unlikely that the current administration will advance substantive legislation containing equipment-level safety specifications.

But perhaps with a change in administrations, federal leadership to clarify owner/operator criminal and civil liability for use of automated vehicles could become a priority because it makes ownership and use of AVs more attractive to consumers. We referred to the federal protection provided for rental car companies against vicarious liability for the torts of their customers in the Graves Amendment12449 U.S.C. § 30106 (Rented or leased motor vehicle safety and responsibility) (2005).
as the template to address our concerns about the consequences of a mismatch between liability attribution rules and consumer expectations. Statutory language could address our three scenarios involving intoxicated drivers and owners who occasionally let their vehicles. We first identify and reject an alternative approach found in some existing state laws. We then provide recommended statutory language.

A. An Unreliable Method to Provide the Shield Function

Some states, like Florida, have a statute which specifies that the ADS, when engaged, is deemed to be the “operator” of the autonomous vehicle.125See Fla. Stat. Ann. §316.85(3)(a) (West 2019) (covering autonomous vehicles; operation; compliance with traffic and motor vehicle laws; and testing).
To the extent the AV industry suggested this language for the purpose of performing a Shield Function for privately owned AVs, we think it falls short for several reasons.

First, when an accident occurs, the law tends to look for a legal person who might have responsibility. A cyber-physical system is not a legal person. Though the law allows a party to create a liability shield in other contexts by using a corporation or other special purpose entity,126See, e.g., Kenneth C. Kettering, Securitization and Its Discontents: The Dynamics of Financial Product Development, 29 Cardozo L. Rev. 1553, 1564-1567 (2007).
ascribing liability to a mere “thing” is a novel method to protect against claims by involuntary creditors. In this context, solving the problem via this type of statutory definition is too clever by half.127A voluntary creditor may agree to a non-recourse debt in which the obligation is satisfied solely with reference to an asset. If a new law is intended to create a non-recourse debt, it should so state expressly because non-recourse debt is created by operation of law in limited cases. Non-recourse debt is created by operation of law when a purchaser acquires property subject to a mortgage. Liens on real property for unpaid property taxes, assessments, and the like also are non-recourse, as are “redeemable ground rents”. See, e.g., Frederick H. Robinson, Nonrecourse Indebtedness, 11 Va. Tax Rev. 1 (1991) (describing contractually created nonrecourse debt and nonrecourse debt arising by operation of law).
“Covert tools are never reliable tools.”128Karl Llewellyn, Book Review, 52 Harv. L. Rev. 700, 703 (1939) (reviewing Otto Prausnitz, The Standardization of Commercial Contracts in English and Continental Law).

Second, if such a provision also is intended to protect AV manufacturers (the more likely motivation for the provision), the statute creates a not so subtle “responsibility gap” when liability is ascribed to an artifact and not any legal person connected to an accident.129See, e.g., Sven Nyholm, Minding the Gap(s): Different Kinds of Responsibility Gaps Related to Autonomous Vehicles and How to Fill Them, in Connected and Automated Vehicles: Integrating Engineering and Ethics 1 (Fabio Fossa & Federico Cheli eds., 2023).
We believe many courts are unlikely to accept this outcome. As a practical matter, the responsibility gap will remain despite the potential for strict products liability based on a design defect because strict products liability for AV manufacturers will prove ephemeral in real-world cases and thus unable to close a responsibility gap.130For a class action to proceed, plaintiffs must prove “commonality.” See Fed. R. Civ. P. 23(a)(2). Commonality will be difficult, if not impossible, to prove for a large class of HAVs because over-the-air updates to software constantly alter the ADS systems. For difficulties associated with introducing expert testimony across different vehicles, see Johnson v. Ford Motor Company, No. 3:13-6529, 2018 WL 1512377 at *2 (S.D. W. Va. 2018). We believe the number of experts who could satisfy Daubert will be small whether in a class action or a standalone case. See Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579, 597 (1993) (describing requirements for admission of expert testimony). In addition, it will be difficult to gain access to the details of the ADS “programming” needed to perform any analysis. See, e.g., Johnson v. Ford Motor Company, No. 3:13-cv-06529, 2018 WL 1440833, at *1 (S.D. W. Va. 2018) (sanctioning the defendant for discovery violations). Even if an expert is qualified, we are unaware of any method currently available to trace the details of the opaque operation of neural networks such as those used in ADS. This differs from analysis of algorithms in conventional computer programs.

When a court is faced with an apparent responsibility gap resting on only the slender reed of a definition, the consumer stands in a more precarious position than the manufacturer because the common law places vicarious liability on owners for harm caused by use of dangerous instrumentalities (such as motor vehicles). For this reason alone, defining the AV to be a “driver” or an “operator” is an inadequate fix if the goal is to provide solid protection for consumers.

A third reason to doubt the efficacy of such language to perform the Shield Function is that statutes typically contain qualifying language such as “unless the context otherwise requires” and “unless otherwise provided by law”131 Fla. Stat. Ann. §316.85(3)(a) (West 2019).
which gives a judge room to maneuver if she does not want to accept a statutory interpretation that tosses liability down a rabbit hole.

In the broader picture, the AV industry sometimes plays games with terminology relating to “drivers” and “operators”—taking potentially contradictory positions. In the case of remote operators, AV companies often attempt to frame the role of their remote employees as “monitors” or “supervisors” to avoid their being deemed operators or drivers—the latter classifications being ones to which liability might attach. The potential liability of remote employees varies, of course, based on what the remote operator does.132Missy Cummings, Issues in teleoperation of autonomous vehicles, Missy Cummings Lab (2023) (unpublished manuscript), https://www.researchgate.net/publication/375371840_Issues_in_teleoperation_of_autonomous_vehicles.
A remote operator calling a tow truck should pose no issue, whereas conveying information such as “the light is green, not red,” which allows the AV to complete the OEDR (object and event detection and response), is troublesome because a Level 4 vehicle should not need help completing the OEDR—such help is a characteristic of a Level 2 vehicle.

The goal should be to provide rock solid assurances to consumers to promote acceptance of vehicle automation technology. To that end, we reject legislative solutions that rely on clever definitions and misdirection. We instead propose a direct approach which we believe is sound policy and clearly achieves the desired outcomes.

We first consider a legislative solution for Scenario 3 which addresses our concerns directly. It closely mirrors an existing federal law which can serve as precedent for the recommended approach.

B. Specific Legal Reform to Provide a Shield Function in Scenario 3

In Scenario 3, an individual owns an AV with a Level 4 feature which the owner wishes to enroll in a robotaxi ride hailing program. The program allows the owner to “loan” her AV to the robotaxi company so the AV can earn a return for the owner when she is not using it. While engaged for hire, the AV hits and kills a pedestrian in a crosswalk when the ADS fails to identify the pedestrian as a person.

Congress created protection for rental car companies as part of a federal highway bill in 2005 (the “Graves Amendment.”).13349 U.S.C. § 30106 (addressing rented or leased motor vehicle safety and responsibility) (2005).
This law protects rental car companies against the imposition of vicarious liability based merely on their ownership of the vehicle. It provides that rental car and leasing companies cannot be held liable for an accident simply because they own the motor vehicle or lease it to a third-party customer.

Graves Amendment protection, however, currently only extends to an owner, or an affiliate of an owner, who is “engaged in the trade or business of renting or leasing motor vehicles.”134Id.
As currently drafted, the Graves Amendment would not clearly protect the AV owner in Scenario 3 who arranges for a robotaxi company or delivery service to occasionally use her vehicle. It is reasonable to clearly address liability in this scenario ahead of deployment of L4 production AVs to consumers.

Just as some owners currently use their personal motor vehicles in a ride hailing service to make extra money, we can foresee owners of L4 production AVs maximizing their investment by letting their AVs to ride hailing businesses which offer a “driverless” option (which some customers may prefer). Vicarious liability protection should extend in these cases without the need to prove that one letting the AV is “engaged in the trade or business”—a phrase which may not cover occasional letting of the L4 AV. Because this law reform would take place at the federal level, it would create uniform treatment throughout the United States. The precedent is the Graves Amendment which, as federal law, pre-empts state vicarious liability laws. The following is an example of language such an amendment might use to cover Scenario 3:

An owner of a fully automated vehicle that rents or leases the vehicle to any person shall not be liable under the law of any State or political subdivision thereof, merely by reason of being the owner of the fully automated vehicle, for harm to persons or property that results or arises out of the use, operation, or possession of the vehicle during the period of the rental or lease, regardless of whether the rental or lease takes place as part of a trade or business conducted by the owner.

This type of protection is relevant in states like Florida in which the common law creates liability under a “dangerous instrumentality doctrine.”135Emerson v. Lambert, 374 So.3d at 760 (Fla. 2023) (discussing Florida’s dangerous instrumentality doctrine and its application to motor vehicles).
This doctrine creates vicarious liability for operation of potentially dangerous things—which has been interpreted to cover motor vehicles. The theory behind such a law is that some assets, such as motor vehicles, have the potential to be so dangerous that the law should not allow the legal owner to avoid all liability to an involuntary creditor injured by the vehicle they allowed someone else to drive.

The laws of many states provide that liability requires proof that the owner was negligent in giving permission to use the vehicle. This doctrine is particularly concerning in Florida because Florida does not require proof of negligence.136Id. at 761 (not mentioning negligent entrustment as an element of a claim). Florida does, however, limit the amount of any award against the title holder to the vehicle. See Vargas v. Enterprise Leasing Co., 60 So.3d 1037, 1042 (Fla. 2011) (discussing Florida’s statutory cap on vicarious liability for permissive users of motor vehicles under Fla. Stat. §324.021(9)(b)).

C. Specific Legal Reform to Provide a Shield Function in Scenario 1

In Scenario 1, a robotaxi has a “big red button” or urgent egress feature which gives a rider the power to terminate an itinerary, thus causing the robotaxi to initiate a minimal risk condition (MRC) maneuver. While traveling one evening, a passenger terminates her itinerary using this feature. The robotaxi initiates the MRC maneuver but fails to identify parked emergency vehicles with flashing lights on the side of the highway. The robotaxi hits and kills a first responder. In version A, the passenger is sober. In version B, the passenger is intoxicated. In neither case does a consumer expect to incur liability.

In this case, we think the best policy outcome is a federal law which clearly states that, for purposes of liability, no occupant of an automated vehicle shall be deemed a driver, an operator, or in control, of the HAV simply because the automated vehicle has a feature which allows the occupant to terminate an itinerary, whether the occupant uses that feature or not. The following is an example of language such an amendment might employ for Scenario 1:

An occupant of a fully automated vehicle equipped with a feature that allows the occupant to terminate an itinerary or cause the automated vehicle to transition to a minimal risk condition shall not be liable under the law of any State or political subdivision thereof, merely by reason of being an occupant of a fully automated vehicle that is equipped with this feature, for harm to persons or property that results or arises out of the use, operation, engagement or hire of the fully automated vehicle during such period of use, operation, engagement or hire.

The above suggested language is modest in scope insofar as it merely protects the occupant against liability resulting from the mere presence of the feature. It does not purport to address liability that may result if the occupant negligently or recklessly activates or otherwise uses the feature—a matter we would leave for traditional determination by a court.

D. Specific Legal Reform to Provide a Shield Function in Scenario 2

In Scenario 2, an individual owns an AV with a Level 4 feature which allows the owner/occupant to change operation from fully autonomous mode to manual mode in the middle of an itinerary. One use case desired by the owner is to direct the AV to transport her safely home after drinking alcoholic beverages at a party. While en route home after a party, the ADS in the AV fails to identify a stop sign, causing an accident with another vehicle in cross traffic, killing a motorist.

While we think the proper legal treatment of Scenarios 1 and 3 is straightforward, the proper treatment of Scenario 2 presents some important policy questions.

We think it clear that, if a fully automated vehicle has a “drunk driving” mode which prevents the occupant from assuming control of the DDT, then no liability should result. What is less clear is whether the occupant should have no liability if, in fact, the occupant never assumes control even though she had the power to do so while impaired. The law might adopt such an approach to deter intoxicated persons from placing themselves in the position to make a bad decision to assume control.

Our inclination is to impose liability only if the occupant actually exercises control and harm results because this approach makes the design process simpler and gives the manufacturer more freedom to offer features without compromising just and fair compensation for an injured party. This is not a case like a Level 2 ADAS or Level 3 HAV where the occupant must be ready to assume control to maintain safe operation as part of the design concept. We impose liability in those cases because the law should discourage use by persons who cannot responsibly assume control of the vehicle as needed to satisfy the design concept for providing a safe journey. The Level 4 safety concept does not require a fallback ready user. The following is an example of language such an amendment might employ for Scenario 2:

An occupant of a fully automated vehicle equipped with a feature that allows the occupant to assume control and perform the DDT during the course of an itinerary shall not be liable under the law of any State or political subdivision thereof, merely by reason of being an occupant in a fully automated vehicle which gives the occupant the capability to transition from autonomous mode to manual mode at the election of the occupant, for harm to persons or property that results or arises out of the use, operation, or engagement of the fully automated vehicle if the occupant does not exercise this power.

While we prefer the solution that gives the most freedom to provide features while still performing the Shield Function, we think it more important that a uniform federal law provides clarity on this issue of concern. If policy makers determine that extending the protection of the Shield Function is best served by requiring engineers to develop a “drunk driving” mode, we would consider that choice a reasonable exercise of legislative judgment.

Conclusion

On our analysis, the AV industry has sometimes failed to pursue a strategy best designed to create public trust – even though the industry correctly identifies the importance of building trust to the long-term success of AV technology.137See William H. Widen & Philip Koopman, Autonomous Vehicle Regulation and Trust, 27 UCLA J.L. & Tech. 169 (2022). For example, while advocating for deployment of AVs, the industry sometimes wrongly claims (or implies) that 94% of serious crashes are due to human error. The industry also fails to mention that, while deployment of AVs may eliminate certain types of human errors (texting, drunk driving), AV deployments introduce different kinds of ADS errors. See Hope Yen & Tom Krisher, NTSB chief to fed agency: Stop using misleading statistics, Assoc. Press (Jan. 18, 2022, 3:45 PM), https://apnews.com/article/coronavirus-pandemic-business-health-national-transportation-safety-board-transportation-safety-6638c79c519c28bb4d810d06789a2717 (describing a request that NHTSA stop misusing a report referencing the 94% statistic).
If the public distrusts both the product and the people who make it, the public will not long embrace a complicated product which can cause serious injury and death when vehicle crashes and other mishaps inevitably occur. We believe the shortcomings in the industry's approach to developing trust stem from a general adversarial posture towards law, regulation and disclosure. In our scenarios, however, we think law and disclosure are clearly positive factors contributing to the success of vehicle automation technology.

At least with respect to the Shield Function, all parties should agree that clear legislative action is appropriate to reduce the burden which current law places on AV design and the uncertainty it creates for consumers.


* William H. Widen is a Professor at the University of Miami School of Law, Coral Gables, Florida. His current research focuses on law and regulation relating to autonomous vehicles. Prof. Widen’s work was supported in part by a Provost’s research award from the University of Miami.

** Marilyn C. Wolf is a Professor in the School of Computing at the University of Nebraska-Lincoln. Dr. Wolf led the creation of this school and was its first director. Dr. Wolf’s work was supported in part by the NSF grant 2002854.
