Accounting for Spatial Effects and Social Norms in Making Algorithmic Law: Insights from and Applications in Urban Mobility


This Article examines a prominent idea in the law and technology literature: that algorithms and big data can be used to make law dynamic and personalized. As currently envisioned by legal scholars, “algorithmic law” entails laws that adjust in real time to changing conditions and vary across individuals, improving welfare by tailoring legal rules and standards to personal characteristics.

This Article argues that this vision of algorithmic law is incomplete—and often counterproductive. Existing proposals treat personalization as a function of individual attributes alone, overlooking the fact that the effects of individual behavior are fundamentally interactive. Individual behavior is shaped by spatial spillovers, social norms, enforcement costs, externalities, and feedback effects in complex systems. As a result, optimizing the law at the level of isolated individuals does not always produce welfare gains at the aggregate level, and may fail even on its own terms. Urban mobility provides illustrative examples of how personalized algorithmic law can misfire when interactive effects are ignored. These examples suggest that the central challenge of algorithmic law is not simply personalization, but coordination.

To address this problem, the Article proposes an alternative method for designing algorithmic law that treats all activity as interdependent. Rather than generating separate laws for each individual, lawmakers would construct coordinated sets of laws applied simultaneously across populations, designed to account for spatial and social interactions and to maximize social welfare system-wide. Properly accounting for interaction effects often leads not to greater legal differentiation, but to renewed justification for uniform laws in many settings.

Introduction

Legal scholars have proposed using computers and big data to make the law dynamic and customized for each person.1See Timothy D. Robinson, A Normative Evaluation of Algorithmic Law, 23 Auckl. Univ. L. Rev. 293, 294 (2017) (describing the two key characteristics of algorithmic law).

* Visiting Assistant Professor of Law and Ribstein Fellow, University of Illinois College of Law. E-mail address: gaoj@illinois.edu. I would like to thank Eric Johnson, Amitai Aviram, Jennifer Robbennolt, Robert Lawless, Arden Rowell, Michael Gerrard, and Jason Jackson for very helpful comments. I would like to give special thanks to Jinhua Zhao for inspiring me to explore the relationship between law and algorithms. An earlier draft of this article was included in my PhD dissertation accepted at the Massachusetts Institute of Technology.
Algorithms are already used to make the law dynamic in many instances. For example, some cities have begun to adjust parking rates in response to changes in demand, making parking more available in areas and times with higher demand.2For example, San Francisco has enacted the SFPark program, which is a demand-responsive parking pricing program. See SFMTA, Demand-Responsive Parking Pricing, https://www.sfmta.com/demand-responsive-parking-pricing (last visited Jan. 13, 2026).
Using algorithms to personalize the law could make the law even more effective. Consider the potential benefits of personalized law when purchasing a laptop. Under current law, every consumer has the same withdrawal period. Under a personalized scheme, each consumer would instead have a withdrawal period tailored to her past purchase behavior and psychological profile.3See Omri Ben-Shahar & Ariel Porat, Personalized Law: Different Rules for Different People 71–72 (2021).
A consumer who rarely withdraws from her purchases could have a shorter withdrawal window and purchase the laptop for a slightly lower price. Conversely, if a consumer’s past purchase behavior suggests that she is likely to need a longer withdrawal period, a personalized rule could make one available for a slightly higher price.

Here is another example that illustrates the advantages of personalized law. Under current law, if someone dies without a will, his estate is distributed to his heirs based on default intestate rules.4See Ariel Porat & Lior Strahilevitz, Personalizing Default Rules and Disclosure with Big Data, 112 Mich. L. Rev. 1417, 1419–1420 (2014).
These rules are currently the same for everyone even though their actual preferences for distributing their property differ. Since many people die without wills, current intestacy rules often fail to distribute property in a manner consistent with what the decedent would have intended. Tailoring intestacy rules to an individual’s characteristics could produce distributions that more closely approximate the outcome that person would have chosen had they executed a will.5Throughout this Article, the terms “tailoring,” “customizing,” and “personalizing” refer to the same idea—making the law granular at the individual level.

Personalizing the law using algorithms is a promising idea. The law can protect our rights more effectively and fairly by adapting to changing environments and to our unique personal situations. However, previous conceptions of algorithmic law have some troubling shortcomings. Consider the following example. Imagine that instead of imposing the same speed limit (e.g., 55 mph) on all drivers throughout a particular roadway, an algorithmic speed limit would vary based on road conditions, such as weather and road curvature, and would vary for each driver depending on the driver’s risk profile. Under such a regime, a driver who has ten years of driving experience and attends accident prevention courses annually would be subject to a 60 mph speed limit on a clear day. In contrast, a driver who has held a driving license for less than a year would be subject to a 50 mph speed limit on a clear day. During inclement weather, the former driver would be subject to a 51 mph speed limit, and the latter driver to a 41 mph speed limit. The difference between the speed limit on a clear day and the speed limit during inclement weather is already embedded in the law,6See, e.g., WA Rev Code § 46.61.400 (3) (2023), providing:

The driver of every vehicle shall, consistent with the requirements of subsection (1) of this section, drive at an appropriate reduced speed when approaching and crossing an intersection or railway grade crossing, when approaching and going around a curve, when approaching a hill crest, when traveling upon any narrow or winding roadway, and when special hazard exists with respect to pedestrians or other traffic or by reason of weather or highway conditions.

See also ID Code § 49-654 (1) (2023), providing that:

No person shall drive a vehicle at a speed greater than is reasonable and prudent under the conditions and having regard to the actual and potential hazards then existing. Consistent with the foregoing, every person shall drive at a safe and appropriate speed when approaching and crossing an intersection or railroad grade crossing, when approaching and going around a curve, when approaching a hillcrest, when traveling upon any narrow or winding highway, and when special hazards exist with respect to pedestrians or other traffic or by reason of weather or highway conditions.
and using algorithms to calculate what precisely constitutes a reasonable speed limit under different weather conditions (for all drivers) would indeed incentivize safer driving behavior. However, the personalization aspect of algorithmic speed limits would be very problematic.

If speed limits were personalized and all drivers obeyed their personalized limits, driving speeds would vary more. Greater variance in driving speed would lead to more attempts to overtake and, thus, more frequent collisions; this relationship has found support in statistical studies.7See generally Charles A. Lave, Speeding, Coordination, and the 55 MPH Limit, 75 Am. Econ. Rev. 1159 (1985).
There would be more accidents when half of the drivers drive at 50 mph and the other half drive at 60 mph than when nearly all drivers drive at 55 mph. Another possible result of personalized speed limits, given that drivers rarely comply with speed limits and that social norms strongly influence the choice of driving speed,8See, e.g., Henriette Wallén Warner & Lars Åberg, Drivers’ Decision to Speed: A Study Inspired by the Theory of Planned Behavior, 9 Transp. Res. Part F Traffic Psychol. Behav. 427 (2006); Mauricio Leandro, Young Drivers and Speed Selection: A Model Guided by the Theory of Planned Behavior, 15 Transp. Res. Part F Traffic Psychol. Behav. 219 (2012).
is that drivers subjected to lower speed limits would drive at the same speed as drivers subjected to higher speed limits. In other words, a driver subject to a 50 mph personalized speed limit would observe that other drivers drive at 55 mph or 60 mph and choose to drive at 55 mph or 60 mph. The result is that the aggregate driving speed under a personalized speed limit regime would be higher than that under a uniform speed limit regime, and more accidents would occur due to the higher aggregate speed.

The speed limit example has appeared in several proposals by other legal scholars for using algorithms and big data to make the law dynamic and personalized. The dynamic component of algorithmic law in these proposals would indeed increase the effectiveness of the law. The personalization component, however, focuses on maximizing the welfare associated with each individual based solely on individual characteristics. These proposals assume that maximizing individual welfare by accounting only for individual characteristics would also maximize welfare at the social level. As the example above shows—and as this Article argues—that assumption is mistaken. Individual-level optimization ignores the interactive effects of human behavior.

Lawmakers should instead design algorithmic law to maximize social welfare by accounting for these interactions. Rather than optimizing rules for each person in isolation, they should evaluate system-wide combinations of rules to identify those that best advance collective welfare. Urban mobility examples illustrate that although algorithmic law can be dynamic and individualized, it must also be coordinated to achieve social welfare goals. Such coordination requires accounting for the effects of legal differentiation, including spatial impacts, social norms, enforcement costs, externalities, and system predictability. Although it is possible to consider the idea of using algorithms to personalize the law from deontological or legal perspectives, this Article presents a utilitarian critique and offers a utilitarian solution.

In the following sections of this Article, I argue that lawmaking is a process that must account for the spatial and social effects of individual behavior, and I explain how coordination should be integrated into the process of making algorithmic laws to achieve aggregate optimality. I draw my examples mainly from rules and regulations governing urban mobility, because the spatial and social consequences of human movement are particularly salient and thus well suited to illustrating the interactive effects of individual behavior and the need for coordination.

Part I shows that the use of algorithms to make law in urban mobility management has worked because these algorithms are dynamic, spatially coordinated, and aggregate-welfare-maximizing. The current use of algorithms to manage signals and signs dynamically (traffic-responsive light signals, ramp metering, junction control, merge control, lane reversals, electronic tolls, and demand-responsive parking rates) yields combinations of system-optimal commands that simultaneously account for and govern the behavior of all individuals involved. By adjusting the law in response to real-time information on urban mobility and coordinating the behaviors of all road users, traffic managers have made driving and parking less dangerous, time-consuming, costly, and environmentally harmful. Coordination is an essential feature of each use of algorithms to manage urban mobility, and coordination entails accounting for the behavior of all road users, making more effective use of space, and improving aggregate utility.

Part II explains that extending algorithmic law to account for personal characteristics must preserve the law’s spatial coordination and aggregate welfare maximization properties. Part II uses speed limits to illustrate that using algorithms to make the law would be problematic if the result is differentiated speed limits. I explain that differentiated speed limits would be more dangerous than the current uniform speed limit when considering the spatial effects, social norms, enforcement costs, externalities, and system unpredictability. I delineate a three-step process for making the law algorithmic to maximize social welfare, incorporating interactive spatial and social effects and coordinating individual behavior.

Part III applies the three-step algorithmic law-making process to various types of laws and regulations in urban mobility and considers the likely outcomes of applying the coordination process. Rather than maximizing welfare at the individual level, this three-step process incorporates interactive effects and produces sets of coordinated laws that maximize aggregate welfare. These applications show that coordination often requires some degree of uniformity in the law. Although this Article demonstrates the importance of coordination through the lens of urban mobility, the concept of coordination also applies to many other areas. I mention some examples of coordination that require uniformity in areas outside of urban mobility.

Part IV examines instances when using algorithmic law would be inappropriate or impossible. The challenges of designing autonomous vehicle (AV) steering algorithms show that using algorithmic law to maximize social utility would be impossible when algorithms must confront difficult normative questions. Algorithmic law might also fail to establish legitimacy and thus fail to obtain compliance with the law.

The Article concludes with a summary of the argument: for algorithmic law to achieve its utilitarian purpose, it should be made through a process that maximizes social utility, and coordination that accounts for the interactive effects among individuals is an essential component of that process. There are instances in which implementing algorithmic law would not be possible or appropriate. Lawmakers should carefully weigh the costs and benefits of using algorithms in lawmaking in each scenario.

I. Current Uses of Algorithmic Regulation in Urban Mobility

Rules and regulations that manage urban mobility are coordination mechanisms intended to protect and promote the welfare of road users. Municipal transportation administrators have recently begun to incorporate algorithms into urban mobility management. In each instance, the algorithm takes input data from all road users involved in a scenario, predicts their movements, and computes the optimal set of coordinated commands for those road users to achieve some system-level utilitarian objective, such as minimizing delay, maximizing traffic flow, or minimizing total driving time. The following applications of algorithms to traffic management illustrate that spatial coordination is a key feature of laws crafted by algorithms. Coordination, as the following examples show, requires considering the spatial effects of each individual’s behavior and issuing directions to each individual so as to reduce the incidence of collisions, facilitate the flow of movement, and/or redirect resources to more efficient use.

A. Smart Traffic Lights

Algorithms have recently begun to be utilized to manage traffic lights to improve the efficiency of urban mobility. Traffic lights are currently either pre-timed, semi-actuated, or fully actuated.9See Federal Highway Administration, Traffic Signal Timing Manual: Chapter 5 – Office of Operations, Traffic Signal Timing Manual (2021), https://ops.fhwa.dot.gov/publications/fhwahop08024/chapter5.htm#5.2 (last visited Jan. 13, 2026) (explaining various types of traffic signals and the way in which they operate).
Most traffic lights used today are pre-timed with fixed light cycles determined in advance by traffic engineers based on past traffic data. In contrast, actuated traffic lights detect car movement to change light intervals based on specific parameters.10See id.
The movement of cyclists and pedestrians can also be incorporated into calculating the optimal traffic light cycle to maximize traffic flow and reduce waiting time.11See Paul Smits, Smart bike path helps to organize future accessibility of cities, Innovation Origins (2022), https://innovationorigins.com/en/smart-bike-path-helps-to-organize-future-accessibility-of-cities (last visited Jan. 13, 2026) (explaining that PlasticRoads can detect the movement of pedestrians and cyclists, and incorporate this data into traffic light determinations).
Adaptive traffic control systems use algorithms to optimize light cycles based on real-time traffic flow rather than pre-determined schedules and movements. Adaptive traffic control reduces the time when cars have to wait at intersections while no cars are crossing, alleviating congestion and reducing time wasted waiting at intersections. Several cities and regions in the United States have installed adaptive traffic control and observed significant improvements in traffic flow.12Using the Scalable Urban Traffic Control (SURTRAC) system, a system of smart traffic lights, Pittsburgh “has reduced travel time by 25%, braking by 30% and idling by more than 40%” in places where the SURTRAC system was installed. Jackie Snow, This AI Traffic System in Pittsburgh Has Reduced Travel Time by 25%, Smart Cities Dive (Jul. 20, 2017), https://www.smartcitiesdive.com/news/this-ai-traffic-system-in-pittsburgh-has-reduced-travel-time-by-25/447494. Another example of using algorithms to manage traffic lights is the Sydney Coordinated Adaptive Traffic System (SCATS). SCATS is used in “more than 1,000 intersections in the US,” including in Chula Vista, CA; Menlo Park, CA; Santa Rosa, CA; Atlanta, GA; and the Meadowlands, NJ. New Jersey Meadowlands Commission Deploys TransCore’s Adaptive Signal Control System, TransCore, https://transcore.com/new-jersey-meadowlands-commission-deploys-transcores-adaptive-signal-control-system.html (last visited Jan. 13, 2026). In Menlo Park, CA, “travel time was reduced up to 25 percent and delays were reduced by an extraordinary 70 percent” by SCATS. Metro Atlanta’s Cobb County Doubles Use of SCATS Adaptive Signal Control System, Awards TransCore Deployment Contract, TransCore, https://transcore.com/metro-atlantas-cobb-county-doubles-use-of-scats-adaptive-signal-control-system-awards-transcore-deployment-contract.html (last visited Jan. 13, 2026).

A smart traffic light is a classic illustration of how algorithms facilitate the coordination function of law. Traffic lights in general work by coordinating the movement of vehicles, requiring some vehicles to stop so that others may pass: for vehicles traveling in one direction to pass through the intersection, vehicles traveling orthogonally must stop and clear it. The traffic light issues simultaneous commands to vehicles from all approaching directions to coordinate movement through the intersection; if those signals were not coordinated, vehicles would collide. Of course, smart traffic lights facilitate coordination beyond mere collision avoidance (and, as I explain later in the Article, coordination consists of more than accounting for spatial effects). Traffic lights are also designed to make travel faster and avoid wasted time. Incorporating algorithms into traffic light control enhances this coordination by reducing the time spent waiting at intersections when no other vehicles are passing through. A smart traffic light takes as input information about all vehicles approaching the intersection and calculates the signal cycle that maximizes total traffic flow (or another system-level objective chosen by the traffic manager). It is worth pointing out that traffic lights do not merely aim at welfare maximization at the social level; it is also in each driver’s interest for traffic lights to coordinate traffic. For each driver, some coordination at the intersection is preferable to none: everyone is worse off when there is a collision.
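To make the coordination logic concrete, the following minimal sketch illustrates one simple way an adaptive controller might divide a signal cycle between two conflicting phases in proportion to detected queues. It is a stylized illustration only; deployed systems such as SURTRAC and SCATS use far more sophisticated optimization, and the function names and parameter values here are hypothetical.

```python
# Illustrative sketch (not any deployed system such as SURTRAC or SCATS):
# an adaptive controller that splits a fixed cycle's green time between two
# conflicting phases in proportion to the queues detected on each approach.

MIN_GREEN = 10   # seconds; floor so no approach is starved
CYCLE = 90       # seconds of green time to divide between the two phases

def green_split(queue_ns, queue_ew, cycle=CYCLE, min_green=MIN_GREEN):
    """Divide the cycle's green time between the north-south and
    east-west phases in proportion to detected queue lengths,
    subject to a minimum green for each phase."""
    total = queue_ns + queue_ew
    if total == 0:
        return cycle / 2, cycle / 2  # no demand: split evenly
    green_ns = cycle * queue_ns / total
    # Enforce the minimum green on both phases.
    green_ns = max(min_green, min(cycle - min_green, green_ns))
    return green_ns, cycle - green_ns

# A queue of 30 cars north-south against 10 east-west yields a 3:1 split,
# shortening the wait for the heavily loaded approach.
```

The key point of the sketch is that the controller's output is a single coordinated allocation: lengthening one phase necessarily shortens the other, so every vehicle's command is computed jointly rather than in isolation.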

B. Adaptive Ramp Metering

Traffic signals can also be installed on ramps leading to freeways as ramp meters to control the frequency of vehicles entering the highway.13See E.D. Arnold, Ramp Metering: A Review of the Literature 1 (1998), https://rosap.ntl.bts.gov/view/dot/19491 (last visited Jan. 13, 2026) (defining and explaining ramp metering).
Sometimes, many cars try to enter the freeway, particularly during peak hours. If too many vehicles enter the highway in a tightly packed platoon, traffic could decelerate or stop, causing congestion. Moreover, the entry of platoons of cars onto the freeway could cause accidents. Cars entering the highway could be separated by the ramp meters to break up the platoons and avoid disrupting traffic flow on the highway.14See Federal Highway Administration, About Ramp Metering, Freeway Management Program (Mar. 12, 2020), https://ops.fhwa.dot.gov/freewaymgmt/ramp_metering/about.htm.
Ramp meters have been found to increase driving speeds both on the metered motorway and on parallel roads: because metering reduces congestion on the motorway, fewer drivers divert onto parallel roads, improving conditions there as well.15See M. Papageorgiou, H. Hadj-Salem & F. Middelham, ALINEA Local Ramp Metering: Summary of Field Results, 1603 Transp. Res. Rec. 90, 96 (1997) (“Ramp metering reduces the recurrent congestion . . . and increases the mean speed on the motorway, along with a slight increase in the served demand. In this way, diversion from the motorway to the parallel arterial decreases, and consequently traffic conditions on the parallel arterial are ameliorated.”).
Ramp metering also benefits the environment by reducing fuel consumption and air pollution. Ramp metering may be predetermined or adaptive.16See Arnold, supra note 13, at 4–5.
Predetermined rates are calculated offline using historical data and could be fixed or variable. Adaptive rates are determined in real-time using sensors and algorithms in response to traffic conditions.17See Khaled Shaaban, Muhammad Asif Khan & Rida Hamila, Literature Review of Advancements in Adaptive Ramp Metering, 83 Procedia Comput. Sci. 203, 204 (2016) (“[A]daptive or traffic responsive ramp metering where variable metering rates are allocated to ramps in response to actual traffic conditions”).
Algorithms implementing adaptive ramp metering have been found to significantly increase the average speed on freeways and reduce congestion in both simulations and field tests.18One study in the Twin Cities area found that without ramp metering, traffic volume on the mainline would decrease by 14%, travel time would increase by 22%, and the number of crashes in the system would increase by 26%. See Cambridge Systematics, Inc., Twin Cities Ramp Meter Evaluation Executive Summary, 1, 7–9 (2001), https://rosap.ntl.bts.gov/view/dot/2765/dot_2765_DS1.pdf (last visited Jan. 13, 2026) (discussing the benefits of ramp metering). Other cities including Los Angeles, Seattle, Portland, and Kansas City have implemented adaptive ramp metering using algorithms such as System Wide Adaptive Ramp Metering (SWARM) and Fuzzy Logic Algorithm. Sarah Simpson, et al., ADOT System-Wide Ramp Metering Evaluation 26 (2013).
Moreover, ramp metering works more effectively when algorithms manage multiple ramps on a freeway simultaneously in coordination.

Adaptive ramp metering is another example of using algorithms as coordination mechanisms to attain collective benefits for all road users. Ramp metering uses algorithms to manage multiple vehicles, stopping some vehicles on ramps to enable others to travel more quickly on the freeway. On a fundamental level, a ramp meter, regardless of whether algorithms are used to improve its efficiency, manages the use of space; it provides coordinated signals so that some vehicles stop to allow other vehicles to pass. Without coordinated signals, vehicles would collide more often. Algorithms improve the coordination function of ramp meters by taking as input the positions and movements of multiple vehicles, calculating the optimal set of commands for those vehicles collectively, and issuing the commands to all of them simultaneously. The net travel time and the crash rate are reduced as a result of the coordinated commands.19See Ioannis Papamichail & Markos Papageorgiou, Traffic-Responsive Linked Ramp-Metering Control, 9 IEEE Trans. Intell. Transp. Syst. 111, 111 (2008) (“Coordinated ramp-metering strategies make use of the measurements from an entire region of the network to control all metered ramps included therein; they may be more efficient than the local ramp-metering strategies . . . “).
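The feedback logic of the ALINEA strategy cited above (supra note 15) can be sketched compactly: the metering rate is nudged up or down so that occupancy measured downstream of the ramp tracks a target value. The gain, target occupancy, and rate bounds below are illustrative placeholders rather than field-calibrated values.

```python
# Sketch of an ALINEA-style feedback law: r(k) = r(k-1) + K_R * (o_target - o(k)).
# The constants here are assumptions for illustration, not calibrated parameters.

K_R = 70.0          # regulator gain (veh/h per unit of occupancy error)
O_TARGET = 0.20     # desired downstream occupancy (fraction of time sensor is covered)
R_MIN, R_MAX = 200.0, 1800.0   # physical metering-rate bounds (veh/h)

def alinea_rate(prev_rate, measured_occupancy):
    """One control step: adjust the metering rate in proportion to the
    gap between target and measured downstream occupancy, then clamp
    the result to the meter's physical limits."""
    rate = prev_rate + K_R * (O_TARGET - measured_occupancy)
    return max(R_MIN, min(R_MAX, rate))

# When occupancy exceeds the target (congestion building), the rate falls,
# holding more cars on the ramp; when occupancy is below target, it rises.
```

Because the controller responds to conditions on the mainline rather than on the ramp alone, each ramp command reflects the state of the whole facility, which is precisely the coordination point made in the text.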

C. Dynamic Junction Control

Another traffic management strategy that facilitates the movement of vehicles from ramps to highways is dynamic junction control, which manages the use of lanes in areas where ramps merge into the mainline motorway. When the traffic volume on the mainline is relatively high, and the traffic volume on the ramp is relatively low, dynamic junction control signals allow all lanes on the mainline to be used. When the traffic volume on the mainline is relatively low, and the traffic volume on the ramp is relatively high, the rightmost lane on the mainline—onto which vehicles merge from the ramps—can be closed upstream of the ramp to facilitate the movement of vehicles onto the mainline.20Pennsylvania Department of Transportation, Dynamic Junction Control, Transform76.com, https://transform76.com/smart-corridor-initiatives/dynamic-junction-control (last visited Jan. 13, 2026).
Dynamic junction control reduces congestion, improves travel time reliability, and reduces accidents.21Dynamic Merge Control, Texas A&M Transportation Institute, https://policy.tti.tamu.edu/strategy/dynamic-merge-control (last visited Apr. 10, 2026). However, a driving simulator study found that dynamic junction control (referred to as dynamic merge control in some literature) only has a material effect on rural roads, where average speed is high, and has no effect on urban roads. See Nora Reinolsmann et al., Investigating the Impact of Dynamic Merge Control Strategies on Driving Behavior on Rural and Urban Expressways – A Driving Simulator Study, 65 Transp. Res. Part F Traffic Psychol. Behav. 469 (2019).
Dynamic junction control has been implemented in the State of Washington, reducing the frequency of accidents by as much as twenty-five percent.22Texas A&M Transportation Institute, supra note 21.

Dynamic junction control is a less direct coordination mechanism than other algorithmic traffic management techniques. Dynamic junction control does not issue simultaneous commands to multiple vehicles directly. Usually, only the vehicle closest to the junction (on the mainline) encounters flashing lights or signs requiring the vehicle to merge into another lane. However, dynamic junction control still has a spatial coordination function. The algorithm controlling the junction takes the movement and position of all affected vehicles (including vehicles merging into the mainline and vehicles upstream on the mainline) as inputs; it determines lane closures intended to optimize traffic flow for all vehicles in the system.23See Riccardo Scarinci & Benjamin Heydecker, Control Concepts for Facilitating Motorway On-Ramp Merging Using Intelligent Vehicles, 34 Transp. Rev. 775 (2014).
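The threshold logic described above can be expressed as a simple decision rule. The sketch below is hypothetical: the threshold values and function name are assumptions for illustration, not parameters of any deployed dynamic junction control system.

```python
# Hypothetical decision rule for dynamic junction control, following the
# description in the text: close the rightmost mainline lane upstream of
# the ramp when ramp demand is high relative to mainline demand.
# Threshold values are illustrative placeholders.

MAINLINE_LOW = 1200   # veh/h per lane below which mainline demand is "relatively low"
RAMP_HIGH = 600       # veh/h above which ramp demand is "relatively high"

def close_rightmost_lane(mainline_flow_per_lane, ramp_flow):
    """Return True if the rightmost mainline lane should be closed
    upstream of the merge to give ramp traffic room to enter."""
    return mainline_flow_per_lane < MAINLINE_LOW and ramp_flow > RAMP_HIGH
```

Even though only the vehicles nearest the junction see the lane-closure sign, the rule is evaluated over flows measured across both the mainline and the ramp, so the command embodies a system-level trade-off.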

D. Dynamic Merge Control

Sometimes, segments of traffic lanes must be closed for construction or maintenance. The merges necessitated by lane closures cause accidents and delays due to poor merge management.24See Hong Yang, et al., Work Zone Safety Analysis and Modeling: A State-of-the-Art Review, 16 Traffic Inj. Prev. 387 (2015).
Merging techniques can be categorized as early merging and late merging. Early merging entails notifying drivers to move out of the to-be-closed lane well ahead of the point where they must merge due to the lane closure; early merging works better when the traffic volume is low and the average speed is high.25American Traffic Safety Services Association, Guidance for the Use of Dynamic Lane Merging Strategies (2012), https://workzonesafety.org/publication/guidance-for-the-use-of-dynamic-lane-merging-strategies.
Late merging entails notifying drivers to stay in the to-be-closed lane almost to the point where they must merge due to the lane closure; late merging works better when the traffic volume is high and the average speed is low.26Id.
Dynamic merge control enables traffic managers to collect traffic flow data using sensors and use algorithms to process the data and determine when to switch between merge control techniques in response to traffic conditions.27Patrick T. McCoy & Geza Pesti, Dynamic Late Merge–Control Concept for Work Zones on Rural Interstate Highways, 1745 Transp. Res. Rec. 20 (2001).
A pilot late dynamic merge control system in Minnesota shortened queues by more than one-third,28D. Taavola, J. Jackels & T. Swenson, Dynamic Late Merge System Evaluation: Initial Deployment on U.S. Route 10 Summer 2003 (2004), https://trid.trb.org/view/704199.
and a late dynamic merge control system on I-94 in Michigan was found to have produced $5 per hour in travel time savings and significant savings in gasoline consumption.29Lia F. Grillo, Tapan K. Datta & Catherine Hartner, Dynamic Late Lane Merge System at Freeway Construction Work Zones, 2055 Transp. Res. Rec. 3 (2008).

Like dynamic junction control, dynamic merge works slightly differently from other traffic management strategies as a coordination mechanism. Usually, only the vehicle closest to the merge encounters flashing lights or signs advising it to merge into another lane. Dynamic merge does not command multiple vehicles simultaneously to reduce collisions. Nevertheless, the algorithm facilitating dynamic merging manages the use of road space to minimize accidents and facilitate traffic flow. The algorithm takes the movement and position of all affected vehicles as inputs and determines the merge strategy that optimizes traffic flow for all vehicles and improves safety for all vehicles.30See, e.g., id. at 5.
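The switch between early and late merging described above can be sketched as a simple rule keyed to measured volume and speed. The thresholds below are hypothetical; deployed systems, such as the Minnesota pilot cited above, calibrate the switching criteria from field data.

```python
# Sketch of dynamic merge control switching logic: early merge when traffic
# is light and fast, late ("zipper") merge when traffic is heavy and slow.
# Threshold values are illustrative assumptions, not field-calibrated.

VOLUME_THRESHOLD = 1500   # veh/h approaching the work zone
SPEED_THRESHOLD = 40.0    # mph average approach speed

def merge_strategy(volume, avg_speed):
    """Choose between 'early' and 'late' merge control for a lane closure."""
    if volume >= VOLUME_THRESHOLD and avg_speed <= SPEED_THRESHOLD:
        return "late"    # congested: use both lanes up to the merge point
    return "early"       # free-flowing: clear the closing lane well upstream
```

The rule illustrates why the same work zone needs different commands at different times: the welfare-maximizing instruction to any one driver depends on the aggregate state of traffic, not on that driver's characteristics.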

E. Dynamic Lane Reversal

Another congestion reduction strategy often managed by algorithms is reversing the traffic flow of underutilized lanes to increase the capacity of congested roads in the opposite direction.31See Matthew Hausknecht, et al., Dynamic lane reversal in traffic management, in 2011 14th International IEEE Conference on Intelligent Transportation Systems (ITSC) 1929, 1929 (2011) (explaining the way in which dynamic lane reversal works).
Like dynamic junction control, lane reversals involve reducing collisions and reallocating road space for more efficient use. Lane reversal can be used to relieve congestion during peak hours, construction periods, spectator sporting events,32See Richard Arcand, Traffic Control Plans for the July 21st NASCAR Race at New Hampshire Motor Speedway in Loudon, New Hampshire Department of Transportation (2019), https://www.nh.gov/dot/media/nr2019/20190709-nascar-loudon.htm [https://web.archive.org/web/20230806225526/https://www.nh.gov/dot/media/nr2019/20190709-nascar-loudon.htm] (announcing parts of NH 106 corridor will have lanes converted to reverse traffic to accommodate the spectators attending the NASCAR race).
and evacuations during emergencies such as hurricanes.33See Billy M. Williams, et al., Simulation and Analysis of Freeway Lane Reversal for Coastal Hurricane Evacuation, 133 J. Urban Plan. Dev. 61 (2007).
For example, during emergencies, roads leading to disaster areas are likely to be empty, while roads leading away from disaster areas are likely to be congested; evacuation planners may reverse the flow of traffic on road segments leading to the disaster so that vehicles moving away from the disaster area may use the empty road segments (originally designated for vehicles moving toward the disaster area) and thus relieve congestion.34See Urmila Pyakurel, et al., Efficient Dynamic Flow Algorithms for Evacuation Planning Problems with Partial Lane Reversal, 7 Mathematics 993 (2019).
Lane reversals can be indicated using overhead traffic lights signaling the direction of traffic flow or zipper machines to move the traffic barriers dividing traffic flowing in opposite directions.35See Hausknecht, et al., supra note 31.
Lane reversals may be planned before the event or implemented dynamically in response to real-time traffic data using algorithms.36See, e.g., id.; Williams, et al., supra note 33; Quanlu Fu, Ye Tian & Jian Sun, Modeling and Simulation of Dynamic Lane Reversal Using a Cell Transmission Model, 26 J. Intell. Transp. Syst. 717 (2022).
The adoption of AVs in the future will make lane reversals easier to implement and, therefore, even more commonplace.37See Fu, Tian & Sun, supra note 36.

Simulations have shown that dynamic lane reversal can significantly alleviate congestion, reducing travel times by as much as forty percent.38Salomón Wollenstein-Betech, et al., Planning Strategies for Lane Reversals in Transportation Networks, in 2021 IEEE International Intelligent Transportation Systems Conference (ITSC) 2131, 2135 (2021).
It can also reduce the total evacuation time during emergency evacuations by as much as sixty-nine percent.39Chi Xie, Dung-Ying Lin & S. Travis Waller, A Dynamic Evacuation Network Optimization Problem with Lane Reversal and Crossing Elimination Strategies, 46 Transp. Res. Part E Logist. Transp. Rev. 295, 310 (2010).
Many bridges and tunnels in the United States have adopted reversible lanes, including the Golden Gate Bridge near San Francisco,40Traffic Management – Bridge Operations, Golden Gate Bridge Highway & Transportation District, https://www.goldengate.org/bridge/bridge-operations/traffic-management (last visited Jan. 13, 2026).
the Lincoln Tunnel near New York City,41Port Authority NY NJ, Lincoln Tunnel Exclusive Bus Lanes, https://www.panynj.gov/bridges-tunnels/en/lincoln-tunnel/xbl.html (last visited Jan. 13, 2026).
and the Chesapeake Bay Bridge near Annapolis, Maryland.42Mike Murillo, New Lane Closing Gates Are Ready for Prime Time on Chesapeake Bay Bridge, WTOP News (Dec. 19, 2022), https://wtop.com/maryland/2022/12/new-lane-closing-gates-are-ready-for-prime-time-on-the-chesapeake-bay-bridge.

Dynamic lane reversal improves aggregate welfare and the welfare of most individual road users through spatial coordination, although its coordination works slightly differently from that of a traffic light: lane reversal does not necessarily stop some vehicles to allow other vehicles to pass. Nevertheless, the signals and signs used to implement lane reversal coordinate the movement of vehicles in multiple directions, not merely one vehicle or vehicles traveling in one direction.43See Fu, Tian & Sun, supra note 36.
Without coordinated signals and signs, vehicles would collide head-on during lane reversals. Moreover, without the incorporation of traffic data in both directions, lane reversals would not be able to optimize traffic flow. The algorithm making lane reversals accounts for all vehicles in the system and calculates a lane reversal schedule that optimizes traffic flow for the system rather than for individual vehicles.44See, e.g., id.
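The system-level optimization described here can be made concrete with a small sketch. All numbers below (lane counts, per-lane capacity, demand levels) are hypothetical, and the enumeration approach is a simplification of the cell-transmission and network-flow models cited above; the point is only that the allocation is chosen by comparing outcomes across all vehicles in both directions, not for any single vehicle.

```python
# Illustrative lane-reversal allocation: choose how many of the total lanes
# to assign to each direction so that the worse volume-to-capacity ratio
# across BOTH directions is minimized. Demands are in vehicles per hour.

def allocate_lanes(demand_ab, demand_ba, total_lanes=6, lane_capacity=1800.0):
    best = None
    for lanes_ab in range(1, total_lanes):       # keep at least one lane each way
        lanes_ba = total_lanes - lanes_ab
        worst_vc = max(demand_ab / (lanes_ab * lane_capacity),
                       demand_ba / (lanes_ba * lane_capacity))
        if best is None or worst_vc < best[0]:
            best = (worst_vc, lanes_ab, lanes_ba)
    return best[1], best[2]

# Evacuation-style demand: 7,000 veh/h outbound versus 500 veh/h inbound.
print(allocate_lanes(7000, 500))  # -> (5, 1): most lanes reversed outbound
```

Because the objective is evaluated over both directions at once, the algorithm will reverse lanes against the lightly used direction even though that makes those few inbound drivers slightly worse off, which is precisely the system-optimizing (rather than individual-optimizing) character of the mechanism.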

F. Dynamic Electronic Tolling

Dynamic electronic tolling is a form of congestion pricing. A fee is imposed on drivers to discourage them from driving on the road, thereby reducing congestion (as well as other externalities such as pollution). Dynamic tolling entails varying the fee in response to real-time traffic conditions—increasing the fee when demand is high, particularly during peak hours, and decreasing the fee when demand is low.45Federal Highway Administration, Congestion Pricing — A Primer: Overview – Frequently Asked Questions – FHWA Operations, Congestion Pricing — A Primer: Overview (Mar. 25, 2021), https://ops.fhwa.dot.gov/publications/fhwahop08039/cp_prim1_08.htm.
Traffic volume is measured, and the toll adjusts in real-time using algorithms to keep traffic flowing at high speeds.46See Haipeng Chen, et al., DyETC: Dynamic Electronic Toll Collection for Traffic Congestion Alleviation, 32 Proc. AAAI Conf. Artif. Intell. (2018), https://ojs.aaai.org/index.php/AAAI/article/view/11337.

Numerous states have begun using dynamic electronic tolling to reduce congestion. Northern Virginia, for example, has implemented dynamic electronic tolling on I-66. The toll adjusts every six minutes in response to changing demand and traffic volume to maintain an average vehicle speed of 55 mph. Dynamic tolling with discrete time intervals and rates has also been implemented in other locations, such as Houston and Orange County in California.47Texas A&M Transportation Institute, Variable Pricing, https://static.tti.tamu.edu/tti.tamu.edu/documents/policy/congestion-mitigation/variable-pricing.pdf (last accessed Apr. 10, 2026).

Although dynamic electronic tolling operates as an indirect coordination mechanism, shaping driver behavior through incentives rather than explicit commands, it nonetheless fulfills the same coordinating role. Without tolling there is inefficiency because some road users value the road space more than others do but cannot purchase the right to use it from those who value it less. Dynamic tolling provides the market mechanism that enables this exchange and solves this coordination problem. The outcome of coordination is a reduction in the aggregate net cost of travel and in the net cost of travel for most individual drivers. The algorithm takes the traffic data of all vehicles in the system as input and calculates tolls that reduce the total travel time of all vehicles in the system, not that of any individual vehicle.48See Chen, et al., supra note 46.
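The feedback logic of speed-targeting dynamic tolls, such as the 55 mph target on I-66, can be sketched as a simple proportional controller. The gain, bounds, and function name below are hypothetical; the actual I-66 algorithm is proprietary and certainly more sophisticated, but the core loop (measure speed, compare to target, nudge the price) is the same.

```python
# Hedged sketch of feedback-based dynamic tolling (all parameters hypothetical).
# The toll rises when measured speed falls below the target (demand too high)
# and falls when speed is comfortably above it (road underused).

def update_toll(current_toll, measured_speed_mph, target_mph=55.0,
                gain=0.25, min_toll=0.50, max_toll=30.0):
    """Proportional adjustment: $0.25 (the gain) per mph of speed shortfall."""
    error = target_mph - measured_speed_mph
    new_toll = current_toll + gain * error
    return max(min_toll, min(max_toll, new_toll))

toll = 5.00
for speed in [48, 50, 53, 56, 58]:   # speeds recover as the toll deters entry
    toll = update_toll(toll, speed)
    print(f"measured {speed} mph -> toll ${toll:.2f}")
```

Note that the controller never references any individual driver: the only input is an aggregate condition (average speed), and the output applies uniformly to everyone entering the tolled segment during that interval.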

G. Demand-Responsive Parking

Varying parking prices to manage traffic could yield many benefits, including allocating parking spaces more efficiently and reducing underpriced parking. Underpriced parking is one of the primary reasons that people choose driving over other travel modes, and it could cause significantly more cruising.49 Nicole Ngo & Chandra Krishnamurth, The Effects of Demand-Responsive Parking on Transit Usage and Congestion: Evidence From Sfpark 2 (2017), https://pdxscholar.library.pdx.edu/trec_reports/145.
In fact, studies of congested downtown areas have found that about thirty percent of traffic is cruising for parking,50Donald Shoup, Cruising for Parking, 13 Transp. Policy 479, 480 (2006).
which increases congestion, pollution, and energy consumption. Most cities set flat meter rates that vary only by location. Using algorithms to enact demand-responsive parking could reduce the social cost of underpriced parking.

San Francisco is the first city in the US to enact demand-responsive parking through a program known as SFPark.51Ben Jose, San Francisco Adopts Demand-Responsive Pricing Program to Make Parking Easier, SFMTA (Dec. 5, 2017), https://www.sfmta.com/blog/san-francisco-adopts-demand-responsive-pricing-program-make-parking-easier.
Parking rates at the meters are adjusted based on the occupancy rates in the immediately preceding time periods52See Ngo & Krishnamurth, supra note 49, at 4, writing:

Parking rate changes were based upon average occupancy over the previous rate adjustment period. If average occupancy in the previous period fell within the target range, prices did not change. However, if average occupancy exceeded 80% in the previous period, then hourly parking rates increase by $0.25 per hour. If occupancy rates fell between 30 and 60% or <30% then parking rates decreased resp. by $0.25 and $0.50.
and based on the time of the day.53Id.
SFPark’s pilot phase increased parking availability,54Adam Millard-Ball, Rachel R. Weinberger & Robert C. Hampshire, Is the Curb 80% Full or 20% Empty? Assessing the Impacts of San Francisco’s Parking Pricing Experiment, 63 Transp. Res. Part Policy Pract. 76 (2014); SFMTA, SFpark Pilot Project Evaluation 9 (2014), https://www.sfmta.com/sites/default/files/reports-and-documents/2018/08/sfpark_pilot_project_evaluation.pdf.
reduced cruising and congestion,55Millard-Ball, Weinberger & Hampshire, supra note 54.
increased traffic speed, reduced emissions,56 SFMTA, supra note 54, at 10.
and increased transit ridership.57 Ngo & Krishnamurth, supra note 49, at 13–15.
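The adjustment rule quoted above is simple enough to transcribe directly. The sketch below follows the quoted description; treating the 60–80% occupancy band as the no-change target range is an inference from that description, and the function name and example rates are hypothetical.

```python
# Transcription of the SFpark rate-adjustment rule quoted above.
# Rates adjust each period based on average occupancy in the prior period.

def adjust_rate(hourly_rate, avg_occupancy_pct):
    if avg_occupancy_pct > 80:
        return hourly_rate + 0.25   # spaces too full: raise the price
    if 30 <= avg_occupancy_pct < 60:
        return hourly_rate - 0.25   # underused: lower the price
    if avg_occupancy_pct < 30:
        return hourly_rate - 0.50   # badly underused: lower it further
    return hourly_rate              # within the target range: no change

print(adjust_rate(2.00, 90))  # -> 2.25
print(adjust_rate(2.00, 45))  # -> 1.75
print(adjust_rate(2.00, 20))  # -> 1.50
print(adjust_rate(2.00, 70))  # -> 2.00
```

As with dynamic tolling, the input is an aggregate measurement (block-level occupancy) and the output is a uniform price for every driver seeking a space on that block, rather than a personalized rate.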

Like dynamic electronic tolling, demand-responsive parking improves system utility by coordinating drivers through incentives rather than restricting access. It promotes more efficient use of space without explicitly denying it. The underlying coordination problem is similar: drivers value parking spaces differently, but without such a system, higher-value users cannot pay for more convenient spots, nor can others accept lower rates to park farther away. Demand-responsive parking enables these exchanges, resolving the coordination problem.

H. Coordination in Current Urban Mobility Management

In each application of algorithms to urban mobility management described above, the rules determined by the algorithm act as coordination mechanisms. In some cases, there is some road space that multiple road users would like to pass through, and coordination is needed to ensure that road users pass through that space without collision. In other cases, there is some road space that multiple road users would like to utilize, but some need the space more than others; coordination mechanisms are required to allocate the road space to those who need the space more. In each case, coordination improves aggregate welfare by accounting for the interactive effects among individuals (particularly the spatial effects) and directing each individual to behave accordingly. Algorithms make the coordination and the use of the limited road space more efficient. The spatial coordination function of algorithmic law must be preserved as lawmakers attempt to use algorithms to customize the law for each individual. The following Part shows some of the problems created by algorithmic law when it is not made in a way that coordinates the movement of individuals while accounting for the spatial implications of movement.

II. Speed Limits: The Potential of Personalization and Necessity of Coordination

Applications of algorithms to urban mobility and parking management show that spatial coordination is a quintessential function of algorithms in lawmaking. Coordination is a process that accounts for how each individual’s behavior in the system affects others and directs each individual’s behavior to achieve a system-wide welfare maximization objective and, in most circumstances, benefit each individual. Algorithmic lawmaking without coordination would be inefficient and, in many instances, harmful to society as a whole and to each individual. As the example of personalized speed limits shows, even when tailoring the law for each person based on personal characteristics is possible, extending algorithmic lawmaking to each individual must incorporate spatial coordination to maximize social welfare. The coordination process must also account for the influence of social norms on human behavior. In this Part, I will first examine the idea of personalizing the law based on individual characteristics. I will point out several flaws of this idea from a utilitarian perspective, particularly the interactive effects that are neglected when the lawmaker makes algorithmic laws based only on personal characteristics. I will then present a solution to this problem by describing a robust three-step process for making algorithmic law that properly accounts for interactive effects.

A. Personalization of the Law

Algorithms can make the law not only more dynamic, but also more granular and thus better suited for different contexts. The impetus for this idea is that uniform law fails to account for heterogeneities in personal physical and psychological characteristics, ability to comply with the law, and personal preferences. Uniform law imposes different burdens on heterogeneous individuals and offers them varying degrees of protection, which raises concerns about the law’s effectiveness and equity. In the context of property law, for example, property taxes are often levied regardless of age. Stern and Lewinsohn-Zamir argue that property taxes should be lowered for the elderly, as they are less financially able to pay property taxes but are more psychologically attached to their homes.58 Stephanie M. Stern & Daphna Lewinsohn-Zamir, The Psychology of Property Law 66–69 (2020).
Many areas of law have already been differentiated for different groups based on physical characteristics. In negligence law, for example, children are subject not to the reasonable person standard but to the standard of a “reasonable child of the same age, intelligence, and experience under similar circumstances.”59 Restatement (Second) of Torts § 283A (1965).
In a handful of states, medical professionals are subject to the standard of care in their localities rather than to a national standard of care.60The number of states having the locality rule declined from twenty-one in 2007 to five in 2017. Brian K. Cooke, Elizabeth Worsham & Gary M. Reisfield, The Elusive Standard of Care, 45 J. Am. Acad. Psychiatry Law Online 358, 361 (2017). Although the locality rule has become less prevalent, medical professionals are still subject to an elevated standard of care rather than ordinary care because of their skill. See Restatement (Third) of Torts: Liability for Physical and Emotional Harm § 12 (2010).

Algorithms will enable further customization of the law and differentiate the law at a more granular level than singular partitions like the divisions along the lines of age and gender. The availability of data and computing power will enable the law to be tailored to each individual based on that individual’s unique set of characteristics and preferences. Ben-Shahar, Porat, and others have explored personalizing bright-line rules such as the drinking age,61See Ben-Shahar & Porat, supra note 3, at 112.
default rules such as intestacy laws,62See id. at 155.
mandatory contract terms, such as the withdrawal period from the contract,63Omri Ben-Shahar & Ariel Porat, Personalizing Mandatory Rules in Contract Law, 86 U. Chi. L. Rev. 255 (2019).
and legal standards like the reasonable person standard in negligence law.64Omri Ben-Shahar & Ariel Porat, Personalizing Negligence Law, 91 N.Y.U. L. Rev. 627 (2016).

Several legal scholars use the personalization of speed limits as an example to illustrate their idea of personalizing negligence standards. According to Ben-Shahar and Porat’s proposal, drivers’ speed limits would vary according to their driving skill or ability to avoid driving-induced risk.65Ben-Shahar & Porat, supra note 64, at 633.
In the personalization-by-risk scheme, for example, “safe” drivers would be subject to a lower standard of care, whereas “risky” drivers would be subject to a higher standard of care.66Id. at 650–652.
Different levels of care mean different speed limits. Their welfare analysis shows that the total social cost of personalized speed limits in both the skill-based and risk-based personalization schemes would be lower than that of a uniform speed limit for all drivers.67Id. at 648, 651.

Although speed limits can be expressed either as rules (drivers are prohibited from driving at speeds over 30 mph) or as standards (drivers are prohibited from driving at excessive speeds), 68See Louis Kaplow, Rules Versus Standards: An Economic Analysis, 42 Duke L. J. 557, 560 (1992).
the problems arising from different speed limits for different drivers discussed in this Article would exist in either case, since speed limits under the current law apply uniformly to all drivers. Under current law, when speed limits are expressed as standards, the criteria for determining whether a particular speed is excessive include impersonal factors such as the value of time; the personal characteristics of the driver such as the driver’s driving skill are not considered.69Id. at 562.
Thus, under current law, speed limits are uniform laws even when they are expressed as standards. Ben-Shahar and Porat argue that this standard should be personalized and differentiated based on personal characteristics. In contrast, according to my conception of algorithmic law, when speed limits are expressed as standards and whether someone was driving at excessive speeds is adjudicated ex post, judges should maintain the uniform objective standard for all tortfeasors. In instances where the speed limit is expressed as a rule and lawmakers want to personalize the rule, I would argue that the rule should be the same for all drivers.

B. Problems with Personalization Without Coordination

The personalization of speed limits is intended to make driving safer by making each road user drive at personally optimal speeds calculated based on personal characteristics. Since each driver has a unique set of characteristics, personalizing speed limits under the Ben-Shahar and Porat approach would differentiate the speed limit for each driver. This conception of personalizing speed limits would work only in some of the situations that Ben-Shahar and Porat envisioned. For example, it would work when one is driving alone on a particular road and the only potential type of accident is hitting a pedestrian; in that scenario, the driver exercises greater care by driving at a lower speed. However, most road segments are shared by many drivers, and each driver’s behavior affects other drivers. In inherently dangerous, interactive, and complex environments such as driving, optimizing individualized behavior without considering the effects of each individual’s behavior on others does not necessarily increase safety for everyone. It is quite possible, and perhaps very likely, that the Ben-Shahar and Porat proposal for personalizing speed limits would negatively impact safety. The personalization of speed limits, if it means significantly different speed limits for each driver, would work only where there are no other drivers, and its implementation will likely lead to many operational issues when other drivers are on the road.70Ben-Shahar and Porat try to address the potential coordination problems caused by differentiated speed limits, but their explanation is limited to the interaction between drivers and pedestrians and does not address the increased danger among drivers. See Ben-Shahar & Porat, supra note 3, at 66. They see the coordination problem as one of equalizing risk among drivers. Ben-Shahar and Porat do not address the spatial implications of differentiated speed limits or the role of social norms.
I explain the difference between the way I view coordination and the way Ben-Shahar and Porat view it in Part II.D. One potential solution under the Ben-Shahar and Porat framework that would account for the interactive effects of varying driving speeds is to redefine the speed associated with each negligence standard. For example, instead of establishing an inverse relationship between speed and level of care (meaning that driving at a lower speed is considered a higher level of care), the level of care would be defined by the deviation from the median driving speed. If the median driving speed is 55 mph, high care would be driving at 55 mph, median care could be driving at 52 mph or 58 mph, and low care could be driving at 50 mph or 60 mph. However, the median speed can be determined only after determining the driving speed of each driver, which in this approach is itself defined as a function of the median, putting the lawmaker in an endless loop without a solution. One way out of this dilemma is to use the simulation-based approach in Part II.C.
The problems arising from the differentiation of the law due to personalization have not been sufficiently explored.

Insights from two fields present difficulties for the speed limit personalization idea. First, traffic engineering models show that differentiated speed limits create more risk than uniform speed limits because differentiated speeds would create more opportunities for overtaking. Second, social psychology models predict that the riskiest drivers would not obey their (lower) personalized speed limits but would drive at speeds similar to those of most other drivers, who have much higher speed limits under the personalized scheme because of their lower risk profiles. In the following subsections, I discuss these issues and other consequences of differentiating speed limits in greater detail to illustrate the problems of differentiation and personalization without coordination. Before discussing the factors that make differentiated laws problematic, it is worth restating that my critique of personalization based only on individual characteristics is a utilitarian critique. Normative and legal considerations are largely beyond the scope of this Article.

1. Traffic Engineering and Spatial Effects

Concepts and findings from traffic engineering predict that differentiated speed limits will likely create more risk than uniform speed limits. The example at the beginning of the Article shows the spatial consequences of differentiated speed limits. Here is an even more extreme example. Consider a scenario in which the speed limit is personalized, most drivers are subject to speed limits between 65 mph and 70 mph, and one particularly risky driver is subject to a 48 mph speed limit. Although the 48 mph speed limit in isolation might make the most sense for that driver, driving 48 mph while others drive between 65 and 70 mph would be a hazard on the road. This risky driver would endanger himself and other drivers by obeying the 48 mph speed limit instead of driving at least 60 mph. Now consider the consequences of increasing the variance in speed more generally. Among the most severe repercussions of differentiating speed limits are more frequent braking, as drivers adjust to more frequent speed changes by other vehicles, and greater speed variability among vehicles. There would also be a higher incidence of overtaking, because greater speed differentials create more frequent and larger gaps between cars. Overtaking creates a significant danger to drivers. Studies have found that overtaking other drivers and being overtaken are associated with a higher risk of accidents, and that less frequent brake use and not moving about in traffic are associated with lower risk.71See, e.g., Tay Wilson & John Greensmith, Multivariate Analysis of the Relationship between Drivometer Variables and Drivers’ Accident, Sex, and Exposure Status, 25 Hum. Factors 303 (1983).
Moreover, differentiated speed limits might be particularly problematic on roads where only one lane runs in each direction, as more drivers might be inclined to overtake by crossing into the lane with oncoming traffic.
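A back-of-the-envelope calculation illustrates why speed variance, rather than speed itself, drives overtaking exposure: the rate at which one vehicle catches another is proportional to their speed difference, so total exposure scales with the sum of pairwise speed differentials. The speeds below are illustrative, and the exposure measure is a simplified proxy of my own construction, not a model from the cited studies.

```python
# Sketch: total overtaking exposure as the sum of pairwise speed differences.
# Under a uniform limit that everyone obeys, no vehicle catches another;
# under personalized limits with the same rough average, catch-up encounters
# (and hence overtaking opportunities) become frequent.

from itertools import combinations

def overtaking_exposure(speeds_mph):
    """Sum of pairwise speed differences (mph), a proxy for encounter rate."""
    return sum(abs(a - b) for a, b in combinations(speeds_mph, 2))

uniform      = [55, 55, 55, 55, 55]   # uniform limit, full compliance
personalized = [48, 52, 55, 58, 62]   # personalized limits, similar average

print(overtaking_exposure(uniform))        # -> 0
print(overtaking_exposure(personalized))   # -> 68
```

The two fleets have nearly the same average speed, yet the differentiated fleet generates dozens of mph of relative motion between vehicle pairs while the uniform fleet generates none, which is the mechanism behind Lave’s finding, discussed below, that variance rather than average speed predicts fatalities.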

Statistical evidence and analysis support the notion that increasing speed variance would increase traffic accidents. An examination of national highway crash data led Charles Lave to conclude that the average speed on the road has no statistically significant effect on the fatality rate, but the variance of speed has a significant effect.72Lave, supra note 7. See also Mohammed Quddus, Exploring the Relationship Between Average Speed, Speed Variation, and Accident Rates Using Spatial Statistical Models and GIS, 5 J. Transp. Saf. Secur. 27 (2013) (Using geospatial data to show that average speeds are not statistically significantly related to accident rates, but that “a 1% increase in speed variation is associated with a 0.3% increase in accident rates, ceteris paribus.”); Dirk Helbing & Bernardo A. Huberman, Coherent moving states in highway traffic, 396 Nature 738 (1998) (Using traffic simulations to show that a coherent state, where vehicles move together in similar speeds, reduces accident rates by reducing lane changes). However, Garber and Graham found that an increase in the speed limit is associated with higher accident rates in a different study, see, e.g., Steven Garber & John D. Graham, The Effects of the New 65 Mile-per-Hour Speed Limit on Rural Highway Fatalities: A State-by-State Analysis, 22 Accid. Anal. Prev. 137 (1990). Nevertheless, such studies do not contradict the primary point that speed variance is a significant contributor to accidents.
Lave succinctly summarizes his findings as “Variance kills, not speed.”73Lave, supra note 7, at 1159.
Based on his findings, Lave concludes that speed limits serve as coordination mechanisms rather than limits on behavior.74Id. at 1161.
Lu and Shladover, in discussing information systems and algorithms that calculate speed limits, lend indirect support to Lave’s thesis. They emphasize that when traffic managers vary speed limits on sections of roads, all vehicles must use the same information and the same algorithm to calculate the speed limit, so that all vehicles on a particular section have the same speed limit at a given time; otherwise, traffic flow would be disrupted.75Xiao-Yun Lu & Steven Shladover, Review of Variable Speed Limits/Advisories: Theory, Algorithms, and Practice, 2423 Transp. Res. Rec. 1, 15 (2014) (“There is a problem here: no coordination occurs unless all the vehicles calculate with the same algorithm and with the same set of data. If the use of the same algorithm and data set cannot be achieved, each vehicle may have a different speed limit value. Different values cannot help to reduce speed variance and shock waves”).
The interactive effects among vehicles on the road are a subset of the spatial effects of individual behavior. It is important to note that uniform speed limits would not only achieve the social objective of reducing overall collision rates but also serve the interests of each driver by directing all cars to drive more safely. As stated in the previous sections regarding traffic lights, everyone would be worse off if traffic rules enabled more collisions.

For personalizing speed limits to achieve a net improvement in safety, the algorithm calculating the speed limits should dynamically account for the position and movement of neighboring vehicles and the driving behavior patterns of their drivers as well. In addition, the algorithm should account for the effects of social norms and enforcement costs. Findings from traffic psychology and traffic engineering indicate that, in effect, a generally uniform speed limit for all drivers would likely be the output of an algorithm that adopts this approach.

The point of showing that differentiated speeds cause more accidents, however, is not that uniform law is always preferable to differentiated law. The point is that personalization of the law requires spatial coordination to work, and that such coordination sometimes requires uniformity in the law. Optimization using algorithms at the system level rather than at the individual level is needed to determine the set of commands that maximizes social utility (and the utility of each individual in many instances).76Although this Article is intended to illustrate the necessity of system optimization by contrasting system optimization with individual optimization, the important step of finding the appropriate level of jurisdiction to conduct system optimization is not discussed in this Article. One example that illustrates this issue is the possible failure of New York City and the Metropolitan Transportation Authority to fully account for the effects of its proposed downtown Manhattan congestion pricing program on New Jersey in terms of traffic congestion and pollution. See Ana Ley, New Jersey Sues Over Congestion Pricing in New York City, N.Y. Times (Jul. 21, 2023), https://www.nytimes.com/2023/07/21/nyregion/nj-congestion-pricing-nyc-lawsuit.html (explaining New Jersey’s lawsuit against the Federal Highway Administration for approving the congestion pricing program and failing to account for and address the impact of the congestion mitigation program on New Jersey). The New York congestion pricing program not only shows the importance of lawmaking at the appropriate jurisdictional level, but also underscores the primary idea in this Article that all drivers affected (not only New York residents but also residents of neighboring states) must be accounted for when making the law to maximize aggregate utility.

2. Social Norms

Differentiating speed limits may create harm, not because of more frequent overtaking, but because drivers do not obey their personalized speed limits. Concepts and findings from social psychology show that differentiating speed limits would likely reduce safety by motivating riskier drivers to undertake more dangerous driving behavior. According to the Theory of Planned Behavior, a social psychology theory that describes a person’s motivations to carry out a behavior, the likelihood of enacting a behavior is determined by the intention to enact that behavior, and the intention to enact that behavior is determined by attitudes toward that behavior, subjective norms, and perceived behavioral control.77Icek Ajzen, The Theory of Planned Behavior, 50 Organ. Behav. Hum. Decis. Process. 179, 182 (1991). Models and empirical studies other than those based on the Theory of Planned Behavior have also found that social norms predict the intention to undertake a behavior in the urban mobility context. For example, using Tom Tyler’s model of normative and instrumental motivations, Jingkang Gao and Jinhua Zhao have found that social norms motivate local drivers to comply with Shanghai’s license plate auction policy. Jingkang Gao & Jinhua Zhao, Normative and Image Motivations for Transportation Policy Compliance, 54 Urban Stud. 3318 (2017).
Subjective norms include both injunctive norms (the extent to which one believes others approve of the behavior)78Richard McAdams argues that the existence of a law provides a signal that the majority of the public approves of the content of that law, and that the promulgation of a law (without the effects of sanctions for violating the law) helps obtain compliance with that law from people because they wish to obtain the approval of others. Richard McAdams, An Attitudinal Theory of Expressive Law, 79 Or. L. Rev. 339 (2000).
and descriptive norms (the extent to which one observes others carry out the behavior).79Icek Ajzen, Constructing a Theory of Planned Behavior Questionnaire, https://people.umass.edu/aizen/pdf/tpb.measurement.pdf (last accessed Jan. 13, 2026).
The Theory of Planned Behavior is a useful model for driving behaviors such as speeding and drunk driving,80Dianne Parker, et al., Intention to Commit Driving Violations: An Application of the Theory of Planned Behavior, 77 J. Appl. Psychol. 94, 98 (1992).
as well as other behaviors in urban mobility, such as using bicycle-sharing81Sigal Kaplan, et al., Intentions to Use Bike-Sharing for Holiday Cycling: An Application of the Theory of Planned Behavior, 47 Tour. Manag. 34 (2015).
and taking public transit.82Yuko Heath & Robert Gifford, Extending the Theory of Planned Behavior: Predicting the Use of Public Transportation1, 32 J. Appl. Soc. Psychol. 2154 (2002).
Subjective norms, in particular, have been found to exert even more significant influence on one’s behavior than concerns about legal sanctions.83See Tony Benson, Marian McLaughlin & Melanie Giles, The Factors Underlying the Decision to Text While Driving, 35 Transp. Res. Part F Traffic Psychol. Behav. 85, 92 (2015) (showing that beliefs about the approval of “most people” have a higher correlation with the intention to text while driving than beliefs about the approval of law enforcement).
In other words, people tend to do what others do and what they believe others approve of.
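The structure of the theory can be rendered as a toy model. In the sketch below, intention is a weighted linear combination of the three determinants; the weights, the 0-to-1 scales, and the functional form are illustrative assumptions of mine, not Ajzen's empirical specification.

```python
# Stylized linear operationalization of the Theory of Planned Behavior.
# The weights and 0-1 scales are illustrative assumptions, not Ajzen's
# empirical estimates.

def intention(attitude, subjective_norm, perceived_control,
              w_att=0.4, w_norm=0.4, w_pbc=0.2):
    """Return a 0-1 intention score from 0-1 component scores."""
    return (w_att * attitude
            + w_norm * subjective_norm
            + w_pbc * perceived_control)

# A driver with a weak personal inclination to speed (attitude 0.2) but
# facing a strong descriptive norm (nearly everyone around is speeding):
score = intention(attitude=0.2, subjective_norm=0.9, perceived_control=0.8)
```

On these assumed weights, a strong subjective norm can push intention well above what the driver's own attitude would predict, which is precisely the mechanism the text relies on.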

The findings from traffic psychology imply that differentiated speed limits and other differentiated laws may motivate riskier actors to carry out riskier behavior despite laws discouraging them from doing so, because those actors will mimic the behavior of less risky actors, thereby increasing the net social cost. Consider the following scenario involving speed limits. Suppose there is a 45 mph uniform speed limit on a road with multiple lanes. In this scenario, most drivers would see that other drivers drive at 45 mph, and the social norm of driving at 45 mph would induce nearly all drivers to drive at 45 mph. Now imagine that the law is crafted using algorithms that consider only personal characteristics: we increase the speed limit to 50 mph for most drivers, but for a few risky drivers—because they have uncommon physical conditions—the speed limit is only 35 mph. The Theory of Planned Behavior, as supported by empirical findings, posits that risky drivers would nevertheless drive at speeds similar to those of most other drivers and above their personalized speed limits. These risky drivers observe that others drive at 50 mph, and thus would also drive at 50 mph.84The tendency to drive at speeds similar to others’ speeds could be exacerbated by the fact that the risky drivers do not know that others have higher personalized speed limits. In addition to the physical difficulties of knowing others’ personalized speed limits, privacy laws could prevent drivers from knowing others’ personalized speed limits.
The actual outcome of a speed limit personalization scheme where most drivers have speed limits of 50 mph and a few drivers have significantly lower speed limits is likely to be that those who have speed limits of 50 mph drive at 50 mph, and most of those who have substantially lower speed limits also drive at 50 mph. Personalizing the speed limit has thus increased the overall speed of drivers from 45 mph to 50 mph, resulting in more collisions and damage.85It’s likely that most people do not observe the speed limit and drive a few miles per hour over it. This example would still support the general point that personalizing the speed limit would raise overall driving speeds. Suppose people generally drive 5 mph over the speed limit. With a 45 mph uniform speed limit, nearly all drivers would drive at 50 mph. However, with a differentiated speed limit where most drivers have a 50 mph speed limit and others have a 35 mph speed limit, nearly all drivers would drive at 55 mph.
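The scenario above can be sketched as a minimal simulation. The behavioral rule—each driver defaults to the modal observed speed when it exceeds his or her own limit—is a deliberately crude stand-in for the descriptive-norm mechanism, assumed purely for illustration:

```python
# Stylized descriptive-norm model (an illustrative assumption): drivers
# gravitate to the speed they observe most others driving whenever that
# speed exceeds their own personalized limit.
from statistics import mean, mode

def realized_speeds(personal_limits):
    norm_speed = mode(personal_limits)  # the speed most drivers display
    return [max(limit, norm_speed) for limit in personal_limits]

uniform = [45] * 100                     # uniform 45 mph limit
personalized = [50] * 95 + [35] * 5      # a few risky drivers get 35 mph

# Mean realized speed: 45.0 under the uniform limit, 50.0 under
# personalization -- the low-limit drivers are pulled up to the norm.
```

Under this assumed rule, personalization raises everyone's realized speed, including that of the very drivers the lower limits were meant to restrain.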

A lack of uniform law, especially when the law is highly differentiated, would create more dangerous behavioral norms and erode (if not eliminate) the social norm of complying with the law. When most people abide by the law, non-compliance with the law results in a feeling of loss of esteem. The greater the level of compliance, the greater the loss of esteem from non-compliance.86See Richard McAdams, The Origin, Development, and Regulation of Norms, 96 Mich. L. Rev. 338, 366–367 (1997).
A social consensus on the appropriate behavior, expressed through uniformity in law, would promote more uniformity of behavior and induce individuals to voluntarily comply with the law to avoid the loss of esteem.87The idea that the presence of social norms promotes compliance with the law has been found in urban transportation. Lior Strahilevitz posits that as the FasTrak program, in which drivers may use HOV lanes on the I-15 highway near San Diego by carpooling or prepaying a monthly fee, became more popular and more drivers used the HOV lane, violations of HOV lane use decreased because the loss of esteem for violating the law increased. See Lior Strahilevitz, How Changes in Property Regimes Influence Social Norms: Commodifying California’s Carpool Lanes, 75 Ind. L. J. 1231, 1264–1265 (2000).
Differentiating the law for each individual would make it impossible to tell whether others are complying with the law, since no individual can know what the law requires of others. This would reduce the incentive to comply with the law voluntarily.

The influence of social norms extends far beyond driving behavior, as the Theory of Planned Behavior has been found to explain behaviors such as smoking,88Gabriela Topa & Juan Antonio Moriano, Theory of Planned Behavior and Smoking: Meta-Analysis and SEM Model, 1 Subst. Abuse Rehabil. 23 (2010).
tax compliance,89Serkan Benk & Tamer Budak, An Investigation of Tax Compliance Intention: A Theory of Planned Behavior Approach, Eur. J. Econ. Finance Adm. Sci. 180 (2011).
and illegal digital downloading.90Xiao Wang & Steven R. McClung, Toward a Detailed Understanding of Illegal Digital Downloading Intentions: An Extended Theory of Planned Behavior Approach, 13 New Media Soc. 663 (2011).
Therefore, the effect of differentiating the law on social norms should be considered when algorithms are used to make the law more granular.91Social norms can only influence behavior when differentiated behaviors are observable. If people cannot observe others’ actions or reliably infer their beliefs, then descriptive and injunctive norms are unlikely to shape their decision-making.

There are two other ways in which the differentiation of law can decrease one’s compliance with the law and decrease the effectiveness of the law. First, the law can express societal attitudes about appropriate behavior in a given context.92See McAdams, supra note 86.
Citizens often interpret the law as a signal of how others view what is acceptable. If the law is personalized—particularly through algorithmic differentiation—it may cease to reflect a shared societal judgment. As a result, individuals may no longer feel compelled to comply with the law in order to gain social approval. Second, when the law is differentiated for each person and behavior is differentiated accordingly, people subject to more stringent or more burdensome laws might see themselves as the “suckers” and view those subject to less stringent or less burdensome laws as free riders. Experiments have shown that people dislike being the “suckers” and want to punish the free riders.93See Ernst Fehr & Simon Gächter, Cooperation and Punishment in Public Goods Experiments, 90 Am. Econ. Rev. 980, 984 (2000).
In addition, it has been empirically demonstrated that the degree to which people contribute to public goods depends on the degree to which they believe others contribute.94See generally Urs Fischbacher, Simon Gächter & Ernst Fehr, Are People Conditionally Cooperative? Evidence from a Public Goods Experiment, 71 Econ. Lett. 397 (2001) (showing that half of the subjects in an experiment measuring their willingness to contribute to a public good are conditional cooperators).

3. Law Enforcement Cost

While spatial effects and social norms present the most salient difficulties for uncoordinated personalization of the law, and thus are the primary factors that lawmakers must account for, several other issues arising from differentiation also warrant consideration. Another concern regarding the personalization scheme proposed by Ben-Shahar and colleagues arises from the cost of enforcing differentiated law. That cost includes determining the law for each individual, communicating the personalized law to each individual, and communicating the personalized law to police officers in real time where appropriate. Returning to the speed limit example, it would be much more costly for police officers to enforce differentiated speed limits than uniform ones. Instead of simply measuring the speed of each vehicle and pulling over any vehicle traveling above the uniform speed limit, the police officer would need to measure the speed of the vehicle, identify the driver and the personalized speed limit for that driver, and then determine whether that specific driver has violated his or her personalized speed limit. Properly accounting for law enforcement costs in creating algorithmic law is likely to shift some differentiated law schemes toward uniform ones.

Some might argue that the enforcement cost issue applies only to rules, and that in instances where the law is a standard, the reasonableness of one’s behavior is determined ex post. Since the personalized law does not need to be communicated to the individual or law enforcement ex ante, there would be no additional law enforcement cost arising from individually differentiated laws. However, this argument relies on the assumption that each individual knows, ex ante, all the human characteristics that determine the relevant standard and his or her data for each characteristic (or perhaps more importantly, his or her standing in the population for each characteristic) so that one could determine the personalized standard for oneself and behave accordingly. This is not a reasonable assumption because most individuals cannot be expected to know all the parameters used to determine the standard, and numerous studies have shown that humans do not accurately gauge their own abilities.95See, e.g., Laila M. Martinussen, Mette Møller & Carlo G. Prato, Accuracy of Young Male Drivers’ Self-Assessments of Driving Skill, 46 Transp. Res. Part F Traffic Psychol. Behav. 228 (2017).
Moreover, for the law to incentivize each individual to engage in collectively optimal behavior, each individual would need to know the effects of his or her behavior on others. This is also not a reasonable assumption, as people frequently engage in individually welfare-maximizing behavior that is not collectively optimal.96See generally Thomas C. Schelling, Micromotives and Macrobehavior (rev. ed., 2006).
For algorithmic law to incentivize people to engage in socially optimal behavior, the law should be determined and communicated to each individual and to law enforcement ex ante. This process would significantly increase the cost of law enforcement in many cases if the law for each individual were differentiated.

4. Externalities

Negative externalities would arise if the initial personalization scheme is not appropriately designed. Ben-Shahar and Porat considered two personalization schemes: personalization by risk and personalization by skill. In the personalization by skill scheme, a driver with poor driving skills would be subject to a lower level of care because the cost of complying with the medium level of care would be too high for that driver. However, initially allowing this individual to take a lower level of care would create a negative externality by disincentivizing this individual from improving his or her skills. Since the algorithm making the law takes past behavior as inputs, this scheme would continually incentivize an individual with low skills to engage in behavior that is harmful to the individual and society as a whole, and the behavior would become more dangerous with each iteration of algorithmic law creation.

Consider two other examples from urban mobility that illustrate the role of externalities: the use of daytime running headlights (DRL) and route assignment. Studies generally find that DRL use is associated with a slight reduction in crashes,97 CTC & Associates LLC, Effects of 24-Hour Headlight Use on Traffic Safety 2 (2010), https://www.dot.state.mn.us/research/TRS/2011/TRS1009.pdf.
and naturalistic studies support these statistical findings.98Archana Venkatachalapathy, et al., A Naturalistic Study Assessing the Impact of Daytime Running Lights and Vehicle Passing Events on Cyclist’s Physiological Stress, 16 Transp. Res. Interdiscip. Perspect. 100703 (2022).
However, Elvik points out a paradox in the findings on the use of DRL: an increase in the percentage of cars using DRL is not associated with a proportionate reduction in crashes.99Rune Elvik, Can a Road Safety Measure Be Both Effective and Ineffective at the Same Time? A Game-Theoretic Model of the Effects of Daytime Running Lights, 59 Accid. Anal. Prev. 394, 394 (2013).
Elvik explains this paradox by pointing out that DRL, like vaccination, has internal and external effects.100Id. at 395.
The internal effect of using DRL, other than seeing the road more clearly, is becoming easier for other cars to detect, thus increasing the safety of the car using DRL.101Id.
However, as experiments have shown, when one vehicle uses DRL, vehicles not using DRL become relatively less detectable, and road users relying on seeing headlights to detect vehicles may be less capable of detecting vehicles that are not using DRL.102Id.
The internal effect of using DRL to make one’s vehicle safer is likely offset to some degree by the external effect of making other drivers’ vehicles less safe. Eventually, it is no longer socially beneficial, and may even be socially harmful, for an additional vehicle to use DRL. To underscore this point, Elvik notes that the percentage of accident reduction associated with DRL use has diminished in more recent studies. A study in 1965 (when few drivers used DRL) showed roughly a 40% reduction in accidents from using DRL, whereas a study in 2005, when DRL had become much more common, showed merely a 5% reduction; in fact, a Danish evaluation in the mid-1990s of a law requiring DRL found that the law had no impact on safety.103Id. at 396.
If a law were made regarding the use of DRL, the personalization of that law must account for the externalities of each individual’s use of DRL.
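Elvik's relative-conspicuity logic can be sketched numerically. The linear detection functions below are illustrative assumptions of mine, not his model; they capture only the qualitative point that the marginal social benefit of DRL adoption falls, and eventually turns negative, as the adopting share grows.

```python
# Stylized relative-conspicuity model (illustrative, not Elvik's).
# p is the share of vehicles already using DRL; k scales the effect.

def detection_gain_with_drl(p, k=0.4):
    """Extra detectability for a DRL car; shrinks as DRL becomes common."""
    return k * (1 - p)

def detection_loss_without_drl(p, k=0.4):
    """Lost detectability for a non-DRL car; grows as DRL spreads."""
    return k * p

def marginal_social_benefit(p):
    """Internal gain to the adopting car minus external harm to the rest."""
    return detection_gain_with_drl(p) - detection_loss_without_drl(p)

# Early adoption (p = 0.05) yields a large positive marginal benefit;
# near-universal use (p = 0.95) yields a negative one.
```

The sign flip at high adoption shares mirrors the diminishing accident-reduction estimates Elvik collects: a personalization algorithm optimizing each driver's DRL rule in isolation would miss the external term entirely.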

Route assignment also illustrates the externalities of individual behavior in urban mobility. If every vehicle took the route that the routing algorithm considers to be the shortest for its passenger, some roads would end up heavily congested because they lie along the shortest routes between popular origins and destinations, while roads that are not along those shortest paths would be underutilized. If, instead, some cars were directed to take less direct paths requiring longer travel times as initially calculated, congestion on the shortest path would be relieved and the aggregate time that drivers spend on the road could significantly decrease. Models show that the total travel time when a planner chooses paths for drivers to achieve system optimality would be substantially lower than the total travel time when each individual chooses the shortest path for himself or herself.104See, e.g., Md Mamun, Ashfia Siddique & S M Rahman, Comparison of User Equilibrium (UE) and System Optimum (SO) Traffic Assignment Methods for Auto Trips, International Conference on Recent Innovation in Civil Engineering for Sustainable Development (IICSD-2015), https://www.researchgate.net/publication/296846245_Comparison_of_User_Equilibrium_UE_and_System_Optimum_SO_Traffic_Assignment_Methods_for_Auto_Trips (2015); Hani S. Mahmassani & Srinivas Peeta, Network Performance Under System Optimal and User Equilibrium Dynamic Assignments: Implications for Advanced Traveler Information Systems, Transp. Res. Rec. (1993), https://trid.trb.org/view/385082.

Route planning illustrates that individual optimization does not, in the aggregate, lead to system optimization. The directive given to, and the behavior of, each individual affect other individuals in the system. To achieve system optimality, a planner must choose some paths that are not individually optimal for some drivers, thereby serving as a coordinator for all drivers in the system. The system optimal solution can be found using simulations, a process in which the planner generates multiple (if not all) possible path combinations for all drivers in the system and chooses the path combination that minimizes the total travel time.
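The gap between individual and system optimization can be made concrete with Pigou's classic two-route network (my illustrative example, not drawn from the cited studies): one unit of traffic chooses between a congestible route whose travel time equals the fraction x using it, and an uncongested route with constant travel time 1.

```python
# Pigou's two-route example (illustrative): total travel time as a
# function of the fraction x of drivers taking the congestible route.

def total_travel_time(x):
    return x * x + (1 - x) * 1.0

# User equilibrium: the congestible route is faster whenever x < 1, so
# selfish routing drives everyone onto it.
ue_total = total_travel_time(1.0)                 # 1.0

# System optimum: a planner searches over splits and keeps the best.
splits = [i / 1000 for i in range(1001)]
so_x = min(splits, key=total_travel_time)         # 0.5
so_total = total_travel_time(so_x)                # 0.75
```

The planner's assignment cuts total travel time by a quarter, even though half the drivers are directed onto a route that is slower for them individually, which is precisely the coordinator role described above.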

5. System Unpredictability

It is also possible that we would be unable to model or measure the effects of customizing the law based on personal characteristics: the effects on each individual’s future behavior, on other individuals’ future behavior, and on the welfare of society as a whole. This problem with differentiating the law based only on individual characteristics is slightly different from the problem described in the previous section. Whereas the previous section discussed negative externalities, some of which might not be visible or anticipated, this section is concerned with the unpredictability of differentiated law from a utilitarian perspective.

Making the law algorithmic rather than fixed and uniform, particularly when the law is differentiated for each individual based only on individual characteristics, is likely to increase the unpredictability of the feedback effects of human behavior on the law and the unpredictability of the impact of individual behavior on society. The content of the law influences people’s behavior and their preferences.105See Oren Bar-Gill & Chaim Fershtman, Law and Preferences, 20 J. Law Econ. Organ. 331 (2004).
Behavior and preferences, in turn, are used as inputs for determining the content of the law made by algorithms. Even if the law’s impact on each individual’s behavior could be anticipated when algorithms are used to make the law for each individual, the feedback effects of the individual’s behavioral changes on the content of the law to be made in the next iteration would be challenging to anticipate.

Suppose someone’s personalized blood alcohol limit is raised to 0.1% (above the current 0.08% limit). What would be the impact on this person’s driving behavior? What would be the impact of the resulting behavioral change on the next iteration of calculating this person’s blood alcohol limit? What about the change in behavior and the change in the law in the iteration after that? What would be the change in collective driving behavior if a uniform blood alcohol limit of 0.08% were changed to a personalized scheme, where Person A was subject to a 0.1% blood alcohol limit, Person B was subject to a 0.05% blood alcohol limit, Person C was subject to a 0.03% blood alcohol limit, and so on? And what would be the impact of their behavior when the law is to be determined in the next iteration? The outcome of differentiating the blood alcohol limit could be that there would be enough people with elevated blood alcohol limits that they would exert a negative social influence on drivers with lower limits, and those drivers (and drivers collectively) would drink more alcohol before driving. This change in behavior and its social consequences would become part of the input data for the next iteration of adjusting the law. In that iteration, many drivers could end up with higher blood alcohol limits because enforcing lower limits appears difficult and expensive. The ultimate consequence of differentiating blood alcohol limits could be an upward feedback loop: raising the blood alcohol limit, increasing the incidence of drunk driving, lowering the enforcement of drunk driving because it becomes more expensive, and increasing accidents. Of course, there are many possible outcomes. The point is that the social consequences of differentiating blood alcohol limits are highly unpredictable.
Our current understanding of driving behavior and the law’s impact on behavior is unlikely to generate robust predictions past the initial iteration.

Allowing algorithms to make the law, especially if the process only accounted for personalized characteristics but not the effects of each individual’s behavior on others, could yield highly unpredictable consequences. Complexity theory, which describes complex systems with interconnected components and feedback effects, has been used to describe the interactive evolution of law, behavior, and social change.106Complexity theory includes concepts such as chaos theory, which characterizes the interactions among law, behavior, and social change. See Mark J. Roe, Chaos and Evolution in Law and Economics, 109 Harv. L. Rev. 641 (1996).
In complex systems, changes in microscopic components of the system can lead to unpredictable macroscopic changes.107See Melanie Mitchell, Complexity: A Guided Tour 9–10 (2011).
A frequently cited example that illustrates the unpredictability of complex systems is that a butterfly flapping its wings could induce a hurricane on the other side of the Earth.108See Emilio Ferrara, The Butterfly Effect in Artificial Intelligence Systems: Implications for AI Bias and Fairness, 15 Mach. Learn. Appl. 100525, 1 (2024).
Similarly, even small changes in the law could induce significant and unpredictable changes in individual behavior and social behavior when the law is made and constantly updated by algorithms using behavior as input.
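This sensitivity can be demonstrated with the logistic map, a standard toy model of a chaotic feedback system (my example, unrelated to any specific legal rule): two trajectories that start a mere millionth apart soon bear no resemblance to one another, even though the update rule is fully deterministic.

```python
# Logistic map at r = 4: a standard example of a chaotic feedback system.
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)   # initial state perturbed by one millionth

# Within a few dozen iterations the two trajectories diverge to
# completely different values, despite the deterministic rule.
```

An algorithmic law that feeds each iteration's behavior back into the next iteration's rule has exactly this structure: a tiny difference in inputs can compound into a qualitatively different legal and behavioral trajectory.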

Although unpredictability is an issue arising from making the law algorithmic under any approach, including the approach introduced in this Article, it is likely to be even more problematic if algorithmic law were personalized without coordination. The three-step coordination process discussed in the following sections would significantly ameliorate the unpredictability issue: once the interactive effects are accounted for, the coordination process is likely to produce uniform and constant law in many instances, thereby preserving the nature of current law and maintaining the status quo. Of course, the current uniform and constant form of the law still has evolutionary effects, some of which could be difficult or even impossible to predict. The point is that the unpredictability of differentiated law is much greater than that of uniform law.

6. Generalization of the Problems of Personalization

The issues with personalizing the law without considering the interactive effects of personalization, described in the previous sections, can be generalized as follows. First, differentiation could create unanticipated interactive effects (from a spatial perspective) that cannot be accounted for if the lawmaker optimized the law based only on individual welfare considerations. Second, social norms could cause those who are given more stringent rules or higher standards to adopt behaviors similar to those who are given less stringent rules or lower standards, and thus create more overall risk in a scenario where the law is differentiated for each person than in a scenario where everyone is given the same intermediate rule or standard. Third, the additional cost of law enforcement is not accounted for in personalization schemes. Fourth, improper personalization can create negative externalities. Lastly, the interactions and feedback effects can be highly unpredictable and path-dependent; the system as a whole can become much more erratic with more differentiation. In summary, personalization without proper coordination often results in a scenario in which social welfare is not maximized. A process that accounts for the interactive effects is necessary to optimize social welfare. Thomas Schelling highlights the importance of accounting for interactive effects when using individual incentives to predict collective behavior:

People are responding to an environment that consists of other people responding to their environment, which consists of people responding to an environment of people’s responses . . . These situations, in which people’s behavior or people’s choices depend on the behavior or the choices of other people, are the ones that usually don’t permit any simple summation or extrapolation to the aggregates. To make that connection we usually have to look at the system of interaction between individuals and their environment, that is, between individuals and other individuals or between individuals and the collectivity.109 Schelling, supra note 96, at 14.

Setting speed limits for each individual without accounting for the interactions among drivers is ineffective. This illustrates the more general principle that welfare maximization at the individual level is not equivalent to welfare maximization at the aggregate level. The Prisoner’s Dilemma game reflects this principle more broadly.110See, e.g., Robert Axelrod, Effective Choice in the Prisoner’s Dilemma, 24 J. Confl. Resolut. 3 (1980) (explaining the Prisoner’s Dilemma game).
In the Prisoner’s Dilemma game, if each player tries to optimize his or her utility without cooperating with the other player, confession is the optimal strategy for each player. However, if both players commit to a binding agreement that neither would confess, they can achieve a higher level of utility collectively and individually. It is important to emphasize that the payoff for each player in the Prisoner’s Dilemma game is higher when both cooperate (don’t confess) than when both defect (confess). Similarly, when two cars collide, not only are the two drivers collectively worse off, but each driver is also worse off. The law, as Richard McAdams explains, provides mechanisms to facilitate coordination in human interactions, such as those with payoff structures like the Prisoner’s Dilemma game, so that society is better off.111See generally Richard McAdams, Focal Point Theory of Expressive Law, 86 Va. L. Rev. 1649 (2000).
In order to issue the optimal combination of laws that maximizes social utility, the lawmaker must first compute the payoffs that result from each possible combination of laws and then identify the optimal combination. In the following section, I describe this process in greater detail.
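The computation the lawmaker must perform can be sketched in a few lines for the Prisoner's Dilemma itself, using the textbook payoff ordering (the specific numbers are illustrative): enumerate every joint action, sum the payoffs, and select the combination that maximizes joint welfare.

```python
# Prisoner's Dilemma with illustrative textbook payoffs:
# payoffs[(action_1, action_2)] = (payoff to player 1, payoff to player 2).
from itertools import product

ACTIONS = ["stay silent", "confess"]
payoffs = {
    ("stay silent", "stay silent"): (3, 3),
    ("stay silent", "confess"):     (0, 5),
    ("confess",     "stay silent"): (5, 0),
    ("confess",     "confess"):     (1, 1),
}

# Joint welfare of each combination of commands the lawmaker could issue.
welfare = {combo: sum(payoffs[combo]) for combo in product(ACTIONS, repeat=2)}

best = max(welfare, key=welfare.get)
# best == ("stay silent", "stay silent"): joint welfare 6, versus 2 at
# the mutual-confession outcome that uncoordinated play produces.
```

The lawmaker's task, on this view, is exactly this search over joint commands rather than a separate optimization for each player.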

C. Three-Step Process for Making Algorithmic Law

As the example from the previous section on speed limits illustrates, the law must be coordinated to achieve system optimality and to sufficiently protect each individual in many instances;112The necessity of coordinating personalization of law applies to both rules and standards in certain instances. The difference between rules and standards in personalization and coordination is not an essential feature of the concept of coordination explained in this Article. I refer to both as “the law” or “commands” in this discussion.
coordination must account for spatial effects, social norms, enforcement costs, externalities, and system unpredictability. As explained, one person’s behavior affects the behavior and utility of other individuals in many contexts. The law for each person should not be determined using an optimization process at the individual level but using an optimization process at the system level; the objective function and constraints at the system level would account for the interactive effects of each individual’s behavior. Rather than identifying the optimal law for each individual separately, the lawmaker should simultaneously determine the optimal combination of rules for all individuals involved in a scenario.

I propose the following three-step process for using algorithms to determine the law for each individual in a particular situation that accounts for the interactive effects of each individual. This approach improves not only aggregate welfare but also the welfare of most individuals.

For the first step, the lawmaker should try to identify whether there are interactions among individuals and classify the nature of the interaction. Different types of interactions require different combinations of individual behavior to obtain aggregate optimality. For example, is this situation involving multiple individuals a Prisoner’s Dilemma game, a Stag Hunt game, a Hawk-Dove game, another game or type of interaction that game theorists have modeled, or something else?

Categorizing the interaction requires an extensive analysis of the considerations discussed in Part II.B and other factors as appropriate. Categorizing the interaction into well-studied models would allow the lawmaker to identify the form of the optimal solution for the combination of laws for all individuals and reduce their search for the optimal solution.113The optimal solution for many types of coordination games and interactions is not uniformity of law or behavior.

The second step involves listing all the possible combinations of laws for all individuals involved. The lawmaker would enumerate all the possible combinations of laws for each individual that fit the form of the optimal solution if the lawmaker knows that form.114The term “lawmaker” refers not only to legislators, but also to administrative agencies that have the statutory authority to make tailored laws and regulations, and to the courts.
If the lawmaker cannot classify the interaction and identify the optimal solution form in Step One (i.e., the interaction is too complex), then the lawmaker would enumerate all the possible combinations of laws, not only the combinations that take on the form of the optimal solution (since the lawmaker doesn’t know the form of the optimal solution).

There are instances where the choice of law is discrete. For example, the law requiring a driver to turn on the headlights during the day has two discrete choices (“required” and “not required”). There are also instances where individuals choose from a continuous spectrum of alternatives, such as driving speed and the required rest time in a twenty-four-hour period before driving. The lawmaker would partition the spectrum into discrete bins of alternatives. For example, speed could be rounded to the nearest five-mile-per-hour interval (e.g., 45 mph, 50 mph, and so forth), and sleep time could be rounded to the nearest fifteen-minute interval.

For the third step, the lawmaker calculates the social welfare resulting from implementing each combination of laws and chooses the optimal combination. The social welfare of each combination of laws would account for the benefits and costs to all individuals of complying with the law and the risk of harm from that combination of laws. The risk of harm must consider the interactive effects of behaviors. In many instances, the net social utility is best estimated using simulations performed by engineers to fully capture the interactive effects.115Coglianese and Lehr likewise proposed a simulation-based approach to rule-making known as “agent based models” (ABM) or “multi-agent systems” (MAS). Cary Coglianese & David Lehr, Regulating by Robot: Administrative Decision Making in the Machine-Learning Era, Geo. L. J. 1147, 1173–1174 (2017); Cary Coglianese & David Lehr, Transparency and Algorithmic Governance, 71 Adm. L. Rev. 1, 12 (2019). This approach is conceptually similar to the approach proposed in this Article. In their ABM/MAS system, the rule-maker specifies the objectives, and many potential rules are modeled. In each model, the responses of all affected entities in the system (in terms of compliance or non-compliance) are forecasted using machine learning algorithms, and the net effect of each rule is calculated. The rule-maker would then choose the rule that results in the optimal outcome. Coglianese and Lehr provide a possible application of this approach in the Occupational Safety and Health Administration (OSHA) rule-making regarding permissible exposure limits for chemicals in the workplace.
However, Coglianese and Lehr did not address the possibility, or the importance, of agents within their system having interactive effects on others; that the aggregate welfare is not necessarily the sum of individual welfares estimated in isolation; or that simulations are helpful, and sometimes necessary, in complex environments where analytical solutions are difficult to find. Coglianese and Lehr also limited their discussion to administrative agencies using this approach to make regulations, and made no mention of applying it in other contexts, such as tailoring negligence standards or contract default rules.

Generally, the lawmaker should account for several types of interactive effects in calculating social utility. Spatial effects include, for example, the risk of more frequent overtaking and less predictable driving behavior that arises from increasing the variation in speed among drivers. In the warranty of habitability example in Part III.B, spatial effects also include the increase in safety or health risk to the whole building that results from lowering safety and health standards for one unit in the building. Engineers and economists specializing in areas relevant to the law to be crafted should help build models for lawmakers to anticipate the direct effects associated with each possible set of commands.

Social effects include the effects of social norms on people’s behavior, such as using mobile phones while driving as a result of seeing others use mobile phones while driving and/or knowing that others approve of using mobile phones while driving. Social norms also affect behaviors outside of urban mobility, such as excessive alcohol consumption and smoking cessation.116See, e.g., Kypros Kypri & John D. Langley, Perceived Social Norms and Their Relation to University Student Drinking, 64 J. Stud. Alcohol 829 (2003) (showing that social norms significantly influence university students’ intentions to overconsume alcohol); Nicholas A. Christakis & James H. Fowler, The Collective Dynamics of Smoking in a Large Social Network, 358 N. Engl. J. Med. 2249 (2008) (showing that social networks predict smoking cessation).
Social norms should be built into the utility models with the help of social psychologists and other quantitative social scientists to account for the complexities of interactions from social networks. The general process of crafting algorithmic law should allow both spatial and social effects to be incorporated into the social utility associated with each set of commands. In some cases, either spatial or social effects may be non-existent. For example, some examples in urban mobility described in Part I, like smart traffic lights and merge control, have almost no social effects. Nevertheless, in the general process, the lawmaker should begin by considering all possible spatial and social effects. Identifying the form of the optimal solution in the first step can eliminate unnecessary simulations and computations. In addition, enforcement costs, externalities, and system unpredictability should also be accounted for when calculating the social welfare for each combination of laws.

The calculation of social welfare for each combination can be computationally intensive and time-consuming. The search for the optimal combination could also be onerous if the list of possible combinations is too long, even when the list is reduced to only those that follow the form of the optimal solution. My proposal assumes that computational power is sufficient to perform these tasks. If computational power is limited, a Bayesian optimization strategy could be used to decide which combinations to test and how to search for the optimal combination.117See generally Peter I. Frazier, A Tutorial on Bayesian Optimization, arXiv:1807.02811 (July 8, 2018), https://arxiv.org/abs/1807.02811.
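To make the budget-limited search concrete, the sketch below substitutes a crude distance-based exploration bonus for the Gaussian-process surrogate used in full Bayesian optimization: it evaluates only six of ten candidate uniform speed limits, preferring candidates that either look promising or sit far from anything already evaluated. The welfare curve and all numbers are hypothetical stand-ins for an expensive traffic simulation.

```python
def expensive_welfare(speed):
    # Stand-in for a costly simulation run: a hypothetical welfare
    # curve peaking at 55 mph (illustrative numbers only).
    return -(speed - 55) ** 2 / 100 + 10

def nearest(evaluated, x):
    # The evaluated point closest to candidate x.
    return min(evaluated, key=lambda pt: abs(pt[0] - x))

def acquisition(evaluated, x, kappa=1.0):
    # Crude surrogate: predict x's welfare from its nearest evaluated
    # point, plus an exploration bonus that grows with distance.
    x0, y0 = nearest(evaluated, x)
    return y0 + kappa * abs(x - x0)

def optimize(candidates, budget=6):
    # Evaluate the first candidate, then spend the remaining budget on
    # whichever untested candidate maximizes the acquisition score.
    evaluated = [(candidates[0], expensive_welfare(candidates[0]))]
    for _ in range(budget - 1):
        tested = [x for x, _ in evaluated]
        remaining = [c for c in candidates if c not in tested]
        if not remaining:
            break
        nxt = max(remaining, key=lambda c: acquisition(evaluated, c))
        evaluated.append((nxt, expensive_welfare(nxt)))
    return max(evaluated, key=lambda pt: pt[1])

best_speed, best_welfare = optimize(list(range(30, 80, 5)))
```

With a larger budget the search degenerates into exhaustive evaluation; a real implementation would use a proper probabilistic surrogate model, as in the Frazier tutorial cited above.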

The optimal law must be communicated to each individual as soon and as clearly as possible. In urban mobility, this can be done using road signs where the law is the same for everyone or messaging the driver via the dashboard in instances where the command varies for each individual. In cases where communication and enforcement are impractical and the reasonableness of one’s behavior is determined ex post (and in other instances whenever possible), the lawmaker must be able to explain the algorithm used to craft the law.118Coglianese and Lehr delineate the types of information that would have to be disclosed and aspects of algorithms that would have to be explained and justified to satisfy the due process doctrine and transparency requirements under administrative law. See Coglianese & Lehr, supra note 115, at 40–47.

Let’s consider a section of a highway with two lanes in each direction. Suppose three drivers, A, B, and C, in this section of the road are going in the same direction concurrently. To determine the speed limit for each driver, the lawmaker would first compile a list of all potential combinations.119The list of combinations would look like {(45 mph, 45 mph, 45 mph), (45 mph, 45 mph, 50 mph), (45 mph, 50 mph, 45 mph), (45 mph, 50 mph, 50 mph)  . . . }. Of course, if the lawmaker is able to quickly identify the form of the optimal solution during the first step (that all drivers should have the same speed), the list of possible combinations would be much shorter; the list would include only {(45 mph, 45 mph, 45 mph), (50 mph, 50 mph, 50 mph), (55 mph, 55 mph, 55 mph) . . . }.
Next, based on the characteristics of A, B, and C, such as age and the number of accidents in the past year, accounting for the effect on risk arising from differentiated speeds, the lawmaker would calculate the net utility of each possible combination of speeds.120If individuals A, B, and C have different characteristics that affect their driving risk profile, combinations such as (45 mph, 45 mph, 50 mph), (45 mph, 50 mph, 45 mph), and (50 mph, 45 mph, 45 mph) should have different social welfare, even though in each case one driver has a 50 mph speed limit while the other two have 45 mph speed limits.
For the sake of simplicity, let’s assume there are no social effects in this case, and interactions among the actors produce only spatial effects. Simulation software such as VISSIM would help lawmakers calculate the social utility by modeling the movement of vehicles under various scenarios of speed limit combinations.121VISSIM is software that simulates microscopic traffic behavior. See Martin Fellendorf & Peter Vortisch, Microscopic Traffic Flow Simulator VISSIM, in Fundamentals of Traffic Simulation 63, 63 (Jaume Barceló ed., 2010) (describing the principles and functions of VISSIM). While this simulation can help capture the interactive effects, the lawmaker must ensure that the effects of individual characteristics, such as driving experience, as well as externalities like pollution, are accounted for in computing social welfare.
The net social utility would account for the risk of accident and the cost of care of driving at each speed, including interactive effects such as the speed variation described previously, as well as the cost of enforcement. The net social utility should also account for externalities like noise pollution, local air pollution, and carbon emissions. The lawmaker would then communicate the speed limit for each driver to the driver’s dashboard. As explained previously, the optimal combination would likely be one where most (if not all) drivers are required to drive at the same speed most of the time. This does not render algorithms unnecessary, as they are still helpful in calculating the optimal speed required for all drivers given the weather and traffic conditions (as well as other factors such as the value of drivers’ time and the physical damage to roads associated with various speeds). Moreover, as the following sections will show, there are instances in urban mobility where coordination is needed but the law for each individual is differentiated; algorithms are necessary to compute personalized laws in these instances.
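The three-driver example can be sketched end to end. In the toy model below, each driver gains a time-savings benefit from speed, bears an individual risk cost scaled by a hypothetical personal risk coefficient, and the group bears a spatial penalty proportional to the variance of their speeds (standing in for the overtaking risk a VISSIM-style simulation would estimate). All coefficients are illustrative, not calibrated.

```python
from itertools import product
from statistics import pvariance

SPEEDS = [45, 50, 55]  # discrete bins from the second step

# Hypothetical per-driver risk coefficients (reflecting, say, age and
# accident history); a higher coefficient means riskier at a given speed.
RISK = {"A": 0.010, "B": 0.012, "C": 0.015}

def social_welfare(combo):
    benefit = sum(1.25 * s for s in combo)  # time savings
    risk_cost = sum(r * s ** 2 for r, s in zip(RISK.values(), combo))
    # Spatial interaction: overtaking risk from speed variation.
    spatial_penalty = 2.0 * pvariance(combo)
    return benefit - risk_cost - spatial_penalty

# Enumerate every combination of speed limits and pick the optimum.
best = max(product(SPEEDS, repeat=3), key=social_welfare)
```

Notably, if the spatial penalty term is dropped, the same code selects the differentiated combination (55, 50, 45); with the interaction term included, the optimum flips to the uniform combination (50, 50, 50), mirroring the result described in the text.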

The uniform-law solution in this speed limit example is not an argument for uniformity for its own sake, nor an argument against tailoring the law to each individual. It is an illustration that tailoring at the personal level must be supplemented with coordination, particularly for activities with interactive effects on others, to be useful. The uniform law in the speed limit example happens to be the optimal solution that the lawmaker should reach after adequately considering all possible interactive effects.

While this Article illustrates the importance of coordinating personalized law through applications in urban mobility, the concept of coordination extends far beyond urban mobility. The interactions among drivers in urban mobility constitute a subset of interactions among humans in everyday life in which coordination is necessary to avoid potentially devastating consequences. As a general matter, the law, as Richard McAdams explains, is needed to provide focal points to facilitate coordination in human interactions.122See generally McAdams, supra note 111.
In Part III.B I provide examples of areas of law outside of urban mobility where human behaviors are interactive, and the coordination of algorithmic law requires uniformity.123See infra Part III.B.

D. Differences with Previous Conceptions of Algorithmic Law

Coordination plays a very different role in my conception of algorithmic law from the conceptions of other legal scholars. Other scholars have written about conceptions of algorithmic law that do not require coordination, have not considered the necessity of coordination, or have not properly addressed the role of coordination.

McGinnis and Wasick’s conception of the shift from current law to algorithmic law consists of two central ideas: (1) substituting standards for rules, as the exponential increase in computing power will make it easier for people to predict how a standard will be applied in a particular situation, and (2) making rules dynamic.124See generally John McGinnis & Steven Wasick, Law’s Algorithm, 66 Fla. L. Rev. 991 (2015).
McGinnis and Wasick envision algorithmic law as changing based on circumstances, but they do not envision algorithmic law as varying for each person. Without the personalization component, their conception of algorithmic law does not require coordination in the lawmaking process. In addition, in McGinnis and Wasick’s vision, algorithmic law is the product of legal search that predicts how legal standards will be applied in each particular situation; algorithmic law is what legal search tools predict the law will be. McGinnis and Wasick’s conception of algorithmic law follows a bottom-up approach in which the law is a product of citizens’ predictions of the law using legal search. The conception of algorithmic law in this Article follows a top-down approach in which the legislature or an agency makes the law. Coordination is possible only in a top-down approach, because individual citizens cannot anticipate the behaviors of all others, consider the potential interactive effects, and model the interactions and social outcomes under different combinations of laws.

Casey and Niblett have also written about the possibilities of algorithmic law. They propose that we take advantage of advancements in communication and predictive technology and use algorithms to create the substance of the law. Like McGinnis and Wasick, Casey and Niblett envision using algorithms to make the law dynamic, meaning that the law can be expressed by a set of instructions that adjust the substance of the law to changing circumstances.125See Casey & Niblett, supra note 1, at 1404 (“Machines then design the law as a vast catalog of context-specific rules to optimize that goal.”).
Casey and Niblett also envision that algorithms could make the law personalized, providing an example in which the speed limit may depend on the characteristics of the driver.126Id. (“For example, a microdirective might provide a speed limit of 51.2 miles per hour for a particular driver with twelve years of experience on a rainy Tuesday at 3:27 p.m.”).
However, Casey and Niblett have not explored the necessity of coordination in their proposal of algorithmic law.127Casey and Niblett did provide an example of algorithmic law where coordination is implicit; this example is one where traffic lights may have sensors that detect that a passenger in a vehicle may need medical attention, and change accordingly. See id. at 1417. However, Casey and Niblett did not explicitly address the necessity of coordination.
They have not discussed the possibility that personalized law, such as personalized speed limits, would be more dangerous than the current uniform law, nor have they proposed any solutions to resolve such issues.

Lastly, Ben-Shahar and Porat have written extensively about personalizing the law. They propose that the law should be crafted for each person based on that person’s unique characteristics; algorithms would be used to compute the substance of the law for each person. Although Ben-Shahar and Porat discussed the role of coordination in personalized law, there are three issues with their perspective. First, they have not anticipated the most problematic consequences of differentiating the law such as spatial effects and social norms. Second, they have not proposed a process of crafting personalized law to avoid such consequences. Third, they consider some activities to be individual when in fact these activities are group activities.

Let us return to the speed limit example. Ben-Shahar and Porat see the problem with differentiated speed limits as one of predictability: other road users would be less able to anticipate the movement of each car and behave accordingly.128See Ben-Shahar & Porat, supra note 3, at 175.
However, the primary problems I see with differentiated speed limits are: (1) if drivers obeyed the differentiated speed limits, there would be more overtaking and thus more collisions; and (2) if drivers are influenced by social norms, drivers with lower speed limits would drive at higher speeds, and the overall driving speed and risk of collision would increase. Ben-Shahar and Porat argue that their scheme of differentiating speed limits for each driver would equalize risk, and thus they conclude that differentiated speed limits would be optimal. For Ben-Shahar and Porat, coordination is about sorting people into groups of the same risk level. They have not considered that differentiating speed limits would create a greater likelihood of collisions. Nor have they considered that social norms would cause those subject to lower speed limits to drive at higher speeds. In my conception, coordination is about giving rules to different people so that they can collectively make the most effective use of space, taking advantage of social norms to motivate pro-social behavior, reducing enforcement costs, reducing negative externalities, and reducing the system’s unpredictability.

Ben-Shahar and Porat recognize that the movement of airplanes must be spatially coordinated to avoid collisions.129See id.
However, their consideration of spatial coordination seems to be left out when they transition from providing directions to airplanes to creating personalized tort law and contract law for individuals. As Ben-Shahar and Porat point out, uniformity is not required for spatial coordination,130Id.
but spatial coordination must be a part of the process of figuring out directions. Yet, spatial coordination is not part of Ben-Shahar and Porat’s algorithmic law-making process. Ben-Shahar and Porat also neglect the role of social norms, system unpredictability, and other relevant factors previously discussed in this Article. Ben-Shahar and Porat’s conceptions of personalizing negligence standards and mandatory contract terms are based on personal characteristics only. As explained above, an approach based only on each individual is often not socially optimal and can even leave society worse off than under current law.131See supra Part II.B.
If Ben-Shahar and Porat’s approach to making personalized laws were applied to making directions for airplanes, it would create the possibility of airplanes colliding due to poor planning.

The problem of differentiating speed limits for each driver is one example of a broader issue with Ben-Shahar and Porat’s conception of personalizing the law. The broader issue is that just about every act we conduct affects others. Many seemingly individual acts are, in fact, group activities. My approach assumes that every activity is a group activity, and laws governing each individual must be coordinated. This is a more robust approach that can accommodate purely individual activities—the interactive effects would simply be zero—and, for such activities, compute the same personalized laws as an approach that accounts only for individual characteristics. However, an approach that assumes every activity is an individual activity cannot accommodate the interactive effects of group activities.

Ben-Shahar and Porat’s proposal and mine also differ significantly in the nature of the computational process by which algorithms make the law more efficient. For example, their scheme of personalizing speed limits is based purely on each driver’s characteristics. Their approach would call for using a formula to compute the law, which would not have any components concerning other drivers, and the formula would yield an analytical solution. The output is a speed limit for one driver. In contrast, my scheme of coordinated speed limits is based on all drivers’ characteristics. A simulation that accounts for all interactive effects, not a formula, would be used to calculate the social utility of each possible combination of personalized laws. Another algorithm would then select the optimal combination of laws. The output of my approach would be a set of speed limits for all drivers (and the likely solution is that the speed limits for all drivers would be the same).

III. Applications of Supplementing Personalization with Coordination

In this section, I will examine the potential outcomes of making algorithmic law in urban mobility that is customized for each individual and incorporates coordination in the customization process. In some cases, the likely solution is that coordination requires uniformity, or something close to it, in the law. In other cases, the interactive effects among road users are less prominent, and the result of coordination is likely differentiated law for different individuals. Even in these cases, however, the characteristics of all individuals must be accounted for in the coordination process, and the personalized law must be crafted in a way that ensures that the individuals collectively maintain a socially acceptable level of protection. I will also provide some examples outside of urban mobility where differentiated law would fail to achieve social optimality. Applying the three-step algorithmic law-making process in these cases results in uniform laws.

A. Cases Where Coordination Requires Uniformity

There are many cases in urban mobility where personal behavior affects others, and coordinated laws require uniformity of behavior. In this section, I will discuss two examples: mobile phone use while driving and auto insurance requirements.

1. Mobile Phone Use While Driving

Like exceeding the speed limit, using handheld devices such as mobile phones while driving is considered negligence per se in many states.132In states like Massachusetts, where traffic violations like speeding are not considered negligence per se, traffic violations may be considered as evidence of negligence. Wynn v. Sullivan, 294 Mass. 562, 566. Texting while driving is banned by law in forty-eight states and D.C. Samantha Bloch, Traffic Safety Review: States Focus on Distracted Driving, National Conference of State Legislatures (Aug. 10, 2022), https://www.ncsl.org/transportation/traffic-safety-review-states-focus-on-distracted-driving/maptype/tile.
Texting while driving and other usages of mobile phones while driving, such as calling while driving and browsing social media while driving, could be personalized, as drivers have distinct levels of skill and different risk profiles—due to physical characteristics and experience—to drive while cognitively impaired. Currently, the law in many states regarding mobile phone usage while driving is partitioned based on age,133For example, in states such as California, Colorado, and Connecticut, the total ban on mobile phone use while driving applies only to drivers under the age of 18. State Highway Safety Offices, Distracted Driving, Governors Highway Safety Association (Mar. 2023), https://www.ghsa.org/state-laws/issues/distracted%20driving.
location,134In states such as Arkansas and Florida, the hand-held ban applies only to school and work zones. In states such as Connecticut and Delaware, the hand-held ban applies to all locations. Id.
and purpose.135In Massachusetts, for example, using a mobile phone for navigation while driving is permitted if it is mounted on the windshield or front console. Using a mobile phone while driving in response to an emergency is an affirmative defense. Mass. Gen. Laws ch. 90, § 13B (2020).
In theory, algorithms could be employed to make the law regarding mobile phone use specific to each individual. However, a differentiated law regarding phone use while driving would likely incur greater net social cost than a mostly uniform law regarding phone use.

As explained, social psychology theories state that the likelihood of carrying out a behavior is significantly influenced by one’s perception of the social norms regarding that behavior.136Ajzen, supra note 77, at 182.
Empirical studies have shown that perceptions of social norms indeed influence the use of mobile phones for talking and texting while driving.137See, e.g., Shari P. Walsh, et al., Dialling and Driving: Factors Influencing Intentions to Use a Mobile Phone While Driving, 40 Accid. Anal. Prev. 1893 (2008).; Michelle Nicolls, Verity Truelove & Kayla B Stefanidis, The Impact of Descriptive and Injunctive Norms on Engagement in Mobile Phone Use While Driving in Young Drivers: A Systematic Review, 175 Accid. Anal. Prev. 106774 (2022).
Given the immense danger created by mobile phone use while driving,138Simulation studies have found that drivers using hand-held devices while driving performed as badly as or even worse than alcohol-impaired drivers. See, e.g., David L. Strayer, Frank A. Drews & Dennis J. Crouch, A Comparison of the Cell Phone Driver and the Drunk Driver, 48 Hum. Factors J. Hum. Factors Ergon. Soc. 381 (2006); PC Burns, et al., How Dangerous Is Driving with a Mobile Phone? Benchmarking the Impairment to Alcohol (2002), https://trl.co.uk/publications/trl547.
it is likely that personalizing the law based on individual characteristics would only allow a small number of people to use mobile phones while driving in some circumstances. However, the critical mass needed for social norms to change and to affect the behavior of the population could be quite small. Iacopini and colleagues have used the Naming Game model to find that “one single individual with no special power or wealth can overturn the social conventions held by groups of hundreds of peers.”139Iacopo Iacopini, et al., Group Interactions Modulate Critical Mass Dynamics in Social Convention, 5 Commun. Phys. 1, 8 (2022).
Since law enforcement officers are already reluctant to enforce laws against phone use while driving due to concerns about the prevalence of mobile phone use while driving, the lack of support from the courts, and the need to maintain rapport with the public,140Toni Marie Rudisill, Adam D. Baus & Traci Jarrett, Challenges of Enforcing Cell Phone Use While Driving Laws among Police: A Qualitative Study, 25 Inj. Prev. 494 (2019).
it is even more critical to eliminate the possibility of phone use while driving becoming an even more prevalent social norm.

The coordination process for choosing the optimal set of commands described in Part II would begin by recognizing that, in this model, allowing even a small number of drivers to use phones while driving could induce phone use throughout the whole jurisdiction. The number of people allowed to use phones while driving must therefore remain below the critical mass needed to make phone use while driving widespread; this number should be close to zero. Recognizing the model and finding the critical mass are important steps that would save significant computation time. Next, the lawmaker would enumerate all sets of possible commands (with the total number of drivers allowed to use phones remaining under the critical mass) for drivers in a particular jurisdiction, recognizing the possible effects of social norms, enforcement costs, and other factors. The net social utility associated with each set of commands would then be calculated. Although there would be little spatial interactive effect among drivers from a traffic engineering perspective, social norms play an important role in influencing the behavior of others in this context. The extent of the influence of social norms on phone use while driving and the pace at which the behavior would spread are empirical questions that should be addressed with surveys and models. The difficulty of enforcing differentiated laws must also be accounted for. Given the results from related studies regarding the role of social norms in mobile phone use while driving and the danger arising from mobile phone use while driving, it is likely that the optimal set of commands is in effect a uniform ban on mobile phone use while driving.
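Under assumptions like these, the selection step reduces to a constrained search. The sketch below (all figures hypothetical) enumerates permit sets whose size stays strictly below an empirically estimated critical mass and picks the welfare-maximizing set; when the per-permit expected accident cost exceeds every driver's private benefit, the optimum is the empty set, i.e., a uniform ban.

```python
from itertools import combinations

# Hypothetical private benefit each driver would derive from a permit
# to use a phone while driving, and the expected accident cost per permit.
BENEFIT = {"D1": 3.0, "D2": 1.5, "D3": 0.5}
ACCIDENT_COST = 4.0
CRITICAL_MASS = 2  # permits must stay strictly below this count

def welfare(permits):
    return sum(BENEFIT[d] for d in permits) - ACCIDENT_COST * len(permits)

# Feasible permit sets: every subset smaller than the critical mass.
feasible = [c for n in range(CRITICAL_MASS)
            for c in combinations(BENEFIT, n)]
best_permits = max(feasible, key=welfare)  # the empty set: a uniform ban
```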

2. Minimum Auto Insurance Coverage

Almost every state in the United States requires all drivers to purchase auto insurance. Each state has its own minimum liability coverage requirements. Regulators could achieve some social objectives more effectively by incorporating algorithms into auto insurance pricing. Algorithmic pricing of auto insurance, such as varying insurance premiums based on the distance driven (also known as “pay as you drive”) rather than charging the same premium regardless of usage, could significantly reduce vehicle miles driven (especially by riskier drivers) and driving-related harms such as congestion, pollution, and accidents.141Economists estimate that the harms to society, such as oil consumption and carbon emissions, reduced by usage-based pricing could amount to billions of dollars annually. See Jason E. Bordoff & Pascal Noel, Pay-As-You-Drive Auto Insurance: A Simple Way to Reduce Driving-Related Harms and Increase Equity, Brookings Discussion Paper 2008-09 1, 2 (2008), https://www.brookings.edu/research/pay-as-you-drive-auto-insurance-a-simple-way-to-reduce-driving-related-harms-and-increase-equity. Field experiments have shown that pay-as-you-drive insurance schemes can reduce speeding violations by fourteen percent. See J. W. Bolderdijk, et al., Effects of Pay-As-You-Drive Vehicle Insurance on Young Drivers’ Speed Choice: Results of a Dutch Field Experiment, 43 Accid. Anal. Prev. 1181 (2011).
Advancements in telematics have enabled insurance companies to use mobile phones to collect the usage data needed for pay-as-you-drive insurance, which many states have adopted.142See Panos Desyllas & Mari Sako, Profiting from Business Model Innovation: Evidence from Pay-as-You-Drive Auto Insurance, 42 Res. Policy 101, 111 (2013) (“The pattern of the initial expansion of PAYD to 19 US states has been explained by Hutchinson . . . “).

It is likely that personalizing minimum auto insurance requirements based on drivers’ characteristics and driving records, instead of subjecting all drivers to the same minimum liability requirements, could be more efficient for some individuals. For example, a twenty-five-year-old driver named Diana, who has driven two thousand miles per year in the past three years, has no medical issues, and has had no accidents, would not be required to purchase auto insurance. Diana’s expected liability could be significantly less than the minimum auto insurance coverage currently required by law.143Under current law with uniform minimum insurance coverage, one cannot simply pay a monetary penalty and continue to drive without auto insurance in many states. In Massachusetts, for example, a driver who is found to have no auto insurance will have his license suspended for sixty days on the first offense, and for one year on the second and subsequent offenses. Mass. Gen. Laws ch. 90, § 34J.
On the other hand, a fifty-year-old driver named Frank, who has driven fifteen thousand miles per year in the past three years, has a record of cardiovascular issues, and was involved in a collision in the past year, would be required to purchase coverage that includes $50,000 for bodily injury and $100,000 for death for a person involved in an accident.144Insurance companies currently determine auto insurance rates based on characteristics such as age and gender. The idea being considered here is not determining auto insurance rates, but determining the minimum amount of insurance coverage, which is not personalized under current law.
While drivers have different risk profiles and auto insurance companies vary premiums based on personal characteristics145Rachael Brennan & Andrew Hurst, Average Car Insurance Rates by Age and Gender (2023), Policygenius (Jan. 5, 2023), https://www.policygenius.com/auto-insurance/average-car-insurance-rates-by-age-and-gender/. Some states, however, do not allow insurance companies to vary premiums based on factors such as gender and age. See, e.g., Commonwealth of Massachusetts Division of Insurance, Massachusetts Consumer Bill of Rights for Automobile Insurance, Mass.gov (2023), https://www.mass.gov/service-details/massachusetts-consumer-bill-of-rights-for-automobile-insurance. (“Massachusetts prohibits insurance companies from using factors such as: sex, marital status, race, creed, national origin, religion, age [except to provide the discount for persons who are 65 years or older], occupation, income, education and homeownership. Companies also may not use credit information contained on your consumer report that is obtained from a consumer reporting agency.”); Patricia Moore, New Gender Equality Regulations in California Auto Insurance, One Inc (Dec. 1, 2020), https://www.oneinc.com/resources/blog/new-gender-equality-regulations-in-california-auto-insurance. (“Effective January 1, 2019, insurers can no longer include gender – whether as a stand-alone factor or in combination with any other factor – when calculating rates for private passenger automobile insurance policies.”).
and driving behavior,146See Cambridge Mobile Telematics, The DriveWell® Platform, Cambridge Mobile Telematics (2023), https://www.cmtelematics.com/safe-driving-technology/how-it-works/.
auto insurance requirements should not be tailored based on individual factors alone. Drivers should not be exempt from minimum auto insurance requirements, even if exemption would make sense for them individually.

There are several reasons to require all drivers to purchase a minimum level of auto insurance regardless of their risk profile. First, requiring all drivers to buy insurance would allow the victims to be compensated in each case.147Jennifer Wriggins, Automobile Injuries as Injuries with Remedies: Driving, Insurance, Torts, and Changing the “Choice Architecture” of Auto Insurance Pricing, 44 Loy. L.A. L. Rev. 69, 74 (2010).
For some policymakers and citizens, all victims need to be compensated from a normative standpoint. Second, requiring all drivers to purchase insurance makes available the money necessary to pay the injured party.148Id. at 75.
This pool might not be sufficiently large without requiring the participation of all drivers, and thus victims might not be adequately compensated. Third, personalizing auto insurance requirements has the potential to make insurance premiums excessively high for high-risk drivers due to the lack of participation from low-risk drivers. The analogy to health insurance is instructive: requiring all individuals to purchase some level of health insurance prevents premiums from “death spiraling.”149See Sam Cappellanti, Premium Increases for “Young Invincibles” Under the ACA and the Impending Premium Spiral, American Action Forum (Oct. 2, 2013), https://www.americanactionforum.org/research/premium-increases-for-young-invincibles-under-the-aca-and-the-impending (describing the idea that individual mandates are intended to prevent the escalation of premiums for high-risk individuals due to the non-participation of low-risk individuals).
In the example above, Diana would not buy insurance if she were not required to purchase insurance. Frank’s premium would be higher if Diana did not buy insurance than if Diana purchased insurance. There needs to be at least some initial cross-subsidization to prevent the death spiral problem. Requiring all individuals to purchase health insurance has been shown to yield welfare gains.150See, e.g., Martin B. Hackmann, Jonathan T. Kolstad & Amanda E. Kowalski, Adverse Selection and an Individual Mandate: When Theory Meets Practice, 105 Am. Econ. Rev. 1030 (2015) (finding that the individual mandate in Massachusetts led to a 4.1% welfare gain).
Therefore, while tailoring minimum auto insurance requirements to the characteristics of each individual could be more efficient in many individual cases, tailoring should not proceed driver by driver without considering the characteristics of all drivers in the insurance pool, the impact on the welfare of others, and broader social objectives.
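
The adverse-selection dynamic described above can be made concrete with a toy simulation. The claim costs and the willingness-to-pay rule below are entirely hypothetical illustrations, not calibrated to any real insurance market; the sketch only shows how the exit of low-risk drivers, absent a mandate, drives the pooled premium sharply upward:

```python
# Toy "death spiral" simulation. All numbers are hypothetical.

def premium_with_pool(costs):
    """Actuarially fair premium when everyone in `costs` participates."""
    return sum(costs) / len(costs)

def simulate_spiral(costs, willingness_multiple=1.5):
    """Without a mandate, a driver drops out whenever the premium exceeds
    `willingness_multiple` times that driver's own expected cost.
    Repeat until the pool stabilizes (or collapses entirely)."""
    pool = list(costs)
    while pool:
        premium = premium_with_pool(pool)
        stayers = [c for c in pool if premium <= willingness_multiple * c]
        if stayers == pool:
            return pool, premium
        pool = stayers
    return pool, None

# Expected annual claim costs: many low-risk drivers, a few high-risk ones.
costs = [200] * 8 + [2000] * 2

# With a universal mandate, low-risk drivers cross-subsidize high-risk ones.
print("mandate premium:", premium_with_pool(costs))    # 560.0

# Without a mandate, low-risk drivers exit and the premium spirals.
pool, premium = simulate_spiral(costs)
print("surviving pool:", len(pool), "premium:", premium)
```

In this illustration the mandated premium is 560, but once low-risk drivers may opt out, only the two high-risk drivers remain and the premium more than triples, to 2,000.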

Applying the process for choosing the optimal set of commands to auto insurance minimums would address each of these concerns: compensating the victim in every case, building a sufficiently large pool, and keeping insurance viable and affordable for high-risk individuals. The welfare loss arising from each of these possibilities would be reflected in the social utility function evaluated for each possible combination of commands. The optimal set of requirements would obligate some drivers (particularly low-risk individuals) to purchase greater coverage than they would be required to if their requirements were determined on individual considerations alone. The result is uniformity in the minimum insurance coverage required of all drivers, while the riskiest drivers would be incentivized to purchase greater coverage based on their knowledge of their own risk. Auto insurance requirements could still be tailored to personal characteristics,151For example, premiums could be increased for drivers who are likely to drive longer distances.
but there must still be a uniform jurisdiction-level minimum requirement to ensure that all victims are sufficiently covered and avoid the adverse selection problem.
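
The enumeration process described above can be sketched minimally as follows. The coverage levels, risk figures, and penalty weights are hypothetical, and a real implementation would face a combinatorial explosion requiring approximation, but the sketch shows the core idea: score each joint assignment of minimums with a jurisdiction-wide social utility function (here including a penalty term expressing the norm that every victim be compensated) and select the best set:

```python
# Hypothetical sketch of choosing an optimal SET of insurance minimums
# jointly rather than driver by driver. All figures are illustrative.
from itertools import product

COVERAGE_LEVELS = [0, 25_000, 50_000]   # candidate minimums ($)
RISK = [0.01, 0.05, 0.20]               # each driver's annual crash risk
EXPECTED_HARM = 50_000                  # harm per crash ($)

def social_utility(assignment):
    utility = 0.0
    for risk, coverage in zip(RISK, assignment):
        utility -= 0.001 * coverage                                # cost of buying coverage
        utility -= 0.01 * risk * max(EXPECTED_HARM - coverage, 0)  # expected uncompensated harm
        if coverage < 25_000:
            utility -= 100   # normative penalty: every victim must be compensated
    return utility

# Enumerate every joint assignment and pick the welfare-maximizing set.
best = max(product(COVERAGE_LEVELS, repeat=len(RISK)), key=social_utility)
print(best)   # (25000, 25000, 50000)
```

Under these hypothetical weights, the optimal set imposes a uniform $25,000 floor on every driver while the riskiest driver purchases $50,000 of coverage, mirroring the result described in the text: uniformity in the minimum, with differentiation above it.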

B. Applications Outside of Urban Mobility

Although this Article primarily focuses on urban mobility to illustrate the necessity of coordination in creating the law, there are numerous other areas of law where coordination is essential to account for interactions among individuals, and where coordination requires uniformity in behavior. Everyday life offers many examples where society requires uniformity in law and behavior: adopting daylight saving time, the choice of language, the use of either the metric system or the Imperial system, and the choice of which days of the week to work all require uniformity for society to function properly. The analysis of the auto insurance requirement in the previous section applies to insurance mandates in many other contexts as well. In addition, there are two areas of law where the spatial implications of individuals’ interactive effects are particularly salient, and where coordination to achieve aggregate optimality requires uniformity in the law: landlord-tenant law and premises liability.

1. Landlord-tenant Law

One area of the law that could be personalized is landlord-tenant law.152Porat & Strahilevitz, supra note 4, at 1447–1448.
Porat and Strahilevitz, for example, suggest that when the warranty of habitability is missing from a contract, the personality traits and circumstances of the tenant could be used to determine whether the warranty is implied.153Id.
The conditions constituting habitability may also vary depending on personal circumstances, according to Ben-Shahar and Porat.154Ben-Shahar & Porat, supra note 63, at 279–280.
However, the health and safety conditions of each apartment in a building affect others. For example, suppose the warranty of habitability for an apartment owner did not require the landlord to equip the tenant’s apartment with door locks and window guards.155It is possible that even if the warranty of habitability did not require locks for each apartment, building codes would cover the requirement for all apartments. However, it is possible that building codes could be customized for each apartment, in which case this example would be relevant.
An apartment without door locks and window guards faces a heightened risk of crime, and that heightened risk spills over to every other tenant in the building. If an intruder breaks into one apartment through the window, the intruder could enter the apartment building and endanger other residents. When we use algorithms to determine the substance of the warranty of habitability for each tenant, we must at least account for the effect of each unit’s safety on the safety of other units. The optimal set of customized laws is likely one in which the landlord must equip every tenant’s unit with door locks and window guards.

The warranty of habitability often includes other elements, such as equipping an apartment with fire alarms and preventing pests. Let us suppose the warranty of habitability is differentiated with respect to equipping apartment units with fire alarms. The lack of fire alarms in one apartment would increase the risk of fire not only for that apartment but also for other apartments in the building. It is likely that when we properly analyze the aggregate welfare that results from each combination of customized laws, the optimal combination is one where each apartment in a building is required to be equipped with fire alarms. Similarly, the law regarding pest prevention and remediation would require the landlord to apply treatment to all units in the building.

Another aspect of the landlord-tenant relationship that would likely require uniformity is the guarantee of tenants’ ability to enjoy quiet hours in their apartments. Suppose the landlord and tenants’ contracts are customized so that the quiet hours are differentiated for each tenant. If one tenant were allowed to make noise at midnight, that tenant would disrupt his neighbors. Tenants can only enjoy quiet hours when everyone is quiet.

2. Premises Liability

Another area of law concerned with the safety conditions of one’s living spaces that requires coordination if algorithms are to be used to make the law is premises liability. A property owner could be liable for negligence if the owner fails to provide proper outdoor lighting and visitors trip and fall or are robbed on the owner’s premises. Providing outdoor lighting on one’s premises has externalities: it keeps one’s own premises safe and also contributes to the safety of one’s neighbors, while the failure to provide lighting makes one’s neighbors’ premises less safe. The decision to provide outdoor lighting also affects social norms. If enough people in a neighborhood do not turn on their outdoor lights, others may consider it socially acceptable to keep their outdoor lights off. People could also refuse to turn on their outdoor lights because they do not want to be the “suckers” who bear disproportionate responsibility for keeping the neighborhood safe while others free ride. The requirement for providing sufficient outdoor lighting must account for the effects of externalities and social norms to protect visitors from physical injuries and crime. Applying the three-step process discussed in Part II to the law governing outdoor lighting would likely yield uniformity.
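
The free-rider structure of this problem can be illustrated with a toy payoff model (all numbers are hypothetical): each household’s lighting costs more than the private benefit it returns to its owner, so no one lights individually, yet universal lighting leaves the whole block better off. A uniform requirement closes exactly that gap:

```python
# Toy externality sketch for outdoor lighting. COST and BENEFIT are
# hypothetical payoff numbers, not empirical estimates.

COST = 5       # private cost of running one's outdoor lights
BENEFIT = 3    # safety benefit each lit house confers on EVERY household
N = 4          # households on the block

def payoff(lights_own, others_lit):
    """A household's payoff: it benefits from every lit house on the
    block but pays only for its own lighting."""
    lit = others_lit + (1 if lights_own else 0)
    return BENEFIT * lit - (COST if lights_own else 0)

# Individually, lighting never pays: one more light costs 5 but returns
# only 3 to its owner, so each household's best response is "off".
print(payoff(True, 0), payoff(False, 0))    # -2 vs 0

# Collectively, universal lighting beats universal darkness.
print(N * payoff(True, N - 1), N * payoff(False, 0))   # 28 vs 0
```

The individually rational outcome (all lights off) yields zero total welfare, while the coordinated, uniform outcome yields 28 in this illustration, a standard public-goods structure.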

C. Cases Where the Law is Differentiated but Coordination is Necessary to Maintain a Level of Protection

While accounting for interactive effects in making coordinated algorithmic law often yields uniformity, it does not always do so. Sometimes, coordination and social welfare maximization require differentiated behavior. In this section, I discuss two examples where the law should be differentiated to achieve social welfare maximization. Even in these cases, however, the interactive effects of individual behavior must be accounted for in making algorithmic law.

1. Driving Age

Ben-Shahar and Porat considered personalizing the legal driving age based on personal characteristics.156 Ben-Shahar & Porat, supra note 3, at 109–111.
Personalized driving age would likely be more efficient than a uniform driving age, as individuals have different physical characteristics that make them ready to drive at different ages. However, tailoring the driving age to each individual based only on the potential driver’s characteristics misses an important consideration: the current risk profile of the jurisdiction that the potential driver is entering. Coordinating personalized law to account for the risk profiles of other drivers is unlikely to change the relative ordering produced by personal characteristics alone (e.g., if Driver M is ready to drive at a younger age than Driver N, then M will have a lower driving age than N regardless of the characteristics of the jurisdiction). Nevertheless, the characteristics of all drivers should be considered for the algorithmic law to keep the jurisdiction above a given level of social safety.

In an extreme example, consider two simple scenarios, A and B. In both scenarios, there are twenty drivers in a small town, and the driving age for the twenty-first person is to be determined. In Scenario A, all twenty current drivers are over thirty and have at least ten years of driving experience. In Scenario B, all twenty current drivers are between seventeen and nineteen years of age and have between one and three years of driving experience. Let us only consider the safety of the twenty-first person and put aside considerations about the social safety level. In Scenario A, all the drivers are already quite experienced, and there would be room for a younger and riskier driver. For example, the appropriate driving age for the twenty-first driver could be sixteen. In Scenario B, all the drivers are less experienced, and there would likely be less room for a younger and riskier driver. In Scenario B, the appropriate driving age for the twenty-first driver, whose physical and mental characteristics are the same as those of the twenty-first driver in Scenario A, would need to be higher than sixteen to attain the same level of safety achieved in Scenario A.
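
Scenarios A and B can be sketched numerically. In the toy model below, each driver’s riskiness is scored in arbitrary “risk points” (a hypothetical function of age and experience), and the entry age for the twenty-first driver is the lowest age that keeps the town’s total risk under a hypothetical cap:

```python
# Toy illustration of Scenarios A and B. The risk-point function and the
# jurisdiction-wide cap are hypothetical, chosen only to show the mechanism.

def risk_points(age, years_experience):
    """Hypothetical riskiness score: falls with age and experience,
    floored at 1."""
    return max(1, 24 - age - 2 * years_experience)

def min_entry_age(current_drivers, risk_cap):
    """Lowest age (with zero experience) at which a new driver can be
    admitted without pushing total risk above the cap."""
    base = sum(risk_points(a, e) for a, e in current_drivers)
    for age in range(16, 100):
        if base + risk_points(age, 0) <= risk_cap:
            return age
    return None

scenario_a = [(35, 10)] * 20   # twenty experienced drivers over thirty
scenario_b = [(18, 2)] * 20    # twenty novice teenage drivers

print(min_entry_age(scenario_a, risk_cap=44))   # 16
print(min_entry_age(scenario_b, risk_cap=44))   # 20
```

The same prospective driver, with identical personal characteristics, may enter at sixteen in the low-risk town but must wait until twenty in the high-risk town, because the jurisdiction’s aggregate risk, not the individual’s traits alone, determines the command.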

Similarly, although no state sets explicit limits on the maximum driving age, determining the appropriate personalized maximum driving age should account for similar considerations and follow a similar process. A person would lose driving eligibility earlier in a jurisdiction where drivers are collectively riskier than in a jurisdiction where drivers are collectively less risky.157The discussion of determining the appropriate age for driving shows that ultimately using age to determine whether someone is legally eligible to drive is not a precise way to ensure that only those who have the requisite physical capacity to drive may drive. Age is used in the expression of the law as an approximation for characteristics not easily measured and regulated. Algorithms have the potential to craft laws that determine the eligibility to drive based on more precise characteristics. As MIT professor Joseph Coughlin says, “Birthdays don’t kill. Health conditions do.” Elizabeth Nolan Brown, Too Old to Drive? Depends on Which State You Live In, AARP Blogs (Sep. 17, 2012), https://blog.aarp.org/bulletin-today/too-old-to-drive-depends-on-which-state-you-live-in.

From a traffic engineering perspective, personalizing driving age does not produce a spatial interactive effect like the one caused by personalizing speed limits. For example, the probability of collision does not increase with the variance of drivers’ ages (all else being equal). There is also little need for concern about social norms and law enforcement, as law enforcement can easily control who drives by issuing or refusing to issue driver’s licenses. Nevertheless, the example above shows that driving age cannot be determined based on personal characteristics alone. The risk profile of all other drivers must be considered when determining the appropriate driving age for a prospective driver.

As a general matter, the determination of the appropriate minimum age to perform a particular activity could be done using only personal characteristics as described by Ben-Shahar and Porat in instances where the activity has minimal interactive effects and considerations of individual rights are very important (compared to considerations of utility). One such example would be the voting age. Ultimately, nearly every activity has some social effect, and activities differ in the degree of their interactive effects. Whether the minimum age would be determined by personal characteristics to maximize efficiency in every individual instance or by the characteristics of all members of a jurisdiction to maximize efficiency for the jurisdiction would depend on factors such as data availability, computational intensity, and normative considerations such as the appropriate boundaries of personal liberty and the values that society wants to promote.

2. Drowsy Driving

Drowsy driving is a highly prevalent problem158About 41% of drivers have reported falling asleep while driving at some point. Brian C. Tefft, The Prevalence and Impact of Drowsy Driving (2010), https://aaafoundation.org/prevalence-impact-drowsy-driving.
and causes a significant number of accidents.159According to the National Highway Traffic Safety Administration, “2% to 20% of annual traffic deaths are attributable to driver drowsiness,” and “more than 6,000 people may have died in drowsy-driving-related motor vehicle crashes across the United States [in 2017].” Vindhya Venkatraman, et al., Countermeasures That Work: A Highway Safety Countermeasure Guide for State Highway Safety Offices, 10th Edition, 2020 10–3 (2021), https://www.nhtsa.gov/sites/nhtsa.gov/files/2022-06/15100_Countermeasures10th_080621_v5_tag_0.pdf.
Only Arkansas and New Jersey currently have laws against drowsy driving,160In Arkansas and New Jersey, the law prohibits driving without sleep for more than 24 consecutive hours. State Highway Safety Offices, Drowsy Driving, Governors Highway Safety Association (Mar. 2023), https://www.ghsa.org/state-laws/issues/drowsy%20driving.
although in many states, driving while drowsy constitutes negligence.161Keller v. De Long, 231 A.2d 633, 108 N.H. 212 (N.H. 1967).
Laws regarding drowsy driving can be tailored to the profession of the driver. For example, the Federal Motor Carrier Safety Administration has issued several rules regarding driving time and rest time for commercial truck drivers.16249 CFR 395.3(c)
Taxi drivers are even more prone to drowsy driving than truck drivers,163Fanxing Meng, et al., Driving Fatigue in Professional Drivers: A Survey of Truck and Taxi Drivers, 16 Traffic Inj. Prev. 474 (2015).
and thus states could place rest requirements on taxi drivers as well.

Drowsy driving affects drivers differently depending on physical characteristics, driving experience, and other factors. Therefore, lawmakers could consider tailoring the law against drowsy driving according to personal traits. There are several ways in which individual characteristics can affect the law regarding drowsy driving. First, characteristics such as age and medical conditions may directly affect one’s ability to drive safely while drowsy.164Older drivers are less affected by sleepiness than younger drivers. Simon S. Smith, et al., Hazard Perception in Novice and Experienced Drivers: The Effects of Sleepiness, 41 Accid. Anal. Prev. 729, 732 (2009). Teenagers, for example, are prone to lacking sleep and drowsy driving. Judith Owens, Insufficient Sleep in Adolescents and Young Adults: An Update on Causes and Consequences, 134 Pediatrics e921 (2014).
The law regarding the amount of rest needed may vary based on these characteristics. Second, characteristics such as age, profession,165Nurses are prone to lacking sleep and are at risk of drowsy driving. Linda D. Scott, et al., The Relationship between Nurse Work Schedules, Sleep Duration, and Drowsy Driving, 30 Sleep 1801 (2007).
and personality traits may be associated with behaviors and lifestyles that increase the likelihood of driving while drowsy. Lawmakers could indirectly reduce drowsy driving by placing requirements on age groups and professions to obtain more rest before driving. Third, drivers with characteristics that render them more vulnerable to drowsy driving, or those associated with behaviors and lifestyles that increase their vulnerability to drowsy driving, may be subject to particular forms of law enforcement. Algorithms can help determine the precise law by computing the amount of rest each driver needs given that driver’s characteristics.

In addition to tailoring based on individual physical characteristics, laws regulating drowsy driving can, with the help of algorithms, be further tailored to the times of day and road conditions that render drivers vulnerable to drowsy driving. As a prerequisite for obtaining driver’s licenses, drivers could be required to participate in driving simulations in which their brains’ electrical activity is collected through electroencephalography (EEG) tests.166EEG results collected from drivers using driving simulators have the potential to predict when drivers are too fatigued to drive. See Timothy Brown, Robin Johnson & Gary Milavetz, Identifying Periods of Drowsy Driving Using EEG, 57 Ann. Adv. Automot. Med. 99 (2013). A similar approach to using EEG data to predict potential fatigue scenarios is using respiratory rate variability (RRV) data. See Federico Guede-Fernández, et al., Driver Drowsiness Detection Based on Respiratory Signal Analysis, 7 IEEE Access 81826 (2019).
Algorithms can be used to predict the situations in which drivers are most vulnerable to drowsy driving.167See e.g., Nikita Gurudath & H. Bryan Riley, Drowsy Driving Detection by EEG Analysis Using Wavelet Transform and K-Means Clustering, 34 Procedia Comput. Sci. 400 (2014).
Driver’s licenses would forbid drivers from driving during certain times of the day or under certain road conditions if their EEG results show they are too drowsy to drive in these situations.168Although EEG is the most accurate method for measuring drowsiness, the accuracy of drowsiness detection could be improved by supplementing EEG with other measurements such as Near-Infrared Spectroscopy (NIRS), Electrocardiogram (ECG), Electrooculogram (EOG), and Photoplethysmogram (PPG). See Shubha Majumder, et al., On-Board Drowsiness Detection Using EEG: Current Status and Future Prospects, in 2019 IEEE International Conference on Electro Information Technology (EIT) 483, 487 (2019).

Technological advancements will likely make drowsy driving detectable in real-time. Wearable devices like smartwatches could measure EEG signals, which could then be analyzed to detect drowsy driving instantly.169See Gang Li, Boon-Leng Lee & Wan-Young Chung, Smartwatch-Based Wearable EEG System for Driver Drowsiness Detection, 15 IEEE Sens. J. 7169 (2015).
Another approach is to require all cars to install cameras that monitor drivers. The camera would record the driver’s eye, head, and body movements; algorithms would detect signs of drowsy driving; and the data would be transmitted to law enforcement, which could penalize drivers for drowsy driving based on the recorded data. This data could also be considered as evidence of negligence.170See Anil Kumar Biswal, et al., IoT-Based Smart Alert System for Drowsy Driver Detection, 2021 Wirel. Commun. Mob. Comput. e6627217 (2021). This approach merely sends alerts to drivers. For a stricter approach, the data would be used to penalize the driver for drowsy driving in addition to alerting the driver.
A less intrusive approach would require all drivers to install drowsy-detection systems that send alerts and/or force the car to stop safely when drowsy driving is detected. The threshold for drowsy driving would be tailored for each driver.
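
The kind of per-driver threshold just described can be sketched as follows. The band powers, the theta-plus-alpha-over-beta ratio heuristic, and the thresholds are hypothetical illustrations in the spirit of the EEG studies cited above, not a validated detector:

```python
# Toy per-driver drowsiness check. All readings and thresholds are
# hypothetical; a real system would use validated signal processing.

def drowsiness_score(theta, alpha, beta):
    """Rising slow-wave (theta/alpha) power relative to beta activity is
    a common rough marker of fatigue; here we take a simple ratio."""
    return (theta + alpha) / beta

def is_too_drowsy(band_powers, threshold):
    """`threshold` would be personalized from each driver's baseline
    simulator sessions, higher for drivers who tolerate drowsiness
    better, lower for more vulnerable drivers."""
    return drowsiness_score(*band_powers) > threshold

alert_reading = (4.0, 6.0, 10.0)    # hypothetical band powers
drowsy_reading = (9.0, 8.0, 4.0)

print(is_too_drowsy(alert_reading, threshold=2.0))    # False
print(is_too_drowsy(drowsy_reading, threshold=2.0))   # True
```

The substantive point is in the `threshold` parameter: the same physiological reading could be permissible for one driver and disqualifying for another, which is where personalization enters.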

Tailoring the law to personal characteristics would protect those less capable of driving with less sleep from unsafe driving and protect other drivers and users from dangerous drivers. On the other hand, allowing those who are more capable of driving with less sleep to drive would enable them to undertake productive activities such as providing services and transporting goods.

Individualizing the law regarding drowsy driving would not produce spatial interactive effects like those produced by individualizing speed limits. The influence of social norms is limited, since drivers cannot easily observe how much rest other drivers have had before driving. Therefore, lawmakers need not be concerned with achieving uniformity in how much rest drivers must receive before driving. However, as in the previous example, the characteristics of other drivers should be considered when determining the amount of rest needed for a particular driver. The amount of rest the law requires of a specific driver before driving would vary depending on the overall safety level of the jurisdiction, which is determined by the characteristics of all other eligible drivers in the jurisdiction. Therefore, while it appears that the personalized command to each driver has no interactive effect on other drivers, determining the law for each driver is still a coordination process that accounts for the characteristics of all drivers. Each individual might have different rest requirements due to individual factors, but the coordination process that considers all drivers’ characteristics guarantees a certain level of social safety. The lower the collective risk profile of the jurisdiction’s drivers, the lower the rest requirements for all drivers would need to be, even though there could be variations among drivers due to individual differences.

IV. Obstacles and Limitations in Implementing Algorithmic Law

Using algorithms to craft the law faces several obstacles, even when coordination is appropriately incorporated into the process. Where there are unresolved issues in the law itself, it would not be possible or appropriate to use algorithms to make law. For example, algorithms cannot resolve normative questions on which humans have thus far failed to achieve consensus. Even when the law has a complete list of alternatives and does not raise ethical questions, legal and political authorities must consider public skepticism about using algorithms in lawmaking.

A. Inability to Make Difficult Normative Decisions: Autonomous Vehicles

Using algorithms to make laws to optimize social utility assumes that all alternatives and outcomes can be quantified and compared. In decisions involving unresolved moral issues, it is difficult—if not impossible—to quantify and compare the values of the alternatives and outcomes. Therefore, it is impossible to maximize social utility. The decisions that algorithms controlling AV must make illustrate the difficulty of making decisions involving complex moral dilemmas. Since government agencies such as the United States Postal Service are planning to use AV,171The U.S. Postal Service has begun research into implementing autonomous vehicle technology in delivery mail. See Office of Inspector General, United States Postal Service, Autonomous Vehicles for the Postal Service (2017), https://www.uspsoig.gov/reports/white-papers/autonomous-vehicles-postal-service. In 2019, USPS began a pilot for using autonomous trucks to move mail between Dallas and Phoenix. Vanessa Romo & Camila Domonoske, U.S. Postal Service Tests Self-Driving Trucks, NPR (May 21, 2019, 6:54 PM), https://www.npr.org/2019/05/21/725524334/u-s-postal-service-is-testing-self-driving-trucks.
the government must confront moral dilemmas when designing algorithms that drive its vehicles. In addition, legislatures, regulatory agencies, or the courts must draw the parameters for the appropriate design of algorithms controlling AV used by citizens and businesses.

Regardless of whether the manufacturer or the human driver is held liable when AVs crash, the law needs to define what constitutes reasonable behavior. However, the design of AV algorithms and the determination of which human driver behavior counts as reasonable can involve moral dilemmas in which it is difficult to assess and compare the values of the alternatives quantitatively. The Trolley Problem is the classic illustration of such dilemmas. Another example is the motorcycle problem, in which an AV must collide with one of two motorcyclists, one wearing a helmet and the other not.172Jeffrey Gurney describes the motorcycle problem and the difficulties of the decision from an ethics perspective in greater detail. See Jeffrey Gurney, Crashing into the Unknown: An Examination of Crash-Optimization Algorithms Through the Two Lanes of Ethics and Law, 79 Alb. L. Rev. 183, 197–198 (2016).
Striking the motorcyclist wearing a helmet would likely result in less damage to human life on average, since helmet-wearing significantly reduces the probability of injury and death, but doing so would also punish the more responsible motorcyclist, which could be seen as unfair.173Id.

The moral dilemmas that make it difficult to decide which alternative is preferable are problematic not only for algorithmic lawmaking but also for lawmaking without the use of algorithms. Even legislatures and judges making fixed laws that apply to all scenarios regardless of the circumstances would struggle to identify the optimal alternative without violating some legal doctrines and notions of ethics.174For example, Stephen Wu argues that in the trolley problem the AV manufacturer would face tort liability under current law regardless of which choice the AV makes. Stephen S. Wu, Autonomous Vehicles, Trolley Problems, and the Law, 22 Ethics Inf. Technol. 1 (2020).
By requiring legislatures and judges to be clear about the objective and underlying values embedded in the law when law is to be made dynamic and personalized, algorithmic law makes these issues even more salient.

Given the potential for AV to significantly improve safety, 175AV can significantly reduce accidents caused by human-error, which NHTSA estimates is a major factor in approximately 94% of all fatal accidents. NHTSA, Automated Driving Systems 2.0: A Vision for Safety i (2017), https://www.nhtsa.gov/sites/nhtsa.gov/files/documents/13069a-ads2.0_090617_v9a_tag.pdf.
increase mobility access,176AV would allow senior citizens who can no longer drive to use cars. See Koen Faber & Dea van Lierop, How Will Older Adults Use Automated Vehicles? Assessing the Role of AVs in Overcoming Perceived Mobility Barriers, 133 Transp. Res. Part Policy Pract. 353 (2020).
and generate significant economic gains,177The economic value of lives saved by AV alone is worth at least 0 billion annually. Center for Sustainable Systems, Autonomous Vehicles Factsheet (2022), https://css.umich.edu/publications/factsheets/mobility/autonomous-vehicles-factsheet.
it would be unreasonable to ban AV altogether. Lawmakers must determine which choice the AV is required to make despite the legal and ethical difficulties. However, algorithms cannot make these decisions without human input, which must be based partly on normative considerations and require special legislation or regulation.178See Wu, supra note 174, at 11–12. See also Gurney, supra note 172, at 262–267.

B. Legitimacy

Using algorithms to regulate urban mobility would most likely meet the federal and state legal and constitutional requirements. Cary Coglianese and David Lehr have considered whether administrative and constitutional law doctrines permit government agencies to use machine learning algorithms to make regulations.179Coglianese & Lehr, supra note 115.
They concluded that regulation by machine learning algorithms would, in principle, satisfy the non-delegation doctrine, the due process doctrine, the equal protection doctrine, and the statutory requirements for transparency in federal law, such as the Freedom of Information Act and the Administrative Procedure Act.180Id.
State and local governments would likely be exempt from disclosing the law-making algorithms, as such algorithms would be considered non-public records or trade secrets not subject to disclosure.181See Matthew Daus, Legal Issues and Emerging Technologies 26 (2022), https://nap.nationalacademies.org/catalog/26786/legal-issues-and-emerging-technologies.
In addition, state and local governments, which would likely be the entities tasked with making algorithmic laws, particularly concerning urban mobility, have immunity from tort liability arising from accidents related to traffic control and transportation planning.182Id. at 14.

However, passing the statutory and constitutional thresholds does not mean that algorithmic lawmaking should be implemented, particularly when the output is differentiated laws. There are numerous policy concerns associated with using algorithms to make the law. Even when the requirements for transparency under legal and technical standards are satisfied, using algorithms in lawmaking, particularly in tailoring the law for each individual, may be perceived as unfair and thus illegitimate. Researchers have already documented skepticism about using algorithms in the adjudicatory context. Using a survey of more than three thousand respondents from a nationally representative sample, Wang compared Americans’ attitudes towards 1) algorithms, 2) psychologist expertise, and 3) mandatory guidelines in assessing risk in bail hearings.183A. J. Wang, Procedural Justice and Risk-Assessment Algorithms (Jun. 21, 2018), https://papers.ssrn.com/abstract=3170136.
Wang found that respondents prefer both mandatory guidelines and psychologist expertise to algorithms,184Id. at 7.
that their preferences are not just based on differences in accuracy,185Id. It is worth noting that inaccuracy can contribute to people’s distrust of algorithms and prompt them to opt for human judgment instead. See Berkeley J. Dietvorst, Joseph P. Simmons & Cade Massey, Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err, 144 J. Exp. Psychol. Gen. 114, 123 (2015).
and that “[r]espondents strongly disapprove of algorithms as a matter of fairness, a preference of policy, and a source of legitimacy.”186Wang, supra note 183, at 7.
Using algorithms for lawmaking would likely be even more controversial than using them for adjudication and would raise greater concerns about the legitimacy of the process. Without legitimacy, people are less likely to comply with the law voluntarily187See, e.g., Tom Tyler, Why People Obey the Law (2006).
and more likely to disrupt public order.188See, e.g., Tom R. Tyler, Procedural Justice, Legitimacy, and the Effective Rule of Law, 30 Crime Justice 283 (2003).
Of course, using algorithms instead of human judgment in bail hearings could eliminate adjudicators’ biases and yield more objective decisions. However, if citizens distrust algorithms despite their capacity to make more objective judgments, the lawmaker should consider refraining from using them.

Given the complexity of factors involved—particularly when coordination is integrated into algorithmic law—and the need to make accurate predictions quickly to make the law dynamic, machine learning algorithms will be necessary to implement algorithmic law in urban mobility and other areas. Machine learning has already been used to forecast and manage traffic,189Snow, supra note 12.
and will be deployed to implement the system optimization objectives of algorithmic law described in previous sections. However, the black box nature of machine learning models makes it very difficult for even experts to explain the models.190See Cynthia Rudin, Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead, 1 Nat. Mach. Intell. 206 (2019).
Government officials may not have sufficient expertise or attention to fully understand and manage the objectives and operations of the algorithms. More importantly, the complexity of the models and the use of machine learning to generate the law make it impossible for citizens to reverse-engineer the process of making the law. In other words, citizens cannot take the output (the individualized law) and derive the inputs used to make the law (personal characteristics and other factors) or the weights assigned to those inputs. Even if government officials are not legally obligated to explain the process of making the law, they should attempt to do so to protect the law’s legitimacy and secure compliance. If citizens cannot understand the process for making the law, they are less likely to comply.191See generally Menghan Li & Jinhua Zhao, Gaining Acceptance by Informing the People? Public Knowledge, Attitudes, and Acceptance of Transportation Policies, 39 J. Plan. Educ. Res. 166 (2019) (showing that people are more likely to accept a policy when they understand the reasons for enacting it and its benefits and drawbacks).

Another challenge for algorithmic law is that tailoring the law to individuals would create more opportunities for public backlash than maintaining uniformity, even when the individualized laws are coordinated to achieve social optimality. Algorithmic law creates the possibility that, in some cases, people with more resources would be subject to less stringent or burdensome laws. For example, consider the idea of personalizing property taxes. Under this scheme, a wealthy older couple could be subject to a lower tax rate than a poor young family due to various factors, many of which the public is unaware of. Of course, a poor older couple could also be subject to a lower tax rate than a wealthy young family. However, the former scenario would be salient to the public and likely to generate backlash,192See Maryann Cousens, Americans Support Raising Taxes on the Wealthy and Big Corporations, Navigator (Feb. 27, 2024), https://navigatorresearch.org/americans-support-raising-taxes-on-the-wealthy-and-big-corporations (showing that Americans support increasing taxes on the wealthy).
while the latter would hardly be noticed. The lack of transparency and the inability to understand the process of making the law would likely intensify the backlash. A scenario in which people with more resources are subject to less stringent or burdensome laws, and the public is not told how the law is made, would give the appearance of corruption. The likelihood of this scenario would increase if people and entities with more resources obtained a better understanding of how the law-making algorithms work and exploited the algorithms to their advantage. Furthermore, implementing algorithmic law could provide cover for actual corruption: those with more economic resources could capture the lawmaker who creates the law-making algorithms.

The concerns about the legitimacy of algorithmic law differ from the effects of social norms discussed in Part II.B in several ways. First, concerns about legitimacy stem from citizens’ inability to understand the process of making the law, not from their observations of others’ behavior. Second, social norms may cause people to refuse to comply with specific laws, whereas a lack of legitimacy may cause people to refuse to comply with the law generally and to refuse to cooperate with the authorities in other ways. Third, whereas lawmakers enact uniform laws to obtain greater compliance through the influence of social norms, lawmakers cannot always obtain more compliance through legitimacy by enacting uniform laws. Even some uniform laws could challenge the legitimacy of the authorities if their functions and consequences are not adequately understood.

One response to the issues that challenge the legitimacy of algorithmic law is that some of these issues also challenge the legitimacy of the law in its current state. For example, consider the problem that people with more resources would be more capable of understanding how algorithms create the law and of “gaming” the algorithms accordingly. This issue exists under our current laws as well. Many of the wealthiest people in America take advantage of loopholes in the tax code and pay lower tax rates than middle-class Americans.193Warren Buffett, Jeff Bezos, Michael Bloomberg, and Elon Musk had effective tax rates of less than 5% between 2014 and 2018. See Jesse Eisinger, Jeff Ernsthausen & Paul Kiel, The Secret IRS Files: Trove of Never-Before-Seen Records Reveal How the Wealthiest Avoid Income Tax, ProPublica (Jun. 8, 2021), https://www.propublica.org/article/the-secret-irs-files-trove-of-never-before-seen-records-reveal-how-the-wealthiest-avoid-income-tax.
Making the tax code algorithmic would not necessarily make the tax system more regressive. Regulatory capture likewise already exists under current law: regulators who worked for government agencies on cryptocurrency regulation have gone on to work for cryptocurrency firms, helping them get around government regulation.194See Julie DiMauro, Regulators That Regulated and Fined Crypto . . . Now Advise Crypto, Global Relay Intelligence & Practice (Feb. 28, 2024), https://www.grip.globalrelay.com/regulators-that-regulated-fined-crypto-now-advise-crypto.
However, citizens might not recall the lack of fairness under uniform law when encountering algorithmic laws that they consider unfair. Lawmakers should always consider citizens’ perceptions of legitimacy when they decide whether to implement algorithmic law in any situation.

Conclusion

Recent improvements in data availability, computing power, and communications technology have made it possible to use algorithms to adjust the law to changing circumstances and heterogeneous personal traits. If the goal of algorithmic law is to improve efficiency, then lawmakers must consider welfare at the social level rather than at the individual level and account for the interactive effects among individuals. Numerous examples from urban mobility—most notably speed limits—show that tailoring the law to individual characteristics without considering the impact of individual behavior on others can lead to unanticipated and undesirable consequences. Lawmakers must ensure that making the law specific to each individual accounts for the interactive effects of individualized law—particularly the spatial and social effects—so that algorithmic law can lead to socially optimal outcomes from a utilitarian perspective. This accounting is also necessary to protect individual utility in many instances, and it requires lawmakers to issue combinations of socially optimized and coordinated laws simultaneously. Examples from urban mobility illustrate both the shortcomings of customization based solely on individual characteristics and the necessity of coordination that accounts for spatial effects and social norms.

Making the law algorithmic while accounting for interactive effects and maximizing social welfare does not necessarily yield a uniform law. In some cases, uniformity is necessary. In other cases, each individual’s behavior has minimal direct impact on others, and coordination is unlikely to produce results that differ significantly from personalization based entirely on personal characteristics. Even in those cases, however, coordination is necessary to optimize social welfare and to prevent system-level metrics, such as safety, from falling below acceptable levels.

The use of algorithmic law is not always possible or appropriate. The utilitarian approach of maximizing social welfare cannot be applied when the alternatives cannot be quantified or involve unsettled moral questions. In addition, lawmakers must consider that using algorithms in making law is likely to encounter public resistance, even if such use can meet the legal thresholds. The suitability of deploying algorithms in making law is situation dependent. In each instance, political and legal authorities should weigh the benefits, including the potential for greater efficiency, equity,195Ben-Shahar and Porat explain that tailored law based on skill would bring about greater distributive justice than uniform law if skills were exogenously distributed rather than acquired through investment and practice. Ben-Shahar & Porat, supra note 79, at 672–673.
and autonomy196A differentiated law that is restrictive for some but not for others would give more autonomy to those who are not subject to the more restrictive rule than a uniform law that applies to all. Robinson, supra note 1, at 318.
against the costs, including the potential for error, perceptions of bias and abuse, and perceptions of lost autonomy. When algorithms are used to make law, lawmakers must recognize that each citizen’s behavior affects others and that social outcomes are collective consequences of simultaneous acts rather than aggregates of isolated individual acts. Failing to account for these interactive effects would often lead to socially and individually inefficient laws and behaviors.


