California has become the third state to affirm the legality of driverless vehicles, setting the stage for computers to take the wheel along the state's roads and highways – at least eventually.
Gov. Jerry Brown on Tuesday signed SB1298, which makes so-called autonomous vehicles legal in California while requiring the Department of Motor Vehicles to establish and enforce safety regulations for their manufacture. The governor put pen to paper at Google's headquarters in Mountain View, where the technology giant has been developing and testing driverless versions of the Toyota Prius for years.
"Today, we're looking at science fiction becoming tomorrow's reality," Brown said. "This self-driving car is another step forward in this long march of California pioneering the future and leading not just the country, but the whole world."
The new law immediately allows testing of the vehicles on public roadways, so long as properly licensed drivers are seated at the wheel and able to take over. It also lays out a road map for manufacturers to seek permits from the DMV to build and sell driverless cars to consumers. It requires the department to adopt regulations covering driverless vehicles "as soon as practicable," but by January 2015 at the latest.
In other words, don't expect the highways to be overrun with robot drivers just yet. That's probably for the best: most companies and researchers say there is still much work to be done before driverless cars are proven safe and reliable in traffic.
But state Sen. Alex Padilla, D-Pacoima (Los Angeles County), who introduced the bill, and Google, which lobbied for it, say autonomous vehicles could vastly improve public safety in the near future. Google co-founder Sergey Brin says driverless cars will provide greater mobility to people with disabilities, give commuters back the productive hours they now waste sitting in traffic and reduce congestion on roads – and by extension, pollution.
"It really has the power to change people's lives," he said.
The case for improved safety certainly makes intuitive sense, assuming the technology is adequately developed. A 2006 U.S. Department of Transportation study found driver error occurred in almost 80 percent of motor vehicle accidents. Computers, on the other hand, never get tired or distracted. Presumably they also won't speed, run red lights, forget to signal or tailgate.
But it's worth noting that there's no wide-scale testing of the premise to date. And as every computer user knows well, machines are fallible and occasionally unpredictable. The artificial intelligence software operating these vehicles is making predictions about appropriate responses based on programmed rules and huge volumes of data, including maps and previous miles logged.
But there are always unknown unknowns, unique conditions the software might not have encountered before and might not react to in a way we would hope.
Ryan Calo, an assistant professor of law focused on robotics at the University of Washington, noted in an earlier interview that a vehicle might know to avoid baby strollers and shopping carts, but might make the wrong decision if suddenly presented with a choice between the two.
Calo thinks autonomous vehicles can improve safety, but notes that public perception of the technology could turn on negative events, even if the machines prove statistically safer than humans. In other words, we'll be tough and unfair critics. That makes it all the more critical that the technology works well before it's widely deployed.
This leaves the DMV to tackle all sorts of weighty questions concerning safety and liability, including: How safe is safe enough? How should these vehicles be evaluated against that goal? And how do you create regulations for technology that's still under development?
Bryant Walker Smith, a legal scholar who studies autonomous vehicles, has pointed to a statistical baseline for safety that the DMV might consider as it begins to develop standards. After crunching data on crashes by human drivers, Walker Smith noted in a blog post earlier this year: "Google's cars would need to drive themselves (by themselves) more than 725,000 representative miles without incident for us to say with 99 percent confidence that they crash less frequently than conventional cars. If we look only at fatal crashes, this minimum skyrockets to 300 million miles. To my knowledge, Google has yet to reach these milestones."
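Figures like Walker Smith's follow from a standard zero-failure confidence calculation: if human drivers crash at some rate per mile, how many crash-free miles must an autonomous car log before we can conclude, at a chosen confidence level, that its crash rate is lower? A minimal sketch in Python, using illustrative baseline rates (assumptions for this example, not necessarily the exact inputs Walker Smith used):

```python
import math

def miles_for_confidence(crash_rate_per_mile, confidence=0.99):
    """Miles that must be driven with zero crashes before we can say,
    at the given confidence, that the true crash rate is below the
    human baseline. Uses P(0 crashes in m miles) = (1 - p)^m and
    solves for the m where that probability drops below 1 - confidence."""
    return math.log(1 - confidence) / math.log(1 - crash_rate_per_mile)

# Illustrative baseline rates (hypothetical round numbers):
# roughly one police-reported crash per 160,000 miles driven ...
all_crash_miles = miles_for_confidence(1 / 160_000)
# ... and roughly 1.5 traffic deaths per 100 million miles driven.
fatal_crash_miles = miles_for_confidence(1.5 / 100_000_000)

print(f"{all_crash_miles:,.0f} crash-free miles")    # on the order of 700,000+
print(f"{fatal_crash_miles:,.0f} fatality-free miles")  # on the order of 300 million
```

With baseline rates in that neighborhood, the required mileage lands close to the 725,000-mile and 300-million-mile thresholds Walker Smith describes, which is why the fatal-crash standard is so much harder to meet: fatal crashes are rare, so proving superiority takes vastly more exposure.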
On Tuesday, Brin said Google cars have now traveled more than 300,000 miles, the last "50,000 or so … without safety-critical intervention."
"But that's not good enough," he said, adding that there should be – and will be – ongoing field tests, as well as continued evaluations in labs and on closed courses.
'Not just Google'
For the DMV to adequately understand the safety issues potentially posed by an artificial intelligence program, it must reach out to technologists, car companies and academic researchers, Calo said Tuesday. "Not just Google," he said.
Another lingering concern about driverless cars is privacy. The machines will have to collect and store certain information about a person's movements as part of their basic functioning, as well as to improve their performance over time. Because of pressure from privacy advocates, the law requires manufacturers to provide written disclosures describing the data collected. But John Simpson, director of Consumer Watchdog's privacy project, says that doesn't go far enough.
"We think the provision needs to be that information should be gathered only for the purpose of navigating the vehicle, retained only as long as necessary for the navigation of the vehicle and not used for any other purpose whatsoever," he said.
Technically, driverless vehicles are already legal in many states, insofar as no one ever thought to make them illegal. That's why Google has been able to test its cars on California's roads. But the advancing technology has pushed a number of states to take up the issue. Nevada's governor signed a driverless car bill in 2011, as did Florida's earlier this year. Legislatures in Hawaii, Oklahoma and Arizona have considered similar measures.