How Self-Driving Cars Could Become Weapons Of Terror

Self-driving cars could be lifesavers, preventing many, if not most, of the traffic accidents that claim more than 30,000 American lives each year.

They could also make devastating weapons.

Picture hackers employed by a hostile nation finding a way to command large numbers of cars on U.S. roads. Picture those hackers ordering the vehicles to suddenly accelerate and turn hard to the right, flipping them over, killing many passengers and clogging freeways with junked cars.

Or envision a lone-wolf terrorist loading explosives into a car and programming it to drive to a targeted building or public space.

“A nation-state will think very carefully before they commit something that can be interpreted as an act of war, so that helps keep us safe,” said Isaac Porche, associate director of the Forces and Logistics Program at the Rand Corp. think tank. “But is it possible? Yes.”

The auto industry and federal regulators alike foresee a future in which cars drive themselves, talk to each other and communicate with traffic signals and other infrastructure. This system, they hope, will eliminate the human error behind the vast majority of accidents and ease congestion in crowded cities.

The complex technology and communication systems inside self-driving cars make them more vulnerable to hacking. Photo: Alex Brandon, Associated Press

But the same technology enabling this vision could also make cars even more vulnerable to hacking than they already are. And they’re more vulnerable now than most drivers realize.

Witness the infamous 2015 experiment in which hackers seized remote control of a Jeep Cherokee via the Internet and cut its transmission — while its test driver was barreling down a St. Louis freeway. In response, Fiat Chrysler Automobiles had to patch the software on 1.4 million cars.

“The more gizmos, the more gadgets, the more electrical controls you have, the more vulnerable the car is,” Porche said. “The most secure car is probably a 1972 Volkswagen.”

Automakers understand the potential danger. They have been staffing up with cybersecurity specialists to protect both existing cars and the connected, autonomous vehicles to come.

“We are extremely cautious about it,” said Lex Kerssemakers, CEO of Volvo Cars of North America. “We have big firewalls, really strict firewalls to make sure no one can operate the airbags and take over the car. Of course, this is an area of high attention, for the entire automobile industry.”

The Department of Transportation last month released guidelines for the development of autonomous cars and made cybersecurity part of a 15-point safety assessment that automakers should use. Critics note, however, that the guidelines are voluntary rather than binding rules, and they don’t specify what levels of cybersecurity are and aren’t acceptable.

“The problem is, they don’t set standards, they just say, ‘OK, you just have to explain how you’re going to handle these issues,’” said John Simpson, with the nonprofit group Consumer Watchdog.

In 2012, the National Highway Traffic Safety Administration, which is part of the Transportation Department, also established a research division specializing in the safety and cybersecurity of the electronic systems within cars.

The FBI has also noted the risk that self-driving cars could pose. A 2014 report from the bureau’s Strategic Issues Group warned that while autonomous cars could help law enforcement in many situations, the technology will also open up “ways for a car to be more of a potential lethal weapon than it is today.”

Much could depend on how future cars communicate and coordinate with each other.

The National Highway Traffic Safety Administration has been studying whether to require short-range vehicle-to-vehicle, or V2V, communication equipment in all light-duty vehicles. Cars would be constantly aware of each other on the road, pinging each other with information about location and speed.
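A minimal sketch of what one of those pings might carry, assuming a simplified, hypothetical message format (the field names and the JSON encoding here are illustrative only; production V2V systems use standardized basic safety messages sent over dedicated short-range radios):

from dataclasses import dataclass, asdict
import json
import time

@dataclass
class V2VPing:
    # Illustrative fields only; real V2V systems standardize these messages.
    vehicle_id: str     # rotating, anonymized identifier for the sending car
    timestamp: float    # when the reading was taken, in seconds since the epoch
    latitude: float     # position, in degrees
    longitude: float
    speed_mps: float    # speed, in meters per second
    heading_deg: float  # direction of travel, in compass degrees

def encode_for_broadcast(ping: V2VPing) -> bytes:
    # Serialize the ping so it can be sent to every car within radio range.
    return json.dumps(asdict(ping)).encode("utf-8")

# A car announcing where it is and how fast it is moving:
ping = V2VPing(
    vehicle_id="anon-3f2a",
    timestamp=time.time(),
    latitude=37.7793,
    longitude=-122.4193,
    speed_mps=24.6,
    heading_deg=92.0,
)
print(encode_for_broadcast(ping))

Broadcasts like this would go out many times a second, which is what lets nearby cars track one another; it is also why a compromised or spoofed sender could mislead every vehicle in radio range.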

Fiat Chrysler Automobiles recalled about 1.4 million cars and trucks in the U.S. to patch the software just days after two hackers detailed how they were able to take control of a Jeep Cherokee over the Internet in July 2015. Photo: Carlos Osorio, Associated Press

That equipment could provide a pathway for hackers. But the danger would be heightened if the cars were also plugged into some kind of centralized traffic management system. Thousands of autonomous cars coordinating with the vehicles around them would be a harder target for hackers than a single, centralized system, said Mary Cummings, director of the Humans and Autonomy Laboratory at Duke University.

“The military will tell you the worst possible strategy would be to have a single, centralized control that would represent a single point of failure,” she said.

Cummings added, however, that even the GPS systems already embedded in many cars can be spoofed into thinking they are someplace they aren’t. That could be a major problem for cars driving themselves.

“The reality is these systems are incredibly fragile,” she said. “So where we have to go is to make sure that cars can make their own decisions absent any system like GPS. For them to be truly autonomous, they have to be able to make their own decisions like human drivers do.”

Both Google and Tesla Motors, for example, are developing autonomous driving systems that use hyper-detailed, 3-D maps but also can see and respond to road conditions on their own.

“I think the benefits of autonomous vehicles vastly outweigh the risks of people using them for nefarious purposes,” said Nidhi Kalra, director of the Rand Corp. Center for Decision Making Under Uncertainty. “It’s really about rethinking how we design autonomous vehicles so that cybersecurity is baked into the vehicle and not pasted on as an afterthought.”

Ensuring cybersecurity, however, won’t remove all the danger that autonomous cars could be weaponized.

What of the lone-wolf terrorist, placing explosives in a programmable car? At a National Highway Traffic Safety Administration hearing in April, one speaker urged the government to require sensors inside autonomous cars to sniff out hazardous materials and disable the cars if necessary.

“You could have the safest vehicle, the highest cybersecurity, and the tightest control of privacy data and still be wide open for bad actors to load the vehicle up with explosives, punch in coordinates, shut the door and send the vehicle to its destination,” James Niles, president of Orbit City Lab, said at the hearing.

David R. Baker is a San Francisco Chronicle staff writer. Email: [email protected] Twitter: @DavidBakerSF

Avenues of attack

Self-driving cars require electronic brains and network connections to function. That opens up vulnerabilities hackers can exploit.

GPS: Spoofed location signals could trick cars into thinking they’re somewhere they aren’t.

Networks: Vehicle-to-vehicle communication could spread cyberinfections.

Maps: Self-driving cars developed by Tesla and Google use detailed 3-D maps accessed via the cloud to navigate.

Software: Over-the-air system upgrades could deliver malicious code.

Entertainment systems: Streaming video and audio provide another pathway into the car.
