Google Says the Accidents Its Self-Driving Cars Have Been In Weren't Its Fault

Self-driving cars are supposed to be the solution to less-than-perfect human drivers. We get tired, have blind spots in our vision, and sometimes just flat-out choose to break driving laws. Robocars, on the other hand, are pimped out with sensors and artificial intelligence meant to make them almost crash-proof.

The key word there is almost. According to an article published today by the Associated Press, driverless cars on California roads have already gotten into four accidents.

Two accidents happened while the cars were in control; in the other two, the person who still must be behind the wheel was driving, a person familiar with the accident reports told The Associated Press.

Three involved Lexus SUVs that Google Inc. outfitted with sensors and computing power in its aggressive effort to develop “autonomous driving,” a goal the tech giant shares with traditional automakers. The parts supplier Delphi Automotive had the other accident with one of its two test vehicles.

When Fusion contacted Google for comment on the report, the company sent along the following statement: “Safety is our highest priority. Since the start of our program 6 years ago, we’ve driven nearly a million miles autonomously, on both freeways and city streets, and the self-driving car hasn’t caused a single accident.” The spokesperson didn’t readily have insurance claims data, but said all incidents were minor fender benders — people bumping into the Google cars — and didn’t result in any injuries.

Google wouldn't give the AP details of what happened, either. Consumer advocates say the "secrecy" around these incidents is worrisome, especially because the cars of the future may do away with steering wheels entirely, leaving humans unable to take over in an emergency. John Simpson of the nonprofit Consumer Watchdog, which frequently singles out Google for criticism, told the AP that the prospect of cars without humans in the loop makes it "even more important that the details of any accidents be made public — so people know what the heck's going on."

In March, Elon Musk said he could deploy self-driving cars in a matter of months; the technology, he claimed, was already there. But incidents like these show that more progress is needed, especially as humans and robots start sharing the roads.

For starters, the computer-vision algorithms needed to make a car "see" are very complex. In recent months, engineers at companies developing self-driving cars have turned to deep learning, a breed of artificial intelligence with superb pattern-recognition abilities. Deep learning relies on multi-layered neural networks: software built from stacked layers of artificial "neurons" that can gather and analyze loads of data. For example, if computer scientists wanted a car to "recognize" a stop sign, they'd train a low-level layer to pick out basic features like edges. The next layer up would piece those edges together into shapes (in this case, an octagon), and the layer above that into whole objects. By stacking layers this way, machines can build up an understanding of how things look.
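To make that concrete, here's a toy sketch of how such a layered network is typically stacked. It's written in Python against the open-source PyTorch library; the layer sizes, the TinySignClassifier name and the two-way "stop sign or not" task are invented for illustration, not anything Google has described.

    import torch
    import torch.nn as nn

    # Hypothetical illustration: a tiny layered image classifier.
    class TinySignClassifier(nn.Module):
        def __init__(self, num_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                # Low-level layer: learns edge-like filters from raw pixels.
                nn.Conv2d(3, 16, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),
                # Mid-level layer: combines edges into shapes (corners, octagon fragments).
                nn.Conv2d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),
            )
            # Top layer: maps shape features to object classes ("stop sign" vs. "not").
            self.classifier = nn.Linear(32 * 8 * 8, num_classes)

        def forward(self, x):
            x = self.features(x)
            return self.classifier(x.flatten(1))

    model = TinySignClassifier()
    image = torch.randn(1, 3, 32, 32)  # a stand-in 32x32 RGB "camera frame"
    print(model(image))                # two raw scores, one per class

Each layer feeds the one above it, which is what lets the top of the stack reason about whole objects rather than raw pixels.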

With deep learning, researchers at tech companies like Google, Facebook and Microsoft have achieved unprecedented levels of image recognition. But even this next-generation AI has issues. A recent study showed that state-of-the-art neural networks can be fooled by certain images. In a web search, that's not a big deal; but if you're trying to figure out whether a car is about to park or pull into the lane you're speeding down, a mistake could be life-threatening.
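For a flavor of how a network can be fooled, one well-known technique is the "fast gradient sign method": nudge every pixel slightly in whichever direction most increases the model's error. The sketch below is hypothetical, using a stand-in untrained model and a random image; real attacks target trained networks, and the epsilon value here is just an illustrative choice.

    import torch
    import torch.nn as nn

    # Stand-in classifier; a real attack would target a trained network.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    loss_fn = nn.CrossEntropyLoss()

    image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in camera frame
    label = torch.tensor([0])                             # the correct class

    loss = loss_fn(model(image), label)
    loss.backward()  # gradients reveal how each pixel affects the error

    epsilon = 0.03  # a perturbation small enough to be near-invisible
    adversarial = (image.detach() + epsilon * image.grad.sign()).clamp(0, 1)
    # 'adversarial' looks nearly identical to 'image' to a human eye,
    # yet can flip a trained model's prediction.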

To make things really work, a driverless car system would have to tie together many different data streams (cameras, radar, lidar, GPS) and make sense of them all before settling on the right decision.
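As a crude sketch of what that tying-together can look like, here's a toy Python fusion step. The sensors, distances and confidence weights are invented, and production systems use far more sophisticated machinery (Kalman filters and the like) than this weighted average.

    from dataclasses import dataclass

    @dataclass
    class Reading:
        distance_m: float   # one sensor's estimate of the distance to an obstacle
        confidence: float   # 0..1: how reliable this sensor is right now

    def fuse(readings):
        # Confidence-weighted average: a crude stand-in for real sensor fusion.
        total = sum(r.confidence for r in readings)
        return sum(r.distance_m * r.confidence for r in readings) / total

    camera = Reading(distance_m=12.0, confidence=0.6)  # vision degrades in glare
    radar = Reading(distance_m=11.2, confidence=0.9)   # radar shrugs off lighting
    fused = fuse([camera, radar])
    print(f"fused distance: {fused:.1f} m; brake: {fused < 15.0}")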

“We use a lot of common sense reasoning, and computers don’t have that yet,” Gary Marcus, a cognitive scientist and artificial intelligence researcher at NYU, recently told me. Scientists are working on the problem, but it will likely be a long time before they solve it.

Beyond developing AI sophisticated enough to handle driving, there are also legal questions. Who'd be liable in the event of a crash is still up for debate. It's not even clear whether self-driving cars are fully legal: a recent WIRED article reported that many state laws don't yet explicitly address the new technology.

In light of all the legal and technological challenges involved, it would probably behoove robo-automakers to be as transparent as possible about how their self-driving cars are doing on the open roads.

Update: Post has been updated to reflect comment from Google.
