Did Uber Just Ruin Self-Driving Cars For The Rest Of Us?

By Jim Gorzelany, FORBES

March 20, 2018

https://www.forbes.com/sites/jimgorzelany/2018/03/20/did-uber-just-ruin-self-driving-cars-for-the-rest-of-us/#25ed62941f38

We all knew this was coming.

A specially equipped fully autonomous Volvo SUV that was undergoing real-world testing by the ride-hailing service Uber in Tempe, Arizona struck and killed 49-year-old Elaine Herzberg while she was walking her bicycle across a street outside of the crosswalk. While this is far from the first traffic incident involving self-driving cars being tested on public roads – fender benders are more common than one might hope – it is the first pedestrian fatality, and we’re afraid it’s not going to be the last.

The National Transportation Safety Board is opening an investigation into the incident, but according to Tempe police, the SUV was in fully autonomous mode at the time of the collision and had a human driver behind the wheel to take control if necessary. Uber has since halted its autonomous-car testing in Arizona, California, and Pittsburgh.

It’s too early for authorities to determine exactly what went wrong, but to absolutely nobody’s surprise, the incident has sparked a veritable firestorm of commentary over whether established automakers and entrepreneurs alike are irresponsibly rushing the technology to market at the pace of a supercharged V8. Ride-sharing companies including Uber, Waymo (a unit of Google parent Alphabet, Inc.), and automakers like General Motors are reportedly planning to launch autonomous services in select cities as early as 2019.

Not surprisingly, safety advocates are seeking at least a temporary national moratorium on testing self-driving cars on public roads. As it is, state regulation of self-driving cars is spotty at best – only 32 out of 50 states (and the District of Columbia) have either passed legislation or issued executive orders allowing them on public roads – and federal oversight remains token. “Arizona has been the wild west of robot car testing with virtually no regulations in place,” contends autonomous-car critic John M. Simpson, the Privacy and Technology Project Director of the nonprofit Consumer Watchdog organization. “That’s why Uber and Waymo test there. When there’s no sheriff in town, people get killed.”

And while California has thus far required self-driving test cars to carry a human driver who can intervene if needed, the state DMV is on track to let them navigate streets without human drivers as early as next month.

Of course, autonomous advocates will note that around 1,200 lives have been lost thus far during 2018 at the hands of human drivers, who are both fully licensed by the states in which they reside and who operate motor vehicles that must adhere to a long litany of federal safety regulations. Proponents of the technology argue that autonomous vehicles, piloted by artificial intelligence, have the potential to virtually eliminate traffic fatalities down the road by taking the foibles of human nature – including drinking, texting, and driving recklessly – out of the equation. Eventually, autos should be able to operate as if they had invisible chauffeurs behind the wheel, picking us up at the front door, dropping us off at work, and then parking themselves, perhaps at a remote off-site lot to save a few bucks.

But in the meantime, with the technology still being perfected in fits and starts, this incident could well set back the development, if not the licensing, of so-called “Level 5” autonomous vehicles that can operate on any road and under any conditions without a human driver. The hardware necessary to pull this off is sitting in the proverbial parts bin, ready to be deployed (albeit not without tweaking); it’s the artificial intelligence needed to help the car make critical life-and-death decisions that’s likely to come under the most scrutiny.

As of now, car buyers can only avail themselves of what’s known as a “Level 2” system of autonomy, which enables a vehicle to accelerate, brake, and steer on its own in well-defined situations. The most advanced of them so far is Cadillac’s Super Cruise system, which is limited to highway driving and requires a human motorist to pay attention to the road and be prepared to take over if the technology gets, to put it bluntly, confused. Other automakers’ more rudimentary systems require the driver to keep a light hand on the steering wheel in case intervention is necessary.

While Cadillac’s system indeed works as advertised, it’s essentially an extension of existing lane-keeping and adaptive cruise control systems, and as such it lacks both the advanced LIDAR sensors and the artificial intelligence being used in the autonomous prototypes now undergoing testing.

So-called “disengagement reports” released by 20 companies earlier this year revealed that a preponderance of the robo-cars being tested require human intervention under certain circumstances to avoid collisions. According to Consumer Watchdog, in most cases the vehicles cannot travel more than a few hundred miles without having to hand control back to the driver. Over-reliance on Tesla’s similar Autopilot system was blamed for the death of a Model S owner in a crash back in 2016. And in only its first hour of service last November, a driverless shuttle being tested over a limited route in downtown Las Vegas got into a much-publicized collision with a delivery truck.

According to a report issued in 2016 by the RAND Corporation, it could take billions of miles of real-world testing to demonstrate that the failure rate of auto-piloted cars is at least on a par with – let alone, as hoped, superior to – that of human drivers. One sticking point is that self-driving vehicles are likely to face ethical dilemmas in which they’re forced to choose among two or more perilous courses of action in a split second. For example, would it be best to brake but ultimately slam into an obstruction and risk injury or worse to the car’s occupants, to swerve in one direction to avoid a crash and possibly veer into oncoming traffic, or to steer in the other direction and perhaps run over a pedestrian standing at the curb?

Some industry observers feel autonomous cars should be required to take “driving tests” to prove their prowess, just as today’s rides undergo crash testing to certify their safety. The idea is that just as humans’ visual acumen varies, road-monitoring cameras and sensors might perform better or worse during daytime or nighttime hours, over smooth or broken pavement, and under rainy or snowy conditions, depending on the model.

In the meantime, society won’t be able to fully realize the safety-oriented rewards of fully autonomous driving until all (or at least most) of the cars on the road are operated via AI. That’s because, as our recent tests with Caddy’s Super Cruise system proved when we had to intervene to avoid collisions with inattentive drivers blindly merging or changing lanes, the weak link in the system is the necessity of sharing the road with human-piloted conventional vehicles.

And this latest incident underscores the need for automotive artificial intelligence to account for the otherwise unpredictable nature of pedestrians, not to mention the bicyclists, joggers, and animals with which self-driving cars may suddenly cross paths.

So, don’t expect to give up your driver’s license any time soon.
