When developing driving tests for robot cars, complications abound

Robots, unlike humans, cannot drink. They also cannot be distracted and cannot fall asleep at the wheel.

But is a sober computer necessarily safer than a drunken driver?

That’s a question facing car companies and regulators alike as the National Highway Traffic Safety Administration takes steps to both promote adoption of autonomous vehicles and ensure road safety.

In a country where 35,000 Americans are killed each year in automobile accidents, the promise of self-driving cars is clear.

Take the driver out of the car, advocates say, and fully autonomous vehicles could prevent the vast majority of crashes.

Human error causes 90 percent of accidents, with drunken driving, distracted driving and driver fatigue contributing to 41 percent, 10 percent and 2.5 percent of crashes, respectively, according to the Department of Transportation.

Even safe human drivers could theoretically be outdone by autonomous vehicles (AVs), whose sensors are far more sophisticated than human eyes. While human drivers can see an average of 50 meters down the road, radar, lasers and cameras allow AVs to spot objects up to 200 meters away.

Those theoretical advantages of self-driving cars have yet to be proved, and experts say the technology will come to market before we can ever definitively demonstrate that AVs are safer than human drivers. The question that then remains is how to decide when the technology is safe enough.

“We are at a point now where we are trying to develop a driving test that instead of just covering a three-point turn and parallel parking can cover 99.9 percent of scenarios a car would encounter on the road,” explains Chan Lieu, former director of government affairs at NHTSA, who now advises the industry group Self-Driving Coalition for Safer Streets. “We are all jointly, between the industry and the agency, still trying to figure that out.”

Crash data

Experts say it is hard to tell how self-driving cars compare to human-driven ones, largely due to a lack of data.

The California Department of Motor Vehicles requires reporting of every crash involving a self-driving car in the state, even when it is being controlled by a human. Google also produces monthly reports detailing all its accidents across the country.

Those data show 33 reported accidents between 2009 and September 2016, 20 of which occurred when the vehicle was driving itself. Human drivers were to blame for most of those accidents, rear-ending stopped AVs 15 times and sideswiping them three times. With 2 million miles on public roads, Google cars have driven far more than their competitors and are involved in the majority of AV accidents.

Since 2009, an AV has been at fault in one accident. That crash occurred in February 2016 when a Google AV struck a Mountain View, Calif., public transit bus. The AV had been trying to make a right turn on red when it encountered sandbags blocking the right lane. When the light turned green, the AV tried to merge into the center lane and hit the side of the bus.

According to DMV records, the test driver “saw the bus approaching in the left side mirror, but believed the bus would stop or slow to allow the Google AV to continue.”

That’s an assumption the car apparently made, too, leaving it with body damage to the left front fender, left front wheel and one of its sensors. No injuries were reported at the scene.

The most severe crash involving an AV occurred in September 2016, when a human driver ran a red light and struck a Google car as it was crossing an intersection. Just before the crash, the AV had anticipated that the other vehicle was about to run the light and hit the brakes, but not with enough time to avoid the accident.

The crash substantially damaged the front and rear passenger doors of the AV, while the other car sustained significant front-end damage. The Google operator went to the hospital for an evaluation and was later released.

Those two accidents are anomalies, with the majority of reported accidents occurring when human drivers rear-end self-driving cars.

In a typical example, a Google car stopped at a red light was hit from behind at 17 mph. A simulation of the July 2015 accident shows that the Google car’s braking at the light was natural and that the other driver had plenty of room to stop but did not hit the brakes.

Asked for comment, Google instead referred to blog posts from its former head of AVs, Chris Urmson. At the time, he wrote that distracted driving played a role in that case, as it does in many of the company’s fender benders.

“That’s a big motivator for us,” he wrote, noting that self-driving cars would eliminate that problem.

Comparing apples to oranges

The small number of AV accidents so far and the vehicles’ limited experience on the roads make comparing the technology with human drivers difficult.

AVs have driven only roughly 2.5 million miles on public roads, compared with the 3 trillion miles conventional cars drive annually in the United States.

Academics have tried to make the comparison — with mixed success.

Brandon Schoettle at the University of Michigan found that AVs crashed at nearly five times the rate of conventional vehicles. But self-driving cars were able to avoid the most dangerous types of crashes — head-on collisions, which constitute 4 percent of conventional vehicle accidents. Injuries from crashes involving self-driving vehicles have also been minor, compared with the types of life-ending or life-changing injuries resulting from traditional vehicle crashes.

Schoettle said AVs’ propensity for fender benders could stem from their failure to drive defensively.

He compared the technology to a newly licensed teenager who hasn’t learned to avoid being rear-ended by rolling forward if another car isn’t stopping.

“It’s their right to just sit there, but they aren’t doing anything like what a more skilled, middle-aged human driver might do to strategize and try and get out of these situations,” he said.

But, Schoettle said, his study is far from conclusive because data for traditional vehicle crashes are also lacking.

People don’t usually report the kinds of minor accidents that are most common to AVs, where there are no injuries and minimal property damage.

“These are situations where you and I would have gotten out of our car, seen nothing was wrong and driven our separate ways without a second thought,” he said. “It doesn’t get into these statistics.”

Myra Blanco, at the Virginia Tech Transportation Institute, tried a different approach, adjusting federal crash statistics using data the institute had previously collected as part of a study of local drivers’ habits, which included some minor crashes.

The researchers found Google cars are involved in severe accidents no more often than human-driven ones and are actually involved in fewer low-level crashes.

But Blanco cautions there is “too much uncertainty” to be confident in those results, not just because AVs haven’t traveled very far, but also because of where they have traveled.

Most self-driving cars have only been tested on urban streets, which are more complicated and more accident-prone than highways. At the same time, most companies are testing their AVs in places like San Francisco and Phoenix, where they do not have to contend with much rain or any snow.

“We need more data in order to keep pieces of this puzzle together,” Blanco said.

Running against time

Having enough data to definitively prove that AVs are safer than human-driven ones could take time — 518 years, according to one estimate from the Rand Corp.

Current data show that one death occurs every 100 million miles of human driving. But AVs would have to drive vastly more miles to demonstrate that they are just as safe, or safer, to a statistically significant degree, according to Rand researcher Nidhi Kalra.

She calculated that AVs would have to drive 8.8 billion miles to prove they are as safe as human drivers. Doing so would take a fleet of 100 vehicles 400 years if they drove 24 hours a day, 365 days a year at an average speed of 25 mph.

Proving AVs are 20 percent safer than human drivers would require them to drive 11 billion miles, which would take more than half a millennium under the same circumstances.
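
Those figures follow directly from the stated assumptions (a 100-car fleet running nonstop at an average of 25 mph). As a rough check, here is a minimal Python sketch of the arithmetic; it is an illustration only, not Rand’s actual model.

    # Reproducing the fleet-mileage arithmetic from the figures above.
    # Assumptions: 100 vehicles, 24 hours a day, 365 days a year, 25 mph.
    FLEET_SIZE = 100
    AVG_MPH = 25
    HOURS_PER_YEAR = 24 * 365

    fleet_miles_per_year = FLEET_SIZE * AVG_MPH * HOURS_PER_YEAR  # 21.9 million

    for label, miles_needed in [("as safe as humans", 8.8e9),
                                ("20 percent safer", 11e9)]:
        years = miles_needed / fleet_miles_per_year
        print(f"{label}: {years:.0f} years")

    # Prints roughly 402 and 502 years, matching the article's "400 years"
    # and "more than half a millennium."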

But automakers are hoping to bring self-driving cars to market within the next five years, not the next 500.

That means regulators and passengers are going to have to be OK with some uncertainty around the exact benefits of AVs if society is going to reap any of them.

That’s not as frightening as it seems, Kalra said.

“When people get into their cars now to go somewhere, we don’t think, ‘Gosh, my risk of getting in a crash is X,’” she said. “We usually don’t know what the risk is, but we put our seat belt on anyway because no matter what it is, the seat belt reduces it.”

Automakers could similarly take steps to mitigate self-driving risks, she said.

First steps

Regulators and industry are just beginning to determine what those steps might be.

For its part, NHTSA’s AV policy guidelines have remained purposely vague on what metrics it might use to measure AV safety, with agency officials saying they don’t want to close the door on any innovative approach. Instead, the agency has asked companies to demonstrate how they would prove their technology can function in 15 different areas, including safety, ethics and cybersecurity (Greenwire, Sept. 21).

Those reports are not due until the spring, but some people already have ideas.

One first step could be test-driving AVs in more varied environments with snow, ice and rain.

Boston, for example, is taking steps to promote itself as an ideal AV testing ground due to its inclement weather and notoriously nasty drivers.

“Miles driven isn’t as important as the complexity of miles driven — miles in snow, miles in fog,” said Mary Louise Cummings, who directs the Humans and Autonomy Lab at Duke University. “Can your car detect pedestrians from far away, can it recognize the hand signals of a crossing guard and follow them?”

Beyond putting tire to pavement, companies are already using computer simulations to improve their technology.

Examining the instances in which human operators have had to disengage autonomous mode and take control of self-driving cars can also be instructive.

Between September 2014 and November 2015, Google AVs drove 424,331 autonomous miles and experienced 341 disengagements, 69 of which were characterized as “safe operation events,” according to records filed with the California DMV.

Computer simulations later showed that, had the driver not intervened, the AV would have crashed in 13 of those events. Though simulation showed the vehicle would not have crashed in the remaining 56 safe operation events, they were deemed “safety significant,” because the car was acting unsafely. Those events included times when the AV incorrectly recognized traffic lights and incorrectly yielded to pedestrians and cyclists.

After those incidents, Google engineers updated the AV software to teach it how to correctly respond.

The frequency of such disengagements can indicate how the technology is improving, Google’s Urmson wrote in a blog post. Toward the end of 2015, Google’s AVs were traveling an average of 5,300 miles between disengagements, a sevenfold improvement over the previous year.
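
The metric itself is simple: autonomous miles driven divided by the number of disengagements. A minimal sketch using the DMV figures above, treating the whole reporting period as one window (a simplification, since the filings break the data out by month):

    # Miles per disengagement, from the California DMV figures cited above.
    # Treating the full reporting period as one window is a simplification;
    # the filings report monthly data, and the rate improved over time.
    autonomous_miles = 424_331   # September 2014 through November 2015
    disengagements = 341

    print(autonomous_miles / disengagements)  # ~1,244 miles per disengagement

    # Urmson's end-of-2015 average of 5,300 miles between disengagements
    # reflects only the most recent months, after software updates.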

“We’re pleased with this direction and we’ll focus more on this in the future,” he wrote.

The disengagement reports show on-road testing isn’t the only way to assess an AV. Because the cars are driven by computers, they can run through virtual testing scenarios to learn how to react in various dilemmas.

Amitai Bin-Nun, director of the autonomous vehicle initiative at Securing America’s Future Energy, says computerized tests are going to play a key role in ultimately determining the safety of AVs.

He foresees a kind of “incident library” where companies could store data from unusual or dangerous situations their cars have encountered. Then, instead of taking a typical on-road driving test, the AVs’ computers would have to successfully complete simulations of those scenarios before NHTSA certifies they are safe.

“We know where the weak links in the systems are, and we can develop something to test them without just driving around blindly for billions of miles,” he said.
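
No such library or certification process exists yet, so any implementation is speculative, but a toy sketch suggests the shape of the idea: recorded edge cases become regression tests that an AV’s software must pass in simulation. Every name and interface below is hypothetical.

    # Hypothetical sketch of Bin-Nun's "incident library" idea. All names
    # and interfaces are illustrative; no such NHTSA process exists today.
    from dataclasses import dataclass

    @dataclass
    class Scenario:
        name: str                 # e.g. "cyclist enters lane from right"
        initial_state: dict       # recorded sensor/world state at the incident
        safe_outcomes: frozenset  # outcomes a certified AV must produce

    def certify(av_software, library):
        """Pass only if the AV handles every recorded incident safely."""
        for scenario in library:
            # 'simulate' is an assumed interface on the vendor's software.
            outcome = av_software.simulate(scenario.initial_state)
            if outcome not in scenario.safe_outcomes:
                print(f"FAILED: {scenario.name} -> {outcome!r}")
                return False
        return True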
