Self-driving Uber car that hit and killed woman did not recognize that pedestrians jaywalk

Spectre

An update from here: Uber Disabled Volvo SUV's Safety System Before Fatality - The NTSB has released its report. More pics and links at source.

Self-driving Uber car that hit and killed woman did not recognize that pedestrians jaywalk
The automated car lacked "the capability to classify an object as a pedestrian unless that object was near a crosswalk," an NTSB report said.

A self-driving Uber car that struck and killed an Arizona woman was not able to recognize that pedestrians jaywalk, the National Transportation Safety Board revealed in documents released earlier this week.

Elaine Herzberg, 49, died after she was hit in March 2018 by a Volvo SUV, which had an operator in the driver's seat and was traveling at about 40 miles per hour in autonomous mode at night in Tempe.

The fatal accident came as a result of this automated Uber not having "the capability to classify an object as a pedestrian unless that object was near a crosswalk," one of the NTSB documents said.

Because the car could not recognize Herzberg as a pedestrian or person, instead alternating between classifications of "vehicle," "bicycle," and "other," it could not correctly predict her path and concluded it needed to brake just 1.3 seconds before it struck her as she wheeled her bicycle across the street a little before 10 p.m.

Uber told the NTSB that it "has since modified its programming to include jaywalkers among its recognized objects," but other concerns were also expressed in NTSB's report.

Uber had disabled the emergency braking system, relying on the driver to stop in this situation, but the system was not designed to alert the operator, who only "intervened less than a second before impact by engaging the steering wheel," the documents said.

That safety driver was working alone — a recent change in procedure — and didn't keep her eyes on the road, the report said. She was streaming the television show "The Voice," according to a police report cited by NBC Philadelphia.

The NTSB also noted that Uber's Advanced Technologies Group had a technical system safety team in place, but failed to "have a standalone operational safety division or safety manager." The company also "did not have a formal safety plan, a standardized operations procedure (SOP) or guiding document for safety."

Sarah Abboud, an Uber spokeswoman, told Reuters that the company regretted the crash, but said Uber's automated program has “adopted critical program improvements to further prioritize safety. We deeply value the thoroughness of the NTSB’s investigation into the crash and look forward to reviewing their recommendations.”

Between September 2016 and March 2018, Uber's test vehicles were involved in 37 crashes while driving autonomously, but only two resulted from a car's failure to identify a roadway hazard.

Herzberg's family settled with Uber out of court.

Uber announced it had relaunched its self-driving cars nine months after the accident.

https://www.nbcnews.com/tech/tech-news/self-driving-uber-car-hit-killed-woman-did-not-recognize-n1079281

  1. What idiot programmed this?????
  2. Holy crap, Uber's ATG is a bunch of idiots.
 
I think you answered your own question.
 
I see no issue with this... Add logic to run over cyclists and I know which self-driving car I want
 
I see no issue with this... Add logic to run over cyclists and I know which self-driving car I want

Yes, but then you realize that their system may not correctly classify large animals that may wander into the roadway. Receiving a cow to the face might tend to ruin your whole day.
 
This is idiocy on a whole new scale.
  1. Allowing autonomous vehicles anywhere they could encounter a cyclist or pedestrian is criminal.
  2. Looking before you cross the road is generally considered a good idea.
 
Yes, but then you realize that their system may not correctly classify large animals that may wander into the roadway. Receiving a cow to the face might tend to ruin your whole day.
Hmm good point, though not many cows around here...
 
Hmm good point, though not many cows around here...

Replace "cow" with deer, elk, moose, bear, mother-in-law or other large beast found in North America as required. :p

More seriously, I'm wondering what it makes of all the other moving stuff that can enter the roadway - such as tumbleweeds, plastic bags, newspapers, windblown boxes, etc.
 
Allowing autonomous vehicles anywhere they could encounter a cyclist or pedestrian is criminal.
They have to know how to deal with all of that
More seriously, I'm wondering what it makes of all the other moving stuff that can enter the roadway - such as tumbleweeds, plastic bags, newspapers, windblown boxes, etc.
That's a bigger concern. Forgetting the non-existent large animals of NYC, I would assume the basic logic of any self-driving software would be to not hit shit that's in front of it.
 
That's a bigger concern. Forgetting the non-existent large animals of NYC, I would assume the basic logic of any self-driving software would be to not hit shit that's in front of it.
Have you read the article? For the self-driving software, the woman wasn't "in front" until 1.3 seconds before impact, which was way too late. No human driver will ever re-classify a cyclist, pedestrian or whatever entering the road perpendicular to it multiple times within a second, which means every human driver (who isn't distracted) will correctly predict that the cyclist, pedestrian or whatever might continue on its collision path with the car, and will brake.

The issue is mostly about prediction. Currently most humans predict the behaviour of other road users better than self driving software since
a) self driving software has been written by other humans (who experienced mostly the conditions and behaviour of other road users in their vicinity which can differ extremely from conditions and behaviour of other road users in other countries) and
b) this kind of software has to work in every condition (light, dark, fog, back light, ice on the road, roads with and without markings etc.) so it has to be pretty conservative to cover every single eventuality. Driving conservatively isn't what most people do though so a "conservatively" driving autonomous car will be driving "slowly"/"erratically" for those people.
 
Have you read the article? For the self-driving software, the woman wasn't "in front" until 1.3 seconds before impact, which was way too late. No human driver will ever re-classify a cyclist, pedestrian or whatever entering the road perpendicular to it multiple times within a second, which means every human driver (who isn't distracted) will correctly predict that the cyclist, pedestrian or whatever might continue on its collision path with the car, and will brake.

Except even before she was directly in front of the car, it apparently had problems figuring out what it was. Mentioned in the article, but I'll quote directly from the press release regarding the preliminary report, available here: https://www.ntsb.gov/news/press-releases/Pages/NR20180524.aspx

The report states data obtained from the self-driving system shows the system first registered radar and LIDAR observations of the pedestrian about six seconds before impact, when the vehicle was traveling 43 mph. As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path. At 1.3 seconds before impact, the self-driving system determined that emergency braking was needed to mitigate a collision. According to Uber emergency braking maneuvers are not enabled while the vehicle is under computer control to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.

[Figure 2 from the NTSB preliminary report: HWY18MH010-prelim-fig2.png]


(Numbers above are distances in meters.) The system actually had seen the person in the roadway as a bicycle with at least six seconds and 25 meters to stop from 43 mph. It could see that the paths were going to converge, but it couldn't identify what the converging object was, and it became obsessed with classifying just what it had seen, to the point that it apparently made assumptions about the bicyclist's path based on what the system thought it was, instead of simply checking the RADAR and LIDAR track and deciding it needed to brake, until it was too late. Six seconds is more than enough for most people to figure out there was something there and hammer the brake pedal.

I would also say that at this stage of development, if the system is uncertain of what it's seeing, other than that the detected object is on a collision course, it should begin light braking and alert the safety driver. This one didn't.
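
Just to make that concrete, here's a toy sketch of a classification-agnostic check, i.e. "brake on a converging track, sort out what it is later." None of this comes from Uber's actual stack; the names, thresholds and numbers are all invented for illustration.

```python
# Toy sketch, not Uber's code: decide braking purely from the fused
# radar/LIDAR track, so flip-flopping classifications can't reset the response.
# All names, thresholds and numbers here are invented.

from dataclasses import dataclass

@dataclass
class Track:
    range_m: float           # distance to the tracked object, in meters
    closing_speed_ms: float  # rate the gap is shrinking, m/s (>0 means converging)
    lateral_miss_m: float    # predicted lateral miss distance at closest approach
    classification: str      # "unknown", "vehicle", "bicycle", "pedestrian", ...
    class_confidence: float  # 0..1, deliberately never used below

ALERT_TTC_S = 4.0        # begin light braking and alert the operator (assumed)
PANIC_TTC_S = 2.0        # commit to full braking below this (assumed)
LANE_HALF_WIDTH_M = 1.8  # rough half-width of the travel lane (assumed)

def braking_command(track: Track) -> str:
    """Pick a braking action from track geometry alone, ignoring classification."""
    if track.closing_speed_ms <= 0:
        return "none"                                  # not converging
    if abs(track.lateral_miss_m) > LANE_HALF_WIDTH_M:
        return "none"                                  # predicted to miss the lane
    ttc_s = track.range_m / track.closing_speed_ms     # time to collision
    if ttc_s < PANIC_TTC_S:
        return "full_brake"
    if ttc_s < ALERT_TTC_S:
        return "light_brake_and_alert_operator"
    return "none"

# Roughly the Tempe geometry: ~25 m away, closing at ~19 m/s (43 mph),
# classification still "unknown" with low confidence.
print(braking_command(Track(25.0, 19.2, 0.2, "unknown", 0.3)))  # -> full_brake
```

The point being that re-classifying the object every few hundred milliseconds changes nothing above, because the decision never looks at the label.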

The issue is mostly about prediction. Currently most humans predict the behaviour of other road users better than self driving software since
a) self driving software has been written by other humans (who experienced mostly the conditions and behaviour of other road users in their vicinity which can differ extremely from conditions and behaviour of other road users in other countries) and
b) this kind of software has to work in every condition (light, dark, fog, back light, ice on the road, roads with and without markings etc.) so it has to be pretty conservative to cover every single eventuality. Driving conservatively isn't what most people do though so a "conservatively" driving autonomous car will be driving "slowly"/"erratically" for those people.

No, the problem isn't about prediction. The problem was that the system got hung up on identifying just what it was seeing on the road (per the logs and NTSB investigation!) instead of realizing it needed to stop first and identify the object later.

I would also point out that it seems likely that this software is being written by humans who are not drivers experienced in a wide variety of driving conditions, nor do they seem to have asked people who were, because the logic used here was absolute crap. When something comes leaping into your path from the roadside with only a couple of seconds to a potential collision, a driver does not (or at least should not) pause to try to identify what exactly that item is. The human decision tree comes down to: Sudden obstacle! Which way is it going and how fast? How do I steer or brake to avoid a collision? You don't freeze and think, "Is that a deer? Is it a feral hog? Is it a tumbleweed? Is it a fat guy who consumed too much beer at the local fall festival? Is it two dudes running from Nazis on a motorcycle with a sidecar?"


No, you brake and evade first and ask stupid questions later.
 
The system also has an automatic built-in delay of 1 second before taking any evasive action. So even though it identified that evasion needed to occur 1.3 seconds before collision (way too late), it did not attempt to do so until 0.3 seconds before collision (no chance to even reduce speed).
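
Taking those numbers at face value (43 mph at detection, braking deemed necessary at 1.3 seconds, a 1-second hold-off), here's a quick back-of-envelope check of how little room that leaves. The 7 m/s² deceleration is my own rough figure for hard braking on dry pavement, not anything from the NTSB report.

```python
# Back-of-envelope only, using the figures quoted in this thread:
# ~43 mph at detection, braking deemed necessary 1.3 s before impact,
# and a claimed 1 s hold-off. The 7 m/s^2 deceleration is an assumed
# rough number for hard braking on dry pavement.

MPH_TO_MS = 0.44704
speed_ms = 43 * MPH_TO_MS            # ~19.2 m/s

dist_at_decision_m = speed_ms * 1.3  # road left when braking was deemed necessary
dist_after_delay_m = speed_ms * 0.3  # road left after the 1 s hold-off
stopping_dist_m = speed_ms**2 / (2 * 7.0)  # distance needed to stop from 43 mph

print(f"left at the 1.3 s decision point: {dist_at_decision_m:.1f} m")  # ~25.0 m
print(f"left after the 1 s hold-off:      {dist_after_delay_m:.1f} m")  # ~5.8 m
print(f"needed to stop from 43 mph:       {stopping_dist_m:.1f} m")     # ~26.4 m
```

So even an instant reaction at 1.3 seconds would at best have shaved off speed; waiting another second left essentially nothing.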
 
I also read articles about this and as a software engineer by trade, I am offended by the stupidity of this software. There is just no excuse for such a dumb implementation.
 
Also, something worth pointing out that's part of the problem:

I turn my back to the road when playing Pokémon Go on street corners. Why? Because if drivers see me facing the road, they cover the brake in case I run out without notice. I do it because that's what would be helpful to me, if I were the one driving, in deciding whether or not to slow down 5-10 mph.

That's the kind of nuance I think is missed by the AI. I joke that driving is really ADHD-friendly because you run constant calculations. Will this person stop? If they do, how fast? How fast can I stop in relation to things in front of me? What is the road condition? Is there anything moving off to the side to be aware of that could end up in front of the car? To make AI viable you need to compensate for EVERYTHING, and if you aren't a new driver you probably won't even be doing half of it consciously anymore.

This will get sorted eventually, I think, but like... Maybe less testing in areas where it can KILL PEOPLE in the meantime? Idk. At this point I am not even sure I would hit the autopilot on a car if I needed to get home while drunk or something, because it doesn't seem much safer than someone who has had to drive on 3 days of no sleep. That's a serious issue.
 
Maybe less testing in areas where it can KILL PEOPLE in the meantime?
They have to be tested in areas where they could kill people, otherwise they won't be ready. The fail here was not just in the software but also in the human operator not paying attention.
 
Yeah, that's my other hangup with it: when testing this technology, you have to take responsibility for its operation upon yourself. I'd be loath to wait for it to react in dicey situations.
 
What I think they should be doing for the time being is not have the cars drive themselves but run them in logging mode, so you could see what the car WOULD have done in a given situation, see what the driver actually DID, and then compare. And to test the software actually doing stuff, they could use a controlled environment with, like, animatronic mannequins or some shit.
 
Or the other way around?

Let the autopilot run in the background, without doing anything, but constantly verify whether it would've done the same thing as the driver did, and modify itself accordingly.
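
Either way, the comparison-and-logging half of that is easy enough to sketch. Something like the following, where every name, key and threshold is made up for illustration rather than taken from any real autonomy stack:

```python
# Hypothetical "shadow mode" logging: the self-driving stack plans but never
# actuates, and we only record the frames where its plan disagrees with what
# the human driver actually did. Keys and thresholds are invented.

import json
import time

THRESHOLDS = {"steer_deg": 5.0, "brake_pct": 15.0, "throttle_pct": 15.0}

def disagreements(planned: dict, actual: dict) -> dict:
    """Return the controls where software and human meaningfully differ."""
    return {
        k: {"planned": planned[k], "actual": actual[k]}
        for k, limit in THRESHOLDS.items()
        if abs(planned[k] - actual[k]) > limit
    }

def log_frame(planned: dict, actual: dict, logfile: str = "shadow_log.jsonl") -> None:
    diffs = disagreements(planned, actual)
    if diffs:  # only keep the interesting frames for later review
        with open(logfile, "a") as f:
            f.write(json.dumps({"t": time.time(), "diffs": diffs}) + "\n")

# e.g. the software wanted to brake hard while the driver coasted:
log_frame({"steer_deg": 0.0, "brake_pct": 60.0, "throttle_pct": 0.0},
          {"steer_deg": 0.5, "brake_pct": 0.0, "throttle_pct": 10.0})
```

Running that against real driving would at least tell you how often, and in which situations, the software disagrees with competent humans before you ever let it touch the brakes.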
 
I'd also say REALLY CLEARLY LABEL THE CARS so everyone knows to be extra cautious around them, like you would with a driving school car.
 