U.S. opens investigation into Tesla after fatal crash in Autopilot mode

I don't think it will ever be "non-beta". It is always gathering data and sending it back for analysis, with new conditions being added.
 
Hmmm... how do I put that mildly?

No, there's no way to soften this: if cars become autonomous, motorbikes will be the biggest risk to life and limb on the road. With the general habit of speeding (and no, I am not exaggerating) and of riding recklessly in general, no computer will have the intuition to expect the unexpected. Autonomous driving only works when computers can react to defined parameters. If someone isn't keeping to the rules, no computer will be able to handle that.

And yes, I know, you are the ONE motorbiker on the planet who always sticks to the rules :p

Skip to 11:05 in this video for some of the things Google's self-driving cars have reacted to when others weren't keeping to the rules.

 
Well, it detected the bus, but it wrongly assumed that the bus would yield and let the car move over to clear an obstruction in its path.
 
So the computer "assumed"... interesting ;)
 
You people better be all over this shit like you are with FCA's electronic shifter. ;)

Looks like hipster Tesla owners everywhere are cowering in the dark now that their god car can indeed kill them. Too soon?
 
U.S. opens investigation into Tesla after fatal crash in Autopilot mode

Hmmm... how do I put that mildly?

No, there's no way to soften this: if cars become autonomous, motorbikes will be the biggest risk to life and limb on the road. With the general habit of speeding (and no, I am not exaggerating) and of riding recklessly in general, no computer will have the intuition to expect the unexpected. Autonomous driving only works when computers can react to defined parameters. If someone isn't keeping to the rules, no computer will be able to handle that.

And yes, I know, you are the ONE motorbiker on the planet who always sticks to the rules :p

First of all, you know jack shit about me and how I drive. I somehow manage to handle cars with way more power than you are ever likely to see.

Second of all, I'm not worried about being "punished" for riding like an asshat. What I am worried about is a fucking obliviot texting and masturbating while changing lanes or running red lights. I doubt a computer is going to have any trouble with that.

Also, a proper self-driving car should be able to handle speeders and red-light runners and lane splitters; otherwise wtf is the point?
 
So the computer "assumed"... interesting ;)

Have you actually read the article you linked?

From the article:

When the light turned green, several cars ahead of the bus passed the SUV. Google has said that both the car's software and the person in the driver's seat thought the bus would let the Lexus into the flow of traffic. The Google employee did not try to intervene before the crash.
"This is a classic example of the negotiation that's a normal part of driving - we're all trying to predict each other's movements. In this case, we clearly bear some responsibility, because if our car hadn't moved there wouldn't have been a collision," Google wrote of the incident.

What I have read is that Google is actually trying to get their self-driving cars to behave a bit more like humans. Otherwise their cars get hit by inattentive humans who assume the cars will behave like cars driven by humans.

I guess they would benefit from an L-plate, as in learner-plate. :rolleyes:
 
Have you actually read the article you linked?

What I have read is that Google is actually trying to get their self-driving cars to behave a bit more like humans. Otherwise their cars get hit by inattentive humans who assume the cars will behave like cars driven by humans.

I guess they would benefit from an L-plate, as in learner-plate. :rolleyes:

Well, first of all: excusing the accident by saying that even a human couldn't have prevented it is a bit lame, when the whole point of the presentation above was to show that the car watches traffic BETTER than a human does.

Secondly, a computer is only ever as good as the human who programmed it. And humans make mistakes, as you surely know. Computers are, despite all the technical progress of the past decades, incredibly stupid. They can do exactly one thing: additions, one at a time.

All the increase in computer capacity and speed over the last 60 years or so has only enabled computers to do those additions faster. And as long as nobody has gotten a quantum processor to work, computers will still be doing one addition at a time. The whole talk about "artificial intelligence" is bullshit as long as a computer can't evolve beyond its programming. Maybe one day they might be able to do so, but even then they will still break at some point. All machines break eventually.

So entrusting human lives in traffic to a machine might work to a certain extent, but at some point new variables that nobody could foresee will add new dangers and risks.

In essence, we will all be the lab rats for computers that try to replace human intuition and experience, which they will of course never be able to do.
 
The whole talk about "artificial intelligence" is bullshit as long as a computer can't evolve beyond its programming.

This is the whole impetus behind the field of "machine learning", and why self-driving programs require large fleets of vehicles and lots of miles with a human driver behind the wheel. The programmers don't code in scenarios like "if you need to turn but there is a car to the left, speed up by X amount and then change lanes"; they feed the system lots of data from various driving scenarios and what human drivers did in each of them. Instead of telling the computer what to do for each scenario, they tell it which outcomes are good and which are bad (or perhaps a sliding scale of "goodness" and "badness", so it knows a near miss is still better than a crash, but not ideal), and the computer itself then finds the patterns. In this way, it can be presented with a new situation that's a variation on one it's seen before, or a brand-new combination of factors, and estimate the best course of action without having to be explicitly pre-programmed for it.
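
In toy form, that outcome-scoring idea might look something like the sketch below. Everything in it (the features, the actions, the scores, the six-entry log) is invented purely for illustration; no real self-driving stack works off a handful of logged runs and a nearest-neighbour lookup, but the principle of scoring outcomes and generalising from similar past scenarios is the same.

```python
# Toy sketch: scenarios are logged as feature vectors with the action taken
# and a "goodness" score (crash = -1.0, near miss = -0.2, clean = +1.0).
# For a new scenario, pick the action whose most similar past runs scored best.
import math

# (speed_kmh, gap_to_next_car_m, cars_in_target_lane) -> action, outcome score
LOGGED_RUNS = [
    ((60, 40, 0), "change_lane", 1.0),   # clean pass
    ((60, 10, 1), "change_lane", -1.0),  # crash
    ((60, 12, 1), "slow_down", 1.0),     # clean outcome
    ((80, 15, 1), "change_lane", -0.2),  # near miss: better than a crash, still bad
    ((80, 50, 0), "change_lane", 1.0),
    ((40, 8, 1), "slow_down", 1.0),
]

def distance(a, b):
    """Plain Euclidean distance between two scenario feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_action(scenario, k=3):
    """Score each action by the average outcome of its k most similar logged runs."""
    scores = {}
    for action in {a for _, a, _ in LOGGED_RUNS}:
        runs = sorted(
            (r for r in LOGGED_RUNS if r[1] == action),
            key=lambda r: distance(r[0], scenario),
        )[:k]
        scores[action] = sum(r[2] for r in runs) / len(runs)
    return max(scores, key=scores.get)

# A combination of factors never seen verbatim in the log:
print(best_action((70, 11, 1)))  # -> "slow_down"
```

Scale that log up to millions of miles and swap the nearest-neighbour lookup for a trained model, and you have the rough shape of what the paragraph above describes.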

Now, you might be thinking that, since it uses humans for input, it won't ever be better than a human, and you'd be half right. It won't be any smarter than a human in terms of decision making, but it'll be 100% alert at all times and have fewer blind spots. It will not be perfect, but it doesn't need to be; it just needs to be slightly better on average than the average driver.

You people better be all over this shit like you are with FCA's electronic shifter. ;)

This is way worse than FCA's shifter issues, IMO. Tesla's system is just a glorified radar cruise control and lane keeping assist (it doesn't hook into the navigation system to know where to go, and it doesn't even pay attention to speed limit signs), but they hype it up calling it "autopilot" and talking at conferences about self-driving cars all the time. I really hope they're forced to change the name, if nothing else. It pretty much begs people to misuse it in a way that the FCA shifter doesn't.
 
Well, first of all: excusing the accident by saying that even a human couldn't have prevented it is a bit lame, when the whole point of the presentation above was to show that the car watches traffic BETTER than a human does.

Secondly, a computer is only ever as good as the human who programmed it. And humans make mistakes, as you surely know. Computers are, despite all the technical progress of the past decades, incredibly stupid. They can do exactly one thing: additions, one at a time.

All the increase in computer capacity and speed over the last 60 years or so has only enabled computers to do those additions faster. And as long as nobody has gotten a quantum processor to work, computers will still be doing one addition at a time. The whole talk about "artificial intelligence" is bullshit as long as a computer can't evolve beyond its programming. Maybe one day they might be able to do so, but even then they will still break at some point. All machines break eventually.

So entrusting human lives in traffic to a machine might work to a certain extent, but at some point new variables that nobody could foresee will add new dangers and risks.

In essence, we will all be the lab rats for computers that try to replace human intuition and experience, which they will of course never be able to do.

Of the literally millions of miles that Google cars have amassed, this is the *first* and *only* accident where the computer was at fault. That's better than any human driver I have ever met. Hell, I have been in two at-fault fender benders in my driving career, and I actually concentrate on driving when I drive, not on eating a burger and texting.

Computers can do things that you never can: process a shit ton of information at an extremely high rate of speed. I don't care how much intuition or experience you think you have; your vision is never going to be 360 degrees, you can't gauge speed with 1 kph precision, you can't see through steel, you don't know the dimensions of your car down to a millimetre, and you can't react anywhere near as fast as a computer. That's the reason your car has ABS, TCS, ESP and shit like brake assist (as in braking harder and faster than you would if it detects an imminent crash), which has saved you from a crash before when all your "intuition and experience" failed you.
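
As an aside, that brake-assist logic tends to boil down to a time-to-collision check run many times per second. Here's a minimal sketch of the idea; the function name, the sensor inputs and the 1.5-second threshold are all assumptions for illustration, not taken from any real system.

```python
# Minimal sketch of a time-to-collision (TTC) brake-assist check.
# All names and the threshold are illustrative, not from a real system.

def brake_assist_needed(gap_m: float, closing_speed_ms: float,
                        ttc_threshold_s: float = 1.5) -> bool:
    """Return True if time-to-collision with the car ahead drops below the threshold.

    gap_m: radar-measured distance to the vehicle ahead, in metres.
    closing_speed_ms: how fast that gap is shrinking, in metres per second
        (<= 0 means the gap is stable or growing).
    """
    if closing_speed_ms <= 0:
        return False  # not closing in on anything
    time_to_collision = gap_m / closing_speed_ms
    return time_to_collision < ttc_threshold_s

# A 20 m gap closing at 15 m/s leaves ~1.3 s to impact: less than a typical
# human reaction time plus brake ramp-up, so full braking force is applied.
print(brake_assist_needed(gap_m=20.0, closing_speed_ms=15.0))  # True
```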

Sensors and cameras and lidars and so on are all great, but the real benefit will come from V2V communication. Your car won't have to react to my car hitting the brakes; my car will tell it that it's about to hit the brakes, or that there is a hazard in the road.
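
For a rough picture of what such a broadcast could carry, here's a simplified, hypothetical packet. Real V2V standards (e.g. the Basic Safety Message in SAE J2735) define far more fields and a compact binary encoding; everything below is a made-up subset just to show the idea.

```python
# A simplified, hypothetical V2V status packet, loosely inspired by the idea
# behind SAE J2735's Basic Safety Message (the real format carries far more).
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class V2VStatus:
    vehicle_id: str       # anonymised sender ID
    timestamp: float      # when the snapshot was taken
    lat: float
    lon: float
    speed_ms: float       # current speed in metres per second
    heading_deg: float
    brake_applied: bool   # the key bit: "I am braking" goes out immediately
    hazard_ahead: bool    # e.g. debris or a stopped vehicle detected

    def to_wire(self) -> bytes:
        """Serialise for broadcast; a real system would sign and compress this."""
        return json.dumps(asdict(self)).encode()

msg = V2VStatus("veh-4711", time.time(), 52.52, 13.405,
                27.8, 90.0, brake_applied=True, hazard_ahead=False)
# Broadcast several times per second; following cars can start braking
# before the brake lights are even visible.
print(msg.to_wire())
```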

So entrusting human lives in traffic to a machine might work to a certain extent, but at some point new variables that nobody could foresee will add new dangers and risks.
Over 30,000 people die on US roads every year, and the vast majority of those deaths come down to driver error. Humans suck at foreseeing new variables and responding to them.
 
This is way worse than FCA's shifter issues, IMO. Tesla's system is just a glorified radar cruise control and lane keeping assist (it doesn't hook into the navigation system to know where to go, and it doesn't even pay attention to speed limit signs), but they hype it up calling it "autopilot" and talking at conferences about self-driving cars all the time. I really hope they're forced to change the name, if nothing else. It pretty much begs people to misuse it in a way that the FCA shifter doesn't.

You are 100% correct. I'd be OK with it if it were listed as an assist, like the parallel-parking assist some vehicles have. Anything with "auto" in the name automatically (pun intended?) insinuates hands-off operation and that it does the task autonomously (again, pun?).
 
"Autosteer... is best suited for highways with a centre divider.
"We specifically advise against its use at high speeds on undivided roads."

So... it's not suited for roads? XD

Roads with dividers, that's fancy stuff, like they got in big cities. :p
 
"Autosteer... is best suited for highways with a centre divider.
"We specifically advise against its use at high speeds on undivided roads."

So... it's not suited for roads? XD

Roads with dividers, that's fancy stuff, like they got in big cities. :p

Not good for Texas either. This road has a 65 mph limit, and unless the stripes have been repainted in the recent past, they often disappear.

Most of our major road network is like this outside the cities. So Tesla is not for the everyman outside the glittering city centers of Texas, apparently.
 
So when will we have the first accident with a Tesla in "Autopilot" mode while the driver was having sexual intercourse on the back seat? :D
 
The "when" I don't know. The "where" will almost certainly be Florida (and it'll likely be kinkier than just a couple engaging in sex).
 
Sounds like the next betting pool. :lol:
 
Musk is an asshole. He's done this whole industry a disservice by releasing a half-baked product and calling it "autopilot" when it is anything but.

So you can learn where it is having problems and fine-tune the system, just like every other time a beta is released to the public.

You can't do that in an industry where loss of life is a concern.
 