23 September 2017, 05:00 AM
ganzfeld
Join Date: 05 September 2005
Location: Kyoto, Japan
Posts: 23,720

I know about the IEEE's thing but, frankly, I don't think anyone has a handle on this, including them (and I think they know that too, unlike lots of others in that area). I don't think we're going to have any of the kinds of automation these companies are touting for a long time.

"Self driving" is a stupid term. The car goes where it wants to go? No; nobody wants that. Someone still has to tell it lots and lots of things besides just where to go. The only difference between that and just driving is the way those instructions are given. Say exactly what it does, not "autopilot" or "super duper cruise control" or some other stupid term either.

Here's another dumb term: "human error". At best, it's an admission of the system's failure. Say exactly what happened. What can the engineers do to prevent it? The excuse that the driver (or passenger, or whatever you call the person in the driver's seat of a car that's supposed to be quasi-self-driving) was warned - once or eight times or a hundred times - is pathetic. We've had systems that warn people of all kinds of things for over a hundred years. We know that they often fail when they are designed the way these "warnings" are. It's just CYA. They don't know how to let the driver know what's happening in anything close to, for example, the tactile feedback one gets with one's hands on the wheel; being able to feel the car taking control or handing it back. That's just one example of hundreds of things that still need to be designed. (See that video I linked to, by the way, of the 1950s vision of this. Those postwar engineers had a real grasp of the type of system that works for real.)

I don't think "real-world testing" really means much (is this another dumb term? probably) when they don't know what to test; they still don't have a handle on all the things that can go wrong.