Self-driving has to be better than human drivers, because its failure modes are “weird” and unexpected to other (human) drivers.
If I’m on the road, I know what a drunk driver looks like, or someone who looks like they won’t see me changing lanes. What I’m not expecting is, say, a sudden, stable, controlled veer into oncoming traffic, slowly running a red light well after it changed, a hard turn across the road from a full stop it clearly can’t make, and so on. FSDs aren’t human; their mistakes aren’t easily predictable by humans (or by other FSDs trained on human traffic). Hence they’ll have nasty, headline-grabbing, lawsuit-attracting accidents even if their error rate is a quarter of an ultra-healthy 30-year-old’s.
…And that is assuming perfect conditions.
Humans handle bad environments extremely well; FSD taxis are obsessively maintained (which won’t scale to widespread FSD), and the problems in rain, dust, construction, weird obstructions or whatever are compounded without “superhuman” sensing like Waymo’s LIDAR.
Hence, I completely disagree. Solved per-vehicle, in isolation, this engineering problem is intractable.
I think the solution is:
Mesh-networked cars, so they can “see” each other, pool their respective sensor data, send warnings and such. They don’t all have to be FSD; human-driven cars can still communicate.
A critical mass of these, enough to matter.
“Defensive driving” against FSD cars taught in driving school.
Along with expected hardware/model architecture advances.
And we are both a while away from that and headed in the wrong direction (since companies like Tesla are generally interested not in cooperation, but monopolization).
I think one of the issues is that what you are describing crosses the line between the idea of a car as total personal freedom and communist public transport.
It doesn’t have to be about that. Cars could share data via common protocol, like all sorts of industries do, and that has little to do with ownership.
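For what it’s worth, such a common protocol already exists in outline: SAE J2735 defines a Basic Safety Message (position, speed, heading, brake status) that vehicles broadcast ~10x/second over DSRC or C-V2X, regardless of who made or owns the car. Here is a minimal sketch of the idea — a hypothetical, heavily simplified message layout, not the real ASN.1-encoded J2735 format:

```python
import struct

# Hypothetical simplified V2V safety message. The real-world equivalent is
# SAE J2735's Basic Safety Message, which is ASN.1-encoded and broadcast
# over DSRC / C-V2X; this fixed layout is just for illustration.
# Fields: vehicle id (uint32), latitude (f64), longitude (f64),
# speed in m/s (f32), heading in degrees (f32), warning flags (uint8).
FMT = "!IddffB"  # network byte order, fixed-size layout

def encode(vid, lat, lon, speed, heading, flags):
    """Pack one safety message into bytes for broadcast."""
    return struct.pack(FMT, vid, lat, lon, speed, heading, flags)

def decode(payload):
    """Unpack a received safety message back into a dict."""
    vid, lat, lon, speed, heading, flags = struct.unpack(FMT, payload)
    return {"id": vid, "lat": lat, "lon": lon,
            "speed": speed, "heading": heading, "flags": flags}

# Example: car 42 reports hard braking (flag bit 0) at ~50 km/h.
msg = encode(42, 52.5200, 13.4050, 13.9, 270.0, 0b0000_0001)
received = decode(msg)
assert received["id"] == 42 and received["flags"] & 1
```

The point is that the wire format says nothing about ownership or who drives: any car, human- or computer-driven, can broadcast and listen.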