One annoying thing about the self-driving car conversation is that even relatively smart people (ones who agree with me about stuff) are always invoking the concept of "edge cases." The idea is that there's a level of normal driving which is a solvable problem, and then there's a finite number of weird situations, like recognizing that a deer is going to jump out in front of you, or the lines on the road suddenly disappear or are wrongly striped, or you hit a construction zone, or the stop sign is posted upside down, and once you just teach your computer about this finite number of situations, you're all set.
But this isn't really a finite problem. It's a "need a human brain" problem, and for all the talk of AI, there is no AI. The argument is that on balance you can make it safer than a human driver, or at least as safe, so isn't that good enough? Even if true, safety is the wrong measure. It has to be as useful as a human driver. I'm pretty sure you can program these cars not to hit things reasonably well, but it's much harder to program them to just not get confused in a parking lot. I mean, I can't imagine a car managing to navigate its way into, say, a stadium parking lot at game time. Or out again. Again, the problem isn't safety. They can usually manage to avoid things, or pull off to the side of the road and say "wtf, man, help me out of this shit." But that won't be very useful. Niche, limited applications might work, but they won't be very useful either.