
Phil and Stephen resume their discussion from last week about self-driving cars in light of the tragic news about Germanwings flight 9525.
Statistics show that pilot error has consistently caused between 50% and 60% of all airplane crashes over the decades. Deliberate sabotage accounts for another 3% to 10% annually, and “other human error” for another 6%. Large portions of the process of controlling a commercial aircraft have already been successfully automated. As with self-driving cars, fully self-flying airplanes could be substantially less than perfect and still outperform humans by a significant margin.
In the case of the Germanwings tragedy, the likely explanation for the crash (at the time of writing) is pilot suicide. This is, fortunately, an extremely rare phenomenon, but there is little doubt that it does occur. In fact, the deliberate downing of Germanwings 9525 looks like a copycat of the most recent previous pilot murder/suicide. We can be pretty sure that self-flying planes won’t do that.
Partial self-driving capabilities can make us less cautious and less responsive in certain circumstances, resulting in accidents and fatalities that otherwise might not have occurred. Net new accidents sound like a reason not to go there, but are they? The question we have to ask is: would those new accidents actually outweigh the accidents and fatalities prevented by the same features? If more autonomy causes an additional 5,000 deaths per year because people get sloppy and lazy behind the wheel, but prevents 7,000 deaths per year, we’re better off to the tune of 2,000 lives saved.
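To make the trade-off explicit, here is the back-of-the-envelope arithmetic using those purely illustrative figures; the general test is simply whether deaths prevented exceed deaths induced:

$$\text{net lives saved per year} = \underbrace{7{,}000}_{\text{prevented}} - \underbrace{5{,}000}_{\text{induced}} = 2{,}000$$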
Is it time for the switch? How hard (or easy) will it be?
Plus:
50th Anniversary of the First Spacewalk
Seeing in Color
The Man Who Saved a Billion Lives
(WT 110)