Can Man + Machine Sometimes Be Less Than The Sum of Its Parts?

It seems evident that self-driving cars, once they are consumer-ready and widely deployed, will be a huge improvement in driver safety. The large number of traffic fatalities every year is testament to how terrible human drivers are. Surely a state-of-the-art computer-driver, ideally in constant network contact with all the other computer-drivers on the road, could do a much better job.

Since a lot of luxury cars already have self-driving features, such as automatic parking, I always imagined that this trend of gradual improvement would continue: that more and more parts of the driving experience would get automated until eventually the human was no longer necessary. At that point, there would be just the final hurdle of making it legal for the human to tune out. We could even start designing cars with seats that face each other, so passengers could more easily engage in conversation while the car does its work.

What I didn’t consider is the problematic transitional period, where cars are automated enough to fool consumers into thinking they can tune out, but not automated enough for tuning out to actually be safe. This apparently is the reasoning of many car companies such as Lincoln, who, according to Bill Howard of ExtremeTech, are developing highly automated vehicles but forgoing “…the massively redundant autonomous-driving sensors and controllers that let research vehicles deal with traffic detours, panic stops by cars in front, and cars cutting you off. Instead, Lincoln goes in the opposite direction. The system includes the ability to sense if a driver is driving hands-free. A warning chime sounds to discourage misuse of the system.”

The same concerns have been raised about the partial automation of commercial aviation. Philip E. Ross at IEEE Spectrum suggests that we may be very close to having unmanned commercial aircraft. The remaining technical challenges are beatable and easily enumerated. Perhaps the bigger obstacle is mental, since “people who otherwise retain a friendly outlook toward futuristic technologies are quick to declare that they’d never board a plane run by software.” The result is that we retain human pilots who increasingly just play the role of “babysitters.”

Although such a middle-ground scenario combining human and machine intelligence might sound like a win-win on the surface, the FAA apparently holds concerns similar to those of the aforementioned car companies. As Ross reports:

“In a draft report cited by the Associated Press in July, the agency stated that pilots sometimes “abdicate too much responsibility to automated systems.” Automation encumbers pilots with too much help, and at some point the babysitter becomes the baby, hindering the software rather than helping it. This is the problem of “de-skilling,” and it is an argument for either using humans alone, or machines alone, but not putting them together.”
