Erik Brynjolfsson Diagnoses the Problem in the Economy But Has No Solution

In this talk Erik Brynjolfsson clearly makes the case that productivity and employment are decoupling from each other. His presentation is a fantastic description of what is happening today and a fitting answer to the stagnationists.

That said, his solution at the end of this video amounts to little more than a clever turn of phrase: namely he suggests that we need to race with machines. In my detailed review of Erik’s book Race Against the Machine I criticized this idea:

The first suggestion the authors make can be summarized as “race with machines.” A human-machine combo has the potential to be much more powerful than either a human or machine alone. So therefore it’s not simply a question of machines replacing humans. It’s a question of how can humans and machines best work together.

I don’t disagree with this point on the surface. But I fail to see how it suggests a way out of our current predicament. The human-machine combo is a major cause of the superstar economics described earlier in the book. Strengthen the human-machine combo and the superstar effect will only get worse. In addition, if computers are encroaching further and further into the world of human skills, won’t the percentage of human in the human-machine partnership just keep shrinking? And at an exponential pace?

Moreover, as I’ve written about before on this site, the human-machine partnership can sometimes be less than the sum of its parts. Consider the example of airline pilots:

“In a draft report cited by the Associated Press in July, the agency stated that pilots sometimes “abdicate too much responsibility to automated systems.” Automation encumbers pilots with too much help, and at some point the babysitter becomes the baby, hindering the software rather than helping it. This is the problem of “de-skilling,” and it is an argument for either using humans alone, or machines alone, but not putting them together.”

At some point it may be possible to literally race with machines in the sense of actually merging man and machine together. But this has not been the current trend. What we have been seeing instead is people offloading cognitive tasks to independent machine algorithms. How many of us remember phone numbers anymore? Indeed memory has been one of the first cognitive tasks to get offloaded.

In order to race with machines I am convinced we need to actually enhance human intelligence directly. This is probably not impossible, but will require a much better understanding of the brain, and as a solution it will probably not arrive in time to stave off the massive decoupling that is affecting our economy.

Here is Eliezer Yudkowsky on the relative difficulty of augmenting humans versus developing standalone artificial intelligence:

“I originally gave the example of humans augmented with brain-computer interfaces, using their improved intelligence to build better brain-computer interfaces. A difficulty with this scenario is that there’s two parts to the system, the brain and the computer. If you want to improve the complete system, you can build interfaces with higher neural bandwidth to more powerful computers that do more cognitive work. But sooner or later you run into a bottleneck, which is the brain part of the brain-computer system. The core of your system has a serial speed of around a hundred operations per second. And worse, you can’t reprogram it. Evolution did not build human brains to be hacked. Even if on the hardware level we could read and modify each individual neuron, and add neurons, and speed up neurons, we’d still be in trouble because the brain’s software is a huge mess of undocumented spaghetti code. The human brain is not end-user-modifiable.

“So trying to use brain-computer interfaces to create smarter-than-human intelligence may be like trying to build a heavier-than-air flying machine by strapping jet engines onto a bird. I’m not saying it could never, ever be done. But we might need a smarter-than-human AI just to handle the job of upgrading humans, especially if we want the upgrading process to be safe, sane, healthy, and pleasant. Upgrading humans may take a much more advanced technology, and a much more advanced understanding of cognitive science, than starting over and building a mind from scratch.”

6 thoughts on “Erik Brynjolfsson Diagnoses the Problem in the Economy But Has No Solution”

  1. While I agree with Eliezer Yudkowsky on a lot of issues, I don’t agree with his stance on the issue of human enhancement, mostly because I don’t think general artificial intelligence is going to be anywhere near as easy to design in the short term as he does. I mean, sure, if he does manage to build his friendly self-modifying AI that has a rapid intelligence explosion, and he does it in the next 10 years or so, then it makes everything else irrelevant, but I think that’s a lot less likely than he does.

    On the other hand, I think we’re getting quite close to human intelligence enhancement. In terms of brain-computer interfaces, we’re making amazing progress on several fronts. For example, researchers recently created a chip that can be implanted in the brain of a monkey and store memories that the brain can retrieve later.

    None of that solves the problem of technological unemployment, by the way; if anything it is likely to mean that even fewer people are going to need to “work” in the traditional sense. But as far as pushing the accelerating growth curve goes, I think human intelligence enhancement is going to be very significant in the near term.

    • I could be wrong, but I don’t believe Eliezer is on record predicting that he or anyone else will necessarily build AGI in 10 years. I think his confidence interval about this extends as far as 100 years, if I remember correctly.

      That said, I don’t necessarily disagree with you, and I have no special claim to knowing what will come first in the race between AI and IA. But as far as jobs go, laboratory proofs of concept for brain-computer interfaces are up against very real narrow AI that is being deployed right now, to the possible detriment of large numbers of occupations.

      I do think it potentially affects technological unemployment, because if we have IA and it is cheap and distributed among the population so that we are all smarter together, then that effectively levels the playing field in the workforce. It means that, say, truck drivers who get replaced by self-driving cars might plausibly be able to re-enter the job market in some brand new field. It means that we will be able to learn new things and retrain in such a way that our skills can keep up with technological progress. And on the more extreme end, it also means that as super smart humans we might just seamlessly continue to invent new wants and new jobs to serve them.

      We might also get really smart and decide to just take it easy, but who knows.

      • It occurs to me that a simpler way to put this is: destroying jobs only requires narrow AI. But to upgrade humans fast enough to keep up with the changing job market we would need actual full blown IA.

        • I’m probably being unfair to Eliezer; I don’t think he specifically predicted that it would be here in 10 years either, although I do remember one of his writings where he worried that AI might come significantly before Kurzweil’s predicted “2029” date for human-level AI and that it would catch everyone off-guard.

          Anyway, I think one key to the whole problem of technological unemployment might be to just get through an awkward transition phase as quickly as possible. When the level of technological unemployment passes 50% or 60%, I have confidence that society will move quickly to deal with the issue, one way or the other (either economically, or legally, or culturally, etc.). The painful period, IMHO, is the time period when it climbs past 10%, then 15%, then 25%, and people still aren’t sure if it’s a real phenomenon or just an economic blip, and people aren’t sure what a good solution is that lets 30% of the population not work but keeps the other 70% working. That is going to be the most painful time period, and if we can get past it as quickly and painlessly as possible, I think we’ll be better off.

          • Yes, I think your solution of just getting through this quickly is much more realistic than Brynjolfsson’s “race with machines.”

          • Eliezer wrote some crazy stuff in the early nineties. He was still a teenager then, so I’d take those old essays with a grain of salt.