Erik Brynjolfsson Diagnoses the Problem in the Economy But Has No Solution

In this talk Erik Brynjolfsson clearly makes the case that productivity and employment are decoupling from each other. His presentation is a fantastic description of what is happening today and a fitting answer to the stagnationists.

That said, his solution at the end of this video amounts to little more than a clever turn of phrase: namely he suggests that we need to race with machines. In my detailed review of Erik’s book Race Against the Machine I criticized this idea:

The first suggestion the authors make can be summarized as “race with machines.” A human-machine combo has the potential to be much more powerful than either a human or machine alone. Therefore it’s not simply a question of machines replacing humans; it’s a question of how humans and machines can best work together.

I don’t disagree with this point on the surface. But I fail to see how it suggests a way out of our current predicament. The human-machine combo is a major cause of the superstar economics described earlier in the book. Strengthen the human-machine combo and the superstar effect will only get worse. In addition, if computers are encroaching further and further into the world of human skills, won’t the percentage of human in the human-machine partnership just keep shrinking? And at an exponential pace?

Moreover, as I’ve written about before on this site, the human-machine partnership can sometimes be less than the sum of its parts. Consider the example of airline pilots:

“In a draft report cited by the Associated Press in July, the agency stated that pilots sometimes ‘abdicate too much responsibility to automated systems.’ Automation encumbers pilots with too much help, and at some point the babysitter becomes the baby, hindering the software rather than helping it. This is the problem of ‘de-skilling,’ and it is an argument for either using humans alone, or machines alone, but not putting them together.”

At some point it may be possible to literally race with machines in the sense of actually merging man and machine together. But this has not been the current trend. What we have been seeing instead is people offloading cognitive tasks to independent machine algorithms. How many of us remember phone numbers anymore? Indeed memory has been one of the first cognitive tasks to get offloaded.

In order to race with machines I am convinced we need to enhance human intelligence directly. This is probably not impossible, but it will require a much better understanding of the brain, and as a solution it will probably not arrive in time to stave off the massive decoupling that is affecting our economy.

Here is Eliezer Yudkowsky on the relative difficulty of augmenting humans versus developing standalone artificial intelligence:

“I originally gave the example of humans augmented with brain-computer interfaces, using their improved intelligence to build better brain-computer interfaces. A difficulty with this scenario is that there’s two parts to the system, the brain and the computer. If you want to improve the complete system, you can build interfaces with higher neural bandwidth to more powerful computers that do more cognitive work. But sooner or later you run into a bottleneck, which is the brain part of the brain-computer system. The core of your system has a serial speed of around a hundred operations per second. And worse, you can’t reprogram it. Evolution did not build human brains to be hacked. Even if on the hardware level we could read and modify each individual neuron, and add neurons, and speed up neurons, we’d still be in trouble because the brain’s software is a huge mess of undocumented spaghetti code. The human brain is not end-user-modifiable.

“So trying to use brain-computer interfaces to create smarter-than-human intelligence may be like trying to build a heavier-than-air flying machine by strapping jet engines onto a bird. I’m not saying it could never, ever be done. But we might need a smarter-than-human AI just to handle the job of upgrading humans, especially if we want the upgrading process to be safe, sane, healthy, and pleasant. Upgrading humans may take a much more advanced technology, and a much more advanced understanding of cognitive science, than starting over and building a mind from scratch.”

The Declining Cost of Doing Good

One of the things that makes me optimistic about the future is that as technology progresses, charity should become dramatically less expensive in terms of both time and money. I base this assumption on the fact that historically almost all goods have gotten both cheaper and more convenient, and I don’t see why charity should be any different.

Imagine you are watching TV late at night and one of those ads comes on imploring you to help feed some child in Africa for just five cents a day. If you are like me there is a strong chance you might just change the channel and pretend you never saw that ad. Sure, the deal being offered is good: just five cents a day and you could dramatically improve someone’s life. But on the other hand, it’s just one kid out of billions—hardly a dent in terms of global hunger—and if I do decide to follow through then I am going to have to momentarily stop what I am doing, dial some phone number, maybe go to a website, and then, worst of all, I’ll actually have to take my credit card out of my wallet and type in all the numbers listed there—oh, what a chore!

Does this make me a terrible person? Maybe. But it also makes me human, and fairly typical of humans in general. If it weren’t so easy for humans to ignore the plight of others far away, then we would probably live in a far more equitable world.

Now imagine instead of one kid, it’s 1000 kids. And instead of five cents a day it’s five cents a year. That deal is starting to get pretty hard to ignore. Imagine also that some startup company has solved the micropayment problem and established a widely adopted standard for sending money. Imagine I don’t need to reach for my wallet; I don’t have to log in anywhere; I can just look at the TV and blink my eyes, and this will signal my augmented reality glasses to go ahead and send payment, no extra hassle necessary. Perhaps my payment will also be followed by a satisfying video game sound effect and an icon showing me I have just earned 1000 “points”.

All of which is to say: even in the face of a potential bad future where work is increasingly hard to find, where government fails to provide for people, and where access to helpful technologies is needlessly restricted by intellectual property law and digital rights management—even in a future run by sociopathic elites who couldn’t care less about the rest of us—even then there is hope that the masses will be alright. Because as lazy and solipsistic and selfish as people are, I’m convinced that if technology makes it cheap and easy enough, growing numbers of average people around the globe will simply choose to help each other out. Because at that point, why not?

The Road Forward Is Paved With Decentralized Technological Solutions

In a previous post I articulated how most problems have three types of solutions: cultural, legal, and technological. Suppose a hundred people are stranded on an island, and they keep fighting over limited food resources. To fix this problem, one could implement:

  1. A Cultural Solution – Try to convince everyone to be nice.
  2. A Legal Solution – Design laws that dictate the distribution of food. Create a government to enforce these laws.
  3. A Technological Solution – Invent new food technologies that ensure there is more food than anyone could ever eat.

Now the idealist in me likes (1) a lot, and it might even work in the context of a small island where everyone knows each other, but let’s face it, such solutions are generally not effective, especially as societies get bigger. Humans don’t necessarily respond well to just being told to “play nice,” especially when it only takes a few bad people to ruin everything.

As for (2), it’s a necessary evil most of the time, but it has tons of undesirable side effects. Creating any government is going to lead to a concentration of wealth and power, as per the iron law of oligarchy. Certainly some governments are more desirable than others, and we can quibble over those details. But at the end of the day I don’t think it matters whether you opt for a libertarian private property scheme, a socialist wealth distribution scheme, or anything else in between: once you start giving certain islanders spears and the authority to stab people who disobey, inequality and abuse of power are likely to be unfortunate byproducts.

But as I articulated in the original article, (3) has tons of advantages. By finding a way to create more food you have potentially done an end run around the difficult challenge of getting people to be nice to each other.

However (3) has a big caveat: Who controls the technology? Is it centralized, or is it decentralized?

Some technologies are decentralized or centralized by their very nature. For example, fire, one of the first technologies, is naturally decentralized. The raw materials to create fire are cheap and readily available. All you need is some basic knowledge. Nuclear power, on the other hand, is an example of an extremely centralized technology. Clearly you cannot just create a nuclear power plant in your backyard.

The problem with centralized technological solutions is that you potentially run into the same sorts of problems as legal solutions. Let’s say the islanders develop an effective new farming technology, but there is only a very small patch of land on the island with usable soil. Then a situation arises in which the people who control that piece of land are effectively in charge of the food supply. This simply gives rise to another form of governance, albeit one based on leveraging technology rather than just force. Such a scenario is again highly likely to lead to inequality and abuses of power.

However, it’s not hard to imagine a more decentralized technological solution. For example, if the islanders discover a robust food-producing plant that can grow anywhere on the island, then this solution will be much more resistant to elite control. Thus, I would argue, it is decentralized technology solutions that have the most potential to create real progress. These are the types of solutions we should actively promote if we want to achieve a better, more equal society.

Of course, governments will often resist such developments as they tend to undermine government power. It is all too common for governments to take a decentralized technology and try to recentralize it into the hands of a few people. For example, we could easily imagine the government of the island making it illegal to grow certain plants unless you are part of a special farmer’s guild. (If that sounds silly, keep in mind that the effect of seed patents today isn’t all that different in principle.)

Because I believe in the power of decentralized technological solutions to create real progress, my political beliefs are shaped accordingly. I favor any government that enables decentralized engineering solutions to flourish, whether by actively funding research in a socialist fashion or simply getting out of the way in a libertarian fashion. Specifically, I want a government that:

  • Does not actively wage war on decentralized technologies (see the war on drugs and the war on piracy)
  • Does not enforce complex legal schemes whose main aim appears to be locking down knowledge that would otherwise be decentralized (see intellectual property law)
  • Encourages the development of new decentralized solutions (see solar panels, open source software, household 3D printers, mesh networks, etc)

Three Ways to Tackle Societal Problems, Or The Importance of Technological End Runs

Most solutions to societal problems fall into one of three categories—cultural, legal, or technological. Consider a disabled man, who lacks the use of his legs. We want to ensure that this man has equal access and isn’t unfairly discriminated against. We can institute:

  • A Cultural Solution — Encourage everyone to be considerate of this man’s needs.
  • A Legal Solution — Enforce laws that make it illegal to not provide equal access to this man.
  • A Technological Solution — Just give the man robot legs and call it a day.

Cultural solutions generally don’t hurt, but they tend to be slow-moving and in the worst cases can be completely ineffectual. Legal solutions require the use of centralized state power, and are thus subject to all the associated problems. Even in the above example, the potential for governmental abuse is clearly present: it’s not hard to imagine a bureaucracy imposing excessive fees and requirements on businesses and individuals, all under the pretense of making things more “handicap-friendly.”

Technological solutions, on the other hand, have the potential to bypass both cultural lethargy and bad policy. If you actually want to change the world for the better, with a reasonable amount of effort and on a reasonable timescale, technological solutions have a lot of advantages.

Good philanthropic institutions tend to understand this truth. For example, if you want to help solve the problem of STDs and unwanted pregnancies by encouraging condom use, you can institute:

  • A Cultural Solution — Just tell people to use condoms. (While sex education is certainly a good idea, it is far from a complete solution given how intractable horny people are.)
  • A Legal Solution — Mandate the use of condoms. (If this sounds absurd, note that my county just voted to force porn actors to wear condoms in all sex scenes.)
  • A Technological Solution — Design a better condom that people will be more likely to use.

This might seem like an obvious point, but I find that all too often people inadvertently leave technological solutions out of debates. Many arguments get bogged down in fights between two competing legal solutions. Meanwhile some lateral technological solution is just sitting there, waiting to be exploited. Oftentimes, the energy spent fighting over competing policy visions could be better spent fostering some engineering project. For example, what would save more lives per unit of effort? Fighting a difficult political battle to enact tougher gun control laws aimed at criminals who are already set on breaking the law? Or researching biometric locks that might at least do away with the significant number of accidental gun deaths?

Technological solutions are particularly worth remembering today. As technological progress accelerates, many old cultural and political debates become susceptible to technological end runs.

The Phenomenon of Cultural Slowdown, Or It Took This Long To Come Up With MOOCs?!

Here in the futurist community it is standard practice to seize upon some fun new technological advance coming down the pike and then prognosticate on the societal implications. And often this type of over-eager thought experiment is justified. After all, as we’ve seen with cell phones, adoption can sometimes happen shockingly fast. The self-driving cars that are currently confined to labs and controlled experiments really could find their way onto the road in surprisingly short order. In which case, we are more than justified in aggressively discussing them now.

However, we should remind ourselves that just because technology makes something possible, and even desirable, doesn’t mean that said thing will occur on an expeditious timescale. Let me illustrate with a personal example.

In 1999, when I was still in high school, I took an online class offered by the University of California, Los Angeles. It was a creative writing class, and at that moment in my life, it was the best class experience I’d ever had. The professor was excellent. The other students were excellent. I got great and useful feedback on the stories that I wrote, and all from the comfort of my home computer.

It seemed obvious to me then that the future of education was online. That there was little need for classrooms when you had the internet. That education, after all, is just the transmission of information from one individual to another, primarily via text and voice. These were tasks the network could easily handle. Surely, I thought, education was going to be transformed by technology and the internet within a decade!

Alas. Ten years later my utopia had not come to pass. In fact, I had stopped ranting about how computers would change education. I’d gotten tired of people’s incredulous responses. I’d realized that my pontificating on technology trends only tended to alienate people and ruin otherwise pleasant gatherings. Moreover, I’d made peace with the fact that apparently I was wrong. Technology had not transformed education. People were still using the same poorly written textbooks and paying for the same overpriced universities. As a professional tutor, I’d witnessed firsthand many absurdities. My favorite example was a widely used math textbook that explained its concepts using math more advanced than the actual math being taught. If you think about that, it’s not unlike teaching someone their ABC’s using Shakespeare as a guide.

Then one day someone said to me, “Hey, you should check out Khan Academy. It’s like what you’ve been talking about.” And so I checked it out. And was initially disappointed. This was the future? Some hastily made YouTube videos? At this point I had at least ten years under my belt consuming online tutorials. To me, learning was a big part of what the internet was for. So the fact that some guy named Khan had collected a bunch of videos in one place did not seem new or revolutionary. From my perspective, this type of knowledge dissemination had been going on for a third of my life.

But in truth, there was something new going on. Because thanks to Khan and others, the potential of online education was finally being recognized by the mainstream. Culture was finally catching up.

Today we have a lot of buzz about MOOCs, or massive open online courses. And this development is exciting, because it represents a major step closer to the full-service online education I’ve always imagined. So why did it take so long to get here? The technology has not been lacking. Remember: the ability to transmit text, audio, and even video over the network has been around for a long time now. I had a fulfilling online class experience back in 1999. Rather, things seem to be happening in online education today because people are finally getting more culturally comfortable with the idea of learning in front of a computer instead of in a classroom.

In fact, there is still a lot of innovation that needs to occur. I’m convinced gamification and personalization are the main ways to continue improving education, and I know I am not alone in this opinion. But designing such systems requires skill, money, and most importantly willpower. Again, technology, as I see it, has been more than sufficient for a while now. It is human motivation that has been lacking.

So the lesson from this is a cautionary one. It is an obvious point perhaps, but worth remembering: When it comes to technology, just because something can happen soon, doesn’t mean it will. Even if the thing in question is highly desirable. Technological progress is fast, but cultural progress can be very slow indeed.

Charles Stross on Wallpapering Cities with Tiny Solar Powered Computers and Other Future Possibilities

In this video, science fiction writer Charles Stross ruminates on the next 30 years.

“In about another decade we should be able to make circuits about as powerful as one of today’s smart phones—in other words on the order of a billion operations per second—that will be about the price of a radio frequency ID tag, maybe under a dollar, and that will consume a small enough amount of power that you could in principle run it off of ambient solar energy with a surface area of a couple millimeters squared… You could pretty much wallpaper large cities with them.”
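Stross’s numbers lend themselves to a quick back-of-envelope check. A minimal sketch, assuming full direct sunlight (roughly 1000 W/m²) and a 20% photovoltaic conversion efficiency; both figures are my assumptions, not numbers from the talk:

```python
# Rough feasibility check of Stross's claim; irradiance and efficiency
# are assumed values, not figures from the talk.
AREA_M2 = 4e-6           # "a couple millimeters squared": ~2 mm x 2 mm
IRRADIANCE_W_M2 = 1000   # direct sunlight; ambient indoor light is far weaker
EFFICIENCY = 0.2         # assumed photovoltaic conversion efficiency
OPS_PER_SEC = 1e9        # "a billion operations per second"

power_w = AREA_M2 * IRRADIANCE_W_M2 * EFFICIENCY
joules_per_op = power_w / OPS_PER_SEC

print(f"Harvested power: {power_w * 1e3:.1f} mW")            # 0.8 mW
print(f"Energy budget:   {joules_per_op * 1e12:.1f} pJ/op")  # 0.8 pJ per operation
```

A budget of roughly a picojoule per operation is in the neighborhood of what today’s most energy-efficient processors achieve, so the claim is at least not physically absurd, though indoor ambient light would cut the budget by a couple orders of magnitude.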

Superstar Cabbie

Here’s a great post on the Atlantic Cities blog about Rashid Temuri, the ingenious Chicago cabbie who has used Twitter to improve taxi service in Chitown and in the process become a kind of one-man taxi service himself. This is supposed to be a heart-warming story, and it is, showing how superstar economics now apply to everyone with an internet-connected clientele (which is to say, virtually everyone in a major metro area, and many others too). In the short term, Temuri is publicly outperforming his competitors and as a result taking their market share. Of course, none of this is going to help in a few years when driverless cabs provide an even better service than Temuri can, and more cheaply.

The Eradication of Disability as an Input to Technological Growth

News is circulating today that two British patients have had partial success with electronic retina implants restoring sight to the blind. This got me thinking about the steady progress of aids for the disabled. As cochlear implants, retinal implants, and thought-controlled prostheses continue to improve, people who would previously not have had the chance to make large contributions to society will be able to do so. Stephen Hawking is a good example of a disabled person who, aided by technology, has made a singular achievement in his field. Imagine if just one or two more such people are enabled by better technology to pursue their passions.

Q: So is Technological Progress Accelerating or Not?


An early self-driving car

Accelerating technological progress is not just an abstract idea. If true, it has implications regarding all our biggest life choices: what to study, what job to get, whether to save money, and whether to have kids. Not to mention bigger policy and governance issues that affect our society at large.

In futurism circles, accelerating progress seems to be slowly emerging as a consensus view. However, there is still plenty of dissent on this issue, and possibly for good reason. So this post is going to lay out what I believe to be the three main arguments for accelerating progress.


By accelerating progress, I mean that our technology is advancing at a greater than linear rate. That’s it. I don’t want to get into arguments about the exact nature of the curve, and whether it is precisely exponential or not. I simply mean to defend the proposition that the rate of progress is speeding up, rather than following a linear or decelerating trajectory.
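For concreteness, here is a toy comparison (the numbers are arbitrary) of a linear trajectory, which adds a fixed increment each step, against an exponential one, which grows by a fixed percentage each step:

```python
# Linear growth: fixed increment per step. Exponential growth: fixed
# percentage per step. Both start from the same baseline of 100.
linear = [100 + 10 * t for t in range(11)]         # +10 units per step
exponential = [100 * 1.1 ** t for t in range(11)]  # +10% per step

print(f"After 10 steps, linear:      {linear[-1]}")           # 200
print(f"After 10 steps, exponential: {exponential[-1]:.0f}")  # 259
```

The gap only widens from there: by step 50 the exponential series is roughly twenty times the size of the linear one.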


Google Glasses

To many of us, it simply feels like things are moving faster. I’ve only been on this planet thirty years, but I’ve lived through the personal computer revolution, the rise of the internet, the adoption of cellphones, and the wide-scale deployment of smart phones. Very soon I will witness the release of autonomous cars and the dawn of augmented reality. Each major technological development seems to come faster than the previous one and to be increasingly disruptive of existing economic and cultural norms.



There are many thinkers for whom it doesn’t feel like things are speeding up. Economist Tyler Cowen is a good example. In The Great Stagnation he writes:

“Today, in contrast, apart from the seemingly magical internet, life in broad material terms isn’t so different from what it was in 1953. We still drive cars, use refrigerators, and turn on the light switch, even if dimmers are more common these days. The wonders portrayed in The Jetsons, the space age television cartoon from the 1960s, have not come to pass. You don’t have a jet pack. You won’t live forever or visit a Mars colony. Life is better and we have more stuff, but the pace of change has slowed down compared to what people saw two or three generations ago.”

Cowen is strangely dismissive of this “seemingly magical internet.” As far as technologies go, the internet is not like a car or a refrigerator. It is a way of connecting people to each other, and a very fundamental one: a general-purpose technology that affects all facets of the economy. But that said, this quote is primarily a subjective statement. If Cowen feels like things haven’t changed very much in the last fifty years, then I can’t really argue with that. I just happen to feel differently.

Peter Thiel

Another acceleration skeptic is prominent venture capitalist Peter Thiel. In a recent interview, he said:

“I believe that the late 1960s was not only a time when government stopped working well and various aspects of our social contract began to fray, but also when scientific and technological progress began to advance much more slowly. Of course, the computer age, with the internet and web 2.0 developments of the past 15 years, is an exception. Perhaps so is finance, which has seen a lot of innovation over the same period (too much innovation, some would argue).

“There has been a tremendous slowdown everywhere else, however. Look at transportation, for example: Literally, we haven’t been moving any faster. The energy shock has broadened to a commodity crisis. In many other areas the present has not lived up to the lofty expectations we had.”

Again, in order to make his case, Thiel must treat the internet as an exception, which I still find odd. But Thiel is absolutely right that in plenty of technological areas we have underperformed, at least with regards to prior expectations. This notion of prior expectations is important. Cowen, Thiel, and other stagnationists are fond of invoking jet packs and other classic science fiction tropes as evidence of our lack of progress. For example, in this talk, Thiel mentions how we once envisioned “vacations to the moon.” And in his essay Innovation Starvation, stagnationist Neal Stephenson begins by asking “where’s my ticket to Mars?”


A jetpack prototype from 1968

It should go without saying that our failure to build a world that resembles science fiction novels of the fifties and sixties should not necessarily have any bearing on how we evaluate our current technological position. In many ways the present day is far more advanced than our prior imaginings. After all, pocket-sized devices that give you instant access to all the world’s knowledge are certainly nothing to scoff at. It’s just that the technological progress we’ve ended up getting is not necessarily the same progress we once expected. I’d call that a failure of prediction, not a failure of technology.


Perhaps the focus of technology has simply shifted from growing “outward” to growing “inward.” Rather than expanding and colonizing the stars, we have been busy connecting to each other, exploring the frontiers of our own shared knowledge. And perhaps this is absolutely what we should be doing. Looking ahead, what if strong virtual reality turns out to be a lot easier (and more practical) than space travel? Why go on a moon vacation if you can simulate it? Thiel laments that we haven’t been moving any faster, but one could argue that our ears, eyes, and thoughts are moving faster than ever before. At what point does communication start to substitute for transportation?

At the heart of the stagnationists’ arguments I sense a bias in favor of “real things” and against “virtual things.” Perhaps this perspective is justified, since if we are talking about the economy, it is much easier to see how real things can drive growth. As for virtual things driving growth—the jury’s still out on that question. Recently we’ve seen a lot of value get created virtually and then digitally distributed to everyone at almost no cost to the consumer. And many of today’s most promising businesses are tech companies that employ very few people and generate a lot of their value in the form of virtual “bits.” Cowen himself nails this point clearly and succinctly in the third chapter of his book, where in writing about the internet, he states “a lot of our innovation has a tenuous connection to revenue.”


Until we can agree on a standardized way to measure technological progress, all of the above discussion amounts to semantics. What is the “value” of the internet when compared to moon vacations? How many “technological progress points” does an iPhone count for? One man’s progress is another man’s stagnation. Without a relevant metric, only opinions remain.

Although no definitive measure exists for the “amount of technology” a civilization has, it might be possible to measure various features of the technological and economic landscape, and from these features derive an opinion about the progress of technology as a whole.


Real median family income has stagnated

In making their case for stagnation, Cowen and Thiel commonly cite median wages, which have been stagnant since the 1970s. Cowen writes, “Median income is the single best measure of how much we are producing new ideas that benefit most of the American population.” While these median wage statistics are interesting and important, they are absolutely not a measure of our technological capability. Rather they represent how well our economic system is compensating the median worker. While this is a fairly obvious point, I think it is an important one. It’s easy to fall into the trap of conflating technological health with economic health, as if those two variables are always going to be synchronized with each other. It seems much more logical to blame stagnant median wages on a failure of our economic system rather than a failure of our technology.


Certainly one can tell a story about how it is a technological slowdown that is causing our stagnant median wages. But one can also tell the opposite story, as Erik Brynjolfsson and Andrew McAfee do in Race Against the Machine:

“There has been no stagnation in technological progress or aggregate wealth creation as is sometimes claimed. Instead, the stagnation of median incomes primarily reflects a fundamental change in how the economy apportions income and wealth. The median worker is losing the race against the machine.”

Regardless of which story is right, if we start with the question “is technological progress accelerating,” I don’t think the median wage statistic can ever provide us more than vague clues. It’s doubtful whether we can rely on a “median” measure at all: there is no law guaranteeing that technological gains will be shared equally or will necessarily trickle down to the median person. Cowen himself expresses this idea when he writes “a lot of our recent innovations are private goods rather than public goods.”

Productivity growth, unlike median income, has been growing.

There are of course other economic measures besides the median wage that might correlate more closely with technological progress. Productivity is a good example. However, the medium of money guarantees that such economic measures will always be at least one degree removed from the technology they are trying to describe. Moreover, it is difficult to calculate the monetary value of some of our more virtual innovations because of the “tenuous connection between innovation and revenue” mentioned above.


Another strategy for measuring technological progress is to count the frequency of new ideas or other important technological landmarks.

In The Great Stagnation, Cowen cites a study by Jonathan Huebner which claims we are approaching an innovation limit. In the study, Huebner employs two strategies for measuring innovation.

The first method involves counting the number of patents issued per year. Using patents to stand in for innovation strikes me as strange, and I’m sure many people who are familiar with the problems plaguing our patent system would agree. A good critique comes from John Smart, who writes:

“Huebner proposes that patents can be considered a “basic unit of technology,” but I find them to be mostly a measure of the kind of technology innovation that humans consider defensible in particular socioeconomic and legal contexts, which is a crude abstraction of what technology is.”

Huebner’s other method involves counting important technological events. These events are taken from a list published in The History of Science and Technology. Using this data, Huebner produces the following graph.

As you can see, the figure shows our rate of innovation peaking somewhere around the turn of the century, and then dropping off rapidly thereafter.


While counting technological events is an interesting exercise, it’s hard to view such undertakings as intellectually rigorous. After all, what criteria make an event significant? This is not a simple question to answer.

Things get more complicated when one considers that all innovations are built upon prior innovations. Where does one innovation end and another innovation start? These lines are not always easy to draw. In the digital domain, this problem only gets worse. The current debacle over software patents is symptomatic of the difficulty of drawing clear lines of demarcation.

By way of example, ask yourself if Facebook should count as an important innovation landmark. One can easily argue no, since almost all of Facebook’s original features existed previously on other social networking sites. And yet Facebook put these features together with a particular interface and adoption strategy that one could just as easily argue was extremely innovative. Certainly the impact of Facebook has not been small.


In The Singularity is Near, Ray Kurzweil also attempts to plot the frequency of important technological landmarks throughout time. However, instead of using just one list of important events, he combines fifteen different lists in an attempt to be more rigorous. In doing so, he reaches the opposite conclusion of Huebner: namely that technological progress has been accelerating throughout all of Earth’s history, and will continue to do so.

This is not to say Kurzweil is right and Huebner is wrong (in fact there are methodological problems with both graphs), but rather that this whole business of counting events is highly subjective, no matter how many lists you compile. If we want a useful empirical measure of our technological capabilities, I think we can do better.


The following definition of technology comes from Wikipedia:

“Technology is the making, usage, and knowledge of tools, machines, techniques, crafts, systems or methods of organization in order to solve a problem or perform a specific function.”

So if we want to measure the state of technology, it follows that we might want to ask questions such as “how many functions can our technology perform?” “how quickly?” and “how efficiently?” In short: “how powerful is our technology?”

Of course this quickly runs into some of the same problems as counting events. How do you define a “specific function?” Where does one function end and another begin? How can we draw clear lines between them?


Fortunately some of these problems evaporate with the arrival of the computer. Because if technology’s job is to perform specific functions, then computers are the ultimate example of technology. A computer is essentially a tool that does everything. A tool that absorbs all other technologies, and consequently all other functions.

In the early days of personal computing it was easy to see your computer as just another household appliance. But these days it might be more appropriate to look at your computer as a black hole that swallows up other objects in your house. Your computer is insatiable. It eats binders full of CDs, shelves full of books, and libraries full of DVDs. It devours game systems, televisions, telephones, newspapers, and radios. It gorges on calendars, photographs, filing cabinets, art supplies and musical instruments. And this is just the beginning.

Along the same lines, Cory Doctorow writes:

“General-purpose computers have replaced every other device in our world. There are no airplanes, only computers that fly. There are no cars, only computers we sit in. There are no hearing aids, only computers we put in our ears. There are no 3D printers, only computers that drive peripherals. There are no radios, only computers with fast ADCs and DACs and phased-array antennas.”

In fact, computers and technology writ large seem to be merging so rapidly that using a measurement of one to stand in for the other seems like a defensible option. For this reason, I feel that computing power may actually be the best metric we have available for measuring our current rate of technological progress.

Using computing power as the primary measure of technological progress unfortunately prevents us from modeling very far back in history. However, if we accept the premise that computers eventually engulf all technologies, this metric should only get more appropriate with each passing year.


When it comes to analyzing the progress of computing power over time, the most famous example is Moore’s Law, which has correctly predicted for over 40 years that the number of transistors we can cram onto an integrated circuit will double every 24 months.

How long Moore’s law will continue is of course up for debate, but based upon history the near-term outlook seems fairly positive. Of course, Moore’s Law charts a course for a relatively narrow domain. The number of transistors on a circuit is not an inclusive enough measure to represent “computing power” in the broader sense.
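As a back-of-the-envelope illustration of what a fixed 24-month doubling implies (an idealized sketch, not a claim about any particular chip), we can project transistor counts forward from the 1971 Intel 4004, which had roughly 2,300 transistors:

```python
# Idealized Moore's Law projection: transistor counts doubling every
# 24 months, starting from the Intel 4004 (~2,300 transistors, 1971).
# Real chips only roughly track this curve.

def transistors(year, base_year=1971, base_count=2300, doubling_months=24):
    """Projected transistor count assuming a fixed doubling period."""
    months_elapsed = (year - base_year) * 12
    return base_count * 2 ** (months_elapsed / doubling_months)

for year in (1971, 1981, 1991, 2001, 2011):
    print(year, f"{transistors(year):,.0f}")
```

Run forward four decades, the idealized curve lands in the low billions of transistors by 2011, which is in the right ballpark for processors of that era.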

One of Ray Kurzweil’s more intriguing proposals is that we expand Moore’s law to describe the progress of computing power in general, regardless of substrate:

“Moore’s Law is actually not the first paradigm in computational systems. You can see this if you plot the price-performance—measured by instructions per second per thousand constant dollars—of forty-nine famous computational systems and computers spanning the twentieth century.”

“As the figure demonstrates there were actually four different paradigms—electromechanical, relays, vacuum tubes, and discrete transistors—that showed exponential growth in the price performance of computing long before integrated circuits were even invented.”

Measured in calculations per second per $1000, the power of computers appears to have been steadily accelerating throughout the last century, even before integrated circuits got involved.
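Kurzweil’s metric itself is simple to state. As a minimal sketch, here is the price-performance calculation applied to a few example systems; the specific numbers below are rough illustrative assumptions, not Kurzweil’s data:

```python
# Sketch of Kurzweil's price-performance metric: calculations per
# second per $1000 of (constant-dollar) cost. The example systems and
# figures are rough illustrative values, not Kurzweil's actual data.

def price_performance(calcs_per_sec, price_dollars):
    """Calculations per second per $1000 of cost."""
    return calcs_per_sec / (price_dollars / 1000.0)

systems = [
    ("ENIAC (1946, vacuum tubes)", 5e3, 6_000_000),
    ("IBM PC (1981, integrated circuits)", 2.4e5, 3_000),
    ("Commodity PC (2000s)", 1e9, 1_000),
]

for name, calcs, price in systems:
    print(f"{name}: {price_performance(calcs, price):.3g} calc/s per $1000")
```

Dividing by price is what lets systems from wildly different eras and hardware paradigms be compared on one chart.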


While I like Kurzweil’s price-performance chart, the $1000 in the denominator ensures that this is still an economic variable. Including money in the calculation inevitably introduces some of the same concerns about economic measures mentioned earlier in this essay.

So to eliminate the medium of money entirely, we might prefer a performance chart that tracks the power of the absolute best computer (regardless of cost) in a given time period. Fortunately, Kurzweil provides something very close to such a chart with this graph of supercomputer power over time:


Just as all technology is converging toward computers, there is a sense in which all computers are merging together into a single global network via the internet. This network can itself be thought of as a giant supercomputer, albeit one composed of other smaller computers. So by measuring the aggregate size of the network we might also get a strong indication of our current rate of computing progress.

Please note that I do not necessarily support many of Kurzweil’s more extreme claims. Rather I am simply borrowing his charts to make the narrow (and fairly uncontroversial) point that computing power is accelerating.


While increasing computer power makes more technological functions possible, a bottleneck might exist in our ability to program these functions. In other words, we can expect to have the requisite hardware, but can we expect to have the accompanying software? Measuring the strength of hardware is a straightforward process. By contrast, software efficacy is a lot harder to quantify.

I think there are reasons to be optimistic on the software front. After all, an ever-growing number of people on the planet are technologically enabled and capable of working on such problems. So the notion that software challenges will stall technological progress seems unlikely. That’s not a proof, of course: software stagnation is possible, but anecdotally I see no evidence of it occurring. Instead I see Watson, Siri, and the Google autonomous car, and get distinctly the opposite feeling.


At this point, you still may not accept my premise of a growing equivalence between computers and technology in general. Admittedly, it’s not a perfect solution to the measurement problem. However, the idea that available computing power will play a key role in determining the pace of technological change should not seem far-fetched.


Empirical analysis is useful, but as is clear by now, it can also be a thorny business. In terms of explaining why technological progress might be accelerating, a simple logical argument may actually be more convincing.

A feedback loop

A key feature of technological progress is that it contributes to its own supply of inputs. What are the inputs to technological innovation? Here is a possible list:

  • People
  • Education
  • Time
  • Access to previous innovations
  • Previous innovations themselves

As we advance technologically, the supply of all five of these inputs increases. Historically, technological progress has enabled larger global populations, improved access to education, increased people’s discretionary time by liberating them from immediate survival concerns, and provided greater access to recorded knowledge.

Moreover, all innovations by definition contribute to the growing supply of previous innovations that new innovations will draw upon. Many of these innovations are themselves “tools” that directly assist further innovation.

Taking all this into account, we can expect technological progress to accelerate, as with any positive feedback loop. The big variable that could defeat this argument is the possibility that useful new ideas will become harder to find over time.

However, even if finding new ideas gets harder, our ability to search the possibility space will be growing so rapidly that anything less than an exponential increase in difficulty should be surmountable.
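This claim can be made concrete with a toy model. Assume (arbitrarily, for illustration) that our capability to search the idea space compounds at 40% per year while the difficulty of finding the next idea grows quadratically:

```python
# Toy model of the feedback-loop argument. Search capability grows
# exponentially; difficulty of finding new ideas grows polynomially.
# The 40%/year growth rate and quadratic difficulty are arbitrary
# assumptions chosen for illustration.

def discovery_rates(years, growth=1.4, difficulty_exponent=2):
    """Rate of idea discovery per year under the toy model."""
    rates = []
    capability = 1.0
    for t in range(1, years + 1):
        capability *= growth                   # exponential capability
        difficulty = t ** difficulty_exponent  # polynomial difficulty
        rates.append(capability / difficulty)
    return rates

rates = discovery_rates(30)
# After an early dip, the discovery rate climbs without bound, because
# exponential growth eventually dominates any polynomial.
print(rates[0], rates[-1])
```

The same qualitative outcome holds for any polynomial difficulty exponent; only a difficulty curve that is itself exponential (and faster-growing) would defeat the loop.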


Although some skepticism of these arguments is still warranted, their combined plausibility means we should consider outcomes in which change occurs much more rapidly than we might traditionally expect. Clinging to a linear perspective is not a good strategy, especially when so much is at stake. In short, we should question any long-term policy or plan that does not attempt to account for significantly different technology just ten or even five years from now.

Karl Smith: “Horses are not so different than you and I”

My longer thesis is that the rising return to unskilled labor is a function of industrialization and that industrialization is unique in this. The wage rate on unskilled labor never benefited before, and it’s not immediately clear that it will ever benefit again.

This is because rents always accrue to the scarce factors of production. Industrialization meant that the only thing we were short on was “control systems”; everything else in the production process was effectively cheap.

However, any mentally healthy human being is a decent control system. So, this meant huge returns to being a human. It also meant collapsing returns to being a horse. Though people think of this as a difference in kind, I urge you not to. Horses are not so different than you and I.

Read the whole thing.