An Internet Without Gatekeepers

I’ve often wondered about the possibility of creating an alternate internet, one that is truly decentralized and arises from individuals forming short-range connections with each other. This post by Ian Pearson seems to be calling for just such a network:

“So I tend to lean towards wanting a new kind of web, one that governments can’t control so easily, where freedom of speech and freedom of thought can be maintained. If a full surveillance world prevents us from speaking, then we need to make another platform on which we can speak freely.

“I’ve written a number of times about jewellery nets and sponge nets. These could do the trick. With very short-range communication directly between tiny devices that each of us wears just like jewellery, a sponge network can be built that provides zillions of paths from A to B, hopping from device to device till it gets there.

“A sponge net doesn’t need any ISPs. (In fact, I’ve never really understood why the web needs them either, it is perfectly possible to build a web without them). Each device is autonomous. Each shares data with its immediate neighbours, and routes dynamically according to a range of algorithms available to it. They can route data from A to B so that every packet goes by a different route if need be. Even without any encryption, only A and B can see the full message. The various databases that the web uses to tell packets where their destination is can be distributed. There is a performance price, but so what?”

I left in the part at the end about performance price, because despite Pearson’s dismissiveness, that might actually be a fairly big concern. If I understand correctly, a network like this would need a fair amount of public adoption to be successful, and if adoption means that people have to put up with slower speed, that could be a big obstacle.

That aside, I like the idea a lot, and I have no doubt that given improvements in technology over the next ten years, such performance issues will become less of a concern. Certainly the possibility of networks that don’t pass through ISPs is one more reason why artificial scarcity can never work.
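To make the routing idea concrete, here is a minimal Python sketch of a “sponge net”: a toy grid of short-range devices where each packet from A to B takes an independently chosen random route, so no single relay handles the whole message. Everything here (the grid size, the packet names, the routing strategy) is invented for illustration; a real mesh protocol would be far more sophisticated.

```python
import random

def random_route(neighbors, src, dst, rng):
    """Random loop-free walk: hop device to device until we reach dst."""
    path, seen = [src], {src}
    node = src
    while node != dst:
        options = [n for n in neighbors[node] if n not in seen]
        if not options:
            return None  # dead end; caller just retries with a fresh walk
        node = rng.choice(options)
        seen.add(node)
        path.append(node)
    return path

def send(neighbors, src, dst, packets, rng):
    """Route each packet independently, so routes differ packet to packet."""
    routes = []
    for _ in packets:
        route = None
        while route is None:
            route = random_route(neighbors, src, dst, rng)
        routes.append(route)
    return routes

def grid_neighbors(w, h):
    """A toy w-by-h grid of devices; each talks only to adjacent devices."""
    return {(x, y): [(x + dx, y + dy)
                     for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
                     if 0 <= x + dx < w and 0 <= y + dy < h]
            for x in range(w) for y in range(h)}

rng = random.Random(0)
net = grid_neighbors(4, 4)
routes = send(net, (0, 0), (3, 3), ["pkt%d" % i for i in range(6)], rng)
print(len({tuple(r) for r in routes}), "distinct routes used")
```

Pearson’s privacy point falls out naturally: an eavesdropper sitting at any single relay sees only the packets that happened to pass through it, not the full message.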

Q: So is Technological Progress Accelerating or Not?


An early self-driving car

Accelerating technological progress is not just an abstract idea. If progress really is accelerating, that fact has implications for all our biggest life choices: what to study, what job to get, whether to save money, and whether to have kids. Not to mention bigger policy and governance issues that affect our society at large.

In futurism circles, accelerating progress seems to be slowly emerging as a consensus view. However, there is still plenty of dissent on this issue, and possibly for good reason. So this post is going to lay out what I believe to be the three main arguments for accelerating progress.


By accelerating progress, I mean that our technology is advancing at a greater-than-linear rate. That’s it. I don’t want to get into arguments about the exact nature of the curve, and whether it is precisely exponential or not. Instead I simply mean to defend the proposition that the rate of progress is speeding up, rather than following a linear or decelerating trajectory.


Google Glass

To many of us, it simply feels like things are moving faster. I’ve only been on this planet thirty years, but I’ve lived through the personal computer revolution, the rise of the internet, the adoption of cellphones, and the wide-scale deployment of smartphones. Very soon I will witness the release of autonomous cars and the dawn of augmented reality. Each major technological development seems to come faster than the previous one and to be increasingly disruptive of existing economic and cultural norms.



There are many thinkers for whom it doesn’t feel like things are speeding up. Economist Tyler Cowen is a good example. In The Great Stagnation he writes:

“Today, in contrast, apart from the seemingly magical internet, life in broad material terms isn’t so different from what it was in 1953. We still drive cars, use refrigerators, and turn on the light switch, even if dimmers are more common these days. The wonders portrayed in The Jetsons, the space age television cartoon from the 1960s, have not come to pass. You don’t have a jet pack. You won’t live forever or visit a Mars colony. Life is better and we have more stuff, but the pace of change has slowed down compared to what people saw two or three generations ago.”

Cowen is strangely dismissive of this “seemingly magical internet.” As technologies go, the internet is not like a car or a refrigerator. It’s a way of connecting people to each other: a fundamental, general purpose technology that affects all facets of the economy. That said, this quote is primarily a subjective statement. If Cowen feels like things haven’t changed very much in the last fifty years, then I can’t really argue with that. I just happen to feel differently.

Peter Thiel

Another acceleration skeptic is prominent venture capitalist Peter Thiel. In a recent interview, he said:

“I believe that the late 1960s was not only a time when government stopped working well and various aspects of our social contract began to fray, but also when scientific and technological progress began to advance much more slowly. Of course, the computer age, with the internet and web 2.0 developments of the past 15 years, is an exception. Perhaps so is finance, which has seen a lot of innovation over the same period (too much innovation, some would argue).

“There has been a tremendous slowdown everywhere else, however. Look at transportation, for example: Literally, we haven’t been moving any faster. The energy shock has broadened to a commodity crisis. In many other areas the present has not lived up to the lofty expectations we had.”

Again, in order to make his case, Thiel must treat the internet as an exception, which I still find odd. But Thiel is absolutely right that in plenty of technological areas we have underperformed, at least with regard to prior expectations. This notion of prior expectations is important. Cowen, Thiel, and other stagnationists are fond of invoking jet packs and other classic science fiction tropes as evidence of our lack of progress. For example, in this talk, Thiel mentions how we once envisioned “vacations to the moon.” And in his essay Innovation Starvation, stagnationist Neal Stephenson begins by asking “where’s my ticket to Mars?”


A jetpack prototype from 1968

It should go without saying that our failure to build a world that resembles science fiction novels of the fifties and sixties should not necessarily have any bearing on how we evaluate our current technological position. In many ways the present day is far more advanced than our prior imaginings. After all, pocket-sized devices that give you instant access to all the world’s knowledge are certainly nothing to scoff at. It’s just that the technological progress we’ve ended up getting is not necessarily the same progress we once expected. I’d call that a failure of prediction, not a failure of technology.


Perhaps the focus of technology has simply shifted from growing “outward” to growing “inward.” Rather than expanding and colonizing the stars, we have been busy connecting to each other, exploring the frontiers of our own shared knowledge. And perhaps this is absolutely what we should be doing. Looking ahead, what if strong virtual reality turns out to be a lot easier (and more practical) than space travel? Why go on a moon vacation if you can simulate it? Thiel laments that “we haven’t been moving any faster,” but one could argue that our ears, eyes, and thoughts are moving faster than ever before. At what point does communication start to substitute for transportation?

At the heart of the stagnationists’ arguments I sense a bias in favor of “real things” and against “virtual things.” Perhaps this perspective is justified, since if we are talking about the economy, it is much easier to see how real things can drive growth. As for virtual things driving growth, the jury’s still out on that question. Recently we’ve seen a lot of value get created virtually and then digitally distributed to everyone at almost no cost to the consumer. And many of today’s most promising businesses are tech companies that employ very few people and generate a lot of their value in the form of virtual “bits.” Cowen himself nails this point clearly and succinctly in the third chapter of his book, where, in writing about the internet, he states that “a lot of our innovation has a tenuous connection to revenue.”


Until we can agree on a standardized way to measure technological progress, all of the above discussion amounts to semantics. What is the “value” of the internet when compared to moon vacations? How many “technological progress points” does an iPhone count for? One man’s progress is another man’s stagnation. Without a relevant metric, only opinions remain.

Although no definitive measure exists for the “amount of technology” a civilization has, it might be possible to measure various features of the technological and economic landscape, and from these features derive an opinion about the progress of technology as a whole.


Real median family income has stagnated

In making their case for stagnation, Cowen and Thiel commonly cite median wages, which have been stagnant since the 1970s. Cowen writes, “Median income is the single best measure of how much we are producing new ideas that benefit most of the American population.” While these median wage statistics are interesting and important, they are absolutely not a measure of our technological capability. Rather they represent how well our economic system is compensating the median worker. While this is a fairly obvious point, I think it is an important one. It’s easy to fall into the trap of conflating technological health with economic health, as if those two variables are always going to be synchronized. It seems much more logical to blame stagnant median wages on a failure of our economic system rather than a failure of our technology.


Certainly one can tell a story about how it is a technological slowdown that is causing our stagnant median wages. But one can also tell the opposite story, as Erik Brynjolfsson and Andrew McAfee do in Race Against the Machine:

“There has been no stagnation in technological progress or aggregate wealth creation as is sometimes claimed. Instead, the stagnation of median incomes primarily reflects a fundamental change in how the economy apportions income and wealth. The median worker is losing the race against the machine.”

Regardless of which story is right, if we start with the question “is technological progress accelerating?”, I don’t think the median wage statistic can ever provide more than vague clues. It’s doubtful whether we can rely on a “median” measure at all. There is no law guaranteeing that technological gains will be shared equally or necessarily trickle down to the median person. Cowen himself expresses this idea when he writes “a lot of our recent innovations are private goods rather than public goods.”
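A toy example, with made-up numbers, shows how a median measure can sit perfectly still while aggregate gains grow every period:

```python
# Hypothetical five-person economy: technology keeps doubling the top
# earner's income while everyone else stays flat.
incomes = [30, 40, 50, 60, 100]

def median(xs):
    xs = sorted(xs)
    return xs[len(xs) // 2]  # fine for our odd-length list

for period in range(3):
    print(period, "mean:", sum(incomes) / len(incomes), "median:", median(incomes))
    incomes[-1] *= 2  # gains accrue only at the top

# The median never moves off 50, even as the mean rises every period.
```

Nothing about this little economy says technology stalled; it only says the gains weren’t shared.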

Productivity, unlike median income, has continued to grow.

There are of course other economic measures besides the median wage that might correlate more closely with technological progress. Productivity is a good example. However, the medium of money guarantees that such economic measures will always be at least one degree removed from the technology they are trying to describe. Moreover, it is difficult to calculate the monetary value of some of our more virtual innovations because of the “tenuous connection between innovation and revenue” mentioned above.


Another strategy for measuring technological progress is to count the frequency of new ideas or other important technological landmarks.

In The Great Stagnation, Cowen cites a study by Jonathan Huebner which claims we are approaching an innovation limit. In the study, Huebner employs two strategies for measuring innovation.

The first method involves counting the number of patents issued per year. Using patents to stand in for innovation strikes me as strange, and I’m sure many people who are familiar with the problems plaguing our patent system would agree. A good critique comes from John Smart, who writes:

“Huebner proposes that patents can be considered a “basic unit of technology,” but I find them to be mostly a measure of the kind of technology innovation that humans consider defensible in particular socioeconomic and legal contexts, which is a crude abstraction of what technology is.”

Huebner’s other method involves counting important technological events. These events are taken from a list published in The History of Science and Technology. Using this data, Huebner produces the following graph.

As you can see, the figure shows our rate of innovation peaking somewhere around the turn of the century, and then dropping off rapidly thereafter.


While counting technological events is an interesting exercise, it’s hard to view such undertakings as intellectually rigorous. After all, what criteria make an event significant? This is not a simple question to answer.

Things get more complicated when one considers that all innovations are built upon prior innovations. Where does one innovation end and another innovation start? These lines are not always easy to draw. In the digital domain, this problem only gets worse. The current debacle over software patents is symptomatic of the difficulty of drawing clear lines of demarcation.

By way of example, ask yourself if Facebook should count as an important innovation landmark. One can easily argue no, since almost all of Facebook’s original features existed previously on other social networking sites. And yet Facebook put these features together with a particular interface and adoption strategy that one could just as easily argue was extremely innovative. Certainly the impact of Facebook has not been small.


In The Singularity is Near, Ray Kurzweil also attempts to plot the frequency of important technological landmarks throughout time. However, instead of using just one list of important events, he combines fifteen different lists in an attempt to be more rigorous. In doing so, he reaches the opposite conclusion of Huebner: namely that technological progress has been accelerating throughout all of Earth’s history, and will continue to do so.

Which is not to say Kurzweil is right and Huebner is wrong (in fact there are methodological problems with both graphs), but that this whole business of counting events is highly subjective, no matter how many lists you compile. I think if we want to find a useful empirical measure of our technological capabilities, we can do better.


The following definition of technology comes from Wikipedia:

“Technology is the making, usage, and knowledge of tools, machines, techniques, crafts, systems or methods of organization in order to solve a problem or perform a specific function.”

So if we want to measure the state of technology, it follows that we might want to ask questions such as “how many functions can our technology perform?” “how quickly?” and “how efficiently?” In short: “how powerful is our technology?”

Of course this quickly runs into some of the same problems as counting events. How do you define a “specific function?” Where does one function end and another begin? How can we draw clear lines between them?


Fortunately some of these problems evaporate with the arrival of the computer. Because if technology’s job is to perform specific functions, then computers are the ultimate example of technology. A computer is essentially a tool that does everything. A tool that absorbs all other technologies, and consequently all other functions.

In the early days of personal computing it was easy to see your computer as just another household appliance. But these days it might be more appropriate to look at your computer as a black hole that swallows up other objects in your house. Your computer is insatiable. It eats binders full of CDs, shelves full of books, and libraries full of DVDs. It devours game systems, televisions, telephones, newspapers, and radios. It gorges on calendars, photographs, filing cabinets, art supplies and musical instruments. And this is just the beginning.

Along the same lines, Cory Doctorow writes:

“General-purpose computers have replaced every other device in our world. There are no airplanes, only computers that fly. There are no cars, only computers we sit in. There are no hearing aids, only computers we put in our ears. There are no 3D printers, only computers that drive peripherals. There are no radios, only computers with fast ADCs and DACs and phased-array antennas.”

In fact, computers and technology writ large seem to be merging together so rapidly that using a measurement of one to stand in for the other seems like a pretty defensible option. For this reason I feel that computing power may actually be the best metric we have available for measuring our current rate of technological progress.

Using computing power as the primary measure of technological progress unfortunately prevents us from modeling very far back in history. However, if we accept the premise that computers eventually engulf all technologies, this metric should only get more appropriate with each passing year.


When it comes to analyzing the progress of computing power over time, the most famous example is Moore’s Law, which has correctly predicted, for over forty years, that the number of transistors we can cram onto an integrated circuit will double roughly every 24 months.

How long Moore’s Law will continue is of course up for debate, but based on history the near-term outlook seems fairly positive. Still, Moore’s Law charts a course for a relatively narrow domain. The number of transistors on a circuit is not an inclusive enough measure to represent “computing power” in the broader sense.
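The arithmetic behind a fixed doubling period is easy to sketch. The billion-transistor starting point below is just an illustrative assumption, not a claim about any particular chip:

```python
def transistors(count_now, years, doubling_years=2.0):
    """Extrapolate a count that doubles every `doubling_years` years."""
    return count_now * 2 ** (years / doubling_years)

# Starting from a hypothetical 1-billion-transistor chip:
for years in (2, 10, 20):
    print(years, "years out:", transistors(1e9, years))
```

Twenty years at a two-year doubling period is ten doublings, a factor of 1,024, which is why even modest-sounding doubling rates produce such dramatic long-run curves.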

One of Ray Kurzweil’s more intriguing proposals is that we expand Moore’s law to describe the progress of computing power in general, regardless of substrate:

“Moore’s Law is actually not the first paradigm in computational systems. You can see this if you plot the price-performance—measured by instructions per second per thousand constant dollars—of forty-nine famous computational systems and computers spanning the twentieth century.”

“As the figure demonstrates there were actually four different paradigms—electromechanical, relays, vacuum tubes, and discrete transistors—that showed exponential growth in the price performance of computing long before integrated circuits were even invented.”

Measured in calculations per second per $1000, the power of computers appears to have been steadily accelerating throughout the last century, even before integrated circuits got involved.


While I like Kurzweil’s price-performance chart, the $1000 in the denominator ensures that this is still an economic variable. Including money in the calculation inevitably introduces some of the same concerns about economic measures mentioned earlier in this essay.

So to eliminate the medium of money entirely, we might prefer a performance chart that tracks the power of the absolute best computer (regardless of cost) in a given time period. Fortunately, Kurzweil provides something very close to such a chart with this graph of supercomputer power over time:


Just as all technology is converging toward computers, there is a sense in which all computers are merging together into a single global network via the internet. This network can itself be thought of as a giant supercomputer, albeit one composed of other smaller computers. So by measuring the aggregate size of the network we might also get a strong indication of our current rate of computing progress.

Please note that I do not necessarily support many of Kurzweil’s more extreme claims. Rather I am simply borrowing his charts to make the narrow (and fairly uncontroversial) point that computing power is accelerating.


While increasing computer power makes more technological functions possible, a bottleneck might exist in our ability to program these functions. In other words, we can expect to have the requisite hardware, but can we expect to have the accompanying software? Measuring the strength of hardware is a straightforward process. By contrast, software efficacy is a lot harder to quantify.

I think there are reasons to be optimistic on the software front. After all, we will have an ever growing number of people on the planet who are technologically enabled and capable of working on such problems. So it seems unlikely that software challenges are going to stall technological progress. That’s not a proof, of course. Software stagnation is possible, but anecdotally I don’t see evidence of it occurring. Instead I see Watson, Siri, and the Google autonomous car, and I get distinctly the opposite feeling.


At this point, you still may not accept my premise of a growing equivalence between computers and technology in general. Admittedly, it’s not a perfect solution to the measurement problem. However, the idea that available computing power will play a key role in determining the pace of technological change should not seem far-fetched.


Empirical analysis is useful, but as is clear by now, it can also be a thorny business. In terms of explaining why technological progress might be accelerating, a simple logical argument may actually be more convincing.

A feedback loop

A key feature of technological progress is that it contributes to its own supply of inputs. What are the inputs to technological innovation? Here is a possible list:

  • People
  • Education
  • Time
  • Access to previous innovations
  • Previous innovations themselves

As we advance technologically, the supply of all five of these inputs increases. Historically, technological progress has enabled larger global populations, improved access to education, increased people’s discretionary time by liberating them from immediate survival concerns, and provided greater access to recorded knowledge.

Moreover, all innovations by definition contribute to the growing supply of previous innovations that new innovations will draw upon. Many of these innovations are themselves “tools” that directly assist further innovation.

Taking all this into account, we can expect technological progress to accelerate, as with any positive feedback loop. The big variable that could defeat this argument is the possibility that useful new ideas might become harder to find over time.

However, even if finding new ideas gets harder, our ability to search the possibility space will be growing so rapidly that anything less than an exponential increase in difficulty should be surmountable.
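The feedback argument can be sketched as a toy simulation. The model below is my own invention, not anything from the sources quoted here: the rate of new innovation is proportional to the existing stock of innovations, divided by a “difficulty” penalty that grows as the easy ideas get used up.

```python
def simulate(steps, difficulty):
    """Innovation stock where the discovery rate is proportional to the
    existing stock, divided by a difficulty penalty that grows with it."""
    stock, history = 1.0, []
    for _ in range(steps):
        stock += stock / difficulty(stock)
        history.append(stock)
    return history

easy = simulate(50, lambda s: 1.0)         # pure feedback: doubles each step
harder = simulate(50, lambda s: s ** 0.5)  # ideas get polynomially harder

# With no penalty, growth is exponential. With a square-root penalty the
# curve flattens, but each step's increment is still larger than the last,
# i.e. progress still accelerates.
increments = [b - a for a, b in zip([1.0] + harder[:-1], harder)]
print(increments[0] < increments[-1])
```

The sketch only illustrates the shape of the argument: as long as difficulty grows more slowly than our search capacity, the increments keep growing.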


Although some skepticism of these arguments is still warranted, their combined plausibility means we should consider outcomes in which change occurs much more rapidly than we might traditionally expect. Clinging to a linear perspective is not a good strategy, especially when so much is at stake. In short, we should question any long-term policy or plan that does not attempt to account for significantly different technology just ten or even five years from now.

Rushkoff: “Are Jobs Obsolete?”

I basically agree with all of this until the last sentence of the second-to-last paragraph. I’m not sure what he means by “pay each other with the same money we use to buy real stuff.” Since digital bit-based products are by their nature abundant, their price compared to whatever few “real” things remain scarce (and are not deemed ‘necessities’ and therefore socialized, under his scenario) is going to be minuscule. I don’t think we can run an economy of sufficient size off “selling” protected “bits” to one another. That’s an artificial scarcity scenario that requires draconian measures to enforce, and thus seems unlikely.

(Found via Blake Senfter via Joe McDonald.)

Body Mapping Yourself to Assist With Online Clothes Shopping

A while ago I wrote about how one of the last remaining obstacles preventing technologically inclined people like myself from doing all of our shopping online is the problem of finding clothes that fit.

In the post I discussed Facecake’s virtual dressing room. I found it odd that Facecake’s marketing primarily emphasizes the retail store possibilities of the technology, while strangely downplaying the home shopping potential.

The same goes for this article about a Bodymetrics Pod, which fails to address the obvious trajectory of this technology:

“The Bodymetrics Pod, which launched in the United States during the Denim Days celebration at a Bloomingdale’s in Los Angeles, uses Kinect for Windows to digitally ogle your curves and body-map your butt. This is all in the name of hooking you up with a pair of jeans that will flatter you rather than make it look like you crashed into a denim factory at 55 mph.

“I have a fantasy that one day, thrift stores will all have Bodymetrics Pods that will match you up with the one gem among the hodgepodge of denim inventory on the racks. A gal can dream, right?”

It seems clear to me that the real potential here is to get your body mapped once every couple of years (or any time you have a dramatic change in body shape). Once you have a digital file of your own body, that file can be sent to the cloud and follow you everywhere you go. Most importantly, given a widely adopted body-mapping standard, you ought to be able to make purchases from online retailers with full confidence that a given item will fit. This would include being able to observe a 3D model of yourself wearing the clothes.
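Here is a sketch of what matching a stored body map against a retailer’s size chart might look like. All measurements, field names, and size ranges below are hypothetical; a real standard would cover far more dimensions and garment-specific fit preferences.

```python
# Hypothetical body-map profile (in cm), saved once and reused everywhere.
profile = {"chest": 98, "waist": 84, "inseam": 81}

# A retailer's size chart: acceptable (low, high) range per measurement.
size_chart = {
    "M": {"chest": (94, 99), "waist": (79, 84), "inseam": (79, 84)},
    "L": {"chest": (99, 104), "waist": (84, 89), "inseam": (81, 86)},
}

def fitting_sizes(profile, size_chart):
    """Return every size whose ranges contain all the profile's measurements."""
    fits = []
    for size, ranges in size_chart.items():
        if all(lo <= profile[m] <= hi for m, (lo, hi) in ranges.items()):
            fits.append(size)
    return fits

print(fitting_sizes(profile, size_chart))  # → ['M']
```

The hard part, of course, is not the lookup but getting every retailer to publish charts against one shared measurement standard.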

This seems like an obvious technology that people want, and I’m surprised it doesn’t already exist. Once mature, this technology also seems likely to disrupt ordinary retail clothing stores the way that other digital technologies have disrupted music, video, and book stores.

Pirate Bay Plans to Evade the Authorities Using Hovering Server Drones

File this under more-reasons-why-artificial-scarcity-can-never-work:

“With the development of GPS controlled drones, far-reaching cheap radio equipment and tiny new computers like the Raspberry Pi, we’re going to experiment with sending out some small drones that will float some kilometers up in the air. This way our machines will have to be shut down with aeroplanes in order to shut down the system. A real act of war.”

Read the full article.

Is the US Close to Maxed Out on Education?

Mark Lewis writes:

“In 1930, there was a lot of potential in the US public for improving skills through education. Most people were undereducated. They hadn’t reached their potential because they didn’t need to and were advised against it. Somewhere around the 1950s, kids were being told that they really needed to graduate from High School to find jobs. By the 1980s, you needed to go to college to get a good job. By 2000, college wasn’t seen as the key to the good jobs, it was the key to almost every job. We had moved into the information age and High School counselors were telling students that if they didn’t get some college they were doomed to lower-end jobs.

“One result of this is that the US is probably close to maxed out on education. There are inevitably some things that can happen to help certain students go further. There are definitely things that can be done to make the whole process more efficient. However, I don’t think this is an area of huge untapped potential. I don’t see any technology that is going to take current High School dropouts and turn them into Ph.D.s in STEM fields…

“I think the stories from the Occupy movement of people who had degrees and couldn’t find jobs are a parallel to the kid in the 1920s who was told to drop out of school and start working the farm. While it is easy to take a condescending view of the 20-somethings who racked up a whole bunch of debt majoring in some field from the Humanities and can’t find a job today, doing so is not only non-productive, it really isn’t fair. Those kids grew up being told that they should get a college degree in something they loved and that would get them a job. That advice has worked for decades. The people giving the advice didn’t lie, they simply didn’t have 20/20 foresight into the future. (Something it is impossible to blame people for.)

“There is a difference between today’s Occupiers and the unemployed farm hand of 1930 though, the unemployed farm hand had a lot of untapped potential when it came to education. The youth of today typically don’t. Yes, they could go learn something different to give them more desirable skills, but I fear that doesn’t scale the same way. Plus, many of these people chose the direction they went because they found that those other areas (which might be better for jobs) didn’t work well for them.”  (link)

I mostly agree with the basic premise here. The only clarification I would add is that the limiting factor may not be people’s intellectual capability but their interest and ambition. I actually have a lot of faith in people’s potential when properly educated. But harnessing that potential requires willpower and drive that may be in short supply.

In other words, there are probably lots of people who intellectually speaking could become STEM field Ph.D.s but never will, because they lack the desire to follow through with such a field of study. So the next educational challenges will not only be about transferring knowledge, but also about finding new ways to incentivize people and make the learning process fun. For this reason I expect future education to increasingly take the form of games. The educational problem could be seen as a game design problem.

However, the bottom line remains the same. We are probably not going to be able to address the upcoming automation revolution with education alone. And the previous industrial revolution may have limited lessons to teach us in terms of providing a blueprint for the way ahead.

Graham: How Do You Define ‘Property?’

Paul Graham’s newest essay on defining property has an analogy I like a lot:

“As a child I read a book of stories about a famous judge in eighteenth century Japan called Ooka Tadasuke. One of the cases he decided was brought by the owner of a food shop. A poor student who could afford only rice was eating his rice while enjoying the delicious cooking smells coming from the food shop. The owner wanted the student to pay for the smells he was enjoying. The student was stealing his smells!

“This story often comes to mind when I hear the RIAA and MPAA accusing people of stealing music and movies.”

If that’s not enough to get you to read the whole thing, there’s also this, in the footnotes, about the interconnectedness of technological progress and cultural definitions of property:

“Change in the definition of property is driven mostly by technological progress, however, and since technological progress is accelerating, so presumably will the rate of change in the definition of property. Which means it’s all the more important for societies to be able to respond gracefully to such changes, because they will come at an ever increasing rate.”

Video Simulation of How Autonomous Cars Will Navigate Intersections

This fun little video offers an aerial simulation of how a crowd of self-driving cars might navigate a busy intersection. I had two reactions upon watching:

  1. I am excited about how efficient self-driving cars are going to be at using existing roads. I hope to spend a lot less time waiting in traffic.
  2. If you imagine yourself inside one of the vehicles in this simulation, it’s clear how scary riding in one of these cars might be for a first-timer. The cars routinely head right towards each other only to swerve away at the last moment.

Job Creation and Job Destruction are Both Relentless, and the Small Difference Between Them is What We Call Prosperity

Andrew McAfee just posted about a live discussion he had at SXSW with Tim O’Reilly:

The huge question is whether enough labor-making technical innovation will take place to offset the labor-saving innovation that’s also going on, and that is (according to me and Erik) only going to accelerate in the near future. Creative destruction is the central dynamic of capitalism — simultaneous creation and destruction of industries, companies, and jobs.

There is no economic law, however, that says that job creation has to stay slightly ahead of job destruction. As the Wall St. Journal’s Holman W. Jenkins, Jr. says, “Job creation and destruction are both relentless… The small difference between the two is what we call prosperity.” If that small difference turns negative instead of positive, due to technological progress and other factors, we will experience something quite different than prosperity.

Read the whole post.

Apple Now Bigger By Market Cap Than Entire US Retail Sector

From Zero Hedge:

A company whose value is dependent on the continued success of two key products, now has a larger market capitalization (at $542 billion), than the entire US retail sector (as defined by the S&P 500).

Apple sells computers of various sizes, and this maker of general-purpose machines has reached a symbolic milestone in passing the retail sector’s market cap. The milestone has to do with stock strategy and market valuation as well as other complicating factors, so we can’t read too much into it. But it’s easy to imagine a company like Apple eventually selling more computers than the world sells non-computers. The everything box ultimately encompasses all other products.