1972 Film “Future Shock” Narrated By Orson Welles

This documentary film is based upon the Alvin Toffler book of the same name. As you’d expect from a work of futurism made in the early seventies, there are some dated-sounding predictions in here. But on the whole, if you’re willing to look past some of the sillier details, the central thesis holds up fairly well. I have experienced lots of little “future shock” moments in the past decade, from the first time I saw a camera perform facial recognition, to the first time I saw Sebastian Thrun describe autonomous cars at the Singularity Summit in 2006. Since I’ve started following tech news more aggressively, it seems like almost every week there is some crazy new technology coming down the pipeline, most recently the announcement that we might have augmented reality glasses consumer-ready within a year. In fact, it seems like I can scarcely think of an idea before some company announces that the concept in my head is already well on its way to becoming reality. Would-be science fiction authors beware! In any event, I would be lying if I said that all these rapid-fire technological advances aren’t occasionally accompanied by a slight feeling of nausea, almost as if I’m on a ride that is going too fast. I would be lying if I said that I never feel “future shocked.”

Are Augmented Reality Glasses Really Coming THIS YEAR?

This is just a rumor right now, and it might not pan out, but if true it is a great example of how difficult it is to escape the linear thinking trap. The smartphone has existed in some form since 2001, but it didn’t reach the price and quality needed to represent a worldwide market until 2007.

On this site we’ve spent a lot of time reading about and projecting technology trends. We try to avoid tossing out ideas just because they seem radical. But I’m finding it hard to accept that the smartphone, which barely existed ten years ago and has driven growth in world computing for only 5 years, is about to be made obsolete.

Let’s say for the sake of argument that Google really does introduce useful Android-powered smartglasses this year, and that they really cost $250-600. Chances are these represent the 2001 Palm OS version of what augmented reality glasses will eventually be capable of. But should we expect them to take even the 6 years smartphones needed to catch up? First, since this is running Android, it’s not a new software stack, but a smaller hardware package. That makes it more like the shift from desktop to laptop computers; for the most part, the software is already there. Second, these shifts are trending shorter over time. It took nearly 4 decades to shift from mainframes to PCs, more than 2 decades to shift to laptops, and the mobile shift took 6 years. Perhaps we’ll see mature augmented reality glasses 3 years after introduction.
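If you take those rough durations at face value and assume each shift shrinks by a constant factor (an assumption of mine, not an established model), a quick back-of-the-envelope calculation lands in the same ballpark as the 3-year guess:

```python
# Back-of-the-envelope check on shrinking platform-shift durations.
# The durations are the rough figures from the paragraph above; the
# constant-shrink-factor assumption is mine, not an established model.

durations = [40, 20, 6]  # mainframe->PC, PC->laptop, laptop->smartphone (years)

# Average shrink factor between successive shifts (geometric mean).
avg_ratio = (durations[-1] / durations[0]) ** (1 / (len(durations) - 1))

next_duration = durations[-1] * avg_ratio
print(f"average shrink factor per shift: {avg_ratio:.2f}")
print(f"projected smartglasses maturation: ~{next_duration:.0f} years")
```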

As hard as it is, I think we need to expect more changes like this, and we need to expect them to come even more rapidly. An 11-year window (let alone a 30-plus-year one) in which a particular type of computing product is the most advanced available may be something we never see again.

Abundance of Links — Getting Everybody Online, The Rise of the Darknet, One Possible Future Job

1. Can We Get Everyone Online by 2018?

“The 550 Challenge – the world borderless by February 3, 2018 – promotes the expansion of Internet access to include everyone on earth by the 550th anniversary of Johannes Gutenberg’s death. Gutenberg died on February 3, 1468 in relative obscurity before the printing press got credit for ending the Dark Ages and setting in motion 200 years of accelerated progress in art, literature, and learning known as the Renaissance.”

2. The Darknet: One More Reason Why Artificial Scarcity May Be Impossible

“One of the most striking examples of a darknet comes from Mexico, where it was recently discovered that the Zetas drug cartel has set up several private cell phone and radio repeater systems in the state of Veracruz as well as along 500 miles of the Texas-Mexico border. Some portions of this system were in remote areas, were powered by solar cells, and used commercially available components.”

3. The (Supposed) Hot Tech Gig of 2022: Data Scientist

“A decade from now the smart techies who decided to become app developers may wish they had taken an applied-mathematics class or two. The coming deluge of data (more on that in a moment) will create demand for a new kind of computer scientist — a gig that’s one part mathematician, one part product-development guru, and one part detective.”

When asked to imagine the jobs of the future, people often come up with ideas like “data scientist.” First of all, it goes without saying that this would likely be an elite job open to only a small segment of the population. Second, what these data scientists do, largely math and pattern recognition, should be just as susceptible to automation as any other kind of work, if not more so. By 2022 we will have much faster computers and much more advanced AI. So while we will have more data that needs analyzing, we won’t necessarily need more humans to help us analyze it. The golden era of the “data scientist” may turn out to be more like the next three to five years. So maybe hold off on rushing back to school for that math degree.

Economic Charts From 2011: Health Care Costs and The Toil Index

Here is a link to eighteen charts, selected by eighteen economists and policy makers as their “favorite charts of 2011.”

You should take the time to look through all of them, but for now I picked out two I was interested in commenting on:

(1) Health Care Costs Extrapolated All the Way Out to 2050

This chart was selected by Rep. Paul Ryan. In general, I think it is important to be highly skeptical of charts that extrapolate costs forty years into the future. I have no idea what methodology went into making this chart, but I am willing to bet it did not rigorously model all the potential impacts of forty years of technological progress, such as complex and interlocking developments in biotech, artificial intelligence, and nanotechnology. I understand that we have financial commitments to an increasingly aging population and that this is likely to drive health care expenses up by some unspecified amount. But what about the effect of technological progress driving those same costs back down? Trying to pin down the cost of health care even fifteen years from now seems to me to be a huge exercise in guesswork. Now, if you chopped 80% off the right side of this chart, maybe you’d have something a bit more credible.
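To make the skepticism concrete: long-horizon extrapolations are extremely sensitive to the assumed growth rate. Here is an illustrative calculation with made-up numbers (none of them come from the chart itself):

```python
# Why 40-year cost extrapolations are fragile: small differences in
# the assumed annual growth rate compound into enormous differences
# by 2050. These rates are hypothetical, not taken from the chart.

base_index = 100  # health care cost index today (arbitrary units)
years = 40

for annual_growth in (0.03, 0.05, 0.07):
    projected = base_index * (1 + annual_growth) ** years
    print(f"{annual_growth:.0%}/year -> index {projected:,.0f} in {years} years")
```

A mere four-point spread in the assumed growth rate produces endpoints that differ by a factor of more than four, which is exactly why the right-hand side of such a chart deserves so little confidence.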

(2) The Toil Index

This chart was selected by Robert Frank at Cornell University. It represents the “effort required to rent a house served by a school of average quality.”

When confronted with the specter of technological unemployment, optimistic libertarians will often argue that such developments don’t matter because even though people won’t be able to work as much, technology will make everything so cheap that people will still be able to pay their way with relative ease.

While I don’t discount this as a long-term possibility, in the near term I worry about technology lowering the value of human labor a lot faster than it lowers the cost of living. Such a dynamic could explain the results we see in this chart: the value of a human worker has dropped, while the value of a scarce resource like housing has not. The result is more toil for the average person.
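A toy calculation of that dynamic, using made-up numbers: hold the price of the scarce good fixed, let wages fall, and the hours of toil needed to pay for it go up.

```python
# Toy version of the "toil" dynamic: if technology pushes wages down
# while a scarce good like housing stays expensive, the hours of work
# needed to pay for it rise. All numbers are made up for illustration.

def toil_hours(monthly_rent: float, hourly_wage: float) -> float:
    """Hours of work per month needed to cover rent."""
    return monthly_rent / hourly_wage

rent = 1500.0                         # scarce resource: price holds steady
wage_before, wage_after = 25.0, 20.0  # value of labor falls

print(f"before: {toil_hours(rent, wage_before):.0f} hours/month")
print(f"after:  {toil_hours(rent, wage_after):.0f} hours/month")
```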

Ten Years Is A Long Time

This chart from the Hamilton Project shows the time it will take to close the jobs gap opened by the Great Recession, given various levels of job creation. It should be noted that the three levels graphed here are all higher than any month’s actual job creation since the crash (with the possible exception of the run-up to the 2010 Census). The most conservative estimate, which current evidence suggests is optimistic, puts recovery at 2022, roughly ten years from now. As we consider what might need to be done to fix our economy, it is imperative that we plan for the kinds of technological changes that will occur in the intervening (at least) decade. Despite the puzzling claims of some pundits, it’s pretty clear that the last ten years have seen massive changes in technological capability. To take a few examples: ten years ago there was no such thing as an iPod or a smartphone, and I didn’t even own a regular cell phone. No car had ever successfully driven itself. A $2000 computer offered about 1 GHz of processing speed, on a single core. Wireless internet was brand-new and little used. A lot can change in ten years, and it baffles me that virtually no one whose job it is to plan for the future of our economy takes into account the changes that are likely to occur.
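The arithmetic behind a chart like this is simple enough to sketch. The numbers below are illustrative placeholders, not the Hamilton Project’s actual figures, but they show how strongly the recovery date depends on the assumed pace of job creation:

```python
# Rough version of the arithmetic behind a jobs-gap chart: the gap
# closes only at the rate by which monthly job creation exceeds
# labor-force growth. All numbers here are illustrative; they are
# not the Hamilton Project's actual inputs.

jobs_gap = 11_000_000        # jobs lost plus new workers needing jobs
labor_force_growth = 90_000  # new labor-market entrants per month

for monthly_jobs in (200_000, 300_000, 400_000):
    net_gain = monthly_jobs - labor_force_growth
    years = jobs_gap / net_gain / 12
    print(f"{monthly_jobs:,} jobs/month -> gap closed in ~{years:.0f} years")
```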

Gartner Hype Cycle: A Framework For Describing the Emergence of New Technologies

I recently discovered the “Gartner Hype Cycle,” which describes people’s changing perceptions of new technologies. According to the model, new technologies move through five stages:

  1. “Technology Trigger” — The first phase of a hype cycle is the “technology trigger” or breakthrough, product launch or other event that generates significant press and interest.
  2. “Peak of Inflated Expectations” — In the next phase, a frenzy of publicity typically generates over-enthusiasm and unrealistic expectations. There may be some successful applications of a technology, but there are typically more failures.
  3. “Trough of Disillusionment” — Technologies enter the “trough of disillusionment” because they fail to meet expectations and quickly become unfashionable. Consequently, the press usually abandons the topic and the technology.
  4. “Slope of Enlightenment” — Although the press may have stopped covering the technology, some businesses continue through the “slope of enlightenment” and experiment to understand the benefits and practical application of the technology.
  5. “Plateau of Productivity” — A technology reaches the “plateau of productivity” as the benefits of it become widely demonstrated and accepted. The technology becomes increasingly stable and evolves in second and third generations. The final height of the plateau varies according to whether the technology is broadly applicable or benefits only a niche market.

Here is the concept in graph form:
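(For readers who want to play with the shape themselves, below is a minimal matplotlib sketch of a stylized hype-cycle curve: a sharp early hype spike plus a slower S-curve of genuine maturity. The functional form and parameters are my own invention; Gartner’s curve is a conceptual drawing, not a formula.)

```python
# Stylized hype-cycle curve: a sharp early "hype" spike plus a slower
# S-curve of genuine maturity. The functional form and parameters are
# my own choices for illustration, not anything published by Gartner.

import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 10, 500)
hype = 1.6 * t * np.exp(-t)              # fast rise, inflated peak, collapse
maturity = 0.5 / (1 + np.exp(-(t - 6)))  # slow climb to the plateau
visibility = hype + maturity

plt.plot(t, visibility)
plt.xlabel("time")
plt.ylabel("expectations / visibility")
plt.title("Stylized Gartner hype cycle")
plt.show()
```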

And here is a 2010-era graph labeled with specific emerging technologies:

I think this framework is a good way of describing a phenomenon I have definitely perceived over the course of my own life. The most obvious example would be the internet itself, which went through all of these stages. In addition to the market phenomenon of the dot-com bubble and crash, general attitudes about the internet have followed this trajectory. At first people were full of hyperbole about how amazing and utopian the internet would be. Then people started freaking out about how spam, pornography, online predators, information glut, and other supposed problems were going to ruin everything. Finally, people have settled into the middle, realizing that the internet is here to stay and that it will indeed change everything, just not necessarily overnight.

This insight also has conceptual similarities to Amara’s Law: the idea that people tend to overestimate a technology’s effects in the short term and underestimate them in the long term.

The Fallacy of Appealing to Nature

From Wikipedia:

“An appeal to nature can sometimes be considered a fallacy of relevance…
General form of this type of argument:
N is natural. Therefore, N is good or right.
U is unnatural. Therefore, U is bad or wrong.
In some contexts, the meanings of “nature” and “natural” can be vague, leading to unintended associations with other concepts…
Skeptic Julian Baggini argues: ‘[E]ven if we can agree that some things are natural and some are not, what follows from this? The answer is: nothing. There is no factual reason to suppose that what is natural is good (or at least better) and what is unnatural is bad (or at least worse).'”

How often do you hear specious reasoning of this sort? “Homosexuality is unnatural. Product X is better than product Y because it uses all natural ingredients. Trying to defeat aging is unnatural, since to die of old age is to die of natural causes.”

The meaning of “natural” constantly changes. Often the word is used to argue against new technologies, since it is assumed that whatever humans do is by definition unnatural. Beehives are natural, while skyscrapers are somehow unnatural.