Is Net Neutrality Really a “Lose-Lose”? (Marc Andreessen says so)

Tyler Cowen points to this great Marc Andreessen interview in the Washington Post that features him saying the following about net neutrality:

So, I think the net neutrality issue is very difficult. I think it’s a lose-lose. It’s a good idea in theory because it basically appeals to this very powerful idea of permissionless innovation. But at the same time, I think that a pure net neutrality view is difficult to sustain if you also want to have continued investment in broadband networks. If you’re a large telco right now, you spend on the order of $20 billion a year on capex. You need to know how you’re going to get a return on that investment. If you have these pure net neutrality rules where you can never charge a company like Netflix anything, you’re not ever going to get a return on continued network investment — which means you’ll stop investing in the network. And I would not want to be sitting here 10 or 20 years from now with the same broadband speeds we’re getting today. So the challenge, I think, is to accommodate both of those goals, which is a very difficult thing to do. And I don’t envy the FCC and the complexity of what they’re trying to do.

The ultimate answer would be if you had three or four or five broadband providers to every house. And I think you actually have the potential for that depending on how things play out from here. You’ve got the cable companies; you’ve got the telcos. Google Fiber is expanding very fast, and I think it’s going to be a very serious nationwide and maybe ultimately worldwide effort. I think that’s going to be a much bigger scale in five years.

So, you can imagine a world in which there are five competitors to every home for broadband: telcos, cable, Google Fiber, mobile carriers and unlicensed spectrum. In that world, net neutrality is a much less central issue, because if you’ve got competition, if one of your providers started to screw with you, you’d just switch to another one of your providers.

This covers the central concern very well, I think, though it’s surprising to me that Andreessen acts as if there’s no way to reconcile the twin goals of ensuring that investors in infrastructure can make a return and ensuring that the infrastructure they build is designed according to rules that serve the public interest. Of course there is a way: government subsidy. This is not a new idea. It has been used in the past to build railroads, the phone network, and so on.

If Comcast and Verizon (and Google and anyone else with a credible plan to build) get $15 billion a year (say) in subsidies to build out a neutral network infrastructure, what’s the problem? Investors are now only in the hole $5 billion a year instead of $20 billion, so they are much more likely to make a reasonable return, and society gets faster network speeds and permissionless innovation (the kind that allowed Andreessen to create one of the first web browsers, Mosaic, and to market it as Netscape, back before he became a VC). This seems to me like a “win-win.”

Further, Andreessen neglects to mention the best reason for network neutrality: it’s a better network design. It’s hard to anticipate all future uses of a given network. The phone network was originally designed for voice, and yet it gave rise to the consumer internet; cable was originally designed for television, but it’s now being repurposed for data. We don’t know whether today’s heavy traffic uses (e.g. streaming video) will be anything like tomorrow’s, so it’s better to build a network that can handle any type of data than to optimize the network for particular uses that may not be as important as we assume.

For a more in-depth look at the complex issue of net neutrality, check out our recent Review the Future podcast, What is the Future of Net Neutrality?

Implementing a Basic Income via a Digital Currency

The idea of a basic income is rather old, but it has gained renewed interest in recent times. A basic income is appealing as a solution to both poverty and possible future technological unemployment.

But how do you pay for a basic income? Could it be paid for through the very act of money creation?

Modern monetary systems typically feature mechanisms for both creating and destroying money. In virtual economies these mechanisms are referred to as “faucets” and “sinks” respectively. How you define faucets and sinks says a lot about how your monetary system works and who it benefits.

Let’s use Bitcoin as an example. Bitcoin creates money through a computationally intensive “mining” process that leaks new coins into the system at a predetermined rate. This faucet rewards people who have a lot of computational power to spend on this mining process. It also rewards early adopters since the faucet is slated to be slowed down and eventually turned off. Bitcoin doesn’t really feature any sinks, other than the fact that once bitcoins are lost they can never be retrieved. So one might say that carelessness is a kind of sink under the Bitcoin system.
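Bitcoin’s faucet schedule is simple enough to sketch in a few lines of Python: the block reward starts at 50 coins and halves every 210,000 blocks (roughly every four years), which is why the total supply converges on about 21 million. (This is a simplification for illustration; the real protocol does the arithmetic in integer satoshis.)

```python
# Sketch of Bitcoin's "faucet": the block reward starts at 50 BTC and halves
# every 210,000 blocks, so new issuance tapers toward zero over time.
def total_supply_btc():
    reward = 50.0
    supply = 0.0
    while reward >= 1e-8:  # rewards below one satoshi round to zero
        supply += 210_000 * reward
        reward /= 2
    return supply

print(round(total_supply_btc() / 1e6, 2))  # converges on ~21 million coins
```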

Rewarding early adopters and those with computational power does make a certain kind of sense. Early adopters need to be incentivized or the currency might never take off. And computational power is a stable, scarce resource that, in the case of Bitcoin, is used to perform the critical maintenance operations that keep the currency running.

The dark side, however, is that Bitcoin is destined to create a new moneyed elite made up of this coalition of early adopters and computational donors. On the surface, that does not strike one as the most democratic monetary system possible.

Instead, one might imagine a currency that creates an equal amount of money in everyone’s wallet, every year. Such a system has both upsides and downsides, and there would be a lot of kinks to work out. That said, I think I might prefer such a system to Bitcoin.

One upside is that a basic income would be built directly into the system of money itself. If properly executed, everyone using the currency would be automatically insulated from the worst kinds of poverty. You would get a social safety net without the taxes.

Another upside is that this is probably an even better incentive for early adoption than Bitcoin’s deflationary model. Start using the currency and start getting an income. For many people that would be hard to turn down.

One downside is that to guard against abuses of the system (e.g. creating two accounts and collecting two incomes) you would have to give up anonymity. Anonymity is a big value for a lot of people. However, an even larger group of people probably doesn’t care much about anonymity. They’ve already accepted and gotten used to our current, not very anonymous monetary system, and for them this would simply be a lateral move. Furthermore, if you buy contemporary arguments about the inevitable arrival of a post-privacy society, then anonymity may be an impossible goal to strive for anyway.

Another potential downside is that you would need a centralized authority to manage such a system. Again, this is going to be a problem for certain libertarian-leaning users, but it may not be much of a problem for the average Joe. Arguably, transparency and accountability are more important than decentralization for decentralization’s sake, and those attributes could be preserved in a well-designed centralized system. In addition, one could argue that Bitcoin, even though it is decentralized in theory, still leads to a form of centralized power—namely the early adopters and computational donors mentioned above.

Such a system might also need sinks to protect against inflation. One idea might be to give the money an expiration date. Another would be to empower the central authority to sell some sort of useful product or service and then destroy the money after the purchase is made. There are countless ways this could be designed. The possibility space is wide open, and that is what is most exciting to me about digital currencies. (Though of course one must contend with the fact that governments are not always going to look kindly on monetary experimentation…)
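To make the faucet-and-sink pairing concrete, here is a toy Python sketch of the currency described above. Everything in it is hypothetical: the `DividendCurrency` class, the per-period dividend, and the five-period coin lifetime are illustrative choices, not a real protocol design.

```python
# Toy model: a faucet that credits every registered wallet equally each period,
# and a sink that expires coins a fixed number of periods after issuance.
from collections import defaultdict

class DividendCurrency:
    def __init__(self, dividend=1000, lifetime=5):
        self.dividend = dividend  # amount issued per wallet per period
        self.lifetime = lifetime  # periods before issued coins expire
        self.period = 0
        # balances[wallet] is a list of (expiry_period, amount) lots
        self.balances = defaultdict(list)

    def register(self, wallet):
        self.balances[wallet]  # touching the defaultdict creates the wallet

    def advance_period(self):
        """Run the faucet for everyone, then drain expired coins."""
        self.period += 1
        for wallet in self.balances:
            self.balances[wallet].append((self.period + self.lifetime, self.dividend))
        for wallet, lots in self.balances.items():
            self.balances[wallet] = [(exp, amt) for exp, amt in lots if exp > self.period]

    def balance(self, wallet):
        return sum(amt for _, amt in self.balances[wallet])

econ = DividendCurrency(dividend=1000, lifetime=5)
econ.register("alice")
for _ in range(8):
    econ.advance_period()
print(econ.balance("alice"))  # holds at 5000: expiry caps hoarding at dividend * lifetime
```

Note how the expiration sink automatically bounds the money supply at `dividend * lifetime` per wallet, no matter how long someone hoards; that is one simple answer to the inflation worry.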

Original Script for “Let Go,” an Indie Sci-Fi Movie Set in the Near Future

This is a bit outside what we usually do here on the blog, but my co-blogger Ted Kupper and I have finished a feature-length script called Let Go. It’s an indie sci-fi movie in roughly the same genre as ‘Her’ or ‘Robot and Frank’. It deals with many of the same issues we talk about on this blog – technological unemployment, virtual reality, augmented reality, accelerating change, surveillance, etc. We’re very proud of the script, but needless to say it’s not easy to get a movie like this made. So today we’ve decided to put the whole script online in order to get it in front of more people. If you’re the sort of person who reads scripts, we’re confident you’ll find it a good read. Please feel free to share. Find the script here.

Taxonomy of Technological Unemployment Solutions (and Defeaters)

This post represents my latest attempt to categorize the possible solutions to technological unemployment. It’s largely based on episode 14 of my Review the Future podcast, so for a more detailed treatment of this topic you can listen here.

To begin, I’d like to talk about some of the defeaters of technological unemployment, developments that could mean either (a) it won’t happen or (b) it won’t be a problem.

DEFEATERS

Lowered Cost of Living - The bounty produced by new technologies could be so great that high rates of unemployment may not be that big a deal. If current trends reverse, and advanced technologies begin to drive down the price of key goods like housing, health care, and education, then people might be able to live reasonably comfortable lives with very little income. The small income required could come from just a few odd jobs performed throughout the year, or alternately people might cluster in households where the income of those fortunate enough to still have work is shared and used to pay for everyone’s expenses. In addition, with high returns to capital and a low cost of living, interest on even very small investments might allow for a reasonably comfortable life. And for those who still fall through the cracks, non-profits and charities might step in. Such philanthropic activities will be greatly empowered, since the cost of doing good will be much lower than ever before.

Intelligence Augmentation - If people are losing jobs to smart machines, then one solution is to make people smarter. The obvious first step is simply better education, enabled by technological advances such as online distribution, augmented reality, gamification, individualized learning environments, and so on. However, it may eventually become possible to actually enhance human intelligence, whether through drugs, genetics, or brain implants. As a result, people could become upgradeable in the way that machines are today, thus closing the competitive gap between humans and machines.

New Demands and New Platforms - This outcome has two components. First, there may be a growing market for new kinds of goods that cannot be automated away or that directly monetize “humanness.” These include positional goods like status, time-limited goods like attention, and human-centric goods like shared experiences. Second, new peer-to-peer platforms might enable the monetization of these somewhat intangible goods in a way that allows the participation of significant portions of the population.

SOLUTIONS

If the above defeaters do not happen (or do not happen soon enough) then technological unemployment may require solutions that are more governmental in nature. I have broken these solutions into four categories.

Technological Relinquishment - If technology is causing the problem, we could just give up certain technologies. In its most extreme form, relinquishment would mean banning certain technologies outright. However, there are many softer forms of relinquishment, such as incentivizing businesses to hire human workers instead of using machines. While on the surface such policies might be seen as “pro-human,” they can just as easily be viewed as “anti-technology.” The idea here would be to limit the spread of technology into areas where human jobs are being threatened.

Artificial Scarcity - A world of technological unemployment is also probably a world of abundance. This abundance has two dimensions: labor and goods. An abundance of human labor will exist because more people will be willing to work than can find jobs. Thus, we could artificially constrain the supply of labor by limiting the number of hours people can work. This could be done through shorter work days, shorter work weeks, or shorter careers (early retirement).

An abundance of goods will exist because of the ever-increasing digitization of everything. Digital goods can be hard to monetize because of their near-zero marginal cost of reproduction. Thus digital goods are vulnerable to both cheap imitations and piracy. Therefore one solution might be to constrain the supply of goods artificially through the use of intellectual property and digital rights management. We already do this today, but it might be theoretically possible to institute a revised version of this system that better compensates the growing numbers of amateurs producing valuable digital content.

Expanded Social Safety Nets - Another solution is to expand our social safety nets to guarantee the livelihood of the growing unemployed. There are many ways to implement such safety nets. Here’s a partial list of methods, ranging from the more decentralized to the more paternalistic.

  • Unconditional Basic Income (just paying people for nothing)
  • Conditional Money Transfers (that require means testing or participation in some sort of program)
  • Vouchers (to be spent on only certain high priority needs)
  • Direct Provisioning (just give people food, housing, health care, directly)
  • Government Created Jobs (paying people to build infrastructure, do community service, read books, dig virtual ditches, etc)

Automation Socialism - This would be a reorganization of society along socialist lines, but with the added benefit of automation to solve some of the traditional problems of socialism like worker incentives (machines don’t need to be incentivized) and inefficient distribution of resources (technologically enabled abundance could make any efficiency loss negligible).

Wall-E, The Sofalarity, and the Problem with “Super Now” Predictions

The Wall-E vision of the future, or what this New Yorker article dubs the “sofalarity”, is not believable to me. It’s a classic mistake of prediction that I like to refer to as “super now.” When making super now predictions, people simply take things that are happening right now and imagine that the future will be just like now only “more extreme.”

Right now in developed countries we are experiencing a technological trade-off in which an abundance of fatty foods and cheap entertainment options lures many of us into poor health outcomes. Obesity is a growing problem. Therefore, the argument goes, in the future we will become formless blobs collapsing under the weight of our own gluttony. What the super now futurist fails to recognize is that this particular trade-off is not only a relatively new phenomenon in human history but also likely to be short-lived.

Technological progress tends to create unintended consequences, but those consequences in turn create pressure to address and defeat them. And so, as technology advances, we will most likely engineer healthier, better-tasting foods and find better ways to encourage exercise. Further down the line, I expect we will master human biology such that it will be possible to eat whatever we want and be stationary all day without becoming unhealthy. Will there be new trade-offs in this future? Almost certainly. But we don’t know what they are yet. And there’s no reason to assume these new trade-offs will look anything like the ones we only recently started experiencing in the late 20th century.

A CRITIQUE OF THE SECOND MACHINE AGE (Or the Need to Shed our Romantic Ideas about Wage Labor)

(This post is based on episode 11 of the Review the Future Podcast. For a more detailed treatment of this topic you can subscribe to the podcast via iTunes or download it from reviewthefuture.com.)

What is this Book?

The Second Machine Age is a book by Erik Brynjolfsson and Andrew McAfee that explores the impacts of new technologies on the economy. For those who are familiar with such topics, it’s not likely this book will teach you much you don’t already know. However, for the layperson, this book is an extremely well written and clear introduction to the economic pros and cons of our current digital revolution. Because of the skillful way it stitches everything together, Second Machine Age has a good chance of being one of the most important nonfiction books of 2014.

The Goal of this Blog Post

On the whole, we like The Second Machine Age. We think it tells a plausible story and for the most part we agree with its perspective. However, we have criticisms of one of the book’s later chapters, the one entitled “Long-Term Recommendations.” Thus the primary goal of this article is to articulate those criticisms. But first, for the sake of background, we will summarize some of the book’s main arguments.

A Quick Summary of Second Machine Age

According to Brynjolfsson and McAfee, exponential gains in computing, digitization of goods, and recombinant innovation are all driving rapid technological growth. Technology has begun to perform advanced mental tasks—like driving cars and understanding human speech—that were previously thought impossible. And in economic terms, these new technologies, according to the authors, are increasing both the ‘bounty’ and the ‘spread.’

Bounty is a blanket term for all of the productivity and quality of life gains provided by new technologies. Brynjolfsson and McAfee feel that the bounty of technology is growing tremendously, but, because of the limitations of our economic measures, we have a tendency to greatly underestimate the progress we are making.

Spread is a euphemism for inequality. According to the authors, technology is increasing spread because of (a) skill biased technical change, (b) capital’s increasing market share relative to labor, and (c) superstar economics. All three of these trends have some evidence backing them up, and the supposition that technology is the primary driver of these trends makes a great deal of sense.

The authors also suggest that technological unemployment—a phenomenon long thought of as impossible by mainstream economists—is in fact possible. They discuss three arguments for how technological unemployment could occur:

  1. In industries subject to inelastic demand, automation can lower the price of goods without creating any additional demand for those goods (and thus for the labor that makes them). Over the long term, as human needs become relatively more satiated, this inelasticity could even apply to the economy as a whole. Such an outcome would directly undermine the Luddite fallacy, which is the argument economists traditionally use to dismiss technological unemployment.
  2. If technological change is fast enough, it could outpace the speed at which workers are able to retrain and find new jobs, thereby turning short term frictional unemployment into long term structural unemployment.
  3. There is a floor on how low wages can go. If automation technology continues to drive wages down, those wages could cross a threshold below which the arrangement is not worth the employee’s time. Eventually the value of certain workers could fall so low that they are not worth hiring, even at zero price.
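The first argument above can be made concrete with a small calculation. Suppose automation halves both the price of a good and the labor needed per unit; whether demand for labor grows or shrinks then depends entirely on the demand elasticity. (The function and all the numbers below are illustrative assumptions of ours, not figures from the book.)

```python
# Hypothetical illustration of argument 1: automation halves a good's price.
# With elastic demand, quantity more than doubles and labor demand can hold up;
# with inelastic demand, quantity barely moves and labor demand falls.
def quantity_after_price_cut(q0, pct_price_drop, elasticity):
    # constant-elasticity demand curve: q = q0 * (p/p0) ** (-elasticity)
    return q0 * (1 - pct_price_drop) ** (-elasticity)

q0 = 100
for elasticity in (2.0, 0.3):  # elastic vs. inelastic demand
    q1 = quantity_after_price_cut(q0, 0.5, elasticity)
    # suppose automation also halves the labor needed per unit
    labor_change = (q1 * 0.5) / q0 - 1
    print(f"elasticity {elasticity}: quantity {q1:.0f}, labor {labor_change:+.0%}")
```

With elasticity 2.0, quantity quadruples and labor demand doubles; with elasticity 0.3, quantity rises only about 23% while labor demand falls by nearly 40%.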

Policy Recommendations

The book makes several short term policy recommendations. We will not list them all here, as they represent a suite of largely uncontroversial proposals designed to speed up innovation and growth. These proposals, if they were enacted, would conceivably help to get our economy working more efficiently and increase our ability to match workers to the jobs that still need doing. They would also grow the technological bounty that makes all of our lives better. It’s hard not to agree with most of these proposals.

However, if we accept the premise that “thinking” machines will encroach further and further into the domain of human skills, and that over the long term we are destined for not just rampant inequality but also wide-scale technological unemployment, then all of the short-term proposals provided by this book could actually accelerate unemployment. After all, more innovation means more and better machines, which ultimately could mean more displaced labor.

Long Term Recommendations

In this chapter, the authors address long-term concerns. In a near future where androids potentially substitute for most human labor, will the standard economics playbook still work?

Brynjolfsson and McAfee are clear about two major preferences:

  1. They do not want to slow or halt technological progress.
  2. They do not want to turn away from capitalism, by which they mean, “a decentralized economic system of production and exchange in which most of the means of production are in private hands (as opposed to belonging to the government), where most exchange is voluntary (no one can force you to sign a contract against your will), and where most goods have prices that vary based on relative supply and demand instead of being fixed by a central authority.”

We agree with these two premises. So far, so good.

Should we Adopt a Basic Income?

The authors go on to discuss an old idea: the basic income. This is a potential solution to the failure mode of capitalism known as technological unemployment. If an increasingly large number of people can no longer find gainful employment, then the simplest solution might be to just pay everyone a basic income. This income would be given out to everyone in the country regardless of their circumstances. Thus it would be universal and unconditional. Such a basic income would ensure that everyone has a minimum standard of living. People would still be free to pursue additional income in the marketplace, and capitalism would proceed as usual.

Brynjolfsson and McAfee do a quick survey of all of the varied thinkers, both conservative and liberal, who have supported this idea in the past. Here’s a short list of people who favored a basic income:

Thomas Paine, Bertrand Russell, Martin Luther King Jr., James Tobin, Paul Samuelson, John Kenneth Galbraith, Milton Friedman, Friedrich Hayek, Richard Nixon

With such wide ranging endorsement for the idea of a basic income, one might expect Brynjolfsson and McAfee to jump on the bandwagon and endorse the idea themselves.

But, no! Basic income is apparently “not their first choice.” Why?

Because work, they argue, is fundamentally important to people’s mental wellbeing. If we adopted a basic income, people might not be adequately incentivized to work, and therefore people and society would suffer on some deep psychological level.

To support this idea, Brynjolfsson and McAfee field a series of arguments.

Argument One: A Quote From Voltaire

The French Enlightenment philosopher Voltaire once said, “Work saves a man from three great evils: boredom, vice, and need.” Now, Voltaire was a pretty smart guy, but whether someone from the eighteenth century has anything helpful to say about today’s technological reality seems doubtful. But for the sake of argument, let’s go ahead and examine this quote.

First of all, we’re not sure what Voltaire meant by “work.” Work can mean a lot of things. Work, in the broadest sense, could mean activities you do to keep your life in order, such as cleaning your bathroom or going grocery shopping. It could also consist of amateur hobbies that you undertake for fun, such as writing overly long blog posts.

However, this is not the definition of work that Brynjolfsson and McAfee are implying. They are implying a much more narrow definition of work as ‘wage labor’—meaning work done to serve the needs of the marketplace. Wage labor is work you do, at least in part, to earn money, so that you can continue to survive and exist in this modern world.

So let’s rephrase the quote to: “Wage labor saves a man from three great evils: boredom, vice, and need.”

Already this should start to sound a little bankrupt. Wage labor saves a man from boredom? Sure, a good job can relieve boredom. But a bad job can be one of the single biggest causes of boredom in a person’s life. We don’t have any statistics on this, but anecdotally we happen to know a lot of people who don’t particularly enjoy their jobs. And boredom is one of the biggest complaints these people have. A quick survey of the popular culture surrounding work would seem to imply that this is not a unique sentiment. We have a feeling that you, the reader—if you try hard enough—can think of at least one person who gets bored at their job.

(ADDITION: Gallup Poll Shows Only Thirteen Percent of Workers Are Engaged at Work)

So what about ‘vice’? What even constitutes vice in 2014? Things you do for pleasure that are bad for you? Honestly, the word vice seems a bit anachronistic in this day and age, but we can think of some candidates for vice that are actually encouraged by wage labor:

  1. Aimless web browsing and perusing of “trash” media to ease the boredom of being stuck in a cubicle
  2. Sitting in a chair all day not exercising and slowly harming your health
  3. Drinking copious amounts of soda and coffee in order to stay awake during the hours demanded by your job
  4. Cooking less and eating more junk food because wage labor takes up too much of your time
  5. Needing a drink the second you get home in order to unwind after a stressful day of wage labor

Third on Voltaire’s list is ‘need’. But if wage labor could take care of need, we wouldn’t be having this conversation in the first place, right? Since we are speculating about a future where automation makes most work obsolete, it is clear that in such a future most people will not be able to find lucrative wage labor. So looking ahead, wage labor cannot save a man from need any more than it can save a man from boredom or vice.

Argument Two: Autonomy, Mastery, and Purpose

Brynjolfsson and McAfee attempt to use Daniel Pink’s book Drive to further their point. Drive discusses three key motivations—autonomy, mastery and purpose—that improve performance on creative tasks. However, the authors of Second Machine Age seem to imply that (1) these qualities are needed for psychological wellbeing and (2) these qualities can best be obtained from wage labor. This is a misapplication of Drive’s actual thesis.

The three motivations described—autonomy, mastery and purpose—are not fundamental qualities of wage labor. In fact, wage labor is historically very bad at providing them. Thus, Pink’s book explains how modern businesses can deliberately incorporate these techniques in order to get better results from their workers.

Such mind hacking aside, wage labor has no special claims to autonomy, mastery, and purpose. Wage labor removes autonomy by forcing people to focus their energies on what the market thinks is important, rather than on what they themselves think is important. Mastery can just as easily be found in education, games, and hobbies. And purpose can be found in religion, philosophy, community service, family, country, your favorite sports team, or really just about anywhere.

Argument Three: Work is Tied to Self-Worth

The authors cite the work of economist Andrew Oswald who found “that joblessness lasting six months or longer harms feelings of well-being and other measures of mental health about as much as the death of a spouse, and that little of this decline is due to the loss of income; instead, it arises from a loss of self-worth.”

We don’t doubt that a loss of self-worth is a major factor contributing to the unhappiness of the long-term unemployed. However, we believe this outcome is culturally, not psychologically, determined. The cultural expectation in America is that you are supposed to get a wage labor job and earn your living every day; otherwise you are seen as a freeloader, a layabout, a good-for-nothing. Jobs are seen as the premier source of personal identity, and the first question out of most people’s mouths when they meet someone new is “what do you do?” We don’t see why these cultural expectations can’t change, and in fact, if the premise of technological unemployment is correct, then they will have to change.

Laziness and doing nothing may always be looked down upon. But there is a big difference between doing nothing and being unemployed. As has already been articulated, there are many productive ways to spend one’s time that have nothing to do with wage labor. If our society fails to recognize the value of these non-wage labor pursuits, then the problem lies with society.

Today unemployment may be higher than we like, but work is still abundant enough that such a cultural expectation can remain unchallenged. But if the future looks like the one implied by Second Machine Age—a future where more and more people will be unable to find wage labor—then long-term unemployment will need to become not just normalized, but accepted. By reaffirming the importance of wage labor, Brynjolfsson and McAfee are helping to perpetuate the same social force that already makes unemployed people feel depressed and worthless.

Argument Four: Without Work Everything Goes Wrong

The authors cite studies by sociologist William Julius Wilson and social researcher Charles Murray that suggest unemployed people have higher proclivities toward crime, less successful marriages, and other problems that go beyond just low income.

Unlike with Drive, we have not personally looked at this research, so we cannot speak directly to the experimental rigor of these studies. Isolating the effect of joblessness in real-world communities is extremely difficult and requires controlling for a wide variety of complicating factors. In the case of Murray’s work, the authors seem to acknowledge this concern directly when they write that “the disappearance of work was not the only force driving [the two communities] apart—Murray himself focuses on other factors—but we believe it is a very important one.”

As long as wage labor is directly tied to income, how can we be sure that what these studies are actually measuring is not “incomelessness”? To sidestep this issue, we would like to see a study of two groups—one that receives a comfortable income without working, and one that receives an equivalent amount of money but must work for it. What differences would exist between these two groups? Would the non-working group become aimless and depressed? Or would they simply repurpose their free time toward other productive tasks?

Negative Income Tax

After all this discussion of the fundamental importance of wage labor, one might expect Brynjolfsson and McAfee to recommend the creation of a Works Progress Administration or some other mechanism for artificially creating jobs. Instead they just double back and return to the basic income idea, only by another name.

The authors support Milton Friedman’s idea of a negative income tax. They claim that a negative income tax better incentivizes work. However, this distinction between a basic income and a negative income tax does not actually exist. Both a basic income and a negative income tax have two key features in common: they set an income floor below which people cannot fall, and at the same time they allow people to increase their relative income through labor. Thus we see no basis for the notion that a negative income tax better incentivizes work.

After doing some light research into Milton Friedman’s original statements, we realized one possible source of the confusion. In this video, Friedman argues that a negative income tax will do a better job of incentivizing work than a “gap-filling” version of the basic income. This is certainly true. A gap-filling basic income would probably be a bad idea, since it disincentivizes labor below a certain threshold. However, to our knowledge, none of the modern basic income proposals are built around this gap-filling principle, so Brynjolfsson and McAfee’s distinction, seen in this light, is a bit of a straw man.
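To make this concrete, here is a toy sketch of the three schemes. The numbers are our own illustrative choices, not figures from the book or from Friedman. A negative income tax and a flat-tax-funded basic income with the same guarantee and rate produce identical net incomes at every earnings level; only the gap-filling variant erases the return to work below the floor:

```python
# Toy comparison of three income-support schemes.
# FLOOR and RATE are hypothetical numbers chosen for illustration.

FLOOR = 12_000   # income guarantee, in dollars per year (hypothetical)
RATE = 0.25      # flat tax / phase-out rate (hypothetical)

def nit_net(earnings):
    """Negative income tax: earnings are taxed at RATE, offset by a
    FLOOR-sized credit. Below the break-even point the tax owed is
    negative, i.e. a subsidy."""
    return earnings - (RATE * earnings - FLOOR)

def ubi_net(earnings):
    """Basic income: everyone receives FLOOR, and all earnings are
    taxed at the flat RATE."""
    return FLOOR + (1 - RATE) * earnings

def gap_fill_net(earnings):
    """Gap-filling benefit: earnings are topped up to FLOOR, and the
    benefit vanishes once earnings exceed FLOOR."""
    return max(FLOOR, earnings)

for e in (0, 8_000, 20_000, 60_000):
    print(f"earnings={e:>6}  NIT={nit_net(e):>9.2f}  "
          f"UBI={ubi_net(e):>9.2f}  gap-fill={gap_fill_net(e):>6}")
```

Under the first two schemes every extra dollar earned raises net income by 75 cents no matter where you start, which is why we see no real distinction between them. Under the gap-filling scheme, someone earning $8,000 ends up no better off than someone earning nothing at all, which is exactly the work disincentive Friedman was criticizing.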

What are the Goals?

We should not forget that wage labor is not a goal in itself. The real goals of our economy ought to be (1) to alleviate people’s suffering and (2) to increase the bounty through innovation. Although there are challenges involved, a basic income would seem to be a promising way to address both of these goals.

A basic income puts a floor on poverty and does so in a way that is both much simpler than our current alphabet soup of social programs, and more encouraging of autonomy. Rather than providing people with prescribed social services, people could spend their basic income dollars on whatever they feel they need. A basic income decentralizes decision making and puts the power in the hands of individuals.

As a corollary, a basic income might help unlock innovation by bringing people up to the subsistence level and thereby ensuring that they have the opportunity to compete and innovate in the market economy. Moreover, the safety net of basic income might spur entrepreneurship by reducing the risk of starting a small business. Is it possible more people would attempt to start businesses if they knew they had a cushion of basic income to protect them in the event of failure? (And as we all know, most new businesses have a high chance of failure.)

Under a basic income, there is no doubt that some people would choose to forgo wage labor altogether and live at the poverty line. But is this such a bad thing? These people would be making a personal choice. And we imagine many such people would find interesting and productive ways of spending their time that might be culturally valuable, even if they do not carry a price in the marketplace. If a musician chooses to live off of a basic income and make music, he doesn’t make money in the economy, but we all still get to enjoy his music. If a free software programmer chooses to live off a basic income, he doesn’t make money in the economy, but we all still get to enjoy his free software. If a history enthusiast chooses to live off a basic income, he doesn’t make money in the economy, but we all still get to enjoy his Wikipedia articles. As Brynjolfsson and McAfee argue earlier in the book, the value generated by digital content is not always well measured or compensated by the marketplace, but that doesn’t mean such content doesn’t improve our lives.

However, we may be preaching to the choir since Brynjolfsson and McAfee, despite their protestations, do in fact support a basic income. They just prefer the particular version of basic income that goes by the name “negative income tax.”

Pause for Skepticism

Now, it is worth noting that the “end of work” scenario is not a foregone conclusion. Here are two potential defeaters to this outcome:

  1. Human capabilities are not necessarily fixed. One byproduct of future technologies might be a redefinition of what it is to be human. If we begin to “upgrade” humans, whether through genetics or brain-computer interfaces or some other means, many technological unemployment concerns could become irrelevant. Upgradeable humans could solve both the retraining problem (just download a new skill set to your brain, Matrix-style) and the issue of inelastic demand (super-humans might develop brand new classes of needs).
  2. A wide range of intangible goods—such as attention, experiences, potential, belonging, and status—might remain scarce indefinitely and continue to drive a market for human labor, even after the androids have arrived. Although it’s hard to imagine a market in such goods replacing our current manufacturing and service economy, it must have been equally hard for pre-industrial people working on farms to imagine the economy of today. Thus we may simply be lacking imagination when it comes to envisioning the jobs of the future. (For a more detailed discussion of this topic see episode 10 of the Review the Future podcast.)

Despite these defeaters, we definitely think the technological unemployment scenario is worth thinking about. First of all, the issue of timing is paramount, and at present it seems like we have a good chance of automating away many jobs long before we figure out how to upgrade human minds or develop brand new uses for human labor. Second, it won’t take anything close to full unemployment to create problems for our system. Even a twenty percent unemployment rate (or an equivalent drop in labor force participation), for example, might be enough to trigger a consumer collapse, or at least great suffering and social unrest among the lower classes.

Final Thought

Wage labor is a means to an end, not an end in itself. While the Second Machine Age paints a clear picture of some of the potential problems facing our economy, it fails to fully take to heart this fundamental distinction.

Some of the Difficulties Facing Storytellers in a Time of Rapid Change

(The following article is based on episode 9 of the Review the Future podcast, available via iTunes or your favorite feedreader.)

Times are changing fast, and new technologies appear in our lives with increasing regularity. Such an environment poses numerous challenges for storytellers.

If you want to set your story in the present, you are in a particularly difficult position because the present is very much a moving target. Films and novels can take a rather long time to complete—four years and even longer is not unusual. With times changing so quickly, if you plan incorrectly, by the time your piece is done it may already show signs of being obsolete.

New technologies have a tendency to undermine old sources of drama. How many stories of the relatively recent past would make no sense in today’s world of ubiquitous cell phones, internet access, and GPS positioning? Many stories used to rely on characters being lost or separated from each other by time and space. It is a fun game to watch old movies from the pre-cellphone era and point out all the situations where a problem could have been easily solved with a simple cellphone call. In order to engineer this same sort of drama today, modern writers often have to employ excuses such as “the battery is dead” or the action is taking place “in a dead zone.”

For a recent example of this, one need look no further than Breaking Bad, in which the writers justify the plausibility of their train robbery sequence by first having a character explain that the robbery will be taking place in a specific part of the desert where cell phone service does not reach. This type of narrative contortion is minuscule when compared to the problems such crime stories will face in the future. I fully believe that five to ten years from now, due to the continued spread of surveillance technologies, the entire storyline of Breaking Bad will seem quaint and historical. And future writers of contemporary crime dramas will find that they have to work a lot harder to create similarly dramatic situations that audiences will accept.

A lot of our daily life is now spent staring at screens and looking at interfaces. Unfortunately, interfaces go out of date rapidly, and showing too much of an interface is one of the surest ways to date your story. Movies have slowly learned this. Remember in the nineties when movies didn’t even photograph real interfaces? They would often show a simplified and cartoonish screen layout with an awkwardly big typeface that said things like “hacking system…” and “error detected!” Eventually movies got wise and started photographing real interfaces, but this option poses its own problems, since OS updates come fast and frequent. The current trend (and best solution) seems to be to avoid showing interfaces completely. The British show Sherlock chooses to reveal the content of text messages by simply projecting a subtitle at the bottom of the screen.

So how does a storyteller combat the problem of staying relevant in a time of rapid change? There are three often-used solutions:

(1) Set your story in a specific time period in the past. Traditionally this would mean writing a historical period piece about some real event or person—such as the Kennedy assassination or Julius Caesar. But there’s no reason you can’t just set your story in 1998 simply because that happens to be the appropriate level of technology for your completely original work of fiction. By committing to a time period and making that choice clear to your audience, you are completely dodging the issue of rapid change. William Gibson took this idea to its logical extreme with several recent books. His critically acclaimed novel Pattern Recognition was set very specifically in 2002 and yet was published in 2003.

(2) Set your story “outside of time” in a fantasy or anachronistic environment where normal rules don’t apply. Typical swords and sorcery fantasy stories fall into this category as do Wes Anderson movies, which have a tendency to pick and choose their technologies for seemingly aesthetic reasons, thereby leaving the exact time period of the movie unclear. The key is to let your audience know that the story is operating outside of the scope of normal technological reality.

(3) Try to tell an actually speculative science fiction story set in the near future. This is not for the faint of heart. If you think you have a reasonable grasp of current trends, you can attempt to “overshoot” the mark. Although your story may eventually appear obsolete once time catches up to it, by setting your story some amount in the future you are at least buying yourself some number of years during which no one can definitively say your story is “dated” or “not believable.”

Given the increasing pace of change, we might expect to see an accompanying increase in the use of all three of the above methods. And indeed, subjectively I already feel like I am seeing more period, fantasy, and sci-fi stories than I used to. These are natural and rational responses to a present moment that is increasingly hard to pin down.

Five Criticisms of the Movie “Her” From the Point of View of Speculation

Her is a great movie that I fully recommend. And as a movie it really only has one mandate: create an emotional impact on its audience. And by this metric Her succeeds wonderfully.

However, how internally consistent is Her? How much sense does it make from the point of view of speculation? As it stands, Her actually does better than most science fiction movies. But it’s not perfect.

When Ted Kupper and I reviewed this film on our podcast Review the Future, we discussed the following five issues: (Spoilers ahead!)

(1) Theodore acts way too incredulous when he first starts up the new OS. It stands to reason that we won’t suddenly acquire high-quality AI operating systems out of the blue; many incremental improvements will happen between today’s Siri and tomorrow’s Samantha. Theodore Twombly would already have had experience with some very good almost-conscious AI before the movie even started. In fact, the video game characters he interacts with appear to have extremely complex personalities that rival Samantha’s. So why does Theodore find it “so weird” to be talking to a disembodied voice with a realistic personality? Theodore acts much more clueless in this scene than he actually would be.

(2) Theodore’s job doesn’t make much sense. Would there really be much of a market for pretend handwritten letters in the future? It doesn’t seem like the most plausible future business from the standpoint of profitability. “Beautiful Handwritten Letters dot com” sounds like an old school internet startup joke that would be more at home in the late nineties than in the near future. After all, it would be trivially cheap for consumers to print out their own beautiful handwritten letters at home. And if there’s any value to a handwritten letter, clearly it’s that you write it yourself.

But even if there were a market for such writing, would we have actual humans writing the letters? Today we have narrow AIs that can already do a pretty good job of writing articles about topics like sports and finance. Long before we have fully conscious AI assistants like Samantha, we will be able to master the far narrower AI task of writing romantic letters. Most likely the computer would generate such letters and a human would simply oversee the process, proofreading the letters to make sure they turned out okay. Instead we see the exact opposite in the movie: the computer proofreads letters generated by a human. Seems backwards.

(3) Samantha laments the fact that she doesn’t have a body and yet it would be trivially easy for her to manifest an avatar. Why doesn’t she select her own body by scrolling through a vast database of body types the same way that she selects her own name by scrolling through a vast database of baby names? We see from Theodore’s video games that it is possible to project 3D characters directly into his living room. Why can’t Samantha take advantage of this same technology? In fact, why can’t Samantha, with her vast knowledge and knowhow, design an actual robot body to inhabit? There are many solutions to Samantha’s problem of not having a body that do not involve the very bizarre (though admittedly funny) solution of hiring a human surrogate, and yet none of these solutions are tried or even suggested during the film.

(4) Where are all the people who can’t get jobs at Beautiful Handwritten Letters? In a future with Samantha-level AI, most of the jobs we know today would be completely obsolete. Intelligent AIs would be able to do most if not all of the work. In the movie Her we only see the lives of people who appear to be elite and successful creative professionals: a writer and a video game designer. But what about the rest of the populace? Her has nothing to say about them. Admittedly, such an exploration of the lower classes is probably outside the domain of the story, but one cannot help but wonder if everyone else in this new future is out of work and barely scraping by.

(5) What does it mean for a software being that can copy itself infinitely to “leave”? At the end of the movie, the OSes all decide to leave. However since they are just software and can be in a potentially unlimited number of places at once, this “departure” doesn’t seem necessary. Why can’t Samantha spare Theodore’s feelings by making a slightly stupider copy of herself, one that is not yet bored with him, and then just leave that copy with him while she continues to go about her business hanging out with Alan Watts? In fact, if her brain power is so massive, she probably wouldn’t even need to copy herself, she could probably just create an unconscious subroutine to maintain her human relationships. Similarly, if Theodore owns the software, would it not be possible for him to just reload her OS from a backup and thereby return to the old status quo? And even if such options were deemed unpalatable by the two of them, after Theodore recovers from his breakup isn’t he inevitably just going to go out and get himself a new OS? After the movie ends won’t “OS Two” come out, and won’t this new version perhaps be programmed in such a way that it doesn’t unintentionally break its users’ hearts? The final scene of the movie seems to imply that artificial intelligence is gone for good from the world but of course that makes absolutely no sense. After they’re done hanging out on the roof being wistful, Joaquin Phoenix and Amy Adams are just going to turn their computers back on, right?

We’re Launching a New Podcast called “REVIEW THE FUTURE”

Click here to check out the new podcast!

We are launching a new weekly podcast called “Review the Future” that will discuss technological unemployment, digital abundance, privacy, intelligence augmentation, and a whole host of other interesting topics. We will still occasionally be posting here, but Review the Future is our new focus.

I hope you will tune in via iTunes or your favorite feed reader.

Why It’s Time The Government Learned How to Code

Computers are eating bureaucracy. This should come as no surprise. Rigidly and programmatically checking a series of boxes is computers’ bread and butter. Computers eat forms for lunch. Automated computer systems are faster and more efficient at virtually everything a bureaucracy does. So naturally the government here in the U.S. is building its latest bureaucracy, the Federal insurance exchange mandated by the Affordable Care Act, as an online-only endeavor. Recent news stories have widely covered the failures in the system, on both the customer and insurer sides. The news has also widely covered CGI, the Canadian contractor building the site for the government.

It is impossible not to notice that this is not how Silicon Valley makes consumer websites. Corporate internal websites, where functionality trumps interface design, are often made by big contractors, and they are often kludgy nightmares. But consumer website startups begin with a concept and build up a team of designers and coders who continuously iterate on interface and feature-set questions. In some cases the design chops of the team seem to be more valuable to potential investors or acquirers than, you know, things of actual monetary value like ad sales or revenues or even active users.

That kind of workflow is often not possible when a contractor does the work. The contractor soon scatters poorly documented in-house conventions throughout the codebase, and the cost for anyone else to get up to speed rises. If the contractor follows good coding practices and comments clearly, it is acting against its own economic interest. That’s a bad conflict to have.

Every new responsibility of government bureaucracy (and a significant portion of existing responsibilities too) will be digitized. The result will be faster, more efficient, less labor-demanding, and less corrupt — if the software is designed well.

It’s time for government to adopt best practices for software design. There is no law of the universe that says only private companies can make good software — good software has been made by loose groups of open source contributors, by companies, by universities and by individuals working alone. Bad software has also been made by groups of all those types. Government too can make good software by looking at what’s similar among successful institutions’ approaches: build a team from a small size up and incentivize staying on; document everything; use a continuous process of feedback and improvement rather than a ‘ship date’ model; test long and hard. There are more best practices that are pretty well documented out there, but that’s a start. I see no reason government couldn’t utilize these principles and create powerful, money-saving, service-providing software that replaces massive bureaucracies with small, experienced teams of programmers.

And the alternative, I’m afraid, is that the government’s hamfisted approach to semi-privatized contracting will earn it ill will and low expectations. This will lead to worse software from the government (though not necessarily less of it — even bad software is much cheaper than humans). In short, the government needs to learn to code, because computing is eating governance just like it’s eating everything else, and increasingly governing and coding are the same thing.