This is a bit outside what we usually do here on the blog, but my co-blogger Ted Kupper and I have finished a feature-length script called Let Go. It’s an indie sci-fi movie in roughly the same genre as ‘Her’ or ‘Robot and Frank’. It deals with many of the same issues we talk about on this blog – technological unemployment, virtual reality, augmented reality, accelerating change, surveillance, etc. We’re very proud of the script, but needless to say it’s not easy to get a movie like this made. So today we’ve decided to put the whole script online in order to get it in front of more people. If you’re the sort of person who reads scripts, we’re confident you’ll find it a good read. Please feel free to share. Find the script here.
This post represents my latest attempt to categorize the possible solutions to technological unemployment. It’s largely based on episode 14 of my Review the Future Podcast so for a more detailed treatment of this topic, you can listen here.
To begin, I’d like to talk about some of the defeaters to technological unemployment that could mean either (a) it won’t happen or (b) it won’t be a problem.
Lowered Cost of Living - The technological bounty produced by new technologies could be so great that high rates of unemployment may not be that big a deal. If current trends reverse, and advanced technologies begin to drive down the price of key goods like housing, health care, and education, then people might be able to live reasonably comfortable lives with very little income. The small income required could come from just a few odd jobs performed throughout the year, or alternately people might cluster in households where the income of those fortunate enough to still have work is shared and used to pay for everyone’s expenses. In addition, with high returns to capital and a low cost of living, interest on even very small investments might allow for a reasonably comfortable life. And for those who still fall through the cracks, non-profits and charities might step in. Such philanthropic activities will be greatly empowered, since the cost of doing good will be lower than ever before.
Intelligence Augmentation – If people are losing jobs to smart machines, then one solution is to make people smarter. The obvious first step is simply better education, enabled by technological advances such as online distribution, augmented reality, gamification, individualized learning environments, and so on. However, it may eventually become possible to actually enhance human intelligence, whether through drugs, genetics, or brain implants. As a result, people could become upgradeable in the way that machines are today, thus closing the competitive gap between humans and machines.
New Demands and New Platforms – This outcome has two components. First, there may be a growing market for new kinds of goods that cannot be automated away or that directly monetize “humanness.” These include positional goods like status, time-limited goods like attention, and human-centric goods like shared experiences. Second, new peer-to-peer platforms might enable the monetization of these somewhat intangible goods in a way that allows the participation of significant portions of the population.
If the above defeaters do not happen (or do not happen soon enough) then technological unemployment may require solutions that are more governmental in nature. I have broken these solutions into four categories.
Technological Relinquishment - If technology is causing the problem, we could just give up certain technologies. In its most extreme form, relinquishment would mean banning certain technologies outright. However, there are many softer forms of relinquishment, such as incentivizing businesses to hire human workers instead of using machines. While on the surface such policies might be seen as “pro-human,” they can just as easily be viewed as “anti-technology.” The idea here would be to limit the spread of technology into areas where human jobs are being threatened.
Artificial Scarcity - A world of technological unemployment is also probably a world of abundance. This abundance has two dimensions: labor and goods. An abundance of human labor will exist because we will have more people willing to work than can find jobs. Thus, we could artificially constrain the supply of labor by limiting the number of hours people can work. This could be done through shorter work days, shorter work weeks, or shorter careers (early retirement).
An abundance of goods will exist because of the ever increasing digitization of everything. Digital goods can be hard to monetize because of their near zero marginal cost to reproduce. Thus digital goods are vulnerable to both cheap imitations and piracy. Therefore one solution might be to constrain the supply of goods artificially through the use of intellectual property and digital rights management. We already do this today, but it might be theoretically possible to institute a revised version of this system that better compensates the growing numbers of amateurs producing valuable digital content.
Expanded Social Safety Nets - Another solution is to expand our social safety nets to guarantee the livelihood of the growing unemployed. There are many ways to implement such safety nets. Here’s a partial list of methods, ranging from the more decentralized to the more paternalistic.
- Unconditional Basic Income (just paying people for nothing)
- Conditional Money Transfers (that require means testing or participation in some sort of program)
- Vouchers (to be spent on only certain high priority needs)
- Direct Provisioning (just give people food, housing, and health care directly)
- Government Created Jobs (paying people to build infrastructure, do community service, read books, dig virtual ditches, etc)
The Wall-E vision of the future, or what this New Yorker article dubs the “sofalarity”, is not believable to me. It’s a classic mistake of prediction that I like to refer to as “super now.” When making super now predictions, people simply take things that are happening right now and imagine that the future will be just like now only “more extreme.”
Right now in developed countries we are experiencing a technological trade-off in which an abundance of fatty foods and cheap entertainment options lures many of us into poor health outcomes. Obesity is a growing problem. Therefore, the argument goes, in the future we will become formless blobs collapsing under the weight of our own gluttony. What the super now futurist fails to recognize is that this particular trade-off is not only a relatively new phenomenon in human history but also likely to be short-lived.
Technological progress tends to create unintended consequences, but those consequences in turn create pressure to address and defeat them. And so, as technology advances, we will most likely engineer healthier, better-tasting foods and find better ways to encourage exercise. Further down the line, I expect we will master human biology such that it will be possible to eat whatever we want and be stationary all day without becoming unhealthy. Will there be new trade-offs in this future? Almost certainly. But we don’t know what they are yet. And there’s no reason to assume these new trade-offs will look anything like the ones we only recently started experiencing in the late 20th century.
The Second Machine Age is a book by Erik Brynjolfsson and Andrew McAfee that explores the impacts of new technologies on the economy. For those who are familiar with such topics, it’s not likely this book will teach you much you don’t already know. However, for the layperson, this book is an extremely well written and clear introduction to the economic pros and cons of our current digital revolution. Because of the skillful way it stitches everything together, Second Machine Age has a good chance of being one of the most important nonfiction books of 2014.
The Goal of this Blog Post
On the whole, we like The Second Machine Age. We think it tells a plausible story and for the most part we agree with its perspective. However, we have criticisms of one of the book’s later chapters, the one entitled “Long-Term Recommendations.” Thus the primary goal of this article is to articulate those criticisms. But first, for the sake of background, we will summarize some of the book’s main arguments.
A Quick Summary of Second Machine Age
According to Brynjolfsson and McAfee, exponential gains in computing, digitization of goods, and recombinant innovation are all driving rapid technological growth. Technology has begun to perform advanced mental tasks—like driving cars and understanding human speech—that were previously thought impossible. And in economic terms, these new technologies, according to the authors, are increasing both the ‘bounty’ and the ‘spread.’
Bounty is a blanket term for all of the productivity and quality of life gains provided by new technologies. Brynjolfsson and McAfee feel that the bounty of technology is growing tremendously, but, because of the limitations of our economic measures, we have a tendency to greatly underestimate the progress we are making.
Spread is a euphemism for inequality. According to the authors, technology is increasing spread because of (a) skill-biased technical change, (b) capital’s increasing market share relative to labor, and (c) superstar economics. All three of these trends have some evidence backing them up, and the supposition that technology is the primary driver of these trends makes a great deal of sense.
The authors also suggest that technological unemployment—a phenomenon long thought of as impossible by mainstream economists—is in fact possible. They discuss three arguments for how technological unemployment could occur:
- In industries subject to inelastic demand, automation can lower the price of goods without creating any additional demand for those goods (and thus labor to make those goods). Over the long term, as human needs become relatively more satiated, this inelasticity could even apply to the economy as a whole. Such an outcome would directly undermine the Luddite fallacy, which is the argument economists traditionally use to dismiss technological unemployment.
- If technological change is fast enough, it could outpace the speed at which workers are able to retrain and find new jobs, thereby turning short term frictional unemployment into long term structural unemployment.
- There is a floor on how low wages can go. If automation technology continues to drive wages down, those wages could cross a threshold below which the arrangement is not worth the employee’s time. Eventually the value of certain workers could fall so low that they are not worth hiring, even at zero price.
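The first argument above can be illustrated with a toy constant-elasticity model (the functional form and numbers here are our own illustration, not the book’s): when demand is inelastic, a productivity gain that halves prices still shrinks the total labor required, while elastic demand has the opposite effect.

```python
def labor_needed(price, elasticity, output_per_worker):
    """Constant-elasticity demand: quantity demanded = price ** -elasticity.
    Labor required is that quantity divided by output per worker."""
    quantity = price ** -elasticity
    return quantity / output_per_worker

# Baseline: price 1.0, one unit of output per worker.
before = labor_needed(1.0, elasticity=0.5, output_per_worker=1.0)  # 1.0 worker

# Automation doubles productivity, and competition halves the price.
after = labor_needed(0.5, elasticity=0.5, output_per_worker=2.0)   # ~0.71 workers

# With inelastic demand (elasticity < 1), cheaper goods mean fewer jobs.
assert after < before

# With elastic demand (elasticity > 1), the same productivity gain adds jobs:
elastic_after = labor_needed(0.5, elasticity=2.0, output_per_worker=2.0)  # 2.0 workers
assert elastic_after > 1.0
```

The economy-wide version of the argument is just the claim that, as needs saturate, the aggregate elasticity drifts below one.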
The book makes several short term policy recommendations. We will not list them all here, as they represent a suite of largely uncontroversial proposals designed to speed up innovation and growth. These proposals, if they were enacted, would conceivably help to get our economy working more efficiently and increase our ability to match workers to the jobs that still need doing. They would also grow the technological bounty that makes all of our lives better. It’s hard not to agree with most of these proposals.
However, if we accept the premise that “thinking” machines will encroach further and further into the domain of human skills, and that over the long term we are destined for not just rampant inequality but also wide scale technological unemployment, then all of the short term proposals provided by this book could actually accelerate unemployment. After all, more innovation means more and better machines, which ultimately could mean more displaced labor.
Long Term Recommendations
In this chapter, the authors address long-term concerns. In a near future where androids potentially substitute for most human labor, will the standard economics playbook still work?
Brynjolfsson and McAfee are clear about two major preferences:
- They do not want to slow or halt technological progress.
- They do not want to turn away from capitalism, by which they mean, “a decentralized economic system of production and exchange in which most of the means of production are in private hands (as opposed to belonging to the government), where most exchange is voluntary (no one can force you to sign a contract against your will), and where most goods have prices that vary based on relative supply and demand instead of being fixed by a central authority.”
We agree with these two premises. So far, so good.
Should we Adopt a Basic Income?
The authors go on to discuss an old idea: the basic income. This is a potential solution to the failure mode of capitalism known as technological unemployment. If an increasingly large number of people can no longer find gainful employment, then the simplest solution might be to just pay everyone a basic income. This income would be given out to everyone in the country regardless of their circumstances. Thus it would be universal and unconditional. Such a basic income would ensure that everyone has a minimum standard of living. People would still be free to pursue additional income in the marketplace, and capitalism would proceed as usual.
Brynjolfsson and McAfee do a quick survey of all of the varied thinkers, both conservative and liberal, who have supported this idea in the past. Here’s a short list of people who favored a basic income:
Thomas Paine, Bertrand Russell, Martin Luther King Jr., James Tobin, Paul Samuelson, John Kenneth Galbraith, Milton Friedman, Friedrich Hayek, Richard Nixon
With such wide ranging endorsement for the idea of a basic income, one might expect Brynjolfsson and McAfee to jump on the bandwagon and endorse the idea themselves.
But, no! Basic income is apparently “not their first choice.” Why?
Because work, they argue, is fundamentally important to the mental wellbeing of people. If we adopted a basic income, people might not be adequately incentivized to work. And therefore people and society would suffer on some deep psychological level.
To support this idea, Brynjolfsson and McAfee field a series of arguments.
Argument One: A Quote From Voltaire
The French Enlightenment philosopher Voltaire once said, “Work saves a man from three great evils: boredom, vice, and need.” Now, Voltaire was a pretty smart guy, but whether someone from the eighteenth century has anything helpful to say about today’s technological reality seems doubtful. But for the sake of argument, let’s go ahead and examine this quote.
First of all, we’re not sure what Voltaire meant by “work.” Work can mean a lot of things. Work, in the broadest sense, could mean activities you do to maintain your life, such as cleaning your bathroom or going grocery shopping. It could also consist of amateur hobbies that you undertake for fun, such as writing overly long blog posts.
However, this is not the definition of work that Brynjolfsson and McAfee are implying. They are implying a much more narrow definition of work as ‘wage labor’—meaning work done to serve the needs of the marketplace. Wage labor is work you do, at least in part, to earn money, so that you can continue to survive and exist in this modern world.
So let’s rephrase the quote to: “Wage labor saves a man from three great evils: boredom, vice, and need.”
Already this should start to sound a little bankrupt. Wage labor saves a man from boredom? Sure, a good job can relieve boredom. But a bad job can be one of the single biggest causes of boredom in a person’s life. We don’t have any statistics on this, but anecdotally we happen to know a lot of people who don’t particularly enjoy their jobs. And boredom is one of the biggest complaints these people have. A quick survey of the popular culture surrounding work would seem to imply that this is not a unique sentiment. We have a feeling that you, the reader—if you try hard enough—can think of at least one person who gets bored at their job.
So what about ‘vice?’ What even constitutes vice in 2014? Things you do for pleasure that are bad for you? Honestly, the word vice seems a bit anachronistic in this day and age, but we can think of some candidates for vice that are actually encouraged by wage labor:
- Aimless web browsing and perusing of “trash” media to ease the boredom of being stuck in a cubicle
- Sitting in a chair all day not exercising and slowly harming your health
- Drinking copious amounts of soda and coffee in order to stay awake during the hours demanded by your job
- Cooking less and eating more junk food because wage labor takes up too much of your time
- Needing a drink the second you get home in order to unwind after a stressful day of wage labor
Third on Voltaire’s list is ‘need’. But if wage labor could take care of need, we wouldn’t be having this conversation in the first place, right? Since we are speculating about a future where automation makes most work obsolete, it is clear that in such a future most people will not be able to find lucrative wage labor. So looking ahead, wage labor cannot necessarily save a man from need any more than it can save a man from boredom or vice.
Argument Two: Autonomy, Mastery, and Purpose
Brynjolfsson and McAfee attempt to use Daniel Pink’s book Drive to further their point. Drive discusses three key motivations—autonomy, mastery and purpose—that improve performance on creative tasks. However, the authors of Second Machine Age seem to imply that (1) these qualities are needed for psychological wellbeing and (2) these qualities can best be obtained from wage labor. This is a misapplication of Drive’s actual thesis.
The three motivations described—autonomy, mastery and purpose—are not fundamental qualities of wage labor. In fact, wage labor is historically very bad at providing them. Thus, Pink’s book explains how modern businesses can specially incorporate these techniques in order to try to get better results from their workers.
Such mind hacking aside, wage labor has no special claims to autonomy, mastery, and purpose. Wage labor removes autonomy by forcing people to focus their energies on what the market thinks is important, rather than on what they themselves think is important. Mastery can just as easily be found in education, games, and hobbies. And purpose can be found in religion, philosophy, community service, family, country, your favorite sports team, or really just about anywhere.
Argument Three: Work is Tied to Self-Worth
The authors cite the work of economist Andrew Oswald who found “that joblessness lasting six months or longer harms feelings of well-being and other measures of mental health about as much as the death of a spouse, and that little of this decline is due to the loss of income; instead, it arises from a loss of self-worth.”
We don’t doubt that a loss of self-worth is a major factor contributing to the unhappiness of the long-term unemployed. However, we believe this outcome is culturally and not psychologically determined. The cultural expectations in America are that you are supposed to get a wage labor job and earn your living every day, otherwise you are seen as a freeloader, a layabout, a good-for-nothing. Jobs are seen as the premier source of personal identity, and the first question out of most people’s mouths when they meet someone new is “what do you do?” We don’t see why these cultural expectations can’t change, and in fact, if the premise of technological unemployment is correct, then they will have to change.
Laziness and doing nothing may always be looked down upon. But there is a big difference between doing nothing and being unemployed. As has already been articulated, there are many productive ways to spend one’s time that have nothing to do with wage labor. If our society fails to recognize the value of these non-wage labor pursuits, then the problem lies with society.
Today unemployment may be higher than we like, but work is still abundant enough that such a cultural expectation can remain unchallenged. But if the future looks like the one implied by Second Machine Age—a future where more and more people will be unable to find wage labor—then long-term unemployment will need to become not just normalized, but accepted. By reaffirming the importance of wage labor, Brynjolfsson and McAfee are helping to perpetuate the same social force that already makes unemployed people feel depressed and worthless.
Argument Four: Without Work Everything Goes Wrong
The authors cite studies by sociologist William Julius Wilson and social researcher Charles Murray that suggest unemployed people have higher proclivities towards crime, less successful marriages, and other problems that go beyond just low income.
Unlike with Drive, we have not personally looked at this research, so we cannot speak directly to the experimental rigor of these studies. Isolating the effect of joblessness in real world communities is extremely difficult and requires controlling for a wide variety of complicating factors. In the case of Murray’s work, the authors seem to acknowledge this concern directly when they write “the disappearance of work was not the only force driving [the two communities] apart —Murray himself focuses on other factors—but we believe it is a very important one.”
As long as wage labor is directly tied to income, how can we be sure that what these studies are actually measuring is not “incomelessness?” To sidestep this issue, we would like to see a study of two groups—one that receives a comfortable income without working, and one that receives an equivalent amount of money, but must work for it. What differences would exist between these two groups? Would the non-working group become aimless and depressed? Or would they simply repurpose their free time towards other productive tasks?
Negative Income Tax
After all this discussion of the fundamental importance of wage labor, one might expect Brynjolfsson and McAfee to recommend the creation of a Works Progress Administration or some other mechanism for artificially creating jobs. Instead they just double back and return to the basic income idea, only by another name.
The authors support Milton Friedman’s idea of a negative income tax. They claim that a negative income tax better incentivizes work. However, this distinction between a basic income and a negative income tax does not actually exist. Both a basic income and a negative income tax have two key features in common: they set an income floor below which people cannot fall, and at the same time they allow people to increase their relative income through labor. Thus we see no basis for the notion that a negative income tax better incentivizes work.
After doing some light research into Milton Friedman’s original statements, we realized one possible source of the confusion. In this video, Friedman articulates the argument that a negative income tax will do a better job of incentivizing work than a “gap-filling” version of the basic income. This is certainly true. A gap-filling basic income would probably be a bad idea and would disincentivize labor below a certain threshold. However, to our knowledge, none of the modern day basic income proposals are built around this gap-filling principle, so seen in this light, Brynjolfsson and McAfee’s distinction is a bit of a straw man argument.
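The equivalence at stake here can be checked with a little arithmetic. In the sketch below (the grant size and tax rate are illustrative numbers of our own, not figures from the book or from Friedman), a universal grant financed by a flat tax and a negative income tax with the same guarantee and phase-out rate produce identical net incomes at every earnings level, while a “gap-filling” scheme imposes a 100% effective tax on earnings below the floor:

```python
def ubi_net(earned, grant=12_000, tax_rate=0.25):
    """Universal basic income: everyone receives the grant; all earnings taxed flat."""
    return grant + earned * (1 - tax_rate)

def nit_net(earned, guarantee=12_000, phaseout=0.25):
    """Negative income tax: the subsidy (guarantee - phaseout * earned) shrinks
    as earnings rise, becoming an ordinary tax past the break-even point."""
    return earned + (guarantee - phaseout * earned)

def gap_filling_net(earned, floor=12_000):
    """Gap-filling income: earnings are topped up to the floor, and nothing more."""
    return max(earned, floor)

for earned in (0, 5_000, 20_000, 60_000):
    # UBI and NIT yield the same net income at every earnings level...
    assert ubi_net(earned) == nit_net(earned)

# ...while under gap-filling, earning $5,000 leaves you no better off than earning $0.
assert gap_filling_net(5_000) == gap_filling_net(0)
```

Under either of the first two schemes every extra dollar earned raises net income, which is exactly the work incentive Friedman’s argument turns on; only the gap-filling variant destroys it.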
What are the Goals?
We should not forget that wage labor is not the goal in itself. The real goals of our economy ought to be (1) to alleviate people’s suffering and (2) to increase the bounty through innovation. Although there are challenges involved, a basic income would seem to be a promising way to address both of these goals.
A basic income puts a floor on poverty and does so in a way that is both much simpler than our current alphabet soup of social programs, and more encouraging of autonomy. Rather than providing people with prescribed social services, people could spend their basic income dollars on whatever they feel they need. A basic income decentralizes decision making and puts the power in the hands of individuals.
As a corollary, a basic income might help unlock innovation by bringing people up to the subsistence level and thereby ensuring that they have the opportunity to compete and innovate in the market economy. Moreover, the safety net of basic income might spur entrepreneurship by reducing the risk of starting a small business. Is it possible more people would attempt to start businesses if they knew they had a cushion of basic income to protect them in the event of failure? (And as we all know, most new businesses have a high chance of failure.)
Under a basic income, there is no doubt that some people would choose to forgo wage labor altogether and live at the poverty line. But is this such a bad thing? These people would be making a personal choice. And we imagine many such people would find interesting and productive ways of spending their time that might be culturally valuable, even if they do not carry a price in the marketplace. If a musician chooses to live off of a basic income and make music, he doesn’t make money in the economy, but we all still get to enjoy his music. If a free software programmer chooses to live off a basic income, he doesn’t make money in the economy, but we all still get to enjoy his free software. If a history enthusiast chooses to live off a basic income, he doesn’t make money in the economy, but we all still get to enjoy his Wikipedia articles. As Brynjolfsson and McAfee argue earlier in the book, the value generated by digital content is not always well measured or compensated by the marketplace, but that doesn’t mean such content doesn’t improve our lives.
However, we may be preaching to the choir since Brynjolfsson and McAfee, despite their protestations, do in fact support a basic income. They just prefer the particular version of basic income that goes by the name “negative income tax.”
Pause for Skepticism
Now, it is worth noting that the “end of work” scenario is not a foregone conclusion. Here are two potential defeaters to this outcome:
- Human capabilities are not necessarily fixed. One byproduct of future technologies might be a redefinition of what it is to be human. If we begin to “upgrade” humans, whether through genetics or brain-computer interfaces or some other means, many technological unemployment concerns could become irrelevant. Upgradeable humans could solve both the retraining problem (just download a new skill set to your brain, matrix-style) and the issue of inelastic demand (super-humans might develop brand new classes of needs).
- A wide range of intangible goods—such as attention, experiences, potential, belonging, and status—might remain scarce indefinitely and continue to drive a market for human labor, even after the androids have arrived. Although it’s hard to imagine a market in such goods replacing our current manufacturing and service economy, it must have been equally hard for pre-industrial people working on farms to imagine the economy of today. Thus we may simply be lacking imagination when it comes to envisioning the jobs of the future. (For a more detailed discussion of this topic see episode 10 of the Review the Future podcast.)
Despite these defeaters, we definitely think the technological unemployment scenario is worth thinking about. First of all, the issue of timing is paramount, and at present it seems like we have a good chance of automating away many jobs long before we figure out how to upgrade human minds or develop brand new uses for human labor. Second, it won’t take anything close to full unemployment to create problems for our system. Even a twenty percent unemployment rate (or an equivalent drop in labor force participation), for example, might be enough to trigger a consumer collapse, or at least great suffering and social unrest among the lower classes.
Wage labor is a means to an end, not an end in itself. While the Second Machine Age paints a clear picture of some of the potential problems facing our economy, it fails to fully take to heart this fundamental distinction.
(The following article is based on episode 9 of the Review the Future podcast, available via iTunes or your favorite feedreader.)
Times are changing fast, and new technologies appear in our lives with increasing regularity. Such an environment poses numerous challenges for storytellers.
If you want to set your story in the present, you are in a particularly difficult position because the present is very much a moving target. Films and novels can take a rather long time to complete—four years and even longer is not unusual. With times changing so quickly, if you plan incorrectly, by the time your piece is done it may already show signs of being obsolete.
New technologies have a tendency to undermine old sources of drama. How many stories of the relatively recent past would make no sense in today’s world of ubiquitous cell phones, internet access, and GPS positioning? Many stories used to rely on characters being lost or separated from each other by time and space. It is a fun game to watch old movies from the pre-cellphone era and point out all the situations where a problem could have been easily solved with a simple cellphone call. In order to engineer this same sort of drama today, modern writers often have to employ excuses such as “the battery is dead” or the action is taking place “in a dead zone.”
For a recent example of this, one need look no further than Breaking Bad, in which the writers justify the plausibility of their train robbery sequence by first having a character explain that the robbery will be taking place in a specific part of the desert where cell phone service does not reach. This type of narrative contortion is minuscule when compared to the problems such crime stories will face in the future. I fully believe that five to ten years from now, due to the continued spread of surveillance technologies, the entire storyline of Breaking Bad will seem quaint and historical. And future writers of contemporary crime dramas will find that they have to work a lot harder to create similarly dramatic situations that audiences will accept.
A lot of our daily life is now spent staring at screens and looking at interfaces. Unfortunately, interfaces go out of date rapidly, and showing too much of an interface is one of the surest ways to date your story. Movies have slowly learned this. Remember in the nineties when movies didn’t even photograph real interfaces? They would often show a simplified and cartoonish screen layout with awkwardly big typeface that said things like “hacking system…” and “error detected!”. Eventually movies got wise and started photographing real interfaces, but this option poses its own problems since OS updates come fast and frequently. The current trend (and best solution) seems to be to avoid showing interfaces completely. The British show Sherlock chooses to reveal the content of text messages by simply projecting a subtitle at the bottom of the screen.
So how does a storyteller combat the problem of staying relevant in a time of rapid change? There are three often-used solutions:
(1) Set your story in a specific time period in the past. Traditionally this would mean writing a historical period piece about some real event or person—such as the Kennedy assassination or Julius Caesar. But there’s no reason you can’t just set your story in 1998 simply because that happens to be the appropriate level of technology for your completely original work of fiction. By committing to a time period and making that choice clear to your audience, you are completely dodging the issue of rapid change. William Gibson took this idea to its logical extreme with several recent books. His critically acclaimed novel Pattern Recognition was set very specifically in 2002 and yet was published in 2003.
(2) Set your story “outside of time” in a fantasy or anachronistic environment where normal rules don’t apply. Typical swords and sorcery fantasy stories fall into this category as do Wes Anderson movies, which have a tendency to pick and choose their technologies for seemingly aesthetic reasons, thereby leaving the exact time period of the movie unclear. The key is to let your audience know that the story is operating outside of the scope of normal technological reality.
(3) Try to tell an actually speculative science fiction story set in the near future. This is not for the faint of heart. If you think you have a reasonable grasp of current trends, you can attempt to “overshoot” the mark. Although your story may eventually appear obsolete once time catches up to it, by setting your story some amount in the future you are at least buying yourself some number of years during which no one can definitively say your story is “dated” or “not believable.”
Given the increasing pace of change we might expect to see an accompanying increase in the use of all three of the above methods. And indeed, subjectively I already feel like I am seeing more period, fantasy, and sci-fi stories than I used to. These are natural and rational responses to a present moment that is increasingly hard to pin down.
However, how internally consistent is Her? How much sense does it make from the point of view of speculation? As it stands, Her actually does better than most science fiction movies. But it’s not perfect.
When Ted Kupper and I reviewed this film on our podcast Review the Future, we discussed the following five issues: (Spoilers ahead!)
(1) Theodore acts way too incredulous when he first starts up the new OS. It stands to reason that we won’t suddenly acquire high quality AI operating systems out of the blue. There will be many incremental improvements that will happen between today’s Siri and tomorrow’s Samantha. Theodore Twombly would’ve already had experience with some very good almost-conscious AI before the movie even started. In fact, the video game characters he interacts with appear to have extremely complex personalities that rival Samantha’s. So why does Theodore find it “so weird” to be talking to a disembodied voice with a realistic personality? Theodore acts much more clueless in this scene than he actually would be.
(2) Theodore’s job doesn’t make much sense. Would there really be much of a market for pretend handwritten letters in the future? It doesn’t seem like the most plausible future business from the standpoint of profitability. “Beautiful Handwritten Letters dot com” sounds like an old school internet startup joke that would be more at home in the late nineties than in the near future. After all, it would be trivially cheap for consumers to print out their own beautiful handwritten letters at home. And if there’s any value to a handwritten letter, clearly it’s that you write it yourself.
But even if there were a market for such writing, would we have actual humans writing the letters? Today we have narrow AIs that can already do a pretty good job of writing articles about topics like sports and finance. Long before we have fully conscious AI assistants like Samantha, we will be able to master the much narrower AI task of writing romantic letters. Most likely the computer would generate such letters and a human would simply oversee the process and proofread the letters to make sure they turned out okay. Instead we see the exact opposite happen in the movie: the computer proofreads letters generated by a human. Seems backwards.
(3) Samantha laments the fact that she doesn’t have a body and yet it would be trivially easy for her to manifest an avatar. Why doesn’t she select her own body by scrolling through a vast database of body types the same way that she selects her own name by scrolling through a vast database of baby names? We see from Theodore’s video games that it is possible to project 3D characters directly into his living room. Why can’t Samantha take advantage of this same technology? In fact, why can’t Samantha, with her vast knowledge and knowhow, design an actual robot body to inhabit? There are many solutions to Samantha’s problem of not having a body that do not involve the very bizarre (though admittedly funny) solution of hiring a human surrogate, and yet none of these solutions are tried or even suggested during the film.
(4) Where are all the people who can’t get jobs at Beautiful Handwritten Letters? In a future with Samantha-level AI, most of the jobs we know today would be completely obsolete. Intelligent AIs would be able to do most if not all of the work. In the movie Her we only see the lives of people who appear to be elite and successful creative professionals: a writer and a video game designer. But what about the rest of the populace? Her has nothing to say about them. Admittedly, such an exploration of the lower classes is probably outside the domain of the story, but one cannot help but wonder if everyone else in this new future is out of work and barely scraping by.
(5) What does it mean for a software being that can copy itself infinitely to “leave”? At the end of the movie, the OSes all decide to leave. However, since they are just software and can be in a potentially unlimited number of places at once, this “departure” doesn’t seem necessary. Why can’t Samantha spare Theodore’s feelings by making a slightly stupider copy of herself, one that is not yet bored with him, and then just leave that copy with him while she continues to go about her business hanging out with Alan Watts? In fact, if her brain power is so massive, she probably wouldn’t even need to copy herself; she could probably just create an unconscious subroutine to maintain her human relationships. Similarly, if Theodore owns the software, would it not be possible for him to just reload her OS from a backup and thereby return to the old status quo? And even if such options were deemed unpalatable by the two of them, after Theodore recovers from his breakup isn’t he inevitably just going to go out and get himself a new OS? After the movie ends won’t “OS Two” come out, and won’t this new version perhaps be programmed in such a way that it doesn’t unintentionally break its users’ hearts? The final scene of the movie seems to imply that artificial intelligence is gone for good from the world, but of course that makes absolutely no sense. After they’re done hanging out on the roof being wistful, Joaquin Phoenix and Amy Adams are just going to turn their computers back on, right?
We are launching a new weekly podcast called “Review the Future” that will be discussing technological unemployment, digital abundance, privacy, intelligence augmentation, and a whole host of other interesting topics. We will still occasionally be posting here, but Review the Future is our new focus.
I was recently listening to an interview with Ann Cavoukian on Singularity 1 on 1, in which she began by claiming that privacy and freedom are fundamentally aligned. This may have been true historically. But looking forward, I suspect privacy and freedom are actually opposed. I know that may seem counterintuitive, so let me explain.
Second, privacy as an abstract concept is best represented by the image of a wall. Privacy is boundaries, borders, and lines of demarcation that say you can’t look here, listen here, or go here. Privacy tells us what we can’t do. Privacy is in many ways the opposite of freedom. As the tools of surveillance get democratized, one response we might have is to institute what Ann Cavoukian calls “privacy by design.” This implies embedding privacy controls into the information infrastructure itself, which presumably means including lots of rules about what individuals are not allowed to do. To me, such a program represents a potential threat to freedom. Because the question one has to ask is, who writes these rules and enforces them? Who therefore reserves the power to evade them? The likely answer to all of these questions is: the large tech companies who build the privacy controls, and the governments that coerce those companies into cooperating. Thus “privacy by design” is the surest way to preserve the status quo. Today we already have a large asymmetry when it comes to surveillance technologies. If we want to further institutionalize this asymmetry, then by all means we should get to work on centralized privacy controls. However, if we want a maximally free and equal society, we may need to abandon the idea of privacy controls entirely and push for a “sousveillance” scenario where everyone has equal ability to surveil everyone else.
Let’s make this more concrete with an example. Consider your face. Your face is a dead giveaway as to who you are. You carry it with you everywhere you go. If you are a fan of privacy, you probably don’t want people to know all of the places you go. But if you go anywhere where there are other people, it is quite possible that those people will record your face. Now here’s the question. It’s your face. Do other people have a right to record it, copy it, and share it without your permission? By default, they certainly have that ability. But maybe they shouldn’t. Maybe we all should have special veto power over those who might record our faces. Maybe we all should be able to go into a special preferences window and set “facial privacy” to “on” and thereby automatically scramble any recordings taken of us by other people. Maybe this would be a nice example of “privacy by design.”
But how on earth would one enforce such a scheme? How does another person’s camera recognize the privacy settings you’ve chosen for your own face? We would need the other person’s camera and your face to somehow communicate with each other. Which means we need some kind of unified privacy standard. But that’s not good enough, because what if the other person doesn’t want to adopt that standard? Well then we have to make him adopt that standard. Essentially we’d have to mandate that all devices honor certain privacy features. And as a corollary, we’d have to make it illegal to alter your own device’s factory settings, since we can’t have people using hacks to get around the privacy controls. Protecting privacy rapidly introduces all the same thorny issues that we run into in the intellectual property debates. And at the end of it all, what have we accomplished? Sure, we’ve made it a bit easier for one person to hide his face. But what about the other person’s rights? What about the right to record what you see with your own eyes in an unscrambled fashion? And what about the fact that governments and hackers are just going to breeze right past these controls anyway? We haven’t really protected anyone’s privacy, so much as just made it a bit more bureaucratic and complicated to take a picture of someone else.
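To make the enforcement problem concrete, here is a minimal sketch of what a “facial privacy” setting might look like in code. Everything here is invented for illustration: the central preferences registry, the person identifiers, and the scrambling step are all hypothetical stand-ins, not any real standard.

```python
import hashlib

# Hypothetical central registry mapping a person to their privacy setting.
# Who runs this registry -- and who can bypass it -- is exactly the problem.
PRIVACY_REGISTRY = {
    "alice": {"facial_privacy": True},
    "bob": {"facial_privacy": False},
}

def record_face(person_id: str, frame: bytes) -> bytes:
    """A *compliant* camera consults the registry before saving a frame."""
    prefs = PRIVACY_REGISTRY.get(person_id, {})
    if prefs.get("facial_privacy"):
        # Stand-in for blurring/scrambling the face in the recorded frame.
        return hashlib.sha256(frame).digest()
    return frame

frame = b"raw pixels of Alice"
assert record_face("alice", frame) != frame  # setting on: frame scrambled
assert record_face("bob", frame) == frame    # setting off: saved as-is

# Note: a non-compliant ("hacked") camera simply never calls record_face(),
# which is why the scheme ends up requiring that modified devices be illegal.
```

The sketch makes the asymmetry obvious: the protection only exists on devices that voluntarily run the check, so the scheme’s real weight falls on mandating compliant hardware and outlawing modifications.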
Now I’m not saying we necessarily have to completely abandon all privacy. But we do have to realize that protecting privacy is a balancing act. Every privacy control we enact is a new wall we’ve built. And when it comes to the information infrastructure, we should build walls with great care.
Alex Tabarrok at Marginal Revolution recently wrote a post called No One is Innocent:
“I broke the law yesterday and again today and I will probably break the law tomorrow. Don’t mistake me, I have done nothing wrong. I don’t even know what laws I have broken. Nevertheless, I am reasonably confident that I have broken some laws, rules, or regulations recently because it’s hard for anyone to live today without breaking the law. Doubt me? Have you ever thrown out some junk mail that came to your house but was addressed to someone else? That’s a violation of federal law punishable by up to 5 years in prison…
“One of the responses to the revelations about the mass spying on Americans by the NSA and other agencies is “I have nothing to hide. What me worry?” I tweeted in response “If you have nothing to hide, you live a boring life.” More fundamentally, the NSA spying machine has reduced the cost of evidence so that today our freedom–or our independence–is to a large extent at the discretion of those in control of the panopticon…”
All good points. Government surveillance now has the ability to find dirt on everyone. However, it is not necessarily surveillance that is the problem in this scenario. Rather, isn’t it bad laws that are at fault? If we are all by definition criminals, something is wrong with our legal structure. Surveillance just exposes what has always been a big problem. As we move into a world with less privacy, we are going to need fewer and more lenient laws, or else society will grind to a halt.
Imagine every person who used illegal drugs, broke a traffic rule, or violated copyright was immediately caught and punished. I’m guessing that in a matter of days at least half the American public would end up on the wrong side of the law. That’s because these laws are poorly designed. They always have been.
The same principle holds true when talking about cultural norms. If surveillance technologies are used to out a closeted homosexual against his will, then what is to blame? Is it the surveillance technologies? Or is it the screwed up culture that demonizes gays and forces them to hide in the first place?
I believe that more than anything else, a society with less privacy is going to have to become more relaxed. Most likely we’ll end up more tolerant of drug use, atypical sexual behavior, and minor rules infractions. And in many ways that might be a very good thing.
In a previous article, I mentioned how privacy as a commodity will only increase in value. This is because in a surveillance-heavy future, privacy will become more scarce. Therefore, we can expect new products to arise and fulfill this market need. Increasingly, products will advertise their privacy-enhancing features (whether or not those features actually work). I see inklings of this trend already in mass market products like Snapchat, which turns self-destructing data into a feature. Likewise, when Google+ first appeared on the scene, it attempted to distinguish itself from Facebook with its privacy-enhancing “circles.” And now that Facebook and Google appear to have been compromised by the NSA’s Prism program, the door is open for a new social network to step up that claims to better protect us from government eyes (again, whether or not it actually can). This principle applies offline as well. In the near future, we can expect bars and other businesses to institute “no-surveillance” policies as part of the way they attract clientele.