Is Net Neutrality Really a “Lose-Lose”? (Marc Andreessen says so)

Tyler Cowen points to this great Marc Andreessen interview in the Washington Post that features him saying the following about net neutrality:

So, I think the net neutrality issue is very difficult. I think it’s a lose-lose. It’s a good idea in theory because it basically appeals to this very powerful idea of permissionless innovation. But at the same time, I think that a pure net neutrality view is difficult to sustain if you also want to have continued investment in broadband networks. If you’re a large telco right now, you spend on the order of $20 billion a year on capex. You need to know how you’re going to get a return on that investment. If you have these pure net neutrality rules where you can never charge a company like Netflix anything, you’re not ever going to get a return on continued network investment — which means you’ll stop investing in the network. And I would not want to be sitting here 10 or 20 years from now with the same broadband speeds we’re getting today. So the challenge, I think, is to accommodate both of those goals, which is a very difficult thing to do. And I don’t envy the FCC and the complexity of what they’re trying to do.

The ultimate answer would be if you had three or four or five broadband providers to every house. And I think you actually have the potential for that depending on how things play out from here. You’ve got the cable companies; you’ve got the telcos. Google Fiber is expanding very fast, and I think it’s going to be a very serious nationwide and maybe ultimately worldwide effort. I think that’s going to be a much bigger scale in five years.

So, you can imagine a world in which there are five competitors to every home for broadband: telcos, cable, Google Fiber, mobile carriers and unlicensed spectrum. In that world, net neutrality is a much less central issue, because if you’ve got competition, if one of your providers started to screw with you, you’d just switch to another one of your providers.

This covers, I think, the central concern very well, though it’s surprising to me that Andreessen acts as if there’s no way to reconcile the twin goals of ensuring that infrastructure investors can make a return and ensuring that the infrastructure they build is designed according to rules that serve the public interest. Of course there is a way: government subsidy. This is not a new idea. It’s been used in the past to build the railroads, the phone network, and other infrastructure.

If Comcast and Verizon (and Google and anyone else with a credible plan to build) get $15 billion a year (say) in subsidies to build out a neutral network infrastructure, what’s the problem? Investors are now only in the hole $5 billion a year instead of $20 billion, so they are much more likely to make a reasonable return, and society gets faster network speeds and permissionless innovation (the kind that allowed Andreessen to create one of the first web browsers, Mosaic, and to market it as Netscape, back before he became a VC). This seems to me like a “win-win.”

Further, Andreessen neglects to mention the best reason for network neutrality: it’s a better network design. It’s hard to anticipate all future uses of a given network. The phone network was originally designed for voice and yet it gave rise to the consumer internet; cable was originally designed for television but is now being repurposed for data. We don’t know whether tomorrow’s heavy traffic will look anything like today’s (i.e., streaming video), so it’s better to build a network that can handle any type of data than to optimize it for particular uses that may not matter as much as we assume.

For a more in-depth look at the complex issue of net neutrality, check out our recent Review the Future podcast, What is the Future of Net Neutrality?

Why It’s Time The Government Learned How to Code

Computers are eating bureaucracy. This should come as no surprise. Rigidly and programmatically checking a series of boxes is a computer’s bread and butter. Computers eat forms for lunch. Automated computer systems are faster and more efficient at virtually everything a bureaucracy does. So naturally the government here in the U.S. is building its latest bureaucracy, the federal insurance exchange mandated by the Affordable Care Act, as an online-only endeavor. Recent news stories have widely covered the failures in the system, on both the customer and insurer sides, as well as CGI, the Canadian contractor building the site for the government.

It is impossible not to notice that this is not how Silicon Valley makes consumer websites. Corporate internal websites, where functionality trumps interface design, are often made by big contractors, and they are often kludgy nightmares. But consumer website startups begin with a concept and build up a team of designers and coders who continuously iterate on interface and feature-set questions. In some cases the design chops of the team seem to be more valuable to potential investors or acquirers than, you know, things of actual monetary value like ad sales or revenues or even active users.

That kind of workflow is often not possible when a contractor does the work. The contractor soon spreads poorly documented, in-house conventions throughout the codebase, and the cost for anyone else to get up to speed climbs. If the contractor follows good coding practices and comments clearly, it is acting against its own economic interest. That’s a bad conflict to have.

Every new responsibility of government bureaucracy (and a significant portion of existing responsibilities too) will be digitized. The result will be faster, more efficient, less labor-demanding, and less corrupt — if the software is designed well.

It’s time for government to adopt best practices for software design. There is no law of the universe that says only private companies can make good software — good software has been made by loose groups of open source contributors, by companies, by universities and by individuals working alone. Bad software has also been made by groups of all those types. Government too can make good software by looking at what’s similar among successful institutions’ approaches: build a team from a small size up and incentivize staying on; document everything; use a continuous process of feedback and improvement rather than a ‘ship date’ model; test long and hard. There are more best practices that are pretty well documented out there, but that’s a start. I see no reason government couldn’t utilize these principles and create powerful, money-saving, service-providing software that replaces massive bureaucracies with small, experienced teams of programmers.
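To make a couple of those practices concrete, here’s a minimal sketch, assuming nothing about any real government system: the eligibility rule, its numbers, and the function names below are all invented for illustration. It shows “document everything” and continuous automated testing at the smallest possible scale:

```python
def eligible_for_subsidy(income: float, household_size: int) -> bool:
    """Return True if a household qualifies for a subsidy.

    The threshold formula here is invented for illustration; a real rule
    would come from statute and be documented against the statute.
    """
    threshold = 15_000 + 5_000 * household_size
    return income <= threshold


def test_eligible_for_subsidy():
    # In a continuous process, checks like these run on every change,
    # not just once before a "ship date."
    assert eligible_for_subsidy(20_000, 1)
    assert not eligible_for_subsidy(60_000, 2)


if __name__ == "__main__":
    test_eligible_for_subsidy()
    print("all checks pass")
```

The point isn’t the toy rule; it’s that documented behavior plus automated checks is what lets a small team maintain software that would otherwise require a bureaucracy’s worth of institutional memory.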

And the alternative, I’m afraid, is that the government’s hamfisted approach to semi-privatized contracting will earn it ill will and low expectations. This will lead to worse software from the government (though not necessarily less of it — even bad software is much cheaper than humans). In short, the government needs to learn to code, because computing is eating governance just like it’s eating everything else, and increasingly governing and coding are the same thing.

How Government Surveillance is Like Piracy

Many civil libertarians are up in arms about the NSA snooping revelations. There are serious issues with secrecy and oversight that I’m going to ignore here. But the fact that the NSA is snooping doesn’t surprise me and, in itself, doesn’t bother me. I see privacy as a dead issue. Like my co-blogger Jon Perry and many other thinkers, I believe we must fight to allow citizen “sousveillance” and protect due process rather than chase after technically infeasible privacy.

But there’s a way the NSA debate is like the piracy debate. The problem with a file sharer isn’t that he or she copied, but that the copy was done without permission. The NSA can be characterized as doing the same thing: copying data without permission. In both cases, a fundamental quality of digital technology — frictionless, nonrivalrous copying — enables the behavior. In both cases, the authority to grant permission is the key issue.

A pirate uploads a movie without authorization from the studio; the NSA downloads an email (OK, all the emails) without authorization from the user.

In both cases, the real-world analogues for which we have established law are not adequate. It is not quite correct to say that downloading a file is ‘stealing’ in the traditional sense of that word (whatever the moral equivalents might be, there is a physical difference between stealing something rivalrous and copying something nonrivalrous, and it is hardly trivial). It is not quite adequate to say that the Fourth Amendment protects us from unreasonable ‘search and seizure’ when one is talking about data. Data can be searched and copied without being seized or stolen in the physical sense of those terms. What protection are we afforded from seizure-less search? What about theft that robs someone only of their product’s artificial scarcity, not of any physical good?

Self-Driving Cars Need Their Own Speed Limits

As the video above demonstrates dramatically, speed limits are a well-meaning regulation that’s going to have to be rethought in the era of self-driving cars. Eric Schmidt is on record saying that a major problem with the current design of the Google self-driving car is that it obeys speed limits. But a computer that can drive safely can do so at higher speeds than a human can. Further, a car designed from the ground up to be piloted by machine could have acceleration and braking systems optimized for the machine’s much faster reaction time, so cars that use self-driving technology could quickly become faster and more efficient.
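To see why reaction time matters so much, here’s a back-of-the-envelope sketch using the standard stopping-distance formula; the reaction times and braking rate below are illustrative assumptions, not measurements of any actual vehicle:

```python
# Total stopping distance = reaction distance + braking distance:
#   d = v * t_reaction + v^2 / (2 * a)

def stopping_distance(speed_mps: float, reaction_s: float,
                      decel_mps2: float = 7.0) -> float:
    """Meters needed to stop from speed_mps, given a reaction time."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

v = 30.0  # meters per second, roughly 67 mph

# ~1.5 s is a common estimate of human reaction time; 0.1 s is a
# hypothetical figure for a machine. The 7 m/s^2 braking rate is also
# just an assumption for the example.
print(f"human:   {stopping_distance(v, 1.5):.0f} m")  # ~109 m
print(f"machine: {stopping_distance(v, 0.1):.0f} m")  # ~67 m
```

At highway speed, cutting the reaction time alone shaves roughly 40 meters off the stopping distance before the braking hardware improves at all, which is why a demonstrably faster-reacting car could justify a higher limit.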

We need to establish an objective procedure by which autonomous vehicles can demonstrate safety at various speeds. And if we are going to continue to have speed limits (I think we should!), they should be designed to maximize safety and efficiency while keeping up with current-generation technology.

(H/t +Wayne Radinsky for the video)

Why Do You “Vaguely Remember” Things You Saw On Facebook?

Over the past few days I’ve had the same social interaction repeat itself. It went like this:

While speaking with a friend, someone I know but hadn’t seen recently, I was asked to fill in details about something I had posted on Facebook. She said she “vaguely remembered” seeing something about the subject on Facebook. The way the question was phrased stuck out to me, so before answering I gently asked a few questions to see how much she’d learned from the post. It became clear that she had what I’d consider a firm grasp of the content. Far from a vague remembrance, this was a clear view.

I forgot about this, and then almost the same situation played out again, with a different friend asking about a different detail. Then something really surprising happened: I caught myself doing it.

In a third conversation I asked a question about a trivial matter I’d seen alluded to on Facebook. In truth I remembered the minute details of this matter as it had been posted and subsequently discussed by our mutual friends. I had lurked in the thread with interest and could probably recreate the argument extemporaneously. But in face-to-face conversation I represented myself as having only glancingly noticed the headline. I had internalized a social rule: remembering everything you see online is not polite. Some thoughts on why this happens:

  • To avoid looking like a stalker.
  • To avoid ‘caring’ openly about social media, which is apparently still not fashionable.
  • To avoid looking tech savvy and therefore geeky.
  • To avoid ‘caring’ openly about your friends’ lives, which is also apparently not fashionable.

I don’t know why it is impolite to seem to have complete knowledge of social trivialities, but it occurred to me that, as information retrieval gets ever faster and more embedded, it will be challenging to keep up this charade of ignorance when both participants know your in-ear AI is whispering facts as you deny knowing them.

Could Luxottica’s Eyewear Monopoly Threaten Google Glass?


Italian eyewear behemoth Luxottica is not afraid to throw its weight around — after Oakley took it to court over a patent dispute, the integrated manufacturer/retailer/insurer simply bought its biggest rival, and now Oakley is just another Luxottica brand — the eyewear equivalent of Coke buying Pepsi.

That Google Glass is potentially a major disruptor in eyewear is self-evident. As has been widely reported, Luxottica has not been resting on its laurels — Oakley has been working on heads-up displays for years and has a HUD product on the market, an expensive system for skiers. Like Google’s developer edition of Glass, now in circulation among a relatively select few, it’s not a mass-market product yet, but it’s out there. If Google needs Luxottica in order to get Glass manufactured, distributed, covered by insurance, or stocked in common stores like Sunglass Hut or LensCrafters, it is completely at Luxottica’s mercy. Google will need a strategy to bypass this hegemon in at least three of those four ways (the insurance issue seems minor in the case of Glass).

Google has reportedly been speaking to upstart Warby Parker about helping out with the design of Glass, which sounds like a good way to solve the design and manufacturing problem. But Warby Parker uses a send-the-frames-then-send-them-back strategy for fitting, which seems impractical with Glass, where the frames contain a lot of the value. That means Google will need a retail presence, perhaps through another partnership, where people can get their Glass fitted. Right now the prototype for that experience is available only to the developers already using Glass.

Just as smartphones put onetime collaborators like Motorola and Apple into direct competition, wearable computing is going to put new rivals up against one another. In this case you have two massive near-monopolies facing off; Luxottica and Google both have deep enough pockets that I doubt either could just buy the other.

Can Google bypass the incumbent? Can Oakley’s technology beat Google’s? Might Google decide to license Glass software to Luxottica just to get it out there, as it did with Android? If so, would Luxottica go for that? What do you think?

MOOCs versus Traditional Classrooms: How Do You Judge Nonscarce Goods?

I’ve seen a spate of recent articles about the difficulty of applying traditional metrics to MOOCs. The general thrust of these articles is one of two things:

  1. MOOCs are vastly superior to regular classes because they reach many more people.
  2. MOOCs are vastly inferior to regular classes because the drop rates are astronomical.

I find both claims somewhat problematic. The fact that private institutions with stringent admissions, a culture of classwork, and high per-unit prices have a much lower drop rate says more about the difference between the structure of college education and the structure of free educational materials on the internet than it does about instructional quality. If you get into a college, pay for it, and associate with students, you are likely to finish your classes. If you are educating yourself à la carte and there’s no penalty to start or quit a class, you can be expected to start more classes and finish only those you really like.

So drop rates aren’t really a good basis for comparing traditional and online learning options. Let’s say I run an open-enrollment, free MOOC and it signs up 100,000 students. If only 10% finish, I still taught 10,000 students!
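Here’s that arithmetic as a quick sketch, with illustrative numbers rather than figures from any actual study, showing why completion rate and completion count tell such different stories:

```python
# Illustrative enrollment and completion figures (not from any study).
mooc_enrolled, mooc_completion = 100_000, 0.10  # open enrollment, free
campus_enrolled, campus_completion = 500, 0.95  # selective, paid

print(f"MOOC students taught:   {int(mooc_enrolled * mooc_completion):,}")
print(f"Campus students taught: {int(campus_enrolled * campus_completion):,}")
# MOOC students taught:   10,000
# Campus students taught: 475
```

Judged by rate, the MOOC looks roughly ten times worse; judged by the number of people actually taught, it comes out more than twenty times ahead.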

And what of the variety? When you choose a college, it is often for the totality of its course offerings — the more in your fields of interest the better. But when you can switch institutions at no cost, all the institutions of the world are open to you. Isn’t there value in that vastly expanded course catalog / idea marketplace?

So before we go demonizing MOOCs as education for the waffler and the flake, perhaps we should interrogate whether collegiate learning environments ought to be more open and more encouraging of experimentation in learning styles and disciplines. Both are at least equally valid conclusions from the data.

The One Surprising Thing That Really Makes Monsanto Evil

A lot of people seem to think the problem with Monsanto is the fact that it uses GMO technology to produce the seeds it sells. Some think this technology is in some way ‘bad’ or ‘not natural,’ while others think it is a life-saver. My opinion is that, like any technology, it will have good and bad implementations. But let’s assume for a second that there is nothing fundamentally wrong with GMO technology and talk about where Monsanto really gets its power. The real problem with GMOs isn’t the technology that created them — it’s the artificial scarcity policy that prevents the market from correcting Monsanto’s mistakes. Monsanto is evil because it is a monopoly.

Let’s say Monsanto makes a GMO strain of wheat that is Roundup-resistant and hardy, but also contains a protein you don’t like. In a functioning market, another company would make an herbicide-resistant strain that kept the desired protein. Most of the legitimate complaints I’ve heard about GMOs are basically just engineering problems. More or less cost is involved, and there are consequences for growers, storage, and so on, but all these issues can probably be worked out in a way that satisfies everyone. The problem is that if any company, or even an individual doing cross-breeding at home (cross-breeding is a GMO technology that humans have been using for ages), were to solve this problem, they would be open to a lawsuit. Why? Because Monsanto has patents on the genes in its GMOs. That’s the problem. Essentially, Monsanto cribbed notes off Mother Nature and got the Patent Office to write a note saying a plant’s genome is now Monsanto’s. This leads to absurd situations such as the one depicted in Food, Inc., where a farmer was sued for trying to save his own seeds.

Fundamentally, I’m optimistic about the potential of GMOs to drastically increase yields, the nutritional content of food, and profit for food producers over time. But only if competition and innovation are allowed.

The Internet is Bringing Us Closer Together

You often hear technology characterized as a dangerous temptation that disrupts social growth. Technological interactions are fundamentally “not real,” we are told, and are antithetical to “real” traditional social interactions.

Among many examples to the contrary, file this: a recent study finds that, with reasonable attempts to control for confounding variables, access to broadband internet increases marriage rates. If the study’s conclusions are valid, that means a traditional social interaction (whatever one thinks of it) is directly being supported and encouraged by computer communication technology.

Can We Create a Future of Autonomy, Mastery, and Purpose?

I was reminded today of this Dan Pink talk, which I love. It lays out what’s now the consensus view from behavioral economics and behavioral psychology on how best to manage people who do creative work during the day (a group increasingly known as ‘the employed,’ as routine tasks are automated away). It got me thinking about the three motivators of human action: Autonomy, Mastery, and Purpose. I know from my own experience that these are my strongest motivators. And, though we are transitioning to a jobless future, I wonder whether we can deliberately architect these qualities into the future world. So here’s a quick thought experiment about how these aspects of life might fare in the future. Please add your thoughts in the comments!

  • Autonomy

Without employment or tyrannical government, autonomy is the default. If you don’t have work, you don’t have a boss. So I see few short-term threats to autonomy. One can argue that mind hacks like advertising decrease autonomy, and perhaps those techniques get better at manipulating us in the near term. Real mind control might be possible in the long term, and that’s truly frightening. One only hopes the mindware antivirus will be good enough to keep up with the evolving threats.

  • Mastery

Getting good at things is a prime motivator of creative work. This is likely to continue in the future because, for example, no chess lovers gave up chess just because a computer can now beat them at it. People will continue to do new things like writing new software, and old things like woodworking and playing musical instruments, for fun long after the economic incentive is gone. The future seems full of opportunities for mastery.

  • Purpose

In the short term, it’s easier than ever to connect with a group of people who share a purpose and take collective action. Local and global causes are tremendously more powerful than in the past as a result, and people have far more opportunities to act collectively than before. That said, many people, especially here in the U.S., treat their work — their employment — as the institution that provides purpose. We are seeing many companies adopt a purpose-driven frame as a result, but we are also seeing fewer and fewer people work their whole lives at one company (or, in some cases, ever work again, period). So work, a major source of purpose, will be scarcer in the coming years. This is dangerous, because a purposeless generation or two could see reduced productivity or worse.

In the medium to long term, I see us replacing that source of purpose with our families, hobbies, faith, and cultural pursuits of altruism, science, and art. These are already valued in human culture. The only question is whether they can close the gap left by employment, and if so, how quickly.

So, to conclude this little thought experiment: I am hopeful that the abundant future will provide opportunities to be autonomous, challenged, and part of something larger than ourselves, and I believe those opportunities will motivate further creativity on the part of humans and our creations. But there are cultural and technical challenges we must face along the way.