Why It’s Time The Government Learned How to Code

Computers are eating bureaucracy. This should come as no surprise. Rigidly and programmatically checking a series of boxes is computers’ bread and butter. Computers eat forms for lunch. Automated systems are faster and more efficient at virtually everything a bureaucracy does. So naturally the government here in the U.S. is building its latest bureaucracy, the federal insurance exchange mandated by the Affordable Care Act, as an online-only endeavor. Recent news stories have widely covered the failures in the system, on both the customer and insurer sides. The news has also widely covered CGI, the Canadian contractor building the site for the government.
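
To make the point concrete, here is a toy version of the kind of box-checking an exchange does. The rules are hypothetical simplifications (loosely based on the 2013 federal poverty guideline and the ACA’s 400% income cutoff), not the actual exchange logic:

```python
# A toy eligibility check, sketched to show how naturally bureaucratic
# box-checking maps to code. These rules are simplifications loosely
# based on 2013 figures, not the actual exchange logic.

def eligible_for_subsidy(income, household_size, has_employer_coverage):
    """Check every (simplified) box an application must tick."""
    poverty_line = 11_490 + 4_020 * (household_size - 1)  # 2013 guideline
    within_income_band = income <= 4 * poverty_line       # 400% cutoff
    return within_income_band and not has_employer_coverage

print(eligible_for_subsidy(income=30_000, household_size=2,
                           has_employer_coverage=False))  # True
```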

It is impossible not to notice that this is not how Silicon Valley makes consumer websites. Corporate internal websites, where functionality trumps interface design, are often made by big contractors, and they are often kludgy nightmares. But consumer website startups begin with a concept and build up a team of designers and coders who continuously iterate on interface and feature-set questions. In some cases the design chops of the team seem to be more valuable to potential investors or acquirers than, you know, things of actual monetary value like ad sales or revenues or even active users.

That kind of workflow is often not possible when a contractor does the work. Soon the contractor has scattered poorly documented in-house ideas throughout the codebase, and the cost for anyone else to get up to speed rises. If contractors follow good coding practices and comment clearly, they are acting against their own economic interest. That’s a bad conflict to have.

Every new responsibility of government bureaucracy (and a significant portion of existing responsibilities too) will be digitized. The result will be faster, more efficient, less labor-intensive, and less corrupt, provided the software is designed well.

It’s time for government to adopt best practices for software design. There is no law of the universe that says only private companies can make good software: good software has been made by loose groups of open-source contributors, by companies, by universities, and by individuals working alone. Bad software has also been made by groups of all those types. Government too can make good software by looking at what’s similar among successful institutions’ approaches: build the team up from a small core and incentivize staying on; document everything; use a continuous process of feedback and improvement rather than a ‘ship date’ model; test long and hard. There are more best practices well documented out there, but that’s a start. I see no reason government couldn’t apply these principles and create powerful, money-saving, service-providing software that replaces massive bureaucracies with small, experienced teams of programmers.
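
To make one of those practices concrete: ‘test long and hard’ means every rule the software encodes gets a regression test, so the continuous-improvement model can ship changes without silently breaking old behavior. A minimal sketch, reusing the toy eligibility check from above:

```python
import unittest

# Regression tests for the toy eligibility check sketched earlier.
# Under a continuous-improvement model, a suite like this runs on
# every change, so iteration doesn't silently break old behavior.

def eligible_for_subsidy(income, household_size, has_employer_coverage):
    poverty_line = 11_490 + 4_020 * (household_size - 1)
    return income <= 4 * poverty_line and not has_employer_coverage

class EligibilityTests(unittest.TestCase):
    def test_low_income_household_qualifies(self):
        self.assertTrue(eligible_for_subsidy(30_000, 2, False))

    def test_employer_coverage_disqualifies(self):
        self.assertFalse(eligible_for_subsidy(30_000, 2, True))

    def test_income_above_cutoff_disqualifies(self):
        self.assertFalse(eligible_for_subsidy(100_000, 1, False))

if __name__ == "__main__":
    unittest.main()
```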

And the alternative, I’m afraid, is that the government’s ham-fisted approach to semi-privatized contracting will earn it ill will and low expectations. This will lead to worse software from the government (though not necessarily less of it; even bad software is much cheaper than humans). In short, the government needs to learn to code, because computing is eating governance just like it’s eating everything else, and increasingly governing and coding are the same thing.

Why Privacy and Freedom Can Sometimes Be Opposed

I was recently listening to an interview with Ann Cavoukian on Singularity 1 on 1, in which she began by claiming that privacy and freedom are fundamentally aligned. This may have been true historically. But looking forward, I suspect privacy and freedom are actually opposed. I know that may seem counterintuitive, so let me explain.

First of all, when talking about privacy, we can’t just focus on government vs. the individual. This is the old paradigm, and it is changing. The tools of surveillance are rapidly being democratized. This might seem to be a strange point of view given the massive mountains of data currently controlled by just a few gatekeepers such as Google, Facebook, and yes, the US government. But any description of right now is inherently fleeting given the rapid pace of technological change. And data is multiplying so rapidly that we will all soon be sitting on massive mountains of data. We will all be sensing, recording, and storing everything we come into contact with. For this reason, we need to focus on the implications of individuals spying on other individuals. Ideally we should strive to have one privacy policy that applies to everyone equally, whether that person is a member of the government or not. And we should expect (and hope) that individuals will aggressively spy on their own government officials. After all, government secrecy is just the flip side of individual privacy, and both are threatened by new technologies.

Second, privacy as an abstract concept is best represented by the image of a wall. Privacy is boundaries, borders, and lines of demarcation that say you can’t look here, listen here, or go here. Privacy tells us what we can’t do. Privacy is in many ways the opposite of freedom. As the tools of surveillance get democratized, one response we might have is to institute what Ann Cavoukian calls “privacy by design.” This implies embedding privacy controls into the information infrastructure itself, which means, presumably, including lots of rules about what individuals are not allowed to do. To me, such a program represents a potential threat to freedom. Because the questions one has to ask are: who writes these rules and enforces them? Who therefore reserves the power to evade them? The likely answer is: the large tech companies who build the privacy controls, and the governments that coerce those companies into cooperating. Thus “privacy by design” is the surest way to preserve the status quo. Today we already have a large asymmetry when it comes to surveillance technologies. If we want to further institutionalize this asymmetry, then by all means we should get to work on centralized privacy controls. However, if we want a maximally free and equal society, we may need to abandon the idea of privacy controls entirely and push for a “sousveillance” scenario where everyone has equal ability to surveil everyone else.

Let’s make this more concrete with an example. Consider your face. Your face is a dead giveaway as to who you are. You carry it with you everywhere you go. If you are a fan of privacy, you probably don’t want people to know all of the places you go. But if you go anywhere where there are other people, it is quite possible that those people will record your face. Now here’s the question. It’s your face. Do other people have a right to record it, copy it, and share it without your permission? By default, they certainly have that ability. But maybe they shouldn’t. Maybe we all should have special veto power over those who might record our faces. Maybe we all should be able to go into a special preferences window and set “facial privacy” to “on” and thereby automatically scramble any recordings taken of us by other people. Maybe this would be a nice example of “privacy by design.”

But how on earth would one enforce such a scheme? How does another person’s camera recognize the privacy settings you’ve chosen for your own face? We would need the other person’s camera and your face to somehow communicate with each other. Which means we need some kind of unified privacy standard. But that’s not good enough, because what if the other person doesn’t want to adopt that standard? Well then we have to make him adopt that standard. Essentially we’d have to mandate that all devices honor certain privacy features. And as a corollary, we’d have to make it illegal to alter your own device’s factory settings, since we can’t have people using hacks to get around the privacy controls. Protecting privacy rapidly introduces all the same thorny issues that we run into in the intellectual property debates. And at the end of it all, what have we accomplished? Sure, we’ve made it a bit easier for one person to hide his face. But what about the other person’s rights? What about the right to record what you see with your own eyes in an unscrambled fashion? And what about the fact that governments and hackers are just going to breeze right past these controls anyway? We haven’t really protected anyone’s privacy, so much as just made it a bit more bureaucratic and complicated to take a picture of someone else.
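
To see how much machinery even the simple “facial privacy” toggle demands, here is a minimal sketch. Everything in it is hypothetical: an imagined unified standard with a central registry of face identifiers that every camera is required to consult before saving a frame:

```python
# A hypothetical sketch of "privacy by design" enforcement. No such
# unified standard exists; the point is how much centralized machinery
# it would require.

# A central registry mapping face identifiers to privacy preferences.
PRIVACY_REGISTRY = {"alice-face-id": "scramble", "bob-face-id": "allow"}

def blur_region(frame, face_id):
    # Stand-in for real image processing: replace the face with noise.
    return {**frame, face_id: "<scrambled>"}

def process_frame(frame, detected_face_ids):
    """A compliant camera checks every detected face against the registry."""
    for face_id in detected_face_ids:
        if PRIVACY_REGISTRY.get(face_id, "allow") == "scramble":
            frame = blur_region(frame, face_id)
    return frame

frame = {"alice-face-id": "<pixels>", "bob-face-id": "<pixels>"}
print(process_frame(frame, list(frame)))
# {'alice-face-id': '<scrambled>', 'bob-face-id': '<pixels>'}
```

Note what even this cartoon version quietly assumes: every face must be identifiable, every device must be locked down to run the check, and someone must control the registry. The “privacy” control requires universal identification and centralized trust.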

Now I’m not saying we necessarily have to completely abandon all privacy. But we do have to realize that protecting privacy is a balancing act. Every privacy control we enact is a new wall we’ve built. And when it comes to the information infrastructure, we should build walls with great care.

How Government Surveillance is Like Piracy

Many civil libertarians are up in arms about the NSA snooping revelations. And there are serious issues with the secrecy and oversight elements that I’m going to ignore here. But the fact that they are snooping doesn’t surprise me and, in itself, doesn’t bother me. I see privacy as a dead issue. Like my co-blogger Jon Perry and many other thinkers, I think we must fight to allow citizen “sousveillance” and protect due process rather than chase after technically infeasible privacy.

But there’s a way the NSA debate is like the piracy debate. The problem with a file sharer isn’t that he or she copied, but that the copy was done without permission. The NSA can be characterized as doing the same thing: copying data without permission. In both cases, a fundamental quality of digital technology — frictionless, nonrivalrous copying — enables the behavior. In both cases, the authority to grant permission is the key issue.

A pirate uploads a movie without authorization from the studio; the NSA downloads an email (OK, all the emails) without authorization from the user.

In both cases, the real-world analogues for which we have established law are not adequate. It is not quite correct to say that downloading a file is ‘stealing’ in the traditional sense of that word (whatever the moral equivalents might be, there is a physical difference between stealing something rivalrous and copying something nonrivalrous, and it is hardly trivial). It is not quite adequate to say that the Fourth Amendment protects us from unreasonable ‘search and seizure’ when one is talking about data. Data can be searched and copied without being seized or stolen in the physical sense of those terms. What protection are we afforded from seizure-less search? What about theft that robs someone only of their product’s artificial scarcity, not of any physical good?

Self-Driving Cars Need Their Own Speed Limits

As the video above demonstrates dramatically, speed limits are a well-meaning regulation that’s going to have to be rethought in the era of self-driving cars. Eric Schmidt is on record saying that a major problem with the current design of the Google self-driving car is that it obeys speed limits. But a computer that can drive safely can do so at higher speeds than a human. Further, a car designed from the ground up to be piloted by machine could have acceleration and braking systems optimized for the machine’s much faster reaction time, so cars that use self-driving technology might rapidly become faster and more efficient.

We need to establish an objective procedure by which autonomous vehicles can demonstrate safety at various speeds. And if we are going to continue to have speed limits (I think we should!), they should be designed to maximize safety and efficiency while keeping up with current-generation technology.
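
Back-of-the-envelope physics shows what such a procedure might measure. Stopping distance is reaction distance plus braking distance; with assumed numbers (7 m/s² braking on dry pavement, a 1.5-second human reaction time versus 0.1 seconds for a machine), a sketch:

```python
import math

# Assumed numbers: 7 m/s^2 braking (dry pavement), 1.5 s human
# reaction time vs 0.1 s for a machine. Stopping distance is
# reaction distance plus braking distance: d = v*t + v^2 / (2*a).
BRAKING = 7.0  # m/s^2

def stopping_distance(speed, reaction_time):
    return speed * reaction_time + speed**2 / (2 * BRAKING)

def speed_for_distance(distance, reaction_time):
    # Solve v*t + v^2/(2a) = d for v (positive root of the quadratic).
    a, b, c = 1 / (2 * BRAKING), reaction_time, -distance
    return (-b + math.sqrt(b**2 - 4 * a * c)) / (2 * a)

human_d = stopping_distance(29.0, 1.5)   # 29 m/s is roughly 65 mph
machine_v = speed_for_distance(human_d, 0.1)
print(f"human stopping distance at 65 mph: {human_d:.0f} m")
print(f"machine speed with equal stopping distance: {machine_v:.0f} m/s")
```

On these assumptions a machine driver matches a 65 mph human’s stopping distance at roughly 37 m/s (about 84 mph), which is exactly the kind of result a certification procedure could turn into a per-vehicle limit.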

(H/t +Wayne Radinsky for the video)

Why Do You “Vaguely Remember” Things You Saw On Facebook?

I’ve had a social interaction repeat itself over the past few days that went like this:

While speaking with a friend, someone I know but haven’t seen recently, I was asked to fill in details regarding something I had posted on Facebook. She said she “vaguely remembered” seeing something about the subject at hand on Facebook. The way the question was phrased stuck out to me, so before answering I gently asked a few questions to see how much she’d learned from the post. It became clear that she had what I’d consider a firm grasp of the content. Far from a vague remembrance, this was a clear view.

I forgot about this, and then almost the same situation played out again, with a different friend asking about a different matter. Then something really surprising happened: I caught myself doing it.

In a third conversation I asked a question about a trivial matter I’d seen alluded to on Facebook. In truth I remembered the minute details of this matter as it had been posted and subsequently discussed by our mutual friends. I had lurked in the thread with interest and could probably recreate the argument extemporaneously. But in face-to-face conversation I represented myself as having only glancingly noticed the headline. I had internalized a social rule: remembering everything you see online is not polite. Some thoughts on why this happens:

  • To avoid looking like a stalker.
  • To avoid ‘caring’ openly about social media, which is apparently still not fashionable.
  • To avoid looking tech savvy and therefore geeky.
  • To avoid ‘caring’ openly about your friends’ lives, which is also apparently not fashionable.

I don’t know why it is impolite to seem to have complete knowledge of social trivialities, but it occurred to me that, as information retrieval gets ever quicker and more embedded, it will be challenging to keep up this charade of ignorance when both participants know your in-ear AI is whispering facts even as you deny knowing them.

Could Luxottica’s Eyewear Monopoly Threaten Google Glass?

Italian eyewear behemoth Luxottica is not afraid to throw its weight around. After Oakley took it to court over a patent dispute, the integrated manufacturer/retailer/insurer bought its biggest rival, and now Oakley is just another Luxottica brand: the eyewear equivalent of Coke buying Pepsi.

That Google Glass is potentially a major disruptor in eyewear is self-evident. As has been widely reported, Luxottica has not been resting on its laurels: Oakley has been working on HUDs for years and has a HUD product on the market, an expensive system for skiers. Like Google’s developer units now in circulation among a relatively select few, it’s not a mass-market product yet, but it’s out there. If Google needs Luxottica in order to get Glass manufactured, distributed, covered by insurance, or stocked in common stores like Sunglass Hut or LensCrafters, it is completely at Luxottica’s mercy. Google will need a strategy to bypass this hegemon in at least three of those four areas (the insurance issue seems minor in the case of Glass).

Google has reportedly been speaking to upstart Warby Parker about helping out with the design of Glass, which sounds like a good way to solve the design and manufacturing problem. But Warby Parker uses a send-the-frames-then-send-them-back strategy for fitting, which seems impractical with Glass, where the frames contain a lot of the value. That means Google will need a retail presence, perhaps through another partnership, where people can get their Glass fitted. Right now the prototype for that experience has already been made available to the developers using Glass.

Just as smartphones put onetime collaborators like Motorola and Apple into direct competition, wearable computing is going to put new rivals up against one another. In this case you have two massive near-monopolies facing off; Luxottica and Google both have deep enough pockets that I doubt either could just buy the other.

Can Google bypass the incumbent? Can Oakley’s technology beat Google’s? Think Google might decide to license Glass software to Luxottica just to get it out there, like they did with Android? If so, would Luxottica go for that? What do you think?

MOOCs versus Traditional Classrooms: How Do You Judge Nonscarce Goods?

I’ve seen a spate of recent articles about the difficulty of applying traditional metrics to MOOCs. The general thrust of these articles is one of two things:

  1. MOOCs are vastly superior to regular classes because they reach many more people.
  2. MOOCs are vastly inferior to regular classes because the drop rates are astronomical.

I find both claims somewhat problematic. The fact that private institutions with stringent admissions, a culture of classwork, and high per-unit prices have a much lower drop rate says more about how college education is structured, compared with free educational materials on the internet, than it does about instructional quality. If you get into a college, pay for it, and associate with other students, you are likely to finish your classes. If you are educating yourself a la carte and there’s no penalty to start or quit a class, you can be expected to start more classes and finish only those you really like.

So drop rates aren’t really a good comparison for traditional versus online learning options. Let’s say I have an open-enrollment, free MOOC and it signs up 100,000 students. If only 10% finish, I taught 10,000 students!
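
The arithmetic, with made-up but plausible numbers, shows why the completion rate and the count of students actually taught point in opposite directions:

```python
# Made-up but plausible numbers: completion rate vs. students taught.
mooc_enrolled, mooc_completion = 100_000, 0.10
campus_enrolled, campus_completion = 30, 0.95

print(f"MOOC:   {mooc_completion:.0%} finish -> "
      f"{int(mooc_enrolled * mooc_completion)} students taught")
print(f"Campus: {campus_completion:.0%} finish -> "
      f"{int(campus_enrolled * campus_completion)} students taught")
# MOOC:   10% finish -> 10000 students taught
# Campus: 95% finish -> 28 students taught
```

By the drop-rate metric the campus class wins in a landslide; by students actually taught, it isn’t close.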

And what of the variety? When you choose a college, it is often for the totality of its course offerings: the more in your fields of interest, the better. But when you can switch institutions at no cost, all the institutions of the world are open to you. Isn’t there value in that vastly expanded course catalog / idea marketplace?

So before we go demonizing MOOCs as education for the waffler and the flake, perhaps we should interrogate whether collegiate learning environments ought to be more open and more encouraging of experimentation in learning styles and disciplines. Both are at least equally valid conclusions from the data.

If Everyone Has Something to Hide, Then It’s Not Surveillance that is the Problem

Alex Tabarrok at Marginal Revolution recently wrote a post called No One is Innocent:

“I broke the law yesterday and again today and I will probably break the law tomorrow. Don’t mistake me, I have done nothing wrong. I don’t even know what laws I have broken. Nevertheless, I am reasonably confident that I have broken some laws, rules, or regulations recently because it’s hard for anyone to live today without breaking the law. Doubt me? Have you ever thrown out some junk mail that came to your house but was addressed to someone else? That’s a violation of federal law punishable by up to 5 years in prison…

“One of the responses to the revelations about the mass spying on Americans by the NSA and other agencies is “I have nothing to hide. What me worry?” I tweeted in response “If you have nothing to hide, you live a boring life.” More fundamentally, the NSA spying machine has reduced the cost of evidence so that today our freedom–or our independence–is to a large extent at the discretion of those in control of the panopticon…”

All good points. Government surveillance now has the ability to find dirt on everyone. However, it is not necessarily surveillance that is the problem in this scenario. Rather, isn’t it bad laws that are at fault? If we are all by definition criminals, something is wrong with our legal structure. Surveillance just exposes what has always been a big problem. As we move into a world with less privacy, we are going to need fewer and more lenient laws, or else society will grind to a halt.

Imagine every person who used illegal drugs, broke a traffic rule, or violated copyright was immediately caught and punished. I’m guessing that in a matter of days at least half the American public would end up on the wrong side of the law. That’s because these laws are poorly designed. They always have been.
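
A rough probability sketch makes the “matter of days” claim plausible. Assume, purely for illustration, that each person independently commits some detectable infraction with 10% probability on any given day; under perfect enforcement, the share of the population caught after d days is 1 − 0.9^d:

```python
# Illustrative assumption: each person commits some detectable minor
# infraction with probability 0.1 on any given day. Under perfect
# enforcement, the share caught after d days is 1 - (1 - 0.1)**d.
for days in (1, 7, 14, 30):
    caught = 1 - 0.9 ** days
    print(f"after {days:2d} days: {caught:.0%} of the public caught")
# after  1 days: 10%; after  7 days: 52%; after 30 days: 96%
```

After a week, roughly half the public; after a month, nearly everyone.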

The same principle holds true when talking about cultural norms. If surveillance technologies are used to out a closeted homosexual against his will, then what is to blame? Is it the surveillance technologies? Or is it the screwed up culture that demonizes gays and forces them to hide in the first place?

I believe that more than anything else, a society with less privacy is going to have to become more relaxed. Most likely we’ll end up more tolerant of drug use, atypical sexual behavior, and minor rules infractions. And in many ways that might be a very good thing.

The One Surprising Thing That Really Makes Monsanto Evil

A lot of people seem to think the problem with Monsanto is the fact that it uses GMO technology to produce the seeds it sells. Some think this technology is in some way ‘bad’ or ‘not natural,’ while others think it is a life-saver. My opinion is that, like any technology, it will have good and bad possible implementations. But let’s assume for a second that there is nothing fundamentally wrong with GMO technology and talk about where Monsanto really gets its power. The real problem with GMOs isn’t the technology that created them; it’s the artificial-scarcity policy that prevents the market from correcting Monsanto’s mistakes. Monsanto is evil because it is a monopoly.

Let’s say Monsanto makes a GMO strain of wheat that is Roundup-resistant and hardy, but also contains a protein you don’t like. In a functioning market, another company would make a pesticide-resistant strain that maintained the desired protein ratio. Most of the legitimate complaints I’ve heard about GMOs are basically just engineering problems. More or less cost is involved, there are consequences for growers, storage, and so on, but all these issues can probably be worked out in a way that satisfies. The problem is that if any company, or even an individual doing cross-breeding at home (cross-breeding is a form of genetic modification humans have been using for ages), were to solve this problem, they would be open to a lawsuit. Why? Because Monsanto has patents on the genes in its GMOs. That’s the problem. Essentially, they cribbed notes off Mother Nature and got the Patent Office to write a note saying a plant’s genome is now theirs. This leads to absurd situations such as the one depicted in Food, Inc., where a farmer was sued for trying to save his own seeds.

Fundamentally, I’m optimistic about the potential of GMOs to drastically increase yields, the nutritional content of food, and profit for food producers over time. But only if competition and innovation are allowed.

“Now With Enhanced Privacy!”

In a previous article, I mentioned how privacy as a commodity will only increase in value. This is because in a surveillance-heavy future, privacy will become more scarce. Therefore, we can expect new products to arise and fulfill this market need. Increasingly, products will advertise their privacy-enhancing features (whether or not these features actually work). I see inklings of this trend already in mass-market products like Snapchat, which turns self-destructing data into a feature. Likewise, when Google+ first appeared on the scene, it attempted to distinguish itself from Facebook with its privacy-enhancing “circles.” And now that Facebook and Google appear to have been compromised by the NSA’s Prism program, the door is open for a new social network to step up and claim to better protect us from government eyes (again, whether or not it actually can). This principle applies offline as well. In the near future, we can expect bars and other businesses to institute “no-surveillance” policies as part of the way they attract clientele.
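
As a sketch of the kind of feature being sold, here is “self-destructing data” reduced to its core, a time-to-live check. This is the marketing promise in miniature, not Snapchat’s actual implementation, and as noted above the promise and the reality can differ:

```python
import time

# "Self-destructing data" reduced to a time-to-live check. A sketch of
# the marketing promise, not any real product's implementation.

class ExpiringMessage:
    def __init__(self, text, ttl_seconds):
        self.text = text
        self.expires_at = time.time() + ttl_seconds

    def read(self):
        if time.time() > self.expires_at:
            return None  # "destroyed" -- though copies may survive elsewhere
        return self.text

msg = ExpiringMessage("meet at 8", ttl_seconds=10)
print(msg.read())  # "meet at 8" inside the window, None after it expires
```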