Why Do You “Vaguely Remember” Things You Saw On Facebook?

I’ve had a social interaction repeat itself over the past few days that went like this:

When speaking with a friend, someone I know but hadn’t seen recently, I was asked to fill in details about something I had posted on Facebook. She said she’d “vaguely remembered” seeing something about the subject at hand there. The way the question was phrased stuck out to me, so before answering I gently asked a few questions to see how much she’d learned from the post. It became clear that she had what I’d consider a firm grasp of the content. Far from a vague remembrance, this was a clear view.

I forgot about this and then almost the same situation played out again with a different friend asking about a different particular. Then something really surprising happened: I caught myself doing it.

In a third conversation I asked a question about a trivial matter I’d seen alluded to on Facebook. In truth I remembered the minute details of this matter as it had been posted and subsequently discussed by our mutual friends. I had lurked in the thread with interest and could probably recreate the argument extemporaneously. But in face-to-face conversation I represented myself as having only glancingly noticed the headline. I had internalized a social rule: remembering everything you see online is not polite. Some thoughts on why this happens:

  • To avoid looking like a stalker.
  • To avoid ‘caring’ openly about social media, which is apparently still not fashionable.
  • To avoid looking tech savvy and therefore geeky.
  • To avoid ‘caring’ openly about your friends’ lives, which is also apparently not fashionable.

I don’t know why it is impolite to seem to have complete knowledge of social trivialities, but it occurred to me that, as information retrieval gets ever quicker and more embedded, it will be challenging to keep up this charade of ignorance when both participants know your in-ear AI is whispering facts even as you deny it.

Could Luxottica’s Eyewear Monopoly Threaten Google Glass?


Italian eyewear behemoth Luxottica is not afraid to throw its weight around. After Oakley took it to court over a patent dispute, the integrated manufacturer, retailer, and insurer bought its biggest rival, and now Oakley is just another Luxottica brand: the eyewear equivalent of Coke buying Pepsi.

That Google Glass is potentially a major disruptor in eyewear is self-evident. As has been widely reported, Luxottica has not been resting on its laurels: Oakley has been working on heads-up displays for years and already has a HUD product on the market, an expensive system for skiers. Like Google’s developer units now in circulation among a relatively select few, it’s not a mass-market product yet, but it’s out there. If Google needs Luxottica in order to get Glass manufactured, distributed, covered by insurance, or stocked in common stores like Sunglass Hut or LensCrafters, it is completely at Luxottica’s mercy. Google will need a strategy to bypass this hegemon in at least three of those four ways (the insurance issue seems minor in the case of Glass).

Google has reportedly been speaking to upstart Warby Parker about helping out with the design of Glass, which sounds like a good way to solve the design and manufacturing problem. But Warby Parker uses a send-the-frames-then-send-them-back strategy for fitting, which seems impractical with Glass, where the frames contain a lot of the value. That means Google will need a retail presence, perhaps through another partnership, where people can get their Glass fitted. Right now a prototype of that experience is already available to the developers using Glass.

Just as smartphones put onetime collaborators like Motorola and Apple into direct competition, wearable computing is going to pit new rivals against one another. In this case you have two massive near-monopolies facing off; Luxottica and Google both have pockets deep enough that I doubt either could simply buy the other.

Can Google bypass the incumbent? Can Oakley’s technology beat Google’s? Think Google might decide to license Glass software to Luxottica just to get it out there, as it did with Android? If so, would Luxottica go for that? What do you think?

“He’s Not on Mushrooms, He Just Has the Google Eyes”

You’re at a party, and you don’t know anyone, so you end up exploring the house. You come across an exotic-looking potted plant and wonder, “What is that?” Your Google glasses reveal that the plant in question is Etlingera elatior, also known as “torch ginger.” A link appears to a gardening website where you can order seeds. You dismiss the link, but in the process you are reminded of your mother, who likes to garden, and think, “Oh shit, when is Mother’s Day?” You look up the date and are relieved to discover that you still have plenty of time to get a gift, so you can relax. But idly you wonder: what is the origin of Mother’s Day anyway? Which leads you to the corresponding Wikipedia entry…

Flash forward and suddenly you’re that guy who’s been staring at a potted plant for fifteen minutes.

In the Future No One Will See the Same Thing

Augmented reality is coming, and I don’t think anyone can predict for sure what the cultural response will be. However, it’s definitely fun to think about the possibilities.

My usual thought experiment goes like this: I imagine a world where everyone is wearing special glasses or contacts, and these lenses automatically record everything everyone sees. I then mix in ubiquitous network access, location tracking, and face recognition, and I start to see a lot of evidence for what you might call “the end of privacy.”

In such a future, one might expect there to be much less confusion as to what happened at a given date and time. Fuzzy eyewitness accounts ought to become obsolete beside the relative certainty of digital recordings. As a culture we might find it a lot easier to agree on facts, as so much data will be available to support the “correct” story. We might start to develop a unified history.

But there’s another side to augmented reality that throws a big wrench in this vision. Digital recordings are extremely malleable. And when you are wearing augmented lenses all the time, “what you see” becomes just another software preference. You will ostensibly be able to tweak and filter your vision with the same ease that you might change your computer’s desktop wallpaper. If you want to make your world look like an old movie, you could potentially do that. If you want the sun to be shining all the time, you could potentially do that too. And if you want your husband to look like Brad Pitt, just check a box in the control panel, and it is done. Just know that your husband is probably doing the same thing to you.

I’m not saying that people won’t still choose to see the same things under a lot of circumstances. But the level of individual solipsism that such a technology enables might in some cases fracture the truth to an even higher degree than we’re already used to. And I haven’t even begun to delve into the possibilities of having your vision hacked without your knowing it…

Are Augmented Reality Glasses Really Coming THIS YEAR?

This is just a rumor right now, and it might not pan out, but if true it is a great example of how difficult it is to escape the linear-thinking trap. The smartphone has existed in some form since 2001, but it didn’t reach the price and quality that made it a worldwide market until 2007.

On this site we’ve spent a lot of time reading about and projecting technology trends. We try to avoid dismissing ideas just because they seem radical. But I’m finding it hard to accept that the smartphone, which barely existed ten years ago and has driven growth in world computing for only five years, is about to become obsolete.

Let’s say for the sake of argument that Google really introduces useful Android-powered smartglasses this year, and they really cost $250-600. Chances are these represent the 2001 Palm OS version of what augmented reality glasses will eventually be capable of. But should we expect these glasses to need even the six years it took smartphones to catch up? First, since this is running Android, it’s not a new software stack so much as a smaller hardware package. That makes it more like the shift from desktop to laptop computers; for the most part, the software is already there. Second, these shifts are trending shorter over time. It took nearly four decades to shift from mainframes to PCs, more than two decades to shift to laptops, and the mobile shift took six years. Perhaps we’ll see mature augmented reality glasses three years after introduction.
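For what it’s worth, here’s a rough back-of-the-envelope sketch of that extrapolation as a Python snippet (my own toy illustration, not anything from the original reporting): fit an exponential trend to the roughly 40-, 20-, and 6-year transition durations above and project the next one.

```python
import math

# Approximate transition durations cited above, in years:
# mainframes -> PCs, PCs -> laptops, laptops -> smartphones.
durations = [40, 20, 6]

# Fit a straight line to log(duration) vs. transition index,
# i.e. assume each shift takes a roughly constant fraction of the last one.
n = len(durations)
xs = list(range(n))
ys = [math.log(d) for d in durations]
x_mean = sum(xs) / n
y_mean = sum(ys) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / sum(
    (x - x_mean) ** 2 for x in xs
)
intercept = y_mean - slope * x_mean

# Project the duration of the next shift (smartphones -> smartglasses).
next_duration = math.exp(intercept + slope * n)
print(f"Projected next transition: roughly {next_duration:.1f} years")
```

With these numbers it projects roughly two and a half years, in the same neighborhood as the three-year guess; obviously this is a toy extrapolation from three data points, not a forecast.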

As hard as it is, I think we need to expect more changes like this, and we need to expect them to come even more rapidly. An 11-year window (let alone a 30-plus-year one) in which a particular type of computing product is the most advanced available may be something we never see again.