But I want more than a camera: or, how to think of ideas for Google Glass

Consider this a continuation, Part II, of the thoughts I previously posted about Google Glass. If we leave recording out of the picture, we're left with what is essentially a display device. So thinking about how Google Glass could become its own paradigm, and not just an extension of the mobile paradigm, centers on understanding how Google Glass displays information differently than the mobile phone (or desktop, TV, etc.).

The mobile phone, with its mobility, location awareness, and constant presence in our lives, is perfect for what I'd call “context awareness.” What this means is that when your friend happens to check into the same Foursquare venue as you did, your phone is great at buzzing and letting you know. It's also a great way to stay more generally aware: when people throughout the day begin tweeting about some event, your phone gives you an easy way to check in and catch up during the five minutes you spend waiting around. Google Glass is similarly equipped, especially because it relies on your phone for location awareness, and can do many of these things. But this alone doesn't make it attractive or groundbreaking as an information device.

Where Glass could really excel is at what I'd call “task-based information surfacing.” While similar to context awareness, Glass's presence on your face means that it sees what you see and can “interact” with that visual information in a way that's wholly different from your phone. There are two major differences between Glass and your phone:

  1. While your phone is great at tasks where the phone is the center of your focus (e.g., looking up nearby movie times), Glass will succeed where a task is being performed out in the physical, as opposed to the virtual, world.
  2. “Just-in-time” information surfacing is similar to “just-in-time” manufacturing: the information will likely be very minimal, and highly relevant to your current activity. Your phone's screen real estate and claim on your attention are leaps and bounds beyond the resources Glass will have access to (even in its most advanced versions, attention will be a huge limiting factor for Glass, but it's also what will make it unique).

So how do you think about applications for Google Glass? I think one mistake many are making is to think about the tasks people perform and the information they might want while doing them. But this is the mobile paradigm: these are essentially notifications. While there may be applications here, my guess is that many of them would be done just as well on the phone. This is the advice I would give people:

Think about what tasks people can't do because they don't have the information while they do them. This is the inverse of the approach above: you're not thinking about existing tasks, you're thinking about entirely new objectives and capabilities that users will have because they have access to new information. This is what is so thrilling about AR games or virtual surgery: neither of these activities is possible without the accompanying information. No information, no games, no surgery. This is the information Glass can deliver because it sees what you see, the kind of thing nothing else can do right now.

In chatting with friends (thanks Prasanna and Justin), I came up with six potential patterns of interaction for Google Glass:

  • Justin described the first category as “low-latency decision making.” Here, you're engaged in some high-bandwidth activity, and Glass surfaces information to you in an easy-to-parse manner (generally faster than reading), allowing you to make better decisions. The stereotypical example is Air Force pilots: they need to make split-second decisions based on information being relayed to them. Because we're skiers, Justin and I spoke about things like using color to indicate the water content of snow: a deeper-colored patch of snow would relay, in a visual, easy-to-parse manner, the fact that that patch over there is icy (see the sketch after this list). *
  • AR games are a pattern in and of themselves. I think they could fill a blog post of their own, so I'll leave them up to your imagination.
  • The next pattern has to do with mixing physical and virtual inputs: in particular, mixing haptic and visual feedback. While the haptic feedback might come from the physical world, Glass can provide different visual feedback to go with it. This is where the ideas around virtual surgery would fit in, but even more broadly, it could be huge for applications like 3D modeling and next-gen manufacturing processes. Over the past two decades we've seen more and more creation become digital; in the next decade, we're going to see digital creation that's meant for the physical world. Glass could be a huge resource for this next generation of tools.
  • I'm going to mention driving, because it is probably the most wide-scale, everyday use case for many of the above patterns. That said, it's also an enormously risky area for development: the UK is already banning the use of Google Glass behind the wheel, and it won't be the last government to do so. Driving is also not likely to be a major driver of Google Glass adoption. Like cameras, the best GPS is the one you have in your pocket. Lastly, my concern is that much of the data we'd like to see while driving just doesn't exist yet; the problem isn't that it merely lacks a way to be presented to users. Many of these use cases may simply be too similar to the mobile use case.
  • The last big use case is a bit tricky, because it blurs the line with traditional mobile use cases: the ability to integrate information with location. Coming back to the general principle of “just-in-time” information, this location-based information will be crucial to your current activity, or an actual focal point of it. There are, again, many military use cases here, but the big one I see on the civilian side is tourism. A mobile phone will remain great for presenting information and then letting you make a decision, and it will likely continue to win in situations where you know a decision has to be made (phones are likely to be better than Glass at this for a long time). Google Glass will be for when your information is constantly changing, or your decisions are much less deterministic: your phone will help you decide where to go to dinner tonight, but Glass will be an awesome tour guide (there's a sketch of this after the list, too).
  • Porn: if there is no porn use case, this is dead in the water. Seriously, this is one of those technologies that is likely to be turbo-charged by porny use cases.
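
To make the skiing example from the first pattern concrete, here's a minimal sketch of how a scalar sensor reading might become a color overlay. Everything in it is an assumption invented for illustration: the `water_content_to_tint` name, the 0-to-1 water-content scale, and the color endpoints are all hypothetical, not any real Glass API.

```python
# Hypothetical sketch: map a snow-water-content reading (0.0 = dry powder,
# 1.0 = saturated/icy) to an RGBA tint for a heads-up overlay.
# The scale, color endpoints, and alpha range are invented for illustration.

def water_content_to_tint(water_content: float) -> tuple:
    """Linearly interpolate from a faint white tint to a deep blue tint."""
    w = max(0.0, min(1.0, water_content))  # clamp the reading to [0, 1]
    dry = (255, 255, 255, 30)   # nearly invisible over dry powder
    wet = (20, 60, 200, 160)    # deep, obvious blue over wet or icy snow
    return tuple(round(d + (v - d) * w) for d, v in zip(dry, wet))

print(water_content_to_tint(0.1))  # faint tint: safe to ignore
print(water_content_to_tint(0.9))  # deep blue: that patch is icy
```

The point of the design is that parsing a color gradient is faster than reading a number, which is exactly what “faster than reading” demands in a low-latency scenario.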
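
Similarly, for the location-based “tour guide” pattern, here's a minimal sketch of the surfacing logic: stay silent by default, and offer a one-line caption only when the wearer wanders within a few dozen meters of a point of interest. The POI list, the 50-meter trigger radius, and the function names are all made up for this example.

```python
import math

# Hypothetical points of interest: (name, latitude, longitude, caption).
POIS = [
    ("Ferry Building", 37.7955, -122.3937, "1898 ferry terminal with a famous clock tower."),
    ("Coit Tower",     37.8024, -122.4058, "1933 tower; the murals inside are worth a look."),
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def caption_for(lat, lon, radius_m=50):
    """Return the nearest in-range caption, or None (silence is the default)."""
    nearby = []
    for name, plat, plon, cap in POIS:
        d = haversine_m(lat, lon, plat, plon)
        if d <= radius_m:
            nearby.append((d, name, cap))
    if not nearby:
        return None  # "just-in-time" means saying nothing most of the time
    _, name, cap = min(nearby)
    return f"{name}: {cap}"

print(caption_for(37.7956, -122.3938))  # right next to the Ferry Building
print(caption_for(37.7700, -122.4400))  # nowhere near a POI: None
```

Note that None is the common case here: that's the attention-scarcity constraint from point 2 above, expressed as code.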

* One of the big issues with low-latency decision-making scenarios, including driving, is that most of these use cases will be difficult, if not impossible, with the current form factor of Google Glass: as it's set up right now, Glass requires switching between extremes of focusing distance, which can actually be a pretty slow activity for the human eye. So we'll have to imagine a slightly more advanced version of Glass.