How Google Glass could change the future

Sergey Brin's live demonstration of Google Glass at the Google I/O Conference in June clearly showed that wearable, always-on, Web-connected computing technology is here now—and that it works. While such augmented-reality (AR) eyewear is in its early stages of development, future versions of the technology, whether Google's or some other company's, could lead to dramatic changes in the way we work, play, travel, and communicate.

The killer applications for wearable AR tech involve situations where users need their hands free or need to keep walking while using the app. The coolest apps also display data and images in a way that interacts with the real-world imagery users see. For example, a basic AR app might place information bubbles over real-world landmarks within the frame of the glasses (it might display the name of a movie theater along with the names of movies starting within the next hour).
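
To make the idea concrete, here is a minimal sketch, in Python, of the geometry such an overlay might use: given the wearer's position and compass heading, it decides where (if anywhere) a landmark's bubble belongs on the display. The field-of-view and screen-width values are illustrative assumptions, not Glass specifications.

```python
import math

FIELD_OF_VIEW_DEG = 30.0   # assumed horizontal field of view of the display
SCREEN_WIDTH_PX = 640      # assumed display width in pixels

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from the wearer (lat1, lon1) to a landmark."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    x = math.sin(dl) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return math.degrees(math.atan2(x, y)) % 360

def label_x_position(wearer_lat, wearer_lon, heading, lm_lat, lm_lon):
    """Pixel column for a landmark's bubble, or None if it's out of view."""
    offset = (bearing_deg(wearer_lat, wearer_lon, lm_lat, lm_lon) - heading + 180) % 360 - 180
    if abs(offset) > FIELD_OF_VIEW_DEG / 2:
        return None
    return int((offset / FIELD_OF_VIEW_DEG + 0.5) * SCREEN_WIDTH_PX)

# Wearer facing due north; a theater just north-northeast of them
# lands slightly right of the display's center.
print(label_x_position(37.7793, -122.4193, 0.0, 37.7810, -122.4191))  # ~433
```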

Here are some of the more exciting applications that may be coming.

Hands-free gaming

The coolest games for wearable AR will probably require glasses that cover the whole eye, or a contact-lens-style display that covers one or both eyes. But Google Glass might provide a view that augments a multiplayer “reality game” played on the street or in a forest.

For example, in a team-based game in which players operate at different locations within a given area, the glasses could provide a view from above, showing the exact locations of all team members. The lens could also display an I-see-what-you-see view, allowing one team member—the team leader, perhaps—to see through the eyes of another team member.

The wearable aspect of the glasses would be especially useful in a game where the players have to have their hands free to shoot or tag.

Seeing your friends

You go to an amusement park with a group of friends. People are everywhere. You and your friends decide to split up and go your separate ways, but you all want to meet up again later.

If your friends voluntarily share their locations, AR glasses could use assisted GPS, Wi-Fi, and cellular networks to track everyone's location and display each one in the glasses. Those locations could appear on a “view from above” map, or as silhouettes at ground level (if they are separated from the viewer by one or more walls). The glasses could also identify your friends in a crowd.
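
Under the hood, such a friend-finder mostly boils down to great-circle math over the voluntarily shared coordinates. A minimal Python sketch, with the 500-meter radius picked arbitrarily for illustration:

```python
import math

EARTH_RADIUS_M = 6_371_000

def distance_m(lat1, lon1, lat2, lon2):
    """Haversine (great-circle) distance between two points, in meters."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def nearby_friends(me, friends, radius_m=500):
    """friends maps name -> (lat, lon); returns (name, meters), nearest first."""
    hits = [(name, distance_m(*me, *pos)) for name, pos in friends.items()]
    return sorted([h for h in hits if h[1] <= radius_m], key=lambda h: h[1])

me = (37.8090, -122.4100)  # hypothetical spot in the park
friends = {"Ana": (37.8093, -122.4085), "Ben": (37.8150, -122.4200)}
print(nearby_friends(me, friends))  # Ana at roughly 136 m; Ben is out of range
```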

You might also wonder, as the afternoon progresses, whether your friends have found something more fun to do than what you're doing. In that case, you could look through the camera view of any of your friends' glasses. If you liked the look of what they were doing, you could head in that direction, guided by directions on a map displayed on the glasses.

Translation

Last year I visited Paris for the first time. My chief anxiety about going there was the language barrier. For me, part of the fun of visiting new places is getting a taste of what the people there are like, and not being able to speak French kept me from getting that. I had a translation app on my smartphone, but consulting my phone every few seconds during a live conversation was a nonstarter.

The glasses' microphone would pick up what is being said to you, and the software would translate it into English on the lens. It could even speak the English into the glasses' earpiece. Responding with the right words is a little trickier: you would speak your response into the microphone, let the servers render it in French on the display, and then read the French aloud.
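
Here's a rough Python sketch of that conversational round trip. The speech and translation helpers are hypothetical stand-ins, stubbed out so the flow is runnable; a real app would call cloud services for each step.

```python
def recognize_speech(audio, language):
    return audio  # stub: a real service would return transcribed text

def translate(text, source, target):
    return f"[{source}->{target}] {text}"  # stub: a real service would translate

def show_on_lens(text):
    print("LENS:", text)

def speak_in_earpiece(text):
    print("EARPIECE:", text)

def conversation_turn(heard_audio, reply_audio):
    # Incoming: transcribe the French, translate it, then display and speak it.
    french = recognize_speech(heard_audio, language="fr")
    english = translate(french, source="fr", target="en")
    show_on_lens(english)
    speak_in_earpiece(english)

    # Outgoing: transcribe the wearer's English reply, then put the French
    # rendering on the lens for the wearer to read aloud.
    reply = recognize_speech(reply_audio, language="en")
    show_on_lens(translate(reply, source="en", target="fr"))

conversation_turn("Bonjour, ça va ?", "Fine, thanks. And you?")
```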

This round trip would introduce some gaps in the conversation, but it wouldn't necessarily tank the exchange altogether. Of course, the person you were talking to might be fully aware of the role the glasses were playing in the conversation. Future translation apps might even recognize the content of the other person's speech and suggest some common responses.

Medical emergencies

Most of us have taken CPR training at some point, but how many of us could actually perform it if someone collapsed right next to us? Chances are we would be terrified and would have a hard time remembering our training. But the situation might be different if we could call up a program to walk us through the process of saving the distressed person's life.

Such an app would display simple instructions on the screen and voice them through the glasses' earpiece. The program might use the camera to help the wearer zero in on the right place to position the heel of the hand on the victim's chest before starting compressions. The app could then watch the wearer's actions and advise when to switch from giving rescue breaths to doing chest compressions.
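
The pacing logic behind such a coach could be quite simple. The sketch below uses the standard 30:2 compression-to-breath ratio and a rate within the commonly recommended 100 to 120 compressions per minute; the console prints stand in for the app's voice prompts.

```python
import time

COMPRESSION_RATE_PER_MIN = 110  # within the recommended 100-120 range
COMPRESSIONS_PER_CYCLE = 30     # standard 30:2 compression-to-breath ratio
BREATHS_PER_CYCLE = 2

def cpr_coach(cycles=5):
    """Console stand-in for the voice prompts the glasses would deliver."""
    interval = 60.0 / COMPRESSION_RATE_PER_MIN
    for cycle in range(1, cycles + 1):
        for n in range(1, COMPRESSIONS_PER_CYCLE + 1):
            print(f"Compression {n}")  # the app would voice each beat
            time.sleep(interval)
        for n in range(1, BREATHS_PER_CYCLE + 1):
            print(f"Rescue breath {n}")
            time.sleep(1.0)            # assumed pause per breath
        print(f"Cycle {cycle} complete; reassess the victim")
```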

Very advanced apps might even be able to watch the victim's response and advise the rescuer when to adjust the next steps or when to stop performing CPR. If connected to the 911 emergency communications system, the application could perhaps open a teleconference line to an ER doctor, who could see through your glasses and walk you through the steps needed to keep the patient alive until help arrived.

Just knowing that such powerful information was within reach might give us the confidence to perform CPR more effectively. The information displayed in the glasses, delivered in real time and adapted to the patient's minute-by-minute needs, could very well save lives.

Travel information

Travel is data-intensive. You're constantly pulling information from print or electronic sources to get to the right place at the right time to catch the vehicle that will take you to the next stage of the trip. And you're usually trying to access that data in the midst of your travels, while you're waiting for a taxi or walking through the airport concourse toward your gate. Your hands are busy with luggage and other things, so using wearable technology to access your travel data seems like just the ticket.

With Google Glass you could use voice commands to pull up travel information on the lens. The content could be anything from ground transportation data to flight numbers to rooms available in hotels at your destination.

During travel, you may be moving through places that are completely foreign to you. Navigating through large airports or subway systems (try Tokyo's!) can be daunting, especially when the signage is in a foreign language. The glasses could translate all the signs and give you step-by-step directions, with routes and landmarks overlaid on your field of vision. With everything decoded, the unfamiliar environment would seem more welcoming, and your stress level would go down.

Wi-Fi detection and measurement

A Wi-Fi signal locator and speed measurement app would place a Wi-Fi icon next to the base stations within the wearer's field of vision, and the icon would grow larger or smaller depending on the strength of the signal being transmitted.

In another mode, the glasses might cast a green hue over areas where one or more Wi-Fi signals are strong, and cast a red hue over weak Wi-Fi environments. They could provide similar graphic representations of cellular signal strength.
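
Both modes reduce to mapping received signal strength onto a visual scale. A minimal Python sketch, with the dBm bounds, icon sizes, and colors chosen purely for illustration:

```python
WEAK_DBM, STRONG_DBM = -90, -30  # assumed bounds for a usable Wi-Fi signal

def signal_fraction(rssi_dbm):
    """Clamp a received signal strength (dBm) onto a 0.0-1.0 scale."""
    frac = (rssi_dbm - WEAK_DBM) / (STRONG_DBM - WEAK_DBM)
    return max(0.0, min(1.0, frac))

def icon_size_px(rssi_dbm, min_px=16, max_px=64):
    """Stronger signal yields a larger Wi-Fi icon next to the base station."""
    return int(min_px + signal_fraction(rssi_dbm) * (max_px - min_px))

def overlay_hue(rssi_dbm):
    """Blend from red (weak) to green (strong) for the area tint."""
    f = signal_fraction(rssi_dbm)
    return (int(255 * (1 - f)), int(255 * f), 0)  # (R, G, B)

print(icon_size_px(-45), overlay_hue(-45))  # strong signal: 52-px icon, mostly green
```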

Another app might poll crowdsourced databases to find out where free Wi-Fi is available, and might even provide quick directions on how to get there.

Speeches and presentations

Using the lens of the glasses as a miniature teleprompter over your eye could change speeches and presentations forever. The lines of your speech could scroll down the screen at a pace you chose (or a pace that voice recognition software matched to your actual delivery), and only you could see them. To the audience, you would appear to be looking straight out, even while reading from the display. Of course, the glasses themselves would give you away, but someday those glasses might become a contact lens.
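
The pace-matching piece is simple arithmetic: convert the wearer's detected speaking rate into a scroll rate. A tiny sketch, with the words-per-line and line-height figures assumed:

```python
def scroll_rate_px_per_sec(words_per_minute, words_per_line=8, line_height_px=24):
    """Turn a detected speaking pace into a prompter scroll rate, so the
    text keeps up with the speaker rather than with the clock."""
    lines_per_second = words_per_minute / 60.0 / words_per_line
    return lines_per_second * line_height_px

# At a typical 130-wpm delivery, the text would crawl at about 6.5 px/s.
print(scroll_rate_px_per_sec(130))
```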

Another problem with presentations is that you can't see your own slides, which are usually projected behind you on the wall while you're speaking. Sure, you could stay glued to the podium, where you could see the slides on your laptop, but most presenters want to move around freely when addressing an audience. The glasses could overlay a transparent view of your slides, along with any presentation notes you might need.

The glasses could be extremely useful, albeit in a different way, during the Q&A session after your presentation. If a tough question arises in connection with a past event or statistic, say, your support team could send you relevant information via the glasses' lens. Knowing the right thing to say is valuable at any time, but in front of an audience it can be priceless.

History of places

Content can become a hundred times more meaningful when presented over the real-world thing it relates to. One of my favorite sites, oldsf.org, maintains a collection of old photographs of San Francisco, pinned to a Google map according to the part of the city visible in each shot. You can also move a slider along a timeline to see photos from specific periods.

If such content were retrofitted to display on the glasses, it might truly come alive. As you walked around the city wearing the glasses, historical pictures of specific places you were seeing through the lens could appear. You could fix your eye on the steps of City Hall, and then use the slider to go back in time to superimpose older and older pictures of the scene, watching the ghosts of the past come into view and depart again.

Social mapping and navigation

Wearable AR might make services like Google Street View go social. Today these services must field fleets of camera cars to drive around and snap millions of still photos of streets and their environs, and the fleets must return periodically to keep the street views up to date and accurate.

Now, imagine that Street View used imagery (even video) captured through the lenses of thousands of Google Glass users and uploaded to the company's servers. Street views would be more accurate because many more views would have contributed to them, and user images would come from places (such as inside malls, on beaches, or in forests) where the Google camera cars can't go.

The map makers would have to develop the technology for piecing together the best of the millions of user views of a particular place. The process would involve continually replacing older or inferior images with newer or better ones.
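
One way to picture that replacement logic is a scoring function that blends image quality with freshness and keeps the top-scoring view of each map tile. The weights and half-life below are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class Capture:
    image_id: str
    quality: float   # 0-1 sharpness/exposure score from an image-quality pass
    age_days: float  # how long ago the frame was captured

def score(c, quality_weight=0.6, freshness_weight=0.4, half_life_days=180):
    """Blend quality with freshness; newer and sharper views win."""
    freshness = 0.5 ** (c.age_days / half_life_days)  # halves every ~6 months
    return quality_weight * c.quality + freshness_weight * freshness

def best_capture_for_tile(candidates):
    """Among all user-contributed views of one tile, keep the top scorer."""
    return max(candidates, key=score)

views = [Capture("a", quality=0.9, age_days=400), Capture("b", quality=0.7, age_days=10)]
print(best_capture_for_tile(views).image_id)  # "b": fresher view beats the older, sharper one
```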

Nutrition and shopping

When you were at the grocery store, you'd be able to fix a certain product in your lens and then see an overlay of all the nutrition information about that product. Food companies would begin touting their ability to make this nutrition information easily accessible via the augmented-reality app.

You could port the nutrition information to a dieting site (like Weight Watchers) or an exercise site. You'd then immediately know how much of the food you could eat while sticking to your diet, and how much exercise you'd have to do to work off one serving. Knowing the nutrition, dieting, and fitness aspects of the food you were looking at would help you make better-informed decisions about what and how much to buy.
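
The exercise arithmetic itself would be trivial for the app. A sketch, assuming a rough burn rate of about 4 kcal per minute of brisk walking (the real figure varies with body weight and pace):

```python
WALKING_KCAL_PER_MIN = 4.0  # rough brisk-walking burn rate; varies by person

def minutes_to_walk_off(kcal_per_serving, servings=1):
    """Minutes of brisk walking needed to offset what the wearer is eyeing."""
    return kcal_per_serving * servings / WALKING_KCAL_PER_MIN

# A 250-kcal snack bar would cost a little over an hour of brisk walking.
print(minutes_to_walk_off(250))  # -> 62.5
```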
