Google Glass: the Feminine Fashion Concern

If you’ve seen the photo shoots that’ve come out thus far for Google’s Project Glass, you know that the device has been shot on the heads of women just as often as men. Still, the question of whether Glass will appeal to women goes beyond whether they’ll want to wear the first wave of hardware as it hits the market: according to a couple of sources we’ve had a peek at this week, there does seem to be real concern that only the distinctly male amongst us will want to go “wearable” with Google in 2013.

amanda_shades_1

First you’ll want to see what TechCrunch has whipped up using the results of the recent Google Glass “#ifihadglass” contest on Twitter and Google+. They found that respondents to the contest were either overwhelmingly male or too ambiguously named to tell. Women did appear in the findings, but only as a small fraction of entrants – unless, of course, some called themselves men on the internet or simply avoided a distinctly female name (at least according to that site’s name/gender algorithm).

Then you’ll be interested to know that Google appears to be reaching out to women with a set of new photo shoots featuring female Googlers. While these shoots are limited, this isn’t the first time Google has reached out to a female-dominated audience to show Glass resting on the faces of ladies. Back on September 10th, 2012, a Glass-toting DVF fashion show headed down the runway during Fashion Week.

dvf_google_glass_fashion_show_0

The following photo set comes from Google employees Isabelle Olsson and Amanda Rosenberg, both of whom worked on the DVF show last year. They’re both working on Project Glass and we’re expecting that this isn’t the last we’ve seen of either of them pushing for a continually fashion-forward appeal in the hardware – and how it appears in its final form.

isabelle_olsson
amanda_shades_2
Amanda_rosenberg

So given that tiny cross-section of instances in which Google specifically addressed how Glass will look on your face, do you feel that your gender will play a role in how Google markets the product in the future? How about those of you who consider yourselves more feminine than masculine: does Glass appeal to you? Does that appeal have anything to do with fashion, or is it purely about how you will or won’t be interacting with the technology in the near future?


Google Glass: the Feminine Fashion Concern is written by Chris Burns & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.

Google Glass competition ramps up: Vuzix M100 developer units shipping

It’s time for the smart glasses wave to blast forth, with today’s big entry being none other than the Vuzix M100. We had our hands and eyes on an early edition of this Google Glass competitor back at CES 2013, and today’s announcement is that the developer edition will be shipped out to “Gold Developers” within the next 30 days. With this little beast heading to developers on the back of a newly invigorated Vuzix M100 Developer Program, we can expect the final consumer model sooner rather than later!

vuzix

With the Vuzix M100 you’re getting a miniature computer that sits on the side of your head, with a display viewable through an eyepiece worn on either the right or the left. Oddly enough, every demonstration unit we’ve seen thus far sits on the right side of the head – similar to the most common Google Project Glass units in the demonstration materials made public so far. Beyond that, and the fact that the Vuzix M100 also runs Android, this unit and Google’s couldn’t be more dissimilar.

With the Vuzix M100 Developer Program moving into its second phase and developer units shipping over the next month, the wearable craze rolls on. It’s not just Google and near-veterans like Vuzix attacking this upcoming market; there’s a possible entry from Apple as well – though with Apple’s approach we won’t be seeing glasses, but rather a wearable, watch-sized machine.

Have a peek at our hands-on with the Vuzix M100, and check the timeline below for additional adventures we’ve had with Vuzix wearable machines. The company has been in this universe for several years now – it’s high time we had something as sleek as the M100 to see for ourselves!


Google Glass competition ramps up: Vuzix M100 developer units shipping is written by Chris Burns & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.

GlassUp AR glasses hands-on: Google Glass gets competition

Gagging for Glass but can’t afford Google’s $1,500 Explorer Edition? GlassUp thinks it may have the answer, a wearable display that looks almost like a regular set of glasses, and harnesses the power of your existing smartphone to flash real-time information into your eyeline. On show in prototype form at CeBIT, and set to ship later in the year, GlassUp takes a more humble approach to wearables than Google does with Glass, making its headset a companion display rather than a standalone computer.

glassup_hands-on_5

Whereas Glass has a full Android-powered computer integrated into the headset, GlassUp is merely a wireless display, using Bluetooth to link to your Android, iOS, or – eventually – Windows Phone handset. That keeps power consumption down; a standby time of around 150hrs is promised for the first-gen model, or a full day of periodic use such as, say, when emails or Tweets come in. An updated model will use Bluetooth 4.0, making it more power-efficient.

glassup_hands-on_6

What differentiates GlassUp is the display technology itself. Whereas Google has opted for a transparent prism-based display embedded in a glass block positioned at the corner of your eye, GlassUp’s patented system uses a micro-projector fixed on the inside of the glasses’ arm. That focuses a yellow monochrome image onto the inner surface of the right lens, at 320 x 240 resolution. Not enough to replace your phone or tablet for multimedia duties, true, but certainly sufficient for text updates and basic graphics.
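
As a rough illustration of what that kind of companion-display pipeline could involve, here’s a minimal Python sketch that rasterizes a text notification into a 320 x 240 single-color frame – the sort of bitmap a phone app might conceivably push to the glasses over Bluetooth. The Pillow calls are real, but the payload format and function names are hypothetical; GlassUp hasn’t published its actual protocol.

```python
# Hypothetical sketch: rasterize a notification into a 320x240, 1-bit frame
# of the kind a companion phone app might send to a wearable display.
# Pillow does the drawing; the "protocol" here is invented for illustration.
from PIL import Image, ImageDraw

WIDTH, HEIGHT = 320, 240  # resolution quoted for GlassUp's micro-projector

def render_notification(title: str, body: str) -> bytes:
    """Draw a simple text card and return it as packed 1-bit pixel data."""
    frame = Image.new("1", (WIDTH, HEIGHT), color=0)   # 0 = unlit pixel
    draw = ImageDraw.Draw(frame)
    draw.text((8, 8), title, fill=1)                   # default bitmap font
    draw.line((8, 24, WIDTH - 8, 24), fill=1)          # divider under the title
    draw.text((8, 32), body, fill=1)
    return frame.tobytes()                             # 320 * 240 / 8 = 9600 bytes

if __name__ == "__main__":
    payload = render_notification("New email", "Lunch moved to 1pm")
    print(f"{len(payload)} bytes ready to send over Bluetooth")
```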

As with Glass, there are a fair few sensors and controls integrated into the arm of the glasses: GlassUp has a touch-surface which recognizes tap and double-tap, long-press, and swipe, in addition to a power/control button. There’s also an accelerometer, digital compass, ambient light sensor, and altimeter.

glassup_hands-on_0

Unfortunately, the prototype GlassUp brought along to CeBIT wasn’t market-ready. More striking in its design than the concept – which manages to look reasonably discreet, in a chunky retro way – the silver headset required a USB link to a computer for its display signal and power, and the projection itself landed on a noticeably orange-tinted pane in the right lens. Meanwhile, even when the battery-powered version is ready, if you want the display active all the time – when navigating, for instance – the runtime will be “a few hours” rather than the all-day longevity promised with more sporadic use.

glassup_hands-on_3

GlassUp argues that, whereas Google’s wearable requires users to glance up and to the side to see the display, its system is far more discreet: the information floats directly in your eyeline. Another advantage is availability and price, though neither Glass nor GlassUp is quite ready for the mass market. GlassUp is accepting preorders for the headset, at €299/$399, with deliveries of the first units expected in September 2013.



GlassUp AR glasses hands-on: Google Glass gets competition is written by Chris Davies & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.

JetBlue dreams of an airport with Google Glass, forgets to include lost luggage


Google has been asking prospective Glass owners how they would use the eyewear if they had the chance. The team at JetBlue did more than write a hashtagged post and call it a day: the airline posted mockups of its vision for how Google Glass would work at the airport. Its concept would mostly save passengers from the labyrinthine mess they know today by popping up useful alerts and directions in the right locations, such as flight times at the gate or (our favorite) the locations of those seemingly invisible power outlets. Of course, JetBlue’s images don’t necessarily reflect the final product, if there even is one. It’s not the likely gap between theory and practice that we’re worried about, mind you — we just have trouble believing in an airport where our flights are on time.


Via: Skift

Source: JetBlue (Google+)

Nokia “Head Up”: How Lumia’s future is sharper than Glass

Are wearables like Google Glass the inevitable future for smartphones? Not if you ask Nokia, where simply floating a display in your line of sight doesn’t quite satisfy the self-imposed “head up” challenge its designers and engineers are facing. The evolution of Lumia isn’t just bigger displays or faster chips, it’s a new way of interacting with the digital world. SlashGear sat down with Jo Harlow, EVP of Smart Devices, Marco Ahtisaari, EVP of Design, and Stefan Pannenbecker, VP of Industrial Design at Mobile World Congress this week to talk “people versus robots”, rolling back the clock on convergence, and how the Finns want to pry our eyes away from smartphone screens, even if we’re looking at a Lumia.

nokia_lumia_glass

Spend any time talking future tech to Nokia’s executives, and you realize there are two themes running through their predictions. First, and perhaps most familiar to industry watchers, there’s the relentless advance of sensors and device complexity, with capabilities always evolving. Where Nokia differs somewhat is in how its management sees the form-factor of those devices: rather than a single, increasingly powerful phone in your pocket, all three executives talked about a resurgence in dedicated devices; products that, as Marco Ahtisaari described it, “do a few things really well.”

Second, and arguably a more contrarian stance than you’ll find elsewhere in the segment, there’s a desire to actually reduce the attention paid to smartphones and mobile devices. Ahtisaari coined the phrase “heads up” internally to describe it, though it has become a long-term ethos shared by others on the design team, like Stefan Pannenbecker.

“How can we get the ‘heads up’?”

“We see sometimes couples, out in a restaurant, romantically texting each other, or broadcasting… so that type of phenomena is interesting, and in a way bugs us a little bit, because the question is how can we get the “heads up”?” the Industrial Design chief explained to us. “So we do a lot of work on all kinds of levels in order to think that scenario through: what does that mean? So we’re interested in that type of topic, how do we get people’s heads up again.”

Nokia isn’t expecting to address that question in the next few months, or even the next couple of years. As Marco Ahtisaari told us, it’s an example of the company’s longer-term planning, though as an internal culture of design it has an impact on the Lumia devices we’ll see over the coming years. “The one thing I would say is that I talk about the “heads-up” principle in the studio, it’s like a 20-year principle. Creating computing technology that’s with us that doesn’t require more attention” he said.

nokia_mixed_reality_glasses-580x316

“And part of this pinning-to-Start [in the Windows Phone homescreen] is one example of that; things we’ve done with the glanceable, low-power mode on our devices in the past is an example of that; the NFC work we’re doing is an example of that,” Ahtisaari counted off. “You just touch the environment: the world becomes your interface, rather than having to go through twelve swipe-swipe-swipe. So that’s another component of that future, I think, and very important as we go to more distributed objects that do only a few things.”

Now that a person’s smartphone is often also their camera, their music player, their fitness tracker, and more, it might seem counter-intuitive to consider breaking those components apart and turning again to individual gadgets. However, there’s a strong feeling within Nokia that specificity has its own advantages.

“There’s room again for devices that do a few things really well”

“I think there’ll be room for more and more dedicated devices that do a few things really well again” Ahtisaari predicts. “And that is slightly a contrarian view, but I think what we’ll see is increasing complexity and ability… you can either shortcut through the environment, but this means also space for dedicated devices that do a few things really well. Yes, a phone, but other functionalities too.”

Right now, all three executives are coy on what, exactly, Nokia’s portfolio of answers to these questions might look like. However, they’re more vocal on what they probably won’t be, and the approach seems less “in your face” than Glass, and more cautious than the “confident” search and prediction of Google Now.

nokia_frame_concept

“I’m not going to speculate [about Glass] because time will tell with regards what is the right execution with regards to this idea of “heads-up”, so I think we’ve a lot of work to do, frankly, so I’m not going to speculate about that” Pannenbecker said. “But I think, as I said, this is for me an area that we want to engage in, I mean, this topic of heads-up not this particular solution for example. As I said, there’s a whole bandwidth of opportunities, and I think we as a company need to look very deeply into these opportunities, and then commit.”

For Harlow, the question is one of need – or, more accurately, the balance between pure geek appeal, of the sort Google Glass perhaps embodies, and relevance to mass-market consumers. “I think that it’s just as true in any of these new areas that you have to solve the fundamental consumer problems, and you can’t… you innovate for the sake of innovation” the smartphones boss argued. “Usually there’s a small number of people who find them really cool, and the vast majority don’t see a reason why. That the use case is so on-the-point that they don’t see it.”

In fact, there’s a sense among all three that the Glass strategy – that is, taking what components might usually be associated with a smartphone, and making them something you can wear – is too easy a way out. Yes, there are battery challenges, and persistent wireless demands, and the need to craft an interface and interaction paradigm that suits a more hands-off usage style, but a wearable computer doesn’t necessarily address a real user need, nor does it go far enough in liberating users from the tyranny of persistent, connected distraction.

“Either they solve latent needs, or unknown problems”

“I think that’s why you see fitness all over the place, because clearly if people stick with it then it can help solve a problem” Harlow explains, “but that’s where I think the energy will really come from, either that they solve latent needs that consumers can’t necessarily articulate, or solve unknown problems that they have and that sensors would solve.”

While the most attention has been paid to Nokia’s evolving Windows Phone handset range, the company has also been working on matching accessories, pushing ideas like wireless charging and NFC pairing. That focus on a well-designed, integrated ecosystem looks likely to spawn a family of shared technologies, each delivering its own component part of the overall usability.

nokia_morph_concept

“That’s something which we’re working on, and I’m not in a position… I will not talk about specific solutions to that, but absolutely that is a challenge for us” Pannenbecker agreed. “For us as designers. Because ultimately again it comes to better problems. This is more what we think a smartphone is supposed to be [holds up phone], but I think obviously there’s other ways of doing that.”

Nokia hasn’t been afraid of riffing on those possibilities in the past with concept designs, however. Its 2009 “Mixed Reality” headset predated Google Glass, and was envisaged with its own suite of accessories and sensors: a motion-tracking wristband for navigating a wearable display, for instance, along with wireless audio. Meanwhile, the idea of paring back information in a more context-driven way has also been explored, such as the Nokia-prompted “Frame” concept device that rethought the smartphone into a window that blurred the physical and digital worlds. Arguably it’s an idea that has expressed itself in Nokia City Lens, the augmented reality app now publicly available for Windows Phone.

Just as Google Now relies on its context engine, so has Nokia Research been pushing its own predictive technologies to better focus the user-experience. We mentioned the 2009 “Linked Internet UI Concept” from Nokia Research to Marco Ahtisaari, a project which learned from social networking attention and prioritized updates and geo-location of those people it calculated the user was most interested in, and asked him where the company’s roadmap was on integrating such ideas into its software.

“Partly that’s a question of focus” he said, pointing out that Nokia needed first of all to prove itself with a successfully selling Lumia range of phones. “Like I said, the most important thing we can do now is show momentum. These are things we definitely work on.”

However, he also argued that there is a danger in making mobile devices too intelligent – or portraying them as having intelligence – because you run the risk of leaving the user feeling at odds with their device, not enabled by it. “If this makes sense there’s robots and people. People versus robots” Ahtisaari said, somewhat cryptically. “We’re on the side of people, in general. What I mean by that is certain personalization you can do, goes a long way. And the other example, if you took that, would be ‘hello, we just reconfigured your phone, it’s got all the people here, and we set it up for you’.”

“We’ve got the auto-magic today, it’s just making it not feel creepy”

In fact, Nokia could already integrate that sort of contextual technology into its phones today; the reservation is one of how the mainstream user – not the Glass aficionado – might react to that. “We’ve all of that auto-magic today, it’s just doing it in a way that doesn’t feel creepy, or has violated what you do” he argued. “It’s striking that balance. But definitely, the two things you’ve mentioned – contextually and prediction – are important.”

nokia_lumia

It’s early days for Nokia to look too far beyond smartphones; the Lumia line-up has only just reached five Windows Phone 8 handsets, the platform itself still holds an extreme minority share, and there’s no sign of a tablet on the horizon, at least not publicly. Nonetheless, it seems we can expect something other than a set of Windows Phone goggles.


Though the strategies may be very different, there’s one thing Nokia and Google do agree on: the name of the game is elevating users from the voracious attention-soak of the touchscreen, not finding more ways of putting it in front of them. “If they require as much attention as a smartphone, then no more human contact” Ahtisaari concluded. “That’s the perspective we have, we’re still in the people-connecting business.”


Nokia “Head Up”: How Lumia’s future is sharper than Glass is written by Chris Davies & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.

“Emasculating” phones and plenty of rubbing: Brin amps Glass as #ifihadglass ends

You can’t accuse Google’s Sergey Brin of not doing his level best to promote Glass, with the co-founder’s comments that current smartphones are “emasculating” us in our inter-personal relationships coming as the #ifihadglass promo for the second round of units closes. “Is this the way you’re meant to interact with other people?” Brin asked rhetorically at the TED conference this week, describing the current smartphone paradigm as one long bout of touchscreen rubbing, and revealing a vested interest in promoting wearables since he himself is a phone addict.

google_glass_sergey_brin

“The cell phone is a nervous habit — If I smoked, I’d probably smoke instead” Brin explained. “But I whip this out and look as if I have something important to do. It really opened my eyes to how much of my life I spent secluding myself away in email.”

In contrast, Glass epitomizes the new approach to search that Google previewed with Google Now; as Android designer Matias Duarte described it to us at Mobile World Congress this week, a more “confident” engine that replaces page after page of “maybes” with a few, more focused suggestions as to what it thinks users are looking for. Glass comes to that in part because of its form-factor – unlike the expansive display of a smartphone, there’s only a small window onto the digital world – but also because of how it will be used, with the wearer regularly dipping into their online life and needing immediacy in its responses.

That immediacy and purpose will, Brin hopes, cut down on the current addiction of staring at a phone display. “It’s kind of emasculating” he argued. “Is this what you’re meant to do with your body?”

The first batch of Glass Explorer Edition units, put up for sale at Google I/O 2012 midway through last year, is yet to ship, and developers who preordered the $1,500 headsets are yet to even see their credit cards charged. Meanwhile, the #ifihadglass promotion came to an end at midnight last night, with thousands of entries – many of which are more tongue-in-cheek than serious about potential Glass applications – on Twitter and on Google+.

The next stage is a panel of independent judges who will sift through the entries and pick out the 8,000 they believe best epitomize the potential of the wearable. Those who made the suggestions will be invited to stump up $1,500 for an early unit (though they’ll have to wait until after the first batch of I/O orders ship, Google has said), with commercial availability – at a lower price – tipped before the end of the year.


“Emasculating” phones and plenty of rubbing: Brin amps Glass as #ifihadglass ends is written by Chris Davies & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.

Google Glass in focus: UI, Apps & More

You’ve seen the Glass concept videos, you’ve read the breathless hands-on reports, but how exactly is Google’s augmented reality system going to work? The search giant’s Google X Lab team has been coy on specifics so far, with little in the way of technical insight as to the systems responsible for keeping the headset running. Thanks to a source close to the Glass project, though, we’re excited to give you some insight into what magic actually happens inside that wearable eyepiece, what that UI looks like, and how the innovative functionality will work, both locally and in the cloud.

google_glass_ui_leak_hero

Google knows smartphones – this is familiar territory for the Android team – and so, unsurprisingly, Glass builds on top of that technology. Inside the colorful casing there’s Android 4.0 running on what’s believed to be a dual-core OMAP processor. This isn’t quite a smartphone – there’s WiFi and Bluetooth, along with GPS, but no cellular radio – but the familiar sensors are present, including a gyroscope and an accelerometer to keep track of which way the wearer is facing and the angle of their head.

glass eye-tracking

The eyepiece itself runs at 640 x 360 resolution and, when Glass is positioned on your face properly, floats discreetly just above your line of vision; on the inner edge of the L-shaped housing there’s an infrared eye-tracking camera, while a bone conduction speaker sits further back along the arm. Glass is designed to get online either with its own WiFi connection, or by using Bluetooth to tether to your smartphone. Given that, it’s pretty much platform agnostic as to whatever device is used to get online: it doesn’t matter if you have a Galaxy S III in your pocket, or an iPhone, or a BlackBerry Z10, as long as it can be used as a modem.

Where Glass departs significantly from the typical Android phone is in how applications and services run. In fact, right now no third-party applications run on Glass itself: the actual local software footprint is minimal. Instead, Glass is fully dependent on access to the cloud and the Mirror API the Glass team discussed briefly back in January.

google_glass_ui_leak_commands

In a sense, Glass has most in common with Google Now. Like that service on Android phones, Glass can pull in content from all manner of places, formatted into individual cards. Content from third-party developers will be small chunks of HTML, for instance, with Google’s servers supporting the various services that Glass users can take advantage of.

“Glass has most in common with Google Now”

When you activate Glass – by tilting your head up to trigger the (customizable) motion sensor, or tapping the side, and then saying “OK, Glass” – you see the first of those cards, with the current time front and center. Navigation from that point on is either by swiping a finger across the touchpad on the outer surface of the headset or by issuing spoken commands, such as “Google …”, “take a picture”, “get directions to…”, or “hang out with…”. A regular swipe moves left or right through the UI, whereas a more determined movement “flings” you through several items at a time, like whizzing a mouse’s scroll wheel. Tap to select is supported, and a downward swipe moves back up through the menu tree and, eventually, turns the screen off altogether. A two-finger swipe quickly switches between services.
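
To make that interaction model a little more concrete, here’s a purely illustrative Python toy that models the card deck and the gestures described above: a regular swipe steps one card, a fling jumps several, a tap selects, and a downward swipe backs out and eventually switches the screen off. It isn’t Glass code – the class name, the fling step size, and the simplified back-out behavior are all assumptions.

```python
# Toy model of the Glass card navigation described above -- illustrative only.
class CardDeck:
    FLING_STEP = 5  # assumption: how many cards a "fling" skips

    def __init__(self, cards):
        self.cards = cards
        self.index = 0
        self.screen_on = True

    def swipe(self, direction):          # regular swipe: one card left or right
        step = 1 if direction == "right" else -1
        self.index = max(0, min(len(self.cards) - 1, self.index + step))

    def fling(self, direction):          # a more determined swipe jumps several cards
        step = self.FLING_STEP if direction == "right" else -self.FLING_STEP
        self.index = max(0, min(len(self.cards) - 1, self.index + step))

    def tap(self):                       # tap-to-select the current card
        return self.cards[self.index]

    def swipe_down(self):                # back up the menu tree, then screen off
        if self.index > 0:
            self.index = 0               # simplification: jump straight back to the home card
        else:
            self.screen_on = False

deck = CardDeck(["clock", "weather", "settings", "search", "directions"])
deck.swipe("right")
print(deck.tap())        # -> "weather"
deck.swipe_down()
deck.swipe_down()
print(deck.screen_on)    # -> False
```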

google_glass_ui_leak_web_search

Some of the cards refer to local services or hardware, and a dog-ear folded corner indicates there are sub-cards you can navigate through. The most obvious use of this is in the Settings menu, which starts off with an indication of battery status and connectivity type, then allows you to dig down into menus to pair with, and forget, WiFi networks, toggle Bluetooth on or off, see battery percentage and charge status, view free storage capacity and firmware status (as well as reset the headset to factory settings), and manage the angle-controlled wake-up system.

In effect, each card is an application. So, if you ask Glass to perform a Google search – using the same server-based voice recognition service as offered on Android phones – you get a side-scrolling gallery of results cards which can be navigated by side swiping on the touchpad. It’s also possible to send one of those results to your phone, for navigating on a larger display.

google_glass_ui_leak_head_wake

For third-party developers, integrating with Glass is all about the Mirror API that Google’s servers rely upon. So, if you’re Twitter, you’d use the API to push a card – say, to compose a new tweet using voice recognition – to the Glass headset via the user’s Google+ account, coded in HTML, with a limited set of functions available on each card to keep things straightforward (say, dictate and tweet). Twitter pushes to Google’s servers, and Google pushes to Glass.
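
For a sense of what that flow might look like from a service’s side, here’s a hedged Python sketch of pushing an HTML card to a user’s timeline with a single REST call. The endpoint path, JSON fields, and notification level are assumptions based on the description above rather than confirmed details of Google’s implementation.

```python
# Hypothetical sketch of a service pushing an HTML card to Glass via the
# Mirror API. Endpoint path, JSON fields, and OAuth handling are assumptions
# based on the description above, not documented specifics.
import requests

MIRROR_TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"  # assumed

def push_card(access_token: str, html: str) -> dict:
    """Insert a timeline card for the user identified by the OAuth token."""
    card = {
        "html": html,                         # small chunk of HTML, per the article
        "notification": {"level": "DEFAULT"}  # assumed flag asking Glass to ping the wearer
    }
    resp = requests.post(
        MIRROR_TIMELINE_URL,
        json=card,
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# Example: a Twitter-like service pushing a "compose a tweet" card.
# push_card(user_token, "<article><p>Say something to tweet it…</p></article>")
```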

“You could push a card to Glass from anything: a website, an iOS app”

As a system, it’s both highly flexible and strictly controlled. You could feasibly push a card to Glass from anything – a website, an iOS app, your DVR – and services like Facebook and Twitter could add Glass support without the user even realizing it. Glass owners will log in with their Google account – your Google+ is used for sharing photos and videos, triggering Hangouts, and for pulling in contacts – and then by pairing a Twitter account to that Google profile, cards could start showing up on the headset. All service management will be done in a regular browser, not on Glass itself.

google_glass_ui_leak_wifi

On the flip-side, since Google is the conduit through which services talk to Glass, and vice-versa, it’s an all-controlling gatekeeper to functionality. One example of that is the sharing services – the cloud services that Glass hooks into – which will be vetted by Google. Since right now there’s no other way of getting anything off Glass aside from using the share system – you can’t initiate an action on a service in any other way – that’s a pretty significant gateway. However, Google has no say in the content of regular cards themselves. The control also extends to battery life; while Google isn’t talking runtime estimates for Glass yet, the fact that the heavy lifting is all done server-side means there’s minimal toll on the wearable’s own processor.

google_glass_ui_bluetooth_battery

Google’s outreach work with developers is predominantly focused on getting them up to speed with the Mirror API and the sharing system, we’re told. And those developers should have ADB access, too, just as with any other Android device. Beyond that, it’s not entirely clear how Google will manage the portfolio of sharing services: whether, for instance, there’ll be an “app store” of sorts for them, or a more manual way of adding them to the roster of supported features.

google_glass_ui_leak_device_info

What is clear is that Google isn’t going into Glass half-heartedly. We’ve already heard that the plan is to get the consumer version on the market by the end of the year, a more ambitious timescale than the originally suggested “within twelve months” of the Explorer Edition shipping. When developer units will begin arriving hasn’t been confirmed, though the new Glass website and the fresh round of preorders under the #ifihadglass campaign suggest it’s close at hand.

Glass still faces the expected challenges of breaking past self-conscious users, the inevitable questions when sporting the wearable in public, and probably the limitations of battery life as well. There’s also the legwork of bringing developers on board and getting them comfortable with the cloud-based system: essential if Glass is to be more than a mobile camera and Google terminal. All of those factors seem somehow ephemeral, however, in contrast to the potential the headset has for tying us more closely, more intuitively, to the online world and the resources it offers. Bring it on, Google: our faces are ready.



Google Glass in focus: UI, Apps & More is written by Chris Davies & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.

Good news: Google Glass isn’t just Pebble on your face

I admit it, I was getting worried. After the original Project Glass concept video promised far, far more than the wearable could deliver, and then the public tidbits from Googlers pointed to little more than a hands-free camera and the occasional email notification, I started to suspect Google had entirely dropped the ball with Glass. Less wearable computer, and more strap-a-Pebble-to-your-face.

glass3

Now there’s nothing wrong with making smartphone notifications more useful or easy to consume: that, after all, is why interest in Pebble and other smartwatches has been so high. Yet the initial promise of Glass had been so much more than that, harnessing the power of Android and ubiquitous connectivity and wearer-attention to augment your daily life in persistent ways a smartphone could never manage.

Okay, so the first promo video was ridiculously far-fetched, but as time went on – and the Google team members lucky enough to have access to Glass prototypes teased us with photos, videos, and sky-dives filmed using the headset – it began to look more like Glass was a camera first rather than a wearable computer. Those fears were compounded after early hands-on reports began to trickle out, with talk of little more than email alerts and other notifications dropping into the corner of your vision.

google_glass_translation

That seemed, frankly, a waste, and so it’s great to see a more realistic explanation of what Glass will do in Google’s new campaign. The display isn’t just a notification pane, it turns out, but a proper screen (albeit transparent) capable of showing Google search results, color navigation directions, and more.

Google Glass walkthrough:

Best of all, it’s very much a two-way stream of information. Glass isn’t just showing you data and then expecting you to pull out your phone to respond to it, as per most smartwatches we’ve seen, but uses voice commands of impressive complexity to operate. The instruction “OK Glass” apparently wakes the headset up, and then you can ask for Google searches, photographs and video, and even for language translations, with the headset discreetly whispering the foreign phrases in your ear.
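
As a purely illustrative way of thinking about that flow – wake phrase first, then a spoken request – here’s a tiny Python toy that ignores everything until it hears “OK Glass” and then routes a handful of the commands mentioned above. The handlers and return strings are invented; this is not how Glass’s software actually works.

```python
# Toy sketch of the "OK Glass" interaction flow described above -- purely
# illustrative, not Glass's actual software. Commands and handlers are invented.
def handle_command(command: str) -> str:
    command = command.lower().strip()
    if command.startswith("google "):
        return f"searching for: {command[len('google '):]}"
    if command == "take a picture":
        return "capturing photo"
    if command.startswith("get directions to "):
        return f"routing to: {command[len('get directions to '):]}"
    if command.startswith("how do you say "):
        return "whispering the translation via the bone-conduction speaker"
    return "unrecognized command"

def glass_session(utterances):
    """Ignore everything until the wake phrase, then route each command."""
    awake = False
    for phrase in utterances:
        if not awake:
            awake = phrase.lower().strip() == "ok glass"
            continue
        yield handle_command(phrase)

print(list(glass_session([
    "ok glass",
    "take a picture",
    "how do you say bread in French?",
])))
```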

In fact, there’s little suggestion that the trackpad on the side of Glass plays much part, with Google showing only voice commands to navigate through the modified Android OS. It’s worth noting that the video chops together only the key features, however; the actual transitions between them – jumping back to whatever homescreen Glass has, and stepping through pages of search results, for instance – aren’t shown. That may well demand some touchpad stroking. There’s also the question of whether Glass works with touch controls alone, or if you have to give it vocal instructions: that could undermine discreet use of the headset in situations when speaking out loud isn’t really acceptable. At least one of the pictures Google has freshly released today shows what appears to be an eye-tracking camera on the inside of the eyepiece.

Google Glass eye-tracking camera

“This isn’t really augmented reality”

One thing that’s clear already is that this isn’t really “augmented reality“, at least not as we generally conceive of it. Glass doesn’t modify your view of the world, or do any clever floating of glyphs or data around people or objects in your eyeline; it can’t change the way you see things. Instead, it’s more akin to a smartphone that’s been squeezed, extruded, and generally reshaped to fit your face rather than your pocket: assisting your hunt for digital information, yes, but leaving it up to you as to how it integrates into your life.

Google seems keen to involve more than just developers in the latest round of Glass Explorer Edition presales; whereas only coders had the chance to slap down $1,500 back at Google I/O 2012, this time around the company tells us it’s looking for a more diverse group. In fact, the #ifihadglass campaign doesn’t even require those 8,000 picked to commit to producing their application suggestions. Instead, they’ll be selected on the basis of creativity, their social reach (i.e. the scale of the audience they could preach the good Glass message to), and how compelling and original their ideas are.

There’s still plenty to be learned about Glass. Google has teased its cloud-based engine for the headset, but has otherwise said little about the development environment involved, and the biggest concern – battery life – is still conspicuously overlooked anytime the search giant mentions wearables publicly. We also don’t know when the Explorer Edition headsets will be released, though Google tells us that those people who ordered at Google I/O last year are first in line to get their units. Still, the huge amount of “geek” interest bodes well for the commercial launch, whenever that might be, and while Glass may not be the mainstream push for augmented reality we initially expected, the potential is still there to change the way we interact with the world – real, and digital – forever.


Good news: Google Glass isn’t just Pebble on your face is written by Chris Davies & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.

New Google Glass video demos true potential of water-resistant wearable

Google has spilled a fresh batch of Google Glass details, with a new video detailing what the wearable can do – including video, Google searches, photos, voice translation, and more – as well as showing the latest hardware. The new footage is apparently a far more realistic demonstration of Glass’ potential than Google’s original concept video, putting a preview pane of the Glass eyepiece in the upper right corner of the screen, and showing how the headset reacts to spoken commands prefaced with “OK, Glass.”

glass10

So, to take a photo you can merely wake the headset with the “OK, Glass” command, and then say “take a picture” complete with a preview in the corner of your vision. The same is true for video – “Start Recording” – and you can trigger Google+ Hangouts too, giving friends a live streaming view through the headset’s front-facing video camera.

There’s also support for directions, with overlays of which roads are coming up, what path to take, and ETA, together with the ability to Google for information such as “how long is the Brooklyn Bridge.” Glass even supports voice-dictated messages, and translations, so you can ask “how do you say bread in French?” and have the headset whisper the answer to you.

glass-directions

Google Now-style features, such as flight information cards, are also included, popping into your vision when relevant rather than forcing you to manually ask for them.

Meanwhile, there are new images of the Glass headsets, showing five different colors – charcoal, tangerine, shale, cotton, and sky – and seemingly confirming that the wearable will be water-resistant. Considering it’s designed to be worn all the time, that’s probably a good idea. A version with sunglass lenses attached is also shown, and we know Google is thinking about prescription lens support too. Finally, the headband itself is seemingly made from flexible metal, for better resilience.

glass8

Google is yet to deliver the first batch of Glass Explorer Edition headsets to Google I/O 2012 preorder customers, though that hasn’t stopped it opening up a second round of orders. Developers who can give a sufficiently interesting use-case will be invited to preorder one of 8,000 more Glass units.



New Google Glass video demos true potential of water-resistant wearable is written by Chris Davies & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.

8,000 more Google Glass wearables on offer for creatives

Google has re-opened preorders for its Glass wearable computer, though it’s not just a case of opening up your wallet to the tune of $1,500: you’ll need to have some good ideas as to what exactly to do with the wearable to qualify. Glass was first put up for sale at Google I/O 2012 as the limited-run Explorer Edition – still yet to ship, though promised sometime in early 2013 – and the new round of orders extends the net to developers across the US.

google_glass_io-580x386

They’ll have to convince Google that they’re worthy customers, however, using either Google+ or Twitter to do so. In an outline of fifty words or less, accompanied by up to five photos and a video of up to fifteen seconds, they’ll need to explain what they’d do if they had a Glass headset.

Applications are being accepted up until February 27, which basically means a week to come up with a killer idea. Of course, since the applications are all being made publicly, the longer you wait, the more likely it is that someone else might figure out your idea and detail it first.

Only three applications are allowed per person, and they can’t be modified after being submitted. Google will be judging them via an independent jury, based on creativity, compelling use, originality, and “social and spectrum”; there’ll be 8,000 headsets to be had in this new round of orders. Collection will be made in person, at one of three special “pick-up experience” events held in New York, Los Angeles, or the San Francisco Bay Area.

If you’re not a developer, but would still like to keep abreast of some of the ideas people are coming up with, you can follow along on both Twitter and Google+ using the #ifihadglass hashtag. More details in the FAQ.

[via The Verge]


8,000 more Google Glass wearables on offer for creatives is written by Chris Davies & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.