The ranks of early Google Glass owners are dominated by developers and tinkerers, so it’s only fair that they get easy access to the downloads they need. Accordingly, Google has quietly set up a page that centralizes both Glass system images and kernel source code. The company has even saved owners from having to hack their eyewear the hard way: one image comes pre-rooted for those willing to toss caution (and their warranties) to the wind. Most of us won’t be able to take advantage of these downloads for a year or more, but those with early access can swing by the new code hub today.
The Nest thermostat has been gaining a lot of popularity recently, mostly due to its sleek design and learning capabilities, not to mention that it can be controlled from a smartphone. Now that control is coming to Google Glass, with a Nest app that will let you adjust the thermostat using voice commands.
The app will recognize a number of spoken commands, but it covers just three main functions: putting the thermostat into away mode, bringing it back out of away mode, and changing the temperature. You can say things like “set temperature to…” to adjust the heat, or “leaving the office now” to make sure the Nest wakes up from away mode before you get home.
The Nest app is available for Google Glass right now, but only to a select number of Nest users. Once it goes live for everyone, all you’ll need to do is log in with your Nest credentials and you’ll be off to the races. The app’s source code is available on GitHub, so if you want to dive in right away and are comfortable navigating your way around code, you can play with it today.
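For a feel of what such an app has to do, here is a toy sketch of mapping transcribed voice commands onto the three functions the article describes. This is not code from the actual GitHub project; the phrases, action names, and function are all hypothetical illustrations.

```python
import re

# Hypothetical illustration only — not the real Nest-for-Glass app.
# Maps a transcribed phrase to one of three actions: set_temperature,
# away_on (enter away mode), or away_off (return from away mode).

def interpret(phrase: str):
    """Turn a transcribed voice command into an (action, value) pair."""
    phrase = phrase.lower().strip()
    m = re.search(r"set temperature to (\d+)", phrase)
    if m:
        return ("set_temperature", int(m.group(1)))
    if "leaving the office" in phrase:
        return ("away_off", None)   # heading home: wake the Nest up
    if "leaving home" in phrase:
        return ("away_on", None)    # nobody home: switch to away mode
    return ("unknown", None)

print(interpret("Set temperature to 72"))   # ('set_temperature', 72)
```

A real implementation would sit behind Glass’s speech recognizer and then call Nest’s web service with the user’s credentials; this sketch only shows the command-to-action step.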
We’ve seen a lot of apps make their way to Google Glass recently, with a slew of them released around Google I/O, including Facebook, Twitter, Evernote, and CNN. More are sure to come over the summer, and a heap of apps should already be available by the time Google Glass hits the mainstream next year.
If you haven’t heard of Sherpa, you’re most likely not alone. It’s a new Android app that aims to dethrone Google Now and Apple’s Siri. Sherpa plans to launch on the iPhone soon, as well as make its way to Google Glass to take on Google’s own voice command software on the new spectacles.
The company has unveiled plans to bring its software to Google Glass, and Sherpa CEO Xabier Uribe-Etxebarria makes the bold claim that its voice command app is much better suited to Google Glass than Google’s own software. Uribe-Etxebarria says that voice commands on Glass are limited, and that Google is “not taking advantage of all the features and the potential of Google Glass.”
What separates Sherpa from the rest of the pack is its ability to understand meaning and intent. The app builds its own metalanguage of rules and semantic concepts on top of Google’s speech API, which lets Sherpa carry out a surprisingly wide variety of actions.
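Sherpa’s actual metalanguage is proprietary, but the general idea of combining rules with semantic concepts can be illustrated with a toy intent matcher. Everything below (the concept sets, rule names, and matching logic) is an invented sketch, not Sherpa’s code.

```python
# Toy rule-plus-concept intent matching, in the spirit of what the
# article describes. All names and rules here are hypothetical.

CONCEPTS = {
    "play_verb": {"play", "start"},
    "music_noun": {"song", "track", "music", "album"},
    "volume_noun": {"volume", "sound"},
}

# Each rule names an intent and the concepts an utterance must contain.
RULES = [
    ("play_music", ["play_verb", "music_noun"]),
    ("adjust_volume", ["volume_noun"]),
]

def match_intent(utterance: str) -> str:
    """Return the first intent whose required concepts all appear."""
    words = set(utterance.lower().split())
    for intent, needed in RULES:
        if all(words & CONCEPTS[c] for c in needed):
            return intent
    return "unknown"

print(match_intent("play that song again"))  # play_music
print(match_intent("turn down the volume"))  # adjust_volume
```

The advantage over plain keyword matching is that new phrasings can be supported by growing the concept sets, without touching the rules themselves.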
The app is still in beta, but Sherpa plans to roll it out for Google Glass and other wearables in late 2013 or early 2014. Sherpa can already pull off a handful of neat tricks, including playing music without downloading the tracks first and automatically offering directions to scheduled events in your calendar. It can also handle chores like turning down the volume or toggling WiFi.
Like Google Now, Sherpa can also predict what you’ll want to see and hear, such as score updates from your favorite sports teams, or suggestions for popular restaurants in the area when dinner time approaches. Exactly what the Google Glass app will do is still unknown, but from what the company says, we can expect far more features from Sherpa than from Google’s built-in offering.
This week the folks at CyanogenMod, far and away the most popular third-party ROM development group, revealed their first shot at ClockworkMod Recovery for Google Glass. This interface is a bare-bones first step toward a slew of customized user interfaces for Google Glass, starting here with the Explorer Edition of the device. CyanogenMod developer Brint Kriebel (aka bekit) has been kind enough to provide us with some up-close photos of the software running on his own Glass device, too.
Now before you get too pumped up about this, bear in mind that unlocking your Glass device voids the warranty provided by Google. The same is true for most Android devices on the market today, so keep a weather eye when you’re hacking along.
What you’re seeing here is a screen quite similar to what you’d get with ClockworkMod Recovery on an Android smartphone or tablet. On Glass, you scroll through Recovery menus with the camera shutter button and select items with the power button; on a smartphone, the volume and power buttons do the same jobs.
Kriebel has tested several elements of this Recovery build, but notes that he hasn’t yet tested any installations, since no installable packages exist yet. Once developers begin to create mod packages for Glass and zip them up real nice, Recovery will be able to flash them to the device with a button tap.
I have successfully tested the following:
• access via adb (including Koush’s new adb backup)
• wipe data/factory reset
• mount/unmount partitions
• backup/restore
• auto disable of stock recovery re-flash
• auto root
– Kriebel (bekit)
Users wishing to work with this custom Recovery for Google Glass can head over to Brint Kriebel’s Google+ post to grab the link to the file image. If you’re feeling brave, let us know how it all goes – and if you’ve got any fabulous customized bits and pieces you’ve installed with Recovery, too!
Google’s consumer version of Glass will use Samsung OLED displays, according to reports out of South Korea, with the possibility of flexible panels being used for the futuristic wearable. The deal follows a recent visit by Google CEO Larry Page to a Samsung Display OLED production line, The Korea Times reports, as well as heavy-handed hints by the screen division’s CEO that wearables would figure highly in flexible OLED’s future.
“OLED on silicon may be used for glasses-type, augmented-reality devices much like the Google Glass,” CEO Kim Ki-nam said during an SID keynote this past week. “The wearable market will be a major beneficiary of the free-form factor advantage of flexible OLEDs. Smartphone-linked wearable accessory products such as watches and health bands will use ultra-thin flexible OLEDs embedded with various sensors.”
Samsung has been talking up the potential of flexible OLED for some time, though it has yet to commercially deploy the technology. Commercial panels had been promised for 2013 under the YOUM brand, but were slightly delayed after Samsung was apparently forced to dedicate the bulk of its production facility to traditional AMOLED screens for devices like the Galaxy smartphone series.
Back at CES, the company brought a number of concepts along, some using flexible OLED technology. There, the panels didn’t actively flex, but were instead wrapped around the shell of a device mock-up, and intended for use as an always-on status panel.
The current Glass Explorer Edition, which Google has sold to a limited number of developers for real-world testing and app development, uses a small plastic eyepiece into which the image is projected. Exact technical specifications for the display technology itself have not been shared, though it’s believed to be something along the lines of a transmissive color filter panel backlit with an LED in the headset section, near the camera module.
Switching to OLED would mean Google could do away with the separate LED backlighting, since OLED pixels produce their own light. It seems likely that Google would still use the beam splitter eyepiece block, since that allows the “floating” display to be translucent, though it’s worth noting that Samsung has been showing off translucent OLED panels for several years, and has in fact commercialized them on a small scale.
Either way, it would likely make for a more compact and potentially more power-frugal setup than the one in the Explorer Edition. That could mean a lighter, longer-running Glass, both of which Google has said are key objectives for the consumer version.
Exactly when the mass-market Glass will launch is unclear, though Google chairman Eric Schmidt did suggest that sometime in 2014 is likely. Similarly unknown is how much it will retail for, though Google has been clear that it aims to make the wearable far more affordable than the $1,500 developer version.
As a sort of “Part 2” or even “Part 3” of the Glass chat series SlashGear has been running this week and last, today’s words with Google Glass lead industrial designer Isabelle Olsson lend some insight into the device’s road to final hardware. Speaking on how the original Glass prototypes eventually became the device you see today, Olsson shared three principles that helped the team solidify its process.
This is only one segment of the extended fireside chat shared with Google I/O attendees earlier this month. Also included in the chat were Senior Developer Advocate at Google for Project Glass Timothy Jordan, Product Director for Google Glass Steve Lee, and Google Glass Engineer Charles Mendis.
Isabelle Olsson: We took a reductionist approach. We removed everything that wasn’t absolutely essential. And then in addition to that, I formed three principles to guide the team through this ambitious, messy process. Those are:
• Lightness
• Simplicity
• Scalability
And those are not just fancy words: they mean something.
Lightness
O: So when it comes to lightness, it’s fairly straightforward. We are obsessed with weight. Not in the same way the fashion industry is – but we do care about every single gram. Because if it’s not light, you’re not gonna want to wear it for more than 10 minutes. And it’s not only about lightness but about balance. How it’s balanced on your face, and the way we designed it with our construction methods, and material choices, and how we place the components.
It weighs less than most sunglasses and regular glasses on the market. It’s pretty cool.
Simplicity
O: But it’s not only about physical lightness, it’s about visible lightness. We took the approach of hiding some of the largest components on the board behind the frame, so we could create this one, clean, simple appearance from the side.
Scalability
O: When I joined the project, we thought we needed 50 different adjustment mechanisms, but that wouldn’t make a good user experience. So we scaled it down to this one adjustment mechanism.
We make Glass modular. In this stage, this means you’re able to remove the board from the main frame. This is pretty cool. This opens up a lot of possibilities. It opens up possibilities for not only functionality but also scalability.
SlashGear will be exploring Google Glass through each of its software updates, up to and through the point at which it becomes a consumer product in 2014. The Glass team have let it be known that software upgrades will arrive at least monthly, and that the final consumer product may look similar to the Explorer Edition everyone is wearing out in public today, or may carry several hardware modifications by then.
As Google Glass continues to be a unique sort of hardware / software platform in the industry, so too do the creators of the wearable computer stay hot commodities for question and answer sessions. In the feature you’re about to see, two members of the main Glass creation and development team discuss the social etiquette…
The wearables industry could be worth as much as $50bn in just three years’ time, Credit Suisse has predicted, as gadgets like portable fitness monitors and Glass-style headsets grow in popularity. Core to the likely growth is the prevalence of smartphones, with the finance firm estimating that there are in excess of 250m “installed mobile…”
Olympus has wearable display plans of its own, a new patent reveals, effectively splitting a digital camera into two pieces – eye-worn screen and imaging unit – for more flexibility in photography. The patent, “Camera and Wearable Image Display Apparatus”, describes a monocular eye-piece display that connects wirelessly to a camera body, clicking into image preview and review mode when the camera is held still to take one or more photos.
Where Google Glass counts photography as one of its abilities, with the display used at other times to show notifications, navigation directions, and other information, Olympus’ wearable would be much more focused. Rather than trading some clarity for transparency, as Glass has done, the Olympus eyepiece would use a moveable shutter which could selectively block out external light and so provide the sort of clear, virtual large-screen display necessary for accurately reviewing shots.
The camera section would use a vibration sensor, Olympus suggests in its filing, to decide whether it could trigger the eyepiece functionality. By having the display right in front of your eye, it suggests, blurry or fast movements from the camera could lead to discomfort if piped through to the display all the time.
Instead, it’s only when the camera is held still – as you would when framing a shot – that the display kicks into camera mode. By splitting the parts up, the camera itself could be lighter and more easily pocketed.
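The patent does not spell out how the stillness decision is made, but a common approach is to compare recent motion-sensor readings against a variance threshold. The sketch below is a hypothetical illustration of that idea; the window size, threshold, and sample values are invented numbers, not figures from the filing.

```python
# Hypothetical "held still" check, illustrating the trigger the Olympus
# filing describes: switch the eyepiece into camera mode only when recent
# accelerometer readings are nearly constant. Threshold is invented.
from statistics import pvariance

STILL_THRESHOLD = 0.02   # arbitrary units; real tuning would be empirical

def camera_is_still(recent_accel_magnitudes) -> bool:
    """True when motion variance over the sample window is below threshold."""
    return pvariance(recent_accel_magnitudes) < STILL_THRESHOLD

# A steady hand produces near-constant readings; panning does not.
steady  = [1.00, 1.01, 0.99, 1.00, 1.01]
panning = [0.6, 1.4, 0.9, 1.8, 0.5]
print(camera_is_still(steady), camera_is_still(panning))  # True False
```

Gating the display this way also addresses the comfort problem the filing raises: shaky footage simply never reaches the eyepiece.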
It also means greater flexibility in how photos can be framed, Olympus suggests. Shots could be taken from above the photographer’s head, or from below, or the side, while still allowing for a clear preview. Meanwhile multiple sequential shots – such as panning to shoot several images of a moving subject – could be taken by only moving the camera, allowing the photographer to stay still and more stable.
This isn’t the first time Olympus has flirted with wearable tech. Last year, the company revealed a more direct Google Glass competitor, the MEG4.0, a head-mounted computer which could be used as a remote display for a Bluetooth-tethered smartphone. Another recently published patent application, Egami reports, shows a more glasses-like headset with greater flexibility for adjustment than Google’s version, as well as a mounting point for a camera.
The “unconverged camera” approach is more specific than the MEG4.0, but arguably more applicable to Olympus’ core audience. Whether it will ever spawn a production model remains to be seen, though it’s entirely possible that a hacked-together version could be assembled using something like the MEG4.0, or indeed Glass, as a remote screen for a wirelessly-enabled camera.
At this year’s Google I/O developer conference, a Fireside Chat with several members of the core Google Glass team revealed much about not just the future of the device, but its origins as well. While earlier in the day a single slide had shown a set of six original prototypes of what was then called Project Glass, here lead industrial designer Isabelle Olsson had one key prototype on hand to show SlashGear in the plastic, as it were.
As Olsson made clear, this device was created as one of the very first iterations of what’s been reduced to a simple skeleton frame and single, removable computer element. What you’re seeing here is a set of development boards attached to two full-eye glass lenses, white plastic, tape, exposed cords and all.
Isabelle Olsson: When the team started working on this, it was very clear that we’re not taking something that already existed and making incremental improvements to it. The team wanted to create something that’s much more intuitive, immediate, and intimate. But to create a new kind of wearable technology, that’s so ambitious, and very messy at points.
In addition to showing how this pair of glasses worked, with folding sides and a real, working set of innards (if you can call them that, of course), Olsson showed off one of the prototype pairs of prescription Glass glasses as well. These are seen in the box below, and their design was spotted on Google employees here and there during the week, live and active.
Olsson: I will never forget my first day on the team, when I walked into a room and everyone was wearing these CRAZY things on their heads. I brought the prototype so you could see what I walked in to. It comes in a fancy bag…
Olsson: I think like the colors of the board, maybe, fits my hair color, but I don’t know. It’s kind of heavy, though. I think I’m going to take it off now. So – but – how do you go from something like this to what we’re all wearing today?
Olsson: We took a reductionist approach. We removed everything that wasn’t absolutely essential. And then in addition to that, I formed three principles to guide the team through this ambitious, messy process. Those are:
• Lightness
• Simplicity
• Scalability
You’ll find this particular chat split up across three different features, each surrounding Olsson’s fireside chat contributions. The one you’re in now of course stays within the bounds of the prototype you’re seeing above and below. There are also posts on color choices for Glass, a bit about Modular Fashion, and another expanding on the design of the final product.
Olsson: When I joined the project, we thought we needed 50 different adjustment mechanisms, but that wouldn’t make a good user experience. So we scaled it down to this one adjustment mechanism.
This prototype works with a Glass projection unit nearly the same as the one we see in the Explorer Edition. It’s attached to one of two computer boards, the one on the right temple, which here also hosts a camera, even at this early stage.
This first development board is also connected to a second board, presumably reserved for storage connections and a battery. Tape, glue, and soldered bits and pieces here and there hold the unit together, but it does work.
There’s a single button above the camera lens that activates the camera – a similar hidden button sits in the same area on the Explorer Edition of Glass. This original prototype includes essentially every element of the final release; it’s just a bit larger, and not really made to look fashionable to the uninitiated.
Google I/O 2013 also played host to a chat we had with Sergey Brin – co-founder of Google and currently Director of Special Projects for the company. He also gave some insight into the way Glass was first tested, noting that while there were some non-functional bricks used to test form for Glass, it certainly all started with function:
Sergey Brin: We did have some non-functional models, but mostly we had functional, uglier, heavier models. Very early on we realized that comfort was so important, and that [led to] the decision to make them monocular.
We also made the decision not to have it occlude your vision, because we tried. We tried different configurations, because [it’s] something you’re going to [need] to be comfortable [with]. Hopefully you’re comfortable wearing it all day? [That’s] something that’s hard to make; you’re going to have to make a lot of other trade-offs.
Have a peek at the photos in a larger sense in the gallery below and let us know if you see anything you recognize – it’s all there, piece by piece.