There are still quite a few months before lucky early adopters can get their eager hands (and eyes) on Google’s Explorer Edition wearables, but in the meantime the company isn’t wasting any time and is building up its team to have the frames as loaded as can be. One of the latest additions to Mountain View’s Project Glass squad is former Rdio and Danger software engineer Ian McKellar, who previously worked on the streaming service’s API, among other things. Mum’s the word on what exactly he’ll be tinkering with at the Project Glass laboratories, though we can’t imagine it’ll be anything short of amazing. In case you’d like to dive into his thoughts a little more, you can check out his tweet on the matter at the link below.
Google has snatched up ex-Rdio software engineer Ian McKellar to bolster its growing Google Glass wearable computing team, as the company readies its first “Explorer Edition” hardware for developers in early 2013. McKellar formerly worked on the API for streaming music service Rdio, and before that developed socially-integrated browsers and worked for Danger, the maker of the Sidekick smartphone that was later acquired by Microsoft.
At Danger, McKellar was responsible for a webpage server that “transcoded web pages to Danger’s hiptop smartphones” among other things, helping trim the bandwidth fat from sites and make sure they’d display correctly on the handsets themselves. That experience could well be of interest to the Glass team, since regular webpages are likely to require some reformatting to suit the wearable eyepiece of the Google headset.
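That kind of transcoding proxy is conceptually simple; a minimal sketch (purely illustrative, not Danger’s actual pipeline) might strip bandwidth-heavy elements from a page and flatten what remains into capped plain text for a small display:

```python
import re

def transcode(html: str, max_chars: int = 500) -> str:
    """Trim a web page for a constrained, low-bandwidth display.

    Illustrative only: drops scripts, styles, and images, then
    flattens the remaining markup to truncated plain text.
    """
    # Drop scripts and styles entirely (tag plus contents).
    html = re.sub(r"(?s)<(script|style)[^>]*>.*?</\1>", "", html)
    # Drop images, the biggest bandwidth cost on most pages.
    html = re.sub(r"<img[^>]*>", "", html)
    # Collapse remaining tags to spaces, then normalize whitespace.
    text = re.sub(r"<[^>]+>", " ", html)
    text = re.sub(r"\s+", " ", text).strip()
    return text[:max_chars]
```

A real transcoder would parse the HTML properly and re-emit device-specific markup, but the shape of the job — remove heavy content, reflow the rest — is the same.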
Eyes-on reports from those who have tried out Google’s current prototypes suggest that the user experience, as it stands, places floating iconography and other information at the top of the wearer’s line of sight. Glass will not only need a way to parse online data to suit that minimized display segment, but to trim the information fat so as to avoid lag or huge tethering bills.
One way around that could be heavy server-side processing, crunching the data for each individual user – whether alerts, Twitter or Google+ pings, messenger requests or websites – and then squirting it over to their eyepiece. That would reduce the amount of processing power the Glass wearable itself would require, thus cutting down on power consumption.
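Such a server-side pipeline would boil each user’s pending notifications down to a few compact items before transmission. Here is a hedged sketch of that idea; the payload format and field names are our own invention, not anything Google has described:

```python
def build_payload(alerts, max_items=3, max_len=40):
    """Rank a user's pending alerts by priority and emit a trimmed,
    bandwidth-friendly payload for the headset (hypothetical format)."""
    ranked = sorted(alerts, key=lambda a: a["priority"], reverse=True)
    # Keep only the top few items, with text clipped to fit the display.
    return [{"src": a["source"], "txt": a["text"][:max_len]}
            for a in ranked[:max_items]]
```

Doing this ranking and clipping on the server means the eyepiece only ever receives what it will actually show, which is exactly the power-saving trade-off described above.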
McKellar also worked on the Flock social networking and media sharing browser, which had heavy integration with Facebook and other services. Google is likely to focus on Google+ integration in Glass initially, but is unlikely to ignore Facebook, Twitter and other third-party services in the long term.
The first Google Glass Explorer Edition units went on sale to developers at Google IO, and are expected to be shipped out early in the new year. A consumer version is planned for within a year of that point.
As if Google’s cyber-glasses weren’t sci-fi enough, the search giant is also apparently looking into cyber-gloves. The patent “Seeing with your hand” has been granted to Google, and could be the beginning of a future Glass peripheral.
This week the folks in charge of social media for Google’s Project Glass created a set of Hangouts for the betterment of the project, but things didn’t turn out all that great this afternoon. The first set of Google Glass Explorers hangouts ended in mass confusion, with hopeful users finding out little, if anything, about Google Glass that they didn’t already know. Google Glass product manager Steve Lee and community manager Sarah Price had hoped to hear feedback and ideas throughout the day.
To hear feedback from users of a device that isn’t actually on the market yet, Lee and Price noted that they’d be “hopping into as many hangouts as possible.” Participants in the event were encouraged to start their own hangouts and get to chatting about Google Glass as soon and as much as possible. The result, without much if any direction, was a collection of video chats with users saying things like “what are we supposed to be doing here?”
This is the first event of many, we expect, from the Explorers project we saw launched last week. In-depth talks and informative releases will be appearing soon with what we must expect will be more direction than we’ve seen this afternoon. People invited to the Explorers club were either Google I/O participants or people who took the opportunity to purchase the early release of Google Glass that will be sent out at the start of 2013.
For now we have only to wait for more information on the project as Google distributes it. When the Google Glass units ship early next year, developers as well as hardcore Google lovers will be able to send out information on their own, and the final consumer-ready version will be prepped before the end of 2013 – or so tips have tipped.
News on Google’s Project Glass just keeps coming and coming. It’s no surprise that we’re extremely excited and interested in the AR tech, and now we will hopefully be learning additional details early next week. On Wednesday we shared details about the VIP treatment we’ll be getting for pre-ordering a pair at Google I/O for around $1,500, and that treatment is about to start come Monday.
Google, via its official +Project Glass Google+ account, has just reached out to all Explorer Edition buyers, confirming that we’ll learn additional details in a private Google+ Hangout on Monday. This will include other lucky pre-order customers, as well as members of Google’s Project Glass team. Hopefully, while engaging in a live Google+ Hangout with actual Google developers, we’ll be able to learn some neat new things about Project Glass. Obviously we will let you know the minute we hear anything worth mentioning.
Project Glass made a huge splash at Google IO, when Sergey Brin took the stage and had a pair of the AR eyewear skydiving right into the event center in San Francisco. Since then we’ve seen plenty of patents, learned a few more details, and even saw Gmail’s lead developer head to the Project Glass crew. Stay tuned for additional details and hit the timeline below for further coverage.
When he’s not trash-talking Windows 8, Valve’s Gabe Newell is pondering next-gen wearable computing interfaces and playing with $70,000 augmented reality headsets, the outspoken exec has revealed. Speaking at the Casual Connect game conference this week, Valve co-founder and ex-Microsoftie Newell presented head-up display lag and issues of input and control for wearables as the next big challenge facing mobile computing, VentureBeat reports.
“The question you have to answer is, ‘How can I see stuff overlaid in the world when you have things like noise?’ You have weird persistence problems,” Newell said when asked about the post-touch generation of computing control. “How can I be looking at this group of people and see their names floating above them? That actually turns out to be an interesting problem that’s finally a tractable problem.”
Tractable it may be, but so far it’s not cheap. “I can go into Mike Abrash’s office and put on this $70,000 system, and I can look around the room with the software they’ve written, and they can overlay pretty much anything, regardless of what my head is doing or my eyes are doing. Your eyes are actually troublesome buggers,” Newell explains. The second half of the issue, though, is input, which the Valve CEO describes as “open-ended.”
“How can you be robustly interacting with virtual objects when there’s nothing in your hands? Most of the ideas are really stupid because they reduce the amount of information you can express. One of the key things is that a keyboard has a pretty good data rate in terms of how much data you can express and how much intention you can convey … I do think you’ll have bands on your wrists, and you’ll be doing stuff with your hands. Your hands are incredibly expressive. If you look at somebody playing a guitar versus somebody playing a keyboard, there’s a far greater amount of data that you can get through the information that people convey through their hands than we’re currently using. Touch is…it’s nice that it’s mobile. It’s lousy in terms of symbol rate,” said Gabe Newell, CEO of Valve.
Google’s Glass has sidestepped the issues somewhat, not attempting to directly overlay or replace exact objects in the real world with a wearable display, but instead float more straightforward graphics just above the wearer’s eye-line. That removes the precision problem, but means Glass will be less capable of mediating reality – i.e. changing what of the real world the user actually sees – and more about augmenting it with extra data.
As for control, Google has already shown off its side-mounted touchpad on Glass, and a recently published patent application fleshed out some of the other possibilities. They include voice recognition and hand-tracking using cameras, though Google also describes using low-level artificial intelligence to reduce the amount of active navigation Glass users may have to do.
For instance, Glass could recognize – using microphones built into the headset – that the wearer is in a car, Google explains, and thus automatically show maps and a navigation interface. Those same microphones could be used to spot mentions of peoples’ names and call up contextually-relevant information about them, working as an aide-mémoire.
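Rule-based context switching like that can be expressed very simply. In the sketch below, the signal names and card labels are our own placeholders, not anything Google has published:

```python
def choose_card(context):
    """Pick which Glass 'card' to surface from low-level context
    signals (illustrative rules only, hypothetical signal names)."""
    if context.get("in_car"):
        return "navigation"  # driving detected: show maps automatically
    if context.get("spoken_name"):
        # A name was heard nearby: surface contact info as an aide-memoire.
        return "contact:" + context["spoken_name"]
    return "clock"  # idle default
```

The interesting engineering would of course be in producing those signals reliably from microphone and sensor data; the dispatch itself is the easy part.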
Somewhat more bizarre, though, is research within Valve into using the human tongue as an input method. “It turns out that your tongue is a pretty good way of connecting a mechanical system to your brain,” Newell explained. “But it’s really disconcerting to have the person you’re sitting next to going, ‘Arglearglargle.’ ‘You just Googled me, didn’t you?’ I don’t think tongue input is in our futures.”
Folks who have placed a pre-order for Google Glass might want to look out for something interesting in their inboxes: ‘private updates’ headed their way. The Google Glass team sent an email to those who placed a pre-order at the Moscone West convention center last month, noting that these Google Glass “explorers” will receive a copy of the Google+ post shown on the right, penned by Google co-founder Sergey Brin himself, accompanied by instructions on how to receive private updates concerning the Glass project through one’s Google+ account.
Apart from that, ‘explorers’ of Google Glass have reported receiving a mysterious package with Google’s Mountain View headquarters as the return address: a simple blue box with blue silk lining on the inside and a cushion holding a thick glass paperweight imprinted with a 4-digit number. It will still take some time before the Google Glass project takes off in a big way, as it is tipped to ship some time next year.
Google’s Project Glass had a surprisingly large presence (skydivers) during the I/O event earlier this year, and now company co-founder Sergey Brin has checked in with the attendees who promised $1,500 for a set of the augmented reality eyewear. In his message he shared a photo he took while cruising through Montana, thanks to a mode the team is testing that snaps a picture every 10 seconds, no intervention needed. Unfortunately, if you’re not in that exclusive pre-ordering group you’ll have to wait for details like these to leak out secondhand, since private updates, special events, Google+ Hangouts, secret handshakes and Little Orphan Annie decoder rings (perhaps not the last two) are reserved for a “unique, trusted community.” Hey, it’s not like the rest of us wanted some silly visor or etched glass blocks anyway.
This afternoon none other than Google’s Sergey Brin sent out an update on Project Glass in the form of a special invite-only program: a Google+ space where a limited set of future users of the hardware can get special updates. The first of these updates has Brin describing how he used Project Glass and a brand new app to automatically take photos of a trip he was on, one every 10 seconds. The one shot he shared was a fabulous in-car photo that might never have existed had he not been wearing the AR glasses in real life.
Glass is a special bit of technology made by Google that takes the form of a pair of glasses with some extra-thick rims housing all the fabulous technology you could want. Using a brand new user interface shown on a tiny piece of glass that sits above your right eye, you’re able to control the device with a series of gestures and taps. The side of the device is touch-sensitive, while your head movements do the rest.
Glass Explorers will be getting exclusive looks at updates in the future similar to the one you’re seeing above and below here – so no worries, folks, you’ve got SlashGear on your side! That said, at the moment it appears that only developers, press, and other attendees of Google I/O 2012 have received the invite thus far, though we’re seeing more and more pop up in our tip bin by the minute.
Stay tuned and hit the timeline below to follow up on all the most recent news bits surrounding Project Glass in all its greatness.
This week we had a brief chat with Will Powell, a developer responsible for some rather fantastic advances in the world of what Google has suddenly made a very visible category of devices: wearable technology. With Google’s Project Glass nearer and nearer reality with each passing day, we asked Powell how his own projects were making advances at the same time, and how he saw advances in mobile gadgets as moving forward – and possibly away from smartphones and tablets entirely.
For those of you unfamiliar with Powell’s work, hit up the following three links and see the videos of the projects he’s done throughout this post. Among the products Powell uses are the Vuzix STAR 1200 AR glasses, the Raspberry Pi – the fabulous miniature computer – and of course, a good ol’ fashioned ASUS Eee Pad Transformer.
SlashGear: Were you working with wearable technology before Google’s Project Glass was revealed to the world?
Powell: Yes, at Keytree we were working with wearable technology before the unveiling of Project Glass. I was working on CEO Vision, a glasses-based augmented reality system where you could reach out and touch objects to interact, or add interactive objects on top of an iPad. I have also had lots of personal projects.
SG: What is your ultimate goal in creating this set of projects with the Raspberry Pi, Vuzix STAR 1200, etc.?
P: I would say that the ultimate goal is really to show what is possible. With CEO Vision at Keytree we showed that you could use a sheet of paper to interact with sales figures and masses of data using the SAP HANA database technology. Then creating my own version of Project Glass, and now extending those ideas to cover translations as well, was just to show what is possible using off-the-shelf technology. The translation idea was to take down barriers between people.
SG: Do you believe wearable technology will replace our most common mobile tech – smartphones, laptops – in the near future?
P: Yes I do, but with a horizon of a couple of years. I think that with the desire for more content and easier, simpler devices, using what we are looking at and hearing to tell our digital devices what we want to find and share is the way forward. Even now we have to get a tablet, phone or laptop out to look something up. Glasses would completely change this because they are potentially always on and add full-time to at least one of our fundamental senses. Also, many of us already wear glasses: according to the Vision Council of America, approximately 75% of U.S. adults use some sort of vision correction, and about 64% of them wear eyeglasses, so people are already wearing something that could be made smart. That is a huge number of potential adopters for mobile personal information delivery.
I think we still have a way to go in working out how everything will fit together and how exactly we would interact with glasses-based technology. With the transition from computers to tablets and smartphones we opened up gestures; with glasses we have the potential to use body language and real-life actions as interaction mechanisms. And it would be the first time that there is no keyboard. There is also the potential for specifically targeted ads, which could end up making some parodies come true. However, I do think we will have an app store for a glasses-based device in the next few years.
SG: What projects do you have coming up next?
P: I have many more ideas about what glasses-based applications can be used for, and am building some of them. I am creating another video around translation to show the multilingual nature of the concept. Further to that, we are looking at what areas of everyday life could be helped by glasses-based tech and collaboration between glasses users. The translation application highlighted that glasses are even better with wide adoption, because Elizabeth could not see the subtitles of what I was saying without using the TV or tablet.
Stick around as Powell’s mind continues to expand on the possibilities in augmented reality, wearable technology, and more!
This site is run by Sascha Endlicher, M.A., during ungodly late night hours. Wanna know more about him? Connect via social media at about.me/sascha.endlicher.