Google Glass developers make Mirror API simple with Cat Facts

Google’s 2013 developer conference didn’t give immediate attention to Glass, at least not at its one and only keynote address – but behind the scenes, development ran deep. Speaking together at a developer chat session centered on “Building Glassware” with what the company calls the Google Mirror API, Jenny Murphy and Alain Vongsouvanh made the case for its future.


Alain Vongsouvanh is a Developer Programs Engineer on Google Glass and the Google Mirror API. Jenny Murphy is also a Developer Programs Engineer for Glass with Google and both of these folks help developers work with the code that brings Google Glass apps to life.

Timeline and Menu

“The Mirror API is one managed through requests made through connections. The main one is a Timeline text card.” This connection is separate from a Gmail connection and from a Maps connection – it exists as its own element, unique to Glass. The most basic setup here is text and an image.

Customizing these cards is as simple as writing HTML, though the supported subset isn’t as all-inclusive as, say, a full webpage displayed in the Chrome browser. Google provides a Playground where tests and development can be done, offering basic templates for developers and allowing them to start from scratch.
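To make that concrete, here’s a minimal sketch of what a timeline card looks like on the wire. The endpoint and field names (`text`, `html`) follow the Mirror API documentation; the card content and the token in the comment are stand-ins of our own, not anything shown in the session.

```python
# Minimal sketch of a Mirror API timeline card payload.
# Endpoint and field names follow the Mirror API docs; content is a placeholder.
MIRROR_TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"

def build_timeline_card(text=None, html=None):
    """Build the JSON body for a timeline item; a card can carry plain
    text, a limited subset of HTML, or both."""
    card = {}
    if text:
        card["text"] = text
    if html:
        card["html"] = html
    return card

card = build_timeline_card(
    text="Hello from Glassware!",
    html="<article><section><p>Hello from <b>Glassware</b>!</p></section></article>",
)
# Inserting it is then a plain authenticated POST, e.g. with the requests library:
#   requests.post(MIRROR_TIMELINE_URL, json=card,
#                 headers={"Authorization": "Bearer <oauth2-access-token>"})
```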


This system offers a variety of basic menu items like back and send, and developers are able to create custom menu items like “Complete!” The theme here is simplicity – this development environment is as simple as writing a bit of Java. It’s not something someone off the street with no coding knowledge will pick up in no time, but it’s certainly straightforward for a web developer or a creator of apps for smart devices.
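Those menu items hang off the same timeline card JSON. Here’s a rough sketch using the documented `menuItems` field: built-in actions like REPLY and DELETE need only an `action`, while a custom item like “Complete!” uses the CUSTOM action plus a `displayName` (the `id` here is our own placeholder, echoed back so your server knows which item was tapped).

```python
def with_menu(card):
    """Attach menu items to a timeline card dict (Mirror API `menuItems`)."""
    card["menuItems"] = [
        {"action": "REPLY"},    # built-in: voice reply
        {"action": "DELETE"},   # built-in: remove the card
        {
            "action": "CUSTOM",
            "id": "complete",   # placeholder ID, returned to you on selection
            "values": [{"displayName": "Complete!"}],
        },
    ]
    return card

item = with_menu({"text": "Buy cat food"})
```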

Contacts

Contacts is a system that users share to – just as they do on an Android smartphone. Developers can create a Contact resource, setting an ID that corresponds to a user, a group of users, or a third-party app. By default, sharing an item triggers a list of the apps and contacts that can receive it.
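In JSON terms, a Contact resource is tiny. This sketch follows the documented fields (`id`, `displayName`, `imageUrls`); the Cat Facts values and URL are just stand-ins:

```python
def build_contact(contact_id, display_name, image_urls):
    """Build a Mirror API Contact resource: a share target that shows up
    when the wearer shares a timeline item."""
    return {
        "id": contact_id,              # your stable identifier for this target
        "displayName": display_name,   # what the wearer sees in the share list
        "imageUrls": image_urls,       # icon(s) shown alongside the name
    }

contact = build_contact("cat-facts", "Cat Facts",
                        ["https://example.com/cat.png"])
# Registered with an authenticated POST to
#   https://www.googleapis.com/mirror/v1/contacts
```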


Subscriptions and Locations

With Subscriptions, developers receive notifications about changes. Instead of you posting to the API, the API posts to your service – input rather than output, so to speak. The developer specifies elements like the Collection, a User Token, a Verify Token, and a Callback URL where needed.
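A subscription is again just a small JSON resource. The field names below match the Mirror API documentation (`collection`, `userToken`, `verifyToken`, `callbackUrl`); the URL and tokens are placeholders. The callback must be HTTPS, since Google will POST notifications to it.

```python
def build_subscription(callback_url, user_token, verify_token,
                       collection="timeline"):
    """Build a Mirror API Subscription resource. Google POSTs a notification
    to callback_url whenever an item in the collection changes."""
    return {
        "collection": collection,     # "timeline" or "locations"
        "userToken": user_token,      # opaque ID so your server knows which user
        "verifyToken": verify_token,  # shared secret echoed back for verification
        "callbackUrl": callback_url,  # must be an HTTPS endpoint you control
    }

sub = build_subscription("https://example.com/notify", "user-123", "s3cret")
```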


A developer working with Subscriptions in Glass will be working with Timeline as well as Locations – this means they’ve got to account for both how the element is posted and what’s being posted, where it came from and what it’s doing.

Cat Facts

With an extremely simple Glassware app by the name of Cat Facts, Vongsouvanh showed how each of the five different elements of the Mirror API works. Below you’ll see his explanation of how it’s not always necessary to work with all five of these bits and pieces, but how even an app as simple as this one works with more than one.


Google Glass developers make Mirror API simple with Cat Facts is written by Chris Burns & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.

Google Glass getting apps for Facebook, Twitter, Evernote, CNN and more

Google has just announced a slew of new apps that are coming to Google Glass. In an effort to expand Glass’s abilities, a handful of different apps will become available to users, including Facebook, Twitter, Evernote, CNN, Tumblr, and Elle. Previously, only Path and The New York Times were available as apps on Google Glass.


Each of these apps will have its own unique twist in order to work seamlessly on Google Glass. For instance, Twitter will allow users to tweet to their account, and every tweet you send through Glass will be automatically tagged with the #throughglass hashtag. You can also receive notifications for mentions, messages, and replies.

As for the Facebook app, it too will feature some Glass-centric abilities, including the ability to take a photo using Glass and upload it straight to your Timeline, even adding a description to the photo using Glass’s voice dictation feature. Tumblr will let you do the same thing, posting a photo to your blog right through Glass.


Evernote’s Glass app focuses on two main actions that you can perform. You’ll be able to take a photo or video and send it to your Evernote account from Google Glass. You’ll also be able to choose specific notes from your Evernote account and send them directly to Glass, so that you can bring them up on the HUD when needed. A great example of this is a grocery shopping list – no having to fish your phone out of your pocket to see what to grab next off the shelves.

CNN’s app is fairly self-explanatory, but it goes a little further than just letting you swipe through headlines. You’ll be able to pick the types of alerts you want, such as sports scores or breaking news at certain times. You can also have articles read aloud to you, as well as watch video.

Of course, it’ll be a while before Google Glass will be in the hands of the mainstream public, but it’s nice to know that Google is already building up a repertoire of apps that people will be able to take advantage of right away.

VIA: The New York Times


Google Glass getting apps for Facebook, Twitter, Evernote, CNN and more is written by Craig Lloyd & originally posted on SlashGear.

Sergey Brin talks Glass: Camera stabilizer incoming

Walk the floors at Google I/O and if you’re lucky you’ll run into Sergey Brin, who spent some time telling us about the development process behind Google Glass as well as a teaser for the update roadmap. Surrounded by fans and sporting his own Glass, Brin explained some of the decisions around the use of a monocular eyepiece, and of its placement out of the line-of-sight rather than directly in front of the wearer, as you might expect from a true augmented-reality device. However, he also revealed that a future software upgrade will address one of our own issues with Glass: keeping video steady when you’re filming it from a wearable.


We’ve already been impressed by how Glass holds up as a wearable camera, particularly in situations – like when you’re playing with your kids or demonstrating a new gadget – when you need your hands to be free. However, we also found that keeping your head still while recording a conversation takes more than a little getting used to. All too easily you end up with nodding video, as you unconsciously move and react to the person you’re talking to.

We mentioned that to Brin, and he confirmed that it’s something Google is actually working on addressing. “Stay tuned, we’re gonna have some software that helps you out” he told us; it’s unclear how, exactly, that will be implemented, but digital image stabilization is already available on smartphones, and Google might be using a similar system. Glass also comes equipped with various sensors and gyroscopes – some of which are only partially utilized in this early iteration – and so Google could tap into those to do image-shifting and compensate for head-shake.
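Brin didn’t say how the stabilizer will be implemented, but the gyroscope route he hinted at is easy to illustrate. This toy sketch – entirely our own speculation, not Google’s code – computes the pixel shift that would cancel a small head rotation between frames, given the camera’s focal length in pixels:

```python
import math

def stabilizing_shift(angular_velocity_rad_s, frame_dt_s, focal_length_px):
    """Pixel shift that counteracts a small head rotation between frames.

    For small angles, a rotation of theta radians moves the image by roughly
    focal_length_px * tan(theta) pixels; shifting the frame the opposite way
    cancels the shake. Purely illustrative of gyro-based digital stabilization."""
    theta = angular_velocity_rad_s * frame_dt_s   # rotation during one frame
    return -focal_length_px * math.tan(theta)

# A gentle 0.2 rad/s nod at 30 fps with a ~700 px focal length
# needs each frame shifted by only a few pixels:
shift = stabilizing_shift(0.2, 1 / 30, 700.0)
```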

As you might expect for a device named the “Explorer Edition” and aimed squarely at developers, Glass is still a work-in-progress. Google aims to translate what it learns from this relatively small-scale deployment to the eventual consumer version – tipped to arrive in 2014 – including both design and functionality refinements.

Google Glass Tangerine

We asked Brin about the style decisions Google made along the way, and at which point the aesthetics of Glass came into the process. “We did make some functional mockups,” he told us, “but mostly we made functional but uglier, heavier models – style came after that.”

Style is, when you’re dealing with a device you wear, distinct in a very particular way from design. Even if the work on physical appearance followed function, how Glass sits on the face did not.

Glass-chat with Sergey Brin at Google I/O 2013:

“Very early on we realized that comfort was so important, and that [led to] the decision to make them monocular,” Brin explained. “We also made the decision not to have it occlude your vision, because we tried different configurations, because something you’re going to be comfortable – hopefully you’re comfortable wearing it all day? – is going to be hard to make. You have to make a lot of other trade-offs.”

We’ll have more coverage from Google I/O all week, so catch up with all the news from the epic 3.5hr keynote yesterday!

Glass Video: Controlling AR.Drone with NVIDIA Shield


Sergey Brin talks Glass: Camera stabilizer incoming is written by Vincent Nguyen & originally posted on SlashGear.

Recon Jet hands-on

Announcing a product during a major event like Google I/O takes some real courage, especially when you’re revealing a device that’s extremely similar to one Google itself is headlining with. That’s what Recon is doing with the Jet, a wearable device that’s drawn instant comparisons to Google Glass. This device works with a virtual widescreen display that sits below the left eye of the wearer and utilizes Android as the basis for its user interface.


Recon Jet isn’t ready to be sold at the moment – the version we’re having a peek at here at the Google developer event is a pre-production item – but once it’s ready, it’ll be largely the same as what we’re seeing here. Inside, this device works with a dual-core mobile processor (the name of which we’re not allowed to mention quite yet) powering Android 4.2 Jelly Bean with a custom Recon-made user interface over the top.

You’ll control this machine with a miniature touch-sensitive optical pad that sits on the side of the device near the display. Touching this pad as well as swiping left and right, up and down will allow you access to the device’s abilities and settings.


Inside you’ll be working with GPS, Wi-Fi connectivity for the web, Bluetooth 4.0, and ANT+. With ANT+ you’ll be able to connect to a variety of sports sensors – this device is, after all, made for hardcore sporting enthusiasts. There’s also an HD camera, though its megapixel count isn’t available yet either.


You’ll be working with “gaze detection” for instant access to the machine’s abilities, its display turning off and on when you want or do not want to work with it. Your eyes will decide.


Have a peek at our brief adventure with this device and note that the main aim of revealing it this week is to find developers who want to work with the device’s SDK in advance of its final release. This machine will be released to the public before the end of the year – we’ve confirmed this specifically once again in person with Recon – making its appearance fall well before Google Glass hits the streets in a consumer edition. Pricing and release dates will be coming soon.



Recon Jet hands-on is written by Chris Burns & originally posted on SlashGear.

Larry Page talks simplicity in future technology at Google I/O 2013

This week Larry Page stepped on stage at Google I/O 2013 during the one keynote of the multi-day event, speaking about how the company must continue to create and advance without getting distracted by the negative elements that appear in competition. He mentioned the film The Internship as a good way to get the world out of the mindset that computer science is an odd, untouchable environment: “computer science has a marketing problem.”


He spoke on how technology should be used – specifically, on how technology should be getting out of the way. Page’s mention of how “we’re just scratching the surface of what’s possible and what’s next” led into his assurance that having to turn off multiple smartphones before he stepped onstage was absurd: it should be simpler than that.

“Technology should do the hard work, so you can get on and live your life. We’re only at one percent of what’s possible, and we’re moving slow relative to the opportunity we have.” – Larry Page

Reminding the audience that “software should run everywhere, and easily,” Page made it clear that he’s not a fan of the “trouble” they’ve had with Microsoft in the past – this referring to patent issues and licensing matters of all kinds.

“Every story I read about Google is about us vs some other company, or something else, and I really don’t find that interesting. We should be building great things that don’t exist. Being negative is not how we make progress.” – Larry Page


This chat showed more than what Page spoke about. It was a show of power, or of what might be seen as courage, in Page’s willingness to stand in front of developers and press and take questions – questions, in this case, not in any way pre-screened or filtered.

Page mentioned not just Microsoft but also Oracle – how it wasn’t pleasant to be in court with them. He made it clear that “the right solution to education is not randomness” with regard to Google Search making informed decisions on what people should see in search results. Page’s session was an attempt to show Google as a friendly, real, human group here in 2013.


Larry Page talks simplicity in future technology at Google I/O 2013 is written by Chris Burns & originally posted on SlashGear.

Google I/O 2013 behind-the-scenes preview tour: we’re here!

It’s day zero at Google I/O 2013, the developer event made for and by developer groups and Google to strengthen their world of software, services, and everything in between. SlashGear has gotten the opportunity to step behind the scenes at this event on registration day – that is, the day before everything begins. Here we’ll begin to explore what’s actually at the event with the hard evidence that only comes from on-site investigation right in the midst of the big setup.


The Moscone Center once again plays host to Google I/O with an experience on the first of three floors that’s quite similar to 2012. This year attendees are given their official badges and T-shirts in a center console where Google employees are charged with scanning QR-codes and making sure everyone is who they say they are.


A massive Google I/O sign rests against the main wall of the center with a color-changing I and O, cycling through blues and pinks in a comforting haze. We’re wondering where these massive 3D letters go once the week is over – perhaps a special giveaway on a letter-by-letter basis?


The ground level also holds a pop-up Google Store where attendees can purchase various Google-branded oddities. Bags, clothing, cases, and toys are in effect. This store encourages – as it did in 2012 – users to utilize their Google Wallet to purchase the goods.

On the second floor (or first floor, if you’re German), you’ll find a massive Google+ presence where users are encouraged to sign in with the social network. A deck with Office Hours is set up for developers to learn how they might integrate Google+ into their own software. This area has a series of live hang-out portals which we’re sure will be popping up this week.


This level is dedicated to several Google services and Google partners, each of them set up to present to any developer – or press member, or anyone else in attendance – that wishes to learn more.


BONUS FIND: here you’ll see an unopened box of special-edition Android collectable figures from Dead Zebra. We promise we didn’t peek!

Google Glass has its own section on level two, where users are able to have a peek at the current iteration of the device as well as participate in talks on its future. We’re expecting more information on the headset in the main keynote address in the morning as well as in more than one chat later in the week.


You’ll find Glass being set aside in a massive section all its own on this level, mind you, while items like Google Maps are part of a series of towers up the center of the room. The amount of space Glass gets here says a lot about how important the device is to the company.


Up on the top level of the center, Google has made a massive show of both Android and Chrome. To one side, attendees are greeted by flying Androids and their floor-bound kin in a display not unlike what we saw at Mobile World Congress 2012 and 2011. It seems that this location has become the heart of the Android press event presentation – and perhaps rightfully so.


Turn around and you’ll find a fabulous display – not yet turned on, as it were – of Chrome. One setup shows the highest-end Chrome OS hardware to date in an array that’ll certainly be a sight to behold once it’s switched on.


Three large semi-transparent displays show Chrome in an impressive display that’ll certainly play host to some shows of power for both the web browser and the operating system.


Androids large and small – but mostly large – litter the top level in both complete and nearly complete states. A massive pair of black-framed glasses remain wet with paint less than a day before the main event is set to begin. An eye-bursting array of pink and blue squares blasts in a checkerboard grid above the fray. It’s here that the fun will begin soon – and very soon.


Have a peek at SlashGear’s Google I/O tag portal for more information on this array of Google action taking place Wednesday the 15th of May, 2013, till Friday. If you’re pumped up about any specific session or event, send us a note – we’d be glad to have a peek at it and report back to you, our valued readers!

Pay close attention starting tomorrow morning at 8AM PST in particular – the big keynote event will be covered piece-by-piece right here on SlashGear!


BONUS: We’re on-site with and through Glass as well. Have a peek at a couple videos filmed by Vincent Nguyen with Google’s headset here and let us know what you think of the method and the quality.

Above you’ll find a general layout look at the first level of Google I/O 2013 and below you’ll hear a bit of information from the BBC’s own Rory Cellan-Jones. He’ll let you know exactly what he thinks about the gadget world and how important Glass is to it – stay tuned – #throughglass!


Google I/O 2013 behind-the-scenes preview tour: we’re here! is written by Chris Burns & originally posted on SlashGear.

Google Glass vs HTC One vs Olympus OM-D video shootout

With Google Glass finally in the hands of developers, and HTC’s flagship One smartphone readily available around the globe, it’s time to test their video capture capabilities a bit, while also showing off some cool new technology. Get ready for a video capture comparison between Google Glass, the HTC One, and the Olympus OM-D camera. What makes this even better is that you’re getting an overload of technology, because this video shootout is done while also taking a peek at NVIDIA’s SHIELD controlling the Parrot AR Drone.


So not only are we testing the camera capabilities of these three devices, but you’ll also get an exclusive look at NVIDIA‘s Android game console doubling as a remote as it controls and flies the Parrot AR Drone. Talk about gadget overload. There are a lot of different needs that come to mind when someone decides on a smartphone or camera, and here we’ll show three different options, along with their pros and cons.

Obviously with the HTC One you’ll get full 1080p video capture using the UltraPixel camera on the smartphone – an experience everyone is pretty familiar with these days: flip on the camera and aim your smartphone at the subject. This is convenient, but it’s also where Google Glass takes things up a notch. You’ll enjoy nearly the same video experience, only completely hands-free. Everyone has mixed feelings about Google Glass, but being able to record demo videos, hands-on video, unboxings and more without a tripod, using just Glass, is quite nice.


Google Glass has, in a way, opened up an entirely new way to easily and quickly record video. Yes, you can attach a GoPro to your chest, but this is different. Below you’ll see three videos: the first a quick demo of the NVIDIA SHIELD recorded with Google Glass, the second the same with the HTC One, and the third a back-and-forth video in a different setting comparing Glass to a dedicated camera like the Olympus OM-D.

Google Glass 720p video capture

As mentioned above, the simple, hands-free experience of using Glass is nice, but you’ll instantly notice the video is a little jerky at times. Here’s where there are both pros and cons: Glass video is hands-free, easy, and convenient, but you’ll have to learn to hold that head of yours still. It takes some getting used to, and you might want to use hand gestures instead of turning your head, or moving it at all.

Then with Glass you only get 720p video capture from that 5-megapixel camera, but the quality is pretty excellent. You’ll also notice just how wide the video is compared to the HTC One video below. Pay attention to colors, brightness, and even audio levels.

HTC One 720p video capture

To be fair we recorded this in 720p as well, just like Glass, and right away you’ll notice the stability. Some image stabilization could help Glass, but it would only do so much. In general we’re all familiar with recording video through our phones, and as a result the end product is clear, crisp, and not all over the place. The HTC One handled the changing light outdoors better, and overall the colors and contrast were pretty even. You will, however, notice the audio capture on Glass wasn’t very good; it was much clearer from the HTC One.

Last but not least, the third video we wanted to toss in for good measure has the Olympus OM-D E-M5 camera capturing some moments with NVIDIA SHIELD, then switches to Google Glass. This might be harder to follow: we kept our head (and Glass) on NVIDIA as they explained SHIELD, with our Olympus OM-D on the product. So each time you see SHIELD it’s through a dedicated camera, and the rest is shot with Google Glass.

While this last video isn’t quite something you can “compare,” it does show you another set of options and opportunities with Glass: being able to record the same situation and demo simultaneously, without having three arms. There are obviously advantages and disadvantages to each, but we wanted to give you the video and let you decide.

Does the loss of 1080p capture and slightly lower audio quality throw you off, or does the convenience and endless opportunity to record with Glass make it worth the trade-off? You won’t all be recording with two devices, but what about the father holding a child in one hand, yet still capturing his daughter’s soccer game at the same time? That’s just one example, but a good one.

Okay, okay – just get a tripod and shoot video with that Olympus instead. Like we said, pros and cons. Since Glass isn’t even remotely close to being consumer-ready, we won’t talk about price, but that will obviously be another factor later on. So what do you think? Does the opportunity and ease of recording with Glass give it a leg up on cameras and smartphones – not to mention that you can do it all by voice – or will you still opt for a dedicated camera? These are just a few small examples of many, but we wanted to share them with you all. Let us know what you think in the comments below.


Google Glass vs HTC One vs Olympus OM-D video shootout is written by Cory Gunther & originally posted on SlashGear.

MedRef for Glass adds face-recognition to Google’s wearable

If there’s one thing people keep asking from Google Glass and other augmented reality headsets, it’s facial-recognition to bypass those “who am I talking to again?” moments. The first implementation of something along those lines for Google’s wearable has been revealed, MedRef for Glass, a hospital management app by NeatoCode Techniques which can attach patient photos to individual health records and then later recognize them based on face-matching.


Cooked up at a medical hackathon, the app is still in its early stages, though it does show how a wearable computer like Glass could be integrated into a doctor or nurse’s workflow. MedRef allows the wearer to make verbal notes and then recall them, as well as add photos of the patient and other documents to their records, all without using their hands.

However, it’s the facial-recognition which is arguably the most interesting part of the app. All the processing is done in the cloud, with the current demo using the Betaface API: first, Glass is loaded up with photos of the patient, and then a new photo is compared to the “facial ID” those source shots produce with the matching tech giving a percentage likelihood of it being the same person.
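NeatoCode leans on the Betaface API for the heavy lifting, and we won’t reproduce that service’s actual interface here; but the enroll-then-compare flow reduces to something like this toy sketch, where the feature vectors and the cosine-similarity “percentage” are purely illustrative placeholders:

```python
import math

def cosine_match_percent(enrolled, candidate):
    """Toy face matcher: treat each face as a feature vector and report
    cosine similarity as a percentage. Real services (e.g. Betaface) use
    far richer models; this only illustrates the enroll/compare flow."""
    dot = sum(a * b for a, b in zip(enrolled, candidate))
    norm = (math.sqrt(sum(a * a for a in enrolled))
            * math.sqrt(sum(b * b for b in candidate)))
    return 100.0 * dot / norm if norm else 0.0

# "Enroll" a patient from source photos, then score a new photo against
# the resulting facial ID (all values hypothetical):
facial_id = [0.9, 0.1, 0.4]
new_photo = [0.8, 0.2, 0.5]
score = cosine_match_percent(facial_id, new_photo)
```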

MedRef for Glass video demo:

There are still some rough edges to be worked on, admittedly. In the demo above, for instance, even with just two individuals known to Glass, the face-recognition system can only give a 55-percent probability that it has matched a person. Any commercial implementation would also need to be able to see past bruising or surgery scars, which could be commonplace in a hospital, and evolving over the course of a patient’s stay.


Currently, the MedRef app data is limited to a single Glass. Far more useful, though, would be the planned group access, which would allow, say, multiple surgery staff or a group of doctors to more readily find notes for patients they might not have previously seen. Meanwhile, the form-factor of Glass would leave both hands free, something we’ve seen other wearables companies attempt, such as the HC1 running Paramedic Pro.

Nonetheless it’s an ambitious concept, and one which could come on in leaps and bounds as cloud-processing of face-matching gets more capable. “In the future,” NeatoCode suggests, “on more powerful hardware and APIs, facial recognition could even be written to run all the time.” That could mean an end to awkward moments at conferences and parties where someone remembers your name but you can’t recall theirs.

There’s more detail at the MedRef project page, and the code has been released as an open-source project on GitHub.

SOURCE: SelfScreens


MedRef for Glass adds face-recognition to Google’s wearable is written by Chris Davies & originally posted on SlashGear.

First Google Glass game project StarFinder gets early demo

The first official game for Google Glass has been revealed, a gamification of Google Sky that pits users of the wearable against the heavens. The title, revealed by developers dSky9 this week at Mobile Outlook 2013, challenges Glass wearers to identify different constellations against a time limit, playing against other users to compete for the fastest time.


“Upon launch, the app shows a leaderboard and videogame-like interface, overlaid atop a natural view of the stars and constellations” dSky9 said of the game. “It then challenges users to identify new constellations within a pre-set time limit.”

It’s still early days for the app. The developer’s demo video, which you can watch below, is described as a “pre-visualization” and is effectively a mockup of what players will actually see, complete with iOS icons.

However, it’s a good example of something that a wearable device, like Glass, can do particularly well when you take into account mapping digital graphics over the real-world. Although we’ve seen augmented reality smartphone games before, they’ve generally required the player to hold up their phone and look “through” the digital view from the handset’s camera, a less streamlined approach than glancing at an eyepiece just above your line of sight.

dSky9 hasn’t said when StarFinder might arrive on Glass, though as a native app it will presumably side-step Google’s own Mirror API and require some tinkering on the headset itself in order to run.

[via LivingThruGlass]


First Google Glass game project StarFinder gets early demo is written by Chris Davies & originally posted on SlashGear.

Google Glass in action: the wearable camera

Google Glass isn’t solely about photography, but that’s inevitably the first thing you try out – and the first thing you demonstrate to people when they inevitably ask you questions. Right now there seem to be two approaches to wearables like Glass: either aim to make the headset blend in, avoiding notice and the waves that come with it, or face the privacy and photography concerns people have head-on and open up a dialog about how body-worn tech is going to change things. Maybe the fact that I picked Google’s tangerine-finish Glass Explorer Edition is an indicator, but I’m all for challenging the status quo rather than hoping it will merely blend in.


I picked up Glass a little less than a week ago, and it’s already become a must-grab gadget when I leave the house. A big part of that is how closely it weaves multimedia into your daily life: not just in how you capture moments, but in how you can instantly share them in a way that doesn’t pull you out of them.

The quality of the footage Glass produces is actually pretty good. The camera snaps stills at 5 megapixels and video at up to 720p HD, and while low-light performance pales in comparison to the more adept smartphones, it’s still good enough to make out your subject in the resulting clip. What takes more getting used to is the actual process of filming clips, which only serves to highlight quite how much movement we make without realizing it.

Usually, with a smartphone camera, we’re pretty adept at keeping it still during filming. When the camera is mounted on your head, though, you suddenly realize how seldom we keep our heads still – nodding, turning around, and generally doing things completely at odds with capturing a stable clip. It also took a little time before I learned to plan movements with my eyes before turning my head; otherwise, you dash the frame all around. That also means no nodding in conversations, no looking off to the side to make sure the dog hasn’t run off, and no glancing down at your watch unless you want everybody to look at it too.

If there are compromises to be made, though, there are advantages to Glass-style filming too: the ability to go hands-free when you’re playing with your kids, or to quickly snap off a photo when a friend is doing something goofy, without having to dig into your pocket first and unlock your phone. It’s also surprisingly useful for documenting things as you do them, from the user’s-eye view. Okay, not everybody is going to film themselves opening up and applying a screen protector, like I did with Glass and this Galaxy S 4 kit, but think of giving remote tech support to a distant relative, sitting in as someone shows you their favorite recipe, or piggy-backing into a meeting when you can’t be there in person.

What would make Glass better? I can’t help but imagine what an UltraPixel sensor like the one in HTC’s One could do for low-light shots. In fact, HTC’s Zoe system, pulling together a brief video clip (3.5s versus Glass’ default 10s) and a cluster of stills, seems like an ideal match for a wearable like Glass. That way you’d have more likelihood of picking out the best-framed image of the bunch, with fewer headaches about low-light conditions.

That HTC manages to fit in optical image stabilization is also tempting, since that might help iron out some of the judders we can’t help but make when wearing Glass. Most of all, though, I’d like to see more battery life: right now, with mixed use, I’m seeing around four hours before Glass demands I plug it in, though it’s worth pointing out that it does recharge relatively quickly.

Google Glass battery

I’m also excited about the camera potential when Glass spreads. Right now, it’s a one-way thing, since most people don’t have Explorer Edition units yet. But, when that changes – and I do believe it will, despite the Glass naysayers – it’ll mean we can effectively split ourselves between multiple places, switching between physical and virtual presence as we jump in and out of other Glass-wearers’ headsets.


Google Glass Product shots

Still, as I said, there’s more to Glass than its camera. Next up I’ll be looking at the practicalities of slinging an Android computer to your head, and whether wearable tech really does offer more than just a capable smartphone. Meanwhile, if you have any Glass questions, ask them in the comments!

Google Glass: Sample shots



Google Glass in action: the wearable camera is written by Vincent Nguyen & originally posted on SlashGear.