Forget the iPad Mini, we want Apple’s Google Glass

Apple’s engineers are experimenting with wearable displays that bounce projected light through specially created lenses, and could one day present an iOS rival to Google’s Project Glass, a newly granted patent suggests. The patent, “Peripheral treatment for head-mounted displays”, was filed back in 2006 and granted this week, and tackles what’s perhaps the most difficult element of wearables: making displays in close proximity to the wearer’s eyes look suitably distant without causing eye-strain.

The technology Apple describes is similar to total internal reflection, where light is bounced within a lens from an origin point – such as a micro-projector or LCD/OLED panel – through to the user’s eyes. For instance, one length of lens spanning both eyes could be supplied with different images for the left and right eye from a single miniature display:

“One advantage is that the treatment of the peripheral area of the field of view leads to increased viewing comfort compared to conventional HMDs, and may also lead to a smaller likelihood of the user experiencing “motion sickness” phenomena during extended viewing. Another advantage is that users can make individual adjustments of their HMDs to fit the distance between their eyes. Further advantages include a greater immersive experience, larger virtual field of view, and increased overall image brightness” – Apple patent
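For a rough sense of the optics involved – a generic physics sketch, not Apple’s specific design – total internal reflection kicks in when light inside a dense medium strikes the boundary with a less dense one beyond the critical angle, arcsin(n2/n1). A few lines of Python show why ordinary glass works as a light pipe:

```python
import math

def critical_angle_deg(n_inside, n_outside=1.0):
    """Angle of incidence (from the surface normal) beyond which light
    is totally internally reflected instead of escaping the lens."""
    if n_outside >= n_inside:
        raise ValueError("TIR needs the inner medium to be optically denser")
    return math.degrees(math.asin(n_outside / n_inside))

# Ordinary optical glass (n ~ 1.5) against air: light striking the surface
# at more than ~42 degrees from the normal stays trapped inside the lens,
# which is how a waveguide can carry an image from a projector at the
# temple across to the wearer's eye.
print(f"{critical_angle_deg(1.5):.1f} degrees")  # ~41.8 degrees
```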

Apple argues that traditional wearable displays lead users to suffer eventual discomfort because the virtual image falls short of the field of view (FOV) of the human eye, leading to a shortfall in peripheral vision. It’s an issue Google has attempted to tackle with Glass, offsetting the transparent mono-display up above the eye, so that the wearer must deliberately glance up to access projected data.

Of course, as with any patent, there’s not necessarily a production project at the end of it, though we’d be very surprised if Apple’s engineers hadn’t at least played with wearable prototypes. A previous patent application from the company suggested a wearable iPhone dock for augmented reality use, though this new system could pull content from a remote device – streamed, say, using AirPlay Video from the iPad in your bag.

[via AppleInsider]


Forget the iPad Mini, we want Apple’s Google Glass is written by Chris Davies & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.


Google patent filing would identify faces in videos, spot the You in YouTube

Face detection is a common sight in still photography, but it’s a rarity in video outside of certain research projects. Google may be keen to take some of the mystery out of those clips through a just-published patent application: its technique uses video frames to generate clusters of face representations that are attached to a given person. By knowing what a subject looks like from various angles, Google could then attach a name to a face whenever it shows up in a clip, even at different angles and in strange lighting conditions. The most obvious purpose would be to give YouTube viewers a Flickr-like option to tag people in videos, but it could also be used to spot people in augmented reality apps and get their details — imagine never being at a loss for information about a new friend as long as you’re wearing Project Glass. As a patent, it’s not a definitive roadmap for where Google is going with any of its properties, but it could be a clue as to the search giant’s thinking. Don’t be surprised if YouTube can eventually prove that a Google+ friend really did streak across the stage at a concert.
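To make the idea concrete, here’s a minimal sketch of the frame-clustering approach in Python – the general technique the filing describes, not Google’s actual implementation. It assumes a hypothetical embed_face() function standing in for any face-embedding model, and uses scikit-learn’s DBSCAN so the number of people doesn’t have to be known up front:

```python
# Minimal sketch: cluster face embeddings gathered across video frames so
# each cluster ends up representing one person seen from many angles.
# embed_face() is a hypothetical stand-in for any face-embedding model.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_faces(face_crops, embed_face):
    """face_crops: face images cropped from many frames of a video."""
    embeddings = np.array([embed_face(crop) for crop in face_crops])
    # DBSCAN groups nearby embeddings without knowing the number of people
    # in advance; angle and lighting changes just add spread that the eps
    # radius has to tolerate.
    labels = DBSCAN(eps=0.5, min_samples=3).fit_predict(embeddings)
    clusters = {}
    for crop, label in zip(face_crops, labels):
        if label != -1:  # -1 is DBSCAN's "noise" label for unmatched faces
            clusters.setdefault(label, []).append(crop)
    return clusters  # one entry per person; attach a name to each cluster once
```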

Google patent filing would identify faces in videos, spot the You in YouTube originally appeared on Engadget on Tue, 03 Jul 2012 15:11:00 EDT. Please see our terms for use of feeds.

Permalink | Source: USPTO

Don’t Doubt Google’s People Skills

Google IO opened with a bang last week, spilling Jelly Beans, cheap tablets, augmented reality and more, but for all the search giant knows about what we’re looking for, is it still out of touch? After the buzz of Google Glass and its BASE jumping entrance – thoroughly milked the following day by Sergey “Iron Man” Brin – attendees have been adding up what was demonstrated and questioning Google’s understanding of exactly how people use technology. Geeks getting carried away with “what can we do” rather than “why would we do it” is the common refrain, but make no mistake: everything Google showed us is rooted in solid business strategy.

Gizmodo has led the charge in questioning Google’s social skills, wondering out loud whether Googlers are in fact “still building for robots” and demonstrate “a gaping disconnect between the way data geeks and the rest of us see the world.” I’ll admit, watching the live stream of the IO opening keynote, I caught myself wondering exactly how much of what was being shown I’d ever actually use myself.

There were, by general consensus, three questionable areas: Google+ Events, the Nexus Q, and Google Glass.

Events are, certainly, only useful to you if your social network is also on Google+. The platform’s popularity among geeks and early-adopters of a certain inclination – usually orbiting around disliking Facebook and showing various degrees of Twitter apathy – has meant it’s a good place to make new friends (as long as you like, well, geeks and early-adopters of a certain inclination) but not generally a place to find existing ones.

That’s something Google needs to address, and adding Events is a relatively easy, low-cost way of doing so. Think about it: if you get an email notification saying that someone you know has invited you to a party, and you need to sign into Google+ in order to read and respond to it, you’re probably more likely to do so than if you simply see “+You” at the top of the Google homepage. It’s evidence of an existing relationship: you won’t just be wandering into a room full of strangers.

On top of that, you have the contentious – and awfully named – Party Mode, something that perhaps most won’t use but which might find a little favor among the geekier users. Again, the key part is that you don’t have to use Party Mode in order to get value out of Google+ Events; Google just added it in so that, if you want, you can better document your gathering in the same place you organized it beforehand.

Then there’s the Nexus Q. Google’s launch demonstration for the Android-based streaming orb was an awkward low-point of the keynote, spending too long on the obvious – okay, it gives you a shared playlist on multiple devices, we get it – and not enough time putting it into context with Google’s future plans and other platforms like Google TV. Again, though, it’s the first step in a process, one that begins with a perfectly standard home streamer and Sonos alternative.

On that level, there are some advantages. Yes, you might not necessarily sit around with friends each tapping at your Nexus 7 to put together the very best playlist ever created, but it’s a lot better set up to handle impromptu control than, say, Sonos is. Communal control with Sonos is a difficult one: do you ask everyone to download the Sonos controller app, then pair them with your network, or do you leave your iPad or iPhone unlocked (complete with access to your email, bookmarks, documents, etc…) so that they can dip into your music collection? Or, do you have a special device solely for party controller use?

“The Nexus Q is Google’s gateway to your TV screen”

In the longer term, though, Google’s motivation is the Nexus Q as a gateway to your TV screen. That’s what, if you recall, Google TV was meant to be – a way to expand Google’s advertising visibility from the desktop browser, smartphones and tablets, to the big-screen in your lounge – but stumbles and hiccups scuppered those plans. One of the most common complaints of first-gen Google TV was simply how complex it was; in contrast, the Nexus Q looks stunning, and concentrates on doing (at the moment) just a little. But, as a headless Android phone, there’s huge potential for what it could be next – console, video streamer for Netflix and Hulu, video conferencing system – after Google has got its collective hands on your HDMI input.

Of the three, though, it’s Google Glass that’s the hardest sell to the regular user. That’s not because it’s difficult to envisage uses for, but because of the price. Still, it’s not for the end-user yet: Google has given itself eighteen months or more to reach that audience, and who knows what battery, processor, wireless and design advantages we’ll have by then?

Aspects developed on Glass will undoubtedly show up in Android on phones, and again, the mass market benefits. There are certainly elements of persistent connection and mediated reality that apply even in devices without wearable displays. If anything, Glass is the clearest demonstration of Google’s two-tier structure: one level for regular people, and another for the geeks and tinkerers. The regular crowd eventually benefit from what the geeks come up with, as it filters down, has its rough edges polished away, and becomes refined for the mass-market.

“Google is a monolithic company, sure, but it’s filled with geniuses who want to make your life easier through technology” is how Gizmodo sees the IO announcements: intentions that are fundamentally altruistic, but misguided. In reality, everything Google showed has its roots in business and platform extension.

Google isn’t Apple: it doesn’t push a one-size-fits-all agenda. That’s not necessarily a bad approach, mind; Apple’s software is consistent and approachable, doesn’t suffer the same fragmentation issues as, say, Android does, and means that iOS devices generally do what’s promised on the tin. What Google knows is its audience or, more accurately, audiences, and so everything at IO was stacked in different levels to suit those varying needs. Some people don’t want to be limited by the ingredients on the side; they want to mix up their own meal, and IO is all about fueling that. Sometimes it takes a little more time to think through the consequences – and sometimes Google does a shoddy job of helping explain them – but there’s most definitely a market out there for them.


Don’t Doubt Google’s People Skills is written by Chris Davies & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.


Will Google Glass Help Us Remember Too Well?

When Google sent BASE jumpers hurtling from a blimp as part of the first-day Google I/O keynote presentation, I was barely impressed. The jumpers were demonstrating the Project Glass wearable computer that Google is developing, and which I and just about all of my friends are lusting over. I had seen plenty of skydivers jumping with wearable cameras strapped to them. Then the Googlers landed, and another team started riding BMX bikes on the roof of the Moscone Center, where the conference is being held. Yawn. Finally, climbers rappelled down the side of the building. Ho-hum. The point seemed to be that Google Glass was real, and that the glasses would not fall off your face as you fell onto San Francisco from a zeppelin. But then Google showed something that blew my mind.

It was a simple statement. Something to the effect of ‘Don’t you hate it when you see something cute that your kids are doing and you say to yourself: I wish I had a camera.’ Sounds innocuous enough, but that one phrase changed everything, and it may shape more than the future of computing. It may shape memory as we know it.

Until now, I had imagined Project Glass as a sort of wearable cell phone. Where phones have fallen short of delivering a great augmented reality experience, a head-mounted display with a translucent screen might fare much better. Augmented reality dramatically improves navigation, local search, and even social functions. Project Glass seems like the first product in a broad future of wearable computing products.

But even as I have drooled over Glass in the past, it never truly occurred to me that Google might mean for Project Glass to record everything. EVERYTHING. Your entire life. Before we think about the implications, let’s discuss why this is completely possible.

How much data would it take to record a life? That depends on a lot of variables. Are you recording in 1080p? 4K? What audio bitrate? Audio and video, or location data, too? Do you record the moments when you are watching your own recordings? When you’re driving on your commute? Watching a movie or TV?

Let me offer a ballpark figure: 3.6 Petabytes. That’s my educated guess for the storage it would take to record every waking moment of my life. Forty-five Terabytes a year for 80 years. That’s based on a ‘high-profile’ video recording rate of 15 Mbps, and 6 hours of sleep every night.
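That arithmetic is easy to check. Here’s the same back-of-the-envelope calculation as a few lines of Python, using the assumptions above:

```python
# Back-of-the-envelope storage estimate for recording every waking moment,
# using the article's assumptions: 15 Mbps video and 6 hours of sleep a night.
BITRATE_BPS = 15e6                  # "high-profile" recording rate
WAKING_SECONDS_PER_DAY = 18 * 3600  # 24 hours minus 6 hours of sleep

bytes_per_day = BITRATE_BPS / 8 * WAKING_SECONDS_PER_DAY
tb_per_year = bytes_per_day * 365 / 1e12
pb_per_lifetime = tb_per_year * 80 / 1000

print(f"{tb_per_year:.0f} TB per year")           # ~44 TB: the "forty-five" figure
print(f"{pb_per_lifetime:.1f} PB over 80 years")  # ~3.5 PB: the lifetime ballpark
```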

Is that an insane amount of storage for anyone to possess? Not for long. I have on the tip of my finger right now a tiny microSD card with a 64GB capacity. Yesterday, this card did not exist, and a 32GB card would have cost a couple hundred dollars. Today, a 32GB card can be had for about $1 per Gigabyte. Tomorrow, we’ll have 128GB cards, and the microSDXC standard tops out at 2TB. Within 10 years, I would bet that a Petabyte of storage, which is a million Gigabytes, will be completely affordable, either in a compact form or via a remote (cloud) storage host.

So, by the time my 3-year-old is in high school, he’ll have access to the technology to record his entire life. I cannot begin to fathom the perspective he would have. It would change everything.

“When we can review a video of every memory, will that destroy nostalgia?”

Of course there are privacy concerns, and legal issues. But what has me curious at the moment are the ways such technology will shape nostalgia. I love nostalgia. I’m a big fan. Nostalgia is one of the most fun games we can play with our own lives. When we can reference a first-person video of every memory we have, will that destroy the value of nostalgia? Will the term become meaningless?

Think of your earliest memory. In your mind, how do you see yourself? Do you see your arms and hands reaching out in front of you? Or do you imagine yourself fully formed, in the third person? It’s a strange phenomenon that we remember ourselves from outside our own bodies. But technology like Project Glass may change the way we approach even our own memory storage. Is there a biological imperative, a psychological reason why we imagine ourselves this way? Is the disconnect necessary? I don’t know. But if I’m forced to imagine myself only in the first person, I know it will change the way I remember my entire life.

I’ve also heard the question raised of whether we will continue to remember at all. Certainly memory is an evolutionary trait. We are not likely to cease all memory function in a few decades simply because a technology helps us record everything we see and hear. But memory is also a learned skill. We learn to categorize and associate our memories. We learn what is useful for long-term storage, and what is best forgotten. Our mind has defense mechanisms in place to protect us from painful memories, and emotional triggers to spotlight and gild our best moments. What happens when we reduce all of these moments to a high-definition video played back on a computer screen?

One of my favorite moments from my youth is the night I met my first long-term girlfriend. We were at a party, but outside on the street, sitting on the spoiler of my car. It had just started to rain, and we were covering ourselves with a small foam floor mat that my father used in the aerobics classes he took. We talked for a few hours and really hit it off. I don’t remember anything we said, but I remember that my friends inside were impressed that I had done so well.

I hope that I will always have in my mind the feelings associated with that night. But if I played back the conversation, I’m sure it would destroy the memory. It was drivel, and melodramatic high school prattling, and the most obvious flirting nonsense. Outside of my own head, it would be embarrassing and cringe-inducing. It would be evidence against me.

Isn’t that adolescence in a nutshell? And early adulthood? And, well, all of life? Life is embarrassing. That’s why embarrassment makes us laugh so hard, because we can relate. We’re all horrible actors on our own stage. While I love the idea of Project Glass, and I can certainly see the advantage of having a camera recording all of those lost moments, there are too many moments that should stay lost. I would rather have them rattling around in my head than on my TV screen. I’d rather see myself from the outside, or remember the event from deep within, than have an accurate depiction of what my arms were doing, and how I sounded as the words spilled out of my gullet. I hope we don’t lose the ability to get it wrong, somehow, because memory is so much more interesting when it’s imperfect.


Will Google Glass Help Us Remember Too Well? is written by Philip Berne & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.


Google Glass Sessions teach us why we need augmented reality

Google knows that it’ll take some education before we’re all wearing Google Glass headsets, and so the company has kicked off what it’s calling Glass Sessions: slices of real life augmented with Glass. First up is persistent video and camera functionality from the perspective of a parent, with Glass being used to capture fleeting moments and share milestones across continents. Check out the video after the cut.

Laetitia Gayno, the mom in the video, is the wife of a Google employee, and shows how she uses Glass to snap photos of her baby without the camera getting in the way. Meanwhile, she can use Google+ Hangouts for group video calls with her family back in France.

Google’s focus with Glass has, in public demonstrations at least, been the photography elements of the wearable. That’s arguably the easiest thing to show off; without seeing what it’s actually like to have a small virtual display “hovering” just above your normal line of vision, justifying the value of Glass’ other mediated reality abilities is considerably trickier.

Last week, Google began taking preorders for Google Glass Explorer Edition, the first generation of developer devices, which are expected to begin shipping in early 2013. Priced at $1,500 – though with the expectation that the consumer product will be considerably cheaper – they’ll give developers the opportunity to experiment with bringing their apps to persistently-connected users.


Google Glass Sessions teach us why we need augmented reality is written by Chris Davies & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.


Recon Instruments MOD HUD Hands-on

Earlier today we mentioned a little about Recon Instruments and their new MOD heads-up display technology. They offer something similar to Google’s Project Glass, only it’s available today for just $399. It could be your ultimate companion while skiing or snowboarding, and much more is planned for the future.

Imagine getting directions while snowboarding down a mountain – plus weather conditions, the time, your speed in MPH, and much more. That is exactly what you can do with Recon Instruments’ new MOD HUD. What makes this even better is that it’s available now – not in 2013 – and they’ve just dropped their Android SDK so developers can start working on companion apps. Here’s a short video explaining the product a little better:


Here at Google IO, Recon has released their developer SDK so those interested can start building apps to accompany them down the mountain using Recon’s HUD. These apps connect to the HUD via a smartphone or tablet, and the options are limitless. When we talked with Recon’s Tyson Miller, he explained that they are working with multiple goggle companies, Smith among them, to integrate the product into wearable units. The device in the video and pictures below is just a prototype, as Recon only sells the HUD, not the goggles themselves.

Recon also states that while this is currently only available for ski and snowboard goggles, they plan to bring multiple offerings to market for other types of activity. Once developers dive into the SDK, the options for apps and uses will greatly increase. Developers can get started here, and the Recon Instruments MOD is shipping now for just $399.



Recon Instruments MOD HUD Hands-on is written by Cory Gunther & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.


Can’t wait for Google Glass? Recon’s MOD Live has you covered today

Google Glass may be grabbing the headlines at IO this week, but with Explorer Edition developer devices not shipping until 2013, it might be worth looking elsewhere for your head-up display fix. Recon Instruments has just the thing, with a new Android SDK for the MOD Live eyepiece that will allow developers to create their own applications that float in the user’s eyeline.

The MOD Live is a complex little piece of kit, with various sensors integrated: an altimeter, barometer, 3-axis accelerometer, 3-axis gyro, 3-axis magnetometer, and a temperature sensor. There’s also Bluetooth Smart Ready (aka 4.0) and GPS, a d-pad for navigation, and of course the eyepiece itself.

Power comes from an 800MHz TI OMAP3 Cortex-A8 processor paired with 256MB of RAM and 512MB of flash storage, of which 180MB shows up as mass storage. “Because our SDK is pretty much completely the Android SDK, creating an HUD app takes about the same effort as a regular Android app,” Recon claims.

Support for basic HTTP pull and push on the MOD Live is due to be added in the next week or so, and Recon plans to give away ten free eyepieces (and subsidize a further hundred at 50 percent) to encourage developers to jump onboard. Normally, the MOD Live is priced at $399.99.

You can sign up to be considered as a developer here, and see an example of an app – detailed here – intended for skiers in the video below.


Can’t wait for Google Glass? Recon’s MOD Live has you covered today is written by Chris Davies & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.


Google Glass will reach consumers in 2014, says Google co-founder

We already knew that developers would get their hands on the Explorer Edition of the Google Glass device in six months or so (for $1,500), but Sergey Brin has confirmed to Bloomberg at Google IO that consumers will have to wait until 2014 before they can buy the hip new augmented reality device. Introduced as Google Project Glass, the glasses are an augmented reality module that displays information on top of what the user sees (Terminator-style). The goal is a near hands-free experience (there’s a trackpad) driven by voice commands. Google Glass can also record what the user sees. (more…)

By Ubergizmo.

This Is the Secret of the Google Glasses Skydiving Demonstration [Video]

It seems that there are some people who still just don’t get it, so here’s a photo to illustrate. This image shows what Google’s whole skydivers-with-eyeglass-webcam demonstration is all about: antennas. Yes, it’s a great demo for wireless Cisco routers and antennas.

Sergey Brin talks Project Glass at IO 2012

Finishing up the keynote here, Google’s own Sergey Brin is showing us a bit more of Project Glass. In case you missed it, yesterday they jumped out of a blimp and landed right on top of Moscone West here in San Francisco for Google IO, and managed to show it all live as a Google+ Hangout thanks to Google Glass.

We are still slowly learning more and more about Project Glass, and have even pre-ordered a few of our own, so that’s exciting. What is about to unfold is a live Google+ Hangout and skydive, all captured again with Project Glass, this time showing us how it’s all done right on the live feed. Sergey is on the roof, trying not to get hit by skydivers, and explaining the entire process. Since Google’s Project Glass doesn’t have 3G/4G capability, they are streaming via a connected WiFi device.

In case you didn’t hear yesterday, Google will be allowing IO attendees a chance at an early look and developer kit sometime next year, to the tune of around $1,500. We were quick to order a few, but will have to wait a little longer before getting our hands on these impressive augmented reality glasses. This same event took place yesterday and you can catch that video from the links below.



Sergey Brin talks Project Glass at IO 2012 is written by Cory Gunther & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.