Google patents Project Glass motion-based theft detection, locks up if it feels ‘unnatural’ movement


We know that you’re never gonna take your Google glasses off, but if some nefarious lout feels differently, the boys and girls in Mountain View’s X lab have got you covered. The company has patented a system whereby the device can identify “unnatural” movements and lock the headset if it feels the violent motion of being wrenched from your face. Even better, while your would-be assailant is making off with the $1,500 gear, it’ll be contacting the authorities to ensure that they can’t get far with their ill-gotten HMD. If nothing else, we won’t worry as much when pre-order customers 782 and 788 go out of an evening.
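The filing doesn’t disclose the actual detection logic, but the core idea — watch accelerometer data for a spike no normal head movement would produce, then lock the device — can be sketched roughly like this. The threshold and all names here are illustrative assumptions, not details from the patent:

```python
# Hypothetical sketch of spike-based theft detection: lock the headset
# when acceleration magnitude exceeds a level no ordinary head movement
# would produce. The 4 g threshold is an assumed value.
import math

UNNATURAL_G = 4.0  # assumed cut-off, in g


class TheftGuard:
    def __init__(self, threshold_g=UNNATURAL_G):
        self.threshold_g = threshold_g
        self.locked = False

    def feed(self, ax, ay, az):
        """Process one accelerometer sample (components in g)."""
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if magnitude > self.threshold_g:
            self.locked = True
            self.alert_authorities()
        return self.locked

    def alert_authorities(self):
        # Placeholder: the patent describes contacting the authorities;
        # a real device would report its location over a data connection.
        pass


guard = TheftGuard()
guard.feed(0.0, 1.0, 0.1)   # normal wear: roughly 1 g from gravity
assert not guard.locked
guard.feed(3.5, 4.2, 1.0)   # a violent wrench off the face
assert guard.locked
```

A production system would of course need something subtler than a single threshold (jumping or jogging shouldn’t brick the headset), but the sketch captures the shape of the idea.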


Google patents Project Glass motion-based theft detection, locks up if it feels ‘unnatural’ movement originally appeared on Engadget on Tue, 17 Jul 2012 10:15:00 EDT. Please see our terms for use of feeds.


Google Glass grabs developer outreach chief from Gmail

Google’s Glass wearable division has poached itself a new Community Manager, with former Gmail community lead Sarah Price jumping from email to augmented reality. Price’s new role, confirmed on Google+, will see her engage with bleeding-edge Glass developers, who stand to get their hands on the first Explorer Edition in early 2013, as Google attempts to encourage coders to come up with apps suitable for a wearable display.

The Google Glass Explorer Edition went up for sale at Google IO, priced at a hefty $1,500 apiece. Deliveries won’t begin until the beginning of next year, however, with a consumer version expected to drop within twelve months of that happening.

Exact details on Price’s new responsibilities haven’t been given, but the new community lead is already fielding questions from keen developers, particularly on Explorer Edition availability outside of the US. Asked whether Google will be accepting international pre-orders any time soon, Price would only confirm that Google is looking into it.

“There are a lot of developers inside of the US who want to get their hands on a pair, too” Price told one developer. “Right now we are still working with the appropriate regulatory bodies, and we aren’t ready to send Glass outside of the US.”

Price is also coy on Glass’ specific technical specifications, though recently published patent application documents suggest that Google is planning a broad range of control and interaction systems that includes touch, voice commands and more. However, Glass could also integrate some degree of AI, similar to Google Now, using context to automatically offer up information the system believes is more relevant to the current circumstances.


Google Glass grabs developer outreach chief from Gmail is written by Chris Davies & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.


A Man Got His Ass Kicked for Wearing Digital Eye Glasses That Looked Like Google Glasses [Google]

Google’s glasses may allow you to record images like nothing that’s come before, but they will also make you stand out. Like Dr. Steve Mann, who’s been developing wearable computing systems in the same vein as Project Glass since 1999. Sadly, though, he got beaten up in a McDonald’s for wearing his own techno-glasses.

Broken Glass: Father of wearable computing allegedly assaulted

Wearable computing pioneer Steve Mann has allegedly been attacked by employees of a French McDonald’s after sporting his own version of Google’s Glass AR headset, with the EyeTap eyepiece grabbing snapshots of those involved. Mann, who led MIT’s Wearable Computers group and has been exploring mediated reality technologies for several decades, claims that while on holiday in Paris with his family he was challenged by staff at the fast food chain, who ripped up his medical documentation about the headset and then attempted to pull it from his head. Mann’s system is “permanently attached and does not come off my skull without special tools.”

Update: Official McDonald’s statement after the cut.

According to Mann’s account, he had taken documentation explaining this fact – and the research behind the wearable computer – to France in case of any issues with museum, restaurant or other staff. Several staff at the McDonald’s restaurant apparently reviewed the information at different times, with the first employee suggesting that there was no problem with the researcher wearing the headset.

“Because we’d spent the day going to various museums and historical landmark sites guarded by military and police, I had brought with me the letter from my doctor regarding my computer vision eyeglass, along with documentation, etc., although I’d not needed to present any of this at any of the other places I visited (McDonald’s was the only establishment that seemed to have any problem with my eyeglass during our entire 2 week trip)” Steve Mann

However, after ordering food, three further employees from the restaurant reportedly approached Mann and tried to pull the EyeTap headset off of him. He attempted to placate them by showing them the same documentation that had satisfied their colleague, but they destroyed it. “After all three of them reviewed this material, and deliberated on it for some time, Perpetrator 2 angrily crumpled and ripped up the letter from my doctor” Mann writes. “My other documentation was also destroyed by Perpetrator 1.”

Mann’s own EyeTap system captured what could end up being evidence. The wearable is designed to retain transitory images, which would normally be buffered and then deleted, in the case of physical damage, and when the McDonald’s employee supposedly grabbed at the headset, it triggered this recording function.

“The computerized eyeglass processes imagery using Augmediated Reality, in order to help the wearer see better, and when the computer is damaged, e.g. by falling and hitting the ground (or by a physical assault), buffered pictures for processing remain in its memory, and are not overwritten with new ones by the then non-functioning computer vision system … As a result of Perpetrator 1’s actions, therefore images that would not have otherwise been captured were captured. Therefore by damaging the Eye Glass, Perpetrator 1 photographed himself and others within McDonalds” Steve Mann
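The failure mode Mann describes — a rolling buffer whose overwriting simply stops once the vision pipeline dies, preserving the last frames before the fault — can be sketched like this. Class and method names are illustrative, not from Mann’s design:

```python
# Sketch of "buffer freezes on damage": transitory frames normally cycle
# through a ring buffer, but a fault stops the overwriting, so the most
# recent frames before the fault survive in memory.
from collections import deque


class FrameBuffer:
    def __init__(self, capacity=8):
        self.frames = deque(maxlen=capacity)  # ring buffer of recent frames
        self.damaged = False

    def capture(self, frame):
        if self.damaged:
            return  # non-functioning vision system: nothing is overwritten
        self.frames.append(frame)

    def on_fault(self):
        # e.g. an impact detected; retain the buffer exactly as it stands
        self.damaged = True


buf = FrameBuffer(capacity=3)
for f in ["f1", "f2", "f3", "f4"]:
    buf.capture(f)        # "f1" rolls out as "f4" arrives
buf.on_fault()
buf.capture("f5")         # discarded: pre-fault frames are preserved
assert list(buf.frames) == ["f2", "f3", "f4"]
```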

Mann attempted to contact McDonald’s but has been unsuccessful at speaking to someone about the incident. “I’m not seeking to be awarded money” he writes, “I just want my Glass fixed, and it would also be nice if McDonald’s would see fit to support vision research.”

It’s not the first time Mann’s mediated reality technology has caused problems. Airport security damaged some of the equipment back in 2002 after allegedly forcibly removing it and causing over $50,000 of damage. With Google’s Glass headed to market in the not-too-distant future, incidents where wearable computing bumps up against privacy concerns look likely to increase as society catches up with technology.

Update: McDonald’s has so far given the following comment: “We take the claims very seriously, are in process of gathering info & ask for patience until all facts are known.” We’ve reached out to the company for more details.

Update 2: McDonald’s has given SlashGear the following statement:

“We strive to provide a welcoming and enjoyable experience for our customers when they visit our restaurants. We take the claims and feedback of our customers very seriously. We are in the process of gathering information about this situation and we ask for patience until all of the facts are known.”

More on Steve Mann and his AR research here.

[via Avram Piltch]


Broken Glass: Father of wearable computing allegedly assaulted is written by Chris Davies & originally posted on SlashGear.


SlashGear Morning Wrap-Up: July 16, 2012

Welcome to the start of another fabulous week of tech, gadgets, entertainment, and everything in-between here on SlashGear! Start your morning off right with a foretelling of a tablet future for Nokia, complete with a hinge for folding over! Next have a peek at the Sony Xperia NXT series as it’s available in the USA this week! If you’re working with Skype, you might want to keep an eye on this message leak bug that’s been creeping around your software’s insides.

The folks at Huawei are getting set up to offer a wider range of storage products for the near future. It’s seeming more and more likely that Steve Ballmer will be revealing the details of Office 15 later today. Have a peek at the Gauntlet keyboard-glove that’ll be perfect for your Google Glass early next year.

You can now order the Raspberry Pi without restriction, with pre-orders shipped and the online store ready for action. There’s a bit of talk surrounding iPad challengers this morning, starting with a new 10-inch Kindle Fire. Those of you who own the HTC Desire HD might be out of luck as far as Ice Cream Sandwich goes.

The folks at Activision and Marvel Comics have finally made way for a Deadpool-centric game that’ll be released later this year. The relationship between Anonymous and WikiLeaks has been detailed in part. It appears that SpaceX is ready to complete its Dragon design review – kids in space next year!

Al Capone’s 1928 Cadillac will be heading to auction very soon. The New York Times has reported that they’ve got sources confirming the iPad Mini. Google’s Project Glass has had its artificial intelligence and controls detailed in full in a patent application. Microsoft has sold its share of MSNBC.com in its entirety to Comcast.

The Nokia Lumia 900 has had its AT&T price cut in half, and we can expect Apple’s OS X Mountain Lion to launch on the 25th of July.


SlashGear Morning Wrap-Up: July 16, 2012 is written by Chris Burns & originally posted on SlashGear.


Google Glass controls and Artificial Intelligence detailed

Google’s cautious approach to allowing people to play with Project Glass means the UI of the wearable computer is something of a mystery, but a new patent application could spill some of the secrets. The wordy “Head-mounted display that displays a visual representation of physical interaction with an input interface located outside of the field of view” details a system whereby a preview of the controls of a wearable – such as the side-mounted touchpad on Google Glass – are floated virtually in the user’s line of sight. The application also suggests Glass might maintain its own “self-awareness” of the environment, reacting as appropriate without instruction from the user.

In Google’s patent application images, the side control is a 3×3 grid of buttons – which could be physical or virtual, with a single touchpad zoned to mimic nine keys – and the corresponding image is projected into the eyepiece. Different visual indicators could be used, Google suggests – different colored dots or various shapes – and the touchpad could track proximity of fingers rather than solely touch, with different indicators (or shades/brightness of colors) used to differentiate between proximity and actual contact.

“As the wearer’s hand gets closer to a pad of the touch pad, a dimmed colored dot 338 [in diagram below] may become responsively brighter as shown by the colored dot 342. When the touch pad recognizes a contact with pad 340, the pad 340 may light up entirely to indicate to the wearer the pad 340 has been pressed” Google
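The behaviour described there — a dot that brightens as a finger approaches its pad and a pad that lights fully on contact — can be modelled with a simple mapping. The sensing range and brightness levels below are assumptions for illustration, not figures from the application:

```python
# Sketch of the proximity-to-brightness mapping: each pad's indicator dot
# sits dimmed at rest, brightens as a finger closes in, and lights fully
# on contact. The 30 mm sensing range is an assumed calibration.
MAX_RANGE_MM = 30.0  # assumed proximity-sensing range


def dot_brightness(distance_mm, touching):
    """Return brightness in [0.0, 1.0] for one pad's indicator dot."""
    if touching:
        return 1.0  # the pad "lights up entirely" on contact
    if distance_mm >= MAX_RANGE_MM:
        return 0.1  # dimmed resting state, finger out of range
    # Brighten linearly as the finger approaches.
    closeness = 1.0 - distance_mm / MAX_RANGE_MM
    return 0.1 + 0.8 * closeness


assert dot_brightness(40.0, touching=False) == 0.1   # out of range: dim
assert dot_brightness(0.0, touching=True) == 1.0     # contact: full
assert dot_brightness(15.0, False) > dot_brightness(25.0, False)
```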

However, that’s only a basic interpretation of Google’s ideas. The company also suggests that a more realistic use of virtual graphics could float a replica of the user’s hand – even mimicking physical characteristics such as their actual fingernails, wrinkles and hairs, which Google reckons would make it more believable – in such a way that the control pad feels like it’s actually in front of them.

“This feedback may “trick” the wearer’s brain into thinking that [the] touch pad is directly in front of the wearer as opposed to being outside of their field of view. Providing the closest resemblance of the wearer’s hand, the virtual mirror box may provide a superior degree of visual feedback for the wearer to more readily orient their hand to interact with touch pad” Google

Button control isn’t the only strategy Google has for interacting with Glass – there’s the possibility of speech, input from the camera, and a wireless keyboard of some sort, among other things – but the headset would be able to prioritize UI elements depending on context and/or wearer preference. “In the absence of an explicit instruction to display certain content,” the patent application suggests, “the exemplary system may intelligently and automatically determine content for the multimode input field that is believed to be desired by the wearer.”

Glass could also work in a more passive way, reacting to the environment rather than to direct wearer instructions. “A person’s name may be detected in speech during a wearer’s conversation with a friend, and, if available, the contact information for this person may be displayed in the multimode input field” the application suggests, useful for social gatherings or business meetings, while “a data pattern in incoming audio data that is characteristic of car engine noise” – which could even potentially differentiate, from the audio characteristics, the user’s own car – could trigger a navigation or mapping app based on the assumption that they’re likely to be traveling somewhere.
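Those passive triggers amount to a set of context detectors that each nominate content for the input field. A minimal sketch, with an assumed signal format and an invented contact list (nothing here comes from the application itself):

```python
# Sketch of context-triggered content selection: recognised signals come
# in as a dict, and the first matching rule nominates content for the
# multimode input field. The contact list is invented for illustration.
CONTACTS = {"alice": "alice@example.com"}


def choose_content(context):
    """Pick content from context signals, e.g.
    {"speech": "...", "audio": "car_engine"}; None if nothing triggers."""
    speech = context.get("speech", "")
    # A person's name detected in conversation -> surface their contact card.
    for name, details in CONTACTS.items():
        if name in speech.lower():
            return f"contact card for {name}: {details}"
    # Engine-noise signature -> assume travel, offer navigation.
    if context.get("audio") == "car_engine":
        return "navigation app"
    return None  # no passive trigger; wait for explicit instruction


assert choose_content({"speech": "Have you met Alice?"}) == \
    "contact card for alice: alice@example.com"
assert choose_content({"audio": "car_engine"}) == "navigation app"
assert choose_content({"speech": "nothing notable"}) is None
```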

Input from various sensors could be combined, too; Google’s application gives the example of predicting what interface methods are most likely to be used based on the weather. For instance, while the touchpad or keypad might be the primary default, if Glass senses that the ambient temperature is so low as to suggest that the wearer has donned gloves, it could switch to prioritizing audio input using a microphone.
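The gloves example boils down to a tiny decision rule; a sketch, with an assumed temperature cut-off (the application doesn’t give one):

```python
# Sketch of weather-based input prioritisation: if ambient temperature
# suggests the wearer is probably gloved, prefer the microphone over
# the touchpad. The 5 °C cut-off is an assumed value.
GLOVE_TEMP_C = 5.0


def preferred_input(ambient_c, default="touchpad"):
    """Pick the primary input method from ambient temperature."""
    if ambient_c <= GLOVE_TEMP_C:
        return "microphone"  # gloved fingers defeat a capacitive touchpad
    return default


assert preferred_input(-3.0) == "microphone"
assert preferred_input(20.0) == "touchpad"
```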

“For example, a wearer may say “select video” in order to output video from camera to the multimode input field. As another example, a wearer may say “launch” and then say an application name in order to launch the named application in the multimode input field. For instance, the wearer may say “launch word processor,” “launch e-mail,” or “launch web browser.”” Google
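The quoted commands suggest a simple prefix grammar; a hypothetical parser for them follows (the application registry and return format are invented for illustration):

```python
# Sketch of parsing the "select video" / "launch <application>" voice
# commands quoted from the application. The set of launchable apps is
# illustrative.
APPS = {"word processor", "e-mail", "web browser"}


def handle_command(utterance):
    """Map a recognised utterance to an action tuple."""
    words = utterance.lower().strip()
    if words == "select video":
        return ("route", "camera")  # camera feed -> multimode input field
    if words.startswith("launch "):
        app = words[len("launch "):]
        if app in APPS:
            return ("launch", app)
    return ("unrecognised", utterance)


assert handle_command("Launch E-Mail") == ("launch", "e-mail")
assert handle_command("select video") == ("route", "camera")
assert handle_command("launch solitaire") == ("unrecognised", "launch solitaire")
```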

Although the prototype Glass headsets we’ve seen so far have all stuck to a similar design – an oversized right arm, with a trackpad, camera and single transparent display section hovering above the line-of-sight – Google isn’t leaving any design possibility unexplored. Future Glass variants could, the patent application suggests, provide displays for one or both eyes, built into a pair of eyeglasses or attached as an add-on frame. The displays themselves could use LCD or other systems, or even a low-power laser that draws directly onto the wearer’s retina.

Meanwhile more than one camera could be implemented – Google suggests integrating a second into the trackpad, so as to directly watch the user’s fingers, but a rear-facing camera could be handy for “eyes in the back of your head” – and the headset could either be fully-integrated with its own processor, memory and other components for standalone use, or rely on a tethered control device such as a smartphone. The touchpad could have raised dots or other textures so as to be more easily navigated with a fingertip.

Google has already begun taking pre-orders for the Google Glass Explorer Edition, which will begin shipping to developers in early 2013. Regular customers should get a version – much cheaper than the $1,500 for the Explorer – within twelve months of that.

[Gallery: Google Glass patent application drawings]


Google Glass controls and Artificial Intelligence detailed is written by Chris Davies & originally posted on SlashGear.


Steve Wozniak speaks: Megaupload frustrations, Microsoft praise and Google Glass lust

Apple co-founder Steve Wozniak has spoken out on his frustrations around the Megaupload case, as well as praising Microsoft’s visual design as something Steve Jobs would be proud of. The outspoken exec voiced his dissatisfaction with the Kim Dotcom case while at the Entel Summit in Chile, FayerWayer reports, refusing to comment on whether he believes high-ranking politicians had a hand in the investigation, but expressing dismay at some of the techniques used to bring Dotcom to trial.

“Kim Dotcom was so successful, and he was well known for his flagrance, and his sports cars, and his racing cars, and style of life, that he was made an easy target” Wozniak said. “He was the biggest in the world, and they swamped in on him … I don’t want to take a side in this political thing, I don’t know if that’s where it came from.”

It’s not the first time Wozniak has spoken publicly on the Megaupload situation. One of the original founders of the Electronic Frontier Foundation (EFF), Wozniak compares the cloud storage system to other platforms like Apple’s own iCloud and Google Drive.

As for Apple’s rivals, Wozniak has plenty of praise for Microsoft. “A lot of people like to say that Microsoft’s had no successes in the last so many years, but the Xbox is a success, and certainly Kinect” he pointed out, highlighting the clean UI of things like Metro in Windows 8 and Windows Phone.

“They have such a strikingly good visual appearance, which is a lot of what Steve Jobs always looked for, the art in technology, the convergence of art and technology. And usually it was visual appearance of things. So, I made a joke that Steve Jobs came back reincarnated at Microsoft” Wozniak said. “But I’m glad that Microsoft is starting to show that maybe they’re a different company from before, I don’t remember this sort of thing happening in a long time from Microsoft, so I’m very happy.”

Surface isn’t the only product on Wozniak’s shopping list, either. He’s hoping to pick up a pair of Google’s Project Glass wearables, suggesting that the head-mounted display could – as long as the functionality was right – be a good example of the next-generation of portable computing.

“Google Glass is maybe the thing, but I don’t want to comment on that because I don’t have Google Glass. I would love to be able to have Google Glass and just talk to it any time I want and ask valuable questions and get those answers, that would be good too.”

[via Cult of Mac]


Steve Wozniak speaks: Megaupload frustrations, Microsoft praise and Google Glass lust is written by Chris Davies & originally posted on SlashGear.


Olympus Yells “Me Too!” With The MEG4.0 Wearable Display Prototype


Watch out, Google. Here comes Olympus with the MEG4.0, and don’t dismiss it as a Google Glass knockoff. Olympus has been researching and developing wearable displays for more than 20 years. The MEG4.0 concept, and with it its eventual production counterpart, has been a long time coming and could be a serious competitor in the space.

Olympus made it clear in today’s announcement that the 30g MEG4.0 is both a prototype and a working name. The stem-like system sits on one side of the glasses and connects to a tablet or phone through Bluetooth. A 320 x 240 virtual screen floats above the wearer’s eye line. The MEG4.0 is designed for all-day use and should last eight hours on a charge, although Olympus states the glasses are designed for bursts of use, 15 seconds at a time.

Google isn’t the only player in the augmented reality game. In fact, several companies, including Olympus, have toyed with the concept over the last few years. The company introduced a working set of AR glasses back in 2008. Called the Mobile Eye-Trek (shown above), the glasses were designed to be worn on a daily basis, feeding information like email to the wearer on a screen placed 50cm in front of the eyes, making it appear as a 3.8-inch screen.

While the Mobile Eye-Trek never hit the retail market, Olympus indicated at the time that the prototype would lead to a production version by 2012.

However, much like Google, Olympus is not revealing the user interface yet. If the MEG4.0 is to be a success, the interface, and more importantly the depth of information available, need to be as mature as Google Glass’s. Price and availability were not announced.


Olympus announces MEG4.0 wearable display prototype, skips the skydive


While Google may have grabbed headlines for its recent wearable tech stunt, Olympus is doggedly forging ahead with its own similar prototypes, seven years on. Unlike Project Glass, the MEG4.0 isn’t a standalone structure and needs a glasses frame to hang on, although the sub-30g unit shouldn’t tax it too much. The QVGA (320 x 240) display can connect to devices through Bluetooth 2.1, with Olympus pointing to a smartphone hook-up to provide both the processing power and internet connectivity — which sounds different to what we’re expecting from Google’s effort. The current prototype can squeeze out eight hours of intermittent use, or two hours of non-stop projection. While the device is being pitched at everyday users, Olympus isn’t offering any suggestions of launch dates or pricing, but you can check on what the company is willing to share in the (Google-translated) press release below.


Olympus announces MEG4.0 wearable display prototype, skips the skydive originally appeared on Engadget on Thu, 05 Jul 2012 08:01:00 EDT.


Olympus MEG4.0 Google Glass rival revealed

Google’s Glass may not be headed to buyers until next year, but Olympus is wasting no time with its own alternative augmented reality display, the MEG4.0. The stem-like wearable features battery life of up to eight hours and floats a 320 x 240 virtual screen above the user’s regular eye-line, hooking up via Bluetooth to a nearby smartphone or tablet.

The headset weighs under 30g, though it’s worth noting that Olympus’ battery estimates aren’t based on continuous usage. Instead, the company says it expects the display to be used in fifteen-second chunks every three minutes or so; used continuously, the display can manage a maximum runtime of around two hours, Olympus predicts.
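Those figures can be sanity-checked with a little duty-cycle arithmetic: fifteen-second bursts every three minutes is a 1-in-12 duty cycle, so eight hours of intermittent wear implies only about 40 minutes of display-on time, comfortably inside the two-hour continuous ceiling (the remaining drain presumably goes to Bluetooth and standby):

```python
# Duty-cycle arithmetic for Olympus' quoted battery figures:
# 15-second bursts every ~3 minutes over an 8-hour day.
burst_s, interval_s = 15, 180
duty_cycle = burst_s / interval_s        # 1/12 ≈ 0.083
display_on_min = 8 * 60 * duty_cycle     # display-on minutes in 8 hours

assert round(duty_cycle, 3) == 0.083
assert display_on_min == 40.0            # well under the 2 h continuous cap
```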

Also integrated is an accelerometer, for using head-control features or figuring out which way the wearer is facing, though unlike Google Glass there’s no camera. While Google has so far focused on the potential for photography and video capture with Glass, emphasizing how useful it could be to have a persistent record of your experiences, Olympus apparently believes discreet content consumption is more relevant to augmented reality adoptees.

The company is also particularly proud of the brightness of its microdisplay, which it claims is sufficiently powerful to be used even in strong daylight. Pricing and availability are unconfirmed, and it’s not clear whether Olympus will actually launch the MEG4.0 commercially or instead push to license the display technology to other companies.

[via The Verge; via Akihabara News; via Newlaunches]


Olympus MEG4.0 Google Glass rival revealed is written by Chris Davies & originally posted on SlashGear.