Don’t Doubt Google’s People Skills

Google IO opened with a bang last week, spilling Jelly Beans, cheap tablets, augmented reality and more, but for all the search giant knows about what we’re looking for, is it still out of touch? After the buzz of Google Glass and its BASE jumping entrance – thoroughly milked the following day by Sergey “Iron Man” Brin – attendees have been adding up what was demonstrated and questioning Google’s understanding of exactly how people use technology. Geeks getting carried away with “what can we do” rather than “why would we do it” is the common refrain, but make no mistake: everything Google showed us is rooted in solid business strategy.

Gizmodo has led the charge in questioning Google’s social skills, wondering out loud whether Googlers are in fact “still building for robots” and demonstrate “a gaping disconnect between the way data geeks and the rest of us see the world.” I’ll admit, watching the live stream of the IO opening keynote, I caught myself wondering exactly how much of what was being shown I’d ever actually use myself.

There were, by general consensus, three questionable areas: Google+ Events, the Nexus Q, and Google Glass.

Events are, certainly, only useful to you if your social network is also on Google+. The platform’s popularity among geeks and early-adopters of a certain inclination – usually orbiting around disliking Facebook and showing various degrees of Twitter apathy – has meant it’s a good place to make new friends (as long as you like, well, geeks and early-adopters of a certain inclination) but not generally a place to find existing ones.

That’s something Google needs to address, and adding Events is a relatively easy, low-cost way of doing so. Think about it: if you get an email notification saying that someone you know has invited you to a party, and you need to sign into Google+ in order to read and respond to it, you’re probably more likely to do so than if you simply see “+You” at the top of the Google homepage. It’s evidence of an existing relationship: you won’t just be wandering into a room full of strangers.

On top of that, you have the contentious – and awfully named – Party Mode, something that perhaps most won’t use but which might find a little favor among the geekier users. Again, the key part is that you don’t have to use Party Mode in order to get value out of Google+ Events; Google just added it in so that, if you want, you can better document your gathering in the same place you organized it beforehand.

Then there’s the Nexus Q. Google’s launch demonstration for the Android-based streaming orb was an awkward low-point of the keynote, spending too long on the obvious – okay, it gives you a shared playlist on multiple devices, we get it – and not enough time putting it into context with Google’s future plans and other platforms like Google TV. Again, though, it’s a first step in a process, one that begins with a perfectly standard home streamer and Sonos alternative.

On that level, there are some advantages. Yes, you might not necessarily sit around with friends each tapping at your Nexus 7 to put together the very best playlist ever created, but it’s a lot better set up to handle impromptu control than, say, Sonos is. Communal control with Sonos is a difficult one: do you ask everyone to download the Sonos controller app and pair with your network? Do you leave your iPad or iPhone unlocked (complete with access to your email, bookmarks, documents, etc.) so that they can dip into your music collection? Or do you keep a special device solely for party controller use?

“The Nexus Q is Google’s gateway to your TV screen”

In the longer term, though, Google’s motivation is the Nexus Q as a gateway to your TV screen. That’s what, if you recall, Google TV was meant to be – a way to expand Google’s advertising visibility from the desktop browser, smartphones and tablets, to the big-screen in your lounge – but stumbles and hiccups scuppered those plans. One of the most common complaints of first-gen Google TV was simply how complex it was; in contrast, the Nexus Q looks stunning, and concentrates on doing (at the moment) just a little. But, as a headless Android phone, there’s huge potential for what it could be next – console, video streamer for Netflix and Hulu, video conferencing system – after Google has got its collective hands on your HDMI input.

Of the three, though, it’s Google Glass that’s the hardest sell to the regular user. That’s not because it’s difficult to envisage uses for, but because of the price. Still, it’s not for the end-user yet: Google has given itself eighteen months or more to reach that audience, and who knows what battery, processor, wireless and design advantages we’ll have by then?

Aspects developed on Glass will undoubtedly show up in Android on phones, and again, the mass market benefits. There are certainly elements of persistent connection and mediated reality that apply even in devices without wearable displays. If anything, Glass is the clearest demonstration of Google’s two-tier structure: one level for regular people, and another for the geeks and tinkerers. The regular crowd eventually benefit from what the geeks come up with, as it filters down, has its rough edges polished away, and becomes refined for the mass-market.

“Google is a monolithic company, sure, but it’s filled with geniuses who want to make your life easier through technology” is how Gizmodo sees the IO announcements: having intentions that are fundamentally altruistic but misguided. In reality, everything Google showed has its roots in business and platform extension.

Google isn’t Apple; it doesn’t push a one-size-fits-all agenda. That’s not necessarily a bad approach, mind: Apple’s software is consistent and approachable, doesn’t suffer the same fragmentation issues as, say, Android does, and means that iOS devices generally do what’s promised on the tin. What Google knows is its audience or, more accurately, audiences, and so everything at IO was stacked in different levels to suit those varying needs. Some people don’t want to be limited by the ingredients on the side; they want to mix up their own meal, and IO is all about fueling that. Sometimes it takes a little more time to think through the consequences – and sometimes Google does a shoddy job of explaining them – but there’s most definitely a market out there for them.


Don’t Doubt Google’s People Skills is written by Chris Davies & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.


IBM Labs pitches the future of augmented reality shopping with mobile app prototype


From the same company that brought you the ThinkPad and the tank of a keyboard known as the Model M, today IBM is demonstrating its latest consumer development: a mobile shopping app. As odd as that sounds, it’s no secret that Big Blue employs some rather brilliant folk, and now the company is looking to combine augmented reality with your everyday shopping habits. While still merely a prototype, the app will allow consumers to pan product aisles with their smartphone camera and view additional details on the screen. As IBM puts it, shoppers may input their own needs and preferences into the app, which can accommodate a wealth of information such as allergens, sugar content and bio-degradable packaging. Through partnerships with retailers, IBM also hopes to integrate promotions and loyalty schemes into the app, which it states will help stores better understand the buying habits of individual consumers. So there you have it, the future of shopping, as brought to you by IBM. As for the full PR, you’ll find it after the break.


IBM Labs pitches the future of augmented reality shopping with mobile app prototype originally appeared on Engadget on Mon, 02 Jul 2012 00:01:00 EDT. Please see our terms for use of feeds.


Will Google Glass Help Us Remember Too Well?

When Google sent BASE jumpers hurtling from a blimp as part of the first day Google I/O Keynote presentation, I was barely impressed. The jumpers were demonstrating the Project Glass wearable computer that Google is developing, and which I and just about all of my friends are lusting over. I had seen plenty of skydivers jumping with wearable cameras strapped to them. Then the Googlers landed, and another team started riding BMX bikes on the roof of the Moscone Center, where the conference is being held. Yawn. Finally, climbers rappelled down the side of the building. Ho-hum. The point seemed to be that Google Glass was real, and that the glasses would not fall off your face as you fell onto San Francisco from a zeppelin. But then Google showed something that blew my mind.

It was a simple statement. Something to the effect of ‘Don’t you hate it when you see something cute that your kids are doing and you say to yourself: I wish I had a camera.’ Sounds innocuous enough, but that one phrase changed everything, and it may shape more than the future of computing. It may shape memory as we know it.

Until now, I had imagined Project Glass as a sort of wearable cell phone. Where phones have fallen short of delivering a great augmented reality experience, a head-mounted display with a translucent screen might fare much better. Augmented reality improves navigation, local search, and even social functions almost exponentially. Project Glass seems like the first product in a broad future of wearable computing products.

But even as I have drooled over Glass in the past, it never truly occurred to me that Google might mean for Project Glass to record everything. EVERYTHING. Your entire life. Before we think about the implications, let’s discuss why this is completely possible.

How much data would it take to record a life? That depends on a lot of variables. Are you recording in 1080p? 4K? What audio bitrate? Audio and video, or location data, too? Do you record the moments when you are watching your own recordings? When you’re driving on your commute? Watching a movie or TV?

Let me offer a ballpark figure: 3.6 Petabytes. That’s my educated guess for the storage it would take to record every waking moment of my life: forty-five Terabytes a year for 80 years. That’s based on a ‘high-profile’ video recording rate of 15 Mbps, and 6 hours of sleep every night.

Is that an insane amount of storage for anyone to possess? Not for long. I have on the tip of my finger right now a tiny microSD card with a 64GB capacity. Yesterday, this card did not exist, and a 32GB card would have cost a couple hundred dollars. Today, a 32GB card can be had for about $1 per Gigabyte. Tomorrow, we’ll have 128GB cards, and I believe the microSDXC standard tops out at 2TB. Within 10 years, I would bet that a Petabyte of storage, which is a million Gigabytes, will be completely affordable, either in a compact form or via a remote (cloud) storage host.
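The arithmetic above is easy to sanity-check in a few lines of Python. The constants here are the article’s own assumptions (15 Mbps video, 6 hours of sleep a night, 80 years), not real Glass specifications:

```python
# Back-of-envelope storage estimate for recording every waking moment,
# using the assumptions stated above.
MEGABITS_PER_SECOND = 15       # 'high-profile' video bitrate
WAKING_HOURS_PER_DAY = 24 - 6  # 6 hours of sleep a night
YEARS = 80

bytes_per_second = MEGABITS_PER_SECOND * 1_000_000 / 8
bytes_per_year = bytes_per_second * WAKING_HOURS_PER_DAY * 3600 * 365
terabytes_per_year = bytes_per_year / 1e12
petabytes_lifetime = bytes_per_year * YEARS / 1e15

print(f"{terabytes_per_year:.1f} TB per year")            # ~44.3 TB
print(f"{petabytes_lifetime:.2f} PB over {YEARS} years")  # ~3.55 PB
```

The result lands at roughly 44 TB a year, or about three and a half Petabytes over a lifetime, so the figures hold up as an order-of-magnitude estimate.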

So, by the time my 3 year old is in High School, he’ll have access to the technology to record his entire life. I cannot begin to fathom the perspective he would have. It would change everything.

“When we can review a video of every memory, will that destroy nostalgia?”

Of course there are privacy concerns, and legal issues. But what has me curious at the moment are the ways such technology will shape nostalgia. I love nostalgia. I’m a big fan. Nostalgia is one of the most fun games we can play with our own lives. When we can reference a first-person video of every memory we have, will that destroy the value of nostalgia? Will the term become meaningless?

Think of your earliest memory. In your mind, how do you see yourself? Do you see your arms and hands reaching out in front of you? Or do you imagine yourself fully formed, in the third person? It’s a strange phenomenon that we remember ourselves from outside our own bodies. But technology like Project Glass may change the way we approach even our own memory storage. Is there a biological imperative, a psychological reason why we imagine ourselves this way? Is the disconnect necessary? I don’t know. But if I’m forced to imagine myself only in the first person, I know it will change the way I remember my entire life.

I’ve also heard the question raised of whether we will continue to remember at all. Certainly memory is an evolutionary trait. We are not likely to cease all memory function in a few decades simply because a technology helps us record everything we see and hear. But memory is also a learned skill. We learn to categorize and associate our memories. We learn what is useful for long-term storage, and what is best forgotten. Our mind has defense mechanisms in place to protect us from painful memories, and emotional triggers to spotlight and gild our best moments. What happens when we reduce all of these moments to a high-definition video played back on a computer screen?

One of my favorite moments from my youth is the night I met my first long-term girlfriend. We were at a party, but outside on the street, sitting on the spoiler of my car. It had just started to rain, and we were covering ourselves with a small foam floor mat that my father used in the aerobics classes he took. We talked for a few hours and really hit it off. I don’t remember anything we said, but I remember that my friends inside were impressed that I had done so well.

I hope that I will always have in my mind the feelings associated with that night. But if I played back the conversation, I’m sure it would destroy the memory. It was drivel, and melodramatic high school prattling, and the most obvious flirting nonsense. Outside of my own head, it would be embarrassing and cringe-inducing. It would be evidence against me.

Isn’t that adolescence in a nutshell? And early adulthood? And, well, all of life? Life is embarrassing. That’s why embarrassment makes us laugh so hard, because we can relate. We’re all horrible actors on our own stage. While I love the idea of Project Glass, and I can certainly see the advantage of having a camera recording all of those lost moments, there are too many moments that should stay lost. I would rather have them rattling around in my head than on my TV screen. I’d rather see myself from the outside, or remember the event from deep within, than have an accurate depiction of what my arms were doing, and how I sounded as the words spilled out of my gullet. I hope we don’t lose the ability to get it wrong, somehow, because memory is so much more interesting when it’s imperfect.


Will Google Glass Help Us Remember Too Well? is written by Philip Berne & originally posted on SlashGear.


The Invention of Morel: This Story Puts Hologram Tupac to Shame [Book Report]

Often, I have dreams in which people close to me who have died are brought back to life—sometimes fully living, sometimes as what my dream-self understands to be a hologram. This is probably pretty common, and I usually think nothing of it, but this morning I woke after one such dream and thought instantly of a book I’d read around this time last year.

Google Glass will reach consumers in 2014, says Google co-founder

We already knew that developers would get their hands on the Explorer Edition of the Google Glass device in 6 months or so (for $1,500), but Sergey Brin has confirmed to Bloomberg at Google IO that consumers will have to wait until 2014 before they can buy the new hip augmented reality device. Introduced as Google Project Glass, the glasses are an augmented reality module that displays information on top of what the user sees (Terminator-style). The goal is a near hands-free experience (there’s a trackpad) driven by voice commands. Google Glass can also record what the user sees.

By Ubergizmo.

Consumer Google Glasses due less than 12 months after developer version

Google aims to get its Google Glasses augmented reality headset shipping to consumers within a year of the $1,500 Explorer Edition arriving with developers, the company has confirmed. That consumer version will be “significantly” cheaper than the Explorer Edition prototype hardware, Google co-founder and Glass project lead Sergey Brin told TechCrunch, though this won’t be a race to the bottom.

Instead, the team responsible for Glass has said, the priority will be balancing quality and affordability. No indication of what sort of final price will be settled upon has been given, but wearable eyepiece specialists have already – and separately – estimated that augmented reality headsets of Google Glasses’ ilk will most likely come in at around the $200-500 mark.

In the meantime, Google will be counting on developers to get up to speed with Glass. The cloud-based API they will have use of will be “pretty far along” by the time the Explorer Edition goes on sale, and Google’s own engineers are already testing Gmail, Google+ and other Android apps on the wearable.

As for battery life, Brin was overheard suggesting he had seen six hours of use from a charge, though it’s unclear what settings were enabled at that time. It’s already been confirmed that Glass will be able to locally cache content rather than upload it immediately, or indeed stream low-quality footage while caching higher-quality versions for later use.


Consumer Google Glasses due less than 12 months after developer version is written by Chris Davies & originally posted on SlashGear.


Qualcomm extends Vuforia augmented reality to the cloud

Remember Vuforia? Qualcomm’s augmented reality platform allows you to scan real world objects and create “interactive experiences” on your smartphone or tablet. The technology had its limitations, though, only matching photos against a local database of 80 images. Now Qualcomm has announced that, by adding the cloud into the mix, the platform can perform image recognition against over one million images.

That will make it much easier for developers and partners to use the platform, with American Apparel fully onboard with the program. The retailer demoed Vuforia at Uplinq 2012 with a customized app that lets customers scan items with their smartphones to bring up full details on a product, such as pricing and reviews. It also lets customers buy products that they can’t find in the store.


American Apparel gave a little demo for us at the event, and everything seemed to work as advertised. The company says that what we saw was still a prototype, but that a full version “should be available soon.”



Qualcomm extends Vuforia augmented reality to the cloud is written by Ben Kersey & originally posted on SlashGear.


Are $1,500 Google Glasses a bargain?

Being an early-adopter is seldom cheap, but is Google having a laugh with its $1,500 Project Glass Explorer Edition? Put up for surprise pre-order at Google IO today – though not expected to ship until early next year – the search giant demands a hefty sum for those wanting to augment their reality early. Cutting edge costs, sure, but there’s the potential for significantly more affordable options that could be here just as soon as Google Glass is.

Google isn’t the only company working on wearables, after all. Back in March, eyeline display specialist Lumus confirmed to us that products using its technology were in the pipeline for 2013, with prices ranging from $200 for more basic models – perhaps just offering media playback – through to $500 for more advanced versions with what we’d think of as true augmented reality.

It’s not the only company working on AR projects, either. We caught up with Vuzix this month to talk about its own smart glasses intentions, including the display technology it has been working on with Nokia Research. The company wouldn’t talk specific pricing, but did say that it was aiming more for the mass market and that Project Glass “is not the grail we are seeking.”

Of course, there’s a big difference between a developer kit and a commercial product, and there’s no telling exactly what Glass will do quite yet. Google has been playing its cards close to its chest on that front, only really showing camera use-cases, though we’re also expecting some other functionality like navigation. Still, even if Lumus’ estimates were to double by the time products reach shelves, that’s still a fair chunk less than Google is asking.

So, don’t feel too downhearted if you’re not at Google IO to preorder a Glass Explorer Edition, or can’t muster the $1,500 Sergey Brin demands. Augmented reality and wearable tech are fast approaching their tipping point, and with that will inevitably come more affordable options.


Are $1,500 Google Glasses a bargain? is written by Chris Davies & originally posted on SlashGear.


Google IO 2012: Project Glass wrap-up

Make no mistake, Project Glass dominated the Google IO 2012 keynote, with a blockbuster entrance worthy of a James Bond film, and the shock news that the wearable is actually up for preorder. Google’s Sergey Brin interrupted the presentation with news that Glass-wearing skydivers were floating in a blimp above the Moscone Center, and would be jumping down while live-streaming through a Google+ Hangout. Check out the must-see video after the cut!

The skydivers were met by stunt bike riders, who passed a Project Glass unit to abseilers, who handed it to more bikers that delivered it to Brin on-stage. He then called up some friends from the Glass development team to flesh out Google’s vision for the headset, in what was increasingly sounding like a sales pitch.

That suspicion proved well-founded in fact, when Brin revealed that Google would be taking preorders for the Project Glass Explorer Edition at IO this week. Available for $1,500 and expected to ship in early 2013, the headset doesn’t come cheap but already developers are flocking to sign up.

Of course, no Google keynote would be complete without a little anti-Apple snark, and it was left to Project Glass to highlight quite how much better looking at data in a natural way out of the corner of your eye is, compared to stabbing frantically at a tiny phone screen.


Google IO 2012: Project Glass wrap-up is written by Chris Davies & originally posted on SlashGear.


Photos of Google’s Vic Gundotra wearing the latest, blue-hued Glass prototype


Sergey Brin briefly pulled out a light blue prototype of Google Glass whilst on stage at Google I/O, and as it turns out, those are evidently the latest and greatest models that the company is willing to wear around. We ran into social exec Vic Gundotra after this morning’s keynote, only to find him donning precisely the same set that was teased on stage. We asked if the blue was just part of Google’s experimentation with coloring Glass, and he chuckled while confessing that he wasn’t authorized to speak further about the project or its ambitions. Still, the man looks good in blue. And something tells us you would, too.

Photos of Google’s Vic Gundotra wearing the latest, blue-hued Glass prototype originally appeared on Engadget on Wed, 27 Jun 2012 15:24:00 EDT. Please see our terms for use of feeds.
