Google does seem to have dipped its fingers into a slew of different projects, and this time around, we have word that its real-time translation project for the spoken – take note, spoken, not written – word over smartphones is inching closer and closer to the 100% accuracy mark, which ought to make everyone else sit up and take notice. In an interview, Android Vice President Hugo Barra revealed that hardware prototypes (of the Android-powered variety, of course) are currently being tested, and that these have achieved “near perfect” results for some language pairs.
Of course, real-time speech translation is delicate, tricky work: the near-perfect results come from testing in a controlled environment, and the bottleneck is speech recognition in real-life conditions, where there is all manner of background noise and accents to deal with. Those kinks should be worked out in due time, though. Who knows – one might even find real-time speech translation thrown into a future version of Android, and it might be more difficult for language teachers to earn a living after that.
Google Now is getting two new cards today: Offers and TV. Offers will notify you when you’re near a store that you’ve saved an offer for, while TV works like Shazam but for IMDb-style show information. You’d think it would tell you when shows are on, but it doesn’t. Oh well. [Google]
When Google announced its plans to shutter Google Reader in March, the Internet freaked out. Twitter users raised their virtual pitchforks in outrage. Bloggers wept, scrambling to find a suitable replacement by the service’s July 1 death date.
The update to Android 4.2.2 Jelly Bean for both the HTC One and the HTC Butterfly (the international version of the DROID DNA, that is) has been a long time coming. Today it has appeared on some international models of the HTC One, with a bit of a boost to Sense 5.0 as well (without a name change) – and aesthetic changes head the pack. While we expect this update to hit carrier models in the near future, right this minute it’s only popping up on a select few models across the sea.
Several relatively minor changes have been made to the user interface with Android 4.2.2 on the HTC One – and we must expect the same for the Butterfly in similar manner soon. The first of these is a new option to change what happens when you hold down the Home button on the lower right of the HTC One’s front panel – instead of only being able to access Google Now, you’ll be able to set a long-press to act as the long-lost “menu” key.
The app drawer and dock have been updated so that the dock can be left bare, and so that icons stay in the drawer even when they’re also placed in the dock. “Daydream” has been added as well – the screensaver oddity found in stock Android 4.2.2 that appears while the device is docked, charging, or whenever else you set it to show up.
This update also adds the ability to work with Android-native sound profiles – aka EQS, or equalizer controls. These can be found by tapping the EQS icon in the upper right corner of the notifications menu.
Finally, you’ll now be able to show the battery level as a percentage next to the battery icon – this is accessible under Settings > Power > Show battery level.
All of this will be accessible by HTC One users without carrier ties in the near future, while the amount of time between here and the carrier-tied updates is at the moment completely unknown. We’ll continue to explore and let you know if anything else fantastically different pops up between here and your own update – stay tuned!
When I look at what Google feeds me every day, my first reaction is to be thrilled at how cool it is that there’s an engine out there that sees what I like and gives it to me. Automatic understanding: seeing what I search for and where I am, telling me things I ought to know. Things other people know.
It is easier to find something I’ve already worked with in Google than it is to find something I’ve never seen before. While data stacks up, webpages are made and emails are sent, the dominant method for organization is a search based on sameness. Because I’ve visited SlashGear.com so very many times in the past, whenever I’m logged in to Google, SlashGear-related search results appear first for essentially anything technology or science related.
If I’m logged in to Google and I want to find a review of a smartphone, the search engine will suggest I read the most popular reviews. Is this crowd-sourced and traffic-reliant system good enough to be the one single organizer of information on the internet?
With the system known as Google Now, a series of “cards” is presented on a smartphone or tablet screen. The cards are organized automatically unless I change my preferences: I can choose to include or keep out sports scores, for example. These cards show bits of information based on what I’ve searched for most in Google while logged in to my Google account.
So while I enjoy a note about how long it will take me to get home based on my GPS location and road traffic, Google Now also sends me news stories automatically flagged as related to a story I clicked on earlier inside Google. These results are likewise traffic-driven, surfacing the most popular stories based on clicks through Google.
Birthdays are shown as well – every single person I’ve got in Circles on Google+ appears in a stack of cards when their birthday comes up. My colleague Chris Davies spoke about this situation in the column “Why nobody, not even Apple, has done mobile right.”
“And yet, our needs from a companion device are surely different from those we have of a regular computer. I don’t necessarily want every single piece of information out there delivered to the palm of my hand; I just want the right, most relevant information. You can find that on a phone, certainly, but for it to be a true companion it really should be one step ahead of what you need. Some emails, or IMs, or calls, are more important than others, but my phone beeps for all of them. Sometimes I don’t know what the most relevant information actually is, or that it’s even out there, and my digital wingman should be using everything it knows about me to fill in those gaps of its own accord.” – Chris Davies
But then again – is discovery lost? When a device like Google Now on Glass “knows” what I want and gives it to me automatically, what does that leave me with?
Google can know only the information I’ve given it – be it directly, with a search, or indirectly, as my smartphone tells Google Now my GPS location on the regular. Until a system can be built that collects all of my thoughts and experiences, no exceptions, the feedback I get from the systems I use today will be, on some level, arbitrary relative to my needs.
Until a system can be built to access my full experience, there’s always going to be a push involved that a group like Google cannot do away with. There’s always a suggestion: is this what you want? And that suggestion remains based on popular precedent to this day.
In other words: brilliance will not be found in search results until this paradigm is altered. Until true random elements are incorporated alongside an understanding of the human mind that we do not yet have, the dream of invisible technology cannot properly be realized.
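To make that concrete: imagine every result today is ordered purely by a popularity score. One way to read “true random elements” is an epsilon-style mix, where a small fraction of result slots is handed to items drawn at random rather than by rank, so unfamiliar things can surface at all. The sketch below is a toy illustration of that idea only – it is not a description of Google’s actual ranking, and the names and numbers are invented:

```typescript
// Toy ranking sketch: mostly popularity-ordered, but a fraction of slots
// (epsilon) is filled at random so obscure results can surface.
// Illustrative only; not Google's ranking.
interface Result {
  title: string;
  popularity: number;
}

function rankWithDiscovery(results: Result[], epsilon = 0.2): Result[] {
  // Start from a pure popularity ordering.
  const pool = [...results].sort((a, b) => b.popularity - a.popularity);
  const ranked: Result[] = [];
  while (pool.length > 0) {
    // With probability epsilon, pick a random item instead of the top one.
    const pickRandom = Math.random() < epsilon;
    const index = pickRandom ? Math.floor(Math.random() * pool.length) : 0;
    ranked.push(pool.splice(index, 1)[0]);
  }
  return ranked;
}

const page = rankWithDiscovery([
  { title: "Most-clicked review", popularity: 980 },
  { title: "Mid-tier blog post", popularity: 310 },
  { title: "Obscure new site", popularity: 12 },
]);
console.log(page.map((r) => r.title));
```

Most of the time the popular result still wins the top slot, which is the balance the next paragraph gets at.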
This dream was spoken about by Larry Page at Google I/O 2013, where he suggested that “Technology should do the hard work, so you can get on and live your life.” He also mentioned that, with regard to Google search results being curated, “the right solution to education is not randomness.”
It’s a balance we must meet. At the moment, we live in an environment that continues to be dominated by popular opinion.
If you haven’t heard of Sherpa, you’re most likely not alone. It’s a new Android app that looks to dethrone Google Now and Apple’s Siri. Sherpa plans to launch on the iPhone soon, as well as make its way to Google Glass to take on Google’s own voice command software on the new spectacles.
The company unveiled plans to bring its software to Google Glass, and Sherpa CEO Xabier Uribe-Etxebarria says that its voice command app is much better suited for Google Glass than Google’s own software – a bold statement. Uribe-Etxebarria says that voice commands on Google Glass are limited, and that “it’s not taking advantage of all the features and the potential of Google Glass.”
What separates Sherpa from the rest of the pack is its ability to understand meaning and intent. The app builds its own metalanguage, with rules and semantic concepts layered on top of Google’s speech API. From that, Sherpa can carry out a wide variety of actions you wouldn’t expect it to manage.
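Sherpa hasn’t published how that metalanguage actually works, but a rule-based intent layer sitting on top of a speech transcript is easy to picture. Here’s a minimal, purely illustrative sketch – the intent names, patterns, and handlers are all invented for this demo, not Sherpa’s rules:

```typescript
// Toy rule-based intent parser: maps a speech transcript to an action.
// The intents and patterns here are invented for illustration only.
interface Intent {
  name: string;
  pattern: RegExp;
  handler: (match: RegExpMatchArray) => string;
}

const intents: Intent[] = [
  {
    name: "play_music",
    pattern: /^play (?:some )?(.+)$/i,
    handler: (m) => `Streaming "${m[1]}" without downloading first`,
  },
  {
    name: "set_volume",
    pattern: /^turn (up|down) the volume$/i,
    handler: (m) => `Volume turned ${m[1]}`,
  },
  {
    name: "toggle_wifi",
    pattern: /^turn (on|off) (?:the )?wi-?fi$/i,
    handler: (m) => `Wi-Fi switched ${m[1]}`,
  },
];

// Try each rule in order; fall back when nothing matches the transcript.
function dispatch(transcript: string): string {
  for (const intent of intents) {
    const match = transcript.trim().match(intent.pattern);
    if (match) return intent.handler(match);
  }
  return "Sorry, I didn't understand that.";
}

console.log(dispatch("play some jazz"));       // Streaming "jazz" ...
console.log(dispatch("turn down the volume")); // Volume turned down
```

A real semantic layer would of course go well beyond regular expressions, but the shape – transcript in, structured intent out – is the same.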
The app is still in beta, but is planned to roll out for Google Glass and other wearables in late 2013 or early 2014. Sherpa can already pull off a handful of neat tricks, including playing music without downloading the tracks first, and automating directions for scheduled events in your calendar. The app can also do things like turn down the volume or toggle the Wi-Fi.
And as Google Now does, Sherpa can essentially predict what you’ll want to see and hear, like score updates from your favorite sports teams, or popular restaurants nearby when it’s close to dinner time. As for what the Google Glass app will do, that’s still unknown, but from what the company says, we can expect more features from Sherpa than from Google’s built-in offering.
A series of patent infringement claims has been filed by Apple this month surrounding both the Samsung GALAXY S 4 and the Android-centric Google Now system. Two “Siri” patents have been pulled up by Apple surrounding unified search, each of them originally attached to a suit filed against the Android Quick Search Box in devices running Android 4.0 and lower. Apple has also claimed in a similar document this week that its claims against past Samsung Galaxy devices hold true for the Samsung GALAXY S 4, and is asking that the device be added to a suit it already has on the books.
The Samsung GALAXY S 4 is at the center of this suit, with Apple also making clear its intent to seek action against Google-made elements in the Android system such as Google Now. At the moment it appears that Apple will be calling out Samsung for implementing items like Google Now, the company having not yet pushed Google directly on the matter.
The line specifically calling out the Samsung GALAXY S 4, as located by Foss Patents, is as follows:
“Apple determined that the Galaxy S4 product practices many of the same claims already asserted by Apple, and that the Galaxy S4 practices those claims in the same way as the already-accused Samsung devices.”
This bit of text comes from the “13-05-21 Apple Motion to Amend Infringement Contentions” document filed this month. Of note here is that the Samsung GALAXY S 4 Google Edition is not specifically mentioned, and would therefore not fall under this filing. It’s likely that Apple will add the Google Edition once that device becomes available in stores.
It’s also worth noting that at least one of the two claims against Google’s Android Quick Search Box was overturned when asserted against the Samsung Galaxy Nexus – that ruling came back in October of 2012 and has yet to be successfully challenged by Apple. This case sits with Judge Koh and, given the way that Siri-related patent fared against the Galaxy Nexus, it’s quite possible the Samsung GALAXY S 4 will be found not to infringe as well.
Google’s new “conversational search” feature for Chrome has quietly been enabled, appearing in the latest version of Google’s browser. Announced at I/O, the new Voice Search feature builds on Chrome’s existing ability to accept spoken search terms: it now transcribes your query on screen as you say it, then shows the results in Google Now-style cards and reads out the answer.
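That live on-screen transcription rides on the speech recognition support Chrome already exposes to web pages. As a rough idea of the mechanics only – this is a generic demo of Chrome’s vendor-prefixed `webkitSpeechRecognition` API, not Google’s own search code, and the `query` element is assumed – a minimal sketch looks like this:

```typescript
// Minimal sketch: live transcription in Chrome via webkitSpeechRecognition.
// Assumes a page containing <div id="query"></div>.
declare const webkitSpeechRecognition: any; // Chrome-only, vendor-prefixed

const recognition = new webkitSpeechRecognition();
recognition.interimResults = true; // stream partial transcripts as you speak
recognition.lang = "en-US";

recognition.onresult = (event: any) => {
  let transcript = "";
  for (let i = 0; i < event.results.length; i++) {
    transcript += event.results[i][0].transcript;
  }
  // Show the query on screen while it is being spoken,
  // the way Chrome's voice search now does.
  document.getElementById("query")!.textContent = transcript;
};

recognition.start();
```

The `interimResults` flag is what makes the words appear mid-sentence rather than only after you stop talking.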
That’s not the only improvement, however. The system also carries semantics across consecutive searches; if you ask a follow-up question, for instance, Google will automatically understand that the two queries are related.
If you ask “When was Ford founded?” for instance, Google will now read out the answer. You can then ask a follow-up like “Where is its headquarters?” and, even though you did not specify you were still asking about Ford, Google will still understand that it’s the topic of inquiry.
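Google hasn’t detailed how it resolves that reference, but the core idea can be shown with a deliberately naive sketch: remember the last entity asked about, and substitute it for pronouns in the next question. This toy is nowhere near the real mechanism described below – it’s just the shape of the problem:

```typescript
// Naive follow-up resolution: remember the last entity asked about and
// substitute it for pronouns like "it"/"its" in the next question.
// A toy illustration only; Google's actual system does far more.
let lastEntity: string | null = null;

function resolve(query: string): string {
  if (lastEntity) {
    // Rewrite pronouns against the remembered entity.
    query = query
      .replace(/\bits\b/gi, `${lastEntity}'s`)
      .replace(/\b(it|they)\b/gi, lastEntity);
  }
  // Crudely treat a capitalized word after "was/is/did" as the new topic.
  const m = query.match(/\b(?:was|is|did)\s+([A-Z]\w*)/);
  if (m) lastEntity = m[1];
  return query;
}

console.log(resolve("When was Ford founded?"));     // remembers "Ford"
console.log(resolve("Where is its headquarters?")); // "Where is Ford's headquarters?"
```

The hard part, of course, is knowing that “Ford” is a company and “its” is the kind of pronoun that can point to one – which is where the next piece comes in.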
At the heart of this contextual awareness is Google’s Knowledge Graph technology, revealed last year, and integrated with natural language processing. That way, search knows that some queries will be about people – perhaps referred to as “he” or “she” in follow-up questions – while others will be about objects or companies.
More impressive are the compound assumptions that search can now make. Ask Chrome if it will rain tomorrow, and it will tell you the forecast (as well as display it on-screen), automatically figuring out both where you are and that you may want a full forecast.
Still absent is so-called “hotword search” as on Google Glass, which allows you to wake the system with a spoken command – “OK Glass” in the case of the wearable – and then begin asking queries. That seems likely to arrive sometime soon, though, especially given Microsoft has built something similar into Xbox One.
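Conceptually, hotword search is a thin layer on top of continuous recognition: keep listening, ignore everything until the wake phrase, then treat the rest as the query. A rough browser-side sketch of that idea, again using Chrome’s vendor-prefixed API with a hypothetical wake phrase – not how Glass, Chrome, or Xbox One actually implement it:

```typescript
// Rough hotword sketch: listen continuously, act only on phrases that
// start with the wake words. Not Google's implementation.
declare const webkitSpeechRecognition: any; // Chrome-only API

const HOTWORD = "ok google"; // hypothetical wake phrase for this demo
const listener = new webkitSpeechRecognition();
listener.continuous = true; // keep the microphone open between phrases

listener.onresult = (event: any) => {
  // Look only at the most recent phrase heard.
  const phrase = event.results[event.results.length - 1][0].transcript
    .trim()
    .toLowerCase();
  if (phrase.startsWith(HOTWORD)) {
    const query = phrase.slice(HOTWORD.length).trim();
    console.log("Search query:", query); // hand off to search here
  }
};

listener.start();
```

Real hotword detection runs a small always-on acoustic model rather than full recognition, for battery and privacy reasons, but the user-facing behavior is the same.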
Overall, the technology is further evidence of Google’s greater confidence in its own results, and in showing users what it believes they’re looking for rather than just a list of possibilities. That’s something Matias Duarte, director of Android user experience, described to us as a key part of Google Now back at MWC, an endeavor which has applications across Google’s range: desktop, Chromebook, Android, and Glass.
You’ll need to be running the latest version of Chrome in order to get access to the new voice search functionality, and you may have to be patient, too. Google appears to be suffering some teething problems scaling out the system, and we’re getting a lot of “No internet connection” error messages right now.