We’re not sure how many members of the Google army asked for it, but the second-gen, Android-friendly appBlaster is now being delivered to all peripheral buffs. Obviously, one of the biggest improvements in v2.0 of the apptoyz accessory is its much-welcomed compatibility with something other than iDevices via a new universal cage — that said, there are also other fresh augmented reality features which, in theory, should make it a vast improvement over its predecessor. The appBlaster 2.0 has only seen a slight price bump compared to the first-gen, with RED5 asking £25 (around $40) for the add-on — a small amount to pay when you consider all the attention you’re going to get. And, well, we know you love that.
Mobile graphics are clearly setting the agenda at SIGGRAPH this year — ARM’s Mali T600-series parts have just been chased up by a new Khronos Group standard that will likely keep those future video cores well-fed. OpenGL ES 3.0 represents a big leap in textures, introducing “guaranteed support” for more advanced texture effects as well as the new ASTC compression format, which further shrinks texture footprints without a conspicuous visual hit. OpenVL is also coming to give augmented reality apps their own standard. Don’t worry, desktop users still get some love through OpenGL 4.3: it adds the new ASTC tricks, new visual effects (think blur) and support for compute shaders without always needing to use OpenCL. All of the new standards promise a bright future in graphics for those living outside of Microsoft’s Direct3D universe, although we’d advise being patient: a full OpenGL ES 3.0 testing suite could be as much as six months away, and any next-generation phones or tablets will still need the graphics hardware to match.
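To make that texture story concrete, here’s a minimal sketch of handing a pre-compressed ASTC texture straight to the driver. It assumes a current GL context whose driver exposes the KHR_texture_compression_astc_ldr extension, and uses PyOpenGL for brevity; the function and file layout are our own invention, not part of the spec.

```python
# Minimal sketch: uploading an already-compressed ASTC texture.
# Assumes a current GL context with KHR_texture_compression_astc_ldr.
from OpenGL.GL import (
    GL_TEXTURE_2D,
    glBindTexture,
    glCompressedTexImage2D,
    glGenTextures,
)

GL_COMPRESSED_RGBA_ASTC_4x4_KHR = 0x93B0  # token defined by the ASTC extension

def upload_astc_4x4(data: bytes, width: int, height: int) -> int:
    """Upload a 4x4-block ASTC payload; returns the GL texture name."""
    tex = glGenTextures(1)
    glBindTexture(GL_TEXTURE_2D, tex)
    # No CPU-side decompression: the blob stays compressed all the way
    # to the GPU, which is where ASTC's smaller footprint pays off.
    glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_ASTC_4x4_KHR,
                           width, height, 0, len(data), data)
    return tex
```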
Siri, Google Now and other digital personal assistants have a new rival in the shape of Saga, a mobile app that learns from users to provide contextual help, suggestions and more. A free app, currently iPhone-only, Saga pulls in data from Facebook, Twitter and other apps to build an understanding of the individual user, and then crunches that with schedules and preferences to produce suggestions as to nearby restaurants, when would be a good time in the day to run, where friends are (and who users might actually like to hang out with), and other recommendations. However, Saga’s future is most definitely in wearables like Google’s Glass.
That will see Saga provide its own contextual suggestions in a far more intuitive and non-distracting way: popping dialogs into the corner of your eye rather than demanding that you pull out your phone. Context – or the lack of it – has been a running theme in the mobile world for several years, yet it’s still something manufacturers and developers have struggled to implement; as the number of sensors and data sources sharing personal information grows, apps like Saga promise to pull them all together.
Initial partnerships for Saga include Runkeeper, which can feed exercise patterns to the assistant app. For instance, if Saga knows from Runkeeper that a user normally runs each Tuesday, but that they haven’t so far today, it will automatically look for a suitable timeslot in their agenda and ping up a prompt to encourage them.
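Saga hasn’t published its internals, so the following is a purely hypothetical sketch of that missed-run nudge (every name here is invented for illustration), but it shows how simple the check becomes once the habit itself has been learned:

```python
from datetime import datetime, timedelta

def suggest_run_slot(usual_run_weekday, todays_events, now=None):
    """Hypothetical sketch of a missed-workout nudge.

    usual_run_weekday: weekday (0 = Monday) the user habitually runs,
        as learned from Runkeeper history.
    todays_events: list of (start, end) datetime pairs from the agenda.
    Returns a proposed start time for a 45-minute run, or None.
    """
    now = now or datetime.now()
    if now.weekday() != usual_run_weekday:
        return None  # not a habitual run day, so stay quiet

    run_length = timedelta(minutes=45)
    candidate = now
    for start, end in sorted(todays_events):
        if candidate + run_length <= start:
            return candidate  # the gap before the next appointment fits a run
        candidate = max(candidate, end)
    return candidate  # nothing left in the agenda; run after the last event
```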
Future iterations of the app will allow users to share recommendations back and forth within a sub-group of their social networks, and increase the number of notifications. Currently, Saga requires users to open the app itself to see suggestions, but as the algorithms improve the company plans to push out personalized prompts. App developers will also be able to use Saga’s APIs to feed their data into the service.
Robert Scoble sat down with CEO Andy Hickl to talk Saga and mobile context, as well as what implications the app might have for wearable devices and augmented reality. “If you sign in with Facebook we understand a little bit about you, we can understand your birthday, the name of your spouse … the other cool thing is what can we tell if we understand a little about your patterns, where you go,” Hickl says.
“So if I see you at a bar at 11:30 on a Tuesday night maybe a mile and a half away from your house, what can I tell about you? Well, I can tell maybe you’ve got no children waiting for you at home … the fun thing is that we use that, not to profile you, but to be able to anticipate the queries you might have.”
The app is currently iPhone-only, though an Android version is expected in August, and is a free download. As for how Saga will make money, there’s no monetization right now, but possibilities include paid app functionality or sponsored results.
This “substitutional reality system” was developed by the Laboratory for Adaptive Intelligence at Japan’s RIKEN Brain Science Institute. It was created to fuse performance art with perceived reality for its wearer. While it doesn’t produce the sort of directly implanted memories seen in Total Recall, the visual and audio portions are immersive enough to trick your mind anyway.
The headgear is supposed to seamlessly meld live video with recordings from the past as well as performances from dancers. Sounds groovy. It combines fiction and reality, making them indistinguishable from each other. It’s definitely an interesting conundrum, not being able to tell whether what you’re seeing is in fact real, just a recording or a hybrid of both.
The MIRAGE is a unique take on augmented reality, though I’m sure that, under the right circumstances, it could be used to brainwash people. You still want to try it out?
News on Google’s Project Glass just keeps coming and coming. It’s no surprise that we’re extremely excited and interested in the AR tech, and now we will hopefully be learning additional details early next week. Wednesday we shared details about the VIP treatment we’ll be getting for pre-ordering a pair at Google IO for around $1,500 — and that treatment is about to start come Monday.
Google, via its official +Project Glass Google+ account, has just reached out to all the Explorer Edition buyers, confirming that we’ll be learning additional details in a private Google+ Hangout on Monday. This will include other lucky pre-order customers, as well as members of Google’s Project Glass team. Hopefully, while engaging in a live Google+ Hangout with actual developers from Google, we’ll be able to learn some neat new things about Project Glass. Obviously we will let you know the minute we hear anything worth mentioning.
Project Glass made a huge splash at Google IO, when Sergey Brin took the stage and had a pair of the AR eyewear skydive right into the event center in San Francisco. Since then we’ve seen plenty of patents, learned a few more details, and even saw Gmail’s lead developer head to the Project Glass crew. Stay tuned for additional details and hit the timeline below for further coverage.
When he’s not trash-talking Windows 8, Valve’s Gabe Newell is pondering next-gen wearable computing interfaces and playing with $70,000 augmented reality headsets, the outspoken exec has revealed. Speaking at the Casual Connect game conference this week, Valve co-founder and ex-Microsoftie Newell presented head-up display lag and issues of input and control for wearables as the next big challenge facing mobile computing, VentureBeat reports.
“The question you have to answer is, ‘How can I see stuff overlaid in the world when you have things like noise?’ You have weird persistence problems,” Newell said when asked about the post-touch generation of computing control. “How can I be looking at this group of people and see their names floating above them? That actually turns out to be an interesting problem that’s finally a tractable problem.”
Tractable it may be, but so far it’s not cheap. “I can go into Mike Abrash’s office and put on this $70,000 system, and I can look around the room with the software they’ve written, and they can overlay pretty much anything, regardless of what my head is doing or my eyes are doing. Your eyes are actually troublesome buggers,” Newell explains. The second half of the issue, though, is input, which the Valve CEO describes as “open-ended.”
“How can you be robustly interacting with virtual objects when there’s nothing in your hands? Most of the ideas are really stupid because they reduce the amount of information you can express. One of the key things is that a keyboard has a pretty good data rate in terms of how much data you can express and how much intention you can convey … I do think you’ll have bands on your wrists, and you’ll be doing stuff with your hands. Your hands are incredibly expressive. If you look at somebody playing a guitar versus somebody playing a keyboard, there’s a far greater amount of data that you can get through the information that people convey through their hands than we’re currently using. Touch is … it’s nice that it’s mobile. It’s lousy in terms of symbol rate,” Newell said.
Google’s Glass has sidestepped the issues somewhat: rather than attempting to directly overlay or replace exact objects in the real world with a wearable display, it floats more straightforward graphics just above the wearer’s eye-line. That removes the precision problem, but it means Glass will be less capable of mediating reality – i.e. changing what the user actually sees of the real world – and more about augmenting it with extra data.
As for control, Google has already shown off its side-mounted touchpad on Glass, and a recently published patent application fleshed out some of the other possibilities. They include voice recognition and hand-tracking using cameras, though Google also describes using low-level artificial intelligence to reduce the amount of active navigation Glass users may have to do.
For instance, Glass could recognize – using microphones built into the headset – that the wearer is in a car, Google explains, and thus automatically show maps and a navigation interface. Those same microphones could be used to spot mentions of peoples’ names and call up contextually-relevant information about them, working as an aide-mémoire.
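Stripped of the patent language, that kind of context switching boils down to a handful of sensor-driven rules. A toy illustration (the labels and thresholds are entirely our invention, not Google’s) might look like this:

```python
def pick_glass_card(ambient_audio_label, speed_mps):
    """Toy context switcher: map cheap sensor signals to the card a
    heads-up display should show. Purely illustrative values."""
    if ambient_audio_label == "engine" or speed_mps > 8:
        return "navigation"  # probably in a car: surface maps and directions
    if ambient_audio_label == "speech":
        return "contacts"    # conversation detected: surface names and context
    return "clock"           # default, unobtrusive card
```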
Somewhat more bizarre, though, is research within Valve into using the human tongue as an input method. “It turns out that your tongue is a pretty good way of connecting a mechanical system to your brain,” Newell explained. “But it’s really disconcerting to have the person you’re sitting next to going, ‘Arglearglargle.’ ‘You just Googled me, didn’t you?’ I don’t think tongue input is in our futures.”
Google’s Project Glass had a surprisingly large presence (skydivers) during its I/O event earlier this year, and now company co-founder Sergey Brin has checked in with the attendees who promised $1,500 for a set of the augmented reality eyepieces. In his message he shared a photo he took while cruising through Montana, thanks to a mode Google is testing that snaps a picture every 10 seconds, no intervention needed. Unfortunately, if you’re not in that exclusive pre-ordering group you’ll have to wait for details like these to leak out secondhand, since private updates, special events, Google+ Hangouts, secret handshakes and Little Orphan Annie decoder rings (perhaps not the last two) are reserved for a “unique, trusted community.” Hey, it’s not like the rest of us wanted some silly visor or etched glass blocks anyway.
This week we had a brief chat with Will Powell, a developer responsible for some rather fantastic advances in what Google has suddenly made a very visible category of devices: wearable technology. With Google’s Project Glass drawing nearer to reality with each passing day, we asked Powell how his own projects are advancing in parallel, and how he sees mobile gadgets moving forward – possibly away from smartphones and tablets entirely.
Those of you unfamiliar with Powell’s work can hit up the links throughout this post and see the videos of the projects he’s done. Among the products Powell uses are the Vuzix STAR 1200 AR glasses, the Raspberry Pi – the fabulous miniature computer – and, of course, a good ol’ fashioned ASUS Eee Pad Transformer.
SlashGear: Were you working with wearable technology before Google’s Project Glass was revealed to the world?
Powell: Yes, at Keytree we were working with wearable technology before the unveiling of Project Glass. I was working on CEO Vision, a glasses-based augmented reality system that let you reach out and touch objects to interact, or add interactive objects on top of an iPad. I have also had lots of personal projects.
SG: What is your ultimate goal in creating this set of projects with the Raspberry Pi, Vuzix STAR 1200, etc.?
P: I would say that the ultimate goal is really to show what is possible. With CEO Vision at Keytree we showed that you could use a sheet of paper to interact with sales figures and masses of data using the SAP HANA database technology. Then creating my own version of Project Glass, and now extending those ideas to cover translations as well, was just to show what is possible using off-the-shelf technology. The translation idea was to take down barriers between people.
SG: Do you believe wearable technology will replace our most common mobile tech – smartphones, laptops – in the near future?
P: Yes I do, but with a horizon of a couple of years. I think that with the desire for more content and for easier, simpler devices, using what we are looking at and hearing to tell our digital devices what we want to find and share is the way forward. Even now we have to get a tablet, phone or laptop out to look something up. Glasses would completely change this, because they are potentially always on and add, full-time, to at least one of our fundamental senses. Also, many of us already wear glasses: according to the Vision Council of America, approximately 75% of U.S. adults use some sort of vision correction, and about 64% of them wear eyeglasses, so people are already wearing something that could be made smart. That is a huge number of potential adopters for mobile personal information delivery.
I think we still have a way to go with working out how everything will fit together and how exactly we would interact with glasses-based technology. With the transition from computers to tablets and smartphones we opened up gestures; with glasses we have the potential to use body language and real-life actions as interaction mechanisms. And it would be the first time that there is no keyboard. There is also the potential for specifically targeted ads, which could end up making some parodies come true. However, I do think we will have an app store for a glasses-based device in the next few years.
SG: What projects do you have coming up next?
P: I have many more ideas about what glasses-based applications can be used for, and am building some of them. I am creating another video around translation to show the multilingual nature of the concept. Further to that, we are looking at what areas of everyday life could be helped by glasses-based tech and by collaboration between glasses users. The translation application highlighted that glasses are even better with wide adoption, because Elizabeth could not see the subtitles of what I was saying without using the TV or tablet.
Stick around as Powell’s mind continues to expand on the possibilities in augmented reality, wearable technology, and more!
Getting started here with Qualcomm, we’ve jumped right into benchmarks: something that has been playing an increasingly large role in smartphones as a whole, and in consumers’ purchase decisions. When it comes down to it, benchmarks should test not only graphics or the CPU, but the overall user experience on mobile computing devices.
There are many different options when it comes to smartphones, tablets, processors, and of course benchmarks. The Android market space, for example, has multiple options available. We’re not going into specifics here, nor are we naming names — but what do these really test? Mobile benchmarks need to fully test the device from all angles, not just any one scenario.
Obviously we have multiple options, from SunSpider and Linpack tests for the CPU to Quadrant, which seems to focus on graphics, and more. Qualcomm wants to make the mobile benchmark space better not only for consumers, but for everyone. Every aspect of the user experience, including things like browsing and video playback, should be included. Along the same lines, these tests need to take advantage of the increasing power being built into devices: apps that will truly test all four cores of our smartphones and tablets. Qualcomm offers an option with Vellamo, which we’ve covered in the past and will surely be hearing more about throughout the day.
Many enthusiasts and consumers alike might be hesitant to trust a benchmark built in-house by any one party or SoC manufacturer, but we’ll be focusing more on Vellamo as the day continues. Another option could very well be augmented reality. While AR still hasn’t made a huge dent in the mobile space, it surely is the future. Jon Peddie of Jon Peddie Research briefly mentioned AR while speaking, stating it “will be the killer app,” and even went as far as to call it the mother of all benchmarks — as it stresses every aspect of a processor.
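To see why AR makes such a punishing test, consider a single frame: it touches the camera pipeline, the motion sensors, the CPU and the GPU before anything reaches the screen. Here’s a purely illustrative harness (every object below is a hypothetical stand-in, not a real API) to show the shape of such a benchmark:

```python
import time

def ar_frame(camera, imu, tracker, renderer):
    """One frame of an AR-style benchmark: each stage leans on a
    different subsystem, so frame time reflects the whole pipeline."""
    frame = camera.capture()              # ISP / camera path
    pose = imu.read()                     # gyro, accelerometer, compass
    anchors = tracker.match(frame, pose)  # CPU/DSP feature matching
    renderer.draw(frame, anchors)         # GPU compositing of overlays

def run_benchmark(camera, imu, tracker, renderer, frames=500):
    start = time.perf_counter()
    for _ in range(frames):
        ar_frame(camera, imu, tracker, renderer)
    return frames / (time.perf_counter() - start)  # frames per second
```

The resulting frames-per-second figure only looks good if every one of those blocks keeps up, which is exactly the property that makes AR attractive as a whole-device benchmark.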
What do you guys think? Do benchmarks need to be improved for mobile devices? Should they focus more on battery life and daily usage? Would an AR test be the ultimate benchmark? Stay tuned for more details on mobile benchmarks and Qualcomm’s new quad-core S4 processor.
Today at the Qualcomm mobile benchmarking workshop in San Francisco, Jon Peddie of Jon Peddie Research suggested that using augmented reality (AR) to test the performance of mobile devices could be “the mother of all tests.” By stressing all processors and sensors on modern smartphones and tablets — including the CPU, GPU, DSP, ISP (image processor), GPS, gyro, compass, accelerometer, barometer, mic and camera — the benchmark would represent the worst-case scenario in terms of computing load. While AR adoption is still in its infancy amongst consumers — technology such as Project Glass still faces serious challenges — Qualcomm has been very active in the field over the years and even provides an SDK for developers. Could this be a hint of what’s coming from the company in terms of benchmarking, beyond Neocore and Vellamo? Let us know what you think in the comments.