The Apple iPad won’t be out for another 60 long days for us mere mortals, so we’ve got our hands on its SDK — it’s the next best thing for now, as you can see in the gallery of screenshots below. Strangely, the emulator’s bezel is a tad thinner than the real thing, but we’ll get over it. Enjoy!
Oh, Meizu, how do we love thee? Let us count the ways. The KIRF-rooted company has released a promo touting the UI for its M8 phone. If any of it seems familiar, just remember that imitation is the sincerest form of flattery. Honestly, we’re a bit surprised just how professional the video feels, and the tune’s pretty memorable to boot. Get it caught in your head all day; footage is after the break.
Okay, so HTC doesn’t own exclusive rights to put a flip-clock display on phones, but the style is something of a hallmark of Sense UI, and now here one is in Samsung’s Bada platform — though seemingly tucked away in the date setting window. That’s just one of a set of new screens uncovered at Samsung Hub showing off a media player that loves to show off album art and to truncate artist names, a photo browser full of delicious stock imagery, and that very familiar looking home screen to the left above. Things really don’t look bad at all, but we’re still having a hard time getting excited about this one.
Among the seemingly thousands of Android-powered HTC handsets rumored for the first half of 2010, little is known of the mysterious Espresso — the codename was found in a 2.1 ROM and a sketchy report claims that it’ll have a QWERTY keyboard for an MWC announcement, but other than that, we’re in the dark. Anyhow, Italian site hdblog.it now claims to have some shots ripped off the Espresso’s display, and at a glance, you can tell this isn’t quite the Sense we’re used to from the Hero. The bar along the bottom now features direct access to People — a feature we’d already heard would be revised for HTC’s next round of Android phones — and app icons have apparently been graced with translucent surrounds that are… well, not exactly pretty. We’ve got to keep our opinions in check until we actually see a shipping ROM, of course, so hopefully those talks of an MWC unveiling in February pan out.
We hate to harsh on a new phone platform — what could be more exciting, after all, than a whole new take on handset software? — but we’re pretty confused by Samsung’s Bada. Still, these leaked screenshots fill us with some hope: it looks fairly pretty, and quite a bit more intuitive than the standard Samsung UI. It also seems to be an odd visual mashup of Android and Symbian, but in a good sort of way, and we look forward to the sort of democratization of touchphones it seems to represent. There, that wasn’t very harsh-ey at all! Now check out the developer-oriented video after the break to let a new wave of confusion wash over you.
We only got the briefest of glimpses at the new UI approach in Synaptics’ collaborative Fuse concept handset, and now TAT (The Astonishing Tribe, the folks behind the original Android UI) has posted a brief clip that gives a better idea of the full UI. It’s pretty wild, with some sort of rendering engine that really emphasizes depth, lighting and motion. We’re not sure it’s the most usable UI on the planet, but it’s certainly one of the oddest we’ve witnessed. Check it out in motion after the break.
Some smart students at MIT have figured out how to turn a typical LCD into a low-cost, 3-D gestural computing system.
Users can touch the screen to activate controls on the display but as soon as they lift their finger off the screen, the system can interpret their gestures in the third dimension, too. In effect, it turns the whole display into a giant sensor capable of telling where your hands are and how far away from the screen they are.
“The goal with this is to be able to incorporate the gestural display into a thin LCD device like a cell phone and to be able to do it without wearing gloves or anything like that,” says Matthew Hirsch, a doctoral candidate at the Media Lab who helped develop the system. MIT will present the idea at the Siggraph conference on Dec. 19.
The latest gestural interface system is interesting because it has the potential to be produced commercially, says Daniel Wigdor, a user experience architect for Microsoft.
“Research systems in the past put thousands of dollars worth of camera equipment around the room to detect gestures and show it to users,” he says. “What’s exciting about MIT’s latest system is that it is starting to move towards a form factor where you can actually imagine a deployment.”
Gesture recognition is the area of user interface research that tries to translate movement of the hand into on-screen commands. The idea is to simplify the way we interact with computers and make the process more natural. That means you could wave your hand to scroll pages, or just point a finger at the screen to drag windows around.
MIT has become a hotbed for researchers working in the area of gestural computing. Last year, an MIT researcher showed a wearable gesture interface called the ‘SixthSense’ that recognizes basic hand movements.
But most existing systems involve expensive cameras or require you to wear different-colored tracking tags on your fingers. Some systems use small cameras that can be embedded into the display to capture gestural information. But even with embedded cameras, the drawback is that the cameras are offset from the center of the screen and won’t work well at short distances. They also can’t switch effortlessly between gestural commands (waving your hands in the air) and touchscreen commands (actually touching the screen).
The latest MIT system uses an array of optical sensors arranged right behind a grid of liquid crystals, similar to those used in LCD displays. The sensors can capture the image of a finger when it is pressed against the screen. But as the finger moves away, the image gets blurred.
By displacing the layer of optical sensors slightly relative to the liquid-crystal array, the researchers can modulate the light reaching the sensors and use it to capture depth information, among other things.
In this case, the liquid crystals serve as a lens and help generate a black-and-white pattern that lets light through to the sensors. That pattern alternates rapidly with whatever image the LCD is displaying, so the viewer doesn’t notice it.
The pattern also allows the system to decode the images better, capturing the same depth information that a pinhole array would, but doing it much more quickly, say the MIT researchers.
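The pinhole-array comparison hints at the basic geometry involved: each pinhole in the mask projects a small image of the finger onto the sensor layer behind it, and the size of that image scales with the mask-to-sensor gap divided by the finger’s distance from the screen. Here’s a toy sketch of that relationship — to be clear, this is illustrative pinhole geometry only, not MIT’s actual decoding algorithm, and every dimension in it is invented:

```python
# Toy pinhole-geometry sketch: how image size on the sensor layer relates
# to a finger's distance from the screen. Illustrative only -- this is NOT
# MIT's decoding algorithm, and all dimensions below are made up.

def image_width(finger_width_mm: float, depth_mm: float, gap_mm: float) -> float:
    """Width of the finger's image behind a single pinhole.

    A pinhole magnifies by (gap / depth), so the image shrinks as the
    finger lifts farther away from the screen.
    """
    return finger_width_mm * gap_mm / depth_mm


def estimate_depth(image_width_mm: float, finger_width_mm: float, gap_mm: float) -> float:
    """Invert the pinhole relation to recover depth from a measured image width."""
    return finger_width_mm * gap_mm / image_width_mm


if __name__ == "__main__":
    finger = 15.0  # assumed fingertip width, mm
    gap = 2.0      # assumed mask-to-sensor spacing, mm
    for depth in (10.0, 50.0, 100.0):
        w = image_width(finger, depth, gap)
        print(f"finger at {depth:5.1f} mm -> image {w:.2f} mm wide, "
              f"recovered depth {estimate_depth(w, finger, gap):.1f} mm")
```

Measuring those image sizes from an array of overlapping pinhole projections is, of course, the hard part — which is where the rapidly alternating mask pattern comes in.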
The idea is so novel that MIT researchers haven’t been able to get LCDs with built-in optical sensors to test, though they say companies such as Sharp and Planar have plans to produce them soon.
For now, Hirsch and his colleagues at MIT have mocked up a display in the lab to run their experiments. The mockup uses a camera that is placed some distance from the screen to record the images that pass through the blocks of black-and-white squares.
The bi-directional screens from MIT can be manufactured in a thin, portable package that requires few additional components compared with LCD screens already in production, says MIT. (See video below for an explanation of how it works.)
Despite the ease of production, it will be five to ten years before such a system could make it into the hands of consumers, cautions Microsoft’s Wigdor. Even with the hardware in hand, it’ll take at least that long before companies like Microsoft make software that can make use of gestures.
“The software experience for gestural interface systems is unexplored in the commercial space,” says Wigdor.
It’s Nokia Capital Market Day again, which means that the boys from Espoo are fawning over investors and giving them a reason to stick around in 2010. And you know what? It sure sounds promising for gadget nerds. Why the optimism? Easy: Nokia is hell-bent on redefining the user experience of its Symbian devices. To quote CEO Olli-Pekka Kallasvuo: “In 2010, we will drive user experience improvements, and the progress we make will take the Symbian user interface to a new level.” To bolster this proclamation, the very first bullet point listed under Nokia’s Devices and Services operational priorities is “improve our user experience” — something that would thrill us to no end if it happens.
The revamped Symbian UI is set to deliver on two “major product milestones” in the first and second halves of the year. Nokia will also deliver its first Maemo 6 “mobile computer” in the second half of 2010, flanked by a significantly increased proportion of “touch and/or QWERTY devices” in its smartphone portfolio. It’s worth noting that nearly all the discussion centers on Symbian, with just a single mention of Maemo and its “iconic user experience” in the forward-looking press release. Developers will be happy to hear that Nokia will also continue to scale its services geographically while continuing to enhance developer tools like Qt 4.6, announced yesterday. Financially speaking, Nokia expects the erosion of its average selling price to slow compared with recent years. That’s good news as Nokia attempts to grow its margins. However, while Nokia expects industry-wide mobile device volumes to be up approximately 10% in 2010, it sees its own mobile device volume market share as flat in 2010, compared to 2009.
Be clear on this, though: the incredibly frustrating S60 5th Edition user experience was by far the biggest complaint we had when reviewing Nokia’s flagship N97; having the most bullet points on a list of features is not what it takes to lure consumers anymore (if it ever was). If Nokia can better the best-in-class experiences carved out by Apple, Palm, and HTC with its Sense UI, then consumer mindshare, and our hearts, will follow.
We’ve already gotten a glimpse of an updated on-screen keyboard seemingly set for inclusion in the next update to Windows Mobile 6.5, and it now looks like Microsoft might have even more changes on tap to keep folks satisfied in the buildup to Windows Mobile 7. Apparently, something that may or may not be called Windows Mobile 6.5 ‘second edition’ adds a number of UI updates that are supposedly designed to make it more usable with capacitive touchscreens. The biggest of those changes, it seems, is that the clickable buttons from the top bar have been removed in favor of a larger, more finger-friendly bar at the bottom — which, judging from appearances, is not quite ready for prime time. Of course, all of this is still just based on what’s been turned up in an early build of the OS, but at least one unnamed Microsoft representative has reportedly confirmed that the updated UI does indeed come from Microsoft, though he apparently wouldn’t confirm much else.
This site is run by Sascha Endlicher, M.A., during ungodly late-night hours. Wanna know more about him? Connect via social media by jumping to about.me/sascha.endlicher.