Why ‘Gorilla Arm Syndrome’ Rules Out Multitouch Notebook Displays

Apple’s new MacBook Air borrows a lot of things from the iPad, including hyperportability and instant-on flash storage. But the Air won’t use an iPad-like touchscreen. Neither will any of Apple’s laptops. That’s because of what designers call “gorilla arm.”

And while Apple points to its own research on the problem, it’s an issue touchscreen researchers have recognized for decades.

“We’ve done tons of user testing on this,” Steve Jobs said in Wednesday’s press conference, “and it turns out it doesn’t work. Touch surfaces don’t want to be vertical. It gives great demo, but after a short period of time you start to fatigue, and after an extended period of time, your arm wants to fall off.”

This is why, Jobs says, Apple has invested heavily in developing multitouch recognition for horizontal surfaces: the trackpads on its laptops, its current-generation Magic Mouse and its new standalone Magic Trackpad.
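
To give a flavor of what multitouch recognition involves at the trackpad level, here’s a minimal, hypothetical sketch of one gesture: telling a pinch apart from a two-finger scroll by how the separation between the contact points changes. The function names and threshold are illustrative assumptions, not Apple’s recognizer.

```python
import math

def distance(p, q):
    """Euclidean distance between two (x, y) contact points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def classify_two_finger_gesture(start, end, threshold=0.1):
    """Classify a two-finger move from contact points sampled at two times.

    start, end: ((x1, y1), (x2, y2)) finger positions at the earlier and
    later sample. Returns 'pinch-out', 'pinch-in' or 'scroll'.
    """
    d0, d1 = distance(*start), distance(*end)
    if d0 == 0:
        return "scroll"
    change = (d1 - d0) / d0  # relative change in finger separation
    if change > threshold:
        return "pinch-out"   # fingers spreading apart: zoom in
    if change < -threshold:
        return "pinch-in"    # fingers coming together: zoom out
    return "scroll"          # separation roughly constant: treat as scroll

# Fingers move from 0.2 apart to 0.4 apart: reported as a pinch-out.
print(classify_two_finger_gesture(((0.2, 0.5), (0.4, 0.5)),
                                  ((0.1, 0.5), (0.5, 0.5))))
```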

Avi Greengart of Current Analysis agrees it’s a smart move, borne out of wisdom gathered from watching mobile and desktop users at work.

“Touchscreen on the display is ergonomically terrible for longer interactions,” he says. “So, while touchscreens are popular, Apple clearly took what works and is being judicious on how they are taking ideas from the mobile space to the desktop.”

But Apple didn’t have to do its own user testing. It didn’t even have to look at the success or failure of existing touchscreen PCs in the marketplace. Researchers have documented the usability problems of vertical touch surfaces for decades.

“Gorilla arm” is a term engineers coined about 30 years ago to describe what happens when people try to use these interfaces for an extended period of time. It’s the touchscreen equivalent of carpal-tunnel syndrome. According to the New Hacker’s Dictionary, “the arm begins to feel sore, cramped and oversized — the operator looks like a gorilla while using the touchscreen and feels like one afterwards.”

According to the NHD, the phenomenon is so well-known that it’s become a stock phrase and cautionary tale well beyond touchscreens: “‘Remember the gorilla arm!’ is shorthand for ‘How is this going to fly in real use?’.” You find references to the “gorilla-arm effect” or “gorilla-arm syndrome” again and again in the scholarly literature on UI research and ergonomics, too.

There are other problems with incorporating touch gestures on laptops, regardless of their orientation. Particularly for a laptop as light as the MacBook Air, continually touching and pressing the screen could tip it over, or at least make it wobble. This is one reason I dislike using touchscreen buttons on cameras and camera phones — without a firm grip, you introduce just the right amount of shake to ruin a photo.

Touchscreens work for extended use on tablets, smartphones and some e-readers because you can grip the screen firmly with both hands, and you have the freedom to shift between horizontal, vertical and diagonal orientations as needed.

On a tablet or smartphone, too, the typing surface and touch surface are almost always on the same plane. Moving back and forth between horizontal typing and vertical multitouch could be as awkward as doing everything on a vertical screen.

This doesn’t mean that nothing but a multitouch trackpad will work. As Microsoft Principal Researcher (and multitouch innovator) Bill Buxton says, “Everything is best for something and worst for something else.”

We’ve already seen vertical touchscreens and other interfaces work well when used in short bursts: retail and banking kiosks, digital whiteboards and some technical interfaces. Touchscreen computing is also well established in non-mobile horizontal interfaces, like Microsoft’s Surface. Angled touch surfaces modeled on an architect’s drafting table, like Microsoft’s DigiDesk concept, are promising too.

In the near future, we’ll see even more robust implementations of touch and gestural interfaces. But it’s much more complex than slapping a capacitive touchscreen, however popular they’ve become, onto a popular device and hoping the two will work together like magic.

Design at this scale, with these stakes, requires close and careful attention to the human body — not just arms, but eyes, hands and posture — and to the context in which devices are used in order to find the best solution in each case.

See Also:

Apple loses, challenges patent verdict surrounding Cover Flow and Time Machine

Remember that one random company that sued Apple back in March of 2008 for ripping off its display interface patents? Turns out the suit was filed in the Eastern District of Texas, a hotbed for patent trolls who know they stand a better-than-average chance of winning simply because of where their cases are heard. Sure enough, Cupertino’s stock of lawyers is today being forced to challenge a loss after a jury verdict led to Apple being ordered to pay “as much as $625.5 million to Mirror Worlds for infringing patents related to how documents are displayed digitally.” Ouch. Naturally, Apple has asked U.S. District Judge Leonard Davis for an emergency stay, noting that there are issues with two of the three patents; furthermore, Apple has claimed that Mirror Worlds would be “triple dipping” if it were to collect $208.5 million on each patent (three awards of $208.5 million account for the full $625.5 million). In related news, the judge is also considering a separate Apple request (one filed prior to the verdict) to “rule the company doesn’t infringe two of the patents” — if granted, that would “strike the amount of damages attributed to those two patents.” In other words, this whole ordeal is far from over. We can’t say we’re thrilled at the thought of following the play-by-play here, but this could definitely put a mild dent in Apple’s monstrous $45.8 billion pile of cash and securities. Or as some would say, “a drop in the bucket.”


Multi-robot command center built around Microsoft Surface (video)

While we’ve given up on ever winning an online match of StarCraft II, that doesn’t mean top-down unit control schemes are only for nerds in their mom’s basement with their cheap rush tactics and Cheeto fingers and obscene triple-digit APMs (we’re not bitter or anything). In fact, we kind of like the look of this robot control interface, developed at UMass Lowell by Mark Micire as part of his PhD research. The multitouch UI puts Microsoft Surface to good use, with gestures and contextual commands that make operating an unruly group of robots look easy, plus a console-inspired touch control setup for operating a single bot from a first-person perspective. There are a couple of videos after the break: the first shows Mark commanding an army of virtual robots, using Microsoft Robotics Developer Studio to simulate his soldiers and environment, while the second shows his first-person UI guiding a real robot through a maze, in what amounts to a very, very expensive version of that Windows 95 maze screensaver.


Magical Cube From the Future Creates True 3D Light Effect

Interactive designer Graham Plumb has created a stunning 3D interface that projects beams of light through a transparent cube filled with water and “a specially formulated emulsion.” The effect is a set of three-dimensional structures constructed out of pure light, in a device he dubs the Reactive Cube (click through for video of them in action).

Like music or fashion, new tech ideas often start as some high-minded proof-of-concept exercise. While not meant for mass consumption, the ideas filter down into everyday use: think Apple’s transparent and translucent iMacs of the early aughts. While this particular interface is unlikely to find its way into personal computers or mobile phones anytime soon, it seems like it could be tweaked to perform public display duties. Imagine a mall fountain with glowing ads swimming around, or a three-dimensional interactive map encased in a cube in a museum foyer.

via Make

Intel’s mind reading computer could bring thought controlled interfaces to a whole new, frightening level

Thought-controlled devices are pretty primitive at this point. Sure, everyone from Honda to the U.S. Army (of course) is conducting research, but so far we don’t have much to show for it all besides an evening of experimental music in Prague. If the kids at Intel have their way, computers will soon be able to look at a person’s brain activity and determine the actual words they’re thinking. The idea here is that the activity generated in the average person by individual words can be mapped and stored in a database, to be matched against that of someone using the thought-control interface. So far, results have been promising: an early prototype can differentiate between words like screwdriver, house and barn by using a magnetic resonance scanner that measures something like 20,000 points in the brain. Anything more ambitious, such as dictating letters or searching Google with your mind alone, is probably years in the future, though when it does come to pass we expect to see a marked increase in expletive-filled liveblogs.
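
To make that database-matching idea concrete, here’s a minimal sketch of the approach as described: nearest-neighbor matching of a scan against stored per-word activation patterns. The word list, vector size and cosine-similarity measure are illustrative assumptions, not details of Intel’s system, and random vectors stand in for real scan data.

```python
import numpy as np

rng = np.random.default_rng(0)
N_POINTS = 20_000  # the article cites roughly 20,000 measured points

# Hypothetical database: one averaged activation pattern per known word.
word_patterns = {
    word: rng.normal(size=N_POINTS)
    for word in ("screwdriver", "house", "barn")
}

def cosine(a, b):
    """Cosine similarity between two activation vectors."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def match_word(reading):
    """Return the stored word whose pattern best matches the reading."""
    return max(word_patterns, key=lambda w: cosine(word_patterns[w], reading))

# Simulate a noisy scan of someone thinking "barn" and decode it.
scan = word_patterns["barn"] + rng.normal(scale=0.5, size=N_POINTS)
print(match_word(scan))  # prints "barn"
```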


Happy 15th Birthday to Windows 95, the Ugly Duckling that Conquered Your Desktop [Techversaries]

You’ll be forgiven if Windows 95 doesn’t summon a burst of nostalgia. It was never pretty, often cantankerous, and, for the most part, our only option. But within two years of its release, it was running on some 70% of the world’s desktops.

Projected 3D Desk Lamp: Like a Real-Life Minority Report

Tom Cruise’s virtual info-surfing in the film Minority Report is often cited as the ideal future for information interactivity. The film is pretty horrendous, but who wouldn’t want to surf the web (let alone some psychic’s brainwaves) like that? A true TCMR (Tom Cruise Minority Report) has come closer to reality with the work of a researcher from the National Taiwan University in Taipei.

The researcher, Li-Wei Chan, has created a projected tabletop interface that can interact with a single user or several at once. It’s like a gigantic touchscreen, driven by a single projector, that lets each user extract the information they want via hand gestures or other devices.

Words don’t do it justice. Check the video.

Now scientists just need to get in gear bringing Jurassic Park to life. It’s been like 17 years. I was expecting to have my own triceratops by now. Get on that, science!

Via New Scientist

Holographic Displays, Robot Eyes Hint at Your Interactive Future

The eyes may be the window to the soul. But what do you see when you look into robotic eyes so real that it’s almost impossible to tell they are just empty, mechanical vessels?

At Siggraph, the annual conference for graphics geeks that ended last week, Disney researchers created an animatronic eye that moves in a lifelike way, makes eye contact and tracks those who pass by.

“We wanted two things from the eye,” says Lanny Smoot, senior research scientist at Disney Research. “It should be able to see or have vision, and it should move as smoothly and fluidly as the human eye.”

The animatronic eye was one of the 23 exhibits in the emerging-tech section of the conference.

“Each year there’s always been some consistent themes,” says Preston Smith, emerging-tech chair at Siggraph 2010. “But this year there hasn’t been one thing that has leapt out in front of others.”

Instead, a variety of technologies jostled for attention: new 3-D displays, augmented reality and robotics. Siggraph 2010 showed research not just from universities but also from corporate labs, including Disney’s and Sony’s.

A Seeing Eye

Disney Research’s animatronic eye is relatively simple in its design. The eye has a transparent-plastic inner sphere, ringed by a set of magnets and painted to look just like a human eye. It is suspended in fluid inside a transparent outer shell. Electromagnets outside the shell move the eye sideways or up and down, giving it a smooth and easy motion.

“It is as fast as the human eye and as good as the human eye,” says Smoot.

The pupil and the back of the eye are clear. A camera placed at the rear of the eye helps the eye see. Smoot hopes the mechanism can be used to create prosthetic eyes.

“The prosthetic eye based on this won’t restore sight, but it can restore cosmetic appearance to those who have lost an eye,” says Smoot. The animatronic eye won the “best in show” prize at Siggraph this year.

Photo: Daniel Reetz/Disney Research

Entelligence: Let’s get digital

Entelligence is a column by technology strategist and author Michael Gartenberg, a man whose desire for a delicious cup of coffee and a quality New York bagel is dwarfed only by his passion for tech. In these articles, he’ll explore where our industry is and where it’s going — on both micro and macro levels — with the unique wit and insight only he can provide.

One of the more recent trends in UI design has been the attempt to make the digital appear analog. It arguably started with the NeXT OS, which had photorealistic icons and used clever grayscale techniques to give three-dimensional depth to windows, scroll bars and other elements. Today, Apple’s iPhone compass app looks like it might be more at home on an 18th-century clipper ship, and the voice recorder app looks at home in a recording studio somewhere around 1950 — tap on the “microphone” and the VU meter will react much as it would in real life. Google’s added subtle 3D effects to Android’s app scrolling. I hadn’t thought much about this trend until I recently spent some time using Windows Phone 7.

It’s perhaps a minor issue, but one of the things I like about WP7 is that it’s not a digital UI pretending to be analog. The user interface is flat. There are no photorealistic depictions of real-world items, no shading and no 3D effects. Everything is conveyed through the use of fonts, shapes and color. It’s digital and it’s proud. Overall, I like it, and the more I use it, the more I prefer it. Returning to a more digital approach meant Microsoft was able to rethink the nature of applications and services and create the concept of hubs, where related functions live together without the need for separate applications. It takes some getting used to, but the more I use it, the more natural it feels.


Slurp digital eyedropper sucks up, injects information wirelessly (video)

How does Jamie Zigelbaum, a former student at the MIT Media Lab, celebrate freedom from tyranny, drool-worthy accents and “standing in the queue”? By creating Slurp, of course. In what’s easily one of the most jaw-dropping demonstrations of the year, this here digital eyedropper is a fanciful new concept that could certainly grow some legs if implemented properly in the marketplace. Designed as a “tangible interface for manipulating abstract digital information as if it were water,” Slurp can “extract (slurp up) and inject (squirt out) pointers to digital objects,” enabling connected machines and devices to have information transferred from desktop to desktop (or desktop to speakers, etc.) without any wires to bother with. We can’t even begin to comprehend the complexity behind the magic, but all you need to become a believer is embedded after the break. It’s 41 seconds of pure genius, we assure you.
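
The software half of that idea is easy to sketch: the dropper carries a pointer (a URI, say) to a digital object rather than the object itself, picking it up from one device and handing it to another. Here’s a toy illustration under that assumption; the class and method names are hypothetical, not Zigelbaum’s implementation.

```python
class SlurpDropper:
    """Holds at most one pointer (e.g. a URI) to a digital object."""
    def __init__(self):
        self._pointer = None

    def slurp(self, pointer: str) -> None:
        """Extract: pick up a pointer from a source device."""
        self._pointer = pointer

    def squirt(self, device: "Device") -> None:
        """Inject: hand the pointer to a target device, then empty the dropper."""
        if self._pointer is not None:
            device.receive(self._pointer)
            self._pointer = None

class Device:
    def __init__(self, name: str):
        self.name = name

    def receive(self, pointer: str) -> None:
        # A real device would fetch or play the object behind the pointer.
        print(f"{self.name} received {pointer}")

dropper = SlurpDropper()
dropper.slurp("http://example.com/song.mp3")  # touch the dropper to a desktop
dropper.squirt(Device("speakers"))            # touch the dropper to the speakers
```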
