Hydraulics Could Enable Fullscreen Braille Display


For most blind computer users, surfing the internet or catching up on e-mail means reading just one line at a time, because commercially available braille displays can’t show full pages of text.

Researchers from North Carolina State University now say they have devised a display that would allow visually challenged users to read a full page at a time — and at a much lower cost than existing displays.

“We have developed a low-cost, compact, full-page braille display that is fast and can be used in PDAs, cellphones and even GPS systems,” says Dr. Peichun Yang, one of the researchers working on the project, who is himself blind.

A full-page display is better because it allows readers to skip around and read just the parts they want, instead of forcing them to go through the text line by line. A full-page display also presents more information in less time.

Braille characters, developed by Louis Braille in 1821, are created by a pattern of raised dots. Letters, punctuation and numerals are represented in cells, each made up of six dots arranged in a 2×3 matrix. A dot may be raised at any of the six positions to form a character.
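To make the cell structure concrete, here is a minimal Python sketch (ours, purely illustrative and not part of Dr. Yang’s project) that represents a 2×3 cell as six on/off dots, using the standard numbering of dots 1–3 down the left column and 4–6 down the right; the handful of letter patterns included are from the standard braille alphabet.

    # Minimal sketch: a 2x3 braille cell as six on/off dots.
    # Dots are numbered 1-3 down the left column and 4-6 down the right,
    # the convention used for standard literary braille.

    LETTER_DOTS = {          # raised dots for a few standard letters
        "a": {1},
        "b": {1, 2},
        "c": {1, 4},
        "d": {1, 4, 5},
        "e": {1, 5},
    }

    def cell_pattern(char):
        """Return the raised/flat state of all six dots for one character."""
        dots = LETTER_DOTS.get(char.lower(), set())
        return [dot in dots for dot in range(1, 7)]

    def render_cell(char):
        """Print a cell as a 2x3 grid ('o' = raised, '.' = flat)."""
        pattern = cell_pattern(char)
        for row in range(3):
            left, right = pattern[row], pattern[row + 3]
            print(("o" if left else ".") + ("o" if right else "."))

    render_cell("d")
    # oo
    # .o
    # ..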

“Braille is very significant, and statistically about 90 percent of blind people who have a job can read braille,” says Dr. Yang. “It’s a very important part of their ability to read.”

Braille displays on the market now use piezo-ceramics, in which a 2-inch-long lever forces up the dots, explains Dr. Yang. “It’s expensive and limiting,” he says.

As a result, a typical braille display today has just one line of 80 cells, and can cost up to $8,000.

Instead, Dr. Yang and his team developed a new way to create the raised dots. Each cell in their display uses what is called a “hydraulic and latching mechanism.”

“The mechanism can offer a large displacement and fast response time simultaneously, which is the key to a good commercial braille display,” says Dr. Yang.

A four-line display based on the new system could cost around $1,000, and full-page displays could follow.

Here’s how Dr. Yang’s technology works. Picture each cell as a rectangular cavity filled with liquid. The top and bottom each have a small opening sealed with a flexible diaphragm. On the sides are four bendable actuators made of electroactive polymers, materials that change shape when a voltage is applied.

By manipulating the voltage, two facing actuators can be made to squeeze the cavity and displace the fluid inside, pushing it up toward the top opening and raising the dot. Once the dots are raised, a latching mechanism supports the pressure of a reader’s fingers as the dots are read. A refreshable braille dot has a response time of around 30 milliseconds.

Dr. Yang and his team hope to build prototype displays within a year and, if those succeed, to produce them commercially.

Photo: Braille sign at the Port Museum (reinvented/Flickr)


Coffee-Cup Collar Expands Like B-Movie Special Effect

You may remember Scott Amron from his Brush & Rinse toothbrush, which channels a jet of rinsing water into your mouth. Or his Keybrid, a split-ring key with its own keyring built-in. Both of these concept designs made it to market, which sends a chill of fear through me as I consider that one day, I may actually see his new invention in the wild.

The Heatswell is a heat-activated coating for a paper coffee cup. When that cup is filled with a tasty hot beverage, the band swells into a thick, insulating, cloth-like material, offering both grip and heat protection for your fingers. It’s thin, safe and cheaper than a cup and collar together. So what’s the problem? Take a look at the video. The Heatswell does indeed swell impressively, only it swells like a diseased tree trunk blistering under napalm.

The end of the video has even more terrifying mutations, but once we get over the accelerated cancerous growths, we can see that this cup could actually end up in a Starbucks near you (and trust me, there is a Starbucks near you). Not only is it cheaper, and made from an FDA-approved material, but it cuts out a step of the coffee-serving process and offers the opportunity for branding – although I’m not sure which company would like to see its logo ballooning like a necrotic canker.

Scott is already sending out samples. I’m hoping to get one and combine it with one of those self-heating hand-warmers for my own invention: the Heat Engine, a perpetual-motion machine that will power the world!

Heatswell [Amron Experimental. Thanks, Scott!]


Gaming Vest Makes Virtual Fights Real and Painful

Next time your character gets shot while playing Call of Duty, it could hurt for real. A tactile gaming vest created at the University of Pennsylvania can make wearers feel a punch or a gunshot in sync with what’s happening on screen.

Ouch!

“The idea is to develop a haptic interface for first person shooting games,” says Saurabh Palan, a graduate student at the university who is working on the project, on his website. “The feeling of bullet hit, body impact and vibration or a shoulder tap will enhance the gaming experience and fun.”

It’s not all play with the vest: It can be modified for real-time simulation and training by the military, says Palan.

The vest uses four solenoid actuators in the chest and shoulders in front, and two solenoids in the back, explains IEEE Spectrum. Vibrating motors clustered against the shoulder blades simulate a sensation similar to getting stabbed. All the components are linked to the game so that the appropriate solenoid “fires” depending on where the character in the video game is getting hit.
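As a rough illustration of that routing step, here is a hypothetical Python sketch that maps hit events reported by a game to actuator channels. The region names, channel numbers and firing function are assumptions made for illustration, not the actual University of Pennsylvania firmware.

    # Hypothetical sketch: routing in-game hit locations to vest actuators.
    # Channel numbers and region names are invented for illustration.

    SOLENOID_CHANNELS = {        # four in front, two in back
        "chest_left": 0,
        "chest_right": 1,
        "shoulder_left": 2,
        "shoulder_right": 3,
        "back_left": 4,
        "back_right": 5,
    }

    VIBRATION_CHANNELS = {       # motor clusters against the shoulder blades
        "shoulder_blade_left": 6,
        "shoulder_blade_right": 7,
    }

    def handle_hit(region, hit_type, fire_channel):
        """Fire the actuator matching where the player's character was hit.

        `fire_channel` stands in for whatever actually pulses the hardware,
        e.g. a serial write to a microcontroller.
        """
        if hit_type == "stab":
            side = region.split("_")[-1]                     # "left" or "right"
            channel = VIBRATION_CHANNELS.get("shoulder_blade_" + side)
        else:
            channel = SOLENOID_CHANNELS.get(region)
        if channel is not None:
            fire_channel(channel, duration_ms=50)

    # Example: a bullet hit on the right shoulder.
    handle_hit("shoulder_right", "bullet",
               lambda ch, duration_ms: print(f"pulse channel {ch} for {duration_ms} ms"))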

IEEE Spectrum says the entire experience is “closer to a paintball excursion, but it doesn’t hurt as much.” Still, the gaming vest sounds pretty masochistic to me. But for those who crave greater realism in their video games, this could be a good way to feel the pain without the bruises.

Photo: Gaming vest (Saurabh Palan)


3-D Tabletop Display Gets Rid of the Glasses


A handheld, cube-shaped display promises to offer all the thrills of 3-D without the annoyance of the glasses. The device, called pCubee, arranges five LCD screens into a box-like shape so viewers can pick it up, watch content or play with virtual objects inside.

Weighing in at about three pounds, pCubee gives users a chance to poke and prod objects virtually using a stylus. You can shake the cube, tilt it or interact with a touchscreen, all while retaining the 3-D experience.

“Most people think 3-D is all about stereo and having alternating frames to help the brain perceive depth,” says Sidney Fels, who leads the Human Communication Technologies Lab at the University of British Columbia, where the project was designed. “What we wanted to offer is a fish-tank-like experience in a handheld device.”

A wave of successful 3-D movies such as Avatar and Alice in Wonderland has spurred interest in bringing the 3-D viewing experience closer to consumers. Major consumer-electronics companies such as Samsung, LG and Panasonic have started selling 3-D TVs that are fundamentally based on the principle of stereoscopy, which presents each eye with a slightly different image of the same scene so that when the brain fuses the two images, it perceives depth. That also means viewers have to wear glasses to get the 3-D effect.

A different principle, called motion parallax, is at work in the pCubee. Motion parallax is the apparent shift in an object’s position as your viewpoint moves, with nearby objects appearing to shift more than distant ones. It’s a very effective cue for 3-D, says Fels.

“Our brains are wired to perceive motion parallax and interpret it as 3-D,” he says. “It’s one of the reasons why even if you have just one eye, you can do reasonably well with depth in the real world.”

The pCubee’s design helps the brain interpret this better.

“The fact that it is handheld greatly increases motion parallax,” says Ian Stavness, one of the researchers who worked on the project. “If it were fixed to the desk, you would have to move your head around and it would not be so comfortable.”

And as the video shows, pCubee is fun and easy to use.

The pCubee has three graphics pipelines that drive the screens on the sides of the box. A motion tracker watches the pCubee and the user’s head. The software that powers the device ensures that the user’s view of the box and the rendered perspective on each screen are in sync.
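A simplified sketch of that head-coupled update loop is below: each time the tracker reports a new head position, recompute a view for every panel and redraw it. The panel positions and tracker interface are assumed for illustration, and a real system like pCubee would use proper off-axis projection for each screen rather than this plain look-at view.

    # Simplified sketch of head-coupled rendering for a five-screen cube.
    # Panel centers and the tracker interface are assumed; a faithful
    # implementation would build an off-axis projection per panel.

    import numpy as np

    def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
        """Standard right-handed look-at view matrix."""
        forward = target - eye
        forward = forward / np.linalg.norm(forward)
        right = np.cross(forward, up)
        right = right / np.linalg.norm(right)
        true_up = np.cross(right, forward)
        view = np.eye(4)
        view[0, :3], view[1, :3], view[2, :3] = right, true_up, -forward
        view[:3, 3] = view[:3, :3] @ -eye
        return view

    # Assumed panel centers in the cube's own coordinate frame (meters).
    SCREEN_CENTERS = {
        "top":   np.array([0.0,   0.09,  0.0]),
        "front": np.array([0.0,   0.0,   0.09]),
        "back":  np.array([0.0,   0.0,  -0.09]),
        "left":  np.array([-0.09, 0.0,   0.0]),
        "right": np.array([0.09,  0.0,   0.0]),
    }

    def render_frame(head_position, draw_screen):
        """Redraw every panel for the current tracked head position."""
        eye = np.asarray(head_position, dtype=float)
        for name, center in SCREEN_CENTERS.items():
            view = look_at(eye, center)
            draw_screen(name, view)   # hand the view matrix to the renderer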

Fels says his team hopes to commercialize pCubee and get it into the hands of consumers. The researchers are refining the design and experimenting with OLED screens to replace the LCD panels used currently.

“The pCubee can be used as a game platform, a CAD-CAM platform and in museums,” says Fels. “We imagine this as something that would be on everybody’s coffee table.”

[via Technabob]

Photo: pCubee


High-Speed Camera Scans Books in Seconds

The Ishikawa Komuro Laboratory in Tokyo is better known for robot hands that can dribble and catch balls and spin pencils between their fingers. Now, two researchers there have taken this speedy sensing tech and applied it to the ripping of paper books.

Books are different from other kinds of media, like music and movies — it’s very hard to get them into a computer. There is no equivalent of CD or DVD rippers like iTunes or Handbrake. This not only makes piracy laborious, it also stops you from turning your own books into e-books.

This high-speed scanner changes that, at least if you have the room and tech skills to build one. Using a camera that shoots at 500 frames per second, lab workers Takashi Nakashima and Yoshihiro Watanabe can scan a 200-page book in under a minute. You just hold the book under the camera and flip through the pages as if shuffling a deck of cards. The camera records the images, and software turns the warped pictures of mid-flip pages into flat, rectangular page images on which regular OCR (optical character recognition) can be performed.
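The downstream half of such a pipeline, leaving out the hard page-flattening step, might look something like this rough Python sketch using OpenCV and Tesseract: grab frames from the high-speed video, keep only the sharp ones, and run OCR on each. The sharpness threshold and the assumption of reasonably flat frames are ours, not the lab’s.

    # Rough sketch of the software side, minus page flattening: pick sharp
    # frames out of high-speed video and OCR them. Consecutive sharp frames
    # of the same page would still need de-duplication.

    import cv2
    import pytesseract

    def sharpness(gray_frame):
        """Variance of the Laplacian: higher means a sharper, less blurred frame."""
        return cv2.Laplacian(gray_frame, cv2.CV_64F).var()

    def scan_video(path, sharpness_threshold=150.0):
        capture = cv2.VideoCapture(path)
        pages = []
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if sharpness(gray) > sharpness_threshold:
                pages.append(pytesseract.image_to_string(gray))
        capture.release()
        return pages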

The technique is unlikely to be coming to the home anytime soon (although ripping a book by flipping it in front of your notebook’s webcam would be pretty awesome), but it could certainly speed up large scanning efforts like Google’s book project.

Superfast Scanner Lets You Digitize a Book By Rapidly Flipping Pages [IEEE Spectrum]

High-Speed Robot Hand Demonstrates Dexterity and Skillful Manipulation [Hizook]


LEDs Could Transmit Future Broadband Signals

led-lights

The light from the lamps in your house could carry a wireless signal that provides internet connectivity at home, say a group of German researchers who have found a way to encode data in visible light.

Though it would provide much lower speeds than Wi-Fi, the approach would suffer less interference and offer greater protection from hackers, the researchers say.

Currently, most homes use radio-frequency Wi-Fi signals for wireless broadband. But Wi-Fi has limited bandwidth, say the researchers, and it is difficult to get more radio spectrum for it. Visible light would be a good alternative, they say.

Flickering the lights generates the signal in a room. The change isn’t visible to the human eye because the rate of modulation is millions of times faster than anything we can perceive, say the researchers. And since visible light can’t penetrate walls, there will be no interference between rooms.

Since incandescent and fluorescent bulbs can’t flicker fast enough, LEDs would be the right choice, say the researchers.
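To get a feel for why such fast flicker is invisible, here is a toy Python sketch (ours, purely illustrative) of the simplest possible scheme, on-off keying: each bit becomes a one-microsecond on or off pulse, and the receiver recovers it by thresholding the measured brightness. The researchers’ real system uses far more sophisticated modulation to reach its data rates.

    # Toy on-off keying sketch: bits -> fast LED flicker -> thresholded bits.
    import numpy as np

    SYMBOL_RATE_HZ = 1_000_000                  # one million on/off symbols per second
    SAMPLES_PER_SYMBOL = 8                      # receiver oversampling
    symbol_duration_us = 1e6 / SYMBOL_RATE_HZ   # 1 microsecond, far too fast to see

    def modulate(bits):
        """On-off keying: 1 -> LED on for one symbol period, 0 -> LED off."""
        return np.repeat(np.array(bits, dtype=float), SAMPLES_PER_SYMBOL)

    def demodulate(light_samples):
        """Average each symbol period and threshold at half brightness."""
        symbols = light_samples.reshape(-1, SAMPLES_PER_SYMBOL).mean(axis=1)
        return (symbols > 0.5).astype(int).tolist()

    bits = [1, 0, 1, 1, 0, 0, 1, 0]
    noise = np.random.normal(0, 0.05, len(bits) * SAMPLES_PER_SYMBOL)
    assert demodulate(modulate(bits) + noise) == bits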

Commercial white LEDs have a modulation bandwidth of only a few MHz, largely because the phosphor that converts their blue emitter’s light to white responds slowly. But Jelena Vučić, a researcher at the Fraunhofer Institute for Telecommunications, and her colleagues working on the project have found a way to increase the usable bandwidth by filtering out all wavelengths but blue at the receiver.

Using the visible wireless system they built, the team downloaded data at up to 230 megabits per second. The researchers will present their findings at a conference in San Diego later this month.

[via Inhabitat]

Photo: (slworking/Flickr)


Personal Solar Panel Twenty Times More Powerful Than Rivals


The Joos Orange is a solar panel that promises to make sun power useful, rather than just a hippy’s dream. By using top-end components and some clever circuitry, the panel wrings around 20 times more juice from the sun’s rays than other chargers do. Sound impressive? It is, and it manages to do it for just $100.

With just an hour in the sun, the Joos Orange will generate (and store in its li-ion battery) enough power to keep you talking on the phone for two and a half hours. That compares to 5 to 20 minutes for other chargers, according to the company’s figures. Let the thing lounge in the sun all day long and it will end up with enough power to charge an iPhone four times over.
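A quick back-of-the-envelope check of those figures (ours, not the company’s):

    # 2.5 hours of talk time per hour of sun vs. 5-20 minutes for other chargers.
    joos_talk_minutes = 2.5 * 60                  # 150 minutes per hour of sunlight
    rival_talk_minutes = (20, 5)
    print([joos_talk_minutes / m for m in rival_talk_minutes])
    # [7.5, 30.0] -- which brackets the "around 20x" claim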

The Joos Orange comes from California-based Solar Components and, apart from the circuitry that optimizes use of the charge, it uses a more efficient mono-crystalline solar cell instead of a poly-crystalline cell. It will charge in low light, can be topped up via USB if there really is no sun, and its polycarbonate-and-steel body is waterproof, meaning it’ll even charge underwater. When the battery finally dies, after about 1,000 cycles, the panel can still power gadgets directly while you wait for a replacement.

The Joos Orange will ship in June, but Gadget Lab should be getting its hands on a test unit soon. We’re pretty excited: If the panel lives up to its promise, it pretty much means the end of plugging gadgets into the mains, especially here in sunny Spain. And at just $100, 24 ounces (680 grams) and 6×8×1 inches (half the size of a legal pad), it’s cheap and portable, too.

Joos Orange [Solar Joos. Thanks, Dave!]

Press release [Eon]


Skinput Turns Your Arm into a Touch-Screen

Skinput uses a bio-acoustic sensing array coupled with a wrist-mounted pico-projector to turn your skin into a touch-screen. Confused? Don’t be. It’s amazingly simple.

Researchers at Carnegie Mellon University, along with Microsoft’s research lab, have come up with a way to use the skin of your arm (or any other part of your body) to act as a display and an input device, without actually implanting anything weird into you. It consists of two parts. A tiny projector beams the image onto your skin. Tapping the “buttons” causes ripples to run through your skin and bones.

These waves change depending on where you tap, as they run through bone, soft tissues and the like. Special software analyzes these waves, and uses the information to work out exactly where you touched, just as if you were tapping an iPhone screen. Specific locations can be mapped to certain functions: in the video you see somebody playing Tetris by tapping their fingers.
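The classification idea can be sketched in a few lines of Python: reduce each tap’s vibration waveform to a small feature vector and train an off-the-shelf classifier to predict which spot was tapped. The features, classifier and data here are stand-ins for illustration; the real Skinput system uses its own sensor array and feature set.

    # Illustrative tap-location classifier; features and data are stand-ins.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def tap_features(waveform):
        """Crude per-tap features: energy, peak amplitude, dominant frequency bin."""
        waveform = np.asarray(waveform, dtype=float)
        spectrum = np.abs(np.fft.rfft(waveform))
        return [
            float(np.sum(waveform ** 2)),      # energy
            float(np.max(np.abs(waveform))),   # peak amplitude
            float(np.argmax(spectrum)),        # dominant frequency bin
        ]

    def train_tap_classifier(training_taps):
        """training_taps: list of (waveform, location_label) pairs."""
        X = [tap_features(w) for w, _ in training_taps]
        y = [label for _, label in training_taps]
        model = KNeighborsClassifier(n_neighbors=3)
        model.fit(X, y)
        return model

    # Usage with real recordings:
    #   model = train_tap_classifier(calibration_taps)
    #   location = model.predict([tap_features(new_tap)])[0]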

Both sensor and projector can be put into the same armband, but the display is unnecessary: Another use is to tap the tips of the fingers to control an MP3 player, a task simple enough to rely on the user’s memory.

Various tap-based interfaces are possible, and the thing that impresses us about all of them is the simplicity for the user. We worry a little though. We already mistake people muttering into their Bluetooth headsets for crazy people who talk to themselves. Now we have to distinguish joggers skipping tracks on their iPods from drug-fried nut-jobs who twitch and scratch at imaginary insects crawling over their flesh. Thanks, researchers.

Body acoustics can turn your arm into a touchscreen [New Scientist via Mashable]


Rumor: iPad’s A4 Chip Was Outsourced

Steve Jobs touted the iPad’s processor as “custom silicon” and the “most advanced chip” Apple has ever done, but it appears the company didn’t do much with it at all.

Dubbed the A4, the iPad’s brain is actually a system-on-a-chip (SoC) combining a single-core Cortex-A8 processor licensed from ARM with Imagination Technologies’ PowerVR SGX graphics processing unit, tipsters have told Ars Technica’s Jon Stokes.

In other words, Apple licensed chips from other providers like it did with the iPhone, and it didn’t produce the parts in-house, which many assumed to be the case when Jobs introduced the iPad and the A4 in January.

Apple has not provided official details on the A4’s specifications, which is unsurprising. Apple has traditionally been secretive about the exact components inside its previous products, leaving component analysts such as iSuppli to rip apart the gadgets and figure out the nitty-gritty details about their guts.

The Cortex A8 and the PowerVR SGX would make sense, as they’re the same technologies used in the iPhone and iPod Touch. MacRumors also spotted a clause in the iPad’s software development kit that confirmed the SGX is being used in the iPad.

If the A4 SoC’s parts were outsourced, the role of PA Semi, the chip-design firm that Apple acquired in April 2008, remains unclear. Stokes speculates that the PA Semi team may have helped optimize the A4 to extend battery life for the iPad, which Apple claims will last 10 hours with active usage and one month on standby.


Development of Apple’s iPad Chip Estimated at $1B

Steve Jobs introduced not one, but two new products last month: the iPad and Apple’s custom-made A4 chip. Analysts have yet to autopsy the chip to uncover its secrets, but even more interesting is what it takes for a company like Apple to develop its own chip: about $1 billion, according to The New York Times.

“Even without the direct investment of a factory, it can cost these companies about $1 billion to create a smartphone chip from scratch,” reports NY Times‘ Ashlee Vance.

That makes Apple’s $278 million acquisition of chip designer PA Semi look like pocket change. And hopefully Apple’s investment will pay off not just for the company, but also for iPad owners: The 1-GHz A4 chip, Apple promises, will help the iPad’s battery last up to 10 hours of active use and one month on standby.

And considering the enormous cost of developing this chip, iPhone owners can have faith that the A4 will most likely appear in future iPhones. Maybe we’ll see an A4-powered iPhone debut this summer, and all our complaints about battery life will disappear. After that, all we’d have left to complain about is AT&T.
