Touchscreen Braille Writer Lets the Blind Type on a Tablet


One group of people has traditionally been left out of our modern tablet revolution: the visually impaired. Our slick, buttonless touchscreens are essentially useless to those who rely on touch to navigate a computer interface, unless voice-control features are built into the device and its OS.

But a Stanford team of three has helped change that. Tasked with creating a character-recognition program that would turn pages of Braille into readable text on an Android tablet, student Adam Duran, with the help of two mentors, ended up creating something even more useful than his original assignment: a touchscreen-based Braille writer.

Currently a senior at New Mexico State University, Duran arrived at Stanford in June to take part in a two-month program offered by the Army High-Performance Computing Research Center (AHPCRC). The program is a competition: Participants are given research assignments, ranging in the past from aerospace modeling to parallel computing, and vie for honors awarded at the end of the summer. This year, projects aimed to solve a problem using the Android platform. Duran and his team’s project, titled “A virtual Braille keyboard,” was this year’s winner for “Best Android Application.”

Duran was challenged to use the camera on a mobile device, like the Motorola Xoom, to create an app that transforms physical pages of Braille text into readable text on the device. From the get-go, there were problems with this plan.

“How does a blind person orient a printed page so that the computer knows which side is up? How does a blind person ensure proper lighting of the paper?” Duran said in an interview with Stanford News. “Plus, the technology, while definitely helpful, would be limited in day-to-day application.”

So Duran and his mentors, Adrian Lew, an assistant professor of mechanical engineering, and Sohan Dharmaraja, a Stanford Ph.D. candidate studying computational mathematics, decided to develop a writer app instead of a reader. Currently, the visually impaired must use desktop-based screen-reading software or specially designed laptops with Braille displays in order to type using a computer.

Because a blind person can’t locate the keys of a virtual keyboard on a flat, glossy touchscreen, the team decided to bring the keys themselves to the user’s fingertips. Specifically, when the user sets eight fingers on the device, virtual keys align underneath each finger. The team’s Braille keyboard consists of eight keys: six used to compose a Braille character, plus a carriage return and a backspace. If the user gets disoriented, he or she can re-establish the keyboard layout by lifting and re-applying both hands.

“The solution is so simple, so beautiful. It was fun to see,” Lew said. Such a keyboard is also useful because it customizes itself to the user, adjusting the onscreen keys based on the user’s finger size and spacing. (I wish my iOS keyboard did that!)
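To make the mechanics concrete, here is a minimal Kotlin sketch of how that per-finger calibration could work on Android. It is purely illustrative, not the Stanford team’s code; the key labels and the left-to-right ordering of fingers are my own assumptions.

    import android.view.MotionEvent

    // Illustrative only: snap eight on-screen keys to wherever the user's
    // fingers land, rather than forcing fingers onto fixed key positions.
    data class Key(val label: String, var x: Float = 0f, var y: Float = 0f)

    class AdaptiveBrailleKeyboard {
        // Six dot keys to compose a Braille cell, plus carriage return and backspace.
        val keys = listOf("dot1", "dot2", "dot3", "dot4", "dot5", "dot6",
                          "return", "backspace").map { Key(it) }

        // Call when all eight fingers are down, and again whenever the user
        // lifts and re-places both hands to re-establish the layout.
        fun calibrate(event: MotionEvent) {
            if (event.pointerCount != keys.size) return   // wait for all eight fingers
            val touches = (0 until event.pointerCount)
                .map { i -> event.getX(i) to event.getY(i) }
                .sortedBy { it.first }                    // order fingers left to right
            keys.zip(touches).forEach { (key, touch) ->
                key.x = touch.first
                key.y = touch.second
            }
        }
    }

A real app would also need to translate chords of dot keys into Braille characters and handle accidental lifts, but the core idea is simply that the keys go to the fingers rather than the other way around.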

Duran demoed the app blindfolded, typing out an email address as well as complicated mathematical and scientific formulas, proving the keyboard could be useful to educators, students and researchers. He also got to see a blind person use his app for the first time, which he said was an indescribable feeling: “It was the best.”

Lew said via email, “We do not yet know how exactly this will reach final users, but we are committed to make it happen.” The team has several options they will be considering over the next few weeks, so perhaps we could even see an app end up in the Android Market soon.

The tablet-based system costs about a tenth as much as most modern Braille typing solutions, and, based on the video below, appears to be anything but vapor.

Image and Video courtesy Steve Fyffe/Stanford University


Atom-Thick Graphene Sheets Could Make Great Camera Sensors

Some graphene, looking thin and strong. Illustration: CORE-Materials/Flickr

Graphene, a one-atom-thick sheet of carbon, could end up making a pretty good camera sensor. Researchers at MIT have discovered that graphene can turn light into electricity, but not the way you’d think.

Unlike camera sensors and solar panels, which rely on the photovoltaic effect, graphene generates a current from a temperature difference. When light shines on its surface, it heats the electrons within, but “the lattice of carbon nuclei that forms graphene’s backbone remains cool.” This temperature difference produces the electricity.
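The MIT announcement doesn’t spell out the math, but in the textbook picture of this photo-thermoelectric effect the generated voltage scales with how much hotter the electrons get than the lattice, roughly

    V_{\text{photo}} \approx (S_2 - S_1)\,\Delta T_e

where S_1 and S_2 are the Seebeck coefficients of two adjacent graphene regions and \Delta T_e is the light-induced rise in electron temperature. (This is the standard thermoelectric relation, not a figure from the MIT work itself.)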

Normally, this only occurs with very high-energy light sources (lasers!) or very low-temperature materials. Graphene manages it with daylight and at room temperature.

The material surely has many uses (not least as a way to generate solar energy), but it turns out that graphene would make a good camera sensor. It detects infrared light, for example (good for spooky effects), and is also made from cheap, readily available carbon. Research is still young, but perhaps it could lead to a decent camera finally fitting into the supermodel-thin iPod Touch.

Graphene shows unusual thermoelectric response to light [MIT News]


Self-Cleaning Fabric Reacts to Light

Boring laundry could become obsolete with new tech from UC Davis researchers. Photo: Paolo/Flickr

Forget washing your clothes. In the future, you may be able to clean your shirt just by taking a walk in the sun.

Students at UC Davis have worked out a way to mix cotton with a compound that reacts to light. When hit by photons, the compound — 2-anthraquinone carboxylic acid — reacts and produces hydroxyl radicals and hydrogen peroxide.

Hydrogen peroxide, you will remember, is used to bleach hair and propel rockets. The released chemical will also kill bacteria and “break down organic compounds such as pesticides and other toxins.” Perhaps you wouldn’t want to actually be wearing this when the light hits it.

Self-cleaning clothes would be great. Imagine a hiking trip. Instead of having to wash and dry your clothes, you could just leave them out on a rock to clean them. Ning Liu, who worked on bonding the chemicals to the cellulose in the cotton, says that the fabric “has potential applications in biological and chemical protective clothing for health care, food processing and farmworkers, as well as military personnel.” Whatever. I’ll settle for claiming back the space the washing machine takes up in my tiny kitchen.

Self-cleaning cotton breaks down pesticides, bacteria [UC Davis via CNET]


‘Social Bomb’ Covertly Cuts Off Twitter, Facebook

Boom! The Social Bomb forces your friends to pay attention to you instead of something more interesting.

The very best thing about a concept design is that you don’t have to explain how it works. It’s like being a kid again, where a pair of toilet paper tubes become a telescope, or an upturned traffic cone becomes the biggest — and therefore best — ice-cream container ever.

So we won’t attempt to peek inside the black box that is Hugo Eccles’ Social Bomb, a “covert device, intended to disable technologies invisibly and without consent.” The idea is that you twist a timer on its top and it will somehow disable any social networking within a 30-meter (roughly 100-foot) radius. Think of it as a TV-B-Gone, only for Twitter, Facebook and e-mail.

Eccles’ design is part of the Slow Tech exhibition at this year’s London Design Festival, curated by Wallpaper editor-at-large Henrietta Thompson. The idea behind Slow Tech is not just disconnection, but using technology in less obtrusive ways. The Social Bomb might force your friends to listen attentively to your boring anecdotes, but other designs use technology for good, not evil.

Samuel Wilkinson’s Biome, for example, is a Tamagotchi-like terrarium, a real-life bottle of flowers that you nurture using a connected phone app. And Kiwi & Pom’s Flip is an old-fashioned flip-board which will display incoming Tweets and appointments in clackety plastic characters.

I remain somewhat unconvinced. Downtime is important, if only to take a rest, but technology can enhance real life, too. My iPad became a useless chunk of glass and plastic on a recent holiday to Tunisia, thanks to no connectivity, anywhere. Contrast that with a previous vacation with fast 3G access: We were able to explore the nooks and crannies of towns, the pieces of a country that can never be found just by wandering the streets.

Plus, Instagram is like the best vacation photo tool ever. Just sayin’.

Slow Tech [Protein. Thanks, Henrietta!]


Microsoft Patent Details Module-Based Smartphone

Microsoft patent shows how a modular smartphone could be realized. Image: RegHardware

We know Microsoft for its software chops, but the company is tinkering with some innovative hardware design concepts on the side.

A recent Microsoft patent describes a smartphone with a slide-out section that can house one of several modules, including a QWERTY keyboard, a gaming pad, a second display or a battery pack. Even better: The modules work wirelessly when they aren’t docked in the smartphone’s slider. That opens up another possibility: The keyboard module could serve as a controller while the smartphone acts as a TV-connected media hub.
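The patent contains no code, but the module idea maps naturally onto a small interface. The Kotlin sketch below is purely hypothetical and only captures the docked-versus-wireless behavior the filing describes; none of the names come from Microsoft.

    // Purely illustrative: a rough interface for interchangeable phone modules.
    interface PhoneModule {
        val name: String
        fun onDocked()     // module slid into the phone's slide-out bay
        fun onUndocked()   // module removed; per the patent it keeps working wirelessly
    }

    class GamePadModule : PhoneModule {
        override val name = "gaming pad"
        override fun onDocked() { /* send d-pad input over the dock connector */ }
        override fun onUndocked() { /* switch to a wireless link, e.g. Bluetooth */ }
    }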

Such a modular design combines capabilities normally found in different phones or accessories. For a gaming pad, your phone of choice right now would be something like the Xperia Play. Want a slide-out QWERTY keyboard? You’re probably looking at one of several Android smartphones. If you’re looking for extra juice, you’ll need a special case or a phone with a removable battery.

How would something like this work if it came out within the next year or so?

With continued Xbox Live integration in Windows Phone 7.5 (Mango), gaming would definitely be fun with the d-pad module.

If you’re writing long emails or sending text after text, a QWERTY keyboard can be more comfortable to use, but it’s not something you necessarily need all the time. Windows Phone’s tight social media integration would make it easy to stay connected with friends and family and keep chatting via email, Facebook or other channels.

Windows Phone’s Live Tile-based UI looks fantastic on a single display. I can only imagine it spreading to dual screens: the ability to check status updates, weather notifications and more on one, and watch video, check email or browse the web on the other. However, dual-screened devices have largely disappointed in practice. Perhaps a slide-out second screen, rather than the folding style, would fare better.

A battery-pack module would be ideal for a long day (or weekend) traveling when you may not have access to an outlet for charging, like on a camping trip. Your phone would be alive — but would you have access to 3G or 4G? At least you’d be able to take photos and perhaps access some sort of offline map app. Along the same lines, a battery pack could keep the phone juiced up while you use the gaming pad wirelessly.

Would such a design be practical? Smartphones wear many hats these days, especially when one is shared among members of a household (web-surfing mom or dad, text-happy kids who also suck batteries dry playing games … you get the picture). The biggest problems might be misplacing modules and the risk of dirt or debris damaging the slider.

Microsoft’s patent isn’t the first of its kind. Other modular cellphones include the Modu phone, which featured interchangeable cases, and a prototype from NTT Docomo. More recently, we’ve seen the smartphone itself work with larger accessories, as with the Motorola Atrix and its laptop dock.

RegHardware via Geek


Experimental Intel Chip Shows Future of CPU Efficiency

The Intel Developer Forum showcases the near-threshold voltage processor. Photo: Intel

Researchers at Intel debuted an experimental processor at the company’s developer forum this week that could lead to devices with significantly lower energy consumption.

The chip, codenamed “Claremont,” is known as a near-threshold voltage (NTV) processor: Its transistors operate at super-low supply voltages, just above the “threshold” voltage at which they switch on and begin conducting current, to increase efficiency and cut energy consumption.
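Intel’s announcement doesn’t give the equation, but the textbook reason supply voltage matters so much is that a chip’s dynamic switching power scales with the square of that voltage,

    P_{\text{dynamic}} \approx \alpha\, C\, V_{DD}^{2}\, f

where \alpha is the switching activity, C the switched capacitance, V_{DD} the supply voltage and f the clock frequency. Dropping V_{DD} from about 1 V toward the threshold region therefore cuts switching power several-fold, in exchange for a lower clock speed. (This is textbook CMOS scaling, not Intel’s published figures.)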

In the demonstration, the experimental low-power processor ran a PC with Linux, powered solely by a solar cell the size of a postage stamp. The processor was used in conjunction with another experimental project, a concept DRAM called the Hybrid Memory Cube, a super-efficient memory interface.

“We used a solar cell in the demonstration to show how little power was required,” said Intel spokeswoman Connie Brown in an interview. “But it could run on anything that has power.” Like lemon juice, or perhaps a potato, as Brown suggested. “The key message is the low power and how much more transistors would be power-efficient running at near-threshold.”

Several years of research led to Intel’s near-threshold voltage processor design. It’s heat-sink free, and rather than operating at those super-low voltages all the time, it switches into NTV mode (under 10 mW in power consumption) when its workload is light.

This means that rather than powering off completely, a device can stay on in an “ultra low-power state,” preserving active processes and open applications: “always-on” devices. The technology could even be used to develop “zero-power” architectures “where power consumption is so low that we could power entire digital devices off solar energy, or off the energy that surrounds us every day,” like vibrations or movement, ambient wireless signals or solar power.

NTV could find itself in a host of applications ranging from processors and mobile devices to embedded devices, appliances and automobiles.

Energy efficiency has always been a concern for device manufacturers and chipset makers, but as the hardware industry moves to mobile and more lightweight computing, it’s become a much bigger issue.

So far, Intel has had some difficulties finding its way into mobile devices because of power consumption issues. Its low-power competitor ARM has dominated in that area, even threatening to displace Intel as the processor supplier for Apple laptops and desktops (according to rumor).

But Intel’s latest offerings, including its Atom and Oak Trail processors, have become much more efficient. Intel’s newest advance, the Tri-Gate 3-D transistor that will debut in its “Ivy Bridge” chips, also marks a major step forward, both in design and in a roughly 30 percent gain in performance.

Photo: Intel

NTV is a significantly bigger step than these commercial processors. The technology yields a 5 to 10x improvement in energy efficiency.

But it’s not without problems. When electrical noise is introduced, logic-level readings can be inaccurate. So the challenge is to balance performance against efficiency.

“Most digital designs operate at nominal voltages — about 1V today. NTV circuits operate around 400 to 500 millivolts,” says Intel researcher Sriram Vangal in a blog post on the subject. Consistently running electronics at such low voltage levels is a challenge because the difference between a “0” and a “1” becomes very small (electrical signal-wise).
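To put rough numbers on that (my own back-of-the-envelope figures, not Vangal’s): assuming, say, 50 mV of electrical noise, the noise occupies a far larger share of the logic swing near threshold,

    \frac{50\ \text{mV}}{1000\ \text{mV}} = 5\% \qquad \text{vs.} \qquad \frac{50\ \text{mV}}{450\ \text{mV}} \approx 11\%

so the margin separating a 0 from a 1 shrinks sharply, and error rates climb unless the circuits are redesigned to tolerate it.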

Intel’s experimental NTV processor may never find its way into an actual consumer product, Vangal says, but it is an important stepping stone toward future processors that will.


BMW Toying With Laser Headlamps

BMW plans to give its owners more weaponry in their antisocial mission: laser headlamps. Photo: BMW

Wonderful. Technology has added yet another way for BMW drivers to show complete disregard for other road users — laser headlights. Projected use: Zapping cyclists.

LEDs are obviously over already, and BMW plans to further enable its over-entitled, road-owning customers in their war on civility. The lasers produce far more light per watt than LEDs (around 170 lumens per watt versus 100), and the diodes are so much smaller (10 microns) that they could disappear into the bodywork the way Apple hides the lights in its MacBooks. Kinda.
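Using the article’s own figures, that works out to roughly 70 percent more light per watt:

    \frac{170\ \text{lm/W}}{100\ \text{lm/W}} = 1.7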

Of course, BMW won’t actually be ditching the headlamps, as they “play an important role in the styling of a BMW,” and let you know what kind of car it is that has been tailgating you for the last five miles (as if you even needed to check).

Instead, the biggest advantage will be saving power, as lasers are more efficient. And BMW might consider removing the indicator lights from its cars, too; it’s not as if the drivers ever use them.

BMW develops laser light for the car [BMW]


Android Blasts Into Space to Work With Robots

Google’s Android platform is shooting for the moon.

NASA sent two Android-powered Nexus S smartphones into space aboard Atlantis on STS-135, the final space shuttle mission. The two phones were used to test and investigate how humans and robots can work together in space more efficiently.

In the mission, the phones were used to help control SPHERES (Synchronized Position Hold, Engage, Reorient, Experimental Satellites), small robotic satellites originally developed at the Massachusetts Institute of Technology. The SPHERES handle chores like recording video and capturing sensor data, errands that once required astronauts, and they have their own power, computing, propulsion and navigation systems. Built-in expansion ports allow a variety of additional sensors and devices, like cameras, to be attached.

Another group of researchers, from Great Britain, hopes to send a smartphone-powered satellite into low Earth orbit before the year’s end. That experiment differs from NASA’s, however, in that it primarily tests how well the guts of a smartphone can stand up to the extreme conditions of space. And last year, a pair of Nexus Ones was sent 30,000 feet into the air as the payload of a small rocket. One was destroyed when its parachute failed, but the other glided safely back to Earth, capturing two and a half hours of video footage.

In the future, the phones will be used to navigate and control the SPHERES using the IOIO board and the Android Open Accessory Development Kit.

Why Android over iOS, or another smartphone platform? NASA thought an Android device would be a good fit since it’s open source. Google’s engineers even wrote a sensor-logging app that NASA ended up using on the mission (and it can be downloaded from the Android Market, if you’re interested).
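Google hasn’t detailed that app here, but a bare-bones Android sensor logger looks roughly like the following Kotlin sketch. The class name and log format are mine, not Google’s; it simply records accelerometer readings, the kind of sensor data captured on the mission.

    import android.hardware.Sensor
    import android.hardware.SensorEvent
    import android.hardware.SensorEventListener
    import android.hardware.SensorManager
    import android.util.Log

    // Illustrative only: logs accelerometer readings to the Android log.
    class AccelLogger(private val sensorManager: SensorManager) : SensorEventListener {

        fun start() {
            val accel = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER)
            sensorManager.registerListener(this, accel, SensorManager.SENSOR_DELAY_NORMAL)
        }

        fun stop() = sensorManager.unregisterListener(this)

        override fun onSensorChanged(event: SensorEvent) {
            // values[0..2] are acceleration in m/s^2 along the x, y and z axes
            Log.d("AccelLogger", "t=${event.timestamp} " +
                    "x=${event.values[0]} y=${event.values[1]} z=${event.values[2]}")
        }

        override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) {
            // accuracy changes aren't interesting for simple logging
        }
    }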

Check out the video below to see the Nexus S and the SPHERES in action.


Aussie Startup Brings Seamless Computing Across Devices

HP’s webOS had a feature called Touch to Share, which allowed information to sync easily between devices like the TouchPad and the unreleased Pre 3. Photo: Jon Snyder/Wired

These days, accessing the same files across multiple devices can be a feat. Services like Dropbox can help transfer files from one device to another, but it’s not the most elegant solution.

We’re moving toward a world in which you can swipe, flick and tap to share data from one piece of hardware to another, effortlessly. One where you never have to worry about which device you stored that file on. We want a seamless, integrated computing experience.

Software company Nsquared is working hard to make this a reality. Using a Windows Phone 7 device, a Slate tablet, a large Kinect-controlled television display and a Microsoft Surface smart table, Nsquared came up with a model for how information can be shared and manipulated among a variety of like-branded devices.

When the smartphone is placed on the Microsoft Surface smart table, information instantly branches out onto the table around the perimeter of the phone; in the demo, an open e-mail is displayed in larger text to the left while other relevant information sits above the top of the phone. The information can be touch-manipulated from either the phone or the table.

When the tablet is placed on the smart table, it renders a different, more detailed view of the floor plan displayed on the table. You can pick up the tablet for a 3-D view of the same information, then change your position on the blueprint by touching a different point on the smart table. This could be convenient for a contractor showing a client the details of a space or project: The contractor can steer the client’s view on the tablet by tapping on the smart table, guiding them through the project detail by detail, and the client never has to zoom out on the tablet to figure out where they are in the blueprint; a glance down at the table shows that.

But mobile OS developers themselves have also started implementing features that bring us toward a completely integrated computing experience.

HP’s webOS could have offered a really convenient way to share and sync data between devices (before HP killed off its mobile hardware division, that is). The “Touch to Share” feature allowed things like open web pages to be passed between webOS devices, such as the HP TouchPad and the Veer smartphone, simply by touching one device to the other.

We’ve also seen that Apple is taking steps toward making seamless computing a reality. Apple’s iCloud service will help make data a non-issue as you switch from one device to another, and iOS 5 will have AirPlay mirroring, so you can wirelessly stream video on your iPad to a larger display. A patent for projection technology, with a feature that allows for information to be swapped from one projected display to another, is another forward-looking implementation of the concept. And if rumors prove true, Apple’s got some sort of revolutionary television up its sleeve that would have iOS integration. You could use your iPhone as a controller for games using its accelerometer and gyroscope, easily swipe what’s playing on your iPad to the TV, and then back to your phone or MacBook Air.

Samsung, which makes a variety of smartphones, televisions and tablets, is another solid contender for developing its own completely seamless, in-house computing experience. Though its Bada operating system isn’t remarkably popular here in the U.S., the company could use it across its devices to unify the experience and let information be easily shared, swiped and synced between them.

With its new Tablet S and Tablet P, its e-readers and its televisions, Sony is another company that could break into the space if it unified a software platform across its different devices.

There’s an obvious incentive for companies to provide a high degree of compatibility and integration among their devices: It makes you more likely to buy more of their products rather than a competitor’s. Customer loyalty.

Currently, Apple seems to be the only one really taking advantage of this in-house, but as Nsquared’s video shows, it could certainly be accomplished with other brands’ devices.


How Microsoft Researchers Might Invent a Holodeck

REDMOND, Washington — Deep inside Microsoft is the brain of a mad scientist.

You might not think so, given the banality of the company’s ubiquitous products: Windows, Office, Hotmail, Exchange Server, Active Directory. The days are long past when this kind of software could light up anyone’s imagination, except maybe an accountant’s.

But Microsoft has an innovative side that’s still capable of producing surprises. In fact, Microsoft spends more than $9 billion a year and employs tens of thousands of people on research and development alone. While most of that goes toward coding the next versions of the company’s major products, a lot gets funneled into pure research and cutting-edge engineering.

Much of that work happens in Building 99 and Studio B here on Microsoft’s campus.

Building 99 is a think tank in the classic sense: It’s a beautifully designed building packed to the gills with hundreds of scientists; about half of Microsoft’s researchers work here. In the middle is a tall, airy atrium designed to facilitate collaboration and the kind of chance meetings that can lead to serendipitous discoveries.

Many of the brainiacs who work in Building 99 are researching areas of computer science that may not have relevance to Microsoft’s bottom line for years, if ever. Heck, they may not have relevance to anything, ever, but the fundamental premise of basic research is that for every dozen, or hundred, or thousand off-the-wall projects, there’s one invention that turns out to be fabulously important and lucrative.

In fact, you only need one hit to make billions of dollars in research pay off, even if you waste the rest of the good ideas. As Malcolm Gladwell argued recently, Xerox, which is often derided for failing to take advantage of a series of amazing inventions at its Palo Alto Research Center, actually saw huge returns from just one invention: the laser printer. Against that, it’s not necessarily a bad thing that Xerox PARC was home to hundreds of useless research projects, or that Xerox never figured out what to do with some of its research, like the graphical user interface.

A few hundred yards away, in Hardware Studio B, the rubber gets a little closer to the road. An impressive, multistory curtain of LEDs hangs in the lobby, displaying some sort of interactive art that responds to movement and sounds in the space, while employees enjoy a game of pingpong. The rest of the building is more prosaic, with surplus computers stacked up in the unused back sections of long, windowless corridors.

It’s here that hardware engineers carve 3-D mock-ups, create prototypes, test and refine circuitry, and get products ready for the market. A high-concept idea that originates in the rarefied ideas of Building 99 (hey! wouldn’t it be cool if your computer were a giant touchscreen table?) may get turned into an actual product in the hardware studio (hello, Microsoft Surface).

Wired recently toured both buildings to see some of the work Microsoft scientists and engineers are doing to invent the computer interfaces of the future.