To Answer the Phone, Scratch Your Jeans


The sound of a fingernail raking across a table or a board may be enough to drive most people crazy. But get past that annoyance and it could become a way to answer your phone, silence a call or turn up the volume.


Scratch Input, a computer input technique developed by researchers at the Human-Computer Interaction Institute at Carnegie Mellon University, uses the sound produced when a fingernail is dragged over the surface of any textured material such as wood, fabric or wall paint. The technology was demonstrated at the Siggraph graphics conference this year.

“It’s kind of a crazy idea but a simple one,” says Chris Harrison, one of the researchers on the project. “If you have a cellphone in your pocket and want to silence an incoming call, you don’t have to pull it out of your pocket. You could just drag your fingernail on your jeans.”

As researchers look for simpler and more innovative ways for people to interact with computers and gadgets, going beyond the traditional keyboard, mouse and keypad has become important. Earlier this year, Harrison and his team demonstrated a touchscreen whose pop-up buttons and keypads dynamically appear and disappear, giving users the physical feel of buttons on a flat display.

Scratch Input is another way to explore how we can interact with devices, says Harrison, who started working on the idea a year ago with his colleague Julia Schwarz and his professor, Scott Hudson. The technique works with almost any kind of surface; the exceptions are glass and a few other extremely smooth materials.

“With this we can start to think of every flat surface as a potential input area,” says Daniel Wigdor, user experience architect at Microsoft and curator of the emerging technology demos at Siggraph. “Imagine a cellphone with a mini projector. You can now turn an entire surface into a screen for the projector and use the surface to control it.”

Scratch Input works by isolating and identifying the sound of a fingernail dragging on an area.

“All the sound happening in the environment like people putting coffee cups on the table, cars going by or children screaming, we know what frequencies they are in,” says Harrison.

A fingernail dragged across a surface produces frequencies between 6,000 Hz and 13,000 Hz. Compare that to the human voice, which typically falls between 90 Hz and 300 Hz, or to the hum of a refrigerator compressor or an air conditioner, which sits around 50 Hz or 60 Hz.

“It makes it easy for us to throw away all the other acoustic information and just listen to what your nail sounds like,” says Harrison.

Harrison and his team used that principle to rig up a system for Scratch Input. They attached a modified stethoscope to a microphone that converts the sound into an electrical signal. The signal is amplified and connected to a computer through the audio-input jack.
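
The team’s exact detection code isn’t published here, but the frequency-isolation idea is simple enough to sketch in software. Below is a minimal Python version of that idea, ours rather than the researchers’: it band-passes a frame of audio to the 6,000-13,000 Hz scratch band and compares the surviving energy to a threshold. The sample rate and threshold value are illustrative assumptions, not figures from the project.

```python
import numpy as np
from scipy.signal import butter, sosfilt

SAMPLE_RATE = 44100                 # assumed mono input from the audio-in jack
SCRATCH_BAND = (6000.0, 13000.0)    # fingernail band cited in the article, in Hz

def scratch_filter(fs=SAMPLE_RATE, band=SCRATCH_BAND, order=4):
    """Band-pass filter that discards voice, hum and other out-of-band noise."""
    nyquist = fs / 2.0
    return butter(order, [band[0] / nyquist, band[1] / nyquist],
                  btype="bandpass", output="sos")

def is_scratch(frame, sos, threshold=0.01):
    """True if the in-band energy of one audio frame exceeds a hand-tuned
    threshold (a stand-in for whatever calibration the real system uses)."""
    in_band = sosfilt(sos, frame)
    return np.mean(in_band ** 2) > threshold

# Quick check: a synthetic 9 kHz "scratch" buried in broadband noise.
t = np.arange(0, 0.05, 1.0 / SAMPLE_RATE)
frame = 0.3 * np.sin(2 * np.pi * 9000 * t) + 0.1 * np.random.randn(t.size)
print(is_scratch(frame, scratch_filter()))   # True: the 9 kHz tone survives
```

A 60 Hz hum or a 200 Hz voice fed through the same filter would be attenuated to almost nothing, which is exactly the “throw away all the other acoustic information” trick Harrison describes.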

“If mass produced, this sensor could cost less than a dollar,” says Harrison.

Scratch Input also supports simple gesture recognition. Tracing the letter ‘S,’ for instance, produces an acoustic imprint that the system can be trained to identify. The idea has its limitations: many letters that are written differently sound very similar, such as M, W, V, L, X or T, and Scratch Input cannot accurately distinguish between those gestures. Still, Harrison says the system responds with about 90 percent accuracy.
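
How the team’s classifier works isn’t spelled out here, but one plausible, simplified approach is stroke counting: treat each burst of in-band energy as one stroke, separated by quiet gaps, and map the stroke count to a gesture. The sketch below is a guess at the technique, not the researchers’ code; it builds an amplitude envelope from the filtered signal in the previous example, and the gesture vocabulary is hypothetical.

```python
import numpy as np

def envelope(in_band, fs=44100, window_ms=10):
    """Moving-RMS amplitude envelope of the band-passed signal."""
    window = max(1, int(fs * window_ms / 1000))
    padded = np.concatenate([np.zeros(window - 1), in_band ** 2])
    return np.sqrt(np.convolve(padded, np.ones(window) / window, mode="valid"))

def count_strokes(env, on=0.02, off=0.005):
    """Count energy bursts, using two thresholds so one ragged scratch
    isn't double-counted as it wobbles around a single cutoff."""
    strokes, active = 0, False
    for level in env:
        if not active and level > on:
            strokes, active = strokes + 1, True
        elif active and level < off:
            active = False
    return strokes

# Hypothetical mapping from stroke count to action, for illustration only.
GESTURES = {1: "silence call", 2: "answer call", 3: "volume up"}
```

Letters such as M and W trip up a scheme like this for exactly the reason Harrison gives: they produce nearly the same number and rhythm of strokes, so their acoustic imprints look alike.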

Another problem is that the system cannot determine the spatial location of the input, says Wigdor. “For instance, with volume control, it can hear your finger spin in the appropriate gesture but the system can’t see it so sometimes it does not have enough information to react.”

Despite the limitations, the technology holds enough promise to make it into the hands of consumers, says Wigdor. “It is exciting because it is so low cost,” he says. “This idea has the potential to go beyond just a research project.”

Check out this video demo of Scratch Input:


Photo: Chris Harrison


3D Printing, Now in Stainless Steel

Shapeways, the 3D printing shop, has added stainless steel to its lineup of materials, meaning you can now design spare parts for machines and have them made up and sent to you in the mail. Here’s how it works:

Stainless Steel printing is a completely new technology – stainless steel powder is deposited in thin layers, combined with a binding material, and built one layer at a time to the specifications of its designer. The final product is infused with bronze and oven-cured, and a variety of finish and color options are available.

Essentially, it’s like an inkjet printer, only instead of making a 2D image by laying down ink line by line, it makes a 3D object by laying down powdered metal one layer at a time. What could you use this for? Almost anything. The video shows a couple of ants on a Möbius strip, but you could just as easily make low-stress parts for bikes and cars, or — well, come on, you can make anything. Have some imagination here. And add this to a 3D scanner and you can duplicate just about anything, except, sadly, tea, Earl Grey, hot.

Product page [Shapeways. Thanks, John!]


Watergate Keeps Politicians and Passengers in Line


The Watergate is a psychological barrier made physical. Instead of stringing wires or ropes across gaps to steer bleating flocks of people around, or simply to keep them where you want them, the Watergate fires jets of water across the gap. That provides a barrier which can easily be broken but which – psychologically at least – is likely to be a better deterrent than more tangible solutions.

The gate could be used to replace turnstiles, and it has several advantages (other than the obvious fun of shoving people into the water jets). In an emergency, it can’t lock up, and even if the water fails to stop squirting, the worst that can happen is that you get wet. It’s also wider than a regular gate, so bikes and wheelchairs can fit through easily.

It’s certainly not high-security, but then, neither is any unmanned turnstile. I have seen people jumping barriers in London, New York and Barcelona. Ironically, the only city I have lived in where the metro doesn’t have gates is Berlin, where pretty much everybody pays for a ticket.

Oh. I almost forgot. Insert Nixon joke here: __.

Watergate – No Scandal! [Yanko]


Video: Running Toyota Robot Wobbles But It Won’t Fall Over

Remember the Big Dog, the creepy four-legged robot that ran along like two zombie-people chopped off above the waist, with a disturbing, dancing gait that couldn’t be perturbed even by beefy soldiers booting it as hard as they could?

Well, if you liked that, you’re going to love this. Toyota has souped up its lamely named Partner Robot to run at 7 km/h. That’s not a spectacular speed, but it’s enough to get both feet off the ground at the same time. And, like the Big Dog, a shove won’t send it to the floor. It’s not as scary or unsettling as the Big Dog, but the humanoid form is certainly uncanny. Kneel and beg before your new masters, puny fleshling!

Toyota’s running humanoid robot [Smart Machines via Cnet]



Personal Supercomputers Promise Teraflops on Your Desk


About a year ago John Stone, a senior research programmer at the University of Illinois, and his colleagues found a way to bypass the long waits for computer time at the National Center for Supercomputing Applications.

Stone’s team got “personal supercomputers,” compact machines with a stack of graphics processors that together pack quite a punch and can be used to run complex simulations.

“Now instead of taking a couple of days and waiting in a queue, we can do the calculations locally,” says Stone. “We can do more and better science.”

Personal supercomputers come in many flavors, built from clusters of CPUs, graphics processing units (GPUs) or both. But it is GPU computing that is gaining popularity for the easy, quick access to raw computing power it offers researchers. That’s opening up a new market for makers of GPUs, such as Nvidia and AMD, which have traditionally focused on high-end video cards for gamers and graphics pros.

True supercomputers, the rock stars of computing, are capable of quadrillions of calculations per second. But they can be extremely expensive (the fastest supercomputer of 2008, IBM’s RoadRunner, cost $120 million) and access to them is limited. That’s why smaller versions, no bigger than a typical desktop PC, are becoming a hit among researchers who want massive processing power along with the convenience of a machine at their own desk.

“Personal supercomputers that can run off a 110-volt wall circuit allow for a significant amount of performance at a very reasonable price,” says John Fruehe, director of business development for server and workstation at AMD. Companies such as Nvidia and AMD make the graphics chips that personal supercomputer resellers assemble into personalized configurations for customers like Stone.

Demand for these personal supercomputers grew at an average of 20 percent every year between 2003 and 2008, says research firm IDC. Since Nvidia introduced its Tesla personal supercomputer less than a year ago, the company has sold more than 5,000 machines.

“Earlier when people talked about supercomputers, they meant giant Crays and IBMs,” says Jie Wu, research manager for technical computing at IDC. “Now it is more about having smaller clusters.”

Today, most U.S. university researchers who need access to a supercomputer have to submit a proposal to the National Science Foundation, which funds a number of supercomputer centers. If the proposal is approved, the researcher gets an account for a certain number of CPU hours at one of the major supercomputing centers, such as those in San Diego, Illinois and Pittsburgh.

“It’s like waiting in line at the post office to send a message,” says Stone. “Now you would rather send a text message from your computer than wait in line at the post office to do it. That way it is much more time-efficient.”

Personal supercomputers may not be as powerful as their room-size counterparts, but they are still leagues above their desktop cousins. For instance, a four-GPU Tesla personal supercomputer from Nvidia can offer 4 teraflops of parallel supercomputing performance using 960 GPU cores and two Intel Xeon 5500 series Nehalem processors. That’s a fraction (about 0.4 percent) of the IBM RoadRunner’s 1-petaflop speed, but it’s enough for most researchers to get the job done.

For researchers, this means the ability to run calculations faster than they can with a traditional desktop PC. “Sometimes researchers have to wait for six to eight hours before they can have the results from their tests,” says Sumit Gupta, senior product manager at Nvidia. “Now the wait time for some has come down to about 20 minutes.”

It also means that research projects that would never have gotten off the ground because they were deemed too costly and too resource- and time-intensive now get the green light. “The cost of making a mistake is much lower and a lot less intimidating,” says Stone.

The shift away from large supercomputers to smaller versions has also made research more cost effective for organizations. Stone, who works in a group that develops software used by scientists to simulate and visualize biomolecular structures, says his lab has 19 personal supercomputers shared by 30 researchers. “If we had what we wanted, we would run everything locally because it is better,” says Stone. “But the science we do is more powerful than what we can afford.”

The personal supercomputing idea has also gained momentum thanks to the emergence of programming languages designed especially for GPU-based machines. Nvidia has been trying to educate programmers and build support for CUDA, the C-based programming environment created specifically for parallel programming on the company’s GPUs. Meanwhile, AMD this year declared its support for OpenCL (Open Computing Language), an industry-standard parallel programming framework; Nvidia says it, too, works with developers to support OpenCL.
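
To give a feel for what that kind of programming looks like, here is a minimal GPU example using the PyCUDA bindings, in which each of roughly a million GPU threads scales one array element in parallel. It’s a generic illustration of the CUDA model, not vendor sample code, and it assumes an Nvidia GPU with the pycuda package installed.

```python
import numpy as np
import pycuda.autoinit                  # creates a CUDA context on the default GPU
import pycuda.driver as drv
from pycuda.compiler import SourceModule

# A trivially data-parallel CUDA C kernel: one thread per array element.
mod = SourceModule("""
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}
""")
scale = mod.get_function("scale")

n = 1 << 20                             # about a million floats
data = np.random.randn(n).astype(np.float32)
expected = data * 2.0

# drv.InOut copies the array to the GPU, runs the kernel, and copies it back.
scale(drv.InOut(data), np.float32(2.0), np.int32(n),
      block=(256, 1, 1), grid=((n + 255) // 256, 1))

assert np.allclose(data, expected)      # every element was doubled on the GPU
```

Real simulation codes replace the one-line kernel body with physics, but the structure, thousands of lightweight threads each handling a sliver of the data, is what lets a desk-side machine hit teraflop numbers.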

Stone says the rise of programming environments for high-performance machines has certainly made them more popular. And while the portable powerhouses can do a lot, there is still a place for the large supercomputers. “There are still the big tasks for which we need access to the larger supercomputers,” says Stone. “But it doesn’t have to be for everything.”

Photo: John Stone sits next to a personal supercomputer, a quad-core Linux PC with 8 GB of memory and three GPUs (one Nvidia Quadro FX 5800 and two Nvidia Tesla C1060s), each with 4 GB of GPU memory. (Kirby Vandivort)


Gadgets Join the Search for the Lost Tomb of Genghis Khan

It’s one of the few great archaeological mysteries of the world, and now a bunch of gadget-wielding geeks are going to try and solve it.


The tomb of Genghis Khan, founder of the Mongol empire and one of the world’s greatest and most ruthless emperors, has remained hidden for nearly eight centuries. According to legend, Khan died in 1227 near the Liupan Mountains in China and is thought to be buried in the northeastern region of what is now Mongolia.

Now a group of researchers led by the University of California, San Diego’s Center of Interdisciplinary Science for Art, Architecture and Archaeology, with funding from National Geographic, has embarked on a quest to find this ancient grave. Their secret weapon: an array of technological gizmos ranging from unmanned aerial vehicles to sophisticated satellites and 3-D displays.

“This is a first of its kind,” says Mike Hennig, a researcher at UCSD, “a large-scale, expeditionary-type project that promises to open up new doors for technology.”

Hennig and the entire expeditionary team left for Mongolia earlier in July and will be there until the end of the month. They will do most of their work in an 11-square-mile region of Mongolia, flying two UAVs, directing satellite imagery and collecting data that will be processed back home later.


Bubble-Like Touch-Screen Buttons Reconfigure On-the-Fly

What if you could take the almost infinite re-configurability of a touch-screen and marry it to the tactile, no-looking-needed interface of the old-fashioned button? Researchers at Carnegie Mellon University have done just that, using what at first looks like a big, flat balloon:

The display is made up of several layers, the topmost of which is a latex sheet. Below that lies a sheet of acrylic with holes cut in it where the buttons are to go. Pumping air in and out of the device causes the buttons to expand and stick out (or get sucked in, like an innie belly button). And because the latex is translucent, images can be rear-projected onto these “buttons”.

It’s not quite as configurable as a touch-screen, as the design is limited to where you place the button-holes (ha!). But the rear projection offers a fair degree of on-the-fly customization and the moving buttons could prove very helpful in, say, a car where you don’t want to take your eyes from the road. Optical sensing tech inside means that there is also multi-touch functionality. Try that with a regular keypad. Finally, the semi-3D images that can be laid onto the buttons, like the global map in the clip, are just plain rad.

Next For Touchscreens: Temporary Pop-Up Buttons? [Pop Mech]


Robo-Ethicists Want to Revamp Asimov’s 3 Laws



Two years ago, a military robot used by the South African army killed nine soldiers after a malfunction. Earlier this year, a Swedish factory was fined after a robot machine injured one of the workers (though part of the blame was assigned to the worker). Robots have been blamed for other, smaller offenses, such as incorrectly responding to a request.

So how do you prevent problems like this from happening? Stop making psychopathic robots, say robot experts.

“If you build artificial intelligence but don’t think about its moral sense or create a conscious sense that feels regret for doing something wrong, then technically it is a psychopath,” says Josh Hall, a scientist who wrote the book Beyond AI: Creating the Conscience of the Machine.

For years, science fiction author Isaac Asimov’s Three Laws of Robotics were regarded as sufficient by robotics enthusiasts. The laws, as first laid out in the short story “Runaround,” were simple: A robot may not injure a human being or allow one to come to harm; a robot must obey orders given by human beings; and a robot must protect its own existence. Each law takes precedence over the ones following it, so under Asimov’s rules a robot cannot be ordered to kill a human, and it must obey orders even if doing so would result in its own destruction.

But as robots have become more sophisticated and more integrated into human lives, Asimov’s laws look just too simplistic, says Chien Hsun Chen, coauthor of a paper published in the International Journal of Social Robotics last month. The paper has sparked a discussion among robot experts who say it is time for humans to get to work on these ethical dilemmas.

Accordingly, robo-ethicists want to develop a set of guidelines that could outline how to punish a robot, decide who regulates robots and even create a “legal machine language” that could help police the next generation of intelligent automated devices.

Even if robots are not entirely autonomous, there needs to be a clear path of responsibility laid out for their actions, says Leila Katayama, research scientist at open-source robotics developer Willow Garage. “We have to know who takes credit when the system does well and when it doesn’t,” she says. “That needs to be very transparent.”

A human-robot co-existence society could emerge by 2030, says Chen in his paper. Already, iRobot’s Roomba robotic vacuum cleaner and Scooba floor cleaner are part of more than 3 million American households. The next generation of robots will be more sophisticated and is expected to provide services such as nursing, security, housework and education.

These machines will have the ability to make independent decisions and work reasonably unsupervised. That’s why, says Chen, it may be time to decide who regulates robots.

The rules for this new world will have to cover how humans should interact with robots and how robots should behave.

Responsibility for a robot’s actions is a one-way street today, says Hall. “So far, it’s always a case that if you build a machine that does something wrong it is your fault because you built the machine,” he says. “But there’s a clear day in the future that we will build machines that are complex enough to make decisions and we need to be ready for that.”

Assigning blame in case of a robot-related accident isn’t always straightforward. Earlier this year, a Swedish factory was fined after a malfunctioning robot almost killed a factory worker who was attempting to repair the machine, which is generally used to lift heavy rocks. Thinking he had cut off the power supply, the worker approached the robot without hesitation, but the robot came to life and grabbed his head. In that case, the prosecutor held the factory liable for poor safety conditions but also laid part of the blame on the worker.

“Machines will evolve to a point where we will have to increasingly decide whether the fault for doing something wrong lies with someone who designed the machine or the machine itself,” says Hall.

Rules also need to govern social interaction between robots and humans, says Henrik Christensen, head of robotics at Georgia Institute of Technology’s College of Computing. For instance, robotics expert Hiroshi Ishiguro has created a bot based on his likeness. “There we are getting into the issue of how you want to interact with these robots,” says Christensen. “Should you be nice to a person and rude to their likeness? Is it okay to kick a robot dog but tell your kids to not do that with a normal dog? How do you tell your children about the difference?”

Christensen says ethics around robot behavior and human interaction is not so much about protecting either party as about ensuring that the kind of interaction we have with robots is the “right thing.”

Some of these guidelines will be hard-coded into the machines, others will become part of the software and a few will require independent monitoring agencies, say experts. That will also require creating a “legal machine language,” says Chen. That means a set of non-verbal rules, parts or all of which can be encoded in the robots. These rules would cover areas such as usability that would dictate, for instance, how close a robot can come to a human under various conditions, and safety guidelines that would conform to our current expectations of what is lawful.
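
What would a “legal machine language” actually look like? Nobody has written one yet, so any code is speculative, but even a toy sketch shows the flavor: rules become data a robot can check before acting. Every context name and distance below is hypothetical, chosen purely for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProximityRule:
    context: str            # hypothetical situation label
    min_distance_m: float   # closest permitted approach to a human, in meters

# Illustrative rule set only; no such standard exists.
RULES = {
    "passing_in_hallway":   ProximityRule("passing_in_hallway", 0.5),
    "handing_over_object":  ProximityRule("handing_over_object", 0.2),
    "operating_power_tool": ProximityRule("operating_power_tool", 2.0),
}

def approach_allowed(context: str, distance_m: float) -> bool:
    """Check a planned approach against the encoded rule for this context."""
    rule = RULES.get(context)
    if rule is None:
        return False        # no rule for this situation: refuse by default
    return distance_m >= rule.min_distance_m

print(approach_allowed("passing_in_hallway", 0.4))   # False: too close
```

The hard part, as the experts quoted here keep saying, is everything such a table leaves out.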

Still, the efforts to create a robot that can successfully interact with humans over time will likely be incomplete, say experts. “People have been trying to sum up what we mean by moral behavior in humans for thousands of years,” says Hall. “Even if we get guidelines on robo-ethics the size of the federal code, it would still fall short. Morality is impossible to write in formal terms.”

Read the entire paper on human-robot co-existence


Photo: (wa.pean/Flickr)


Robotic Trash Collector Prowls Italian Streets


A group of Italian researchers is testing a way to replace the garbage truck with a cute robot that collects trash on demand. The robot, called DustCart, has been zipping through the streets of Peccioli, a town in Italy’s Tuscany region.

DustCart is part of a $3.9 million research program called DustBot that aims to use robotics to improve urban hygiene. DustCart not only collects trash but also gathers data with on-board sensors that monitor atmospheric pollutants such as nitrogen oxides and sulfur oxides. The DustBot project, started in 2006, is expected to end later this year.

The DustCart robot vacuums streets and parks and collects trash from citizens’ doors. In a demonstration by the scientists, a quick call summoned the DustCart to the door, where it asked for the personal ID number that identifies the user and tracks the garbage. The robot also asked the user to classify the trash as organic, recyclable or general waste. It then opened its belly bin, collected the trash and zoomed off, according to this story in the Global Post.

The robot can avoid fixed obstacles because it carries pre-loaded maps of its environment, and its on-board sensors help it avoid collisions with other objects.

So far, DustCart is still a prototype. The robot does not yet have the kind of rapid response time that would make it truly effective on crowded streets, say the researchers.

DustCart has a pear-shaped body and zips around on two wheels. Remind you of anyone?

Check out a gallery of the DustCart at work.

See also:
Hobbyists Rebuild Wall-E, One PVC Pipe at a Time

Photo:  DustCart Robot (Fulvio Paolocci/Global Post)


Intel, Nokia To Create New Mobile Architecture and Devices

Intel and Nokia said Tuesday they will partner to create a new Intel chipset architecture targeted at mobile devices and develop products based on it.

“We want to create new capabilities and an industry that joins computing and mobile telephony,” said Anand Chandrasekher, senior vice president and general manager of the ultra mobility group at Intel.

Intel did not say when the new architecture or the new mobile devices based on it will be launched.

“We are just announcing a technology collaboration today and it is too early to talk about specific applications,” said Chandrasekher.

The partnership, however, fell short of speculation that Nokia would use Intel’s Atom processor in its mobile phones.

Intel’s collaboration with Nokia is yet another attempt by the chip maker to break into the mobile phone market. Earlier this year, Intel said LG would use its Atom processors to create an upcoming line of mobile internet devices, a category that fuses smartphones and netbooks. Intel’s Atom processor has become quite popular among netbook makers, but the company hasn’t had similar success in the smartphone market.

The latest partnership with Nokia is an attempt to change that and bring a richer internet experience to smartphone users, says Intel.

“There is a lot of room for innovation that will redefine what mobile phones can do,” said Kai Öistämö, executive vice president of Nokia. “We want to extend the computing power of these devices.”

Intel also said it will acquire a license to Nokia’s HSPA/3G modem intellectual property for use in future products. The license will help Intel offer chipsets that incorporate Nokia’s modem technologies in future mobile devices.

Intel and Nokia also plan to work together on the new Moblin operating system, which is aimed at netbooks and other mobile communication devices.

Photo: Mobile Internet device with Intel Atom processor (Frank Gruber/Flickr)