New Microphone Uses Smoke — and Lasers!

Most microphones translate sound waves into electrical signals using vibrating membranes and magnets, capacitors, or other electrical components. But for decades, audio engineers have dreamed of using lasers to detect sound waves instead.

Now, audio engineer David Schwartz has succeeded. His prototype uses a laser, smoke-filled air, and a super-sensitive photocell to pick up the sonic vibrations in the air and translate them into audio signals.

The resulting recording is, well, not even as good as 100-year-old wax cylinder recordings, but Schwartz says he’s not concerned because it’s just the “talking dog” phase of the project.

“We don’t care if the dog is delivering a Shakespeare sonnet — it’s just the fact that the dog’s talking,” Schwartz told Wired.com’s Eliot Van Buskirk.

Check out Wired’s exclusive video with Schwartz, above — and to learn more, read the rest of our report on Wired’s business blog, Epicenter.

Smoke and Lasers Could Disrupt Microphone Market (Exclusive Videos) [Wired.com Epicenter]


Humanoid Robot Plays Soccer

Set aside your fears of world-dominating cyborgs and say hello to Hajime 33, an athletic robot who’s about as tall as Kobe Bryant. Granted, this bot plays soccer, not basketball (yet).

Created by Hajime Sakamoto, Hajime 33 is the latest addition to Sakamoto’s fleet of humanoid robots. Powered by batteries, the robot is controlled with a PS3 controller, and it can walk and kick a ball. Hajime 33 weighs in at just 44 pounds while towering over his creator at more than 6 feet 5 inches tall.

At first glance, Hajime 33 isn’t exactly graceful when it comes to our standards of, well, being a human. And his soccer skills are a far cry from Beckham’s — he can barely kick the ball, let alone bend it. However, the aesthetics and intelligence of Sakamoto’s robots are advancing him closer to his dream of “building Gundam.” It’ll be at least 10 years before Sakamoto creates a giant, Gundam-like robot, but soccer is a start. One can only imagine what Hajime 33 can do to groins. Ouch.


[via Robots Dreams]


Surgical Robots Operate With Precision

Dread going to the doctor? It could be worse. Your next physician could have the bedside manner of a robot. In fact, your next physician could be a robot.

Scared yet?

Surgeons and medical engineers have been trying to create machines that can assist in surgery, increase a surgeon’s dexterity and support hospital staff. These aren’t humanoid robots but computer-controlled systems optimized for use in sensitive situations. An exhibition called Sci-fi Surgery: Medical Robots, opening this week at the Hunterian Museum of the Royal College of Surgeons of England, shows a range of robots used in medicine.

“Industrial robots appeared in factories in the early 1960s and robots have become an important part of space exploration,” says Sarah Pearson, curator of the exhibition. “But robots have been comparatively slow to be used in medicine because surgeons haven’t felt comfortable with them.”

Robots in medicine aren’t intended to replace surgeons, says Pearson, but to act as companion devices. Most robots used in medicine aren’t autonomous because surgeons haven’t been comfortable giving up control, but with advances in technology, we can expect more autonomous machines.

The exhibition offers a peek into some of the most interesting surgical robots out there, from one of the earliest medical robots to a prototype camera pill.

Above: PROBOT

In 1988, Brian Davies, a medical robotics professor at Imperial College London, designed a robot (with the help of colleagues) that could remove soft tissue from a person. It was one of the first robots to do so. What’s more, it could perform the task with a fair degree of autonomy.

Most industrial robots usually have an arm, complete with a shoulder, elbow and wrist mechanism, and a gripper tool for the hand. That’s overkill for surgical purposes, and because of the room needed to move a robot arm around, it might even be dangerous for use in very small spaces inside human bodies. That’s why Davies and his team designed a small robot that has three axes of movement, plus a fourth axis to move a cutter for prostate surgery. (See a simplified drawing of the robot’s structure.)

The geometry of this design allows the robot to hollow out a cavity from within the prostate gland. The robot is controlled by a pair of programmable embedded motor control systems, which are directed by an i486DX2-based PC. The robot allows surgeons to specify the correct cutting sequence to remove tissue.
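To make the idea of a programmed cutting sequence concrete, here is a minimal sketch of how a layered, multi-axis cutting plan could be represented and generated. The axis names, step sizes and data structure are hypothetical illustrations, not the PROBOT’s actual control software.

```python
# Hypothetical sketch of a PROBOT-style cutting plan: three positioning axes
# plus a cutter axis, stepped through under program control. All names and
# values here are illustrative, not taken from the actual PROBOT software.
from dataclasses import dataclass

@dataclass
class CutStep:
    rotation_deg: float   # rotation of the tool around the cavity axis
    reach_mm: float       # how far the cutter extends toward the tissue
    depth_mm: float       # insertion depth along the fourth (cutter) axis
    cutter_on: bool

def plan_cavity(radius_mm: float, depth_mm: float,
                ring_step_deg: float = 15.0, layer_step_mm: float = 2.0):
    """Generate a layer-by-layer sequence that hollows out a simple cavity."""
    steps = []
    depth = 0.0
    while depth <= depth_mm:
        rotation = 0.0
        while rotation < 360.0:
            steps.append(CutStep(rotation, radius_mm, depth, cutter_on=True))
            rotation += ring_step_deg
        depth += layer_step_mm
    return steps

if __name__ == "__main__":
    plan = plan_cavity(radius_mm=8.0, depth_mm=20.0)
    print(f"{len(plan)} cutting steps planned")  # reviewed before any cut is made
```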

But the idea of having any degree of independent behavior in a robot didn’t catch on. Although its designers tested the PROBOT in the lab and in human subjects, it was never used widely in surgery.

“Doctors just didn’t feel comfortable with the idea,” says Justin Vale, a consultant urological surgeon at Imperial College and a fellow at the Royal College of Surgeons. “The PROBOT project shut down when funding for it ran out.”

Caption: PROBOT/ Imperial College London


Digital Contacts Will Keep an Eye on Your Vital Signs


Forget about 20/20. “Perfect” vision could be redefined by gadgets that give you the eyes of a cyborg.

The tech industry calls the digital enrichment of the physical world “augmented reality.” Such technology is already appearing in smartphones and toys, and enthusiasts dream of a pair of glasses we could don to enhance our everyday perception. But why stop there?

Scientists, eye surgeons, professors and students at the University of Washington have been developing a contact lens containing one built-in LED, powered wirelessly with radio frequency waves.

Eventually, more advanced versions of the lens could be used to provide a wealth of information, such as virtual captions scrolling beneath every person or object you see. Significantly, it could also be used to monitor your own vital signs, such as body temperature and blood glucose level.

Why a contact lens? The surface of the eye contains enough data about the body to perform personal health monitoring, according to Babak Parvis, a University of Washington professor of bionanotechnology, who is working on the project.

“The eye is our little door into the body,” Parvis told Wired.com.

With gadgets becoming increasingly mobile and powerful, the technology industry is seeing a steady stream of applications devoted to health. A few examples include a cellphone microscope used to diagnose malaria, surgeons honing their skills with the Nintendo Wiimote, and an iPhone app made for diabetes patients to track their glucose levels.

A contact lens with augmented-reality powers would take personal health monitoring several steps further, Parvis said, because the surface of the eye can be used to measure much of the data you would read from your blood tests, including cholesterol, sodium, potassium and glucose levels.

And that’s just the beginning. Because this sort of real-time health monitoring has been impossible in the past, there’s likely more about the human eye we haven’t yet discovered, Parvis said. And beyond personal health monitoring, this fingertip-sized gadget could one day create a new interface for gaming, social networking and, well, interacting with reality in general.


Parvis and his colleagues have been working on their multipurpose lens since 2004. They integrated miniature antennas, control circuits, an LED and radio chips into the lens using optoelectronic components they built from scratch. They hope these components will eventually include hundreds of LEDs to display images in front of the eye. Think words, charts and even photographs. (The illustration above is a concept image showing what it would look like with the lens displaying a digital overlay of the letter E.)

Sounds neat, doesn’t it? But the group faces a number of challenges before achieving true augmented eye vision.

First and foremost, safety is a prime concern with a device that comes in contact with the eye. To ensure the lens is safe to wear, the group has been testing prototypes on live rabbits (pictured to the right), which have successfully worn the lenses for 20 minutes at a time with no adverse effects. However, the lens must undergo much more testing before gaining approval from the Food and Drug Administration.

A fundamental challenge this contact lens will face is the task of tracking the human eye, said Blair MacIntyre, an associate professor and director of the augmented environments lab at Georgia Tech College of Computing. MacIntyre is not involved in the contact lens project, but he helped develop an augmented-reality zombie shooter game.

“These developments are obviously very far from being usable, but very exciting,” MacIntyre said. “Using them for AR will be very hard. You need to know exactly where the user is looking if you want to render graphics that line up with the world, especially when their eyes saccade (jump around), which our eyes do at a very high rate.”

Given that obstacle, we’re more likely to see wearable augmented-reality eyewear in the form of glasses before a contact lens, MacIntyre said. With glasses, we’ll only need to track where the glasses are and where the eyes are relative to them, as opposed to where the eyes are actually looking.

And with a contact lens, it will be difficult to cram heavy computational power into such a small device, even with today’s state-of-the-art technologies, Parvis admits. There are many advanced sensors that would amplify the lens’ abilities, but the difficulty lies in integrating them, which is why Parvis and his colleagues have had to engineer their own components. And when the contact lens evolves from personal health monitoring into more processor-intense augmented-reality applications, it’s more likely it will have to draw its powers from a companion device such as a smartphone, he said.

Layar, an Amsterdam-based startup focusing on augmented reality, shares University of Washington’s vision of an augmented-reality contact lens. However, Raimo van der Klein, CEO of Layar, said such a device’s vision would be limited if it did not work with an open platform supporting every type of data available via the web, such as mapping information, restaurant reviews or even Twitter feeds. Hence, his company has taken a first step by releasing an augmented-reality browser for Google Android smartphones, for which software developers can provide “layers” of data for various web services.

Van der Klein believes a consumer-oriented, multipurpose lens is just one example of where augmented-reality technology will take form in the near future. He said to expect these applications to move beyond augmenting vision and expand to other parts of the body.

“Imagine audio cues through an earpiece or sneakers vibrating wherever your friends are,” van der Klein said. “We need to keep an open eye for future possibilities, and I think a contact lens is just part of it.”


Photos: University of Washington


Britain’s Oldest Working Computer Roars to Life


The oldest original working computer in the U.K., which has been in storage for nearly 30 years, is getting restored to its former glory.

The Harwell computer, also known as WITCH, is getting a second lease on life at the National Museum of Computing at Bletchley Park. The machine is the oldest surviving computer whose programs, as well as data, are stored electronically, according to the museum.

The Harwell WITCH is a relay-based machine that uses 900 Dekatron gas-filled tubes, each of which can hold a single digit in memory. It has paper tape for both data input and program storage. The computer was used in the design of Britain’s first nuclear reactors. (Read more about the computers used at Harwell in the 1940s and 1950s.)
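For a sense of what digit-per-tube, decimal storage means in practice, here is a toy sketch of a register built from ten-state cells. It only mimics the idea; it is not a simulation of the WITCH’s actual relay and tube circuitry.

```python
# Rough illustration of decimal storage: each "Dekatron" holds one digit (0-9),
# and a register is a row of such tubes. Illustrative only, not the real WITCH.
class Dekatron:
    def __init__(self):
        self.state = 0               # which of the ten cathodes is glowing

    def pulse(self):
        """Advance the glow by one position; report a carry when it wraps past 9."""
        self.state = (self.state + 1) % 10
        return self.state == 0       # True means carry into the next tube

class Register:
    def __init__(self, digits=8):
        self.tubes = [Dekatron() for _ in range(digits)]   # least significant first

    def add_one(self):
        """Increment the register the way a pulse train would: ripple the carries."""
        for tube in self.tubes:
            if not tube.pulse():
                break

    def value(self):
        return sum(t.state * 10 ** i for i, t in enumerate(self.tubes))

reg = Register()
for _ in range(1234):
    reg.add_one()
print(reg.value())  # 1234
```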

“Its promises for reliability over speed were certainly met – it was definitely the tortoise in the tortoise and the hare fable,” says Kevin Murrell, a director and trustee of The National Museum of Computing. “In a race with a human mathematician using a mechanical calculator, the human kept pace for 30 minutes, but then had to retire exhausted as the machine carried on remorselessly. The machine once ran for ten days unattended over a Christmas and New Year holiday period.”

It was a feat for its time. Harwell was operational until 1957 and was then used in computer education until 1973. After that it was disassembled and put into storage, only to be revived now.

The Harwell will be housed alongside the rebuild of the earlier, code-breaking Colossus Mark II, the world’s first electronic computer.

Check out more photos and video of the Harwell computer below.



B.F.H. Coleman, a lecturer in charge of digital computing at Wolverhampton College of Technology, checks a punched tape for the 1950s WITCH. The picture above was taken in 1964.

Also see BBC’s video of the Harwell computer.

Top image: The machine being used at Wolverhampton and Staffordshire College of Technology in 1961

Photos courtesy of The National Computing Museum and Computer Conservation Society/UKAEA/Wolverhampton Express and Star


Humanoid Robots Share Their First Kiss

Say hello to Thomas and Janet, two humanoid machines who claim to be the first robotic pair to share a kiss.

The kiss between the robots was unveiled in December during a performance of scenes from the Phantom of the Opera at National Taiwan University of Science and Technology, says IEEE Spectrum.

Since then, the robots have been working on their technique. Chyi-Yeu Lin, a mechanical engineering professor at the Taiwanese university, says a kiss requires sophisticated hand-eye coordination between the robots, as well as self-balancing mechanisms.

“To make the robots’ smooches and expressions seem realistic, the team adopted several techniques, including manual molding, non-contact 3-D face scanning and 3-D face morphing,” says Spectrum, which interviewed Chyi-Yeu recently.

The kissing robots are similar to other efforts to help robots express emotions in ways that are familiar to humans. For instance, researchers recently created an incredibly realistic Einstein robot that can smile and frown. Researchers initially had to program each of the robot’s 31 artificial muscles individually, but they eventually used machine-learning techniques to let the robot learn selected expressions on its own.

Thomas and Janet have six expressions, created using servos that pull at several points in the face and mouth. Eventually, Lin’s team hopes the pair will join a group of autonomous performing robot actors.


Video courtesy: Spectrum Magazine


If You’re Not Seeing Data, You’re Not Seeing

As you shove your way through the crowd in a baseball stadium, the lenses of your digital glasses display the names, hometowns and favorite hobbies of the strangers surrounding you. Then you claim a seat and fix your attention on the batter, and his player statistics pop up in a transparent box in the corner of your field of vision.


It’s not possible today, but the emergence of more powerful, media-centric cellphones is accelerating humanity toward this vision of “augmented reality,” where data from the network overlays your view of the real world. Already, developers are creating augmented reality applications and games for a variety of smartphones, so your phone’s screen shows the real world overlaid with additional information such as the location of subway entrances, the price of houses, or Twitter messages that have been posted nearby. And publishers, moviemakers and toymakers have embraced a version of the technology to enhance their products and advertising campaigns.

“Augmented reality is the ultimate interface to a computer because our lives are becoming more mobile,” said Tobias Höllerer, an associate professor of computer science at UC Santa Barbara, who is leading the university’s augmented reality program. “We’re getting more and more away from a desktop, but the information the computer possesses is applicable in the physical world.”

Tom Caudell, a researcher at aircraft manufacturer Boeing, coined the term “augmented reality” in 1990. He applied the term to a head-mounted digital display that guided workers through assembling electrical wires in aircraft. The early definition of augmented reality, then, was an intersection between virtual and physical reality, where digital visuals are blended into the real world to enhance our perceptions.

Augmented Reality Today

Total Immersion is one of the most successful augmented reality providers today, having created interactive baseball cards, a 3-D tour of the Star Trek Enterprise and now a new line of Mattel action figures based on the upcoming sci-fi flick Avatar.

Here’s a quick look at how the company’s augmented reality technology works. Take the baseball cards: Users first log on to a website (www.toppstown.com), enter a 3-D section and type in an interactive code found on their baseball card to activate the software. Then they hold the card under a webcam, and Total Immersion’s software goes to work.

Futurists and computer scientists continue to raise their standards for a perfectly augmented world. Höllerer’s dream for augmented reality is for it to reach a state in which it does not rely on a pre-downloaded model to generate information. That is, he wants to be able to point a phone at a city it’s completely unfamiliar with, download the surroundings and output information on the fly. He and his peers at UCSB call this idea “Anywhere Augmentation.”

But we have a long way to go — perhaps several years — before achieving Anywhere Augmentation, Höllerer said. Augmented reality is stifled by limitations in both hardware and software, he explained. Cellphones require superb battery life, computational power, cameras and tracking sensors. On the software side, augmented reality requires much more sophisticated artificial intelligence and 3-D modeling applications. And above all, the technology must become affordable to consumers: A solid augmented-reality device built with the best technology available today would cost nearly $100,000, Höllerer said.

Given the cost of creating decent augmented-reality technology, early attempts have focused on two areas. Desktop augmented reality is appearing prominently in attention-grabbing, big-budget advertisements, and a few consumer applications of the technology are just beginning to surface on smartphones.


A recent example of augmented reality appeared in the marketing campaign for the sci-fi blockbuster District 9. On the movie’s official website was a “training simulator” game, which asked computer users to print a postcard containing the District 9 logo and hold it in front of a webcam. The postcard contains a marker; when the game detects that marker in the webcam video, it overlays a 3-D hologram of a District 9 character on the computer screen. From there, players can click buttons to fire a gun, jump up and down or throw a human against a wall in the game. (See video above.)
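The basic loop behind this kind of marker-based AR is simple in outline: grab a camera frame, find the printed marker, then draw the virtual content where the marker sits. The snippet below sketches that loop using OpenCV’s ArUco markers purely as a stand-in; it assumes the opencv-contrib-python package with its classic aruco API, and it is not the District 9 game’s or Total Immersion’s actual code.

```python
# Sketch of a marker-based AR loop, using OpenCV's ArUco markers as a
# stand-in for the proprietary marker systems described in the article.
# Assumes opencv-contrib-python with the classic cv2.aruco API.
import cv2

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
cap = cv2.VideoCapture(0)            # the webcam watching the printed card

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Find marker corners in this frame; a real AR app would go on to
    # estimate the marker's 3-D pose and render a character model there.
    corners, ids, _ = cv2.aruco.detectMarkers(frame, aruco_dict)
    if ids is not None:
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("marker-ar-sketch", frame)
    if cv2.waitKey(1) == 27:         # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```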

Mattel is using the same type of 3-D imaging augmented reality in “i-Tag” action figures for James Cameron’s new movie Avatar. The toy includes a card containing a marker, which is projected as a 3-D action figure on a computer. This way, children can battle each other’s virtual characters on a computer screen.

But augmented reality isn’t truly useful in a static desktop environment, Höllerer said, because people’s day-to-day realities involve more than sitting around all day (outside of work, at least). And that’s why smartphones, which include GPS hardware and cameras, are crucial to driving the evolution of augmented reality.

Brian Selzer, co-founder of Ogmento, a company that creates augmented reality products for games and marketing, recognizes the need for augmented reality to go mobile. He said his company is working on several projects coming in the near future to help market mainstream movies with augmented reality smartphone apps. For example, movie posters will trigger interactive experiences on an iPhone, such as a trailer or even a virtual treasure hunt to promote the film.

“The smartphone is bringing AR into the masses right now,” Selzer said. “In 2010 every blockbuster movie is going to have a mobile AR campaign tied to it.”


On the consumer end of the spectrum, developers have recently released augmented reality apps for the Google Android-powered HTC G1 handset. Layar, a company based in Amsterdam, released an augmented reality browser for Android smartphones in June. The Layar browser (video above) looks at an environment through the phone’s camera, and the app displays houses for sale, popular restaurants and shops, and tourist attractions. The software relies on downloading “layers” of data provided by developers coding for the platform. Thus, while the information appears to display in real time, it’s not truly real-time: The app can’t analyze data it hasn’t downloaded ahead of time.
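In outline, the pattern described here is: download a “layer” of geo-tagged points ahead of time, then use the phone’s GPS position and compass heading to decide which points fall inside the camera’s field of view. Below is a minimal sketch of that filtering step; the data format and function names are made up for illustration and are not Layar’s actual API.

```python
# Illustrative sketch of the "layer" pattern: points of interest are
# downloaded ahead of time, then filtered by distance and compass bearing
# so they can be drawn over the camera view. Hypothetical data and names.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance in metres between two lat/lon points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Compass bearing from point 1 to point 2, in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

def visible_pois(layer, phone_lat, phone_lon, heading_deg, fov_deg=60, max_m=500):
    """Return POIs from a pre-downloaded layer that fall inside the camera's view."""
    hits = []
    for poi in layer:
        d = haversine_m(phone_lat, phone_lon, poi["lat"], poi["lon"])
        b = bearing_deg(phone_lat, phone_lon, poi["lat"], poi["lon"])
        off = (b - heading_deg + 180) % 360 - 180   # signed angle from screen centre
        if d <= max_m and abs(off) <= fov_deg / 2:
            hits.append((poi["name"], round(d), round(off)))
    return hits

layer = [  # pretend this was downloaded earlier, as the article notes
    {"name": "House for sale", "lat": 52.3740, "lon": 4.8900},
    {"name": "Popular cafe", "lat": 52.3732, "lon": 4.8925},
]
print(visible_pois(layer, 52.3730, 4.8910, heading_deg=310))
```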

“This is the first time media, internet and digital information is being combined with reality,” said Martin Lens-FitzGerald, co-founder of Layar. “You know more, you find more, or you see something you haven’t seen before. Some people are even saying that it might be even bigger than the web.”

Cellphone giant Nokia is currently testing an AR app called Point & Find, which involves pointing your camera phone at real-world objects and planting virtual information tags on them (above). Users of the app can view each other’s tags on the phone screen, essentially crowdsourcing an augmented reality.

“This year we’re feeling a real urgency to work on augmented reality because the hardware is finally catching up to our needs,” said Rebecca Allen, director of Nokia’s research center in Hollywood.


Georgia Tech is also busy tinkering with augmented reality. The video above demonstrates an augmented-reality zombie shooter called ARhrrrr. The smartphone in use is a prototype containing an Nvidia Tegra, a powerful chip specializing in high-end graphics for mobile devices. How do you play? Point the phone camera at a map containing markers, and a 3-D hologram of a town overrun by zombies appears on the phone’s screen. Using the phone, you can shoot the zombies from the perspective of a helicopter pilot. And you can even place (real) Skittles on the physical map and shoot them to set off (virtual) bombs.

As for the iPhone, officially there are no augmented reality apps in the App Store yet — because Apple doesn’t provide an open API to access live video from the phone’s camera. This barrier prompted augmented reality enthusiasts and professionals to write an Open Letter to Apple pleading for access to this API to make augmented reality apps possible in the App Store.

Brad Foxhoven, Selzer’s partner at Ogmento, said Apple has told him the next version of the iPhone OS (3.1) “would make [AR developers] happy,” implying the live-video API will become open, and AR apps will become available very soon.


Meanwhile, some augmented reality developers have already hacked away at the iPhone’s software development kit to code proof-of-concept augmented reality apps. The video above demonstrates an app called Twittaround, an augmented reality Twitter viewer on the iPhone. The app shows live tweets of mobile Twitter users around your location.

“We’re doing as much as we can with the current technology,” Selzer said regarding the overall augmented-reality developer community. “This industry is just getting started, and as processing speeds speed up, and as more creative individuals get involved, our belief is this is going to become a platform that becomes massively adopted and immersed in the next few years.”


Photo: Layar


Hardware Hackers Create a Modular Motherboard


An ambitious group of hardware hackers have taken the fundamental building blocks of computing and turned them inside out in an attempt to make PCs significantly more efficient.

The group has created a motherboard prototype that uses separate modules, each of which has its own processor, memory and storage. Each square cell in this design serves as a mini-motherboard and network node; the cells can allocate power and decide to accept or reject incoming transmissions and programs independently. Together, they form a networked cluster with significantly greater power than the individual modules.

The design, called the Illuminato X Machina, is vastly different from the separate processor, memory and storage components that govern computers today.

“We are taking everything that goes into a motherboard now and chopping it up,” says David Ackley, associate professor of computer science at the University of New Mexico and one of the contributors to the project. “We have a CPU, RAM, data storage and serial ports for connectivity on every two square inches.”

A modular architecture designed for parallel and distributed processing could help take computing to the next level, say its designers. Instead of having an entire system crash if a component experiences a fatal error, failure of a single cell can still leave the rest of the system operational. It also has the potential to change computing by ushering in machines that draw very little power.
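As a toy illustration of that claim, the sketch below models a grid of independent cells that each do their own work and cover for a dead neighbor, so a single-cell failure degrades the array instead of crashing it. It illustrates the idea of the architecture only; it is not the Illuminato X Machina’s firmware.

```python
# Toy model of a modular, cell-based machine: every cell has its own state,
# only touches its four edge neighbors, and a failed cell's work is picked up
# by a live neighbor. Purely illustrative; not the X Machina's firmware.
class Cell:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.alive = True
        self.work_done = 0

class Grid:
    def __init__(self, w, h):
        self.cells = {(x, y): Cell(x, y) for x in range(w) for y in range(h)}

    def neighbors(self, cell):
        """Live cells sharing one of the four edges, as on the physical board."""
        found = []
        for dx, dy in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            n = self.cells.get((cell.x + dx, cell.y + dy))
            if n is not None and n.alive:
                found.append(n)
        return found

    def step(self):
        for cell in self.cells.values():
            if cell.alive:
                cell.work_done += 1              # local computation
            else:
                helpers = self.neighbors(cell)
                if helpers:
                    helpers[0].work_done += 1    # a neighbor picks up the slack

    def fail(self, x, y):
        self.cells[(x, y)].alive = False         # a single-cell fault, not a crash

grid = Grid(4, 4)
for _ in range(10):
    grid.step()
grid.fail(1, 1)                                  # knock one module out mid-run
for _ in range(10):
    grid.step()
alive = sum(c.alive for c in grid.cells.values())
total = sum(c.work_done for c in grid.cells.values())
print(f"{alive}/16 cells still alive, {total} work units completed")
```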

“We are at a point where each computer processor maxes out at 3 GHz (clock speed), so you have to add more cores, but you are still sharing the resource within the system,” says Justin Huynh, one of the key members of the project. “Adding cores the way we are doing now will last about a decade.”

Huynh and his team are no strangers to experimenting with new ideas. Earlier this year, Huynh and his partner Matt Stack created the Open Source Hardware Bank, a peer-to-peer borrowing and lending club that funds open source hardware projects. Stack first started working on the X Machina idea about 10 months ago.

Computing today is based on the von Neumann architecture: a central processor, and separate memory and data storage. But that design poses a significant problem known as the von Neumann bottleneck. Though processors can get faster, the connection between the memory and the processor can get overloaded. That limits the speed of the computer to the pace at which it can transfer data between the two.
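A back-of-the-envelope calculation makes the bottleneck visible: no matter how fast the processor runs, it cannot compute faster than the memory bus can deliver operands. The numbers below are purely illustrative, not measurements of any particular machine.

```python
# Back-of-the-envelope illustration of the von Neumann bottleneck.
# All figures are illustrative, not measurements of a real machine.
peak_ops = 12e9            # operations/s the CPU could do if never starved
mem_bandwidth = 8e9        # bytes/s the memory bus can deliver
bytes_per_op = 16          # e.g. one addition consumes two 8-byte operands

memory_limit = mem_bandwidth / bytes_per_op      # 5e8 operations/s
attainable = min(peak_ops, memory_limit)

print(f"CPU peak:         {peak_ops:.1e} ops/s")
print(f"Memory-fed limit: {memory_limit:.1e} ops/s")
print(f"Attainable:       {attainable:.1e} ops/s (the bus, not the CPU, sets the pace)")
```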

“A von Neumann machine is like the centrally planned economy, whereas the modular, bottom-up, interconnected approach would be more capitalist,” says Ackley. “There are advantages to a centrally planned structure, but eventually it will run into great inefficiencies.”

By creating modules, Huynh and his group hope to bring a more parallel and distributed architecture to computing. Cluster-based systems aren’t new; they have been used extensively in high-end computing. But with the Illuminato X Machina, the group hopes to extend the idea to a larger community of general PC users.

“The way to think of this is that it is a system with a series of bacteria working together instead of a complex single-cell amoeba,” says JP Norair, architect of Dash 7, a new wireless data standard. An electrical and computer engineering graduate of Princeton University, Norair has studied modular architecture extensively.

Each X Machina module has a 72 MHz processor (currently an ARM chip), a solid-state drive of 16KB and 128KB of storage in an EEPROM (electrically erasable programmable read-only memory) chip. There’s also an LED for display output and a button for user interaction.

Every module has four edges, and each edge can connect to its neighbors. It doesn’t have sockets, standardized interconnects or a proprietary bus. Instead, the system uses a reversible connector. It’s smart enough to know if it is plugged into a neighbor and can establish the correct power and signal wires to exchange power and information, says Mike Gionfriddo, one of the designers on the project.

The X Machina has software-controlled switches to gate the power moving through the system on the fly and a ‘jumping gene’ ability, which means executable code can flow directly from one module to another without always involving a PC-based program downloader.

Each Illuminato X Machina node also has a custom boot loader that allows it to be programmed and reprogrammed by its neighbors, even as the overall system continues to run, explains Huynh. The X Machina creators hope to tie into the ardent Arduino community: Many simple Arduino sketches will run on the X Machina with no source-code changes, they say.

Still, there are many details that need to be worked out. Huynh and his group haven’t yet benchmarked the system against traditional PCs to establish exactly how the two compare in power consumption and speed. The lack of benchmarking also means they have no data yet on how the computing power of an X Machina array compares to a PC with an Intel Core 2 Duo chip.

Programs and applications have also yet to be written for the X Machina to show whether it can handle the kinds of tasks most users perform. To answer some of these questions, Ackley plans to introduce the Illuminato X Machina to his class at the University of New Mexico later this month. He hopes his computer science students will help work out how traditional programming concepts can be adapted to the new structure.

So far, just the first few steps towards this idea have been taken, says Huynh.

Norair agrees. “If they can successfully get half the power of an Intel chip with a cluster of microcontrollers, it will be a great success,” he says, “because the power consumption can be so low on these clusters and they have a level of robustness we haven’t seen yet.”

See the video to hear David Ackley talk about programming the Illuminato X Machina.

Programming the Illuminato X Machina from Chris Ladden on Vimeo.

Photo: Illuminato X Machina/Justin Huynh


DNA May Help Build Next Generation of Chips


In the race to keep Moore’s Law alive, researchers are turning to an unlikely ally: DNA molecules that can be positioned on wafers to create smaller, faster and more energy-efficient chips.

Researchers at IBM have made a significant breakthrough in their quest to combine DNA strands with conventional lithographic techniques to create tiny circuit boards. The breakthrough, which allows for the DNA structures to be positioned precisely on substrates, could help shrink computer chips to about a 6-nanometer scale. Intel’s latest chips, by comparison, are on a 32-nanometer scale.

“The idea is to combine leading edge lithography that can offer feature size of 25 nanometers with some chemical magic to access much smaller dimensions,” says Robert Allen, senior manager of chemistry and materials at IBM Almaden Research. “This allows us to place nano objects with 6-nanometer resolution. You don’t have a hope of doing that with lithography today.”

To keep pace with Moore’s Law, which postulates that the number of transistors on an integrated circuit will double every two years, chip makers have to squeeze an increasing number of transistors onto every chip. One measure of how densely transistors are packed is the smallest geometric feature that can be produced on a chip, usually specified in nanometers. Current lithographic techniques use either an electron beam or optics to etch patterns onto chips in what is known as a top-down technique.
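Read as arithmetic, the doubling rule is simple compounding: a count that doubles every two years grows by a factor of 2^(years/2). A quick illustrative calculation (the starting figure is hypothetical):

```python
# Moore's Law as stated above: transistor count doubles every two years.
# The starting count is a hypothetical figure, for illustration only.
def transistors(initial, years, doubling_period=2.0):
    return initial * 2 ** (years / doubling_period)

start = 1_000_000          # a hypothetical million-transistor chip
for years in (2, 10, 20):
    print(f"after {years:>2} years: {transistors(start, years):,.0f} transistors")
# after  2 years: 2,000,000
# after 10 years: 32,000,000
# after 20 years: 1,024,000,000
```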

“You pattern, mask and etch material away,” says Chris Dwyer, assistant professor in the department of electrical and computer engineering at Duke University. “It is very easy to make big structures, but tough to create molecular-scale chips using this.” Dwyer compares it to taking a block of marble and chipping away at it to create the required pattern.

Newer techniques attempt to take small chips and fuse them together to create the required larger pattern, in what is called molecular self-assembly.

“What the IBM researchers have shown is a good demonstration of where top-down and bottom-up techniques meet.”

At the heart of their research is an idea known as DNA origami. In 2006, Caltech researcher Paul Rothemund explained a method of creating nanoscale shapes and patterns using custom-designed strands of DNA. It involves folding a single long strand of viral DNA and smaller ’staple’ strands into various shapes. The technique has proven very fruitful, enabling researchers to create self-assembling nano machines, artworks and even tiny bridges.

Greg Wallraff, an IBM research scientist working on the project, says the technique has a lot of potential for creating nano circuit boards. But the biggest challenge so far has been getting the DNA origami nanostructures to align precisely on a wafer. Researchers hope the DNA nanostructures can serve as scaffolds, or miniature circuit boards, for components such as carbon nanotubes, nanowires and nanoparticles.

“If the DNA origami is scattered around on a substrate, it is difficult to locate the structures and use them to connect to other components,” says Wallraff. “These components are prepared off the chip, and the origami structure would let you assemble them on the chip.”

That precision matters for the kind of work Dwyer and his colleagues at Duke have been doing. They see IBM’s breakthrough as laying the groundwork for their research into molecular sensors. “With this development we can look to integrate the sensors onto a chip and help build hybrid systems,” says Dwyer.

Still, there are some big hurdles to clear before circuit boards based on DNA nanostructures can hit commercial production. Researchers have to achieve extremely precise alignment, with no room for error.

Even with the latest demonstration of alignment techniques, there is still some angular dispersion, points out Dwyer.

“If you put a transistor down on a circuit board, there is no dispersion,” says Dwyer. “Our computing systems cannot deal with that kind of randomness.”

That’s why commercial production of chips based on the DNA origami idea could be anywhere from five years to a decade away, says Allen.

“If you are going to take something from the bench-top scale to a fab, there are enormous barriers,” he says. “You really need to understand the mechanisms of defect generation. What we don’t want to imply is that this is ready to go into a factory and make Star Trek–like chips.”

Photo: Low concentrations of triangular DNA origami bind to wide lines on a lithographically patterned surface.
Courtesy IBM.


Experimental Tech Turns Your Coffee Table Into a Universal Remote


Stock up on coasters. A new technology combines the coffee table with a universal remote so that people sitting around the table can tap on a screen to change the channel, turn up the volume or dim the lights.

CRISTAL (Control of Remotely Interfaced Systems using Touch-based Actions in Living spaces) is a user-interface research project that attempts to create a natural way of connecting with devices. The system offers a streaming video view of the living room on a tabletop, so users can walk up to it, see the layout of the room and interact with the TV or the photo frame.

“We wanted a social aspect to activities such as choosing what to watch on TV and we wanted to make the process easy and intuitive,” says Stacey Scott, assistant professor at the University of Waterloo in Ontario, Canada, and a member of the project. A demo of CRISTAL was shown at the Siggraph graphics conference earlier this month.

The idea isn’t completely novel: In 2008, Microsoft showed off Surface, a multitouch display that allows users to interact with it using gestures.

Universal remotes have become popular in the last few years, but they are still difficult to use. Their greatest flaw, though, may be that they do not help quash those battles over who gets the remote. CRISTAL solves those problems, says Christian Müller-Tomfelde, an Australian researcher who is currently writing a book on research in tabletop displays.

“It is a clever use of the tabletop as a ‘world-in-miniature’ interface to control room elements,” he says.

Scott and researchers from the Upper Austria University of Applied Sciences have been working on the idea for less than a year. It started when Michael Haller, the head of the Media Interaction Lab at the university, found himself frustrated with different remotes for each device: TV, radio and DVD player.

“Every time you get a new device into the living room, you get a new remote with it,” says Scott. “And instead of difficult programmable universal remotes, this offers intuitive mapping of the different devices in the home.”

CRISTAL uses a camera to capture the living room and all the devices in it, including lamps and digital picture frames. The captured video is displayed on the multitouch coffee table. The video image of the device itself is the interface, so a sliding gesture on the image can turn up the volume of the TV, for instance. To watch a movie, drag an image of the movie cover and drop it onto the TV on the multitouch screen.
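In outline, the tabletop maps regions of the live video to devices and maps gestures on those regions to commands. The sketch below illustrates that mapping under made-up assumptions; the device names, screen regions and command strings are hypothetical and are not the CRISTAL software.

```python
# Sketch of a CRISTAL-style mapping: regions of the tabletop video correspond
# to devices, and gestures on a region become commands. Device names, regions
# and the command strings are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    x0: int
    y0: int
    x1: int
    y1: int

    def contains(self, x, y):
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

REGIONS = [                       # bounding boxes in the tabletop video
    Region("tv",    100,  80, 300, 200),
    Region("lamp",  400,  60, 460, 160),
    Region("frame", 500, 120, 580, 180),
]

def handle_gesture(kind, x, y, payload=None):
    """Translate a touch gesture at (x, y) into a command for the device there."""
    target = next((r for r in REGIONS if r.contains(x, y)), None)
    if target is None:
        return None
    if kind == "slide_up" and target.name == "tv":
        return ("tv", "volume_up")
    if kind == "slide_up" and target.name == "lamp":
        return ("lamp", "brighten")
    if kind == "drop" and target.name == "tv":
        return ("tv", f"play:{payload}")      # e.g. a movie cover dragged onto the TV
    return ("unknown", kind)

print(handle_gesture("slide_up", 150, 100))            # ('tv', 'volume_up')
print(handle_gesture("drop", 200, 150, "Casablanca"))  # ('tv', 'play:Casablanca')
```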

But it will be a few years before this remote is available at Best Buy. It could take five to 10 years before affordable multitouch tabletops can be created for consumers, says Müller-Tomfelde. “The investment to get such a coffee-table display into the living room is not to be underestimated, as we can see with Microsoft’s Surface technology,” he says.

Scott estimates that a tabletop remote such as CRISTAL could cost $10,000 to $15,000. But she is confident that the idea can become viable enough for consumer production in a few years, especially if it can be combined with Microsoft’s Surface product.

Check out a video demo of CRISTAL below.

Photo and video: Media Interaction Lab
