Germanium Laser Breakthrough Brings Optical Computing Closer

Researchers at MIT have demonstrated the first laser that uses the element germanium.

The laser, which operates at room temperature, could prove to be an important step toward computer chips that move data using light instead of electricity, say the researchers.

“This is a very important breakthrough, one I would say that has the highest possible significance in the field,” says Eli Yablonovitch, a professor in the electrical engineering and computer science department at the University of California, Berkeley, who was not involved in the research. “It will greatly reduce the cost of communications and make for faster chips.”

Even as processors become more powerful, they’re running into a communications barrier: Just moving data between different parts of the chip takes too long. Also, higher bandwidth connections are needed to send data to memory. Traditional copper connections are becoming impractical because they consume too much power to transport data at the increasingly higher rates needed by next-generation chips. Copper also generates excessive heat, and that imposes other design limits because engineers need to find ways of dissipating the heat.

Transmitting data with lasers, which can concentrate light into a narrow, powerful beam, could be a cheaper and more power efficient alternative. The idea, known as photonic computing, has become one of the hottest areas of computer research.

“The laser is just totally new physics,” says Lionel Kimerling, an MIT professor whose Electronic Materials Research Group developed the germanium laser.

While lasers are attractive, the materials currently used in lasers — such as gallium arsenide — can be difficult to integrate into silicon fabrication plants, or fabs.

That’s given birth to “external lasers,” says Yablonovitch. Lasers have to be constructed separately and grafted onto the chips, instead of being built directly on the same silicon that holds the chips’ circuits. This reduces efficiency and increases cost.

A germanium laser solves that problem, because it could in principle be built alongside the rest of the chip, using similar processes and in the same factory.

“It’s going to take a few years to learn how to integrate this type of laser into a standard silicon process,” says Yablonovitch. “But once we know that, we can have silicon communication chips that have internal lasers.”

Eventually, MIT researchers believe germanium lasers could be used not just for communications, but for the logic elements of the chips too — helping to build computers that perform calculations using light instead of electricity.

But University of California, Berkeley’s Yablonovitch says it is unlikely that light will replace electricity entirely. “I think we will be using light in conjunction with electronic logic circuits,” he says. “Light allows internal communications much more efficiently, but the logic elements themselves are likely to remain driven by electricity.”

Graphic: Christine Daniloff/MIT


To Charge your iPod, Plug in Your Jeans

A breakthrough in wearable computing lets researchers turn ordinary cotton and polyester into electronic textiles that can double as rechargeable batteries. That means powering an iPod or cellphone could become as easy as plugging it into your T-shirt or jeans and charging the clothing overnight.

“Energy textiles will change the development of wearable electronics,” Liangbing Hu, one of the Stanford University researchers involved in the project, told Wired.com. “There are not that many solutions available for energy storage for wearable devices. Electronic textiles try to solve that problem.”

Wearable electronics is an attempt to create a new category of flexible, lightweight devices such as wearable displays, embedded health monitors and textiles with electronics melded in. In the case of textiles, though, most attempts to integrate electronics so far have involved patching sensors and resistors onto existing fabric.

The latest attempt tries to bring the electronics to the molecular level. The researchers coated cellulose and polyester fibers with ‘ink’ made from single-walled carbon nanotubes. The nanotubes are electrically conductive carbon fibers barely 1/50,000 the width of a human hair.

The process of dyeing with this special ink is similar to that used for dyeing fibers and fabrics in the textile industry, they say. Details of the method were published in a paper in the ACS’ Nano Letters journal.

The coating makes the fibers highly conductive by turning them into porous conductors. The treated textiles can then be used as electrodes, with standard textiles as separators, to create fully stretchable supercapacitors. Ordinary capacitors store energy; supercapacitors turbocharge that principle, and can be charged and discharged virtually an unlimited number of times.

“If you have a high surface area, you can store a high amount of charge,” says Hu. “Since we coat carbon nanotubes on textile fibers, it increases the surface area and allows for charge and discharge cycles up to one million times.”
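Hu’s point about surface area can be made concrete with the standard capacitor energy formula. The sketch below is a back-of-the-envelope illustration; the capacitance-per-area figure and the electrode area are assumed round numbers, not values from the Stanford paper.

```python
# Illustrative estimate of why electrode surface area matters for a
# supercapacitor. All numbers here are hypothetical round figures.
def supercap_energy_j(capacitance_f, voltage_v):
    """Energy stored in a capacitor: E = 1/2 * C * V^2, in joules."""
    return 0.5 * capacitance_f * voltage_v ** 2

# Double-layer capacitors store on the order of 10 microfarads per cm^2
# of electrode surface; nanotube-coated fibers multiply the effective area.
area_cm2 = 1000.0                # assumed effective electrode area
capacitance = 10e-6 * area_cm2   # 0.01 F
print(supercap_energy_j(capacitance, 2.5))  # prints 0.03125
```

Doubling the effective area doubles the capacitance, and with it the stored energy — which is why a porous, fiber-scale coating pays off.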

The electronic textiles produced by this method retain the flexibility and stretchability of regular cotton and polyester. They also kept their electronic properties despite simulated repeated laundering, say the researchers.

The next step is to combine it with inks of other materials that could help turn the fabric into wearable solar cells and batteries.

The researchers are also looking to use graphene, a form of carbon derived from graphite oxide, instead of carbon nanotubes. “Graphene can be much cheaper than nanotubes,” says Hu, “so alternative materials like that could significantly reduce the cost of energy textiles.”

Photo: E-ink treated fabrics could help charge electronics / Stanford University


Corset Reacts to Carbon Dioxide Levels in the Air

Take a deep breath and exhale. Feeling a little tight around the middle? Your corset could be sending you a message about air pollution.

Designer Kristin O’Friel has created a garment that reacts to the carbon dioxide levels in the environment and offers physical feedback by tightening the bodice in relation to air quality.

“I wanted to create an experience that changed our perception of environmental data,” says O’Friel, “by making a wearable device that engaged with this information in a direct and tangible way.”

The CO2RSET has a carbon dioxide sensor sewn into the garment. It responds to CO2 readings by tightening or loosening itself as the levels of the gas in the atmosphere rise or fall. O’Friel designed it as a student in the Interactive Telecommunications Program at New York University’s Tisch School of the Arts.

O’Friel says she chose a corset because it cinches the waist and forces wearers to breathe shallowly. “It’s contextually appropriate as the wearable interface to air quality,” she says.

The corset uses a TGS4161 sensor from a company called Figaro and mini gear motors from Solarbotics for the actuation.
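The sensor-to-motor behavior amounts to a simple mapping from CO2 concentration to how far the motors cinch the bodice. The sketch below is a hypothetical illustration of that control logic, not O’Friel’s actual firmware; the ppm thresholds are assumptions.

```python
# Hypothetical CO2RSET control mapping: convert a CO2 reading in parts
# per million to a corset tightness setting. Thresholds are assumed.
def tightness(co2_ppm, min_ppm=400, max_ppm=2000):
    """Return tightness from 0.0 (fully loose) to 1.0 (fully cinched)."""
    clamped = max(min_ppm, min(co2_ppm, max_ppm))
    return (clamped - min_ppm) / (max_ppm - min_ppm)

for reading in (350, 800, 2500):      # clean air, stuffy room, polluted
    print(round(tightness(reading), 2))  # prints 0.0, 0.25, 1.0
```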

The garment may not be very practical, but it’s a fun way to introduce the idea of wearable computing and open it up to possibilities.

Take a closer look at the corset:


More at Kristin O’Friel’s Flickr stream

[via UberGizmo]

Photos: Kristin O’Friel


Company Offers Free Robots for Open Source Developers

Robotics company Willow Garage is giving 10 of its robots free to researchers in return for a promise that they will share their development efforts with the open-source community.

“The hardware is designed to be a software developer’s dream with a lot of compute power inside and many of the annoying problems with general robotic platforms taken care of,” says Steve Cousins, CEO of Willow Garage. “We have created a platform that is going to accelerate the development of personal robotics.”

Despite hundreds of researchers working worldwide in the area of robotics, their development efforts tend to be proprietary. Researchers may be working on similar problems but they rarely share code or hardware.

Willow Garage was founded in 2006 with the idea of creating an open-source hardware and software platform. In addition to its hardware prototype, Willow Garage has also developed the Robot Operating System (ROS), which originated at Stanford’s Artificial Intelligence Laboratory. ROS is based on Linux and can work with both Windows and Mac PCs.

Cousins says Willow Garage’s giveaway is targeted at research labs, rather than the DIY hobbyist.

“Utilization is an important criterion for us,” he says. “Rather than give the robots away to someone in a garage somewhere, we would prefer to give it to a lab where a lot of students can work on it.”

To get their free robot, interested labs and researchers have to submit a letter of intent to the company by the end of the month, and follow up with a full proposal by March 1. Ultimately, they will have to make their software code available as open source.

Here’s what the researchers will get with the PR2 robot.

PR2 has two eight-core Xeon servers on board, each with 24GB of RAM; a 500GB internal hard drive; and a 1.5TB external removable drive.

The robot has accelerometers and pressure sensors distributed across its head, arms and base. Its head contains two stereo camera pairs coupled with an LED projector, a 5MP camera and a tilting laser range finder. The forearms each have an Ethernet-based wide-angle camera.

The robot’s two arms have almost the same range of motion as human arms, says Willow Garage, and its spine is extensible so it can reach objects on countertops. (More details of the PR2 hardware.)

PR2 comes with a 1.2 kWh battery pack that has on-board chargers and the capacity for about two hours of run-time.
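Those two figures imply the PR2’s average power draw, a quick sanity check worth doing:

```python
# Average power draw implied by the PR2's stated battery figures:
# a 1.2 kWh pack lasting about two hours works out to roughly 600 W.
battery_wh = 1200.0   # 1.2 kWh pack
runtime_h = 2.0       # stated run time
avg_draw_w = battery_wh / runtime_h
print(avg_draw_w)     # prints 600.0
```

That is about the draw of a desktop gaming PC, which fits a robot carrying two eight-core servers plus motors and sensors.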

Check out a video of the PR2 robot navigating through eight doors and plugging its power cord into nine different outlets.

Photo: PR2 robot/Willow Garage


Toyota Sees Robotic Nurses in Your Lonely Final Years

Before Toyota made cars, it made robots. It’s making them again, and wants to use them in a most unusual place.

When it was founded in 1926, Toyoda Automatic Loom Works (as it was then known) manufactured automatic fabric looms that could detect problems and shut down automatically. It marketed these revolutionary devices as having “autonomation” — automation with human intelligence.

Now Toyota, looking ahead at the second half of this century, sees a mounting health care crisis and aging population coming to Japan. It sees a future where manufacturing robotic workers is the hot new industry and “autonomation” takes on a whole new meaning.

And the first place we might see these robots is in hospitals.

Japan’s aging population and low birthrate point to a looming shortage of workers, and Japan’s elder care facilities and hospitals are already competing for nurses. This fact has not escaped Toyota, which runs Toyota Memorial Hospital in Toyota City, Japan. Taking a lead from Honda, Toyota in 2004 announced plans to build “Toyota Partner Robots” and begin selling them in 2010 after extensive field trials at Toyota Memorial.

Toyota doesn’t see these machines serving only as nurses. They’re also being designed to provide help around the house and do work at the factory. But it’s the idea of robotic nurses that drew support when Japan’s Machine Industry Memorial Foundation estimated Japan could save 2.1 trillion yen (about $21 billion) in health care costs each year using robots to monitor the nation’s elderly.

This is more than some futuristic fantasy. The Japanese government is drafting safety regulations for service robots, which would include nursing droids. Japan’s New Energy and Industrial Technology Development Organization has launched a five-year project to improve safety standards for the machines. The South Korean government has even drawn up a code of ethics for how robots should treat humans and, perhaps ironically, how humans should treat robots.

Toyota's 'partner robot' makes a little music.

“As aging of the population is a common problem for developed countries, Japan wants to become an advanced country in the area of addressing the aging society with the use of robots,” Motoki Korenaga, a ministry of trade and industry official, told Agence France-Presse.

It isn’t so far-fetched. Japan leads the world in building robots, and the bots show remarkable skill. Honda’s famous android, Asimo, has served tea, conducted the Detroit Symphony Orchestra and freaked out James May of the BBC program Top Gear. Toyota’s robots have even played the violin and the trumpet.

Of course, there’s a huge difference between waving a conductor’s baton and providing aid and comfort to grandma. But Japan’s biggest automakers are determined to make this work. Honda has spent hundreds of millions of dollars developing its human-like robots, and Toyota has 200 people working on the project full-time. To put that in perspective, it might assign 500 engineers to developing a new car platform. Toyota also is working with at least 10 corporate suppliers and 11 universities.

Toyota’s experience building cars, particularly hybrids, will be invaluable. It makes all of its own motors, batteries and power electronics, and it has worked with electronics giant NEC to develop specialized computer vision processors. All are critical components for robots. And like Honda, Toyota’s robot and autonomous vehicle programs are sharing sensing, mapping and navigation technologies. And the automotive giant has the added advantage of running a hospital where it can test its robo-nurses. Toyota says the first of them could be in service next year, and their descendants could be working on the moon by 2020. Seriously.

Toyota and Honda aren’t going to stop building cars, but both see a big market for robots. Toyota is so bullish on bots, it sees them becoming a core business by 2020 (.pdf). Some may see these machines as a threat to our jobs, if not our safety — particularly if they’re serving as nurses. The last thing people want is a T-1000 checking their IV drip. But the Japanese seem to be thinking of bots like Astro Boy: loyal creations willing to sacrifice themselves to save their human friends.

Either way, Japan’s biggest automakers are doing what they can to make robots a reality.

Photos: Toyota


Former Seagate CEO Bill Watkins Turns to the Light Side

Bill Watkins might soon have to insert an extra “t” in his last name. The ex-CEO of Seagate is hoping to earn billions in a new venture: reinventing the light bulb.

Watkins today assumed the chief executive position at Bridgelux, a clean-tech company striving to pioneer light-emitting diode (LED) technology in a streamlined package to catalyze widespread adoption of the energy-saving light.

“We think of lights right now as old eight-tracks,” Watkins said in an interview with Wired.com. “Just as people digitized music we’re going to digitize the light.”

Watkins served as CEO of Seagate, one of the world’s largest storage companies, for five years before he was removed from his post in 2009. Often described as famously outspoken, Watkins was once quoted by Fortune as stating Seagate was in the business of helping “people buy more crap — and watch porn,” which landed him in a PR mess prior to his ousting.

What drew Watkins to the light industry? The huge, lucrative opportunity presented by energy-saving LEDs during a period of economic and environmental crisis. The global lighting market is estimated to be worth as much as $100 billion. The LED market represents a very small portion of that, estimated to be worth $1.6 billion by 2012 — a space Watkins hopes to dominate.

“The opportunity is phenomenal right now,” Watkins told Wired.com.

In development since the 1960s, LEDs are semiconductor devices that convert electricity to light. They’re also called solid-state lights because they emit light from a solid object, as opposed to the vacuum or gas tube used in traditional incandescent or fluorescent light bulbs.

LED technology offers brighter light with lower energy consumption and a longer overall life than incandescent light sources. The Department of Energy’s goal is to completely replace light bulbs with LEDs over the next 20 years. According to Cree, the current leader in LED technology, LED lighting could eliminate the need to build 133 coal-fired power plants, saving 258 million tons of greenhouse-gas emissions — an energy saving equivalent to powering 12 million American homes a year.

However, general-purpose, residential LEDs are still a relatively small market. We mostly see LEDs used in store signs, computer displays, digital clocks, mobile devices and plenty of other everyday applications. But they have yet to replace the everyday incandescent and fluorescent light bulbs illuminating our homes. General-purpose adoption is stifled by high costs and fragmentation of providers who produce the components for LED systems, Watkins said.

Watkins’ company, Bridgelux, is employing a vertical-integration strategy, in which the company produces and sells all the components needed for a complete LED system. Other LED lighting providers use a more modular approach, so you need to buy components separately.

Bridgelux on Wednesday is beginning to ship its LED Array products, which include all the parts necessary to make them work — substrate layers, optics, lenses, arrays, chips and modules — so clients won’t have to purchase components separately; they’ll get the full product.

“We take all this stuff and give you the finished product,” Watkins said.

The company demonstrated a sample unit to Wired.com containing each of the different configurations. Lighting colors including warm white, natural white and cool white are available, with luminosity options ranging from 400 lumens (lm) to 2,000 lm. Each system contains an array onto which you screw a bulb.

The major benefit of LED? Big energy savings. For example, an LED Array system is capable of emitting 800 lm with just 5 watts. Getting that much light would require 60 watts from an incandescent bulb, Watkins said.
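Those figures translate into luminous efficacy — lumens of light per watt of electricity — which makes the gap explicit. A quick check, using only the numbers quoted above:

```python
# Luminous efficacy implied by Watkins' figures: the same 800 lumens
# from 5 W (LED) versus 60 W (incandescent) is a twelvefold difference.
def efficacy_lm_per_w(lumens, watts):
    """Luminous efficacy in lumens per watt."""
    return lumens / watts

led = efficacy_lm_per_w(800, 5)            # 160 lm/W
incandescent = efficacy_lm_per_w(800, 60)  # ~13.3 lm/W
print(led, round(incandescent, 1), round(led / incandescent, 1))
# prints: 160.0 13.3 12.0
```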

The largest issue remaining for LED technology is cost. Generally, LEDs are dropping in price, but they’re still several times more expensive than traditional lights. Watkins declined to disclose exact figures on the cost of Bridgelux’s LED systems, but he said LED bulbs cost upward of $40 apiece on average, and he expects that to be driven down to $10 per bulb in the next few years.

Watkins added that Bridgelux’s vertical strategy could make it easier not only to purchase LED technology, but also to more wisely evaluate the cost of implementing the technology (because everything you need to buy is included in a single unit). That could help expedite the growth of the LED market while driving prices down, so we could eventually see these devices in hardware stores and homes.

Photos: James Merithew/Wired.com


Suspension of Disbelief: Cannondale’s ‘Smart’ Bike

Cannondale has pulled out the old-fashioned mechanical suspension inside its famous Lefty front-fork and stuffed in some electronics. The internal skunkworks project, called Simon, uses accelerometers and electromagnets to give a fast-responding, almost infinitely adjustable suspension.

Still a prototype, Simon essentially computerizes the ride of your bike. You can dial in various stiffnesses using a joystick and small computer up top on the handlebars — a softer ride for downhills and a rock-hard, stiff ride when on the smooth asphalt. This can, of course, be done with mechanical suspension, but introducing electronics makes things faster and smarter.

The electromagnets that control where and how much the fork can move act almost instantly, in around six milliseconds, and allow the fork to collapse from its maximum length right down to zero. These are informed by the accelerometers – from Analog Devices – and this is where the magic comes in. For instance, you could have the suspension dialed-up to act like a solid road fork. If you were to hit a bump in the road, though, the accelerometer would detect this before you even feel it and soften things up, allowing the bike to completely cushion the impact and return you to a good hard ride before you realize there was a pothole.
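The bump response described above amounts to a threshold rule: when the accelerometer reports a shock above some level, drop the damping, then return to the rider’s preset. The sketch below is a hypothetical illustration of that logic; the thresholds and values are assumptions, not Cannondale’s firmware.

```python
# Hypothetical sketch of Simon-style suspension logic: a shock above the
# threshold momentarily softens the fork; otherwise hold the rider's preset.
def damping_for(accel_g, preset=0.9, soft=0.2, shock_threshold=2.0):
    """Return a damping factor from 0.0 (fully soft) to 1.0 (rigid)."""
    return soft if abs(accel_g) > shock_threshold else preset

samples = [0.1, 0.3, 4.5, 0.2]           # g-forces; 4.5 g is the pothole
print([damping_for(a) for a in samples])  # prints [0.9, 0.9, 0.2, 0.9]
```

In the real fork this decision would run inside the roughly six-millisecond actuation window, which is why the rider never feels the bump.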

What’s more, the response curves can be tuned to push back just how you like it, and you can switch configurations at the flick of a thumb. The Simon does add some weight, but as you are also tossing out a tube full of mechanical components, the net weight gain is just a couple of pounds (and this is a prototype, so that should improve). The other problem is battery life. Riding on the road you should be good for all day, but if you’re riding hard down a mountainside you could be out of juice in as little as two hours. Still, an extra battery pack could nestle next to the energy bar and the “emergency” hip-flask in your jersey pocket.

No pricing yet, nor even a launch date, but Cannondale seems pretty stoked about this tech, so we expect to see something soon. To see a somewhat in-depth and nerdy demo of the tech (the kind of demo we like), watch this video from bike blog Cycling Dirt:


First Functional Molecular Transistor Comes Alive

Nearly 62 years after researchers at Bell Labs demonstrated the first functional transistor, scientists say they have made another major breakthrough.

Researchers showed the first functional transistor made from a single molecule. The transistor, which has a benzene molecule attached to gold contacts, could behave just like a silicon transistor.

The molecule’s different energy states can be manipulated by varying the voltage applied to it through the contacts. And by manipulating the energy states, researchers were able to control the current passing through it.

The transistor, a semiconductor device that can amplify or switch electrical signals, was originally developed to replace vacuum tubes. On Dec. 23, 1947, John Bardeen and Walter Brattain (who’d built on research by colleague William Shockley) showed a working transistor that was the culmination of more than a decade’s worth of effort.

Vacuum tubes were bulky and unreliable, and they consumed too much power. Silicon transistors addressed those problems and ushered in an era of compact, portable electronics.

Now molecular transistors could enable the next step: nanomachines in which just a few atoms perform complex calculations, making it possible to build massively parallel computers.

The team, which includes researchers from Yale University and the Gwangju Institute of Science and Technology in South Korea, published their findings in the Dec. 24 issue of the journal Nature.

For about two decades — since Mark Reed, a professor of engineering and applied science at Yale, showed that individual molecules could be trapped between electrical contacts — researchers have been trying to create a functional molecular transistor.

Some of the challenges they have faced include being able to fabricate the electrical contacts on such small scales, identifying the molecules to use, and figuring out where to place them and how to connect them to the contacts.

“There were a lot of technological advances and understanding we built up over many years to make this happen,” says Reed.

Despite the significance of the latest breakthrough, practical applications such as smaller and faster molecular computers could be decades away, says Reed.

“We’re not about to create the next generation of integrated circuits,” he says. “But after many years of work gearing up to this, we have fulfilled a decade-long quest and shown that molecules can act as transistors.”

Photo: A benzene molecule can be manipulated to act as a traditional transistor
Courtesy: Hyunwook Song and Takhee Lee


World Map Etched on a Tiny Silicon Chip

Researchers at Ghent University in Belgium have etched a tiny world map — at a scale of one to one trillion — onto an optical silicon chip. They reduced the earth’s 25,000-mile circumference at the equator down to 40 micrometers, about half the width of a human hair, to fit it on the chip.
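The one-to-a-trillion claim checks out against the numbers in the article:

```python
# Verify the scale claim: shrinking the equator (~25,000 miles) down to
# 40 micrometers is a reduction of about one trillion to one.
equator_m = 25000 * 1609.344   # miles to meters
chip_m = 40e-6                 # 40 micrometers in meters
scale = equator_m / chip_m
print(f"{scale:.2e}")          # prints 1.01e+12
```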

The map sits in a corner of a chip designed for a project at the university’s Photonics Research Group.

The idea is to successfully demonstrate scale reduction so complex optical functions can be included in a single chip. Such a chip could find applications in telecommunications, high-speed computing, biotechnology and health care.

The world map was defined on a silicon photonics test chip using 200mm wafer processing. The smallest features resolved on the map are about 100 nanometers. Fabrication was a 30-step process involving four layers of differing thicknesses, each of which had to be created separately.

Photonics involves the generation, modulation, transmission and processing of light. Silicon photonics is an emerging area of research that integrates optical circuits onto a small chip. Light can be manipulated on a submicrometer scale in tiny strips of silicon called photonic wires. These silicon photonic circuits can pack a million times more components than the glass-based photonics currently available, say the researchers.

The circuits on the chip carrying the world map were used to demonstrate photonic wires with the lowest propagation losses to date.

Photo: The small world as seen through an optical microscope. The different colors are caused by interference effects in the different layer thicknesses of the silicon (Photonics Research Group at Ghent University)


Gestural Computing Breakthrough Turns LCD Into a Big Sensor

Some smart students at MIT have figured out how to turn a typical LCD into a low-cost, 3-D gestural computing system.

Users can touch the screen to activate controls on the display, but as soon as they lift their fingers off the screen, the system can interpret their gestures in the third dimension, too. In effect, it turns the whole display into a giant sensor capable of telling where your hands are and how far from the screen they are.

“The goal with this is to be able to incorporate the gestural display into a thin LCD device like a cell phone and to be able to do it without wearing gloves or anything like that,” says Matthew Hirsch, a doctoral candidate at the MIT Media Lab who helped develop the system. MIT will present the idea at the SIGGRAPH conference on Dec. 19.

The latest gestural interface system is interesting because it has the potential to be produced commercially, says Daniel Wigdor, a user experience architect for Microsoft.

“Research systems in the past put thousands of dollars worth of camera equipment around the room to detect gestures and show it to users,” he says. “What’s exciting about MIT’s latest system is that it is starting to move towards a form factor where you can actually imagine a deployment.”

Gesture recognition is the area of user interface research that tries to translate movement of the hand into on-screen commands. The idea is to simplify the way we interact with computers and make the process more natural. That means you could wave your hand to scroll pages, or just point a finger at the screen to drag windows around.

MIT has become a hotbed for researchers working in the area of gestural computing. Last year, an MIT researcher showed a wearable gesture interface called the ‘SixthSense’ that recognizes basic hand movements.

But most existing systems involve expensive cameras or require you to wear different-colored tracking tags on your fingers. Some systems use small cameras that can be embedded into the display to capture gestural information. But even with embedded cameras, the drawback is that the cameras are offset from the center of the screen and won’t work well at short distances. They also can’t switch effortlessly between gestural commands (waving your hands in the air) and touchscreen commands (actually touching the screen).

The latest MIT system uses an array of optical sensors that are arranged right behind a grid of liquid crystals, similar to those used in LCD displays. The sensors can capture the image of a finger when it is pressed against the screen. But as the finger moves away the image gets blurred.

By displacing the layer of optical sensors slightly relative to the liquid-crystal array, the researchers can modulate the light reaching the sensors and use it to capture depth information, among other things.

In this case, the liquid crystals serve as a lens and help generate a black-and-white pattern that lets light through to the sensors. That pattern alternates rapidly with whatever image the LCD is displaying, so quickly that the viewer doesn’t notice it.

The pattern also allows the system to decode the images better, capturing the same depth information that a pinhole array would, but doing it much more quickly, say the MIT researchers.

The idea is so novel that MIT researchers haven’t been able to get LCDs with built-in optical sensors to test, though they say companies such as Sharp and Planar have plans to produce them soon.

For now, Hirsch and his colleagues at MIT have mocked up a display in the lab to run their experiments. The mockup uses a camera that is placed some distance from the screen to record the images that pass through the blocks of black-and-white squares.

The bi-directional screens from MIT can be manufactured in a thin, portable package that requires few additional components compared with LCD screens already in production, says MIT. (See video below for an explanation of how it works.)

Despite the ease of production, it will be five to ten years before such a system could make it into the hands of consumers, cautions Microsoft’s Wigdor. Even with the hardware in hand, it’ll take at least that long before companies like Microsoft make software that can make use of gestures.

“The software experience for gestural interface systems is unexplored in the commercial space,” says Wigdor.

Photo/Video: MIT