Super-Sized Memory Could Fit Into Tiny Chips


North Carolina State University engineers have created a new material that could allow a fingernail-sized chip to store the equivalent of 20 high-definition DVDs or 250 million pages of text — fifty times the capacity of current memory chips.

“Instead of making a chip that stores 20 gigabytes, we have created a prototype that can [potentially] handle one terabyte,” says Jagdish Narayan, a professor of materials science and engineering at NC State. That’s at least fifty times the capacity of the best current DRAM (Dynamic Random Access Memory) systems.

The key to the breakthrough is selective doping, the process by which an impurity is added to a material to change its properties. The researchers added nickel, a metal, to magnesium oxide, a ceramic. The result has clusters of nickel atoms no bigger than 10 square nanometers that can store data. Assuming a 7-nanometer magnetic nanodot could store one bit of information, this technique would enable storage density of more than 10 trillion bits per square inch, says Narayan.
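Narayan's density figure checks out with simple arithmetic. In the sketch below, treating each bit's footprint as a 7 nm by 7 nm square is our simplifying assumption; one bit per nanodot then works out to more than 10 trillion bits per square inch:

```python
# Back-of-the-envelope check of the claimed storage density.
NM_PER_INCH = 25.4e6        # 1 inch = 25.4 mm = 25,400,000 nm
DOT_PITCH_NM = 7.0          # assume one bit per 7 nm x 7 nm nanodot footprint

bits_per_sq_inch = NM_PER_INCH ** 2 / DOT_PITCH_NM ** 2
print(f"{bits_per_sq_inch:.1e} bits per square inch")  # ~1.3e13, i.e. 13 trillion
```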

Expanding current memory systems is a hot topic of research. At the University of California Berkeley, Ting Xu, an assistant professor of materials science, has also developed a way to guide the self-assembly of nano-sized elements in precise patterns. Xu is trying to extend the technique to create paper-thin, printable solar cells and ultra-small electronic devices.

Other researchers have developed a carbon nanotube-based technique for storing data that could potentially last more than a billion years, thereby improving on the lifespan of storage.

A big challenge for Narayan and his team, who have been working on the topic for more than five years, was the creation of nanodots that can be aligned precisely.

“We need to be able to control the orientation of each nanodot,” says Narayan, “because any information that you store in it has to be read quickly and exactly the same way.” Earlier, the researchers could make only single-layer structures, and 3-D self-assembly of nanodots wasn’t possible. But using pulsed lasers, they have been able to achieve greater control over the process.

Unlike many research breakthroughs, Narayan says, his team’s work is ready to go into manufacturing in just about a year or two. And memory systems based on doped nanodots won’t be significantly more expensive than current systems.

“We haven’t scaled up our prototype but we don’t think it should cost a lot more to do this commercially,” he says. “The key is to find someone to start on the large-scale manufacturing process.”


Photo: RAM (redjar/Flickr)


Cheetah, Gecko and Spiders Inspire Robotic Designs


A cheetah can run faster than any other animal. A gecko’s feet can stick to almost any surface without using liquids or surface tension. And some roaches scurry at nearly 50 body lengths per second, which, scaled up to human size, works out to around 200 miles per hour.


The wonders of the animal kingdom are not just for fans of National Geographic. Robotic designer Sangbae Kim, a professor at the Massachusetts Institute of Technology, is trying to understand how he can take some of the mechanisms animals use and replicate them in robots.

The animal kingdom provides the best ideas for creating mobile robots, says Kim. Locomotion and movement are the core parts of an animal’s life. “Animals have to find food, shelter; move towards water or away from a predator,” he says.

“Moving is one of their biggest functions, and they do it very well. That’s why ideas from nature are very important for a robotic designer like me.”

Mechanical design derived from biological models is something Kim has been working on for years, first at Stanford University and now at MIT. The simplification and adaptation of the fundamental design principles seen in animals has led to the creation of his bio-inspired robots.

Among the robots Kim and his team have designed are the Stickybot, a robot with foot pads based on a gecko’s feet, and iSprawl, a robot whose motion is inspired by cockroaches.

Kim’s latest project is a robot inspired by the cheetah. The idea is to build a prototype robot from a lightweight carbon-fiber-foam composite that can run at the cheetah’s speed of 70 miles per hour.

It’s an ambitious project. Current wheeled robots are efficient, but they can be slow on rough terrain. For instance, iRobot’s PackBot, which is used by the U.S. military, can only travel at speeds of up to 5.8 miles per hour.

“Most wheeled robots today can do very well on flat surfaces, but they are slow,” says Kim. That’s why he’s looking to the cheetah for ideas. The cheetah has an extremely flexible backbone that gives extra speed or force to its running motion.

Over the next 18 months, Kim and four MIT graduate students will start building and testing prototypes. The first step will be to create a computer model to calculate the optimal limb length, weight, gait and torque of the hip and knee joints.

The biggest challenge in this project won’t be the structure, but getting enough power from a motor to get to the desired speed quickly, says Kim.

Photo: Sangbae Kim with Stickybot

Before the robotic cheetah came Stickybot, a mechanical lizard-like robot that takes its inspiration from the gecko. Geckos can climb walls at almost the same speed at which they run on the ground, about 1 meter per second. This remarkable ability makes the gecko the perfect animal to draw upon for a climbing robot, says Kim.

The secret to the gecko’s agility is that it uses a phenomenon called directional adhesion, or stickiness in just one direction, to adhere to walls.

“The gecko’s feet can detach very easily as it moves forward,” says Kim. “If you take normal sticky tape and press it to the wall, you will find it is tough to detach it quickly. Directional adhesion solves that problem.”

The pads of a gecko’s feet are covered with tiny hairs, called setae and spatulae, that can be as little as one-thousandth the width of a human hair. The hairs cling to surfaces using molecular interactions known as the van der Waals force, which helps support the gecko’s weight as it scrambles up vertical surfaces.

Kim has tried to recreate that idea for the Stickybot. The Stickybot’s feet are covered with hairs made of silicone rubber. These hairs are much thicker than those on a gecko’s foot, however, which limits the robot’s abilities: it can only climb extremely smooth surfaces such as glass, acrylic or a whiteboard.

Kim says his team is working on refining the Stickybot so that it can adapt to climbing on walls with uneven textures.

If the Stickybot can be improved, there are plenty of applications for it, such as repairing underwater oil pipelines or even washing windows.


Scarab: A Roomba For The Mean City Streets


Could street-cleaners someday be replaced by robots? Olga Kalugina thinks so, and has designed the Scarab, an oversized, outdoor Roomba, to do it.

The Scarab would first be deployed in shopping malls where it could easily cruise, clean and polish the smooth floors, but we see a day when robots scour the sidewalks for trash and keep our streets sparkling clean.

Looking like a giant vacuum cleaner, the Scarab uses a pair of webcams to seek out mess and then brushes the trash into an internal tank, which it can empty by itself. It also has a grabber arm to pick up larger items (discarded Slurpee cups, for example) and runs on electricity rather than the combustion engines of many manually operated street-sweepers.

The big problem, though, is that while a Roomba is safe inside your house, Kalugina’s concept design would be out amongst ranks of terrifying teenagers, bent on teasing the poor machine or even just kicking its face in. Stick this out into the real mean city streets and you’d lose the entire fleet in days, stolen and repurposed or just sold. No, a real street-smart robot would need some kind of defense. A taser, perhaps, or at the very least an electrified shell.

And there you have it. The perfect street-cleaning robot would in fact be R2-D2. We welcome the future.

Product page [Coroflot via Treehugger]


Video: Swift, Indestructible Cockroach-Robots. The End Is Nigh

DASH is a cheap, featherweight robot based on a cockroach. And like the cockroach, it is both quick and almost indestructible.

Dynamic Autonomous Sprawled Hexapod (we’re sure the name was made to fit the acronym) is made from cardboard laminated with a flexible polymer using a 3D printer. Because it weighs just 16 grams, it can survive falls from almost any height, and a single DC motor inside the rectangular body is cleverly hooked up to the six legs so that they spin together like the oars of a boat. Thus the row-bot skitters across the floor in a spookily insectoid manner at 1.5 meters per second, or 15 times its own body length. That’s like me crawling along at more than 90 feet per second.
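That comparison is easy to reproduce: 1.5 m/s at 15 body lengths per second implies a 10 cm body, and scaling the same relative speed to a 6-foot (1.83 m) person, an assumed height for the comparison, gives roughly the figure quoted:

```python
# Scale DASH's relative speed (body lengths per second) to human size.
dash_speed_m_s = 1.5
body_length_m = 0.10                          # 1.5 m/s = 15 body lengths/s
lengths_per_s = dash_speed_m_s / body_length_m

human_height_m = 1.83                         # assumed ~6-foot person
human_speed_ft_s = lengths_per_s * human_height_m * 3.28084
print(round(human_speed_ft_s))                # ~90 feet per second
```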

The DASH, a design from the Biomimetic Millisystems Lab at the University of California, Berkeley, will perhaps morph into a stiffer, more powerful carbon-fiber version. All we know is that the end of days is nigh. Equip a swarm of these with lasers and it’s all over for mankind. For best effect, listen to the chillingly HAL 9000-like voice of the video’s narrator along with Brian Eno’s 2001 album Drawn From Life. Shiver.

DASH: Resilient High-Speed 16-gram Hexapedal Robot [YouTube via the Giz]


Inside the Nobel Prize: How a CCD Works


This year’s Nobel Prize in Physics has been awarded, with the inventors of the CCD getting recognition for the invention that enabled modern digital photography. It has taken a while: Whilst the invention took just one hour, the prize took 40 years to arrive.

The true fathers of digital photography, Willard S. Boyle and George E. Smith, invented the CCD, or Charge-Coupled Device, while working at Bell Laboratories, New Jersey. What will surprise you is that this invention was made way back in 1969, when everybody else was looking Moon-ward. The CCD was the first practical way to let a light-sensitive silicon chip store an image and then digitize it. In short, it is the basis of today’s digital camera.

The CCD was based on “charge bubbles”, an idea inspired by another project going on at Bell Laboratories at the same time. The sensor is made up of pixels, each of which is a MOS (metal-oxide semiconductor) capacitor. As light falls on a pixel, photons striking the silicon knock electrons out of place, freeing them. This is the photoelectric effect, the same phenomenon that makes solar power possible. On a CCD, these freed electrons are stored in a “bucket”: the pixel’s capacitor.

At this stage, the “image” is still in analog form, with the charge on each pixel (the number of electrons in its bucket) directly corresponding to the amount of light that has hit it. The genius of Boyle and Smith’s CCD was in how that stored information is read out.

Essentially, the charge in each row is moved from one site to the next, a step at a time. This has been likened to a “bucket brigade”: a human chain passing buckets of water down a line. As the buckets of electrons reach the end of the line, they are dumped out and measured, and each analog measurement is turned into a digital value. The result is a digital grid that describes the image.
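The bucket-brigade readout is simple enough to model. The toy sketch below (our illustration, not the actual CCD circuitry) shifts each row's charge packets one site at a time toward the output, digitizing each packet as it falls off the end:

```python
# Toy model of CCD readout: packets of charge march toward the output
# one site per clock step, and each is digitized as it exits the row.
def read_out_row(charges):
    row = list(charges)
    digitized = []
    for _ in range(len(row)):
        digitized.append(round(row[0]))  # measure the packet at the output end
        row = row[1:] + [0]              # shift the rest one site toward the output
    return digitized

sensor_row = [0.2, 3.7, 1.1]             # analog charge collected by three pixels
print(read_out_row(sensor_row))          # [0, 4, 1]
```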

The image from a CCD is black and white, but by placing a red, green or blue colored filter over the top of each pixel, color information can be read directly from each pixel, though only for one primary color per pixel. Software then interpolates the two missing color values at each pixel from the brightness of neighboring pixels, so that each pixel winds up with its own red, green and blue values. If you ever wondered just what a RAW file is, it is the “raw” color data from the chip before any of this post-processing has been done. Cameras usually do all of the processing for you and spit out the result as a JPEG. With the RAW file you actually have all the original sensor data, which is much more information-rich.
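That interpolation step, known as demosaicing, can be sketched in miniature. The snippet below uses a deliberately naive scheme, averaging whichever immediate neighbors recorded the wanted color (real cameras use far more sophisticated interpolation), to estimate one missing color value at one pixel:

```python
# Naive demosaic: estimate a missing color at (x, y) by averaging the
# neighboring pixels whose filter happens to be that color.
def estimate_channel(raw, mask, x, y, want):
    vals = [raw[j][i]
            for j in range(max(0, y - 1), min(len(raw), y + 2))
            for i in range(max(0, x - 1), min(len(raw[0]), x + 2))
            if (i, j) != (x, y) and mask[j][i] == want]
    return sum(vals) / len(vals)

mask = [["G", "R"],          # filter color over each pixel (one Bayer-style tile)
        ["B", "G"]]
raw = [[100, 200],           # each pixel recorded brightness for its own color only
       [ 50, 120]]

# Red value at the top-left (green-filtered) pixel, from its lone red neighbor:
print(estimate_channel(raw, mask, 0, 0, "R"))   # 200.0
```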

As an interesting aside, early, primitive patterns for the color filters over the pixels soon gave way to the Bayer pattern still found in almost all sensors today, and developed by Kodak back in 1975.

Today, the CMOS (Complementary Metal Oxide Semiconductor) sensor is becoming more popular, as it reads information directly from each photo-site instead of row by row. It also uses less power, making it better suited to the multi-megapixel chips found in modern cameras. CMOS sensors have also been around since the 1960s, but their complex design, physically larger chips, higher noise and lower sensitivity meant that Boyle and Smith’s CCD triumphed, at least until recently.

But the most amazing thing about the invention is that Boyle and Smith came up with the design so quickly. With Bell Labs threatening to take the funds from their department and transfer the money to bubble memory research, Boyle had to come up with a competing semiconductor design. He got together with Smith and they came up with the idea and sketched it all out on a blackboard in just one hour. Instant photography indeed.

Press release [Nobel Prize]

Photo credit: jurvetson/Flickr


‘DNA Transistor’ Could Revolutionize Genetic Testing


Researchers at IBM have found a way to meld biology and computing to create a new chip that could become the basis for a fast, inexpensive, personal genetic analyzer. The DNA sequencer involves drilling tiny nanometer-size holes through computer-like silicon chips, then passing DNA strands through them to read the information contained in their genetic code.


“We are merging computational biology and nanotechnology skills to produce something that will be very useful to the future of medicine,” Gustavo Stolovitzky, an IBM researcher, told Wired.com.

The “DNA transistor” could make it faster and cheaper to sequence individuals’ complete genomes. In so doing, it could help facilitate advances in bio-medical research and personalized medicine. For instance, having access to a person’s genetic code could help doctors create customized medicine and determine an individual’s predisposition to certain diseases or medical conditions.

Such a device could also reduce the cost of personalized genome analysis to under $1,000. In comparison, the first complete sequencing of a human genome, done by the Human Genome Project, cost about $3 billion when it was finally completed in 2003. Since then, other efforts have attempted to achieve something similar for a much lower cost. Stanford researcher Stephen Quake recently demonstrated the HeliScope Single Molecule Sequencer, which can sequence a human genome in about four weeks at a cost of $1 million. Services such as 23andMe offer DNA testing for much less, but they only do partial scans, identifying markers for specific diseases and genetic traits rather than mapping the entire genome.

Because of the expense, so far only seven individuals’ genomes have been fully sequenced. IBM’s personalized DNA readers, if successful, could extend that privilege to many more people.

“If there’s a chance that this could go behind the counter at hospitals, clinics and someday even into a black bag, then it would change how we approach medicine,” says Richard Doherty, research director at consulting firm Envisioneering Group. “All it would take is a simple test to look at anyone’s genes.”

DNA, or Deoxyribonucleic acid, contains the instructions needed for an organism to develop, survive and reproduce. A gene comprises the set of instructions needed to make a single protein. For humans, the complete genome contains about 20,000 genes on 23 pairs of chromosomes.

IBM scientists hope to change that by taking advantage of current chip-fabrication technology. Researchers took a 200-millimeter silicon wafer and drilled a 3-nanometer-wide hole (known as a nanopore) through it. A nanometer is one-billionth of a meter, or about 100,000 times smaller than the width of a human hair.

The DNA is passed through the nanopore. To control the speed at which it flows through the pore, researchers developed a device with a multilayer metal-and-dielectric structure, says Steve Rossnagel, a researcher at IBM’s Watson lab in New York.

This metal-dielectric structure holds the nanopore. A modulated electric field between the metal layers traps the DNA in the nanopore. Since the molecule is easily ionized, voltage drops across the nanopore help “pull” the DNA through. By cyclically turning these gate voltages on and off, scientists can move the DNA through the pore at a rate of one nucleotide per voltage cycle, a rate the researchers believe would make the DNA readable. IBM hasn’t specified how fast a strand of DNA can be read, though researchers say a fully functional device could sequence the entire genome in “hours.”
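IBM hasn't published a cycle rate, but the "hours" claim is easy to sanity-check under an assumed rate. Taking the human genome at roughly 3 billion bases and a hypothetical one nucleotide per microsecond (both the rate and the single-pore setup are our assumptions, not IBM's figures):

```python
# Sanity check: time to ratchet a whole genome through one nanopore.
genome_bases = 3e9        # human genome, roughly 3 billion base pairs
cycle_rate_hz = 1e6       # hypothetical: one nucleotide per microsecond

seconds = genome_bases / cycle_rate_hz
print(seconds / 3600)     # under an hour at this assumed rate, for a single pore
```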

Ultimately, several such nanopores could run in parallel on a chip to create a complete genomic analyzer.

Though researchers have figured out the basics, it could still take up to three years to get a working prototype. The challenge now is to slow and control the motion of the DNA through the hole so the reader can accurately decode what is in the DNA.

They also need to determine exactly how the DNA will be decoded when it passes through the nanopore. It’s an area of “intense research” within and outside of IBM, says Stolovitzky. One way to do it would be to measure the electrical properties of the different DNA bases such as capacitance and conductivity.

“This is knowledge that most people would like to have,” says Rossnagel. “If we could have a big enough database of human genomes then you can see the interplay of genetics. That would change how we approach medicine.”

Top Photo: A cross section of IBM’s DNA Transistor, simulated on the Blue Gene supercomputer, shows a single-stranded DNA molecule moving amid invisible water molecules through the nanopore. (IBM)


In the video below, IBM researchers explain how they came up with the idea for the DNA Transistor. The video includes an animated simulation showing a DNA strand moving through the nanopore.


Gyrowheel, Or How To Teach Your Kid to Ride In One Afternoon

When a kid learns to ride a bike, it normally goes like this: First, training wheels. These keep you upright, but they make it impossible to bank into turns, and they stop the bike from handling like the lean-able two-wheeler it really is. They do, however, build confidence.

Second, the “dad sessions”. This involves dad running along behind the now training wheel-free bike and holding the saddle. He lets go for increasingly longer moments until the kid doesn’t wobble anymore. Kid rides off down the street, looks back to see dad 100 yards away, kid falls off. Repeat.

But take a look at the Gyrowheel, which promises to let kids learn on their own, and in as little as half an hour. It replaces the front wheel of a small child’s bike and has a spinning disc inside. Powered for up to three hours by a rechargeable battery, the disc spins fast enough to keep the wheel upright, even when joined to a child-bearing bike. If you have ever taken a standard bike wheel off a bike and spun it fast, you’ll know the effect: it can be balanced on one finger under one side of the axle, and if you try to turn it by the axle with both hands, it twists off in a completely different direction.

As you can see in the demo video shot at this year’s Interbike show, the wheel has an almost spooky ability to resist even kicks and shoves. We are especially creeped out by the bike when it makes its way across the show floor, riderless but upright. And check out the slow-motion fall when the wheel rolls to a halt and ever-so-gently lets itself down.

The speed of the internal disc can be adjusted to alter the gyroscopic effect, letting you turn it down as the kid gets better at riding. And because of gyroscopic precession, you actually get the stability normally felt at high speed at the low, puttering speeds at which scared kids will actually ride. And did we mention the glowing light inside? Although it is there to indicate the battery’s charge and power status, it is in fact baby’s first low-rider accessory, giving a whirling rim-light to his or her sweet, sweet ride.

We love the idea, especially as the bike stays a two-wheeler and will handle accordingly. The only downside is that kids can learn to ride in an afternoon, meaning that it might be better to rent these than to sell them. The 12-inch Gyrowheel should be in US shops on December 1st, with the 16-inch arriving soon after. International rollout (ahem) should be complete by the end of 2010. Prices are still a secret. Rumors that the company will be making a hydraulic kit to let the bike jump up and down are completely made up.

Product page [Gyrobike via Bike Commuters]


20 Years of Moving Atoms, One by One

Sometimes genius looks like an elegant equation written in chalk on a blackboard. Sometimes it’s a hodgepodge of wires, canisters and aluminum-foil-wrapped hoses, all held together by shiny bolts.

Despite its homebrew appearance, this device, a scanning tunneling microscope, is one of the most extraordinary lab instruments of the last three decades. It can pick up individual atoms one by one and move them around to create supersmall structures, a fundamental requirement for nanotechnology.

Twenty years ago this week, on Sept. 28, 1989, an IBM physicist, Don Eigler, became the first person to manipulate and position individual atoms. Less than two months later, he arranged 35 xenon atoms to spell out the letters IBM. Writing those three characters took about 22 hours. Today, the process would take about 15 minutes.

“We wanted to show we could position atoms in a way that’s very similar to how a child builds with Lego blocks,” says Eigler, who works at IBM’s Almaden Research Center. “You take the blocks where you want them to go.”

Eigler’s breakthrough has big implications for computer science. For instance, researchers are looking to build smaller and smaller electronic devices. They hope, someday, to engineer these devices from the ground up, on a nanometer scale.

“The ability to manipulate atoms, build structures of our own, design and explore their functionality has changed people’s outlook in many ways,” says Eigler. “It has been identified as one of the starting moments of nanotech because of the access it gave us to atoms, even though no product has come out of it.”

On the 20th anniversary of Eigler’s achievement, we look at the science, art and implications of moving individual atoms.


Androids Dance, Slide and Fight at Robo-One Competition


Gladiatorial matches between bipedal humanoid robots are just one of the reasons to get excited about Robo-One, an event held last weekend in Toyama City, Japan.

This year’s event showed some interesting new robots such as a thought-controlled robot, a robot that can flip its head back so you can ride it, and a mini-Gundam robot.

Check out these videos of the robots that kicked up a storm at Robo-One. Got any other great videos or photos from the event? Let us know in the comments.

The Omni Zero 9

Takeshi Maeda is known to robot lovers as the man who designed the red, bipedal Omni Zero robots. Maeda showed the latest version, the Omni Zero 9, at Robo-One. It’s an eerily humanoid robot that can autonomously walk a few steps. Among its most stunning features is its ability to lie flat on the ground and roll up a ramp using the two wheels that make up its shoulders, kind of like a slow, mechanical Jean-Yves Blondeau. It’s a sight worth watching!

The Omni Zero 9 also competed at the Robo-One Championship, as shown in the following video:

The robot’s head also flips back so if you are small enough and brave enough to sit in the gap, you can actually ride the robot. If you are wondering how big the robot is, then here are the stats: The Omni Zero 9 is just about 3.4 feet tall and weighs 55 lbs. The robot won one of the three prizes at the championship.

Thought-controlled robot

Brain interfaces are becoming popular among videogamers who use electrodes hooked up to their skulls to control the movement of characters on the screen.

Taku Ichikawa, a fourth-year student at the University of Electro-Communications in Tokyo, is trying to do something similar with a robot. Ichikawa uses 12 electrodes to measure his neural activity, which in turn issues commands via a wireless connection to a robot that is about 20 inches tall and weighs 4.4 lbs.

Ichikawa’s robot can perform three types of movement: walking forward, rotating right and using its single arm for stabbing attacks, says Japanese newspaper Mainichi Daily News. The thought-to-action process is not instantaneous though. It takes a total of about 1.5 seconds for the robot to begin doing what Ichikawa is thinking.


Gallery: Tablet Computing From 1888 to 2010


The word “tablet” used to refer to a flat slab for bearing an inscription. Leave it to the tech industry to make it into something far more complicated and confusing.

Scores of products marketed as “tablets” have come and gone, and now — with rumors of imminent tablet computers from Apple, Dell, Microsoft and others — the category seems ripe for a rebound.

“If people can figure out a new device category that consumers will want to buy that isn’t a laptop or a phone, that opens a whole new possibility in markets to conquer,” explains Michael Gartenberg, a tech strategist with Interpret. “That’s why companies continue to invest in this space, and we have a large number of bodies that are littered in this space.”

Let’s take a look at tablets past, present and future. If the upcoming tablets are to succeed, they’ll need to learn from hideous mistakes like the Apple Newton and the Tablet PC.

Origins
The origins of the tablet computer can be traced as far back as the 19th century. Electrical engineer Elisha Gray registered an 1888 patent (.pdf) describing an electrical-stylus device for capturing handwriting. Famous for his contributions to the development of the telephone, Gray conceived his “tablet” not as a drawing device, but as a way of using telegraph technology to transmit handwritten messages. (Think of it as a primitive form of instant messaging or e-mail.)

Gray’s concept wasn’t merely a flat slab. His patent depicts two instruments: a transmitter and a receiver. The transmitter is a pen-like device connected to two electric circuits acting as interrupters. Interruptions in the current translate the transmitter pen’s movements into signals, which are sent to the receiver pen so it can mimic those movements, reproducing the message on a piece of paper.

This description hardly sounds anything like a tablet, but later electronic-handwriting-recognition patents built from the idea of transmitting and receiving instruments, eventually combining them into one slab-shaped device like the tablets we see today.

The Apple Newton
The Newton MessagePad (above) was the first attempt by a major computer company to produce a commercial tablet-type computer for the mass market. Weighing in at about two pounds, Apple’s 1993 foray into tablet computing sported a powerful-for-its-time 20-MHz processor and a pen-centric interface. Handwriting recognition in the first version was so bad that it was famously mocked in a Doonesbury cartoon, and though it subsequently improved, the Newton never recovered from the initial PR blow. In 1998, Apple discontinued the Newton when Steve Jobs retook the helm as CEO, leaving a small coterie of true believers to keep the product’s memory alive.

PDAs and Smartphones
While no one refers to their iPhone as a “pocket tablet,” these devices are an important stage in the development of tablet computers.

Palm founder Jeff Hawkins learned from Apple’s mistakes and set out to build a pocket-sized computer that was smaller, cheaper, more modest in its ambitions and ultimately more useful than the Newton. He succeeded wildly with the 1996 launch of the Palm Pilot, spawning a long line of pen-based personal digital assistants from Palm, HP, Dell and others.

When Apple returned to the touchscreen world with the iPhone in 2007, it showed that it had paid close attention during the decade since the Newton flopped. The iPhone was simple, small, elegant and did a handful of things — make calls, browse the web, handle e-mail — very well. The fact that it wasn’t an all-purpose portable computer didn’t seem to matter so much compared to its usability and design.

Graphics tablets

Graphics tablets are computer input devices with a stylus-controlled interface. The technologies used vary, but generally all graphics tablets use the received signal to determine the horizontal and vertical position of the stylus, the distance of the stylus from the tablet surface, and the tilt (vertical angle) of the stylus. Popular among digital illustrators, tablets offer a natural way to create computer graphics, especially 2-D illustrations.

Given their specialty, graphics tablets fill a niche for digital artists. Some consumer applications include writing Chinese, Japanese or Korean characters with handwriting-recognition software to transfer them onto the computer. The stylus can also be used as a mouse.

However, for other languages, including English, the majority of consumers prefer typing on a keyboard for speedier writing, according to Gartenberg. Thus, the graphics tablet fills a niche in the design industry, but it is not a major product category in the consumer market. Wacom is the most prominent manufacturer producing graphics tablets today. (Example above: Wacom Bamboo Fun)