Roadrunner Supercomputer Gets Decommissioned

Supercomputers are incredible pieces of technology, with thousands of processors, terabytes of memory, and often hundreds of terabytes of storage. Back in 2008, the fastest supercomputer in the world, called Roadrunner, was installed at Los Alamos National Laboratory. At the time, it was the first computer ever to break the petaflop barrier.


A petaflop is one quadrillion (a million billion) calculations per second. That's no longer considered especially fast, and the aging supercomputer was officially decommissioned yesterday after five years in operation. Roadrunner was a hybrid supercomputer featuring 6,480 dual-core AMD Opteron processors, each core paired with a PowerXCell 8i chip, an enhanced version of the Cell processor originally designed for the PlayStation 3.
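
For scale, one petaflop is 10^15 floating-point operations per second. A quick back-of-envelope sketch in Python (the 50-gigaflop desktop figure is an illustrative assumption, not a benchmark):

```python
# What a petaflop means in concrete terms (illustrative figures only).
PETAFLOP = 10**15            # 10^15 floating-point operations per second
DESKTOP_FLOPS = 50 * 10**9   # assume a ~50 GFLOPS desktop of the era

# How long an ordinary machine would need to match one second of
# petaflop-class computation:
seconds = PETAFLOP / DESKTOP_FLOPS
print(f"{seconds:,.0f} seconds, about {seconds / 3600:.1f} hours")  # 20,000 s, ~5.6 h
```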


One of the biggest projects Roadrunner was used for was the National Nuclear Security Administration's Advanced Simulation and Computing program, for which it provided key computer simulations to the Stockpile Stewardship Program supporting the US nuclear deterrent.

While the computer has been officially decommissioned, for the next month it will be used for experiments on operating system memory compression techniques that could help shape the design of future capacity cluster computers. After that month, Roadrunner will be taken apart.
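
Los Alamos hasn't published the experiment code, but the core idea of OS memory compression is easy to illustrate: trade CPU cycles for effective RAM capacity by compressing cold pages in place. A toy sketch, with zlib standing in for the much faster codecs a real kernel would use:

```python
import os
import zlib

# Toy sketch of the idea behind OS memory-page compression: trade CPU
# cycles for effective RAM capacity by compressing cold pages in place.
# Real kernels use much faster codecs and a dedicated compressed pool;
# zlib just makes the trade-off visible.

PAGE_SIZE = 4096

def compress_page(page: bytes) -> bytes:
    assert len(page) == PAGE_SIZE
    return zlib.compress(page, 1)  # level 1: favor speed over ratio

# A mostly-empty page compresses extremely well...
zero_page = bytes(PAGE_SIZE)
print(f"zero page:   {PAGE_SIZE} -> {len(compress_page(zero_page))} bytes")

# ...while high-entropy data barely shrinks, so a real system must
# decide per page whether compression pays for itself.
random_page = os.urandom(PAGE_SIZE)
print(f"random page: {PAGE_SIZE} -> {len(compress_page(random_page))} bytes")
```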

Why don’t they just put it up on eBay?

World’s first petaflop supercomputer gets decommissioned

IBM has been producing some of the best-performing supercomputers in the world for a number of years. In fact, back in 2008 IBM developed and launched a supercomputer called Roadrunner, the first machine able to operate at sustained petaflop performance.


The computer was installed at the Los Alamos National Laboratory, where it has been in use for the last five years. Yesterday, the laboratory officially decommissioned Roadrunner. The supercomputer has 12,960 IBM PowerXCell 8i processors and 6,480 AMD Opteron dual-core processors.

Those processors shared 114 TB of memory and about 1.09 million TB of storage. The supercomputer isn't being completely dismantled just yet; researchers will continue to utilize the machine and its impressive power for various experiments, including determining methods for compressing operating system memory and optimizing data routing.

With Roadrunner decommissioned from its production duties, scientists and other researchers can now use the computer for experiments that couldn't have been run while it was carrying its regular research workload. The computer is housed in 6,000 square feet of space and cost $125 million to build. Roadrunner may not be fast enough for the scientists and researchers at Los Alamos, but it is still incredibly fast, sitting at number 22 on the list of the world's most powerful supercomputers. It is also power-hungry, drawing 2,345 kW when running full tilt; modern supercomputers need significantly less power to achieve significantly more performance.

[via PCMag]


World’s first petaflop supercomputer gets decommissioned is written by Shane McGlaun & originally posted on SlashGear.

IBM Roadrunner retires from the supercomputer race


For all the money and effort poured into supercomputers, their lifespans can be brutally short. See IBM’s Roadrunner as a textbook example: the 116,640-core cluster was smashing records just five years ago, and yet it’s already considered so behind the times that Los Alamos National Laboratory is taking it out of action today. Don’t mourn too much for the one-time legend, however. The blend of Opteron and Cell processors proved instrumental to understanding energy flow in weapons while also advancing the studies of HIV, nanowires and the known universe. Roadrunner should even be useful in its last gasps, as researchers will have a month to experiment with the system’s data routing and OS memory compression before it’s dismantled in earnest. It’s true that the supercomputer has been eclipsed by cheaper, faster or greener competitors, including its reborn Cray arch-nemesis — but there’s no question that we’ll have learned from Roadrunner’s brief moment in the spotlight.


Via: NBC

Source: Los Alamos National Laboratory

University of Illinois’ Blue Waters supercomputer now running around the clock


Things got a tad hairy for the University of Illinois at Urbana-Champaign’s Blue Waters supercomputer when IBM halted work on it in 2011, but with funding from the National Science Foundation, the one-petaflop system is now crunching numbers 24/7. The behemoth resides within the National Center for Supercomputing Applications (NCSA) and is composed of 237 Cray XE6 cabinets and 32 of the XK7 variety. NVIDIA GK110 Kepler GPU accelerators line the inside of the machine and are flanked by 22,640 compute nodes, which each pack two AMD 6276 Interlagos processors clocked at 2.3 GHz or higher. At its peak performance, the rig can churn out 11.61 quadrillion calculations per second. According to the NCSA, all that horsepower earns Blue Waters the title of the most powerful supercomputer on a university campus. Now that it’s cranking away around the clock, it’ll be used in projects investigating everything from how viruses infect cells to weather prediction.


Source: National Center for Supercomputing Applications

NVIDIA-powered computers break Pi calculation record

Yesterday was Pi Day, and to celebrate the yearly occasion, you no doubt tried your hardest to recite Pi to as many decimal places as you could. Most of us probably didn't get past the first few, but one person went much, much further, thanks to a set of computers powered by a handful of NVIDIA graphics cards.


Santa Clara University researcher Ed Karrels broke the world record for Pi computation, calculating Pi out to the eight-quadrillionth place to the right of the decimal point, a feat that relies on digit-extraction methods that compute digits at a given position directly, since actually storing eight quadrillion digits would be impractical. Karrels used graphics cards to do the work rather than CPUs, spreading the job across three sets of machines: one computer with four NVIDIA GTX 690 cards, one with two NVIDIA GTX 680 cards, and 24 computers at the Santa Clara University Design Center with one NVIDIA GTX 570 card each.

The calculation took 35 days to complete, from December 19 to January 22, beating out the previous record, held by a team at Yahoo that used 1,000 CPU-only computers and took 23 days to reach the two-quadrillionth place, just a quarter of what Karrels's setup achieved. After the 35-day run, Karrels conducted a second run to double-check the math, which took just 26 days using newer versions of his programming tools.
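
Karrels's GPU code isn't public, but position-addressable Pi records like this one and Yahoo's rest on digit-extraction formulas such as Bailey-Borwein-Plouffe (BBP), which produce hexadecimal digits of Pi at an arbitrary offset without computing the digits before them; every position's work is independent, which is why the job splits so cleanly across many GPUs. A minimal CPU-only sketch of the technique:

```python
# Minimal CPU sketch of BBP-style digit extraction: hex digits of Pi
# starting at fractional position n, without the digits before them.
# (Karrels's actual GPU implementation is not public; this only shows
# the underlying technique. Double precision limits this toy to modest
# n; record attempts need far more careful arithmetic.)

def bbp_series(j: int, n: int, tail_terms: int = 25) -> float:
    """Fractional part of sum over k of 16^(n-k) / (8k + j)."""
    s = 0.0
    for k in range(n + 1):                      # head: exact via modular exponentiation
        s += pow(16, n - k, 8 * k + j) / (8 * k + j)
        s -= int(s)
    for k in range(n + 1, n + 1 + tail_terms):  # tail: rapidly vanishing float terms
        s += 16.0 ** (n - k) / (8 * k + j)
    return s - int(s)

def pi_hex_digits(n: int, count: int = 8) -> str:
    """A few hex digits of Pi's fractional part, starting at offset n."""
    x = (4 * bbp_series(1, n) - 2 * bbp_series(4, n)
         - bbp_series(5, n) - bbp_series(6, n)) % 1.0
    digits = []
    for _ in range(count):
        x *= 16
        d = int(x)
        digits.append("0123456789abcdef"[d])
        x -= d
    return "".join(digits)

print(pi_hex_digits(0))  # 243f6a88..., the first hex digits after Pi's point
```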

Karrels will speak at the GPU Technology Conference in San Jose, California next Tuesday, where he'll explain the math behind the Pi calculation achievement, the programming tricks he used, and the logistics of conducting supercomputing tasks on a budget.


NVIDIA-powered computers break Pi calculation record is written by Craig Lloyd & originally posted on SlashGear.

Watson ponders careers in cooking, drug research as IBM makes it earn its keep


While mad game show skills are nice and all, IBM has started to nudge Watson toward the door to begin paying its own freight. After a recent foray into finance, the publicity-loving supercomputer has now brought its number-crunching prowess to the pharmaceutical and pastry industries, according to the New York Times. If the latter sounds like a stretch for a hunk of silicon, it actually isn’t: researchers trained Watson with food chemistry data, flavor popularity studies and 20,000 recipes — all of which will culminate in a tasting of the bot’s freshly devised “Spanish Crescent” recipe. Watson was also put to work at GlaxoSmithKline, where it came up with 15 potential compounds as possible anti-malarial drugs after being fed all known literature and data on the disease. So far, Watson projects haven’t made Big Blue much cash, but the company hopes that similar AI ventures might see its prodigal child finally pay back all those years of training.


Source: New York Times

NVIDIA unveils GTX Titan GPU with supercomputer performance

Remember the Titan supercomputer? Back in November, it became the world’s fastest supercomputer, and it’s powered by NVIDIA chips. Now you can get a piece of Titan in your own home because NVIDIA has announced the GTX Titan graphics card, a $1,000 GPU that sports 2,688 CUDA cores, 6GB of GDDR5 RAM, and 7.1 billion transistors.


NVIDIA says that the new GTX Titan graphics card is “powered by the fastest GPU on the planet,” which we certainly can’t refute at this point. The graphics card itself is huge, measuring in at 10.5 inches long, and it’s capable of pushing 4,500 gigaflops, which is quite impressive if we do say so ourselves.
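
That 4,500-gigaflop figure follows directly from the shader count and clock: each CUDA core can retire one fused multiply-add (two floating-point operations) per cycle. A quick check, where the 837 MHz figure is our assumption for the card's base clock:

```python
# Theoretical single-precision peak: cores x clock x 2 (one fused
# multiply-add, i.e. two floating-point ops, per core per cycle).
cuda_cores = 2688
clock_ghz = 0.837   # assumed base clock; adjust if the shipping clock differs
ops_per_cycle = 2

gflops = cuda_cores * clock_ghz * ops_per_cycle
print(f"{gflops:,.0f} GFLOPS")  # ~4,500, matching the quoted figure
```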


However, the GTX Titan falls just a tad short of NVIDIA’s current top-tier offering, the dual-GPU GTX 690, as far as raw computing power is concerned, but efficiency is where the Titan really shines. The GTX Titan features over a thousand more CUDA cores than either of the GTX 690’s two GPUs, yet it requires less power, generates less heat, and runs quieter overall.

As far as availability goes, NVIDIA says the Titan GPU will be available starting on February 25 from various partners, including ASUS, EVGA, Gigabyte, and MSI, at a price of around $1,000. That certainly doesn’t invite an impulse purchase, but if you’re looking for supercomputer-like speeds from your gaming rig, this card may be well worth it.


NVIDIA unveils GTX Titan GPU with supercomputer performance is written by Craig Lloyd & originally posted on SlashGear.

Eurora supercomputer cuts back on CO2 emissions with help of NVIDIA GPUs

The world obviously needs its supercomputers, but with a growing energy crisis, efficiency is becoming a big priority. Supercomputers require a lot of power, and that’s an issue Eurotech and NVIDIA are trying to solve with the new Eurora computer. Not only is this beast powerful, but NVIDIA has announced that it’s also breaking efficiency records for supercomputers.


In an NVIDIA blog post, the company explains that Eurora managed to reach “3,150 megaflops per watt of sustained performance,” which just so happens to beat the efficiency of the highest-ranked supercomputer on the Green500 list by 26%. That isn’t bad at all, and it’s thanks to Eurotech’s Aurora hot water cooling system.
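
The 26% figure also tells us what the previous Green500 leader must have managed; the quick check below just rearranges the numbers in the article:

```python
# If 3,150 MFLOPS/W beats the current Green500 leader by 26%, the
# leader's efficiency is implied by simple division.
eurora = 3150                   # MFLOPS per watt, from NVIDIA's post
implied_leader = eurora / 1.26
print(f"implied previous best: {implied_leader:.0f} MFLOPS/W")  # ~2,500
```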

By using hot water cooling, the Eurora doesn’t need to be cooled by air conditioning, obviously saving on energy costs. NVIDIA describes the benefit of using hot water by pointing out that it can be “re-purposed to heat buildings or drive absorption chillers, and then returned back to the supercomputer at a cooler temperature.” The Eurora is equipped with 64 compute nodes, which are each made up of two Intel Xeon E5-series CPUs and two NVIDIA Tesla K20 GPU accelerators. Since the water cooling system allows Eurotech to save space in the Eurora, NVIDIA says it’s able to fit 256 CPUs and GPUs into a single rack.


While the water cooling system helped Eurora meet its efficiency goals, it didn’t do all the work alone. The Tesla K20 GPUs are also quite efficient, with NVIDIA pointing out that they’re four times more efficient than x86 CPUs. Eurora made its way to the Cineca supercomputer facility in Bologna, Italy this week, with NVIDIA and Eurotech predicting that it will save 2.5 million kilowatt-hours and eliminate 1,500 tons of CO2 emissions over the course of its 5-year life. That, ladies and gentlemen, is one efficient machine.
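
The two projections are mutually consistent under a typical grid emission factor; the check below derives the implied factor from the article's own numbers rather than any external source:

```python
# 1,500 tons of CO2 avoided against 2.5 million kWh saved implies the
# grid emission factor behind the projection.
kwh_saved = 2_500_000
tons_co2 = 1_500

kg_per_kwh = tons_co2 * 1000 / kwh_saved
print(f"implied grid intensity: {kg_per_kwh:.2f} kg CO2/kWh")  # 0.60, a plausible grid average
```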



Eurora supercomputer cuts back on CO2 emissions with help of NVIDIA GPUs is written by Eric Abent & originally posted on SlashGear.

IBM’s Watson heading to its first university

IBM’s famous Watson supercomputer is making its way to the classroom after its appearance on Jeopardy! a while back. IBM announced today that it will build another Watson system and give it to Rensselaer Polytechnic Institute, making it the first university to receive a Watson supercomputer; other universities are slated to get their own in the future.


Rensselaer will receive the Watson system thanks to a grant that allows the university to invest more resources in research and development of big data, analytics, and cognitive computing. In return, however, IBM is asking the university to send its findings back so that the company can improve Watson even more.

Rensselaer’s private Watson supercomputer will have 15 terabytes of storage, which is actually more than even the Jeopardy! version had. The room housing Watson will allow 20 people at a time to work inside, including faculty, graduate students, and a few undergraduate students.

So what will the supercomputer be used for at the university? Artificial intelligence researchers at Rensselaer want to improve Watson’s mathematical ability and help it figure out the meaning of newer words. They also want to improve the computer’s ability to analyze all of the images, videos, and emails floating around on the internet, something that will prove to be no easy task for the folks at the university.


IBM’s Watson heading to its first university is written by Craig Lloyd & originally posted on SlashGear.

Stanford seizes 1 million processing cores to study supersonic noise


In short order, the Sequoia supercomputer and its 1.57 million processing cores will transition to a life of top-secret analysis at the National Nuclear Security Administration, but until that day comes, researchers are currently working to ensure its seamless operation. Most recently, a team from Stanford took the helm of Sequoia to run computational fluid dynamics simulations — a process that requires a finely tuned balance of computation, memory and communication components — in order to better understand engine noise from supersonic jets. As an encouraging sign, the team was able to successfully push the CFD simulation beyond 1 million cores, which is a first of its kind and bodes very well for the scalability of the system. This and other tests are currently being performed on Sequoia as part of its “shakeout” period, which allows its caretakers to better understand the capabilities of the IBM BlueGene/Q computer. Should all go well, Sequoia is scheduled to begin a life of government work in March. In the meantime, you’ll find a couple views of the setup after the break.
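
The article doesn't detail the solver's scaling model, but the usual reason a CFD code can keep gaining at a million cores is weak scaling: the mesh grows with the machine, so what matters is the serial fraction of the scaled workload. Gustafson's law captures this; the serial fractions below are illustrative assumptions, not Sequoia measurements:

```python
# Gustafson's law: speedup on N cores when the problem size grows with
# the machine and a fraction `alpha` of the scaled run stays serial.
#   S(N) = N - alpha * (N - 1)
# The alpha values are illustrative assumptions, not Sequoia data.

def gustafson_speedup(cores: int, alpha: float) -> float:
    return cores - alpha * (cores - 1)

for alpha in (1e-3, 1e-5, 1e-7):
    s = gustafson_speedup(1_000_000, alpha)
    print(f"serial fraction {alpha:.0e}: effective speedup {s:,.0f}x")
```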



Via: TechCrunch, EurekAlert

Source: Stanford