TITAN sees unprecedented demand for supercomputing science projects

Today the folks at Oak Ridge National Laboratory, NVIDIA, and Cray brought on the next generation of accelerated computing with more than just a renaming of the Jaguar supercomputer: they’ve integrated NVIDIA’s GPU solutions throughout. The update turns the machine, now called Titan, into the flagship accelerated computing system – the flagship for the whole world, that is. This is now a 200-cabinet Cray XK7 supercomputer with 18,688 compute nodes, each pairing a 16-core AMD Opteron with an NVIDIA Tesla K20 GPU, enough to change the way we work.

This project is a next-level teaming of the Cray XK7, billed as the “most scalable supercomputer” on the planet, with the NVIDIA Tesla K20 GPU, aka the “world’s fastest accelerator.” The combination brings CUDA and OpenACC programming and new features that expand programmability far beyond what’s been available before, and the NVIDIA GPUs now deliver 3x higher performance per watt. That means one whole heck of a lot less power consumed for the same tasks as before.
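
To give a sense of what that directive-based programming looks like, here’s a minimal OpenACC sketch in C. It’s illustrative only: a stock SAXPY loop, not code from Titan, and the array size is arbitrary.

```c
#include <stdio.h>
#include <stdlib.h>

/* SAXPY (y = a*x + y), the stock OpenACC demo. The single pragma asks
 * an OpenACC-aware compiler (e.g. "nvc -acc") to offload the loop to an
 * attached accelerator such as a Tesla K20; without such a compiler the
 * pragma is ignored and the loop simply runs on the CPU. */
static void saxpy(int n, float a, const float *restrict x, float *restrict y)
{
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    int n = 1 << 20;                 /* 1M elements, chosen arbitrarily */
    float *x = malloc(n * sizeof *x);
    float *y = malloc(n * sizeof *y);
    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy(n, 3.0f, x, y);
    printf("y[0] = %.1f (expect 5.0)\n", y[0]);

    free(x);
    free(y);
    return 0;
}
```

That portability is much of the appeal: the same annotated loop compiles for a laptop CPU or for a rack of K20s.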

This supercomputer is currently in the acceptance process for a series of scientific applications. The program that surrounds it is made to expand the access research groups have to supercomputing, judging each application individually and granting time based on percentages allotted to each of the following fields: Plasma, Nuclear, Materials, Engineering, Earth Science, Computer Science, Chemical Science, Biology, and Astrophysics. This is all done through the US Department of Energy’s INCITE program: Innovative and Novel Computational Impact on Theory and Experiment.

This program has seen a record number of proposals, with demand approximately three times larger than the time the machine can actually supply.

This also brings Jaguar – again, now called Titan – up to a whole new set of specifications. The compute nodes remain the same at 18,688, but the login and I/O nodes double from 256 to 512. Memory per node goes from 16GB to 32GB of CPU memory plus 6GB of GPU memory, the number of Opteron cores jumps from 224,256 to 299,008, and total system memory rises from 300TB to 710TB. With the addition of 18,688 NVIDIA K20 Kepler accelerators, the beast’s former peak performance of 2.3 petaflops is dwarfed by its current peak of more than 20 petaflops.
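
Most of those headline figures are simple products of the per-node numbers, and they check out. A quick sanity check in C, using only the figures quoted above:

```c
#include <stdio.h>

/* Sanity-check the Titan spec sheet quoted above. Every input comes
 * straight from the article; nothing here is measured. */
int main(void)
{
    long nodes          = 18688;   /* compute nodes            */
    long cores_per_node = 16;      /* Opteron cores per node   */
    long cpu_gb         = 32;      /* CPU memory per node (GB) */
    long gpu_gb         = 6;       /* K20 memory per node (GB) */

    printf("Opteron cores: %ld\n", nodes * cores_per_node);    /* 299,008 */
    printf("System memory: %ld GB (~%ld TB)\n",
           nodes * (cpu_gb + gpu_gb),                          /* 710,144 */
           nodes * (cpu_gb + gpu_gb) / 1000);                  /* ~710 TB */
    return 0;
}
```

The 20+ petaflop peak, by contrast, depends on the per-accelerator flop rating, which the announcement doesn’t break out.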

TITAN sees unprecedented demand for supercomputing science projects is written by Chris Burns & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.


Titan supercomputer goes live with potent CPU/GPU tag team

The Titan supercomputer at the Oak Ridge National Laboratory has been upgraded, tackling complex climate change calculations with 20 petaflops’ worth of new processors. Under the (considerable) hood it’s NVIDIA’s “Kepler” GPUs and AMD Opteron 6274 processors doing the heavy lifting, though NVIDIA can’t resist pointing out that its graphics chips are in fact carrying 90 percent of the overall load. The GPUs, more commonly found powering gaming rigs, help make Titan “the world’s fastest supercomputer for open scientific research.”

That research will include simulating physical systems such as weather patterns, along with progress in energy, climate change, efficient engines, materials, and other fields. However, unlike most supercomputers, where access is jealously guarded, Titan takes a more open approach.

Researchers from schools and universities, government labs, and private industry can access Titan – by arrangement, of course – to crunch their own data. Final testing is still underway by the laboratory and Cray, and the supercomputer’s first year will be dominated by work on the Department of Energy’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program.

“The improvements in simulation fidelity will accelerate progress in a wide range of research areas such as alternative energy and energy efficiency, the identification and development of novel and useful materials and the opportunity for more advanced climate projections,” James Hack, director of ORNL’s National Center for Computational Sciences, said of the new machine.

In total, there are 299,008 CPU cores, sixteen for each of the 18,688 nodes; each node also has an NVIDIA Tesla K20 graphics accelerator. The CPU cores guide the simulations, while the GPUs do the actual data crunching; altogether, Titan is more than 10x faster and 5x more power efficient than the Jaguar supercomputer it replaces.

In fact, Titan can simulate one to five years of a modeled system per day of computing time, whereas Jaguar took a day to work through around three months’ worth. ORNL says it’s the equivalent of “the world’s 7 billion people being able to carry out 3 million calculations per second.”
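
That analogy lines up neatly with the 20-plus-petaflop figure quoted elsewhere; the arithmetic, spelled out in C:

```c
#include <stdio.h>

/* ORNL's analogy, checked: 7 billion people each carrying out
 * 3 million calculations per second. */
int main(void)
{
    double people  = 7e9;               /* world population        */
    double calc_ps = 3e6;               /* calculations per second */
    double total   = people * calc_ps;  /* 2.1e16 ops/s            */

    printf("%.1f petaflops\n", total / 1e15);  /* prints 21.0 */
    return 0;
}
```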


Titan supercomputer goes live with potent CPU/GPU tag team is written by Chris Davies & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.


Cray’s Jaguar supercomputer upgraded with NVIDIA Tesla GPUs, renamed Titan

Cray’s Jaguar (or XK7) supercomputer at Oak Ridge National Laboratory has been loaded up with the first shipping NVIDIA Tesla K20 GPUs and renamed Titan. Loaded with 18,688 of the Kepler-based K20s, Titan’s peak performance is more than 20 petaflops. Sure, the machine has as many 16-core AMD Opteron 6274 processors as it does GPUs, but the Tesla hardware packs 90 percent of the entire processing punch. Titan is roughly ten times faster and five times more energy efficient than it was before the name change, yet it fits into the same 200 cabinets as its predecessor. Now that it’s complete, the rig will analyze data and create simulations for scientific projects on topics ranging from climate change to nuclear energy. The hardware behind Titan isn’t meant to power your gaming sessions, but NVIDIA says lessons learned from supercomputer GPU development trickle back down to consumer-grade cards.

Cray’s Jaguar supercomputer upgraded with NVIDIA Tesla GPUs, renamed Titan originally appeared on Engadget on Mon, 29 Oct 2012 03:01:00 EDT.

Math Has Never Looked as Pretty as This [Image Cache]

When you were at high school, math was probably an uninspiring string of algebra you had to crunch through. Get to the cutting edge of computational fluid dynamics, though, and it all starts to look a hell of a lot prettier.

Alt-week 10.6.12: supercomputers on the moon, hear the Earth sing and the future of sports commentary

Alt-week peels back the covers on some of the more curious sci-tech stories from the last seven days.

Normally we try to encourage you to join us around the warm alt-week campfire by teasing you about what diverse and exotic internet nuggets we have for you inside. Sadly, this week that’s not the case. There’s nothing for you here, we’re afraid. Not unless you like totally mind-blowing space videos, singing planets, and AI / sports commentary-flavored cocktails, that is. Oh, you do? Well what do you know! Come on in… this is alt-week.

Alt-week 10.6.12: supercomputers on the moon, hear the Earth sing and the future of sports commentary originally appeared on Engadget on Sat, 06 Oct 2012 17:00:00 EDT.

Insert Coin: The Parallella project dreams of $99 supercomputers

In Insert Coin, we look at an exciting new tech project that requires funding before it can hit production. If you’d like to pitch a project, please send us a tip with “Insert Coin” as the subject line.

Parallel computing is normally reserved for supercomputers way out of the reach of average users, at least for the moment. Adapteva wants to challenge that with its Parallella project, designed to bring mouth-watering power to a board similar in size to the Raspberry Pi for as little as $99. It hopes to deliver up to 45GHz (in total) using its Epiphany multicore accelerators, which, crucially, chug only 5 watts of juice under normal conditions. Such speeds currently mean high costs, which is why Adapteva needs your funds to move out of the prototype stage and start cheap mass production. Specs for the board are as follows: a dual-core ARM A9 CPU running Ubuntu as standard, 1GB of RAM, a microSD slot, two USB 2.0 ports, HDMI, Ethernet, and a 16- or 64-core accelerator, with each core housing a 1GHz RISC processor, all linked “within a single shared memory architecture.”

An overriding theme of the Parallella project is the openness of the platform. When finalized, the full board design will be released, and each one will ship with free, open-source development tools and runtime libraries. In addition, full architecture and SDK documentation will be published online if and when the Kickstarter project reaches its funding goal of $750,000. That’s pretty ambitious, but we’re reminded of another crowd-funded venture which completely destroyed an even larger target. However, that sum will only be enough for Adapteva to produce the 16-core board, which reportedly hits 13GHz and 26 gigaflops, and is expected to set you back a measly $99. A speculative $3 million upper goal has been set for work to begin on the $199 64-core version, topping out at 45GHz and 90 gigaflops. Pledge options range from $99 to $5,000-plus, distinguished mainly by how soon you’ll get your hands on one. Big spenders will also be the first to receive a 64-core board when they become available. Adapteva’s Andreas Olofsson talks through the Parallella project in a video, and if you’re already sold on the tiny supercomputer, head over to the Kickstarter page to contribute before the October 27th closing date.

Insert Coin: The Parallella project dreams of $99 supercomputers originally appeared on Engadget on Fri, 28 Sep 2012 12:19:00 EDT.

IBM’s Mira supercomputer tasked with simulating an entire universe in a fortnight

A universe that only exists in the mind of a supercomputer sounds a little far-fetched, but one is going to come to life at the Argonne National Laboratory in October. A team of cosmologists is using IBM’s Blue Gene/Q “Mira” supercomputer, the third fastest in the world, to run a simulation through the first 13 billion years after the big bang. It’ll work by tracking the movement of trillions of particles as they collide and interact with each other, forming structures that could then transform into galaxies. As the project’s only scheduled to last a fortnight, we’re hoping it doesn’t create any sentient characters clamoring for extra life; we’ve seen Blade Runner enough times to know it won’t end well.
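
For a taste of what “tracking the movement of trillions of particles” means computationally, here’s a toy gravity step in C. To be clear, this is a sketch with made-up units and a particle count trillions of times smaller than Mira’s; real cosmology codes avoid the naive O(N²) direct summation below in favor of tree and mesh methods.

```c
#include <math.h>
#include <stdio.h>

#define N   1024    /* toy particle count; Mira tracks trillions      */
#define G   1.0     /* gravitational constant in arbitrary code units */
#define DT  0.01    /* time step                                      */
#define EPS 1e-3    /* softening length, avoids division blow-ups     */

static double px[N], py[N], pz[N];   /* positions  */
static double vx[N], vy[N], vz[N];   /* velocities */

/* One direct-summation gravity step over unit-mass particles. */
static void step(void)
{
    for (int i = 0; i < N; i++) {
        double ax = 0, ay = 0, az = 0;
        for (int j = 0; j < N; j++) {
            if (j == i) continue;
            double dx = px[j] - px[i];
            double dy = py[j] - py[i];
            double dz = pz[j] - pz[i];
            double r2 = dx * dx + dy * dy + dz * dz + EPS * EPS;
            double inv_r3 = 1.0 / (r2 * sqrt(r2));
            ax += G * dx * inv_r3;
            ay += G * dy * inv_r3;
            az += G * dz * inv_r3;
        }
        vx[i] += ax * DT; vy[i] += ay * DT; vz[i] += az * DT;
    }
    for (int i = 0; i < N; i++) {
        px[i] += vx[i] * DT; py[i] += vy[i] * DT; pz[i] += vz[i] * DT;
    }
}

int main(void)
{
    for (int i = 0; i < N; i++) {            /* crude starting lattice */
        px[i] = i % 16; py[i] = (i / 16) % 16; pz[i] = i / 256;
    }
    for (int t = 0; t < 10; t++)
        step();
    printf("particle 0 drifted to (%.3f, %.3f, %.3f)\n", px[0], py[0], pz[0]);
    return 0;
}
```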

IBM’s Mira supercomputer tasked with simulating an entire universe in a fortnight originally appeared on Engadget on Wed, 26 Sep 2012 21:17:00 EDT.

Supercomputer built from Raspberry Pi and Lego, managed by humans rather than Minifigs

If you’re a computational engineer, there’s no question about what you do with the Raspberry Pi: you make a supercomputer cluster. Researchers at the University of Southampton have followed their instincts and built Iridis-Pi, a tiny 64-node cluster based on the Raspberry Pi’s usual Debian Wheezy distribution and linked through Ethernet. While no one would mistake any one Raspberry Pi for a powerhouse, the sheer number of networked devices gives the design both some computing grunt and 1TB worth of storage in SD cards. Going so small also leads to some truly uncommon rackmounting: team lead Simon Cox and his son James grouped the entire array in two towers of Lego, which likely makes it the most adorable compute cluster you’ll ever see. There are instructions to help build your own Iridis-Pi on the university’s site, and the best part is that it won’t require a university-level budget to run. Crafting the exact system you see here costs under £2,500 ($4,026), or less than a grown-up supercomputer’s energy bill.
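
Clusters like this are typically programmed with MPI, one process per Raspberry Pi, talking over that Ethernet link. As an illustration (not the Southampton team’s actual code), here’s a minimal MPI program in C in which every node integrates a slice of pi and rank 0 collects the total:

```c
#include <mpi.h>
#include <stdio.h>

/* Toy cluster job: estimate pi by integrating 4/(1+x^2) over [0,1],
 * with the slices of the interval spread across every rank. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const long steps = 10000000;
    const double h = 1.0 / steps;
    double local = 0.0;

    /* Each rank handles every size-th slice (a round-robin split). */
    for (long i = rank; i < steps; i += size) {
        double x = (i + 0.5) * h;
        local += 4.0 / (1.0 + x * x);
    }
    local *= h;

    double pi = 0.0;
    MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("pi ~= %.10f across %d ranks\n", pi, size);

    MPI_Finalize();
    return 0;
}
```

Built with mpicc and launched with something like “mpirun -np 64 --hostfile nodes ./pi”, this would put one rank on each of the 64 boards.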

Supercomputer built from Raspberry Pi and Lego, managed by humans rather than Minifigs originally appeared on Engadget on Thu, 13 Sep 2012 14:03:00 EDT.

Future Supercomputers Will Be Powered By Your Phone [Science]

Cluster computing, the concept of stringing together devices to act as a single processing unit, isn’t a new idea. But soon your phone could be acting as a node in just such a device, helping to crack tough computational problems.

IBM pushing System z, Power7+ chips as high as 5.5GHz, mainframes get mightier

Ten-core, 2.4GHz Xeons? Pshaw. IBM is used to the kind of clock speeds and brute-force power that lead to Europe-dominating supercomputers. Big Blue has no intention of letting its guard down when it unveils its next-generation processors at the upcoming Hot Chips conference: the company is teasing that the “zNext” chip at the heart of a future System z mainframe will ramp up to 5.5GHz, faster than the still-speedy 5.2GHz z196 that has led IBM’s pack since 2010. For those who don’t need quite that big a sledgehammer, the technology veteran is hinting that its upcoming Power7+ processors will be up to 20 percent faster than the long-serving Power7, whose current 4.14GHz peak clock rate may soon seem quaint. We’ll know just how much those extra cycles matter when IBM takes to the conference podium on August 29th, but it’s safe to say that our databases and large-scale simulations won’t know what hit them.

IBM pushing System z, Power7+ chips as high as 5.5GHz, mainframes get mightier originally appeared on Engadget on Sat, 04 Aug 2012 02:17:00 EDT.
