NVIDIA-powered Titan becomes world’s fastest Supercomputer

It’s been revealed this morning that the Titan supercomputer is not just an impressive beast in its own right – it’s now officially the fastest on the planet. According to the TOP500 list update released this morning at the SC12 supercomputing conference, the NVIDIA Tesla K20 GPU-accelerated Titan has become the fastest supercomputer on Earth, outpacing the rest of the field by a massive margin. Titan packs 18,688 NVIDIA Tesla K20X GPU accelerators, and here near the end of 2012 it has unseated the previous record holder: Lawrence Livermore National Laboratory’s Sequoia system.

The performance record this beast now holds is a 17.59-petaflop mark as measured by Linpack, the benchmark used to rank machines of all sizes, from supercomputers all the way down to smartphones (GPUs packed inside as well). Titan makes this massive stride into the future with the Tesla K20X accelerator, the “flagship of NVIDIA’s accelerated computing line.” NVIDIA notes that this new solution provides “the highest computing performance ever available in a single processor.”
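
For a sense of what Linpack actually measures, here’s a minimal sketch of the same idea in Python: time a dense linear solve, then convert the standard HPL operation count into a flops figure. This is purely illustrative – a toy stand-in, not the real HPL benchmark.

```python
# Minimal Linpack-style throughput estimate: time a dense solve and
# apply HPL's nominal operation count of (2/3)*n^3 + 2*n^2 flops.
import time
import numpy as np

n = 4096
A = np.random.rand(n, n)
b = np.random.rand(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)          # LU factorization plus triangular solves
elapsed = time.perf_counter() - start

flops = (2 / 3) * n**3 + 2 * n**2
print(f"{flops / elapsed / 1e9:.2f} GFLOPS on an n={n} system")
```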

NVIDIA’s claims are backed by two more benchmark results: 3.95 teraflops of single-precision and 1.31 teraflops of double-precision peak floating point performance – beastly. Those figures come from a dual-socket Intel Xeon E5-2687W (3.10GHz) testbed, with the GPU results adding two Tesla K20X accelerators. NVIDIA also notes that the same family of processors includes the K20 (without the X), which has busted out 3.52 teraflops of single-precision and 1.17 teraflops of double-precision peak performance.
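
Those peak numbers fall straight out of the published specs: peak flops is simply execution units × clock × two flops per cycle (one fused multiply-add). A quick Python sanity check using the widely reported K20X (2,688 CUDA cores at 732MHz) and K20 (2,496 cores at 706MHz) figures:

```python
# Peak throughput = units x clock (GHz) x flops per cycle per unit.
def peak_tflops(units, clock_ghz, flops_per_cycle=2):
    return units * clock_ghz * flops_per_cycle / 1000

print(peak_tflops(2688, 0.732))       # ~3.93 TFLOPS single precision (K20X)
print(peak_tflops(2496, 0.706))       # ~3.52 TFLOPS single precision (K20)
# GK110 pairs one FP64 unit with every three FP32 cores:
print(peak_tflops(2688 // 3, 0.732))  # ~1.31 TFLOPS double precision (K20X)
print(peak_tflops(2496 // 3, 0.706))  # ~1.17 TFLOPS double precision (K20)
```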

The Tesla K20X and K20 GPU accelerators have delivered more than 30 petaflops of performance over the past 30 days – that’s big. It’s so big, in fact, that it’s equivalent to the computational performance of the top 10 fastest supercomputers from 2011 combined.

In addition to powering the fastest system, the Tesla K20X GPU accelerator is – so says NVIDIA – three times more energy efficient than previous-generation GPU accelerators. Titan has achieved 2,142.77 megaflops of performance per watt, surpassing the previously most energy-efficient supercomputer on the planet as well, according to the official Green500 list.
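
The Green500 math is straightforward: sustained Linpack flops divided by total power draw. Plugging Titan’s 17.59 petaflops against its reported draw of roughly 8,209kW (the TOP500 figure) reproduces the quoted efficiency:

```python
# Green500 efficiency = sustained Linpack flops / total power draw.
rmax_flops = 17.59e15                  # 17.59 petaflops sustained
power_watts = 8209e3                   # Titan's reported ~8.2MW draw
print(rmax_flops / power_watts / 1e6)  # ~2142.8 megaflops per watt
```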



NVIDIA-powered Titan becomes world’s fastest Supercomputer is written by Chris Burns & originally posted on SlashGear.


ARM chief tosses Moore’s Law out with the trash, says efficiency rules all


ARM CEO Warren East already has a tendency to be more than a bit outspoken on the future of computing, and he just escalated the war of words with an assault on the industry’s sacred cow: Moore’s Law. After some prompting by MIT Technology Review during a chat, East argued that power efficiency is “actually what matters,” whether it’s a phone or a server farm. Making ever more complex and power-hungry processors to obey Moore’s Law just limits how many chips you can fit in a given space, he said. Not that the executive is about to accept Intel’s position that ARM isn’t meant for performance, as he saw the architecture scaling to high speeds whenever there was a large enough power supply to back it up. East’s talk is a bit long on theory and short on practice as of today — a Samsung Chromebook isn’t going to make Gordon Moore have second thoughts — but it’s food for thought in an era where ARM is growing fast, and even Microsoft isn’t convinced that speed rules everything.


ARM chief tosses Moore’s Law out with the trash, says efficiency rules all originally appeared on Engadget on Fri, 09 Nov 2012 20:27:00 EDT.

Source: MIT Technology Review

Intel launches 8-core Itanium 9500, teases Xeon E7-linked Kittson


Intel’s Itanium processor launches are few and far between given that only so many customers need its specialized grunt, but that just makes any refresh so much larger — and its new Itanium 9500 certainly exemplifies that kind of jump. The chip centers on the much more up-to-date 32-nanometer Poulson architecture, which doubles the cores to eight, hikes the interconnect speeds and supports as much as 2TB of RAM for very (very, very) large tasks. With the help of an error-resistant buffer, Intel sees the 9500 being as much as 2.4 times as fast as the Tukwila-era design it’s replacing. The new Itanium also ramps the clock speeds to a relatively brisk 1.73GHz to 2.53GHz, although there will be definite costs for server builders wanting to move up: the shipping roster starts at $1,350 per chip in bulk and climbs to an eye-watering $4,650 for the fastest example.

Anyone worried that Poulson might be the end of the road for Intel’s EPIC-based platform will also be glad to get a brief reminder that Itanium will soldier on. The next iteration, nicknamed Kittson, will be framed around a modular design that shares traces of silicon and the processor socket with the more familiar Xeon E7. Intel casts it as a pragmatic step that narrows its server-oriented processors down to a common motherboard and should be cheaper to make. It’s likely that we’ll have to be very patient for more details on Kittson given the long intervals between Itanium revamps, but fence-sitting IT pros may just be glad that they won’t have to consider jumping ship for a while yet.



Intel launches 8-core Itanium 9500, teases Xeon E7-linked Kittson originally appeared on Engadget on Thu, 08 Nov 2012 18:41:00 EDT.

Source: Intel

If Apple can ditch Intel, it will

The Apple rumor-mill is cyclical, and one tale refuses to die: Apple ousting Intel from its MacBooks, and replacing x86 chips with ARM-based alternatives. The story surfaces periodically, just as it has done today, with chatter of increasing “confidence” within Apple’s engineering teams that Intel will eventually be ditched in favor of the company’s own A-series SoCs as currently found within the iPad and iPhone. Not today, so the whispers go, but eventually, and what’s most interesting is that we’re likely already seeing the signs of the transition in Apple’s newest models.

Apple has arguably pushed tablet processors as far as they need to go, at least for today’s market. There’s a sense that the Apple A6X chipset in the latest, fourth-generation iPad with Retina display was a near-meaningless improvement on the A5X its predecessor sported; far more important was the change from the old-style Dock Connector to the new Lightning port. Sure, the newest iPad is faster in benchmarks, but in day-to-day use there’s hardly a noticeable difference.

Those benchmarks give some hints, however, as to where ARM chips might make sense on the desktop. The iPad 4 did particularly well in SunSpider, a browser-based test of JavaScript performance that gives a good indication of how fast the web experience will be. Considering most of us live online when we’re using our computers, that’s an increasingly important metric.

The iPad 4 scored under 880ms in our SunSpider testing (the lower the number, the better), making it one of the fastest tablets around in that particular benchmark. Now, admittedly, a current-gen MacBook Pro is capable of scores a quarter of that. But, more importantly, the iPad 4 can run for more than ten hours of active use at that level of performance, on a 43 Whr battery. Inside the new 13-inch MacBook Pro with Retina, in contrast, Apple finds room for a 74 Whr pack.
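
The back-of-the-envelope power math makes the gap plain. Taking the quoted ten hours on 43 Whr for the iPad, and assuming Apple’s roughly seven-hour rating for the 74 Whr Retina pack, the average draws work out as follows:

```python
# Average power draw = battery capacity (Wh) / runtime (h); the ~7-hour
# MacBook Pro runtime is an assumption based on Apple's own rating.
ipad_avg_watts = 43 / 10  # ~4.3W over ten hours of active use
mbp_avg_watts = 74 / 7    # ~10.6W for the 13-inch Retina Pro
print(f"iPad 4: {ipad_avg_watts:.1f}W, 13-inch rMBP: {mbp_avg_watts:.1f}W")
```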

“Intel may make a fast processor but it’s behind the curve in efficiency”

The allure of an ARM-based MacBook, then, is the combination of that growing performance and the power frugality of the chips that deliver it. Intel may make a fast processor, but it’s behind the curve when it comes to efficiency compared to ARM; the company’s struggles with Atom in the mobile market are evidence of that. And, while there’ll always be a cadre of performance-demanding Mac users, the regular cohort with more everyday needs might be more than willing to sacrifice a little top-end grunt for the longevity to make it through a transatlantic flight with plenty of juice to spare.

In the end, though, Apple is notoriously self-reliant. The company has bought or invested in specialists in chip components, displays, aluminum casing production, optically-laminated displays, component assembly, and more. Anything, in short, that contributes to Apple’s supply chain or its competitive advantage in the marketplace (or preferably both). Sometimes the fruits of those investments go relatively unused for years, at least as far as the public can see; Apple’s perpetual and exclusive license to use Liquidmetal in its range – something so far mostly limited to a SIM-removal tool – is a good example of that.

We’ve also seen how it won’t shy from distancing itself from vendors when they either won’t toe the line or let the company down. NVIDIA’s time in the doghouse after the faulty MacBook GPU saga is good evidence of that, while AMD has long been tipped as attempting to curry Apple’s favor but never quite delivering the goods in internal testing.

If Apple can rid itself of reliance on another third party – and further extend the distance between its range and Windows-based PCs, blurring the lines of direct comparison – then it will undoubtedly jump at that chance. It’s unlikely to be shy in flexing its checkbook to do so, either, betting on long-term investment over short-term gains.

Apple, if time has taught us anything, will do what’s best for Apple: that means it demands the biggest advantage from those it works with, and isn’t afraid of taking a hit if it needs to change in order to achieve greater returns. In the past, Intel has given it early access to new processors, as well as the collaborative spoils of Thunderbolt ahead of PC rivals. If Intel can’t meet the grade on the sort of processors Apple sees as pivotal to its vision of future computing, however, all that shared history will be for naught. As far as Apple goes, it’s the Cupertino way or the highway.


If Apple can ditch Intel, it will is written by Chris Davies & originally posted on SlashGear.


AMD Opteron 6300 Series slots a 16-core Piledriver in your server rack

AMD has launched its next-gen Opteron 6300 Series processors, aiming to power the server you buy tomorrow and forming the more mainstream branch of its twin enterprise chip strategy. The new chips – which promise up to 24-percent higher performance versus the Opteron 6200 processors the range replaces – use AMD’s Piledriver core technology for reduced power consumption: that means cooler, faster servers that are cheaper to run.

The Opteron 6300 Series line-up maxes out at a 3.5GHz base frequency, though there’s up to 3.8GHz on offer in AMD Turbo CORE mode. 4-, 6-, 8-, 12-, and 16-core versions are offered, with TDPs ranging from 85W in the 6366 HE low-power model through to 140W for the 16-core, 2.8GHz 6386 SE at the top of the line.

Up to four 1866MHz memory channels are supported, and AMD claims the 6300 Series is the only x86 processor to work with ultra-low-voltage 1.25V memory. Each CPU can handle up to 384GB of memory – spread over up to 12 DIMMs – and up to four x16 HyperTransport links (each up to 6.4GT/s).
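
Those channel specs translate directly into theoretical per-socket memory bandwidth: DDR3-1866 moves 1,866 megatransfers per second over a 64-bit (8-byte) bus per channel. A quick check in Python:

```python
# Peak bandwidth = channels x transfers/s x bytes per transfer.
channels, transfers_per_sec, bytes_per_transfer = 4, 1866e6, 8
peak_gbs = channels * transfers_per_sec * bytes_per_transfer / 1e9
print(f"{peak_gbs:.1f} GB/s per socket")  # ~59.7 GB/s
```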

However, AMD isn’t solely relying on x86 for its future server chip strategy. The company recently confirmed that it was developing 64-bit ARM-based server processors, borrowing architecture more commonly associated with tablets and smartphones, and repurposing it for frugal use in enterprise server rooms.

The first servers to use the Opteron 6300 Series chips are on sale today, with Dell and HP both signed up to produce systems using AMD’s new CPU by the end of the year.


AMD Opteron 6300 Series slots a 16-core Piledriver in your server rack is written by Chris Davies & originally posted on SlashGear.


AMD unveils Opteron 6300, hopes to put servers in a Piledriver


AMD’s advantage these days most often rests in datacenters that thrive on the chip designer’s love of many-core processors, so it was almost surprising that the company brought its Piledriver architecture to the mainstream before turning to the server room. It’s closing that gap now that the Opteron 6300 is here. The sequel to the 6200 fits into the same sockets and consumes the same energy as its ancestor, but speeds ahead through Piledriver’s newer layout and instructions — if you believe AMD, as much as 24 percent faster in one performance test, 40 percent better in performance per watt and (naturally) a better deal for the money than Intel’s Xeon. Whether that’s true or just marketing bluster, there’s a wide spread of chips that range from a quad-core, 3.5GHz example to a 16-core, 2.8GHz beast for massively parallel tasks. Cray, Dell, HP and others plan to boost their servers before long, although the surest proof of the 6300’s success from our perspective may be that everything in the back room runs just as smoothly as it did yesterday.



AMD unveils Opteron 6300, hopes to put servers in a Piledriver originally appeared on Engadget on Mon, 05 Nov 2012 00:01:00 EDT.


iPad 4th gen GPU innards revealed

The newest iPad has had its guts revealed once again, showing off not just the pieces we knew about, the A6X processor included, but the GPU and its abilities as well. The folks at AnandTech have gone in for a deep dive into the A6X as it exists on the iPad 4, revealing a brand new PowerVR SGX 554 GPU. The A6X retains many elements from the A5X processor the iPad 3 worked with, including the memory controller interface sitting adjacent to the GPU cores rather than the CPU cores, as was the case in the A5 and the standard A6 (in the iPhone 5).

The A6X also retains the 128-bit-wide memory interface the A5X worked with, integrating two of Apple’s Swift CPU cores running at up to 1.4GHz right out of the box. The PowerVR SGX 554 GPU living here in the iPad 4 is far and away more advanced than the units used in the iPad 3 or the iPhone 5, doubling the number of SIMDs of the iPad 3’s GPU, the PowerVR SGX 543MP4, essentially one generation back.

This new GPU appears to have double the ALUs per core of the iPad 3’s unit (8 Vec4 ALUs per core vs. 4), and Chipworks’ die analysis suggests two sets of four identical sub-cores plus one central core – nine sub-blocks in all. Anand suggests that this architecture points toward a theoretical performance greater than 77 GFLOPS – hot stuff!
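
That GFLOPS figure can be roughly reconstructed from the ALU counts above, assuming each Vec4 lane retires one multiply-add (two flops) per cycle; the ~300MHz clock below is our illustrative assumption, not a confirmed spec, and a slightly higher clock pushes the total past 77 GFLOPS:

```python
# Theoretical peak = cores x ALUs/core x lanes/ALU x flops/lane x clock.
cores, alus_per_core, lanes_per_alu, flops_per_lane = 4, 8, 4, 2
flops_per_cycle = cores * alus_per_core * lanes_per_alu * flops_per_lane  # 256
clock_hz = 300e6  # assumed clock for illustration
print(flops_per_cycle * clock_hz / 1e9)  # ~76.8 GFLOPS
```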

You’ll find in benchmarks galore that this iPad – surprise – beats the older iPad models by a significant margin regardless of the test. The GPU appears to be clocked at least 15% higher than the iPad 3’s, while tests like GLBenchmark are showing as much as a 65% performance improvement in some cases – in other words: really, really good. Have a peek at our full iPad 4th gen review for more everyday testing and sweet daily action.


iPad 4th gen GPU innards revealed is written by Chris Burns & originally posted on SlashGear.


Take that linear algebra to go: Intel’s 48-core chip targeting smartphones and tablets


Intel’s taking its 48-core processor and applying it to a field beyond academia: the world of mobile electronics. The company this morning announced intentions to slip the 48-core bad boy into future tablets and smartphones (emphasis on future), with CTO Justin Rattner saying the mobile implementation could arrive “much sooner” than the 10-year window predicted by researchers.

Aside from the thrilling world of linear algebra and fluid dynamics that the chipset is currently used for, Intel says it could offload processor-intensive functions across several cores, effectively speeding up various tasks (say, video streaming). The availability of so many cores also means faster multitasking possibilities than the current dual- or quad-core offerings in modern smartphones and tablets — just imagine a world where two Angry Birds games can run simultaneously in the background without affecting the paradoxical game of Tiny Wings you decided to play instead. Hey, we understand — it’s just a better bird game. No big. Sadly, few software developers are crafting their wares (warez?) to take advantage of multi-core processing as is, so it’s gonna take more than just the existence of Intel’s 48-core chip to make its vision a reality.
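
The offloading idea itself is the familiar fan-out pattern that multi-core code already uses, just across far more workers. A toy Python sketch of the concept, with transcode_chunk as a hypothetical stand-in for real per-core work:

```python
from multiprocessing import Pool, cpu_count

def transcode_chunk(chunk_id):
    # Hypothetical stand-in for real work, e.g. decoding one video slice.
    return sum(i * i for i in range(1_000_000)) + chunk_id

if __name__ == "__main__":
    with Pool(cpu_count()) as pool:  # one worker per core: 48 on a 48-core part
        results = pool.map(transcode_chunk, range(48))
    print(f"processed {len(results)} chunks across {cpu_count()} cores")
```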


Take that linear algebra to go: Intel’s 48-core chip targeting smartphones and tablets originally appeared on Engadget on Tue, 30 Oct 2012 14:38:00 EDT.

Source: Computerworld

ARM’s Cortex-A50 chips promise 3x performance of current superphones by 2014, throw in 64-bit for good measure


We knew this was coming, not least because someone let the cat out of the bag (or at least a paw) last night. Nevertheless, it’s only today that we’re getting the full picture of ARM’s “clean sheet” v8 architecture, and you know what? It’s pretty astounding. Top billing goes to the Cortex-A57, which is said to deliver “three times the performance of today’s top smartphones” without guzzling any additional power. Alternatively, the chip could be designed to deliver the same performance as a current smartphone or tablet but make the battery last five times as long — which would make that Surface RT just about five times nicer than it is already. How’s all this possible? Read on for more.



ARM’s Cortex-A50 chips promise 3x performance of current superphones by 2014, throw in 64-bit for good measure originally appeared on Engadget on Tue, 30 Oct 2012 12:20:00 EDT.

Source: ARM

AMD FX-8350 review roundup: enthusiasts still won’t be totally enthused


Now that AMD’s fresh new FX processors based on the Piledriver architecture are out in the wild, the specialist hardware sites have seen fit to benchmark the top-lining FX-8350. Overall, the group feels that AMD has at least closed the gap a bit on Intel’s Core juggernaut with a much better FX offering this time around, but the desktop CPU landscape remains unchanged — with Intel still firmly at the top of the heap. Compared to its last-gen Bulldozer chips, “in every way, today’s FX-8350 is better,” according to Tom’s Hardware: cheaper, up to 15 percent faster and more energy efficient. Still, while the new CPUs represent AMD’s desktop high-end, they only stack up against Intel’s mid-range Core i5 family, and even against that line-up they only edge ahead in heavily threaded testing. If you “look beyond those specific (multithreaded) applications, Intel can pull away with a significant lead” due to its superior design, says AnandTech. As for power consumption, unfortunately “the FX-8350 isn’t even the same class of product as the Ivy Bridge Core i5 processors on this front,” claims The Tech Report.

Despite all that, Hot Hardware still sees several niches that AMD could fill with the new chips, as they provide “an easy upgrade path for existing AMD owners and more flexibility for overclocking, due to its unlocked multipliers.” That means if you already have a Socket-AM3+ motherboard, you’ll be able to do a cheap upgrade by swapping in the new CPU, and punching up the clock speeds might close the performance gap enjoyed by the Core i5. Finally, AMD also saw fit to bring the new chip in at a “very attractive” $195 by Hexus‘ reckoning, a much lower price than an earlier leak suggested. Despite that, however, the site says that AMD’s flagship FX processor still “cannot tick as many desirable checkboxes as the competing Intel Core i5 chips.” Feel free to scope out all the sources below and draw your own conclusions.

Read – Tom’s Hardware
Read – Hot Hardware
Read – AnandTech
Read – Hexus
Read – The Tech Report


AMD FX-8350 review roundup: enthusiasts still won’t be totally enthused originally appeared on Engadget on Tue, 23 Oct 2012 17:58:00 EDT.
