Intel’s next-gen Pine Trail Atom processors officially announced

Get ready for the next generation of netbooks and nettops: Intel’s just officially announced the Pine Trail Atom N450, D410, and D510, along with the NM10 Express chipset, and we should see over 80 machines with the 45-nanometer chips at CES 2010. Nothing too surprising about the 1.66GHz chips themselves, which integrate the memory controller and Intel graphics directly onto the CPU die: the N450 is targeted at netbooks, while the single-core D410 and dual-core D510 are designed for nettops, and each chip should use about 20 percent less power than its predecessor. That was borne out in our review of the N450-based ASUS Eee PC 1005PE, which got 10 hours of battery life in regular use, but unfortunately we didn’t experience any performance improvements over the familiar N270 and N280. That jibes with other reports we’ve heard, but we’ll wait to test some more machines before we break out the frowny face permanently — for now, check out the full press release below.

Intel’s next-gen Pine Trail Atom processors officially announced originally appeared on Engadget on Mon, 21 Dec 2009 00:01:00 EST.

University of Antwerp stuffs 13 GPUs into FASTRA II supercomputer

The researchers at the University of Antwerp’s Vision Lab caused quite a stir last year when they built a supercomputer with four high-end NVIDIA graphics cards, but it looks like they’ve truly stepped up their game for their follow-up: a supercomputer that packs no fewer than thirteen GPUs. That, as you might have guessed, presented a few new challenges, but after wrangling some flexible PCI Express cables into a specially made case and loading up a custom BIOS courtesy of ASUS, they were apparently able to get six dual-GPU NVIDIA GTX 295 cards and one single-GPU GTX 275 card up and running with only a few hiccups. As before, the big advantage with this approach is that you get an enormous amount of computing power for a relatively small cost — twelve teraflops for less than €6,000, to be specific. Head on past the break for a pair of videos showing the thing off, and hit up the link below for the complete details (including some jaw-dropping benchmarks).
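
For a rough sense of scale, here’s a quick back-of-the-envelope check; it’s just arithmetic on the figures quoted above, not anything from an official spec sheet:

```python
# Back-of-the-envelope check of the FASTRA II figures quoted in the post.
# All inputs come straight from the post, not from an official spec sheet.

dual_gpu_cards = 6        # NVIDIA GTX 295 cards, two GPUs apiece
single_gpu_cards = 1      # one GTX 275
total_gpus = dual_gpu_cards * 2 + single_gpu_cards
print(f"GPUs in the box: {total_gpus}")                  # -> 13

total_tflops = 12.0       # aggregate throughput claimed for the system
total_cost_eur = 6000     # "less than €6,000"
print(f"Cost per teraflop: ~€{total_cost_eur / total_tflops:.0f}")  # -> ~€500
```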

University of Antwerp stuffs 13 GPUs into FASTRA II supercomputer originally appeared on Engadget on Mon, 14 Dec 2009 16:20:00 EST.

HP leaks forthcoming Radeon GPUs, Core i3 CPUs, Hulu and Netflix software integration

We’ve come across a bonanza of information about HP’s Spring 2010 plans for North America. Kicking off the new year in style will be Pavilion desktops featuring a choice between ATI’s Radeon HD 5350 (codenamed Evora Cedar), which will have HDMI, DVI, and VGA ports along with 1GB of onboard memory, and the juicier Radeon HD 5570 (aka Jaguar), which bids adieu to VGA in favor of DisplayPort and bumps up the memory allowance to 2GB. Core i3-5xx and Core i5-6xx machines are also slated for the early part of 2010, based on that energy-conscious Clarkdale core we’ve already seen, with the difference being that Turbo Boost and a larger L3 cache (4MB versus 3MB) will be available on the higher-numbered chips. Arrandale fans need not despair either, as HP’s TouchSmart 600 all-in-ones will be getting upgrades to Core i5 and Core i7 CPUs based on that architecture. Finally, on the software side, HP is adding native Hulu and Netflix support to its MediaSmart software suite. Check out the gallery below for more, and let the waiting begin!

HP leaks forthcoming Radeon GPUs, Core i3 CPUs, Hulu and Netflix software integration originally appeared on Engadget on Wed, 09 Dec 2009 07:20:00 EST.

Intel’s Larrabee graphics processor delayed, downsized to mere software development platform

Well. NVIDIA has to be loving this. Intel announced today that Larrabee, the graphics chip that promised to usher in a new era of post-GPU computing, is not only delayed but has been downgraded to a “software development platform.” Intel isn’t even saying what that “software development” will be aimed at, though we have to assume it would be some future version of the hybrid GPU / CPU chip. As for when the kit itself might arrive, that’s anybody’s guess; Intel is merely saying “next year.” Meanwhile, we can look forward to Intel’s first example of a GPU / CPU hybrid in the upcoming Pineview Atom processor, which kicks those lackluster integrated graphics to the curb and moves everything onto the CPU. Who knows if that will be enough to quell NVIDIA’s quiet takeover of the higher-end netbook space with its ION graphics, but with Intel’s current track record in the graphics space, we doubt it.

Intel’s Larrabee graphics processor delayed, downsized to mere software development platform originally appeared on Engadget on Sat, 05 Dec 2009 03:45:00 EST.

Core Values: What’s next for NVIDIA?

Core Values is our new monthly column from Anand Shimpi, Editor-in-chief of AnandTech. With over a decade of experience poring over the latest in chip developments, he’s here to explain how things work and why our tech is the way it is.


I remember the day AMD announced it was going to acquire ATI. NVIDIA told me that its only competitor just threw in the towel. What a difference a few years can make.

The last time NVIDIA was this late to a major DirectX transition was seven years ago, and the company just quietly confirmed we won’t see its next-generation GPU, Fermi, until Q1 2010. If AMD’s manufacturing partner TSMC weren’t having such a terrible time making 40nm chips, I’d say AMD would be gobbling up market share like a fat kid. By the time NVIDIA gets its entire stack of DX11 hardware out the gate, AMD will be a quarter away from putting out newly refreshed GPUs.

Things aren’t much better on the chipset side either — for all intents and purposes, the future of NVIDIA’s chipset business in the PC space is dead. Not only has NVIDIA recently announced that it won’t be pursuing any chipsets for Intel’s Core i3, i5, or i7 processors until its various legal disputes with Intel are resolved, but it doesn’t really make sense to be a third-party chipset vendor anymore. Both AMD and Intel are more than capable of doing chipsets in-house, and the only form of differentiation comes from the integrated graphics core — so why not just sell cheap discrete GPUs for OEMs to use alongside Intel chipsets instead?

Even Ion is going to be short-lived. NVIDIA’s planning to mold an updated graphics chip into an updated chipset for the next-gen Atom processor, but Pine Trail brings the memory controller and graphics onto the CPU and leaves NVIDIA out in the cold once again.

Let’s see: no competitive GPUs, no future chipset business. This isn’t looking good so far — but the one thing I’ve learned from writing about these companies for the past 12 years is that the future’s never quite what it seems. Chances are, NVIDIA’s going to look a lot different in the future because of two things: Tesla and Tegra.

Core Values: What’s next for NVIDIA? originally appeared on Engadget on Fri, 04 Dec 2009 15:00:00 EST.

NVIDIA’s Fermi-based GeForce 100 GPU makes a Twitter appearance

We’d been hearing that NVIDIA’s Fermi chips had been delayed, but they’re apparently far enough along for spokesperson Brian Burke to tweet an image of the new Fermi-based GeForce 100 GPU running the Unigine Heaven DX11 benchmark earlier today. That’s certainly one way to hit back at ATI’s launch of the fastest graphics card ever, the Radeon HD 5970, but we’d much rather have some hard info to work with. We’ve pinged NVIDIA and will let you know if we hear anything.

[Thanks, Alex]

NVIDIA’s Fermi-based GeForce 100 GPU makes a Twitter appearance originally appeared on Engadget on Wed, 18 Nov 2009 11:57:00 EST.

ATI Radeon HD 5970: world’s fastest graphics card confirmed

ATI just announced its latest and greatest polygon cruncher on the planet: the previously leaked Radeon HD 5970. The new card is also one of the first to support Microsoft DirectX 11 and Eyefinity multi-display output (driving up to three displays at once for a 7680×1600 maximum resolution), with ripe potential for overclocking thanks to the card’s Overdrive technology. Instead of relying upon a single GPU like the already scorching Radeon HD 5870, the 5970 brings a pair of Cypress GPUs linked on a single board by a PCI Express bridge for nearly 5 TeraFLOPS of compute power, or a mind-boggling 10 TeraFLOPS when set up in CrossFireX. Naturally, the card’s already been put to the test by all the usual benchmarking nerds, who praise it as the undisputed performance leader regardless of game or application. It even manages to keep power consumption in check until you start pouring on the voltage to ramp those clock speeds. As you’d expect, then, ATI isn’t going to offer any breaks on pricing, so you can expect to pay the full $599 suggested retail price when these cards hit shelves today, either at retail or as part of your new gaming rig bundle.
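
For the curious, that “nearly 5 TeraFLOPS” figure checks out against the card’s commonly cited specs. Here’s a quick sketch; treat the stream processor count, clock speed, and ops-per-clock figure as assumptions on our part rather than numbers from the announcement:

```python
# Rough single-precision throughput estimate for the Radeon HD 5970.
# Assumed inputs (not stated in the post): two Cypress GPUs per card,
# 1600 stream processors per GPU, a 725MHz core clock, and 2 floating-point
# ops (a multiply-add) per stream processor per clock.

gpus_per_card = 2
stream_processors_per_gpu = 1600
core_clock_ghz = 0.725
flops_per_sp_per_clock = 2

gflops = gpus_per_card * stream_processors_per_gpu * flops_per_sp_per_clock * core_clock_ghz
print(f"Single card: ~{gflops / 1000:.2f} TFLOPS")          # ~4.64, i.e. "nearly 5"
print(f"CrossFireX pair: ~{2 * gflops / 1000:.2f} TFLOPS")  # just shy of the ~10 quoted

# Eyefinity's maximum span: three 2560x1600 panels side by side
print(f"Eyefinity resolution: {3 * 2560}x{1600}")           # 7680x1600
```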

Read – Press Release
Read – Anandtech
Read – HotHardware
Read – PC Perspective
Read – HardOCP
Read – Hexus
Read – MaximumPC
Read – TweakTown

ATI Radeon HD 5970: world’s fastest graphics card confirmed originally appeared on Engadget on Wed, 18 Nov 2009 01:49:00 EST.

Adobe’s Flash Player 10.1 beta GPU acceleration tested, documented

We know you don’t actually care about 99 percent of the contents of the latest Flash Player update. What you really want to know is whether those new 1080p YouTube streams will run smoothly on your machine thanks to the newly implemented graphics card video acceleration. AnandTech has come to our collective aid on that one, with an extensive testing roundup of some of the more popular desktop and mobile GPU solutions. NVIDIA’s ION scored top marks, with “almost perfect” Hulu streaming, though Anand and crew encountered some issues with ATI’s chips and Intel’s integrated GMA 4500MHD, which they attribute to the new Flash Player’s beta status. On the OS front, although Linux and Mac OS are not yet on the official hardware acceleration beneficiary list, the wily testers found marked improvements in performance under OS X. It seems, then, that Adobe has made good on its partnership with NVIDIA and made ION netbooks all the more scrumptious in the process, while also throwing a bone to the Mac crowd; the majority of users, however, will be left exercising the virtue of patience until the finalized non-beta Player starts making the rounds in a couple of months. Hit the read link for further edification.

Adobe’s Flash Player 10.1 beta GPU acceleration tested, documented originally appeared on Engadget on Tue, 17 Nov 2009 05:46:00 EST.

AMD spells out the future: heterogeneous computing, Bulldozer and Bobcats galore

Believe it or not, it’s just about time for AMD to start thinking about its future. We know — you’re still doing your best to wrap that noodle around Congos and Thubans, but now it’s time to wonder how exactly Leo, Llano, and Zambezi (to name a few) can fit into your already hectic schedule. At an Analyst Day event this week, the chipmaker took the wraps off its goals for 2010 and 2011, and while it’s still focusing intently on Fusion (better described as heterogeneous computing, where “workloads are divided between the CPU and GPU”), it’s the forthcoming platforms that really have us worked up. For starters, AMD is looking into Accelerated Processing Unit (APU) configurations, which “represent the combined capabilities of [practically any] two separate processors.” We’re also told that the firm may actually introduce its Bulldozer (architecture for mainstream machines) and Bobcat (architecture for low-power, ultrathin PCs) platforms more quickly than similar ones have been rolled out in the past, which demonstrates an effort to really target the consumer market where Intel currently reigns. Frankly, we’re jazzed about the possibilities, so hit the links below for a deep dive into what just might be powering your next (or next-next) PC.
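
To make the “workloads are divided between the CPU and GPU” idea a bit more concrete, here’s a minimal, purely illustrative Python sketch of that division of labor; NumPy stands in for the GPU’s data-parallel engine, and the task names and split are our own invention, not AMD’s:

```python
# Illustrative sketch of heterogeneous computing: send wide, data-parallel
# work to a throughput engine (NumPy standing in for the GPU here) and keep
# branchy, order-dependent work on the CPU. The tasks are hypothetical.
import numpy as np

def gpu_style_task(pixels: np.ndarray) -> np.ndarray:
    """One simple operation applied to millions of elements at once."""
    return np.clip(pixels * 1.2 + 10, 0, 255)   # e.g. a brightness pass

def cpu_style_task(events):
    """Branch-heavy, serial logic that resists parallelization."""
    state = 0
    for e in events:
        state = state + e if e % 2 else state - e
    return state

frame = np.random.randint(0, 256, size=(1080, 1920))
print(gpu_style_task(frame).shape)        # the bulk, parallel half of the job
print(cpu_style_task(range(1000)))        # the serial, control-flow half
```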

[Via Digitimes]

AMD spells out the future: heterogeneous computing, Bulldozer and Bobcats galore originally appeared on Engadget on Thu, 12 Nov 2009 10:17:00 EST.

Intel Arrandale chips detailed, priced and dated?

Who’s up for some more Intel roadmap rumoring? The latest scuttlebutt from “notebook players” over in the Far East is that the chip giant has finally settled on names, speeds, and prices for its first three Arrandale CPUs, which are expected to arrive in the first half of 2010. The Core i5-520UM and Core i7-620UM both run at 1.06GHz, while the top Core i7-640UM model speeds ahead at 1.2GHz, with bulk prices of $241, $278, and $305 per unit, respectively. Even if those clock speeds don’t impress on paper, these 32nm chips splice two processing cores, the memory controller, and the graphics engine into a single package, and should thereby deliver major power savings. Platform pricing is expected to remain at around $500 for netbooks, while the ultrathins these chips are intended for should hit the $600 to $800 range… if Lord Intel wills it so.

Intel Arrandale chips detailed, priced and dated? originally appeared on Engadget on Thu, 12 Nov 2009 09:38:00 EST.
