NVIDIA GPU resurrected after 10 minutes at 425°F

We’ve seen some pretty weird stuff in our years on this planet — heck, we’ve revived our own drenched Sony DAP by burying it in rice for 48 hours — but this is easily one of the most bizarre gizmo resurrections we’ve ever come across. As the tale goes, one valiant NVIDIA GPU owner apparently bit on a myth which suggested that a pinch of time in the oven (quite literally, might we add) would repair faulty GPUs that were throwing up oodles of vertical lines. After purchasing another GPU to replace his ailing 8800GTX, he figured he had zilch to lose and gave it a shot; lo and behold, the temporary warmth seemingly melted the solder points and healed micro-fractures that were causing the unwanted lines. We’ve yet to hear how his attempt at returning the new GPU went, but hey, there’s always eBay. Give the read link a look if you’re still in disbelief.

[Via Digg]

ASUS Mars GPU weds twin GeForce GTX 285s, might just melt your face

You into frame rates? No, we mean are you frickin’ bonkers over watching your rig hit triple digits in a Crysis timedemo? If you’re still nodding “yes,” have a gander at what’ll absolutely have to be your next buy. The ASUS Mars 295 Limited Edition is quite the unique beast, rocking a pair of GTX 285 chips that are viewed by Windows as a single GeForce GTX 295. All told, you’re looking at 480 shader processors (240 per GPU), a 512-bit GDDR3 memory interface on each chip, 32 memory chips in total and 4GB of RAM. Amazingly, the card is totally compatible with existing drivers and is Quad-SLI capable, and if all goes to plan, it’ll actually poke its head out at Computex next week. Rest assured, we’ll do everything we can to touch it.

ASUS Eee PC 1000HV resurfaces with Atom N280, HD 3450

Another day, another entrant in the mile-long list of Eee PC netbooks. This one, however, is a curious add. You see, the Eee PC 1000HV originally came to light way back in July of 2008, when no fewer than 23 Eee model names were casually leaked out. Since that day, we’ve heard not a peep from the machine… until now, obviously. In a few locations overseas, the 1000HV has emerged for order, packing a 10.1-inch 1,024 x 600 display, a 1.66GHz Atom N280 CPU, 160GB hard drive, 1GB of RAM, VGA output, the standard assortment of ports and a mildly attractive AMD HD 3450 graphics set — the same one that ASUS recently shoved in its HD-minded Eee Box 206. We can’t help but applaud the choice to slip in a real (or quasi-real, anyway) GPU here, but until this pup heads stateside, we’re still figuring this is all just a figment of our imagination.

[Via Slashgear]

Read – Eee PC 1000HV order site
Read – Another Eee PC 1000HV order site

Intel details next-generation Atom platform, say hello to Pine Trail

Intel has been doing a lot of talking about big new processors and platforms as of late, and it’s now gotten official with one that’s soon to be ever-present: its next-generation Atom platform, codenamed Pine Trail. In case you haven’t been tracking the rumors, the big news here is that the processor part of the equation, dubbed Pineview, will incorporate both the memory controller and the GPU, which reduces the number of chips in the platform to two, and should result in some significant size and power savings. As Ars Technica points out, the platform is also the one that’ll be going head to head with NVIDIA’s Ion, which is likely to remain more powerful but not as affordable or efficient, especially considering that NVIDIA has no way to match the tight integration Intel gets by putting the GPU on-die with the CPU. Either way, things should only get more interesting once Pine Trail launches in the last quarter of this year.

Giz Explains: GPGPU Computing, and Why It’ll Melt Your Face Off

No, I didn’t stutter: GPGPU—general-purpose computing on graphics processing units—is what’s going to bring hot screaming gaming GPUs to the mainstream, with Windows 7 and Snow Leopard. Finally, everybody’s face melts! Here’s how.

What a Difference a Letter Makes
GPU sounds—and looks—a lot like CPU, but they’re pretty different, and not just ’cause dedicated GPUs like the Radeon HD 4870 can be physically massive. GPU stands for graphics processing unit, while CPU stands for central processing unit. Spelled out, you can already see the big differences between the two, but it takes some experts from Nvidia and AMD/ATI to get to the heart of what makes them so distinct.

Traditionally, a GPU does basically one thing: speed up the processing of image data that you end up seeing on your screen. As AMD Stream Computing Director Patricia Harrell told me, they’re essentially chains of special-purpose hardware designed to accelerate each stage of the geometry pipeline, the process of matching image data or a computer model to the pixels on your screen.

GPUs have a pretty long history—you could go all the way back to the Commodore Amiga, if you wanted to—but we’re going to stick to the fairly recent past. That is, the last 10 years, when Nvidia’s Sanford Russell says GPUs started adding cores to distribute the workload. See, graphics calculations—the calculations needed to figure out what pixels to display on your screen as you snipe someone’s head off in Team Fortress 2—are particularly suited to being handled in parallel.

An example Nvidia’s Russell gave to think about the difference between a traditional CPU and a GPU is this: If you were looking for a word in a book, and handed the task to a CPU, it would start at page 1 and read it all the way to the end, because it’s a “serial” processor. It would be fast, but would take time because it has to go in order. A GPU, which is a “parallel” processor, “would tear [the book] into a thousand pieces” and read it all at the same time. Even if each individual word is read more slowly, the book may be read in its entirety quicker, because words are read simultaneously.

All those cores in a GPU—800 stream processors in ATI’s Radeon 4870—make it really good at performing the same calculation over and over on a whole bunch of data. (Hence a common GPU spec is flops, or floating point operations per second, measured in current hardware in terms of gigaflops and teraflops.) The general-purpose CPU is better at some stuff though, as AMD’s Harrell said: general programming, accessing memory randomly, executing steps in order, everyday stuff. It’s true, though, that CPUs are sprouting cores, looking more and more like GPUs in some respects, as retiring Intel Chairman Craig Barrett told me.
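
To make that concrete, here is a minimal sketch of what "the same calculation over a whole bunch of data" looks like in CUDA (our own illustrative example, not code from Nvidia or AMD; the kernel name and numbers are made up). Every GPU thread handles exactly one element of a big array, the same way each torn-out chunk of Russell's book gets its own reader:

```cuda
// One thread per array element: every thread runs the same tiny
// calculation (y = a*x + y, a classic "SAXPY") on its own piece of data.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // this thread's element
    if (i < n)                                       // guard the tail end
        y[i] = a * x[i] + y[i];
}
```

Launched over a million-element array, that one multiply-add runs across hundreds of stream processors at once, which is exactly where those gigaflop and teraflop figures come from.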

Explosions Are Cool, But Where’s the General Part?
Okay, so the thing about parallel processing—using tons of cores to break stuff up and crunch it all at once—is that applications have to be programmed to take advantage of it. It’s not easy, which is why Intel at this point hires more software engineers than hardware ones. So even if the hardware’s there, you still need the software to get there, and it’s a whole different kind of programming.

Which brings us to OpenCL (Open Computing Language) and, to a lesser extent, CUDA. They’re frameworks that make it way easier to use graphics cards for kinds of computing that aren’t related to making zombie guts fly in Left 4 Dead. OpenCL is the “open standard for parallel programming of heterogeneous systems” standardized by the Khronos Group—AMD, Apple, IBM, Intel, Nvidia, Samsung and a bunch of others are involved, so it’s pretty much an industry-wide thing. In semi-English, it’s a cross-platform standard for parallel programming across different kinds of hardware—using both CPU and GPU—that anyone can use for free. CUDA is Nvidia’s own architecture for parallel programming on its graphics cards.
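
For a sense of what "using the graphics card for computing" actually looks like in practice, here is a hedged host-side sketch in CUDA that runs the saxpy kernel sketched earlier (assumed to live in the same file): copy data to the card, launch a grid of threads, copy the result back. Array sizes and values are arbitrary; an OpenCL version would follow the same shape with different API calls.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const int n = 1 << 20;                      // about a million floats
    const size_t bytes = n * sizeof(float);

    // Fill the input arrays on the CPU side.
    float *hx = new float[n], *hy = new float[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Copy them into the graphics card's own memory.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover every element.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 3.0f, dx, dy);

    // Pull the result back and spot-check it: 3*1 + 2 = 5.
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f (expect 5.0)\n", hy[0]);

    cudaFree(dx); cudaFree(dy);
    delete[] hx; delete[] hy;
    return 0;
}
```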

OpenCL is a big part of Snow Leopard. Windows 7 will use some graphics card acceleration too (though we’re really looking forward to DirectX 11). So graphics card acceleration is going to be a big part of future OSes.

So Uh, What’s It Going to Do for Me?
Parallel processing is pretty great for scientists. But what about those regular people? Does it make their stuff go faster? Not everything, and to start, it’s not going too far from graphics, since that’s still the easiest to parallelize. But converting, decoding and creating videos—stuff you’re probably doing now more than you did a couple years ago—will improve dramatically soon. Say bye-bye to 20-minute renders. Ditto for image editing; there’ll be less waiting for effects to propagate with giant images (Photoshop CS4 already uses GPU acceleration). In gaming, beyond straight-up graphical improvements, physics engines can get more complicated and realistic.
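
And as a rough sketch of why image editing maps so well onto the GPU (again our own illustrative CUDA, not anything out of Photoshop): a per-pixel effect like brightening gives every pixel of even a giant photo its own thread, instead of grinding through one long CPU loop.

```cuda
// Illustrative only: brighten an 8-bit grayscale image on the GPU.
// Threads are arranged in a 2D grid that mirrors the image itself.
__global__ void brighten(unsigned char *img, int width, int height, int delta) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;    // pixel column
    int y = blockIdx.y * blockDim.y + threadIdx.y;    // pixel row
    if (x < width && y < height) {
        int v = img[y * width + x] + delta;
        img[y * width + x] = v > 255 ? 255 : (unsigned char)v;  // clamp to white
    }
}

// Host-side launch for, say, a 4000 x 3000 photo already copied to the card:
//   dim3 threads(16, 16);
//   dim3 blocks((4000 + 15) / 16, (3000 + 15) / 16);
//   brighten<<<blocks, threads>>>(d_img, 4000, 3000, 30);
```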

If you’re just Twittering or checking email, no, GPGPU computing is not going to melt your stone-cold face. But anyone with anything cool on their computer is going to feel the melt eventually.

AMD busts out world’s first air-cooled 1GHz GPU

The last time a GPU milestone this significant was passed, it was June of 2007, and we remember it well. We were kicked back, soaking in the rays from Wall Street and firmly believing that nothing could ever go awry — anywhere, to anyone — due to a certain graphics card receiving 1GB of onboard RAM. Fast forward a few dozen months, and now we’ve got AMD dishing out the planet’s first factory-clocked card to hit the 1GHz mark. Granted, overclockers have been running their cards well above that point for awhile now, but hey, at least this bugger comes with a warranty. The device doing the honors is the ATI Radeon HD 4890, and it’s doing it with air cooling alone and just a wee bit of factory overclocking. Take a bow, AMD — today’s turning out to be quite a good one for you.

NVIDIA Tesla GPUs now shipping with Dell ‘personal supercomputers’

Been itching to get your hands on a personal supercomputer, as NVIDIA’s ad wizards put it? The company has just announced that its CUDA-based Tesla C1060 GPU is now available in Dell’s Precision R5400, T5500 and T7500 workstations. And just to put things into perspective, NVIDIA points out that a Dell workstation rockin’ a single Tesla C1060 has enough going on under the hood to power the control system for the European Extremely Large Telescope project (“the world’s largest,” apparently). According to one of the developers, Jeff Meisel at National Instruments, a workstation “equipped with a single Tesla C1060 can achieve near real-time control of the mirror simulation and controller, which before wouldn’t be possible in a single machine without the computational density offered by GPUs.” Wild, huh? If you’re curious about the workout that Tesla GPUs are getting on a wide range of projects, from Bio-Informatics to Computational Chemistry to Molecular Dynamics and more — or if you’re merely a glutton for long-winded PR — check out the good stuff after the break.

NVIDIA’s GeForce GTX 285 coming to Macs in June

Mac users — are you tired of being taunted by your PC friends over their myriad GPU options / killer gaming rigs? Well, here’s one less front they can battle you on. We’ve just received a pic of this nasty piece of work in our inboxes with word that it’s due in June. Like the PC version, we’re guessing you can expect two things here: it’s killer… and it’s expensive.

ATI Radeon HD 4770 GPU review roundup

We like how you’re thinking, AMD, and we don’t say that every day — or ever, really. During a time when even hardcore gamers are having to rethink whether or not that next-gen GPU is a necessity, AMD has pushed out a remarkably potent new graphics card for under a Benjamin, and the whole world has joined in to review it. The ATI Radeon HD 4770, which was outed just over a week ago, has been officially introduced for the low, low price of just $99 (including rebates, which should surface soon). Aside from being the company’s first mainstream desktop GPU manufactured using a 40nm process, this little gem was a real powerhouse when put to the test. In fact, critics at HotHardware exclaimed that this card “offers performance in the same range as cards that were launched at the $299 to $349 price point only a year ago.” The bottom line? It’s “one of the best buys” out in its price range, and even with all that belt tightening you’ve been doing, surely you can spare a C-note, yeah?

Read – HotHardware (“Recommended; one of the best buys at its price point”)
Read – XBit Labs (“the best budget graphics accelerator [out there]”)
Read – LegitReviews (“great performance, low power consumption and low noise”)
Read – PCStats (“strikes a balance between performance and price”)
Read – TechSpot (“an outstanding choice in the $100 graphics market”)
Read – NeoSeeker (“a good value”)
Read – PCPerspective (“impressive”)

AMD releases another notebook roadmap, does not release Fusion chips

Well, well, a new AMD roadmap promising a superior hybrid CPU/GPU chip sometime in the distant future. That doesn’t sound like the same old vaporware refrain we’ve been hearing about Fusion since 2006 at all, does it? Yep, everyone’s favorite underdog is back in the paperwork game, and this time we’ve got a sheaf of pointy-eared details on the company’s upcoming notebook plans, all culminating in the “Sabine” platform, which is wholly dependent on Sunnyvale actually shipping a mobile variant of the delayed Fusion APU in 2011 once it finds the Leprechaun City. In the meantime, look forward to a slew of forgettable laptops getting bumped to the “Danube” platform, which supports 45nm quad-core chips, DDR3-1066 memory, and an absolutely shocking 14 USB 2.0 ports. Ugh, seriously — does anyone else think AMD should suck it up, put out a cheap Atom-class processor paired with a low-end Radeon that can do reasonable HD video output, and actually take it to Intel in the booming low-end market instead of goofing around with the expensive, underperforming Neo platform and a fantasy chip it’s been promising for three years now? Call us crazy.

[Via PC Authority; thanks Geller]
