Giz Explains: How to Choose the Right Graphics Card

There are plenty of great graphics cards out there, no matter what you’re looking for. Thing is, the odds are seemingly stacked against you ever finding the right one. It doesn’t have to be that hard.

Whether you’re buying a new computer, building your own or upgrading an old one, the process of choosing a new graphics card can be daunting. Integrated graphics solutions—the kind that come standard with many PCs—have trouble playing games from three years ago, let alone today, and will put you at a disadvantage when future technologies like GPGPU computing, which essentially uses your graphics card as an additional processor, finally take hold. On top of all this, we’re in the middle of a price dip—it’s objectively a great time to buy. (Assuming you’re settled on a desktop. Ahem.) The point is, you’ll want to make the right choice. But how?

Set Specific Goals, Sight Unseen
Your first step to finding the right graphics card is to just step back. Just as graphics card specs are nigh-on impossible to understand, naming conventions and marketing materials will do nothing except give you a headache. The endlessly higher numerical names, the overlapping product lines, the misleadingly named chip technologies—just leave them. For now, pretend they don’t exist.

Now, choose your goals. What games do you want to play? What video output options and ports do you want? What resolution will you be playing your games at? Do you have any use for the fledgling GPGPU technologies that are slowly permeating the marketplace? And although you may have to adjust this, set a price goal. Ready-built PC buyers will have to consider whatever upgrade cost your chosen company is charging, and adjust accordingly. For people upgrading their own systems, $150-$200 has been something of a sweet spot: It’ll get you a card with a new enough GPU, and sufficient VRAM to handily deal with mainstream games for a solid two years. If you want to spend less, you can; if you want to spend more, fine.

These are the terms that matter most. Seriously, disregard any allegiance to Nvidia or ATI, prior experiences with years-old graphics hardware or some heretofore distant, unreleased and unspec’d game franchise. Be decisive about what you want, but as far as hardware and marketing materials go, start blind.

Don’t Get Caught Up In Specs
Now that you’ve laid out your ambitions, as modest or extreme as they may be, it’s time to dive into the seething, disorienting pool of hardware that you’ll be choosing from. The selection, as you’ll find out, is daunting. The first layer of complexity comes from the big two—Nvidia and ATI—whose product lines read more like Terminator robot taxonomies than something generated by humans. Take Nvidia’s current desktop product line as an example.

It seems like you ought to be able to glean a linear progression of performance (or at least price) out of that alphanumeric pile, right? Not at all. How in the world are we to know that the 9800GTX is roughly equivalent to the GTS 250, or that the 8800GTS trumps a 9600GT? A two-letter suffix can mean more than a model number, and likewise, a model number can mean more than membership in a product line. These naming conventions change every couple of years, and occasionally even get traded between companies. For example, I’ve personally owned two graphics cards that bore 9×00 names—you just won’t find them in Nvidia’s lineup, because they were made by ATI. Point is: You don’t need to bother with this nonsense.

The next layer of awfulness comes from the sundry OEMs that rebrand, tweak and come up with elaborate ways to cool offerings from the big two. This is what Sapphire, EVGA, HIS, Sparkle, Zotac and any number of other inanely named companies do. They can, on occasion, make sizable changes to the performance of the GPUs they’re built around, but by and large, the Nvidia or ATI label on the box is still the best indication of what to expect from the product, i.e., a Zotac GTX 285 won’t be that much better or worse than an EVGA or stock model. You’ll get a different fan/heatsink configuration, different hardware styling, and possibly different memory or GPU frequency specs, but the most important difference—and the only one you should really concern yourself with—is price.

Graphics cards’ last, least penetrable line of defense against your comprehension is hardware jargon. Bizarre, unhelpful spec sheets are, and always have been, a common feature in PC hardware, from RAM (DDR3-1600!) to processors (12 MB L2 cache! 1333MHz FSB!).

Graphics cards are worse. Each one has three MHz-measured speeds you’ll see advertised—the core clock, the shader (processor) clock and the memory frequency. VRAM—the amount of dedicated memory your card has to work with—is another touted specification, ranging from 256MB to well beyond the 1GB barrier for gaming cards. On top of frequency, memory introduces a whole slew of additional confusing numbers: memory type (as in, DDR2 or GDDR3); interface width (in bits, the higher the better); and memory bandwidth, nowadays measured in GB/s. And increasingly, you’ll see processor core numbers trotted out. Did you know that Nvidia’s top-line card has 480 of them? No? Good.
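
If you’re curious how even one of those numbers hangs together before you (rightly) move on, here’s a back-of-the-envelope sketch of the memory bandwidth math, using the commonly quoted figures for ATI’s Radeon HD 4870 (a 256-bit bus and 900MHz GDDR5) purely as an illustration:

```python
# Rough memory-bandwidth arithmetic, to show how the spec-sheet numbers relate.
# Figures are the commonly quoted ones for ATI's Radeon HD 4870; treat this as
# an illustration, not an authoritative spec.

memory_clock_mhz = 900       # advertised memory frequency
transfers_per_clock = 4      # GDDR5 moves data four times per clock tick
bus_width_bits = 256         # memory interface width

effective_rate = memory_clock_mhz * 1e6 * transfers_per_clock    # transfers per second
bandwidth_gb_s = effective_rate * (bus_width_bits / 8) / 1e9     # bytes per second -> GB/s

print(f"{bandwidth_gb_s:.1f} GB/s")   # ~115.2 GB/s, the number on the box
```

Notice that none of this tells you how the card actually plays games, which is exactly the point.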

The best way to approach these numbers is to ignore them. Sure, they provide comparative evaluation and yes, they do actually mean something, but unless you’re a bonafide graphics card enthusiast, you won’t be able to look at a single spec—or a whole spec sheet—and come to any useful conclusions about the cards. Think of it like cars: horsepower, torque and engine displacement are all real things. They just demand context before they can be taken to mean anything to the driver. That’s why road tests carry so much weight.

Graphics cards have their own road testers, and they’ve got the only numbers you need to worry about.

Respect the Bench, or Trust the Experts
In the absence of meaningful specs, names or distinguishing features, we’re left with benchmarks. This is a good thing! For years, sites like Tom’s Hardware, Maximum PC, and AnandTech have tirelessly run nearly every new piece of graphics hardware through a battery of tests, providing the buying public with comparative measures of real-world performance. These are the only numbers you need to bother yourself with, and they’re where those goals you settled on come into play.

Here’s how to apply them. Say you just really want to play Left 4 Dead, and have about a hundred dollars to spend. Navigate over to Tom’s, check their benchmarks for that particular game, and scroll down the list. You’re looking for a card that a) is an option on whatever system you’re buying and b) can handle the game well—at a high resolution and high texture quality—which, generally speaking, means a comfortable 60 frames per second. Find the card, check the price and you’re practically done. Once you’ve zeroed in on a card based on your narrow criteria, expand outward. You can check out more game benchmarks and seek out standalone reviews, which will enlighten you on other, less obvious considerations, like fan noise, power draw and reported reliability. (Note: resources for notebook users are a little more sparse. That said, Notebook Check [click the British flag for English] does good work.)
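
To make that process concrete, here’s a toy version of the chart-scrolling exercise in Python. The cards, frame rates and prices below are invented purely for illustration; substitute the real numbers you pull from Tom’s or AnandTech.

```python
# A toy version of "scroll down the benchmark chart until something fits."
# All cards, frame rates and prices here are made up for illustration.

benchmarks = [
    # (card, average fps in your chosen game at high settings, street price in $)
    ("Card A", 42, 75),
    ("Card B", 63, 95),
    ("Card C", 88, 160),
    ("Card D", 110, 300),
]

budget = 100        # what you're willing to spend
target_fps = 60     # "handles the game well"

candidates = [c for c in benchmarks if c[1] >= target_fps and c[2] <= budget]
pick = max(candidates, key=lambda c: c[1], default=None)
print(pick)         # ('Card B', 63, 95): fast enough, and under budget
```

The real work, of course, is in where those numbers come from, which is why the benchmark sites matter.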

From there, your next worry will be buying for the future. You shouldn’t buy the bare minimum hardware for the current generation of games—there’s no need to spring for a card that’ll be obsolete within a few months, no matter how cheap it is. But buying the latest, greatest dual-GPU graphics cards is an equally bad value proposition. As generations of video hardware have come and gone, one thing has remained constant: A company’s midrange offerings, usually pegged at about $150-$200, are your best bet, period. Sometimes they’ll be new products, and sometimes they’ll have been around a while. What you’ll be buying, basically, is the top end of the last generation. This is fine, and will keep the vast majority of users happy for the lifecycle of their PC. Those of you who live on the bleeding edge probably don’t need this guide anyway.

Your alternative route is to just trust the experts. Sites like Ars Technica and Maximum PC regularly assemble system guides at various price points, in which they’ve made your value judgments for you. Tom’s even assembles a “Best Cards for the Money” guide each month, which is invaluable. At given price points, the answer will often be obvious, and these guys know what they’re talking about.

But keep in mind, they’re applying the same formula you can, just with a slightly more knowing eye. The matter truly is as simple as broadly deciding what you need, consulting the right sources and floating far enough above the spec-ravaged landscape so as to avoid getting a headache. Good luck.

AMD’s ATI Radeon E4690 brings HD, DirectX 10.1 support to embedded GPU arena

AMD’s newfangled ATI Radeon E4690 may not be the next Crysis killer, but it should do just fine in next-gen arcade and slot machines. All kidding aside (sort of…), this new embedded graphics chip is said to triple the performance of AMD’s prior offerings in the field, bringing with it 512MB of GDDR3 RAM, DirectX 10.1 / OpenGL 3.0 support and hardware acceleration of H.264 and VC-1 high-definition video. The 35mm chip also differentiates itself by integrating directly onto motherboards and taking on many of the tasks that are currently assigned to the CPU, but alas, it doesn’t sound as if we’ll be seeing this in any nettops / netbooks anytime soon, if ever.


iBUYPOWER launches 15.6-inch Battalion 101 CZ-10 gaming laptop

iBUYPOWER may not yet be a household name when it comes to gaming laptops, but it’s sure doing its darnedest to take on the likes of HP, Dell, Acer and ASUS with its totally respectable Battalion 101 CZ-10. This 15.6-inch lappie arrives with a 2.66GHz T9550 Core 2 Duo processor, 2GB of DDR3 RAM, ATI’s 512MB Radeon HD 4650 GPU, a 500GB 5400RPM hard drive, an 8x dual-layer DVD burner, a 6-cell battery and a WXGA (1,366 x 768) panel. You’ll also find an HDMI output, three USB 2.0 sockets, a 2 megapixel webcam, an inbuilt microphone, a 3-in-1 card reader and a fingerprint scanner. Best of all, the outfit throws in its accidental damage protection plan, all for the completely reasonable asking price of $1,235. It’s available to order now for those who can’t resist.


Giz Explains: GPGPU Computing, and Why It’ll Melt Your Face Off

No, I didn’t stutter: GPGPU—general-purpose computing on graphics processing units—is what’s going to bring hot screaming gaming GPUs to the mainstream, with Windows 7 and Snow Leopard. Finally, everybody’s face melts! Here’s how.

What a Difference a Letter Makes
GPU sounds—and looks—a lot like CPU, but they’re pretty different, and not just ’cause dedicated GPUs like the Radeon HD 4870 here can be massive. GPU stands for graphics processing unit, while CPU stands for central processing unit. Spelled out, you can already see the big differences between the two, but it takes some experts from Nvidia and AMD/ATI to get to the heart of what makes them so distinct.

Traditionally, a GPU does basically one thing: speed up the processing of image data that you end up seeing on your screen. As AMD Stream Computing Director Patricia Harrell told me, they’re essentially chains of special-purpose hardware designed to accelerate each stage of the geometry pipeline, the process of matching image data or a computer model to the pixels on your screen.

GPUs have a pretty long history—you could go all the way back to the Commodore Amiga, if you wanted to—but we’re going to stick to the fairly recent past. That is, the last 10 years, when, as Nvidia’s Sanford Russell says, GPUs started adding cores to distribute the workload. See, graphics calculations—the calculations needed to figure out what pixels to display on your screen as you snipe someone’s head off in Team Fortress 2—are particularly suited to being handled in parallel.

An example Nvidia’s Russell gave to think about the difference between a traditional CPU and a GPU is this: If you were looking for a word in a book, and handed the task to a CPU, it would start at page 1 and read it all the way to the end, because it’s a “serial” processor. It would be fast, but would take time because it has to go in order. A GPU, which is a “parallel” processor, “would tear [the book] into a thousand pieces” and read it all at the same time. Even if each individual word is read more slowly, the book may be read in its entirety quicker, because words are read simultaneously.
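
If you think in code, here’s a loose sketch of that book-tearing idea, using Python’s multiprocessing module to stand in for a GPU’s many cores (a big simplification, since a real GPU juggles thousands of far lighter-weight threads):

```python
# Serial vs. parallel word counting: one reader going page by page, versus the
# book "torn into pieces" and handed to several readers at once. A conceptual
# sketch only; a real GPU parallelizes at a much finer grain.

from multiprocessing import Pool

def count_in_page(page, word="headshot"):
    return page.count(word)

def serial_count(pages, word="headshot"):
    # One reader, front to back, in order.
    return sum(page.count(word) for page in pages)

def parallel_count(pages):
    # Many readers, each taking a stack of pages at the same time.
    with Pool() as pool:
        return sum(pool.map(count_in_page, pages))

if __name__ == "__main__":
    pages = ["the quick headshot jumped over the lazy medic"] * 10_000
    assert serial_count(pages) == parallel_count(pages)
```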

All those cores in a GPU—800 stream processors in ATI’s Radeon 4870—make it really good at performing the same calculation over and over on a whole bunch of data. (Hence a common GPU spec is flops, or floating point operations per second, measured in current hardware in terms of gigaflops and teraflops.) The general-purpose CPU is better at some stuff though, as AMD’s Harrell said: general programming, accessing memory randomly, executing steps in order, everyday stuff. It’s true, though, that CPUs are sprouting cores, looking more and more like GPUs in some respects, as retiring Intel Chairman Craig Barrett told me.
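
For the curious, here’s roughly where those headline flops figures come from. The per-cycle operation counts below are the ones commonly used in the marketing math for these 2009-era cards; take the whole thing as a sanity check rather than gospel.

```python
# Peak (theoretical) throughput: cores x clock x operations per core per cycle.
# The operation counts are the commonly quoted marketing figures, assumed here
# for illustration; real-world performance is far messier.

def peak_gflops(cores, clock_ghz, ops_per_core_per_cycle):
    return cores * clock_ghz * ops_per_core_per_cycle

# ATI Radeon HD 4870: 800 stream processors at 750MHz, multiply-add counted as 2 ops
print(peak_gflops(800, 0.75, 2))       # 1200.0 GFLOPS, i.e. 1.2 teraflops

# Nvidia GeForce GTX 285: 240 cores at a 1476MHz shader clock, counted as 3 ops
print(peak_gflops(240, 1.476, 3))      # ~1063 GFLOPS
```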

Explosions Are Cool, But Where’s the General Part?
Okay, so the thing about parallel processing—using tons of cores to break stuff up and crunch it all at once—is that applications have to be programmed to take advantage of it. It’s not easy, which is why Intel at this point hires more software engineers than hardware ones. So even if the hardware’s there, you still need the software to get there, and it’s a whole different kind of programming.

Which brings us to OpenCL (Open Computing Language) and, to a lesser extent, CUDA. They’re frameworks that make it way easier to use graphics cards for kinds of computing that aren’t related to making zombie guts fly in Left 4 Dead. OpenCL is the “open standard for parallel programming of heterogeneous systems” standardized by the Khronos Group—AMD, Apple, IBM, Intel, Nvidia, Samsung and a bunch of others are involved, so it’s pretty much an industry-wide thing. In semi-English, it’s a cross-platform standard for parallel programming across different kinds of hardware—using both CPU and GPU—that anyone can use for free. CUDA is Nvidia’s own architecture for parallel programming on its graphics cards.
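
To give a flavor of what that kind of programming actually looks like, here’s a minimal vector-add sketch using the pyopencl bindings. It’s a bare-bones illustration of the OpenCL model (kernel source compiled at runtime, data shuttled to and from the device), not anything Apple or Microsoft ships.

```python
# Minimal OpenCL example via pyopencl: add two large arrays on whatever
# OpenCL device is available (GPU or CPU). Illustration only.

import numpy as np
import pyopencl as cl

a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)

ctx = cl.create_some_context()            # pick an OpenCL platform/device
queue = cl.CommandQueue(ctx)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

kernel_src = """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out) {
    int i = get_global_id(0);   // each work-item handles one element
    out[i] = a[i] + b[i];
}
"""
program = cl.Program(ctx, kernel_src).build()
program.add(queue, a.shape, None, a_buf, b_buf, out_buf)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
assert np.allclose(out, a + b)            # same answer, computed in parallel
```

CUDA code looks broadly similar, except it only runs on Nvidia hardware.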

OpenCL is a big part of Snow Leopard. Windows 7 will use some graphics card acceleration too (though we’re really looking forward to DirectX 11). So graphics card acceleration is going to be a big part of future OSes.

So Uh, What’s It Going to Do for Me?
Parallel processing is pretty great for scientists. But what about regular people? Will it make their stuff go faster? Not everything, and to start, it won’t stray far from graphics, since that’s still the easiest work to parallelize. But converting, decoding and creating videos—stuff you’re probably doing more now than you did a couple years ago—will improve dramatically soon. Say bye-bye to 20-minute renders. Ditto for image editing; there’ll be less waiting for effects to propagate with giant images (Photoshop CS4 already uses GPU acceleration). In gaming, beyond straight-up graphical improvements, physics engines can get more complicated and realistic.

If you’re just Twittering or checking email, no, GPGPU computing is not going to melt your stone-cold face. But anyone with anything cool on their computer is going to feel the melt eventually.

AMD busts out world’s first air-cooled 1GHz GPU

The last time a GPU milestone this significant was passed, it was June of 2007, and we remember it well. We were kicked back, soaking in the rays from Wall Street and firmly believing that nothing could ever go awry — anywhere, to anyone — due to a certain graphics card receiving 1GB of onboard RAM. Fast forward a couple of years, and now we’ve got AMD dishing out the planet’s first factory-clocked card to hit the 1GHz mark. Granted, overclockers have been running their cards well above that point for a while now, but hey, at least this bugger comes with a warranty. The device doing the honors is the ATI Radeon HD 4890, and it’s doing it with air cooling alone and just a wee bit of factory overclocking. Take a bow, AMD — today’s turning out to be quite a good one for you.


AMD reorganizes, ATI now fully assimilated

It looks like the final step in AMD totally subsuming ATI has been taken. The company announced a reorganization around four specific pillars: products, future technology, marketing, and customer relations. The restructuring also marks the end of Randy Allen’s tenure, as the SVP of the Computing Solutions Group has decided to leave for unspecified reasons. ATI holdover Rick Bergman, who had also been head of the subsidiary known internally as the Graphics Product Group, will head up the products division with the goal of unifying the GPU and CPU teams (not necessarily the products). We highly doubt this means ATI branding is going anywhere — it’s far too valuable for AMD. Will Bergman’s lead help the company reclaim its position among the top ten chip makers? Give Fusion the kick in the pants it needs? Only time will tell.


Happy 40th Birthday AMD: 4 Ways You Beat Intel in the Glory Days

AMD, the other chip company, is 40 years old today. It’s the scrappy underdog to the Intel juggernaut. Today, it’s not in great shape, but at one point, it was actually beating Intel on innovation.

AMD tried to kill the megahertz myth before Intel did. During the Pentium 4 days, Intel kept pushing clock speeds higher and higher, before it hit a wall and abandoned the Prescott architecture. The message was clear: “more megahertz is more better.” AMD’s competing Athlon XP chips, while clocked slower, often beat their Pentium 4 rivals. Ironically, AMD was the first to 1GHz, as some commenters have pointed out (don’t know how I forgot that). Obviously, though, AMD’s performance lead didn’t last forever.

AMD beat Intel to 64-bit in mainstream computers. And we’re not just talking about its Opteron and Athlon 64 processors. AMD actually designed the x86-64 specification, which Intel wound up adopting and licensing—so AMD’s spec lives on in Intel’s 64-bit processors to this day.

AMD was first to consider energy efficiency in processor designs. Okay, this is kind of an extension of point number one, but during Intel’s Pentium 4 ’roid rage period, AMD’s processors consistently used less power than Intel’s. Intel’s performance-per-watt revelation didn’t really start until the Pentium M (which was actually a throwback to the P6 architecture), which set the tone for Intel’s new direction in its successor, the Core line of chips.

AMD beat Intel to having an integrated memory controller. A tech feature AMD lorded over Intel for years: AMD started integrating the memory controller onto its processors years ago, reducing memory latency. Intel’s first chip to use an integrated memory controller is the Core i7—before that, the memory controller was separate from the processor. (Here’s why Intel says they held off.)

Athlon XP and Athlon 64—those were the good old days, AMD’s cutthroat competitive days. The days they were ahead of Intel. I miss them—at one point, every hand-built computer in my house ran AMD processors. I felt like a rebel—a rebel with faster, cheaper computers.

Unfortunately, I don’t run AMD chips anymore. Intel came back, and came back hard. But here’s hoping for another resurgence, and another 40 years, guys. Share your favorite AMD memories in the comments.

ATI Radeon HD 4770 GPU review roundup

We like how you’re thinking, AMD, and we don’t say that every day — or ever, really. During a time when even hardcore gamers are having to rethink whether or not that next-gen GPU is a necessity, AMD has pushed out a remarkably potent new graphics card for under a Benjamin, and the whole world has joined in to review it. The ATI Radeon HD 4770, which was outed just over a week ago, has been officially introduced for the low, low price of just $99 (including rebates, which should surface soon). Aside from being the company’s first mainstream desktop GPU manufactured using a 40nm process, this little gem was a real powerhouse when put to the test. In fact, critics at HotHardware exclaimed that this card “offers performance in the same range as cards that were launched at the $299 to $349 price point only a year ago.” The bottom line? It’s “one of the best buys” in its price range, and even with all that belt tightening you’ve been doing, surely you can spare a C-note, yeah?

Read – HotHardware (“Recommended; one of the best buys at its price point”)
Read – XBit Labs (“the best budget graphics accelerator [out there]”)
Read – LegitReviews (“great performance, low power consumption and low noise”)
Read – PCStats (“strikes a balance between performance and price”)
Read – TechSpot (“an outstanding choice in the $100 graphics market”)
Read – NeoSeeker (“a good value”)
Read – PCPerspective (“impressive”)


AMD’s 40nm ATI Radeon HD 4770 outed, slated for May release?

Ever since we saw the glowing review AMD’s ATI Radeon RV740 prototype received, we’ve been looking forward to the day that the company would make one of these 40nm wonders available. It looks like that day might be close at hand — according to these purloined slides, a little something called the ATI Radeon HD 4770 is due to make the scene next month at the $99 price point. This handsome lad sports GDDR5 memory, DirectX 10.1 support, a 750 MHz clock speed, an 800 MHz memory clock on a 128-bit memory bus, a 512 MB frame buffer, and much, much more. Curious? Of course you are. Check the slides out below for all of the glorious details.

[Via Tom’s Hardware]


NVIDIA GTX 275 / ATI Radeon HD 4890 review roundup

Unless you’ve started your weekend early, you have probably realized that both NVIDIA and AMD announced new GPUs this morning. Coincidental timing aside, it sure makes things easy for consumers to eye the respective benchmarks and plan out their next mid-range GPU purchase accordingly. A whole bevy of reviews, tests, graphs and bar charts have hit the web this morning extolling and panning the pros and cons, but without getting too deep into the nitty-gritty, we can sum things up pretty easily: NVIDIA’s GTX 275 showed performance that placed it perfectly between the GTX 285 and GTX 260, and in all but a few off-the-wall tests, it outpaced the ATI Radeon HD 4890 (albeit slightly). Granted, the HD 4890 was called the “fastest, single-GPU powered graphics card AMD has ever produced” by HotHardware, though apparently even that wasn’t enough to help it snag the gold across the board. If you’re hungry for more (and you are, trust us), take the rest of the day off and dig in below.

Read – HotHardware GeForce GTX 275 review
Read – HotHardware Radeon HD 4890 review
Read – ExtremeTech GeForce GTX 275 and Radeon HD 4890 review
Read – DailyTech GeForce GTX 275 and Radeon HD 4890 review
Read – X-bit Labs ATI Radeon HD 4890 review
Read – ComputerShopper ATI Radeon HD 4890 review
Read – Guru 3D GeForce GTX 275 review
Read – Guru 3D ATI Radeon HD 4890 review
Read – PCPerspective ATI Radeon HD 4890 review
