AMD’s integrated 785G graphics platform review roundup

It’s mildly hard to believe that AMD’s DirectX 10-compatible 780 Series motherboard GPU was introduced well over a year ago now, but the long-awaited successor has finally landed. This fine morning, a gaggle of hardware sites around the web have taken a look at a number of AMD 785G-equipped mainboards, all of which boast integrated Radeon HD 4200 GPUs, support for AMD’s AM3 processors and a price point that’s downright delectable (most boards are sub-$100). Without getting into too much detail here, the general consensus seems to be that the new platform is definitely appreciated, but hardly revolutionary: it doesn’t demolish the marks set by the 780G, and it can’t easily put NVIDIA’s GeForce 9300 to shame. What it can do, however, is provide better-than-average HD playback, making it a prime candidate for basic desktop users and even HTPC builders. For the full gamut of opinions, grab your favorite cup of joe and get to clickin’ below.

Read – HotHardware review
Read – The Tech Report review
Read – Tom’s Hardware review
Read – PC Perspective review
Read – Hardware Zone review
Read – Hexus review

MSI takes the pain, fun out of overclocking with OC Genie

If you thought MSI’s obsession with motherboard implants was over after it unveiled Winki to a nearly nonexistent amount of fanfare, think again. The company has just taken the wraps off its latest mobo addition, the OC Genie. In essence, this is the one-touch overclock button that laptop owners have long enjoyed, but for desktops. Right now, the OC Genie is custom-built for the company’s own P55 motherboard, though the outfit insists that all sorts of mainboards will be supported in due time. If you’re curious about the details, you’ll have to remain that way for now; all we’re told is that activating the module automatically pushes your system to a safe overclocking limit within a second, giving even the newbies in attendance the ability to squeeze more from their current rig. In related news, MSI also added yet another model to its growing Classic laptop series, the 17.3-inch CX700, which gets powered by a Core 2 Duo processor, ATI’s Mobility Radeon HD 4330 GPU and 4GB of RAM.

[Via HotHardware]

AMD’s RS880 integrated graphics chip could make netbooks usable

Tired of hearing that your next favorite netbook / nettop is hamstrung by one of those woefully underpowered GMA 950 graphics chipsets? Eager to see what AMD is going to do about it? If The Inquirer is to be believed, an upcoming integrated chipset should elevate the multimedia prowess of low-end machines, as the RS880 would actually be based around the new Radeon HD 4200 core. In theory, at least, this chip would be around 15 percent faster than similar alternatives out there now, giving future netbooks just enough power to churn through 720p video without st, st, stuttering. Needless to say, the suits are refusing to comment on the matter, but we’re definitely holding out hope for this one.

AMD’s ATI Radeon E4690 brings HD, DirectX 10.1 support to embedded GPU arena

AMD’s newfangled ATI Radeon E4690 may not be the next Crysis killer, but it should do just fine in next-gen arcade and slot machines. All kidding aside (sort of…), this new embedded graphics chip is said to triple the performance of AMD’s prior offerings in the field, bringing with it 512MB of GDDR3 RAM, DirectX 10.1 / OpenGL 3.0 support and hardware acceleration of H.264 and VC-1 high-definition video. The 35mm chip also differentiates itself by integrating directly onto motherboards and taking on many of the tasks that are currently assigned to the CPU, but alas, it doesn’t sound as if we’ll be seeing this in any nettops / netbooks anytime soon.

Giz Explains: GPGPU Computing, and Why It’ll Melt Your Face Off

No, I didn’t stutter: GPGPU—general-purpose computing on graphics processing units—is what’s going to bring hot screaming gaming GPUs to the mainstream, with Windows 7 and Snow Leopard. Finally, everybody’s face melts! Here’s how.

What a Difference a Letter Makes
GPU sounds—and looks—a lot like CPU, but they’re pretty different, and not just ’cause dedicated GPUs like the Radeon HD 4870 can be massive. GPU stands for graphics processing unit, while CPU stands for central processing unit. Spelled out, you can already see the big differences between the two, but it takes some experts from Nvidia and AMD/ATI to get to the heart of what makes them so distinct.

Traditionally, a GPU does basically one thing: speed up the processing of the image data that you end up seeing on your screen. As AMD Stream Computing Director Patricia Harrell told me, they’re essentially chains of special-purpose hardware designed to accelerate each stage of the geometry pipeline, the process of matching image data or a computer model to the pixels on your screen.

GPUs have a pretty long history—you could go all the way back to the Commodore Amiga, if you wanted to—but we’re going to stick to the fairly recent past. That is, the last 10 years, when, as Nvidia’s Sanford Russell puts it, GPUs started adding cores to distribute the workload. See, graphics calculations—the calculations needed to figure out what pixels to display on your screen as you snipe someone’s head off in Team Fortress 2—are particularly suited to being handled in parallel.

An example Nvidia’s Russell gave to think about the difference between a traditional CPU and a GPU is this: If you were looking for a word in a book, and handed the task to a CPU, it would start at page 1 and read it all the way to the end, because it’s a “serial” processor. It would be fast, but would take time because it has to go in order. A GPU, which is a “parallel” processor, “would tear [the book] into a thousand pieces” and read it all at the same time. Even if each individual word is read more slowly, the book may be read in its entirety quicker, because words are read simultaneously.
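To make Russell’s analogy a bit more concrete, here’s a minimal sketch (ours, not Nvidia’s) of the two approaches in CUDA, one of the GPGPU frameworks covered below. The serial function reads a buffer of characters front to back on the CPU; the kernel hands one character to each of thousands of GPU threads and tallies matches all at once. The function names and the “book” contents are invented for the example.

```
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Serial approach: one reader, front to back, in order.
int count_serial(const char *text, int n, char target) {
    int count = 0;
    for (int i = 0; i < n; ++i)
        if (text[i] == target) ++count;
    return count;
}

// Parallel approach: every thread inspects a single character simultaneously.
__global__ void count_parallel(const char *text, int n, char target, int *count) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n && text[i] == target)
        atomicAdd(count, 1);  // thousands of threads add their findings to one tally
}

int main() {
    const int n = 1 << 20;  // ~1 million characters of "book"
    char *h_text = (char *)malloc(n);
    for (int i = 0; i < n; ++i) h_text[i] = "engadget "[i % 9];

    char *d_text;
    int *d_count, h_count = 0;
    cudaMalloc((void **)&d_text, n);
    cudaMalloc((void **)&d_count, sizeof(int));
    cudaMemcpy(d_text, h_text, n, cudaMemcpyHostToDevice);
    cudaMemcpy(d_count, &h_count, sizeof(int), cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover every character.
    count_parallel<<<(n + 255) / 256, 256>>>(d_text, n, 'g', d_count);
    cudaMemcpy(&h_count, d_count, sizeof(int), cudaMemcpyDeviceToHost);

    printf("serial count: %d, parallel count: %d\n",
           count_serial(h_text, n, 'g'), h_count);

    cudaFree(d_text);
    cudaFree(d_count);
    free(h_text);
    return 0;
}
```

A production version wouldn’t hammer a single atomic counter (a per-block reduction is the usual trick), but the shape of the two functions is the point here.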

All those cores in a GPU—800 stream processors in ATI’s Radeon HD 4870—make it really good at performing the same calculation over and over on a whole bunch of data. (Hence a common GPU spec is flops, or floating-point operations per second, measured in current hardware in terms of gigaflops and teraflops.) The general-purpose CPU is better at some stuff, though, as AMD’s Harrell said: general programming, accessing memory randomly, executing steps in order, everyday stuff. It’s true, though, that CPUs are sprouting cores, looking more and more like GPUs in some respects, as retiring Intel Chairman Craig Barrett told me.
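To put a rough number on that: assuming the 4870’s stock 750MHz core clock and one multiply-add (two floating-point operations) per stream processor per cycle, you get 800 × 0.75GHz × 2 ≈ 1.2 teraflops, which is in the neighborhood of the single-precision figure AMD advertises for the card. Treat that as back-of-the-envelope math, not a benchmark.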

Explosions Are Cool, But Where’s the General Part?
Okay, so the thing about parallel processing—using tons of cores to break stuff up and crunch it all at once—is that applications have to be programmed to take advantage of it. It’s not easy, which is why Intel at this point hires more software engineers than hardware ones. So even if the hardware’s there, you still need the software to get there, and it’s a whole different kind of programming.

Which brings us to OpenCL (Open Computing Language) and, to a lesser extent, CUDA. They’re frameworks that make it way easier to use graphics cards for kinds of computing that aren’t related to making zombie guts fly in Left 4 Dead. OpenCL is the “open standard for parallel programming of heterogeneous systems” standardized by the Khronos Group—AMD, Apple, IBM, Intel, Nvidia, Samsung and a bunch of others are involved, so it’s pretty much an industry-wide thing. In semi-English, it’s a cross-platform standard for parallel programming across different kinds of hardware—using both CPU and GPU—that anyone can use for free. CUDA is Nvidia’s own architecture for parallel programming on its graphics cards.
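For a sense of what GPGPU code actually looks like, here’s a small example in CUDA (the same pattern maps onto OpenCL, where the kernel is written in OpenCL C and queued through host calls like clEnqueueNDRangeKernel). It runs SAXPY, a plain number-crunching routine with nothing graphical about it; the sizes and values below are arbitrary, picked just for illustration.

```
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// y = a*x + y across a million floats: one GPU thread per element.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *h_x = (float *)malloc(bytes);
    float *h_y = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_x[i] = 1.0f; h_y[i] = 2.0f; }

    float *d_x, *d_y;
    cudaMalloc((void **)&d_x, bytes);
    cudaMalloc((void **)&d_y, bytes);
    cudaMemcpy(d_x, h_x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, h_y, bytes, cudaMemcpyHostToDevice);

    // The runtime fans this out across however many cores the card has.
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, d_x, d_y);
    cudaMemcpy(h_y, d_y, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %.1f (expect 5.0)\n", h_y[0]);  // 3*1 + 2

    cudaFree(d_x);
    cudaFree(d_y);
    free(h_x);
    free(h_y);
    return 0;
}
```

The payoff comes when that kernel body is something heavier (a video filter, a physics step, a scientific inner loop) run over millions of elements at once.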

OpenCL is a big part of Snow Leopard, and Windows 7 will use some graphics card acceleration too (though we’re really looking forward to DirectX 11). Either way, GPU acceleration is going to be a big part of future OSes.

So Uh, What’s It Going to Do for Me?
Parallel processing is pretty great for scientists. But what about regular people? Does it make their stuff go faster? Not everything, and to start, it’s not going too far from graphics, since that’s still the easiest thing to parallelize. But converting, decoding and creating videos—stuff you’re probably doing more now than you did a couple of years ago—will improve dramatically soon. Say bye-bye to 20-minute renders. Ditto for image editing; there’ll be less waiting for effects to propagate with giant images (Photoshop CS4 already uses GPU acceleration). In gaming, beyond straight-up graphical improvements, physics engines can get more complicated and realistic.

If you’re just Twittering or checking email, no, GPGPU computing is not going to melt your stone-cold face. But anyone with anything cool on their computer is going to feel the melt eventually.

AMD busts out world’s first air-cooled 1GHz GPU

The last time a GPU milestone this significant was passed, it was June of 2007, and we remember it well. We were kicked back, soaking in the rays from Wall Street and firmly believing that nothing could ever go awry — anywhere, to anyone — due to a certain graphics card receiving 1GB of onboard RAM. Fast forward a couple dozen months, and now we’ve got AMD dishing out the planet’s first factory-clocked card to hit the 1GHz mark. Granted, overclockers have been running their cards well above that point for a while now, but hey, at least this bugger comes with a warranty. The device doing the honors is the ATI Radeon HD 4890, and it’s doing it with air cooling alone and just a wee bit of factory overclocking. Take a bow, AMD — today’s turning out to be quite a good one for you.

ATI Radeon HD 4770 GPU review roundup

We like how you’re thinking, AMD, and we don’t say that every day — or ever, really. During a time when even hardcore gamers are having to rethink whether or not that next-gen GPU is a necessity, AMD has pushed out a remarkably potent new graphics card for under a Benjamin, and the whole world has joined in to review it. The ATI Radeon HD 4770, which was outed just over a week ago, has been officially introduced for the low, low price of just $99 (including rebates, which should surface soon). Aside from being the company’s first mainstream desktop GPU manufactured on a 40nm process, this little gem proved to be a real powerhouse when put to the test. In fact, critics at HotHardware exclaimed that this card “offers performance in the same range as cards that were launched at the $299 to $349 price point only a year ago.” The bottom line? It’s “one of the best buys” in its price range, and even with all that belt tightening you’ve been doing, surely you can spare a C-note, yeah?

Read – HotHardware (“Recommended; one of the best buys at its price point”)
Read – XBit Labs (“the best budget graphics accelerator [out there]”)
Read – LegitReviews (“great performance, low power consumption and low noise”)
Read – PCStats (“strikes a balance between performance and price”)
Read – TechSpot (“an outstanding choice in the $100 graphics market”)
Read – NeoSeeker (“a good value”)
Read – PCPerspective (“impressive”)

NVIDIA GTX 275 / ATI Radeon HD 4890 review roundup

Unless you’ve started your weekend early, you have probably realized that both NVIDIA and AMD announced new GPUs this morning. Coincidental timing aside, it sure makes things easy for consumers to eye the respective benchmarks and plan out their next mid-range GPU purchase accordingly. A whole bevy of reviews, tests, graphs and bar charts have hit the web this morning extolling the pros and panning the cons, but without getting too deep into the nitty-gritty, we can sum things up pretty easily. NVIDIA’s GTX 275 showed performance that placed it squarely between the GTX 285 and GTX 260, and in all but a few off-the-wall tests, it outpaced the ATI Radeon HD 4890 (albeit slightly). Granted, the HD 4890 was called the “fastest, single-GPU powered graphics card AMD has ever produced” by HotHardware, though apparently even that wasn’t enough to help it snag the gold across the board. If you’re hungry for more (and you are, trust us), take the rest of the day off and dig in below.

Read – HotHardware GeForce GTX 275 review
Read – HotHardware Radeon HD 4890 review
Read – ExtremeTech GeForce GTX 275 and Radeon HD 4890 review
Read – DailyTech GeForce GTX 275 and Radeon HD 4890 review
Read – X-bit Labs ATI Radeon HD 4890 review
Read – ComputerShopper ATI Radeon HD 4890 review
Read – Guru 3D GeForce GTX 275 review
Read – Guru 3D ATI Radeon HD 4890 review
Read – PCPerspective ATI Radeon HD 4890 review

ATI Radeon RV740 prototype 40nm video card gets reviewed, loved on

The Guru of 3D (not an actual guru, by the way) got its hands on a prototype ATI Radeon RV740 video card, and has been kind enough to put the thing through its paces. This is the company’s first 40nm video card, and while the review should be taken with a grain of salt (it was “done with beta drivers and an early engineering sample board”), the preliminary results are quite positive. The card performs “fairly close to a Radeon HD 4850,” something you don’t often hear about cards retailing for less than a hundred bucks. In fact, the reviewer was so taken by the card’s performance at this price point that he predicts the thing will be responsible for nothing less than “another shift in current mid-range pricing.” But don’t wait until the April release date to see this thing in action — hit the read link for the big review.

[Thanks, Weston]

Engadget’s recession antidote: win an ATI Radeon HD 4650 graphics card

Yup, all ’round the globe, economies are taking a hit, and people are losing jobs, houses and investments (take, for instance, the news that Netgear had an unexpectedly bad fourth quarter, as well as the rumors that both Asustek and MSI will be cutting workers). So we here at Engadget are committed to trying to counteract a little bit of that suffering by handing out a new gadget every day (except for weekends) to lucky readers until we run out of stuff / companies stop sending things. Today we’ve got an ATI Radeon HD 4650 graphics card to offer up. Read the rules below (no skimming — we’re omniscient and can tell when you’ve skimmed) and get commenting!

Special thanks to AMD for providing the gear!

The rules:

  • Leave a comment below. Any comment will do, but if you want to share your proposal for “fixing” the world economy, that’d be sweet too.
  • You may only enter this specific giveaway once. If you enter this giveaway more than once you’ll be automatically disqualified, etc. (Yes, we have robots that thoroughly check to ensure fairness.)
  • If you enter more than once, only activate one comment. This is pretty self-explanatory. Just be careful and you’ll be fine.
  • Contest is open to anyone in the 50 States, 18 or older! Sorry, we don’t make this rule (we hate excluding anyone), so be mad at our lawyers and contest laws if you have to be mad.
  • Winner will be chosen randomly. The winner will receive one ATI Radeon HD 4650 graphics card. Approximate value is $70.
  • Entries can be submitted until Friday, February 13th, 11:59PM ET. Good luck!
  • Full rules can be found here.
