ASUS’ well-rounded G51 gaming laptop reviewed, lauded

NVIDIA’s world-beating GeForce GTX 260M GPU hasn’t presented itself in too many gaming laptops just yet, but somehow or other it found its way into ASUS’ bargain-priced G51VX. Originally showcased back at Computex, this 15.6-inch rig is amongst the cheapest portable gaming machines in its class, packing a 2GHz Core 2 Duo CPU, 4GB of DDR2 RAM, a 1,366 x 768 panel, a 320GB (7200RPM) hard drive and a 1GB GeForce GTX 260M handling the graphical duties. The benchmarking gurus over at HotHardware sat this here machine down for a stern talking-to, and while they could’ve stood for the resolution to be a bit higher and the battery life (1.75 hours) to be a tad longer, the actual performance was top shelf. Put simply, it was deemed a “well balanced machine that’s a winner at this price point,” offering up a far nicer GPU than any of its competitors in the $1,000 range. Tap that read link for a look at the full review — we get the feeling you’ll like what you see.

ASUS’ well-rounded G51 gaming laptop reviewed, lauded originally appeared on Engadget on Mon, 17 Aug 2009 13:39:00 EST.

Sony finally admits NVIDIA chips are borking its laptops, offers free repair

Last summer, while Dell and HP were busy pinpointing and replacing faulty NVIDIA chips in their notebooks, Sony was adamant that its superior products were unaffected by the dreaded faulty GPU packaging. Well, after extensive support forum chatter about its laptops blanking out, distorting images and showing random characters, the Japanese company has finally relented and admitted that “a small percentage” of its VAIO range is indeed afflicted by the issue. That small percentage comes from the FZ, AR, C, LM and LT model lines, and Sony is offering to repair yours for free within four years of the purchase date, irrespective of warranty status. Kudos go to Sony for (eventually) addressing the problem, but if you’re NVIDIA, don’t you have to stop calling this a “small distraction” when it keeps tarnishing your reputation a full year after it emerged?

[Thanks, Jonas]

Sony finally admits NVIDIA chips are borking its laptops, offers free repair originally appeared on Engadget on Tue, 11 Aug 2009 12:09:00 EST.

ATI Stream goes fisticuffs with NVIDIA’s CUDA in epic GPGPU tussle

It’s a given that the GPGPU (or general-purpose computing on graphics processing units) has a long, long way to go before it can make a dent in the mainstream market, but given that ATI was talking up Stream nearly three whole years ago, we’d say a battle royale between it and its biggest rival was definitely in order. As such, the benchmarking gurus over at PC Perspective saw fit to pit ATI’s Stream and NVIDIA’s CUDA technologies against one another in a knock-down-drag-out for the ages, essentially looking to see which system took the most strain off the CPU during video encoding and which produced more visually appealing results. We won’t bother getting into the nitty-gritty (that’s what the read link is for), but we will say this: in testing, ATI’s contraption managed to relieve the most stress from the CPU, though NVIDIA’s alternative seemed to pump out the higher-quality output. In other words, you can’t win for losin’.
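
For those wondering what that CPU-offload business actually looks like from the programmer's seat, here's a rough sketch in NVIDIA's CUDA dialect. It is purely illustrative and emphatically not the code running inside either vendor's transcoder: a trivially parallel step of a video pipeline (converting one 720p frame from RGB to luma) gets shipped off to the GPU, leaving the CPU free to do something else.

```cuda
// Generic illustration of GPU offload: an embarrassingly parallel video step
// (RGB-to-luma conversion for one 720p frame) runs on the GPU, one thread per
// pixel. Not the code behind ATI Stream or NVIDIA's own transcoding tools.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void rgb_to_luma(const unsigned char *rgb, unsigned char *luma, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // ITU-R BT.601 luma weights
        float y = 0.299f * rgb[3 * i] + 0.587f * rgb[3 * i + 1] + 0.114f * rgb[3 * i + 2];
        luma[i] = (unsigned char)(y + 0.5f);
    }
}

int main()
{
    const int n = 1280 * 720;                        // pixels in one 720p frame
    unsigned char *h_rgb = new unsigned char[3 * n](); // dummy black frame
    unsigned char *h_luma = new unsigned char[n];

    unsigned char *d_rgb, *d_luma;
    cudaMalloc(&d_rgb, 3 * n);
    cudaMalloc(&d_luma, n);
    cudaMemcpy(d_rgb, h_rgb, 3 * n, cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    rgb_to_luma<<<blocks, threads>>>(d_rgb, d_luma, n);
    cudaMemcpy(h_luma, d_luma, n, cudaMemcpyDeviceToHost); // copy-back also syncs

    printf("converted %d pixels on the GPU\n", n);
    cudaFree(d_rgb); cudaFree(d_luma);
    delete[] h_rgb; delete[] h_luma;
    return 0;
}
```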

ATI Stream goes fisticuffs with NVIDIA’s CUDA in epic GPGPU tussle originally appeared on Engadget on Mon, 10 Aug 2009 08:57:00 EST.

Samsung’s Ion-infused N510 netbook steeply priced across the pond

€499. $717. Or three easy payments of €171 ($246). That’s the price folks in Europe are being asked to pony up for Samsung’s admittedly svelte 11.6-inch N510 netbook. As one of the largest netbooks in its class, this machine — which can purportedly last for around 6.5 hours under ideal circumstances — also packs NVIDIA’s Ion technology, but a sluggish Atom N280 is still manning the ship. If you’ll recall, we actually heard that this here rig would surface sometime this summer, but it looks as if those orders may end up getting pushed back to September. Anyone care to place a pre-order? Or are you more interested in those “real laptops” going for just north of seven Benjamins?

[Via Blogeee]

Samsung’s Ion-infused N510 netbook steeply priced across the pond originally appeared on Engadget on Wed, 05 Aug 2009 07:02:00 EST.

BFG gifts GTX 285 and GTX 295 cards with self-contained liquid cooling

Believe it or not, this is far from the first time we’ve heard of a liquid-cooled GPU; in fact, NVIDIA was tossing the idea around way back in 2006, when Quake III and Unreal Tournament were still top titles in the FPS realm. BFG Technologies, which currently holds the greatest name for a graphics card company ever, has today introduced its GeForce GTX 285 H2O+ and GeForce GTX 295 H2OC cards, both of which boast ThermoIntelligence Advanced Cooling Solutions (read: self-contained liquid cooling systems). BFG swears that both cards are completely maintenance-free, with the GPUs kept around 30°C cooler under load compared to standard air-cooled models. There’s no mention of pricing just yet, but both should be available any moment at Newegg. Good luck resisting the sudden urge to upgrade.

BFG gifts GTX 285 and GTX 295 cards with self-contained liquid cooling originally appeared on Engadget on Wed, 05 Aug 2009 06:06:00 EST.

AMD’s integrated 785G graphics platform review roundup

It’s mildly hard to believe that AMD’s DirectX 10-compatible 780 Series motherboard GPU was introduced well over a year ago now, but the long-awaited successor has finally landed. This fine morning, a gaggle of hardware sites around the web have taken a look at a number of AMD 785G-equipped mainboards, all of which boast integrated Radeon HD 4200 GPUs, support for AMD’s AM3 processors and a price point that’s downright delectable (most boards are sub-$100). Without getting into too much detail here in this space, the general consensus seems to be that the new platform is definitely appreciated, but hardly revolutionary. It doesn’t dramatically outpace the 780G, nor does it put NVIDIA’s GeForce 9300 to shame. What it can do, however, is provide better-than-average HD playback, making it a prime candidate for basic desktop users and even HTPC builders. For the full gamut of opinions, grab your favorite cup of joe and get to clickin’ below.

Read – HotHardware review
Read – The Tech Report review
Read – Tom’s Hardware review
Read – PC Perspective review
Read – Hardware Zone review
Read – Hexus review

AMD’s integrated 785G graphics platform review roundup originally appeared on Engadget on Tue, 04 Aug 2009 05:29:00 EST.

Personal Supercomputers Promise Teraflops on Your Desk


About a year ago John Stone, a senior research programmer at the University of Illinois, and his colleagues found a way to bypass the long waits for computer time at the National Center for Supercomputing Applications.

Stone’s team got “personal supercomputers,” compact machines with a stack of graphics processors that together pack quite a punch and can be used to run complex simulations.

“Now instead of taking a couple of days and waiting in a queue, we can do the calculations locally,” says Stone. “We can do more and better science.”

Personal supercomputers come in many flavors, built both as clusters of CPUs and as machines packed with graphics processing units (GPUs). But it is GPU computing that is gaining in popularity for its ability to offer researchers easy and quick access to raw computing power. That’s opening up a new market for makers of GPUs, such as Nvidia and AMD, which have traditionally focused on high-end video cards for gamers and graphics pros.

True supercomputers, the rock stars of computing, are capable of quadrillions of calculations per second. But they can be extremely expensive — the fastest supercomputer of 2008, IBM’s RoadRunner, cost $120 million — and access to them is limited. That’s why smaller versions, no bigger than a typical desktop PC, are becoming a hit among researchers who want access to massive processing power along with the convenience of having a machine at their own desk.

“Personal supercomputers that can run off a 110 volt wall circuit allow for a significant amount of performance at a very reasonable price,” says John Fruehe, director of business development for server and workstation at AMD. Companies such as Nvidia and AMD make the graphics chips that personal supercomputer resellers assemble into personalized configurations for customers like Stone.

Demand for these personal supercomputers grew at an average of 20 percent every year between 2003 and 2008, says research firm IDC. Since Nvidia introduced its Tesla personal supercomputer less than a year ago, the company has sold more than 5,000 machines.

“Earlier when people talked about supercomputers, they meant giant Crays and IBMs,” says Jie Wu, research manager for technical computing at IDC. “Now it is more about having smaller clusters.”

Today, most U.S. researchers at universities who need access to a supercomputer have to submit a proposal to the National Science Foundation, which funds a number of supercomputer centers. If the proposal is approved, the researcher gets access to an account for a certain number of CPU hours at one of the major supercomputing centers at the universities of San Diego, Illinois or Pittsburgh, among others.

“It’s like waiting in line at the post office to send a message,” says Stone. “Now you would rather send a text message from your computer than wait in line at the post office to do it. That way it is much more time efficient.”

Personal supercomputers may not be as powerful as the mighty mainframes, but they are still leagues above their desktop cousins. For instance, a four-GPU Tesla personal supercomputer from Nvidia can offer 4 teraflops of parallel supercomputing performance with 960 cores and two Intel Xeon 5500 Series Nehalem processors. That’s just a fraction of the IBM RoadRunner’s 1 petaflop speed, but it’s enough for most researchers to get the job done.

For researchers, this means the ability to run calculations faster than they can with a traditional desktop PC. “Sometimes researchers have to wait for six to eight hours before they can have the results from their tests,” says Sumit Gupta, senior product manager at Nvidia. “Now the wait time for some has come down to about 20 minutes.”

It also means that research projects that typically would never have gotten off the ground because they were deemed too costly and too resource- and time-intensive now get the green light. “The cost of making a mistake is much lower and a lot less intimidating,” says Stone.

The shift away from large supercomputers to smaller versions has also made research more cost effective for organizations. Stone, who works in a group that develops software used by scientists to simulate and visualize biomolecular structures, says his lab has 19 personal supercomputers shared by 30 researchers. “If we had what we wanted, we would run everything locally because it is better,” says Stone. “But the science we do is more powerful than what we can afford.”

The personal supercomputing idea has also gained momentum thanks to the emergence of programming languages designed especially for GPU-based machines. Nvidia has been trying to educate programmers and build support for CUDA, the C-language programming environment created specifically for parallel programming on the company’s GPUs. Meanwhile, AMD has declared its support this year for OpenCL (Open Computing Language), an industry-standard programming language. Nvidia says it also works with developers to support OpenCL.
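
To make the “C language for GPUs” idea concrete, here is a minimal, generic CUDA example: a standard scaled vector addition, not code from Stone’s lab or any vendor tool, showing how a few extensions to ordinary C let thousands of GPU threads each handle one element of the work.

```cuda
// A taste of what "C for the GPU" means in practice: standard C/C++ plus a few
// extensions. __global__ marks a function that runs on the GPU, each thread
// computes one element, and the <<<blocks, threads>>> syntax launches thousands
// of threads at once. OpenCL expresses the same kind of kernel in a
// vendor-neutral dialect, with launches handled through a C API rather than
// new syntax. (Generic SAXPY example for illustration only.)
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;                          // about a million elements
    std::vector<float> x(n, 1.0f), y(n, 2.0f);

    float *d_x, *d_y;
    cudaMalloc(&d_x, n * sizeof(float));
    cudaMalloc(&d_y, n * sizeof(float));
    cudaMemcpy(d_x, x.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, y.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, d_x, d_y);
    cudaMemcpy(y.data(), d_y, n * sizeof(float), cudaMemcpyDeviceToHost);

    printf("y[0] = %.1f (expect 5.0)\n", y[0]);     // 3 * 1 + 2
    cudaFree(d_x);
    cudaFree(d_y);
    return 0;
}
```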

Stone says the rise of programming environments for high-performance machines has certainly made them more popular. And while portable powerhouses can do a lot, there is still a place for the large mainframe supercomputers. “There are still the big tasks for which we need access to the larger supercomputers,” says Stone. “But it doesn’t have to be for everything.”

Photo: John Stone sits next to a personal supercomputer, a quad-core Linux PC with 8GB of memory and three GPUs (one NVIDIA Quadro FX 5800 and two NVIDIA Tesla C1060s), each with 4GB of GPU memory. Photo credit: Kirby Vandivort


ATI’s $1,800 2GB FirePro V8750 GPU introduced and reviewed

Need a quick way to blow 1,800 bones? Looking to single-handedly jump-start this so-called “economy” we keep hearing about? Look no further, friends, as ATI just did you a solid. Just four months after the outfit dished out its 1GB FirePro V7750, the company is now looking to strike it rich once more with the 2GB FirePro V8750. Obviously designed for the workstation crowd, this CAD-destroying GPU is equipped with more GDDR5 memory than our own four-year-old Quake III server, but as HotHardware points out, the clock speed remains exactly the same as the decidedly more affordable V8700. When pushed, this newfangled card did manage to best every other rival on the test bench, but not by a wide margin. What you’re left with is a cutting-edge device that’s priced way out of consideration for most and, frankly, way outside the realm of sensibility. If you just can’t shake the urge to hear more, give that read link a tap for the full review.

Read – ATI FirePro V8750 review
Read – ATI press release

ATI’s $1,800 2GB FirePro V8750 GPU introduced and reviewed originally appeared on Engadget on Tue, 28 Jul 2009 04:29:00 EST.

AMD’s 40nm DirectX 11-based Evergreen GPUs could be ready for bloom by late September

Looks like AMD’s heading off trail with its upcoming 40nm DirectX 11-based Evergreen series processors. The Inquirer’s dug up some details, and while clock speeds are still unknown, the codenames for the lineup include Cypress at the top of the pile, followed by Redwood, then Juniper and Cedar for the mainstream crowd, and finally Hemlock for the lower end. The series could reportedly be ready by late September, which gives a month of breathing room before DX11-supporting Windows 7 hits the scene. Could this give AMD its much-desired lead over NVIDIA? Hard to say, but things should get mighty interesting between now and late October.

AMD’s 40nm DirectX 11-based Evergreen GPUs could be ready for bloom by late September originally appeared on Engadget on Tue, 21 Jul 2009 19:43:00 EST.

Silverlight 3 out of beta, joins forces with your GPU for HD streaming

A day earlier than expected, Microsoft has launched its third edition of Silverlight and its SDK. As Ars Technica notes, some of the bigger improvements on the user side are GPU hardware acceleration and new codec support including H.264, AAC, and MPEG-4. If you’re looking to give it a spin, there’s a Smooth Streaming demo available that, as the name suggests, does a pretty good job of streaming HD video with little stutter, even when skipping around. If you’ve got Firefox 2, Internet Explorer 6, Safari 3 or anything fresher, hit up the read link to get the update.

[Via Ars Technica]

Read – Download Page
Read – Smooth Streaming demo

Silverlight 3 out of beta, joins forces with your GPU for HD streaming originally appeared on Engadget on Thu, 09 Jul 2009 21:42:00 EST.