MSI’s AMD-powered U210 up for pre-order, still not ‘official’

Who needs press releases? You can snap up an MSI U210 pre-order right this second on Amazon, so why bother waiting for MSI to actually confirm the thing for a Stateside release? Morality. That’s why. Kids these days think they can just drop $430 on any old Athlon Neo MV-40-powered (the same chip at the heart of HP’s dv2) 12-inch XGA ultraportable with 2GB of RAM, a 250GB HDD and 802.11n, and not have to pay the consequences. Well, we’re not standing for it. That read link right below? Not an implied approval of these illicit activities.

[Via Mark’s Technology News]


MSI X-Slim X610 leaked, reviewed by Russians

If the gang at 3D News are to be believed (and why not?), this familiar-looking notebook isn’t MSI’s X-Slim X600 at all, but the not-yet-announced X-Slim X610. And if a leaked ultraportable isn’t enough excitement for you, wait’ll we tell you that they actually got their hands on one of these beauts and gave it the full-on review treatment. As you’d expect from a machine that shares its chassis, specs, ATI Mobility Radeon HD 4330 graphics, 250GB hard drive, 4GB of RAM, and all but one digit of its name with the original, there is not too much to report. The major difference is that the X610 forgoes Intel’s 1.4GHz SU3500 CPU in favor of a 1.6GHz AMD Athlon MV-40, which results in some slower benchmarks, but not enough that you’d readily notice in everyday use. And then there is battery life — the new guy clocks in at slightly less than two hours, or around 20 percent less than the X600. Same machine, same specs, poorer performance — not really a step in the right direction, MSI. Perhaps you can at least give consumers a break on the price?

[Via SlashGear]


Windows Media Center is set to thrill at CEDIA 2009 next month

Everyone likes to try and predict the future, and with the Custom Electronic Design & Installation (CEDIA) show only a month away, the crew at Engadget HD threw all of their crazy ideas out there for your reading pleasure. For the most part the predictions revolve around Windows Media Center and how it will integrate with other products like the Zune HD, digital cable and HD satellite services, but there are some other fun things thrown in. We really believe that this is going to be the year that Redmond brings everything together, so if you’re the type who doesn’t think it’ll ever happen, then click through to find out why we think you’re wrong. Either way, you can expect we’ll be on the scene in Atlanta to check out what’s new firsthand.


ATI Stream goes fisticuffs with NVIDIA’s CUDA in epic GPGPU tussle

It’s a given that GPGPU (general-purpose computing on graphics processing units) has a long, long way to go before it can make a dent in the mainstream market, but given that ATI was talking up Stream nearly three whole years ago, we’d say a battle royale between it and its biggest rival was definitely in order. As such, the benchmarking gurus over at PC Perspective saw fit to pit ATI’s Stream and NVIDIA’s CUDA technologies against one another in a knock-down-drag-out for the ages, essentially looking to see which system took the most strain away from the CPU during video encoding and which produced more visually appealing results. We won’t bother getting into the nitty-gritty (that’s what the read link is for), but we will say this: in testing, ATI’s contraption managed to relieve the most stress from the CPU, though NVIDIA’s alternative seemed to pump out the highest-quality materials. In other words, you can’t win for losin’.
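
For the uninitiated, the offloading being measured here follows a simple pattern: the CPU copies a chunk of frame data over to the GPU, a small function (a “kernel”) chews on every pixel in parallel, and the CPU is largely free to do other work in the meantime. Below is a minimal, hypothetical CUDA sketch of that pattern, using a grayscale conversion as a stand-in for real encoder math; the kernel and buffer names are our own illustration, not anything from PC Perspective’s test suite.

#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// Runs on the GPU: one thread converts one RGBA pixel to grayscale.
__global__ void toGray(const unsigned char *rgba, unsigned char *gray, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        const unsigned char *p = rgba + 4 * i;
        // Standard Rec. 601 luma weights.
        gray[i] = (unsigned char)(0.299f * p[0] + 0.587f * p[1] + 0.114f * p[2]);
    }
}

int main(void) {
    const int n = 1920 * 1080;  // one 1080p frame's worth of pixels
    unsigned char *hostIn = (unsigned char *)calloc(4 * n, 1);
    unsigned char *hostOut = (unsigned char *)malloc(n);
    unsigned char *devIn, *devOut;

    cudaMalloc((void **)&devIn, 4 * n);
    cudaMalloc((void **)&devOut, n);
    cudaMemcpy(devIn, hostIn, 4 * n, cudaMemcpyHostToDevice);  // hand the frame to the GPU

    toGray<<<(n + 255) / 256, 256>>>(devIn, devOut, n);        // 256 threads per block
    cudaMemcpy(hostOut, devOut, n, cudaMemcpyDeviceToHost);    // fetch the result

    printf("first pixel -> %u\n", hostOut[0]);
    cudaFree(devIn); cudaFree(devOut); free(hostIn); free(hostOut);
    return 0;
}

What PC Perspective’s testing actually probes is everything around that kernel: how much work each vendor’s encoder still leaves on the CPU, and how good the GPU-side math looks once it lands back in the output file.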


AMD’s integrated 785G graphics platform review roundup

It’s mildly hard to believe that AMD’s DirectX 10-compatible 780 Series motherboard GPU was introduced well over a year ago now, but the long-awaited successor has finally landed. This fine morning, a gaggle of hardware sites around the web have taken a look at a number of AMD 785G-equipped mainboards, all of which boast integrated Radeon HD 4200 GPUs, support for AMD’s AM3 processors and a price point that’s downright delectable (most boards are sub-$100). Without getting into too much detail here in this space, the general consensus seems to be that the new platform is definitely appreciated, but hardly revolutionary. It doesn’t demolish the marks set by the 780G, nor does it exactly put NVIDIA’s GeForce 9300 to shame. What it can do, however, is provide better-than-average HD playback, making it a prime candidate for basic desktop users and even HTPC builders. For the full gamut of opinions, grab your favorite cup of joe and get to clickin’ below.

Read – HotHardware review
Read – The Tech Report review
Read – Tom’s Hardware review
Read – PC Perspective review
Read – Hardware Zone review
Read – Hexus review


Dell adds high-powered ATI FirePro M7740 graphics to the Precision M6400

We’ve always lusted after Dell’s high-zoot Precision M6400 mobile workstation, and now we’ve got yet another reason to save all these nickels and dimes in the sock drawer: the company’s adding AMD’s new ATI FirePro M7740 graphics processor to the mix. The new chip is due to be announced tomorrow at SIGGRAPH 2009, and like the rest of the FirePro line, it’ll offer 1GB of GDDR5 frame buffer memory, 30-bit DisplayPort and dual-link DVI output, and tons of CAD application certifications. We’re looking for hard specs and prices now, and we’ll let you know as soon as we get ’em.


Personal Supercomputers Promise Teraflops on Your Desk

About a year ago John Stone, a senior research programmer at the University of Illinois, and his colleagues found a way to bypass the long waits for computer time at the National Center for Supercomputing Applications.

Stone’s team got “personal supercomputers,” compact machines with a stack of graphics processors that together pack quite a punch and can be used to run complex simulations.

“Now instead of taking a couple of days and waiting in a queue, we can do the calculations locally,” says Stone. “We can do more and better science.”

Personal supercomputers come in many flavors, built from clusters of CPUs, graphics processing units (GPUs), or a mix of both. But it is GPU computing that is gaining in popularity for its ability to offer researchers easy and quick access to raw computing power. That’s opening up a new market for makers of GPUs, such as Nvidia and AMD, which have traditionally focused on high-end video cards for gamers and graphics pros.

True supercomputers, the rock stars of computing, are capable of trillions of calculations per second. But they can be extremely expensive — the fastest supercomputer of 2008, IBM’s RoadRunner, cost $120 million — and access to them is limited. That’s why smaller versions, no bigger than a typical desktop PC, are becoming a hit among researchers who want access to massive processing power along with the convenience of having a machine at their own desk.

“Personal supercomputers that can run off a 110 volt wall circuit allow for a significant amount of performance at a very reasonable price,” says John Fruehe, director of business development for server and workstation at AMD. Companies such as Nvidia and AMD make the graphics chips that personal supercomputer resellers assemble into personalized configurations for customers like Stone.

Demand for these personal supercomputers grew at an average of 20 percent every year between 2003 and 2008, says research firm IDC. Since Nvidia introduced its Tesla personal supercomputer less than a year ago, the company has sold more than 5,000 machines.

“Earlier when people talked about supercomputers, they meant giant Crays and IBMs,” says Jie Wu, research manager for technical computing at IDC. “Now it is more about having smaller clusters.”

Today, most U.S. researchers at universities who need access to a supercomputer have to submit a proposal to the National Science Foundation, which funds a number of supercomputer centers. If the proposal is approved, the researcher gets an account good for a certain number of CPU hours at one of the major supercomputing centers, such as those in San Diego, Illinois or Pittsburgh.

“It’s like waiting in line at the post office to send a message,” says Stone. “Now you would rather send a text message from your computer than wait in line at the post office to do it. That way it is much more time efficient.”

Personal supercomputers may not be as powerful as the mighty mainframes, but they are still leagues above their desktop cousins. For instance, a four-GPU Tesla personal supercomputer from Nvidia can offer 4 teraflops of parallel supercomputing performance with 960 cores and two Intel Xeon 5500 Series Nehalem processors. That’s just a fraction of the IBM RoadRunner’s 1 petaflop speed, but it’s enough for most researchers to get the job done.
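
To put those figures in perspective (our arithmetic, not Wired’s): each Tesla C1060 card packs 240 processing cores, and a teraflop is one thousandth of a petaflop, so

\[ 4 \text{ GPUs} \times 240 \text{ cores per GPU} = 960 \text{ cores}, \qquad \frac{4 \times 10^{12}\ \mathrm{FLOPS}}{1 \times 10^{15}\ \mathrm{FLOPS}} = 0.004 \]

which puts the desk-side box at roughly 1/250th of RoadRunner’s peak throughput.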

For researchers, this means the ability to run calculations faster than they can with a traditional desktop PC. “Sometimes researchers have to wait for six to eight hours before they can have the results from their tests,” says Sumit Gupta, senior product manager at Nvidia. “Now the wait time for some has come down to about 20 minutes.”

It also means that research projects that typically would never have gotten off the ground because they were deemed too costly and too resource- and time-intensive now get the green light. “The cost of making a mistake is much lower and a lot less intimidating,” says Stone.

The shift away from large supercomputers to smaller versions has also made research more cost effective for organizations. Stone, who works in a group that develops software used by scientists to simulate and visualize biomolecular structures, says his lab has 19 personal supercomputers shared by 30 researchers. “If we had what we wanted, we would run everything locally because it is better,” says Stone. “But the science we do is more powerful than what we can afford.”

The personal supercomputing idea has also gained momentum thanks to the emergence of programming languages designed especially for GPU-based machines. Nvidia has been trying to educate programmers and build support for CUDA, the C language programming environment created specifically for parallel programming on the company’s GPUs. Meanwhile, AMD has declared its support this year for OpenCL (Open Computing Language), an industry-standard programming language. Nvidia says it also works with developers to support OpenCL.
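
To give a sense of what “a C language programming environment” means in practice, here is a minimal, hypothetical CUDA program of our own devising (not from Nvidia’s materials). It is ordinary C, plus a __global__ qualifier marking a function that runs on the GPU and a triple-angle-bracket launch syntax that fans that function out across thousands of threads at once.

#include <cuda_runtime.h>
#include <stdio.h>

// __global__ marks a function that is compiled for, and launched on, the GPU.
__global__ void add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element
    if (i < n) c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1 << 20;  // a million elements
    float *a, *b, *c;
    // Managed memory is visible to both the CPU and the GPU.
    cudaMallocManaged((void **)&a, n * sizeof(float));
    cudaMallocManaged((void **)&b, n * sizeof(float));
    cudaMallocManaged((void **)&c, n * sizeof(float));
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    // <<<blocks, threads>>> is the CUDA extension: one line of C-like code
    // launches the kernel across all n elements in parallel.
    add<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();  // wait for the GPU to finish

    printf("c[0] = %.1f\n", c[0]);  // prints 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

OpenCL programs follow the same kernel-plus-launch shape, but through a vendor-neutral API rather than Nvidia-specific syntax.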

Stone says the rise of programming environments for high-performance machines has certainly made them more popular. And while the portable powerhouses can do a lot, there is still a place for the large supercomputers. “There are still the big tasks for which we need access to the larger supercomputers,” says Stone. “But it doesn’t have to be for everything.”

Photo: John Stone sits next to a personal supercomputer, a quad-core Linux PC with 8GB of memory and three GPUs (one NVIDIA Quadro FX 5800 and two NVIDIA Tesla C1060), each with 4GB of GPU memory. Credit: Kirby Vandivort


ATI’s $1,800 2GB FirePro V8750 GPU introduced and reviewed

Need a quick way to blow 1,800 bones? Looking to single-handedly jump-start this so-called “economy” we keep hearing about? Look no further, friends, as ATI just did you a solid. Just four months after the outfit dished out its 1GB FirePro V7750, the company is now looking to strike it rich once more with the 2GB FirePro V8750. Obviously designed for the workstation crowd, this CAD-destroying GPU is equipped with more GDDR5 memory than our own four-year-old Quake III server, but as HotHardware points out, the clock speed remains exactly the same as that of the entirely more affordable V8700. When pushed, this newfangled card did manage to best every other rival on the test bench, but not by a wide margin. What you’re left with is a cutting-edge device that’s priced way out of consideration for most, and frankly, way outside the realm of sensibility. If you just can’t shake the urge to hear more, give that read link a tap for the full review.

Read – ATI FirePro V8750 review
Read – ATI press release


AMD parties hard after shipping 500 millionth x86 processor

Get on down with your bad self, Mr. Spaceman — AMD just shipped its 500 millionth x86 processor! Shortly after the company celebrated 40 years of hanging tough and doing its best to overtake Intel, the outfit has now revealed that half a billion x86 CPUs have left its facilities over those two score years. We pinged Intel in order to find out just how that number stacked up, but all we were told is that the 500 million milestone was celebrated a while back down in Santa Clara. We’ll just chalk the vagueness up to Intel not wanting to spoil an otherwise raucous Silicon Valley shindig. Classy.

[Via HotHardware]


AMD’s 40nm DirectX 11-based Evergreen GPUs could be ready for bloom by late September

Looks like AMD’s heading off trail with its upcoming 40nm DirectX 11-based Evergreen series processors. The Inquirer’s dug up some details, and while clock speeds are still unknown, the codenames for the lineup include Cypress at the top of the pile, followed by Redwood, then Juniper and Cedar for the mainstream crowd, and finally Hemlock for the lower end. The series could reportedly be ready by late September, which gives a month of breathing room before DX11-supporting Windows 7 hits the scene. Could this give AMD its much-desired lead over NVIDIA? Hard to say, but things should get mighty interesting between now and late October.
