NVIDIA unveils Tesla K40 accelerator, teams with IBM on GPU-based supercomputing

NVIDIA’s Tesla GPUs are already mainstays in supercomputers that need specialized processing power, and they’re becoming even more important now that the company is launching its first Tesla built for large-scale projects. The new K40 accelerator has only 192 more processing cores than its K20X predecessor (2,880 in total, the same count as the GeForce GTX 780 Ti), but it crunches analytics and science workloads up to 40 percent faster. A jump to 12GB of RAM, meanwhile, helps it handle data sets twice as large as before. The K40 is already available in servers from NVIDIA’s partners, and the University of Texas at Austin plans to use it in Maverick, a remote visualization supercomputer that should be up and running by January.
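
A quick sanity check on those numbers: the core counts come straight from the announcement, but the conclusion drawn below (that clock-speed gains must carry most of the speedup) is our own back-of-envelope arithmetic, not NVIDIA's.

```python
# Back-of-envelope check on where the K40's claimed speedup comes from.
# Core counts are from the announcement; everything else here is our
# own arithmetic.

K40_CORES = 2880
K20X_CORES = K40_CORES - 192  # 2,688 cores on the K20X

def core_ratio(new_cores: int, old_cores: int) -> float:
    """Speedup attributable to the core-count increase alone."""
    return new_cores / old_cores

ratio = core_ratio(K40_CORES, K20X_CORES)
print(f"Core-count increase alone: {ratio:.3f}x (~{(ratio - 1) * 100:.0f}%)")
# The extra cores account for only about 7 percent, so the quoted
# "up to 40 percent" figure must lean heavily on higher clocks.
```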

As part of the K40 rollout, NVIDIA has also revealed a partnership with IBM that should bring GPU-boosted supercomputing to enterprise-grade data centers. The two plan on bringing Tesla GPU support to IBM’s Power8-based servers, including both apps and development tools. It’s not clear when the deal will bear fruit, but don’t be surprised if it turbocharges a corporate mainframe near you.

Source: NVIDIA

AMD unveils Radeon HD 8900M laptop graphics, ships them in MSI’s GX70 (eyes-on)

Did you think AMD showed all its mobile GPU cards when it launched the Radeon HD 8000M series in January? Think again. The company has just unveiled the 8900M series, an adaptation of its Graphics Core Next architecture for desktop replacement-class gaming laptops. To call it a big jump would be an understatement: compared to the 8800M, the flagship 8970M chip doubles the stream processor count to 1,280, hikes the clock speed from 725MHz to 850MHz and bumps the memory speed slightly to 1.2GHz. The net effect is roughly 12 to 54 percent faster game performance than NVIDIA’s current mobile speed champion, the GTX 680M, and up to four times the general computing prowess in OpenCL. The 8970M is more than up to the task of driving up to 4K on one screen, and it can handle up to six screens if there are enough ports.
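
Those spec figures imply a sizable jump in raw throughput. As a rough sketch, using the stream processor counts and clocks above, and assuming GCN's usual two floating-point operations per stream processor per clock (one fused multiply-add), a figure AMD did not state here:

```python
# Rough peak-throughput comparison using the numbers in the announcement.
# The two-FLOPs-per-SP-per-clock assumption (one fused multiply-add) is
# the conventional figure for GCN, not something quoted by AMD.

FLOPS_PER_SP_PER_CLOCK = 2  # an FMA counts as two floating-point ops

def peak_gflops(stream_processors: int, clock_mhz: float) -> float:
    """Theoretical single-precision peak throughput in GFLOPS."""
    return stream_processors * clock_mhz * FLOPS_PER_SP_PER_CLOCK / 1000.0

hd_8970m = peak_gflops(1280, 850)
hd_8800m = peak_gflops(640, 725)
print(f"8970M: {hd_8970m:.0f} GFLOPS, 8800M: {hd_8800m:.0f} GFLOPS "
      f"({hd_8970m / hd_8800m:.2f}x)")
```

On these assumptions the 8970M lands around 2.2 TFLOPS, more than double its predecessor, which squares with the doubled stream processors plus the clock bump.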

We’ll see how long AMD’s performance reign lasts, although we won’t have to wait to try the 8970M — MSI is launching the GPU inside the new GX70 laptop you see above. We got a brief, hands-off tease of the 17.3-inch GX60 successor at the 8900M’s unveiling, and it’s clear the graphics are the centerpiece. We saw it driving Crysis 3 very smoothly on one external display while powering 2D on two other screens, albeit through a bulky set of Mini DisplayPort, HDMI and VGA cables. Otherwise, the GX70 is superficially similar to its ancestor with that chunky profile, an unnamed Richland-based AMD A10 processor, Killer networking and a SteelSeries keyboard. More than anything, price should be the clincher: MSI is pricing the GX70 with the new Radeon at $1,100, which amounts to quite the bargain for anyone whose laptop has to double as a primary gaming PC.

Source: AMD, MSI

NVIDIA rolls out Apex and PhysX developer support for the PlayStation 4

Just because the PlayStation 4 centers on an AMD-based platform doesn’t mean that NVIDIA is out of the picture. The graphics firm is updating the software development kits for both its Apex dynamics framework and PhysX physics modeling system to support Sony’s new console, even if they won’t have the full hardware acceleration that comes with NVIDIA’s own chipsets. The updates will mostly take some of the guesswork out of creating realistic-looking games: theoretically allowing a larger number of collisions, destructible objects and subtler elements like cloth and hair modeling. Most of us won’t see the fruits of the updated SDKs until at least this holiday, but programmers looking for more plausible PS4 game worlds can hit the source links.
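
For a sense of the kind of per-frame work a physics SDK handles, here is a deliberately simplified toy: a semi-implicit Euler step for point masses under gravity with a crude floor bounce. It is our own illustration of the general technique, not NVIDIA's API or solver.

```python
# Toy illustration of a per-frame physics tick: semi-implicit Euler
# integration for (x, y) point masses, with a crude floor collision.
# This is a simplification for illustration, not PhysX code.

GRAVITY = -9.81  # m/s^2, acting along the y axis

def step(positions, velocities, dt):
    """Advance each particle one timestep; bounce off the floor at y = 0."""
    new_p, new_v = [], []
    for (x, y), (vx, vy) in zip(positions, velocities):
        vy += GRAVITY * dt                # integrate acceleration first...
        x, y = x + vx * dt, y + vy * dt   # ...then position (semi-implicit)
        if y < 0.0:                       # crude collision response
            y, vy = 0.0, -vy * 0.5        # reflect with 50% energy loss
        new_p.append((x, y))
        new_v.append((vx, vy))
    return new_p, new_v

# Drop one particle from 10m with a little sideways motion, at 60Hz.
p, v = [(0.0, 10.0)], [(1.0, 0.0)]
for _ in range(100):
    p, v = step(p, v, dt=1 / 60)
```

A real engine runs steps like this for thousands of bodies per frame, which is exactly the workload that benefits from GPU acceleration when it is available.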

Source: NVIDIA (1), (2)

Lighty paints real lighting Photoshop-style, minus the overdone lens flare (video)

It’s not hard to find smart lightbulbs that bow to our every whim. Creating a well-coordinated light scheme can be difficult without tweaking elements one by one, however, which makes the Japan Science and Technology Agency’s Lighty project that much more elegant. The approach lets would-be interior coordinators paint degrees of light and shadow through an app, much as they would create a magnum opus in Photoshop or a similar image editor. Its robotic lighting system sorts out the rest: a GPU-assisted computer steers a grid of gimbal-mounted lightbulbs until their positions and intensity match the effect produced on the screen. While Lighty currently exists just as a scale model, the developers plan to work with life-sized rooms, and potentially large halls, from now on. We’re all for the newfound creativity in our lighting, as long as we can’t mess it up with a Gaussian Blur filter.

Via: DigInfo TV

Source: JST Igarashi

ARM claims new GPU has desktop-class brains, requests OpenCL certificate to prove it

It’s been a while since ARM announced its next generation of Mali GPUs, the T604 and T658, but in the semiconductor business silence should never be confused with inactivity. Behind the scenes, the chip designers have been working with Khronos, that great keeper of open standards, to ensure the new graphics processors are fully compliant with OpenCL, so that developers can use their silicon for general compute tasks (AR, photo manipulation, video rendering and so on) as well as for producing pretty visuals.

Importantly, ARM isn’t settling for the Embedded Profile version of OpenCL that has been “relaxed” for mobile devices, but is instead aiming for the same Full Profile OpenCL 1.1 found in compliant laptop and desktop GPUs. A tall order for a low-power processor, perhaps, but we have a strong feeling that Khronos’s certification is just a formality at this point, and that today’s news is a harbinger of real, commercial T6xx-powered devices coming before the end of the year. Even the souped-up Mali 400 in the European Galaxy S III can only reign for so long.
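
The Full versus Embedded Profile split is visible at runtime: per the OpenCL specification, devices report either "FULL_PROFILE" or "EMBEDDED_PROFILE" through the CL_DEVICE_PROFILE query. The helper below is our own sketch of how an app might gate desktop-class kernels on that string; it doesn't talk to a real driver.

```python
# Sketch of gating on the OpenCL device profile. The two profile strings
# are the ones the OpenCL spec defines for CL_DEVICE_PROFILE; the helper
# itself is illustrative and does not query real hardware.

def supports_full_profile(device_profile: str) -> bool:
    """True if the device reports the desktop-grade OpenCL profile."""
    return device_profile.strip() == "FULL_PROFILE"

# An Embedded Profile device may relax floating-point precision and make
# 64-bit integer support optional, so compute apps commonly branch here.
for reported in ("FULL_PROFILE", "EMBEDDED_PROFILE"):
    print(reported, "->", supports_full_profile(reported))
```

If the T6xx parts pass Full Profile 1.1 certification, they would clear this check just as a laptop or desktop GPU does, which is precisely ARM's point.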

ARM claims new GPU has desktop-class brains, requests OpenCL certificate to prove it originally appeared on Engadget on Thu, 02 Aug 2012 14:56:00 EDT. Please see our terms for use of feeds.
