NVIDIA announces Quadro K5000 for Mac Pro, brings 4K support, 2x performance over Quadro 4000

NVIDIA’s Kepler-powered Quadro K5000 GPU will be making its way to Apple’s Mac Pro systems, the company announced today at IBC. The professional graphics card made its debut earlier this summer and is slated to ship beginning in October for $2,249. Timing for the Quadro K5000 for Mac isn’t quite so firm, with NVIDIA simply stating that it’ll ship “later this year,” though pricing is expected to be in line with the previously announced flavor. The next-gen GPU is said to offer up to twice the performance of the Quadro 4000, while also delivering 4K display support, quad-display capability through two dual-link DVI and two DisplayPort 1.2 connectors, and 4GB of graphics memory. Furthermore, each Mac Pro will be able to support up to two separate cards, should you need the extra power.

Adobe Premiere Pro CS6 now fully supports Retina MacBook Pro: both HiDPI and GPU compute

Adobe’s video editing application is already a lovely thing on the Retina MacBook Pro, but only in terms of its raw performance on that Core i7 CPU — not visually. Until today’s 6.0.2 update, the software hadn’t been able to make use of HiDPI itself, nor had it been able to exploit the performance-boosting potential of GPU compute on the laptop’s NVIDIA GTX 650M graphics card. If you’re lucky enough to own this combo of hardware and software, Adobe’s official blog suggests that you go ahead and check for the update or apply it manually following the instructions there (it’s actually within Bridge that you should check for the update, with other Adobe titles closed). We’re about to apply it ourselves and will report back on its impact.

Update on the update: As expected, video thumbnails look sumptuous in the absence of pixelation, making this a worthy revision. That said, encoding a short timeline was still faster with the Mercury Playback Engine set to Software mode than to GPU compute: a 2:30 clip took 2:02 to encode with OpenCL and 2:00 with CUDA, but just 1:42 in Software mode. People who do multi-cam editing or need to render complex effects in real time may still see a benefit — please let us know if you do!
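
For perspective, here’s the arithmetic behind those timings expressed as encode throughput relative to real time — a minimal host-side sketch (plain C++, also valid under nvcc) that derives ratios only from the figures above, with nothing new measured:

```
#include <cstdio>

int main() {
    // Timings from the test above, converted to seconds.
    const double clip     = 150.0;  // 2:30 source timeline
    const double opencl   = 122.0;  // 2:02 encode with OpenCL
    const double cuda     = 120.0;  // 2:00 encode with CUDA
    const double software = 102.0;  // 1:42 encode in Software mode

    // Throughput relative to real time: higher is faster.
    printf("OpenCL:   %.2fx real time\n", clip / opencl);    // ~1.23x
    printf("CUDA:     %.2fx real time\n", clip / cuda);      // ~1.25x
    printf("Software: %.2fx real time\n", clip / software);  // ~1.47x
    return 0;
}
```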

Update: Just had word from NVIDIA that may explain what’s happening with our encoding times. We’re told it’s only if we enable “Maximum Render Quality” that GPU compute will shine through in terms of performance, because enabling max quality in software mode would slow it down. So far we’ve only tried with default settings, so clearly there’s room here for more experimentation.

Your next Samsung could learn to love your smile

Heterogeneous System Architecture might not be a phrase that trips off your tongue right now, but if AMD, TI and – in a quiet addition – Samsung have their way, you could be taking advantage of it to interact with the computers of tomorrow. AMD VP Phil Rogers, president of the HSA Foundation, filled his IFA keynote with futurology, waxing lyrical about how PCs, tablets and other gadgets will react not only to touch and gestures, but to body language and eye contact, among other things. Check out the concept demo after the cut.

Heterogeneous System Architecture is a catch-all for scalar CPU processing and parallel GPU processing, along with high-bandwidth memory access, all aimed at boosting app performance while minimizing power consumption. In short, it’s what AMD has been pushing for with its APUs (and what NVIDIA – though not involved with HSA – has pursued with its CUDA cores), with the HSA Foundation envisioning smartphones, desktops, laptops, consumer entertainment, cloud computing and enterprise hardware all taking advantage of such a system.
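
HSA itself specifies a vendor-neutral runtime, but the division of labor it formalizes is easy to illustrate with today’s tools. Here’s a minimal CUDA sketch — not HSA code, just the same serial-CPU/parallel-GPU split, complete with the explicit copies that HSA’s shared memory model aims to make unnecessary:

```
#include <cstdio>
#include <cuda_runtime.h>

// Parallel part: the GPU runs one lightweight thread per element.
__global__ void scale_add(const float *x, const float *y, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = 2.0f * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Serial part: the CPU prepares the data...
    float *hx = new float[n], *hy = new float[n], *hout = new float[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // ...and today must copy it into the GPU's separate memory. HSA's
    // shared, high-bandwidth memory model aims to eliminate these copies.
    float *dx, *dy, *dout;
    cudaMalloc(&dx, bytes); cudaMalloc(&dy, bytes); cudaMalloc(&dout, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    scale_add<<<(n + 255) / 256, 256>>>(dx, dy, dout, n);
    cudaMemcpy(hout, dout, bytes, cudaMemcpyDeviceToHost);

    printf("out[0] = %f\n", hout[0]); // 2*1 + 2 = 4.0
    cudaFree(dx); cudaFree(dy); cudaFree(dout);
    delete[] hx; delete[] hy; delete[] hout;
    return 0;
}
```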

While there were six new public additions to the Foundation, Samsung Electronics’ presence came as a surprise. The HSA was initially formed by AMD, ARM, Imagination Technologies, MediaTek, and Texas Instruments, but today’s presentation saw Samsung added to the slides and referred to as a founding member.

Samsung is no stranger to heterogeneous computing tech. Back in October 2011, the company created the Hybrid Memory Cube Consortium (along with Micron) to push a new, ultra-dense memory system that – running at 15x the speed of DDR3 and requiring 70 percent less energy per bit – would be capable of keeping up with multicore technologies. Each Cube would be a 3D stack of silicon: a logic layer at the base, with memory layers densely stacked on top.

As for the concept, Rogers described a system that could not only learn from a user’s routine, but also react to whether they were smiling, to where on the display they were looking, and to more mundane cues such as voice and gesture. Such a system could offer up a search result and then, if the user was seen to be smiling at it, learn from that reaction to better shape future suggestions.

Exactly when we can expect such technology to show up on our desktop (or, indeed, in laptops, phones and tablets) isn’t clear. However, Samsung has already been experimenting with devices that react to the user in basic ways; the Galaxy S III, for instance, uses eye-recognition to keep the screen active even if it’s not being touched, while its camera app includes face- and smile-recognition.


AMD bares all with Southern Islands GPU shots

AMD’s been showing off its forthcoming wares at Hot Chips, and has taken the rare step of releasing detailed die shots for its Southern Islands GPUs. According to AnandTech, the company is normally shy about releasing such details, but this time it’s aiming to beat rival NVIDIA to the punch by several months. Still, it’s letting us humble members of the public peer inside — see the gallery below — just as long as you pinkie-swear that you won’t be selling the secrets to the boys in Santa Clara, okay?

Engadget Primed: The crazy science of GPU compute

Primed goes in-depth on the technobabble you hear on Engadget every day — we dig deep into each topic’s history and how it benefits our lives. You can follow the series here. Looking to suggest a piece of technology for us to break down? Drop us a line at primed *at* engadget *dawt* com.

As you’re hopefully aware, this is a gadget blog. As a result, we’re innately biased towards stuff that’s new and preferably fandangled. More cores, more pixels, more lenses; just give it here and make us happy. The risk of this type of technological greed is that we don’t make full use of what we already have, and nothing illustrates that better than the Graphics Processing Unit. Whether it sits in our desktops, laptops, tablets or phones, the GPU is cruelly limited by its history — its long-established reputation as a dumb, muscular component that takes instructions from the main processor and translates them into pixels for us to gawp at.

But what if the GPUs in our devices had some buried genius — abilities that, if only we could tap into them, would yield hyper-realistic experiences and better all-round performance from affordable hardware? Well, the thing is, this hidden potential actually exists. We’ve been covering it since at least 2008 and, even though it still hasn’t generated enough fuss to become truly famous, the semiconductor industry is making more noise about it now than ever before.

So please, join us after the break as we endeavor to explain why the trend known as “GPU compute,” aka “general purpose GPU (GPGPU),” or simply “not patronizing your graphics processor,” is still exciting despite having let us down in the past. We’ll try to show why it’s worth learning a few related concepts and terms to help provide a glossary for future coverage; and why, on the whole, your graphics chip is less Hasselhoff and more Hoffman than you may have imagined.

NVIDIA GeForce GTX 660 Ti review roundup: impressive performance for around $300

NVIDIA GeForce GTX 660 Ti review roundup impressive performance for around $300

No one’s saying that $300 is “cheap,” but compared to the GTX 670 and GTX 680 before it, the newly announced GeForce GTX 660 Ti is definitely in a more attainable category. The usual suspects have hashed out their reviews today, with the general consensus being one of satisfaction. A gamechanger in the space it’s not, but this Kepler-based GPU managed to go toe-to-toe with similarly priced Radeon GPUs while being relatively power efficient in the process. That said, AnandTech was quick to point out that unlike Kepler launches of the past, the 660 Ti wasn’t able to simply blow away the competition: it found the card to perform around 10 to 15 percent faster than AMD’s 7870, while the 7950 delivered roughly the same performance as the card on today’s test bench. HotHardware reckoned that NVIDIA does indeed have another winner on its hands, noting that it’d be tough to do better right now for three Benjamins. Per usual, there’s plenty of further reading available in the links below for those seriously considering the upgrade.

NVIDIA GeForce GTX 660 Ti brings Kepler power with a cut on cost

It’s time for some 28nm Kepler action from NVIDIA with the GeForce GTX 660 Ti – bringing the fabulous next-generation graphics of the Kepler GPU architecture in at a surprisingly affordable price. With the $299 NVIDIA GeForce GTX 660 Ti you’ll be rolling out with a base clock speed of 915MHz and no less than two dual-link DVI outs, HDMI, and a DisplayPort 1.2 to boot. With 1,344 CUDA cores in the mix, you’re getting essentially the same package as the GTX 670 with a major chop in cash cost.

With the GTX 660 Ti you’ve got what NVIDIA tells us will be a 150W TDP, dropping to 134W under “typical use”. NVIDIA also notes that the card’s 2GB video buffer sits on a 192-bit GDDR5 interface to keep it all humming nicely – just below the GTX 670’s 256-bit GDDR5 configuration. Configurations will vary between brand releases, of course, but most will be popping up with a bit of a bonus – Borderlands 2!
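
A quick back-of-envelope on what that narrower bus means for peak memory bandwidth — a sketch assuming the 6.0Gbps effective GDDR5 data rate NVIDIA quotes for this Kepler family (that rate isn’t stated above, so treat it as an assumption):

```
#include <cstdio>

int main() {
    // Peak bandwidth = (bus width in bytes) x (effective data rate per pin).
    const double gbps = 6.0;                     // assumed GDDR5 data rate
    const double gtx660ti = (192 / 8.0) * gbps;  // 192-bit bus
    const double gtx670   = (256 / 8.0) * gbps;  // 256-bit bus
    printf("GTX 660 Ti: %.0f GB/s\n", gtx660ti); // 144 GB/s
    printf("GTX 670:    %.0f GB/s\n", gtx670);   // 192 GB/s
    return 0;
}
```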

Have a peek at MAINGEAR’s release and see a couple of custom-built limited-edition PCs made with the 660 Ti, with Borderlands 2 included as well. You’ll find that NVIDIA is making an effort to go up directly against the AMD Radeon HD 7870, also on the market now, with both of them coming in at $299 standard.

NVIDIA claims that the GeForce GTX 660 Ti delivers anywhere between a 10 and 30 percent performance gain over the AMD Radeon HD 7870, and it has taken on the slightly more expensive $350 AMD Radeon HD 7950 as well. NVIDIA let it be known that in tests against the 7950, its GTX 660 Ti came up with higher average frame rates – up to 20 percent higher in some cases – in games such as Max Payne 3.

And of course, each new generation of NVIDIA’s GPU line keeps getting better and better! Stay tuned for more GTX 660 Ti action throughout the day, and stick around as we put a custom-built MAINGEAR review unit through its paces soon as well. The NVIDIA GeForce GTX 660 Ti begins shipping today from all your favorite outlets, too, so get pumped!


SIGGRAPH 2012 wrap-up

Considering that SIGGRAPH focuses on visual content creation and display, there was no shortage of interesting elements to gawk at on the show floor. From motion capture demos to 3D objects printed for Hollywood productions, there was plenty of entertainment at the Los Angeles Convention Center this year. Major product introductions included ARM’s Mali-T604 GPU and a handful of high-end graphics cards from AMD, but the highlight of the show was the Emerging Technologies wing, which played host to a variety of concept demonstrations, gathering top researchers from institutions like the University of Electro-Communications in Tokyo and MIT. The exhibition has come to a close for the year, but you can catch up with the show floor action in the gallery below, along with all of our hands-on coverage, direct from LA.

Samsung’s New Exynos Mobile Chip: USB 3.0 and 1080p Video at 60fps

Samsung has released the specs of its new Exynos 5 Dual mobile chip, and it looks set to be an absolute powerhouse. An ARM-based 1.7GHz mobile CPU, it seems set to be a massive leap forward from the Exynos 4 Quad which currently powers the Galaxy S III.

Samsung Exynos 5 Dual white paper confirms new high marks for mobile graphics, memory performance

Our SIGGRAPH demo of the ARM Mali-T604 GPU gave a brief preview of Samsung’s upcoming Exynos 5 Dual CPU, but now all the details of the company’s next great processor are ready for us to view. Beyond that GPU — which supports resolutions up to WQXGA (2,560 x 1,600), perfect for the 11.8-inch P10 mentioned in court filings — the white paper uncovered by Android Authority also mentions support for features like Wi-Fi Display, high-bandwidth LPDDR3 RAM running at up to 800MHz with a bandwidth of 12.8GB/s, USB 3.0 and SATA III. It also claims the horsepower to decode 1080p video at 60fps in pretty much any codec, handle stereoscopic 3D, and support graphics APIs like OpenGL ES 3.0 and OpenCL 1.1. All of this comes courtesy of a dual-core 1.7GHz ARM Cortex-A15 CPU built on the company’s 32nm High-K Metal Gate process, plus Panel Self Refresh technology that avoids redrawing unchanged pixels to reduce power consumption. There are plenty of other buzzwords and benchmarks floating around in the white paper; check out the PDF linked below, or just sit back and see what tablets and phones arrive with one of these — or the competition from Qualcomm’s S4 and NVIDIA’s Tegra — inside starting later this year.
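
That 12.8GB/s figure falls straight out of the clock speed, as a quick sketch shows — note that the 64-bit total interface width (two 32-bit LPDDR3 channels) is an assumption on our part, chosen because it’s consistent with the quoted numbers:

```
#include <cstdio>

int main() {
    const double clock_hz  = 800e6;         // 800MHz, per the white paper
    const double transfers = clock_hz * 2;  // DDR: two transfers per clock
    const double bus_bytes = 64 / 8.0;      // assumed 64-bit total interface
    // 1.6 GT/s x 8 bytes = 12.8GB/s peak bandwidth
    printf("%.1f GB/s\n", transfers * bus_bytes / 1e9);
    return 0;
}
```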
