BioShock Infinite drivers blast forth for NVIDIA GeForce GPU

Should you happen to be playing the widely applauded BioShock Infinite this week on a machine with NVIDIA GeForce graphics power under the hood, you’ll also find game-optimized drivers available for download. These drivers have been sent out to the public along with notes about how they’ll destroy your concept of what looks best graphics-wise. If you thought this game looked great before, you’re in for a real trip.


NVIDIA is known for keeping up with the newest, biggest, baddest games on the market, releasing software for the graphics cards it has out in the wild so that you can have the best experience you could possibly hope for. What we’re seeing from the NVIDIA GeForce family of GPUs here in 2013 is a big drive for NVIDIA-pushed optimization, especially through services such as the GeForce Experience. For this release specifically, NVIDIA has put out a massive amount of information showing the undeniable power of optimization on a variety of systems.

[Chart: GeForce 314.22 WHQL driver performance – GTX 680, BioShock Infinite]

What you’re seeing here is the game being tested at four different resolutions and settings, each of them running on a Rampage IV Extreme motherboard with a Core i7-3960X clocked at 3.3GHz and 8GB of RAM under the hood. It should be mighty clear that you’ll want to update your drivers to GeForce 314.22 WHQL as quickly as possible, even on GPUs not listed here.
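
If you want to run the same comparison on your own rig, the math behind charts like these is simple percentage uplift. Here’s a minimal Python sketch – the frame rates below are made-up placeholders, not NVIDIA’s figures:

# Hypothetical before/after frame rates (FPS) per resolution; swap in
# numbers from your own benchmark runs.
results = {
    "1920x1080": (48.0, 58.5),
    "2560x1600": (31.0, 38.0),
}

for resolution, (before, after) in results.items():
    gain = (after - before) / before * 100
    print(f"{resolution}: {before:.1f} -> {after:.1f} FPS (+{gain:.1f}%)")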

[Chart: GeForce 314.22 WHQL driver performance – GTX 680, Tomb Raider]

NVIDIA has also shown the difference in performance for games such as the new Tomb Raider. You’ll find here that the change is just as impressive, boosting frame rates by as much as 10 frames per second with ease. These two games are part of a larger collection the team has tested and optimized, with this particular driver update hitting the following:

[Chart: GeForce 314.22 WHQL driver performance gains – GTX 680, all tested games]

In addition to the BioShock Infinite and Tomb Raider optimizations, 314.22 WHQL also includes performance improvements for Batman: Arkham City, Battlefield 3, Borderlands 2, Call of Duty: Black Ops II, Civilization V, Sleeping Dogs, Sniper Elite V2, and The Elder Scrolls V: Skyrim.

Good deal! You’ll find this update available to you today as an automatic over-the-air update! You can also go ahead and download the GeForce Experience for your machine to grab the update with the click of a button – make it happen! These drivers can also be downloaded from the GeForce homepage, where all great driver downloads are housed.



NVIDIA Jetson Development Platform hits smart cars with CUDA and Kepler power

If you’ve been following NVIDIA’s news blasts this past week, you know the company has revealed that its next-generation chipsets will work with CUDA-capable GPUs. What’s more, you’ll have a bit of an idea what that means for mobile devices and the computing power they’ll have extremely soon – and you’ll be pumped about that power coming to smart vehicles through NVIDIA’s new developer program. The new developer kit goes by the name NVIDIA Jetson Development Platform, and it’s available to you right this minute!

[Image: NVIDIA Jetson Development Platform]

This new platform is a big ol’ amalgamation of metal and plastic, power and next-generation precision. Developers working on the smart segment of the next generation of everyday road-ready vehicles will use this beast to optimize their ideas for the processing power of NVIDIA’s Tegra processors. Automakers will be able to work with this proof-of-concept machine in a tiny 1-DIN form factor that fits in a car stereo slot.

Jetson Development Platform package:

• Jetson main board
• Tegra VCM with automotive-grade Tegra 3 mobile processor
• Embedded Breakout Board (EBB) with a wide range of connectivity options
• NVIDIA CUDA-capable discrete GPU
• Wi-Fi, Bluetooth module, and GPS antennas
• 64 GB mSATA Drive
• Touchscreen display and cables
• Power supply and cables
• USB cable (mini-USB to USB)
• HDMI to DVI cable

With the 1-DIN model of the Jetson, you’ll have the performance of a beastly NVIDIA Tegra VCM combined with the excellence of a Kepler-class GPU. This GPU supports CUDA as well as OpenCV, so any and all developers creating software for this setup will be able to do so with the following vision-based technologies (a quick code sketch follows the list):

• Pedestrian Detection
• Lane Departure Warnings
• Collision Avoidance
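
To make that concrete, here’s a minimal pedestrian-detection sketch built on OpenCV’s stock HOG people detector. This is a rough illustration of the kind of vision pipeline Jetson targets, not NVIDIA’s reference code – and the video file name is a placeholder:

import cv2

# OpenCV ships a HOG descriptor pre-trained for detecting people.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("dashcam.mp4")  # placeholder input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Returns candidate bounding boxes plus confidence weights.
    rects, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in rects:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("pedestrian detection", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()

On Jetson-class hardware the same pipeline would lean on CUDA-accelerated OpenCV modules to hit real-time frame rates.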

[Image: Jetson main board, bottom view]

This development kit is built not just to make the developer’s job awesome with the processing power of Tegra and Kepler, but to make that job as easy as possible so they can concentrate on what matters most – making their ideas a reality. Jetson is also designed to help automakers overcome three key challenges, each of them allowing for quicker and easier implementation of forward-thinking technologies.

NVIDIA’s Jetson Development Platform does the following:

1) Simplifies and streamlines the development of advanced driver assistance and connected car technologies.

2) Accelerates the transition to each new generation of mobile SoC, enabling automakers to better keep pace with the rapid innovation cycle in consumer electronics.

3) Reduces the number of processors and independent silver boxes needed to develop infotainment, navigation, computer vision and driver assistance capabilities.

Sound pretty good to you? Have a peek at the timeline we’ve laid out below for all the NVIDIA action you can handle from this past week alone! NVIDIA is ramping up not just for GPUs in your most masterful gaming desktop computers, and not just for some of the most powerful mobile processor architectures in the universe for your superphones and tablets, but for next-generation smart vehicles of all kinds – soon and very soon!



NVIDIA Tegra 4 Chimera camera technology hands-on

This past week we’ve had the opportunity to have a peek at one of the many new features involved in the NVIDIA Tegra 4 processor technology family: Chimera computational photography. The NVIDIA Tegra 4 (and Tegra 4i) SoC works with what they’re calling the “world’s first mobile computational photography architecture”, and today what you’ll be seeing is one of the several features NVIDIA will be delivering to smartphones that utilize their processor. This first demonstration involves “Always-on HDR” photography.


What you’re seeing here is a demonstration done by NVIDIA at the official GTC 2013 conference. That’s the GPU Technology Conference, a multi-day event we attended with bells on – have a peek at our GTC 2013 tag portal for everything we got to see, with more coming up in the future! The demonstration shown here is of a technology originally revealed earlier this year during NVIDIA’s keynote presentation at CES 2013 – head back to the original reveal post to see a whole different angle!

Here a high dynamic range scene has been arranged behind a device running the Chimera photography experience with an NVIDIA Tegra 4 (or perhaps 4i) processor inside. While a traditional HDR-capable camera takes two images one after another at different exposures and fuses them together, NVIDIA’s Always-on HDR feature does away with the two negative bits involved in traditional HDR by allowing the following (a sketch of the traditional two-shot approach follows the list):

• Live preview through your camera’s display (on your smartphone, tablet, etc).
• The ability to capture moving objects.
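
To see why those two points matter, here’s a minimal Python/OpenCV sketch of the traditional approach – two bracketed exposures fused after the fact (file names are placeholders):

import cv2

# Two exposures of the same scene, captured one after another (placeholders).
under = cv2.imread("scene_underexposed.jpg")
over = cv2.imread("scene_overexposed.jpg")

# Mertens exposure fusion blends the bracketed shots into one image.
fused = cv2.createMergeMertens().process([under, over])

# The result is float in [0, 1]; scale to 8-bit to save it.
cv2.imwrite("scene_hdr.jpg", (fused * 255).clip(0, 255).astype("uint8"))

Because the two frames are captured sequentially, anything that moves between them ghosts, and there’s no way to preview the fused result live – exactly the two limitations NVIDIA’s single-pass capture removes.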


With traditional HDR, if you’ve got someone running through the scene, you’ll get even more blur than you’d normally get, because you’re effectively taking two photos. With NVIDIA’s method you’re capturing your image 10 times faster than you would be without a Tegra 4 working to help. Because of this, when you’ve got a Tegra 4 processor in your smartphone you’ll be able to use a flash in your HDR photos, use burst mode to capture several HDR shots in quick succession, and capture HDR video, too!


We’re very much looking forward to Tegra 4 rolling out on smart devices soon – until then, we can only dream of the colors! Check out the full NVIDIA mobile experience in our fabulous Tegra hub right this minute!



These Incredibly Realistic Human Face Computer Graphics Will Obviously Be Used for Porn

We’ve seen how impressive NVIDIA’s new Titan GPU can be, but this is kind of nuts: face rendering that is pretty darn close to bridging the uncanny valley. It’s remarkable. And also? This is obviously going to be used for porn.

SlashGear 101: Remote Computing with NVIDIA GRID VCA

This week at NVIDIA’s own GPU Technology Conference 2013, we’ve been introduced to no less than the company’s first end-to-end system: NVIDIA GRID VCA. The VCA part of the name stands for “Visual Computing Appliance”, and it’s part of the greater NVIDIA GRID family we were re-introduced to at CES 2013 earlier this year. This VCA is NVIDIA’s way of addressing those users – and SMBs (small-to-medium businesses) – out there that want a single web-accessible database without a massive rack of servers.


What is the NVIDIA GRID VCA?

The NVIDIA GRID VCA is a Visual Computing Appliance. In its current state, you’ll be working with a massive amount of graphics computing power no matter where you are, accessing this remote system over the web. As NVIDIA CEO Jen-Hsun Huang noted on stage at GTC 2013, “It’s as if you have your own personal PC under your desk” – but you’re in a completely different room.


You’re wireless, you’re in a completely different state – NVIDIA GRID VCA is basically whatever you need it to be. The first iteration of the NVIDIA GRID VCA will be packed as follows:

• 4U high.
• Sized to fit inside your standard server rack (if you wish).
• 2x highest-performance Xeon processors.
• 8x GRID GPUs, each carrying 2x Kepler GPUs.
• Support for 16 virtual machines.

You’ll be able to work with the NVIDIA GRID VCA system from basically any kind of computer, be it a Mac, a PC, a mobile device running Android – ARM or x86-toting machines, anything. With the NVIDIA GRID VCA, your remotely-hosted workspace shows up wherever you need it to. Each device you’ve got simply needs to download and run a single client going by the name “GRID client.” Imagine that.
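
NVIDIA hasn’t published how the GRID client actually talks to the appliance, but conceptually it’s a thin client: frames are rendered remotely and streamed to whatever screen you’re holding. Here’s a toy Python sketch of that idea – hypothetical host and port, JPEG frames over TCP, emphatically not NVIDIA’s real protocol:

import socket
import struct

import cv2
import numpy as np

HOST, PORT = "grid-vca.example.com", 9000  # hypothetical appliance address

with socket.create_connection((HOST, PORT)) as conn:
    while True:
        # Each frame arrives as a 4-byte big-endian length, then JPEG bytes.
        header = conn.recv(4, socket.MSG_WAITALL)
        if len(header) < 4:
            break
        (size,) = struct.unpack(">I", header)
        data = bytearray()
        while len(data) < size:
            chunk = conn.recv(size - len(data))
            if not chunk:
                break
            data.extend(chunk)
        frame = cv2.imdecode(np.frombuffer(bytes(data), np.uint8), cv2.IMREAD_COLOR)
        cv2.imshow("remote workspace", frame)
        if cv2.waitKey(1) == 27:  # Esc quits
            break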

If you’ve got a company using NVIDIA’s GRID, you’ll have access to mega-powerful computing on whatever machine you’ve got connected to it. One of the use-cases spoken about at GTC 2013 was some advanced video editing on-the-go.

Use Case 1: AutoCAD 3D and Remote Video Editing

On stage with NVIDIA CEO Jen-Hsun Huang was James Fox, CEO of the group Dawnrunner. Dawnrunner is a film and video production company (based in San Francisco, if you’d like to know) whose workers use Adobe and Autodesk software. As Fox notes, “Earth-shattering is what gets talked about in the office.”


Fox and his compatriots use their own GRID configuration to process video, head out to a remote spot to show a customer, and change the video on the spot if the customer so wishes. Processing video at the monster sizes Dawnrunner works with still needs relatively large computing power – “Hollywood big,” we could call it – and NVIDIA’s GRID makes it happen inside the GRID VCA.

With the processing going on inside the VCA and shown in a remote workstation environment (basically a real-time window into the GRID), you could potentially do real-time, Hollywood-movie-sized video editing from your Android phone. That one scenario captures the power of this new ecosystem.

Use Case 2: Hollywood Rendering with Octane Render

Of course no big claim with the word “Hollywood” in it is complete without some big-name movie examples to go with it. At GTC 2013, NVIDIA CEO Jen-Hsun Huang brought both Josh Trank and Jules Urbach on stage. The former is the director of the upcoming 2015 reboot of The Fantastic Four (yes, that Fantastic Four), and the latter is the founder and CEO of the company known as Otoy.

Both men spoke of the power of GPUs. Trank spoke first about how people like him – movie directors – use CGI from the very beginning of a film’s creation, building pre-visualizations to bid a project out to studios and secure funding before there is any cash to be had. Urbach, meanwhile, spoke of how CGI like this can be rendered 40-100 times faster on GPUs than on CPUs – and with that speed you’ve got a lot less energy spent and far fewer hours used for a final product.


With that, Urbach showed off Otoy’s Octane Render (not brand new as of today, but made ultra-powerful with NVIDIA GRID backing it up). The system exists on your computer as a tiny app that connects you to a remote workstation – that’s where NVIDIA’s GRID comes in – and lets you work with massive amounts of power wherever you go.


Octane Render allows “hundreds or thousands” of GPUs in the cloud to be put to work on rendering. Shown on stage was a pre-visualization of a scene from the original Transformers movies (which Otoy helped create), streamed in real time over the web from Los Angeles to the conference’s home city: San Jose.

What they showed, it was made clear, is that the power of GPUs in this context cannot be denied. With 112 GPUs working at once, a high-powered, Hollywood-big scene was rendered in about a second where in the past it would have taken several hours. And here, once again, it can all be controlled remotely.
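
The arithmetic roughly checks out, too. Take Urbach’s 40-100x per-GPU figure at face value, spread the job across 112 GPUs, and assume near-perfect scaling (which real renderers only approximate):

# Back-of-envelope check on the on-stage claim; every input is an assumption.
cpu_render_hours = 3       # "several hours" on a CPU
gpu_speedup = 40           # low end of the quoted 40-100x per GPU
gpu_count = 112            # GPUs used in the GTC demo

cpu_seconds = cpu_render_hours * 3600
gpu_seconds = cpu_seconds / (gpu_speedup * gpu_count)
print(f"{cpu_seconds} s on CPU -> ~{gpu_seconds:.1f} s on {gpu_count} GPUs")
# 10800 s -> ~2.4 s; at the 100x end it dips under a second.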

Cost

There are two main configurations of NVIDIA’s GRID VCA at the moment. The first works with 8 GPUs, 32GB of GPU memory, 192GB of system memory, a 16-thread CPU, and up to 8 concurrent users. The second is as follows – and this is the beast:

• GPU: 16 GPUs
• GPU memory: 64GB
• System memory: 384GB
• CPU: 32 threads
• Users: up to 16 concurrent

If you’re aiming for the big beast of a model, you’re going to be paying $39,900 USD with a $4,800-a-year software license. If you’re all about the smaller of the two, you’ll be paying $24,900 USD with a $2,400-a-year software license.
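
Priced per seat, the bigger box is actually the better deal. Here’s a quick sketch using the numbers above, counting one concurrent user as a seat:

# First-year cost per concurrent user for each GRID VCA configuration.
configs = {
    "8-GPU VCA": {"hardware": 24_900, "license": 2_400, "seats": 8},
    "16-GPU VCA": {"hardware": 39_900, "license": 4_800, "seats": 16},
}

for name, c in configs.items():
    per_seat = (c["hardware"] + c["license"]) / c["seats"]
    print(f"{name}: ${per_seat:,.2f} per seat, year one")
# 8-GPU VCA: $3,412.50 – 16-GPU VCA: $2,793.75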


Sound like the pocket-change salvation you’ve been waiting for? Let us know if your SMB will be busting out with the NVIDIA GRID VCA immediately, if not sooner – and be sure to let us know how it goes, too!



NVIDIA Open To Licensing Its Technology To “Vertically Integrated” Companies

NVIDIA CEO Jen-Hsun Huang talked to industry analysts today about various growth avenues, and he mentioned that NVIDIA is “open to licensing” its technology to companies that are heavily vertically integrated. Of course, two names pop to mind immediately: Apple and Samsung – and it wouldn’t be very hard to see this as a message to those players, who both have their own processors. “We do it all the time,” says NVIDIA’s CEO – even if most people think of NVIDIA as a company that fundamentally “sells chips” to its customers.

NVIDIA Volta Next-Gen GPU Architecture Provides Huge Bandwidth Boost

Earlier today we saw NVIDIA present Volta, its next-generation GPU architecture, designed to increase performance by addressing one of the “eternal” problems linked to GPUs more than any other processor type: memory bandwidth. Volta is going to use RAM layered on top of the GPU silicon to reduce energy consumption and latency during memory accesses. With this, NVIDIA says it will be able to push 1 terabyte of data per second through the chip – roughly 5X the GeForce Titan, which tops out at 192GB/s.
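
To put that bandwidth claim in perspective, here’s a quick sanity check – the 2GB working set below is an arbitrary example, not an NVIDIA figure:

# Compare the claimed Volta bandwidth to the Titan figure cited above.
volta_gbps = 1000   # 1 TB/s, as claimed
titan_gbps = 192    # GeForce Titan, per the figure above
print(f"Speedup: {volta_gbps / titan_gbps:.1f}x")  # ~5.2x

# Time to sweep a hypothetical 2GB working set once per frame:
working_set_gb = 2
for name, gbps in [("Titan", titan_gbps), ("Volta", volta_gbps)]:
    print(f"{name}: {working_set_gb / gbps * 1000:.1f} ms per sweep")
# Titan: ~10.4 ms (most of a 60fps frame budget) – Volta: ~2.0 ms.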

Digital Storm Hailstorm II gaming PC brings torrential TITAN downpour

This week, as we roll through NVIDIA’s GPU Technology Conference and hear of the latest innovations in graphics processing prowess, we’ve heard a thunder strike – the Digital Storm Hailstorm II, a massive monster of a gaming PC. This beast comes in four distinct levels of excellence, ranging from a single GeForce GTX 680 all the way up to three – count them – three NVIDIA GeForce GTX TITAN GPUs for face-blasting graphics processing excellence. This set of builds borders on absolutely insane, as the home gaming universe ramps up to a place where you’d have to be no less than tattooed with dedication to own the most powerful set of specifications – here you’ll go wild!


With the Hailstorm II you’ll have space for four radiators, four GPUs, and two CPUs. That’s a massive amount of space on its own – and once you consider how it’ll all be blasting forth with the components Digital Storm quotes here as out-of-box builds, you’ll find your fingers sweating. The Hailstorm II also marks the first appearance of the Corsair Obsidian Series 900D, a monstrous black tower with a big window on the side so you can view this futuristic wallet-crushing collection for yourself.


Inside you’ve got a liquid cooling system with three front intake fans and a lovely large rear exhaust fan to keep the air running through. If you’d like, this build allows you to add an absurd 15 fans in total – so much freaking airflow you’ll have to wear a jacket.


Up front you’ll find a lovely brushed-aluminum front panel that opens up to reveal a vast number of expansion slots – ten in all – with room for up to nine hard drives or SSDs including three hot-swappable mounts, four 5.25-inch optical drive bays, and more! You’ll have two USB 3.0 ports for super-quick transfers, four USB 2.0 ports for all your peripherals, and, just in case you’re an over-the-top expander, the ability to work with two power supplies on the back.


If that weren’t enough, each unit goes through a 72-hour stress test at Digital Storm, including industry-standard testing of the hardware and software as well as a proprietary testing process designed to detect any and all components that show the potential to fail in the future – you’ll be set!

[Image: Hailstorm II system configurations and pricing]

The system builds you’ll be working with are as shown above, each of the prices reflective of the beastly innards they contain. You’ll find that each of these systems uses fabulous Intel CPU power, with a Core i7 across the board, as well as NVIDIA GPUs. As noted, this is one of the first systems to work with the NVIDIA GeForce GTX TITAN, and you’ll be able to knock it up to 3x SLI GeForce GTX TITANs at 6GB apiece – intense!



NVIDIA updates its mobile roadmap: Logan and Parker, mobile SoCs packing Kepler and Maxwell GPUs

Thought the new Tegra 4i was the bee’s knees when we saw it last month? Well, NVIDIA has given us a bit more info on the next steps in the Tegra roadmap: Logan and Parker. It turns out that these next two mobile platforms will both utilize NVIDIA’s CUDA technology, with Logan packing a Kepler GPU and Parker running a Project Denver 64-bit ARM CPU alongside a next-gen Maxwell GPU. Logan arrives early next year, while Parker won’t be in devices until sometime in 2015.


Graphics Processors Push Computer Vision Ahead

At GTC 2013, NVIDIA demonstrated its version of image search, powered by its massively parallel processors. The goal of computer vision is to be able to recognize and understand what things are – not merely “see” them. This is a very intricate problem, since computers are fast but fairly dumb out of the box. They have no notion of “concepts”, unlike humans, who can be shown one object, like a hat, and are then able to recognize new hats as they see them. That’s because humans can acquire a notion of what a “hat” is.
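
One simple way machines fake that kind of “concept” is to learn from labeled examples and judge new images by similarity. Here’s a toy nearest-neighbor sketch – the feature vectors are made up, since real systems extract them from pixels, which is exactly where massively parallel GPUs earn their keep:

import numpy as np

# Toy feature vectors for labeled training images (entirely made up).
train_features = np.array([
    [0.9, 0.1, 0.2],   # hat
    [0.8, 0.2, 0.1],   # hat
    [0.1, 0.9, 0.7],   # shoe
    [0.2, 0.8, 0.9],   # shoe
])
train_labels = ["hat", "hat", "shoe", "shoe"]

def classify(query):
    """Label a new image by its nearest labeled neighbor."""
    distances = np.linalg.norm(train_features - query, axis=1)
    return train_labels[int(np.argmin(distances))]

print(classify(np.array([0.85, 0.15, 0.15])))  # -> "hat"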
