NVIDIA Tegra 4 Chimera camera technology hands-on

This past week we’ve had the opportunity to peek at one of the many new features in the NVIDIA Tegra 4 processor technology family: Chimera computational photography. The NVIDIA Tegra 4 (and Tegra 4i) SoC carries what the company calls the “world’s first mobile computational photography architecture”, and today you’ll be seeing one of the several features NVIDIA will be delivering to smartphones that use its processor. This first demonstration involves “Always-on HDR” photography.


What you’re seeing here is a demonstration done by NVIDIA at the official GTC 2013 conference. That’s the GPU Technology Conference, a multi-day event we attended with bells on – have a peek at our GTC 2013 tag portal now for everything we got to see, with more coming up in the future! The demonstration shown here is of a technology originally revealed earlier this year at NVIDIA’s keynote presentation at CES 2013 – head back to the original reveal post to see a whole different angle!

Here a high dynamic range scene has been arranged behind a device running the Chimera photography experience with an NVIDIA Tegra 4 (or perhaps 4i) processor inside. While a traditional HDR-capable camera takes two images one after another at different exposures and fuses them together, NVIDIA’s Always-on HDR feature removes the two big drawbacks of traditional HDR by allowing the following:

• Live preview through your camera’s display (on your smartphone, tablet, etc).
• The ability to capture moving objects.


With traditional HDR, if you’ve got someone running through the scene, you’ll get even more of a blur than you’d normally get because you’re effectively taking two photos. With NVIDIA’s method you’re capturing your image 10 times faster than you’d be capturing it without a Tegra 4 working to help. Because of this, when you’ve got a Tegra 4 processor in your smartphone, you’ll be able to use a flash in your HDR photos, use burst mode to capture several HDR shots in quick succession, and you’ll be able to capture HDR video, too!
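NVIDIA hasn’t published Chimera’s internals, but the traditional two-shot approach it improves on is easy to sketch. Below is a minimal exposure-fusion example using OpenCV’s Mertens merger – a hypothetical illustration with placeholder file names, not NVIDIA’s pipeline:

```python
# Classic two-exposure HDR fusion (the traditional approach described
# above, NOT NVIDIA's Chimera pipeline) via OpenCV's Mertens fusion.
import cv2
import numpy as np

# Two frames of the same scene shot back-to-back at different
# exposures; the file names are placeholders.
under = cv2.imread("underexposed.jpg")
over = cv2.imread("overexposed.jpg")

merge = cv2.createMergeMertens()
fused = merge.process([under, over])  # float32 image in [0, 1]

cv2.imwrite("fused_hdr.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))
```

Because the two frames are captured one after the other, anything that moves between them ghosts in the fused output – exactly the weakness NVIDIA’s faster capture is built to sidestep.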


We’re very much looking forward to rolling out with the Tegra 4 on smart devices soon – until then, we can only dream of the colors! Check out the full NVIDIA mobile experience in our fabulous Tegra hub right this minute!


NVIDIA Tegra 4 Chimera camera technology hands-on is written by Chris Burns & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.

SlashGear 101: Remote Computing with NVIDIA GRID VCA

This week at NVIDIA’s own GPU Technology Conference 2013, we’ve been introduced to no less than the company’s first end-to-end system: NVIDIA GRID VCA. The VCA part of the name stands for “Visual Computing Appliance”, and it’s part of the greater NVIDIA GRID family we were re-introduced to at CES 2013 earlier this year. This VCA is NVIDIA’s way of addressing those users – and SMBs (small-to-medium businesses) – out there that want a single web-accessible database without a massive rack of servers.


What is the NVIDIA GRID VCA?

The NVIDIA GRID VCA is a Visual Computing Appliance. In its current state, you’ll be working with a massive amount of graphics computing power no matter where you are, accessing this remote system over the web. As NVIDIA CEO Jen-Hsun Huang noted on-stage at GTC 2013, “It’s as if you have your own personal PC under your desk” – but you’re in a completely different room.


You’re wireless, you’re in a completely different state – NVIDIA GRID VCA is basically whatever you need it to be. The first iteration of the NVIDIA GRID VCA will be packed as follows:

• 4U high.
• Sized to fit inside your standard server rack (if you wish).
• 2x highest-performance Xeon processors.
• 8x GRID boards, each carrying 2 Kepler GPUs (16 GPUs in total).
• Support for 16 virtual machines.

You’ll be able to work with the NVIDIA GRID VCA system from basically any kind of computer, be it a Mac, a PC, an Android mobile device, an ARM or x86-toting machine – anything. With the NVIDIA GRID VCA, your remotely-hosted workspace shows up wherever you need it to. Each device you’ve got simply needs to download and run a single client going by the name “GRID client.” Imagine that.

If you’ve got a company using NVIDIA’s GRID, you’ll have access to mega-powerful computing on whatever machine you’ve got connected to it. One of the use-cases spoken about at GTC 2013 was some advanced video editing on-the-go.

Use Case 1: AutoCAD 3D and Remote Video Editing

James Fox, CEO of the film and video production company Dawnrunner (based in San Francisco, if you’d like to know), joined NVIDIA’s CEO Jen-Hsun Huang on-stage. Workers at Dawnrunner use Adobe software and Autodesk tools. As Fox notes, “Earth-shattering is what gets talked about in the office.”


Fox and his compatriots use their own GRID configuration to process video, head out to a remote spot to show a customer, and change the video on the spot if the customer so wishes. Processing video at the monster sizes Dawnrunner works with still demands relatively large computing power – “Hollywood big”, we could call it – and NVIDIA’s GRID makes it happen inside the NVIDIA GRID VCA.

With the processing going on inside the VCA and shown on a remote workstation environment (basically a real-time window into the GRID), you could potentially drive Hollywood movie-sized video editing from your Android phone. That one scenario captures the power of this new ecosystem.

Use Case 2: Hollywood Rendering with Octane Render

Of course no big claim with the word “Hollywood” in it is complete without some big-name movie examples to go with it. At GTC 2013, NVIDIA’s CEO Jen-Hsun Huang brought both Josh Trank and Jules Urbach onstage. The former is the director of the upcoming 2015 reboot of The Fantastic Four (yes, that Fantastic Four), and the latter is the founder and CEO of the company known as Otoy.

Both men spoke of the power of GPUs. Trank went first, describing how directors like him use CGI from the very beginning of a film’s creation, building pre-visualizations to bid a project out to studios and secure funding before there is any cash to be had. Urbach, meanwhile, explained that CGI like this can be rendered 40-100 times faster on GPUs than on CPUs – and with that speed comes far less energy spent and far fewer hours on the way to a final product.


With that, Urbach showed Otoy’s Octane Render (not brand new as of today, but made ultra-powerful with NVIDIA GRID backing it up). It exists on your computer as a tiny app that connects you to a remote workstation – that’s where NVIDIA’s GRID comes in – letting you work with massive amounts of power wherever you go.


Octane Render lets renderers tap “hundreds or thousands” of GPUs in the cloud. Shown on-stage was a pre-visualization of a scene from the original Transformers movies (which Otoy helped create), streamed in real time over the web from Los Angeles to the location of the conference: San Jose.

The point, it was made clear, is that the power of GPUs in this context cannot be denied. With 112 GPUs working at once, a high-powered Hollywood-big scene was rendered in a second where in the past it would have taken several hours. And here, once again, it can all be controlled remotely.
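Otoy hasn’t detailed its scheduler, but the reason rendering scales across 112 GPUs is that path-traced pixels are independent of one another: split the frame into tiles and let each worker grind through its share. Here’s a conceptual sketch of that tile fan-out – the per-tile “rendering” is a stand-in so the example runs anywhere:

```python
# Conceptual tile-based render fan-out (not Otoy's actual scheduler).
# Each tile renders independently, which is why throughput scales
# almost linearly with the number of GPUs thrown at the frame.
from multiprocessing import Pool

WIDTH, HEIGHT, TILE = 1920, 1080, 128

def render_tile(origin):
    x0, y0 = origin
    w = min(TILE, WIDTH - x0)
    h = min(TILE, HEIGHT - y0)
    # Stand-in for real path tracing on one GPU: a flat gray tile.
    return (x0, y0, [[0.5] * w for _ in range(h)])

if __name__ == "__main__":
    tiles = [(x, y) for y in range(0, HEIGHT, TILE)
                    for x in range(0, WIDTH, TILE)]
    with Pool(processes=8) as pool:  # one process stands in per "GPU"
        results = pool.map(render_tile, tiles)
    print(f"rendered {len(results)} tiles")
```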

Cost

There are two main configurations at the moment for NVIDIA’s GRID VCA, the first working with 8 GPUs, 32GB of GPU memory, 192GB of system memory, a 16-thread CPU, and support for up to 8 concurrent users. The second is as follows – and this is the beast:

• GPUs: 16.
• GPU memory: 64GB.
• System memory: 384GB.
• CPU: 32 threads.
• Users: up to 16 concurrent.

If you’re aiming for the big beast of a model, you’re going to be paying $39,900 USD with a $4,800-a-year software license. If you’re all about the smaller of the two, you’ll be paying $24,900 USD with a $2,400-a-year software license.
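For anyone weighing the two configurations, the per-seat arithmetic is simple – a quick sketch using the prices above, with a three-year horizon as our own assumption:

```python
# Back-of-envelope per-seat cost for the two GRID VCA configurations.
# Prices are from the article; the three-year horizon is an assumption.
def per_seat(hardware, license_per_year, seats, years=3):
    return (hardware + license_per_year * years) / seats

print(per_seat(24900, 2400, 8))    # small config: $4,012.50 per seat
print(per_seat(39900, 4800, 16))   # big config:   $3,393.75 per seat
```

By that math the big beast actually works out cheaper per concurrent user, assuming you can keep all 16 seats busy.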


Sound like the pocket-change salvation you’ve been waiting for? Let us know if your SMB will be busting out with the NVIDIA GRID VCA immediately if not soon, and be sure to let us know how it goes, too!


SlashGear 101: Remote Computing with NVIDIA GRID VCA is written by Chris Burns & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.

NVIDIA Open To Licensing Its Technology To “Vertically Integrated” Companies


NVIDIA’s CEO Jen-Hsun Huang talked to industry analysts today about various growth avenues, and he mentioned that NVIDIA is “open to licensing” its technology to companies that are heavily vertically integrated. Of course, two names pop up immediately: Apple and Samsung, and it wouldn’t be hard to see this as a message to those players, who both have their own processors. “We do it all the time,” says NVIDIA’s CEO – even if most people think of NVIDIA as a company that fundamentally “sells chips” to its customers.

By Ubergizmo. Related articles: ZTE Quantum For Sprint Leaked, HTC E1 Is Official Name Of HTC 603e.

NVIDIA Volta Next-Gen GPU Architecture Provides Huge Bandwidth Boost


Earlier today, we saw NVIDIA present Volta, its next-generation GPU architecture, designed to increase performance by addressing one of the “eternal” problems linked to GPUs more than any other processor type: memory bandwidth. Volta is going to use RAM that is layered on top of the GPU silicon to reduce energy consumption and latency during memory accesses. With this, NVIDIA says that it will be able to push 1 terabyte of data per second through the chip – about 5X the GeForce Titan, which tops out at 192GB/s.

By Ubergizmo. Related articles: Valve Claims “No Involvement” With Xi3’s Piston, EA Apologizes For SimCity Snafu.

NVIDIA’s Powerful Logan and Parker Tegra Chips Presented At GTC


NVIDIA has revealed a little more of its roadmap today. Building on the initial roadmap shown to us at Mobile World Congress 2011, NVIDIA has now revealed what Logan is about and has added a new chip, codenamed Parker, to the roadmap. Before we talk about Parker: NVIDIA has reminded us that Logan uses a Kepler GPU, which means this is the first time, by NVIDIA’s own account, that its mobile chips draw directly on development from the PC side.

By Ubergizmo. Related articles: Tegra 4 Benchmarks: NVIDIA Jumps Into Hyperspace, Slacker Music Application Rebranded.

NVIDIA Tegra “Parker” blasts forth aside mini ARM computer “Kayla”

This week the folks at NVIDIA have been revealing bits and pieces of their GPU roadmap, with Tegra and GeForce GPU action left and right, moving forward with their newest superhero-codenamed mobile SoC: “Parker.” This SoC comes after the still-codenamed “Logan” and will, if the naming scheme holds true, be Tegra 6 down the road. Along with this reveal came word of a codenamed system called “Kayla” – a processing beast that, when it’s ready for action, will be extra-tiny and extra-powerful beyond anything we’re capable of today.

parker

Parker is the newest in a line of code-named Tegra processors, coming after Wayne (Tegra 4) and Logan (Tegra 5, more than likely), bringing on the innovations of past generations and outdoing them with the following firsts:

• First with NVIDIA’s custom “Denver” CPU.
• First 64-bit ARM processor, coupled with NVIDIA’s next-gen Maxwell GPU.
• First to use FinFET transistors.

According to NVIDIA CEO Jen-Hsun Huang, this is only the beginning. Huang noted that “in five years’ time, we’ll increase Tegra by 100 times, though Moore’s Law would suggest an eight-fold increase.” With Logan we’ll see the first mobile processor on the planet to work with CUDA. This processor will also bring Kepler GPU power and OpenGL 4.3 – and it’ll be in production by early 2014.
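Huang’s comparison is easy to check: doubling every 20 months (one common reading of Moore’s Law) compounds to roughly 8x over five years, while 100x over the same span implies a doubling every nine months or so:

```python
# Sanity-checking Huang's "100x vs eight-fold" comparison.
import math

months = 5 * 12
moores_law_gain = 2 ** (months / 20)        # ~8x in five years
implied_doubling = months / math.log2(100)  # ~9 months per doubling
print(f"{moores_law_gain:.1f}x vs one doubling every "
      f"{implied_doubling:.1f} months")
```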

Parker, on the other hand, is still in the pipeline. While we may see it out by 2015, we can’t be sure until NVIDIA gives the real word.


Then there’s Kayla. With NVIDIA’s Kayla, we’ve got what’s been described by Huang as “Logan’s girlfriend.” This device is around the size of a tablet PC at the moment, and is beastly enough already to run real-time ray tracing. As Huang said, “this is showing the kind of demos we used to do on massive GPUs.”

Inside Kayla you’ll find CUDA 5, Linux, and PhysX processing. All of this runs on a rather tiny ARM-toting computer – and it’s coming sooner rather than later. Have a peek at the timeline below for more Tegra and GeForce GTX action from NVIDIA as GTC 2013 continues – and hit up our tag portal for more action as well; we’ll be here the whole conference long!

And don’t forget to check our massive Tegra hub for more mobile processing action than you can handle – more big blasts coming up quick!


NVIDIA Tegra “Parker” blasts forth aside mini ARM computer “Kayla” is written by Chris Burns & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.

NVIDIA Tegra “Logan” detailed with game-changing CUDA integration

This week NVIDIA’s CEO Jen-Hsun Huang spoke at the company’s GPU Technology Conference on the future of the mobile processor known as Tegra, and teased what will likely be called “Tegra 5”. Running through what we’d already learned about the Tegra 2, Tegra 3, and the upcoming Tegra 4, Huang let us know that the next chip, code-named “Logan”, would be breaking boundaries once again. The next Tegra processor will, according to Huang, do “everything a modern computer should do.”


Huang spoke on how NVIDIA created the idea of a single energy-saving core – seen first in the quad-core Tegra 3 with its 4-PLUS-1 technology – where one low-power companion core handles light tasks while the main cores sleep. He also spoke of the first software-defined radio – the Deep Execution Processor – and the Computational Camera introduced on the Tegra 4, which uses both the CPU and the GPU alongside the sensors of the mobile camera.

Inside Logan we’ll be seeing CUDA 5 and Kepler. This is the first time a mobile processor incorporates CUDA, and the first time a Kepler GPU comes to the mobile universe. The processor will also bring OpenGL 4.3.

Interestingly enough, Huang mentioned that Logan – this next generation – will be coming out at the beginning of next year. As we heard from NVIDIA not too many weeks ago, Tegra 4 and Tegra 4i won’t come to market any sooner than the second half of 2013. In other words, we’re looking at some rather rapid movement between the two generations, without a doubt.

Have a peek at the timeline below as well as the GTC 2013 tag portal for more information on Tegra and NVIDIA’s ever-expanding GPU universe. We’ll be here the whole conference long!

Be sure to tune in all week in our massive Tegra hub as well!


NVIDIA Tegra “Logan” detailed with game-changing CUDA integration is written by Chris Burns & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.

NVIDIA CEO races toward GPU Computing “tipping point” at GTC 2013

This week at NVIDIA’s GPU Technology Conference, CEO Jen-Hsun Huang spoke about the ever-growing GPU-utilizing universe, in both the mobile and desktop global environments. According to Huang, there will be more than 400 sessions at GTC. “This is the Mecca for scientific discovery,” said Huang of GTC 2013. “Nothing’s more important than the research being done on GPU computers.”


Huang ran through massive amounts of GPU-friendly happenings and upcoming events, including bits and pieces like the following:

• A 50-gigapixel camera being developed at the University of Arizona.
• GPU-accelerated diamond cutting.
• CUDA used by dating sites to match compatibility.
• Oak Ridge’s Titan supercomputer using 40 million CUDA processors together for 10 petaflops of power.
• The Swiss Supercomputing Center starting construction on Europe’s fastest GPU supercomputer, Piz Daint – made for weather forecasting purposes.

According to Huang, GPU supercomputing is taking hold at an undeniably quick rate.

“If we’re not at the tipping point for GPU Computing, we’re racing at it. There’s a huge spike in GPU-based computers being built for real work – about 20 percent of total Top500 horsepower is GPU. Included in this is the world’s most powerful supercomputer, the Oak Ridge National Laboratory’s Titan supercomputer.” – Huang


Read more about Titan in our most recent update on the machine and know this – SlashGear’s experience with the GeForce GTX home-ready GPU is coming up quick, too – stay tuned! Of course you’ll also want to stick out the full conference with us here on SlashGear as we cover the entirety of the show, front to back. Have a look at our GTC 2013 tag for more information and stay tuned for more amazing rendering beastliness!


NVIDIA CEO races toward GPU Computing “tipping point” at GTC 2013 is written by Chris Burns & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.

NVIDIA’s GTC kicks off with stunning real-time rendering

Jen-Hsun Huang stepped on stage this week at GTC 2013 with words on the GPU, the graphics processing engine that NVIDIA uses to push the envelope in many, many more ways than one. Five topics were promised for the conference: breakthroughs in computer graphics, updates on development, a roadmap update for NVIDIA, an update on remote graphics, and a brand new product announcement. While we’re expecting this conference to hold quite a bit of news on computing outside the mobile world with Tegra, there’s certainly going to be some amazing Android-based excellence coming on too.


WaveWorks

The show began with Titan – the GeForce GTX GPU we’re about to have hands-on time with in the very near future here on SlashGear – and some interactive ocean experimentation. Straight away we saw a ship on a large screen, real-time water being pushed up against the craft as heavy waves came up and crashed against it. With 20,000 sensors in place (virtually), this demonstration showed how, with NVIDIA GPU power, we’ll be able to test the ability of future ships to withstand a beat-down.


If we didn’t know better, we’d have guessed that this demonstration of the ship was real. The demonstration, called WaveWorks, was a Beaufort-scale real-time ocean rendering. Absolutely gorgeous.
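WaveWorks itself is proprietary, but real-time oceans are commonly built as a sum of directional waves evaluated every frame (production systems like WaveWorks use far richer FFT-based wave spectra on the GPU). A toy CPU heightfield, with invented wave parameters, looks like this:

```python
# Toy ocean heightfield: a sum of directional sine waves, the basic
# building block behind real-time ocean rendering. All wave parameters
# here are invented for illustration.
import numpy as np

WAVES = [  # (amplitude, spatial frequency, direction)
    (1.0, 0.15, (1.0, 0.0)),
    (0.4, 0.35, (0.7, 0.7)),
    (0.2, 0.80, (0.0, 1.0)),
]

def heightfield(t, size=256):
    xs, ys = np.meshgrid(np.arange(size), np.arange(size))
    h = np.zeros((size, size))
    for amp, freq, (dx, dy) in WAVES:
        omega = np.sqrt(9.81 * freq)  # deep-water dispersion relation
        h += amp * np.sin(freq * (dx * xs + dy * ys) - omega * t)
    return h

frame = heightfield(t=0.5)
print(frame.shape, float(frame.min()), float(frame.max()))
```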

FaceWorks

Also included was a showing of what the company calls Kepler Dawn. This lovely fairy is the work of many, many years spent on the creation of a very real human form. The “Uncanny Valley” – in case you didn’t know – is the zone where realistic animations get creepy, sitting between an obviously animated creature and a real human being. Attempting to escape it, Huang let us know that they were close, but weren’t quite there yet with this first showing.


A new technology called FaceWorks was introduced, compressing a facial dataset that weighed 32GB before NVIDIA got to it down to just 400MB. Here we’ve seen NVIDIA’s Titan GPU making an animated face look real. For those of you who aren’t able to see this face move in real time yet, hear this: it’s impossibly realistic. If Star Wars is going to feature Harrison Ford, Carrie Fisher, and Mark Hamill, they’ll use FaceWorks to make it work.

Stick out the full conference with us here on SlashGear as we cover the entirety of the show, front to back. Have a look at our GTC 2013 tag portal for more information and stay tuned for more amazing rendering beastliness!


NVIDIA’s GTC kicks off with stunning real-time rendering is written by Chris Burns & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.

NVIDIA: GTX Titan is a supercomputer in your home

Here live at GTC 2013, NVIDIA CEO Jen-Hsun Huang took the stage for the opening keynote and quickly got things started by jumping right in with the GTX Titan. Obviously NVIDIA is extremely proud of the brand new single-GPU powerhouse, and we’re expecting plenty of details to quickly follow.


NVIDIA’s CEO is already talking about what we can expect to see this year at GTC, and the opening keynote will cover 5 main things you can all look forward to. First, we’ll hear plenty about breakthroughs in “supercomputing” like we’ve never seen before, as well as all the breakthroughs this year has already given us.

We’ll get a broad update on GPU computing as a whole – how it’s progressing and where it’s headed. Then comes what I’m sure many of you are waiting for: the NVIDIA roadmap. And last but not least, NVIDIA will have a new product announcement. That will come last, so stay tuned for all the details.

While on stage, Jen-Hsun Huang stated they didn’t know at first what to call the new GeForce GTX graphics card you see above. After realizing it was more than just a GPU – a GPU that truly brings supercomputing to our homes – they settled on the GTX Titan, and we think that’s fitting. Stay tuned, folks.



NVIDIA: GTX Titan is a supercomputer in your home is written by Cory Gunther & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.