NVIDIA Jetson Development Platform hits smart cars with CUDA and Kepler power

If you’ve been following NVIDIA’s news blasts this past week, you know that they’ve revealed that their next-generation chipsets will work with CUDA-capable GPUs. What’s more, you’ll have a bit of an idea what that means for mobile devices and the computing power they’ll have extremely soon – and you’ll be pumped up about that power coming to smart vehicles through their new developer program. This new developer kit goes by the name NVIDIA Jetson Development Platform – available to you right this minute!


This new platform is a big ol’ amalgamation of metal and plastic, power and next-generation precision. Developers working on the smart segment of the next generation of our everyday road-ready vehicles will use this beast to optimize their ideas for the processing power of NVIDIA’s Tegra processors. Automakers will be able to work with this proof-of-concept in a tiny 1-DIN form that fits in a car stereo slot.

Jetson Development Platform package:

• Jetson main board
• Tegra VCM with automotive-grade Tegra 3 mobile processor
• Embedded Breakout Board (EBB) with a wide range of connectivity options
• NVIDIA CUDA-capable discrete GPU
• Wi-Fi, Bluetooth module, and GPS antennas
• 64 GB mSATA Drive
• Touchscreen display and cables
• Power supply and cables
• USB cable (mini-USB to USB)
• HDMI to DVI cable

With the 1-DIN model of the Jetson, you’ll have the performance of a beastly NVIDIA Tegra VCM combined with the excellence of a Kepler-class GPU. This GPU supports CUDA as well as OpenCV, so any and all developers creating software for this setup will be able to do so with the following vision-based technologies:

• Pedestrian Detection
• Lane Departure Warnings
• Collision Avoidance
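
None of these features’ actual implementations are public, but the shape of the logic is easy to picture. Here’s a toy sketch of a lane-departure check – entirely our own hypothetical illustration, not NVIDIA’s code; in a real system the lane positions would come from GPU-accelerated vision processing on camera frames:

```python
# Toy sketch of lane-departure-warning logic (hypothetical, simplified).
# A real system would derive lane-edge positions from camera frames via
# GPU-accelerated vision (CUDA/OpenCV); here we assume the edges are
# already detected and just check the car's lateral offset.

def lane_departure_warning(car_center, lane_left, lane_right, margin=0.15):
    """Return True if the car drifts within `margin` meters of either
    lane edge. All positions are in meters across the lane."""
    return (car_center - lane_left) < margin or (lane_right - car_center) < margin

# Centered in a 3.5 m lane: no warning.
print(lane_departure_warning(1.75, 0.0, 3.5))   # False
# Drifting toward the right edge: warning.
print(lane_departure_warning(3.4, 0.0, 3.5))    # True
```

The real work, of course, is in the detection step the sketch assumes away – which is exactly where the Tegra-plus-Kepler horsepower comes in.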


This development kit is made not just to make the developer’s job awesome with the processing power of Tegra and Kepler, but to make that job as easy as possible so they can concentrate on what matters most – making their ideas a reality. Jetson is designed to help automakers overcome three key challenges, too, each of them allowing for quicker and easier implementation of forward-thinking technologies.

NVIDIA’s Jetson Development Platform does the following:

1) Simplifies and streamlines the development of advanced driver assistance and connected car technologies.

2) Accelerates the transition to each new generation of mobile SoC, enabling automakers to better keep pace with the rapid innovation cycle in consumer electronics.

3) Reduces the number of processors and independent silver boxes needed to develop infotainment, navigation, computer vision and driver assistance capabilities.

Sound pretty good to you? Have a peek at the timeline we’ve laid out below for all the NVIDIA action you can handle from this past week alone! NVIDIA is ramping up for not just GPUs in your most masterful gaming desktop computers, not just for some of the most powerful mobile processor architectures in the mobile universe for your superphones and tablets, but for next-generation smart vehicles of all kinds, soon and very soon!


NVIDIA Jetson Development Platform hits smart cars with CUDA and Kepler power is written by Chris Burns & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.

AMD Radeon HD 7790 Strengthens AMD’s Mid-Range Offering


AMD just announced its new Radeon HD 7790 graphics processor (GPU), which will go into add-on cards immediately from close to ten partners. This new chip uses a design created to improve performance per dollar and per watt – two key metrics now that “absolute performance” (at any cost) is no longer the name of the game.

The Radeon 7790 uses the Graphics Core Next architecture that was unveiled to developers in late 2011 and is currently used in high-end AMD cards. It improves performance by increasing transistor density compared to previous architectures, allowing for much faster geometry and tessellation engines, which power critical DirectX 11 features (it’s a DX11.1 chip, by the way).

By Ubergizmo.

SlashGear 101: Remote Computing with NVIDIA GRID VCA

This week at NVIDIA’s own GPU Technology Conference 2013, we’ve been introduced to no less than the company’s first end-to-end system: NVIDIA GRID VCA. The VCA part of the name stands for “Visual Computing Appliance”, and it’s part of the greater NVIDIA GRID family we were re-introduced to at CES 2013 earlier this year. This VCA is NVIDIA’s way of addressing those users – and SMBs (small-to-medium businesses) – out there that want a single web-accessible database without a massive rack of servers.


What is the NVIDIA GRID VCA?

The NVIDIA GRID VCA is a Visual Computing Appliance. In its current state, you’ll be working with a massive amount of graphics computing power no matter where you are, accessing this remote system over the web. As NVIDIA CEO Jen-Hsun Huang noted on-stage at GTC 2013, “It’s as if you have your own personal PC under your desk” – but you’re in a completely different room.


You’re wireless, you’re in a completely different state – NVIDIA GRID VCA is basically whatever you need it to be. The first iteration of the NVIDIA GRID VCA will be packed as follows:

• 4U high.
• Sized to fit inside your standard server rack (if you wish).
• 2x highest-performance Xeon processors.
• 8x GRID GPU.
• 2x Kepler GPU.
• Support for 16 virtual machines.

You’ll be able to work with the NVIDIA GRID VCA system with basically any kind of computer, be it a Mac, a PC, mobile devices with Android, ARM or x86-toting machines, anything. With the NVIDIA GRID VCA, your remotely-hosted workspace shows up wherever you need it to. Each device you’ve got simply needs to download and run a single client going by the name “GRID client.” Imagine that.

If you’ve got a company using NVIDIA’s GRID, you’ll have access to mega-powerful computing on whatever machine you’ve got connected to it. One of the use-cases spoken about at GTC 2013 was some advanced video editing on-the-go.

Use Case 1: AutoCAD 3D and remote Video Editing

James Fox, CEO of the group Dawnrunner, joined NVIDIA CEO Jen-Hsun Huang on stage. A film and video production company (based in San Francisco, if you’d like to know), Dawnrunner has its workers using Adobe software and Autodesk tools. As Fox notes, “Earth Shattering is what gets talked about in the office.”


Fox and his compatriots use their own GRID configuration to process video, head out to a remote spot and show a customer, and change the video on the spot if the customer does so wish it. Processing video of the monster sizes Dawnrunner works with still needs relatively large computing power – “Hollywood big,” we could call it – and NVIDIA’s GRID can make it happen inside the NVIDIA GRID VCA.

With the processing going on inside the VCA and shown on a remote workstation environment (basically a real-time window into the GRID), you could potentially show real-time Hollywood movie-sized video editing from your Android phone. In that one image of a situation you’ve got the power of this new ecosystem.

Use Case 2: Hollywood Rendering with Octane Render

Of course no big claim with the word “Hollywood” in it is complete without some big-name movie examples to go with it. At GTC 2013, NVIDIA’s CEO Jen-Hsun Huang brought both Josh Trank and Jules Urbach onstage. The former is the director of the upcoming 2015 reboot of The Fantastic Four (yes, that Fantastic Four), and the latter is the founder and CEO of the company known as Otoy.

Both men spoke of the power of GPUs. Trank spoke first about how people like him, the movie director, use CGI from the very beginning of a film’s creation, with pre-visualization used to bid the project out to studios and secure funding before there is any cash to be had. Meanwhile, Urbach spoke of how CGI like this can be rendered 40-100 times faster with GPUs than CPUs – and with that speed you’ve got a lot less energy spent and far fewer hours used for a final product.


With that, Urbach showed Otoy’s Octane Render (not brand new as of today, but made ultra-powerful with NVIDIA GRID backing it up). This system exists on your computer as a tiny app and connects your computer to a remote workstation – that’s where NVIDIA’s GRID comes in – and you’ll be able to work with massive amounts of power wherever you go.


Octane Render allows “hundreds or thousands” of GPUs to be used by renderers in the cloud. Shown on-stage was a pre-visualization of a scene from the original Transformers movies (which Otoy helped create), streamed in real time over the web from Los Angeles to the location of the conference: San Jose.

What they showed, it was made clear, is that the power of GPUs in this context cannot be denied. With the power of 112 GPUs at once, it was shown that a high-powered Hollywood-big scene could be rendered in a second where in the past it would have taken several hours. And here, once again, it can all be controlled remotely.
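
Those on-stage numbers are roughly self-consistent with the 40-100x per-GPU figure Urbach cited earlier. Here’s a back-of-the-envelope check – our own arithmetic, assuming “several hours” means about three hours on a single CPU:

```python
# Back-of-the-envelope check of the on-stage claim (our arithmetic,
# not NVIDIA's): if one GPU is ~100x a CPU, then 112 GPUs give
# roughly 11,200 CPU-equivalents of throughput on a parallel render.
cpu_render_seconds = 3 * 3600        # assume "several hours" ~= 3 hours
per_gpu_speedup = 100                # top of Urbach's 40-100x range
gpus = 112

render_seconds = cpu_render_seconds / (per_gpu_speedup * gpus)
print(round(render_seconds, 2))      # ~0.96 -> about one second
```

So “several hours down to a second” is exactly what you’d expect if the render scales cleanly across all 112 GPUs.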

Cost

There are two main configurations at the moment for NVIDIA’s GRID VCA, the first working with 8 GPU units, 32GB of GPU memory, 192GB of system memory, a 16-thread CPU, and up to 8 concurrent users. The second is as follows – and this is the beast:

• GPU: 16 GPUs
• GPU Memory: 64 GB
• System Memory: 384 GB
• CPU: 32 threads
• Number of users: up to 16 concurrent

If you’re aiming for the big beast of a model, you’re going to be paying $39,900 USD with a $4,800-a-year software license. If you’re all about the smaller of the two, you’ll be paying $24,900 USD with a $2,400-a-year software license.
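
Broken down per concurrent seat (our arithmetic, based on the quoted prices), the bigger box is actually the better deal:

```python
# Rough price-per-seat comparison of the two GRID VCA configurations
# (our own arithmetic from the quoted prices; "first year" here means
# hardware cost plus one year of the software license).
configs = {
    "8-GPU":  {"hardware": 24900, "license": 2400,  "users": 8},
    "16-GPU": {"hardware": 39900, "license": 4800, "users": 16},
}

for name, c in configs.items():
    first_year = c["hardware"] + c["license"]
    per_seat = first_year / c["users"]
    print(f"{name}: ${per_seat:,.0f} per seat, first year")
```

That works out to roughly $3,400 per seat on the small box versus roughly $2,800 on the big one – economy of scale in action.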


Sound like the pocket-change salvation you’ve been waiting for? Let us know if your SMB will be busting out with the NVIDIA GRID VCA immediately if not soon, and be sure to let us know how it goes, too!


SlashGear 101: Remote Computing with NVIDIA GRID VCA is written by Chris Burns & originally posted on SlashGear.

NVIDIA Volta Next-Gen GPU Architecture Provides Huge Bandwidth Boost


Earlier today, we saw NVIDIA present Volta, its next-generation GPU architecture, designed to increase performance by addressing one of the “eternal” problems linked to GPUs more than any other processor type: memory bandwidth. Volta is going to use RAM that is layered on top of the GPU silicon to reduce energy consumption and latency during memory accesses. With this, NVIDIA says that it will be able to push 1 terabyte of data per second through the chip – more than 5X the GeForce Titan, which tops out at 192GB/s.
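
That “5X” figure checks out against the quoted numbers – a quick sanity check of our own:

```python
# Sanity check of the quoted bandwidth jump (our arithmetic).
volta_bandwidth_gb_s = 1000   # ~1 TB/s claimed for Volta's stacked DRAM
titan_bandwidth_gb_s = 192    # the GeForce Titan's quoted peak

ratio = volta_bandwidth_gb_s / titan_bandwidth_gb_s
print(round(ratio, 1))        # 5.2
```

About 5.2x – so the stacked-DRAM approach really would be a generational leap rather than an incremental bump, if the claim holds.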

By Ubergizmo.

Digital Storm Hailstorm II gaming PC brings torrential TITAN downpour

This week as we roll through NVIDIA’s GPU Technology Conference and hear of the latest innovations in graphics processing prowess, we’ve heard a thunder strike – the Digital Storm Hailstorm II, a massive monster of a gaming PC. This beast has four distinct levels of excellence, ranging from a single GeForce GTX 680 all the way up to three – count them – three NVIDIA GeForce GTX TITAN GPUs for face-blasting graphics processing excellence. This set of builds is bordering on absolutely insane as the home gaming universe ramps up to a place where you’d have to be no less than tattooed with dedication to having the most powerful set of specifications – here you’ll go wild!


With the Hailstorm II you’ll have space for four radiators, four GPU units, and two CPUs. That’s one massive amount of space on its own – then consider how it’ll all be blasting forth with the components Digital Storm is quoting here as out-of-box builds, and you’ll find your fingers sweating. With the Hailstorm II, you’ve got the first appearance of the Corsair Obsidian Series 900D, a monstrous black tower with a big window on the side so you can view this futuristic wallet-crushing collection for yourself.


Inside you’ve got a liquid cooling system with three front intake fans and a lovely large rear exhaust fan to keep the air running through. If you’d like, this build allows you to add an absurd 15 fans in total – so much freaking airflow you’ll have to wear a jacket.


Up front you’ll find a lovely brushed aluminum panel that opens up to show a vast number of expansion slots – ten in all – with room for up to nine hard drives or SSDs (three on hot-swappable mounts), four 5.25-inch optical drive bays, and more! You’ll have two USB 3.0 ports for super-quick transfers, four USB 2.0 ports for all your peripherals, and, just in case you’re an over-the-top expander, the ability to work with two power supplies on the back.


If that weren’t enough, you’ll find that each unit goes through a 72-hour stress test by Digital Storm, including industry-standard testing of the hardware and software as well as a proprietary testing process designed to detect any and all components that show the potential to fail in the future – you’ll be set!


The system builds come at several price points, each reflective of the beastly innards they contain. You’ll find that each of these systems uses fabulous Intel CPU power with the Core i7 across the board as well as NVIDIA GPUs. As noted, this is one of the first systems to work with the NVIDIA GeForce GTX TITAN GPU, and you’ll be able to knock it up to 3x SLI NVIDIA GeForce GTX TITAN at 6GB – intense!


Digital Storm Hailstorm II gaming PC brings torrential TITAN downpour is written by Chris Burns & originally posted on SlashGear.

NVIDIA Tegra “Parker” blasts forth aside mini ARM computer “Kayla”

This week the folks at NVIDIA have been revealing bits and pieces of their GPU roadmap, with Tegra and GeForce GPU action left and right, moving forward with their newest mobile SoC, code-named “Parker.” This SoC comes after the still-code-named “Logan” and will, if the naming scheme holds true, be Tegra 6 down the road. Along with this reveal came word of a code-named system called “Kayla” – a processing beast that, when it’s ready for action, will be extra-tiny and extra-powerful beyond anything we’re capable of today.


Parker is the newest in a line of code-named Tegra processors, coming after Wayne (Tegra 4) and Logan (Tegra 5, more than likely), and bringing on the innovations of past generations and/or outdoing them with the following firsts:

• First with Denver CPU.
• First 64-bit ARM processor coupled with NVIDIA’s next-gen Maxwell GPU.
• First to use FinFET transistors.

According to NVIDIA CEO Jen-Hsun Huang, this is only the beginning. Huang noted that “in five years’ time, we’ll increase Tegra by 100 times, though Moore’s Law would suggest an eight-fold increase.” With Logan we’ll see the first mobile processor on the planet to work with CUDA. This processor will also bring Kepler GPU power and OpenGL 4.3 – and it’ll be in production by early 2014.
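
Huang’s “eight-fold” Moore’s Law baseline checks out if you assume transistor density doubles roughly every 20 months – our own arithmetic, not NVIDIA’s:

```python
# Sanity check of Huang's Moore's Law comparison (our arithmetic):
# one doubling every ~20 months over 5 years gives 2^(60/20) = 8x,
# versus the 100x improvement Huang is promising for Tegra.
months = 5 * 12
doubling_period_months = 20

moores_law_gain = 2 ** (months / doubling_period_months)
print(moores_law_gain)   # 8.0
```

In other words, NVIDIA is promising to outrun the transistor-scaling baseline by more than a factor of ten – presumably through architecture and software gains, not silicon alone.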

Parker, on the other hand, is still in the pipeline. While we may see it out by 2015, we can’t be sure until NVIDIA gives the real word.


Then there’s Kayla. With NVIDIA’s Kayla, we’ve got what’s been described by Huang as “Logan’s girlfriend.” This device is around the size of a tablet PC at the moment, and is beastly enough already to run real-time ray tracing. As Huang said, “this is showing the kind of demos we used to do on massive GPUs.”

Inside Kayla you’ll find CUDA 5, Linux, and PhysX processing. All of this runs on a rather tiny ARM-toting computer – and it’s coming sooner than later. Have a peek at the timeline below for more Tegra and GeForce GTX action from NVIDIA as GTC 2013 continues – hit up our tag portal for more action as well, we’ll be here the whole conference long!

And don’t forget to check our massive Tegra hub for more mobile processing action than you can handle – more big blasts coming up quick!


NVIDIA Tegra “Parker” blasts forth aside mini ARM computer “Kayla” is written by Chris Burns & originally posted on SlashGear.

NVIDIA Tegra “Logan” detailed with game-changing CUDA integration

This week NVIDIA’s CEO Jen-Hsun Huang spoke up at the company’s GPU Technology Conference on the future of the mobile processor known as Tegra, and teased what will likely be called “Tegra 5”. Running through what we’d already learned about the Tegra 2, Tegra 3, and the upcoming Tegra 4, Huang let us know that the next code name, “Logan”, would be breaking boundaries once again. The next Tegra processor will, according to Huang, do “everything a modern computer should do.”


Huang spoke on how NVIDIA created the idea of a single energy-saving core – seen first in the NVIDIA Tegra 3 quad-core processor with 4-PLUS-1 technology – which handles low-powered tasks while the main cores sleep. He spoke also of the first software-defined radio – the Deep Execution Processor – and the Computational Camera, introduced on the Tegra 4, which uses both the CPU and the GPU alongside the sensors of the mobile camera.

Inside Logan we’ll be seeing CUDA 5 and Kepler. This is the first time we’ve seen a mobile processor incorporating CUDA, and also the first time a Kepler GPU will be coming to the mobile universe. This processor will also be bringing on full CUDA 5 as well as OpenGL 4.3.

Interestingly enough, Huang mentioned that Logan – this next generation – will be coming out at the beginning of next year. As we’ve heard from NVIDIA not too many weeks ago, Tegra 4 and Tegra 4i will not be coming to market any sooner than the second half of 2013. In other words, we’re looking at some rather rapid movement between the two generations, without a doubt.

Have a peek at the timeline below as well as the GTC 2013 tag portal for more information on Tegra and the ever-expanding NVIDIA GPU universe. We’ll be here the whole conference long!

Be sure to tune in all week in our massive Tegra hub as well!


NVIDIA Tegra “Logan” detailed with game-changing CUDA integration is written by Chris Burns & originally posted on SlashGear.

NVIDIA CEO races toward GPU Computing “tipping point” at GTC 2013

This week at NVIDIA’s GPU Technology Conference, CEO Jen-Hsun Huang spoke about the ever-growing GPU-utilizing universe, in both the mobile and desktop computer global environments. According to Huang, there will be more than 400 sessions at GTC. “This is the Mecca for scientific discovery”, said Huang of GTC 2013, “Nothing’s more important than the research being done on GPU computers.”


Huang ran through massive amounts of GPU-friendly happenings and upcoming events, including bits and pieces like the following:

• A 50-gigapixel camera being developed at the U of Arizona.
• GPU-accelerated diamond cutting.
• CUDA utilization for dating-site matching compatibility.
• Oak Ridge’s Titan supercomputer using 40 million CUDA processors together for 10 petaflops of power.
• The Swiss Supercomputing Center starting construction on Europe’s fastest GPU supercomputer, Piz Daint – made for weather forecasting purposes.

According to Huang, GPU supercomputing is taking hold at an undeniably quick rate.

“If we’re not at the tipping point for GPU Computing, we’re racing at it. There’s a huge spike in GPU-based computers being built for real work – about 20 percent of total Top500 horsepower is GPU. Included in this is the world’s most powerful supercomputer, the Oak Ridge National Laboratory’s Titan supercomputer.” – Huang


Read more about Titan in our most recent update on the machine and know this – SlashGear’s experience with the GeForce GTX home-ready GPU is coming up quick, too – stay tuned! Of course you’ll also want to stick out the full conference with us here on SlashGear as we cover the entirety of the show, front to back. Have a look at our GTC 2013 tag for more information and stay tuned for more amazing rendering beastliness!


NVIDIA CEO races toward GPU Computing “tipping point” at GTC 2013 is written by Chris Burns & originally posted on SlashGear.

NVIDIA: GTX Titan is a supercomputer in your home

Here live at GTC 2013, NVIDIA CEO Jen-Hsun Huang took the stage for the opening keynote and quickly got things started by jumping right in with the GTX Titan. Obviously NVIDIA is extremely proud of the brand new single-GPU powerhouse, and we’re expecting plenty of details to quickly follow.


NVIDIA’s CEO is already talking about what we can expect to see this year at GTC, and the opening keynote will cover 5 main things you can all look forward to. One, we’ll hear plenty about breakthroughs in “supercomputing” we’ve never seen before, as well as all the breakthroughs this year has already given us.

We’ll get a broad update on GPU computing as a whole: how it’s progressing and where it’s headed. Then comes what I’m sure many of you are waiting for – the NVIDIA Roadmap. Last but not least, NVIDIA will have a new product announcement. That will come at the end, so stay tuned for all the details.

While on stage, Jen-Hsun Huang stated they didn’t know at first what to call their new GeForce GTX graphics card. After realizing it was more than just a GPU – a GPU that truly brings supercomputing to our homes – they settled on the GTX Titan, and we think that’s fitting. Stay tuned folks.



NVIDIA: GTX Titan is a supercomputer in your home is written by Cory Gunther & originally posted on SlashGear.

GTC 2013: We’re Here!

It’s that time of year again for all those diehard gaming and graphics fans. The annual GPU Technology Conference has just kicked off live in San Jose in the heart of Silicon Valley. There’s plenty going on again this year, especially with NVIDIA‘s new GTX Titan graphics card taking front and center. Read on for more details on what to expect.


Obviously this will be all about developers, gaming, and high-performance graphics, but we’re also expecting some exciting news on the NVIDIA GPU front (aka TITAN) as well as some details on their impressive recently announced mobile chipsets, the NVIDIA Tegra 4 and Tegra 4i.

From ray tracing to Crysis 3, gaming and graphics will obviously be the star of the show here. From emerging technology to emerging companies and much more, we’ll be here live with all the details.

The official GTC Keynote is about to begin here shortly this morning with NVIDIA’s own Jen-Hsun Huang taking the stage as usual to share some details. We’re expecting the focus to be on content, developers, partners, and a few nice announcements about the products mentioned above. Stay tuned for all the details live from SlashGear! Don’t forget to check out our Tegra Portal for more NVIDIA news.


GTC 2013: We’re Here! is written by Cory Gunther & originally posted on SlashGear.