The newest version of Google’s mobile operating system is on the way to devices of all kinds in the very near future, bringing a load of updates for the back end as well as the front in Android 4.4 KitKat. This version of the software brings changes first to the Google Nexus 5, made by […]
The original Nike FuelBand works with a rubberized body with the ability to rack up NikeFuel Points with a companion app for iOS devices – the Nike+ FuelBand SE isn’t all that different. What you’re seeing here is the next-generation device detailed by Nike this week in an effort to reboot the wearable for the […]
SlashGear 101: What is Chromecast?
Google’s Chromecast device is a Web media player, introduced by the company just a bit over a year after they first showed off a machine with very similar capabilities: the Nexus Q. Where the Nexus Q came into play as a bocce-ball-sized TV “box”, Chromecast is the size of a USB dongle, small enough to fit in your pocket. It connects through a television’s full-sized HDMI port and you’ll be able to pull it up with the input button on your television remote, the same as you would a DVD player.
Instead of playing physical media like DVDs or Blu-ray discs, Chromecast uses the internet to pull content from web-based apps. Chromecast does not come with a remote control in the box because it’s able to connect with basically any smart device you’ve got in your home – or in your pocket.
• Size: 72(L) x 35(W) x 12(H) mm
• Weight: 34g
• Video Output: 1080p
• Connectivity: HDMI, Wi-Fi
• RAM: 512MB
• Processor: N/A
• OS: Chromecast
You’ll plug Chromecast into your TV, plug a microUSB power cord (included in the box) into Chromecast to keep it powered up, and press the single physical button on Chromecast to send out a wireless signal that effectively says, “I’m ready to go!”
Turn the television on and switch the input to the HDMI port you’ve plugged Chromecast into, and you’ll see a screen that directs you to google.com/chromecast/setup. Note that this URL may change over time, but it’s the first place you’ll be sent in this initial launch of Chromecast, as of this article’s posting.
This one-time setup connects Chromecast to the web – if you’ve got a password on your Wi-Fi network, you’ll need to enter it. You can do this setup process from any device with an internet browser, while actually sending content to Chromecast is limited to the following:
• Android 2.3 and higher
• iOS 6 and higher
• Windows 7 and higher
• Mac OS 10.7 and higher
• Chrome OS (Chromebook Pixel, additional Chromebooks coming soon).
At the moment you’ll be able to use Chromecast to connect with Netflix, YouTube, Google Play Movies, and Google Music. Using Chromecast’s “Cast” protocol, you’re able to “fling” content from your control device (laptop, smartphone, tablet) to your TV.
So you’ll open up YouTube, for example, and play a video, but you’ll also click the Cast button that (once you’re set up) appears in the upper right-hand corner of your Chrome web browser or app. From there you’ll be able to control said media as it plays OR continue on with your regularly scheduled web browsing while the media plays on your TV.
Once the media you’ve chosen on your phone, tablet, or notebook has been flung to Chromecast, you no longer have to worry about it. If you DO want to control it again, you’ll only have to return to the app you were in and change it up. You can also send something new to Chromecast, which immediately stops the current media and moves on to the new piece in kind.
There are also interesting side-loading features you can use if you’re not all about working with one of the few apps supported by Chromecast so far. At the moment Chromecast has a BETA mirroring feature that works with Chrome web browser windows.
You can open a file in a Chrome web browser window and fling it to Chromecast, your television then mirroring this window as you do so. This feature requires that you actually keep the window open if you want to keep watching it on your TV since the content is not on the web, it’s on your computer.
This BETA mirroring feature can be used for photos and video as well – we’ll be seeing how close we can get to real web-based gaming mirroring soon!
What else do you want to know about Chromecast? Is this a device (at $35 USD) that you’ll be picking up, supposing it’s not already sold out every which way from physical stores to Google Play? Let us know!
SlashGear 101: What is Chromecast? is written by Chris Burns & originally posted on SlashGear.
© 2005 – 2013, SlashGear. All rights reserved.
There are two major paths you might go down when you’re attempting to see what’s different in the change-over from Android 4.2 (or 4.2.x) to 4.3 Jelly Bean: one is behind the scenes, the other right up front. What we’re going to do is take a mostly up-front approach, sourced straight from Google’s guides and tuned here for the common user while we keep the developer back end in mind: those bits and pieces are put in place to make your machine work well – here’s what you’ll be well off knowing.
Graphics
Google has added a collection of enhancements to the performance features already built into Jelly Bean, including vsync timing, triple buffering, reduced touch latency, CPU input boost, and hardware-accelerated 2D rendering. You’ll find that hardware-accelerated 2D rendering is now optimized for the stream of drawing commands.
While this doesn’t end up changing a lot for those of you who just want to open your phone and kick up some dust with a high-powered graphics-intensive game, your device’s GPU will thank you for the more efficient rearrangement and merging of draw operations. The renderer can also now use multithreading across multiple CPU cores to perform certain tasks.
You know what that means?
If you’re all about making the most of your multi-core processor (like the ones most hero phones employ these days), you can now make those cores dance for your 2D rendering! Of course, again, that may not mean a lot for the lay person, but check down in the GPU profiling area in the Developer Bits section later in this run-down – see how you can watch it all with pretty live graphs and rings!
Google’s Android 4.3 adds improved rendering across the board, but centers again on the rendering of shapes and text. Efficiency in these areas allows circles and rounded rectangles to be rendered at higher quality, while text optimizations come into play when multiple fonts are used near one another, when text is scaled at high speed (think zooming in), and when you’ve got oddities like drop shadows and CJK text (complex glyph sets) lurking around.
This all ties in with OpenGL ES 3.0 and Google’s adoption of said system for Android 4.3 Jelly Bean. We’ll be attacking this bit of system integration, that is Khronos OpenGL ES 3.0, in a separate article – for now you’ll just want to know that this expands developer abilities to bring high-quality graphics and rendering to apps with new tools included in the official Android Native Developer Kit (NDK).
You’ll also find that custom rotation animation types have been added with Android 4.3, meaning you’ll see apps choosing “jump-cut” and “cross-fade” when you turn your device on its side, rather than just the “standard” animation you see now. Along with this, believe it or not, the ability to lock the screen to its current orientation has only just been introduced with Android 4.3 – especially helpful for camera apps.
UI Automation
Android 4.3 Jelly Bean builds on an accessibility framework allowing simulations to be run on devices – this means your device will believe it’s being tapped, touched, etcetera, while you’re running these commands from a separate machine. Google notes that the user can: “perform basic operations, set rotation of the screen, generate input events, take screenshots,” and a whole lot more.
We’ll be waiting for this set of abilities to be expanded beyond the developer realm and into the remote control Android smartphone universe. This sort of usability has already begun with display mirroring – now it’s time to get weird with it.
Developer Bits
Developers will now be able to make use of on-screen GPU profiling. This data comes up in real time, shows what your device’s graphics processing unit(s) are doing, and can be accessed in Developer Options under Settings. If you do not see these settings right out of the box, it’s just because you’ve not un-hidden them yet (they’re hidden by default in all Android iterations above 4.2).
To un-hide Developer Options, go to Settings – About phone, and tap the Build number of your device 7 times quickly. From there you’ll be in business. Android 4.3 also offers a collection of developer abilities behind the scenes, including a set of enhancements to Systrace logging.
With the Systrace tool, developers are able to visualize app-specific events inside the software they create, analyze the data that’s then output, and use Systrace tags with custom app selections to understand the behaviors and performance of apps in ways that are both easy to understand and in-depth enough to expand well beyond analysis tools of the past.
Security Systems
One of the most important additions to Android in this update – for the business owner or employee who needs a bit more security than the average user – is Wi-Fi credential configuration that allows individual apps to connect with WPA2 enterprise access points. Google adds API compatibility with Extensible Authentication Protocol (EAP) and Encapsulated EAP (Phase 2) credentials – just like enterprise IT has always wanted.
Android 4.3 adds KeyChain enhancements which allow apps to confirm that credentials entered into them – passwords, for example – will never be exported off the device itself. This is what Google calls a “hardware root of trust” for the device, and they suggest that it cannot be broken, “even in the event of a root or kernel compromise.” That’s hardcore.
This security is expanded with an Android Keystore Provider, which can be used by one app to store a key that cannot be seen or used by any other app. The key is added to the keystore without any user interaction and locks the data down the same way the KeyChain API locks down keys to hardware.
You’ll also want to have a peek at our exploration of Restricted Profiles and Google’s expanded vision for multiple users on one device. Built-in kid-proofing!
Where and When
Google will be pushing Android 4.3 over the air to Nexus devices starting today – for models like the Nexus 4, Nexus 7, and Nexus 10, and SOON for the HTC One Google Play edition and Samsung Galaxy S 4 Google Play edition. As for the rest of the Android universe – we’ll just have to wait and see! There’s always the hacker forums, and stick around our Android portal for the news when it pops up!
SlashGear 101: Android 4.3 Jelly Bean, what’s new? is written by Chris Burns & originally posted on SlashGear.
The Nokia Lumia 1020 is a smartphone with a 41-megapixel camera, introduced by the company with the intent of having it carried by AT&T here in 2013. This device works with a unique blend of abilities, tending not only to the massive 34MP and 38MP photos it takes, but to 5-megapixel photos as well. And why would Nokia suggest taking 5-megapixel photos when they’ve got a 41-megapixel sensor on this camera? It’s the sweet spot!
As suggested by Nokia’s own in-depth talks on the subject, the 5-megapixel “sweet spot” exists for both image quality and sharing purposes. You can print this size of photo up to A3 size with ease, and it’s well above high-quality enough for slapping up on Facebook and Google+. The key with Nokia’s release of the 1020 and the 41MP / 5MP tie-in lies in one word: Oversampling.
Oversampling
This is not a brand new concept for the camera industry – it’s not even new to Nokia, if you consider devices like the Nokia 808 PureView – but what’s happening with this device is a rebirth of efforts in the space. We’ll be having a chat on the possibilities of this setup with “lossless” or high-res zooming-in on photos as well, but for now, it’s all about the “amazing detail” Nokia promises in the everyday common 5 megapixel size shot.
The image you’re seeing below comes straight from Nokia’s white paper on the subject, suggesting that their technology kicks 5-megapixel photos into gear. With Oversampling – capitalized here, in this article, so you know it’s Nokia’s unique software attacking the situation – you’re in for an obviously different league of clarity.
Nokia suggests that with the technology appearing in the Nokia Lumia 1020, you’ve got a high-resolution sensor bringing in one whole heck of a lot more information for images than what’s offered with a “standard” 5-megapixel sensor. That makes sense on a very basic level – you’ve got more megapixels, so you have a better photo, right? It’s not quite that simple, actually, and it’s not just dependent on the number of megapixels either.
The big difference between a standard 5 megapixel shot and one produced by this new system from Nokia is in the amount of image data spread out across the photo. A standard system – here referring to technology appearing in basically every device in the market through history, especially in smartphones – takes, for example, “5 megapixel” photos but does not work with 5 million pixels of independent data.
Five-megapixel photos can look like the image above on the left or the image above on the right (that’s Figure 3) – it all depends on how much data is given to each pixel.
Am I having deja-vu?
This system is extremely similar to what’s been described and implemented by HTC this year with the HTC One. In their case it’s called “UltraPixel” technology, and it’s created a device that’s been held in high regard for its photo-capturing abilities, even with what the company calls its 4 UltraPixel (or 4-megapixel) camera on its back. Have a peek at our SlashGear 101: HTC UltraPixel Camera Technology post for more information on that alternate vision.
You’ll also be able to find more information on the PureView brand name in our SlashGear 101: Nokia PureView, which considers the Nokia 808 PureView as well. Keep it all straight and you’ll do a lot better than the vast majority of lay people – good luck!
SlashGear 101: Nokia Lumia 1020 Oversampling and the 5MP “Sweet Spot” is written by Chris Burns & originally posted on SlashGear.
SlashGear 101: Nokia PureView
Smartphone buyers pick handsets on the basis of cameras – that’s what the big manufacturers have realized, and Nokia is determined not to be left behind. As well as transitioning to lead the Windows Phone charge, the Finnish company is also positioning itself as the most imaginative firm in mobile photography, putting snapshots at the core of every recent device. One name stands out as special to any mobile photo pro, however, and that’s PureView, expected to crop up again with the imminent launch of the Nokia Lumia 1020. There’s a lot to be said for 41-megapixel cameras: read on as we walk you through why PureView is special, and what might come next.
41-megapixel photos, right? Who needs that?
Don’t get too hung up on the headline-grabbing number: PureView photos aren’t really about raw megapixels. Instead, you need to start looking at megapixels as a means to an end, and in that respect there are several ways you can use a surfeit of imaging data.
Nokia’s analogy is putting out buckets in the rain. If you have a regular number of buckets, you’ll catch a regular amount of water. If you have many, many more buckets, you’ll catch even more water. In this case, the PureView’s 41-megapixel sensor is the field of buckets, and the rain is light hitting the CMOS. More light means more imaging data, and that data gives extra flexibility for Nokia’s processing to work with.
So, the original PureView system was never intended to produce 41-megapixel images (in fact, it technically couldn’t: the sensor may have had that many pixels, but it captured either 38- or 34-megapixel images at most, depending on whether they were 4:3 or 16:9 aspect ratio). Instead, it used pixel oversampling: combining the data from, say, seven pixels in close proximity on the CMOS for a single pixel in a roughly 5-megapixel end image.
By comparing what light seven pixels have captured, PureView can iron out any glitches – say, pixels that erroneously see more light than they should – and get a more accurate result on things like color, brightness, and other imaging detail. That makes the final photo more accurate too.
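Nokia hasn’t published the exact filter PureView uses, but the general idea – combining a block of neighboring sensor pixels into one output pixel by averaging – can be sketched in a few lines of Python. Plain lists and made-up brightness values here; a real pipeline works on raw Bayer data with a smarter weighted filter:

```python
def oversample(sensor, factor):
    """Combine factor x factor blocks of sensor pixels into single output
    pixels by averaging -- a simple stand-in for PureView's proprietary
    oversampling filter."""
    h = len(sensor) - len(sensor) % factor      # trim to a multiple of factor
    w = len(sensor[0]) - len(sensor[0]) % factor
    out = []
    for r in range(0, h, factor):
        row = []
        for c in range(0, w, factor):
            block = [sensor[r + i][c + j]
                     for i in range(factor) for j in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# A patch of sensor that "should" read 100, with one glitchy hot pixel.
patch = [[100, 228, 100, 100],
         [100, 100, 100, 100],
         [100, 100, 100, 100],
         [100, 100, 100, 100]]

print(oversample(patch, 2))
# [[132.0, 100.0], [100.0, 100.0]] -- the hot pixel's error is diluted
# across its block: 132 instead of 228.
```

With the 808’s real ratio – roughly seven sensor pixels per output pixel – any single glitchy pixel’s influence shrinks even further.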
But does it work?
Nokia’s 808 PureView proved that it does. The bulky 2012 smartphone may only have really found buyers among true converts to the PureView system, but that was more down to it being Symbian’s last real hurrah than any shortcomings in the camera technology. Released while most attention was on Nokia’s Windows Phone efforts, sticking with Symbian was a practical decision rather than a preferable one: PureView had been in development for five years, and Nokia simply wanted to get it out the door.
“Nokia never expected the 808 PureView to be a best-seller”
Sales figures for the 808 PureView haven’t been released, but Nokia never expected it to be a best-seller. Instead, it was more a proof-of-concept for the PureView system, and in that respect it was a roaring success.
The 808 PureView actually had two different modes. As well as taking photos in the PureView system, it could shoot full-resolution stills; the latter didn’t get any of the benefits of pixel oversampling, but they did show off the core aptitude of the specially-designed sensor. In PureView mode, the 808 produced roughly 2-, 5-, or 8-megapixel photos, but Nokia’s boast was that an image at each resolution would likely out-class a comparison shot from a rival device a megapixel-tier up.
There are sample images in our original Nokia 808 PureView review, but the takeaway is that, for all its faults as a smartphone, as a camera it proved superb. It took no small amount of engineering, but Nokia and its imaging team had come up with a photo experience that rivaled dedicated Micro Four Thirds cameras and above.
What about this lossless digital zoom?
Pixel oversampling is only one way to use all those extra megapixels. The other, Nokia decided, was to create a zoom system with the best of both optical and digital methods. For photographers, optical zooms are generally preferable, since they don’t result in any quality loss. Digital zooms, in comparison, don’t need any moving lenses, which makes them more straightforward and less prone to damage, but since they basically enlarge a portion of the frame, you end up with a picture at half the quality for every 2x you zoom in.
PureView allows for a digital zoom with no loss in quality and no extra moving parts. It’s easiest to imagine it as a progressive cropping of the full-resolution image the sensor is capable of: taking, say, a 5-megapixel section out of a maximum-resolution still. The 808 PureView topped out at 3x digital zoom, since that was the level Nokia could reach before it would have had to start enlarging the picture and thus losing quality.
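That progressive-cropping idea is simple enough to sketch. Using the sensor dimensions quoted later in this piece (7728 x 5368) and a hypothetical 5-megapixel output of 2592 x 1944, zoom is just a centered crop until the crop gets smaller than the output frame:

```python
def lossless_zoom(sensor, zoom, out_w, out_h):
    """'Zoom' by cropping a centered region of the full-resolution frame.
    As long as the crop still contains at least out_w x out_h real sensor
    pixels, nothing has to be enlarged, so no quality is lost."""
    full_h, full_w = len(sensor), len(sensor[0])
    crop_w, crop_h = int(full_w / zoom), int(full_h / zoom)
    if crop_w < out_w or crop_h < out_h:
        raise ValueError("past the lossless limit: crop would need enlarging")
    x0, y0 = (full_w - crop_w) // 2, (full_h - crop_h) // 2
    return [row[x0:x0 + crop_w] for row in sensor[y0:y0 + crop_h]]

# The lossless limit is simply full resolution over output resolution:
max_zoom = min(7728 / 2592, 5368 / 1944)
print(round(max_zoom, 2))  # ~2.76 -- in the neighborhood of the 808's 3x figure
```

The output frame sizes are assumptions for illustration; the point is that the zoom limit falls out of plain division, which is why a bigger sensor directly buys more lossless zoom.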
ImpureView
For PureView purists, there’s the “golden age” of the technology and then a dark period where simply the name – but not the true magic – has been used. Nokia was keen to carry over the halo effect of PureView to its Windows Phone range, and so the Lumia 920 became the first device to bear the brand, even though it didn’t have a 41-megapixel sensor.
Instead, the Lumia 920 used a new type of lens assembly, aiming to deliver better quality images than rivals but using a different system again. The Lumia 920 has optical image stabilization, by physically suspending the sensor on a moving jig that can be quickly shifted as the user’s hand shakes. By ironing out those judders, the end picture can have less blur; it also makes for better low-light performance, as the Lumia 920 can use longer shutter speeds without worrying about shake.
The same system was used on other Windows Phones, most recently the Lumia 925, and Nokia actually described the system as the “second phase” of PureView, pushing the term to refer to a more over-arching attitude toward mobile photography than the system we’d been wowed by on the original 808. In a white paper [pdf link] on the technology, the company argued that its OIS sensor could, with 8.7-megapixels, deliver the same sort of quality as had been achieved with the 41-megapixels of the first phone.
“Many PureView converts were unconvinced by Nokia’s recent use of the name”
However, the lack of oversampling and the complete absence of a lossless digital zoom left many PureView converts unconvinced by Nokia’s more recent use of the name. For them, PureView means packing in the pixels, just as the 808 demonstrated.
So why hasn’t everyone slapped a massive sensor in their phones?
The clue is in the question: the 808 PureView’s sensor was physically huge, since Nokia realized it would need a 1/1.2-inch, 7728 x 5368 CMOS in order to deliver on the 3x lossless zoom goal it had set itself. That made for a materially bigger handset, since the large sensor also had to be paired with lenses of sufficient focal length.
Even with a custom Zeiss lens assembly, the 808 PureView turned out to be a big device. Not quite as large as the average compact camera, but not far off, and in a world where slimline smartphones still command a premium, the chunky PureView system looked old-fashioned despite its cutting-edge guts.
Instead, we’ve seen other manufacturers follow different routes to improve mobile photography. Samsung, LG, and Sony, for instance, have chased higher and higher resolutions, each with 13-megapixel models on the market (and Sony expected to have a 20-megapixel phone next). Obviously, more megapixels means more data, but if you’re aiming for a phone that isn’t unduly bulky, it also means the pixels themselves have to be small and densely packed onto the CMOS. That can cause issues when it comes to low-light performance, as you end up with lots of pixels grabbing very little light in each exposure.
Another route is HTC’s with the One. Dubbed UltraPixel, it echoes Nokia’s decision to ‘maximize the buckets’ but does that with bigger individual pixels rather than a bigger overall CMOS to accommodate more of them. So, the HTC One has a mere 4-megapixel sensor, but where the average phone camera of twice the resolution would have roughly 1.2 micron pixels, those in the One measure in at 2 microns. That might not sound like much, but it means considerably more light, allowing for faster shutter speeds or, thanks to the inclusion of optical image stabilization, longer exposures without blur for better low-light shots.
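Since the light a pixel gathers scales with its area, the figures in that paragraph can be sanity-checked with one line of arithmetic:

```python
# Light captured per pixel scales with its area (side length squared).
ultrapixel_um = 2.0  # HTC One pixel pitch, per the text above
typical_um = 1.2     # typical pitch on a higher-resolution phone sensor

area_ratio = (ultrapixel_um / typical_um) ** 2
print(round(area_ratio, 2))  # 2.78 -- each UltraPixel gathers ~2.8x the light
```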
So what’s next?
In the short term, it’s the Nokia Lumia 1020, codenamed “EOS”, and widely expected to be the first Windows Phone to use “proper” PureView. A new 41-megapixel sensor and lens assembly is predicted, with Nokia using what it learned from the 808 PureView to slim down both components and make for a phone that’s not outlandishly large. It’ll still be on the bulky side for a modern smartphone, most likely, but not the pocket-buster the 808 was.
Beyond that, it’s all about light. PureView’s goal is getting as much light as possible, and Nokia is already investing in the next-generation CMOS technologies that will allow it to do that. One such example is the array camera system developed by Pelican Imaging, which clusters 25 sensors and lenses into a single unit.
The advantage of Pelican’s camera module is that as well as combining the raw data from each sensor into a single frame, traditional PureView style, it can also be used to create 3D images, photos that can have their point-of-focus changed after they’ve been captured, and elements of the frame digitally excised without any loss in overall quality.
Then there are the so-called quantum-dot sensors developed by Nokia-invested InVisage Technologies. These throw the existing CMOS out the window, replacing it with a QuantumFilm sensor that’s hugely more sensitive to light. In fact, InVisage claims its QuantumFilm sensors can capture as much as 95 percent of the light that falls upon them, versus around 25 percent for a standard CMOS.
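Taking InVisage’s claimed numbers at face value, the potential gain is easy to put in perspective:

```python
quantumfilm_qe = 0.95    # InVisage's claim: 95% of incident light captured
standard_cmos_qe = 0.25  # their figure for a conventional CMOS

print(quantumfilm_qe / standard_cmos_qe)  # 3.8 -- nearly 4x the light per exposure
```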
That could mean 4x sharper sensors with twice the dynamic range, but in a smaller overall package. Even the reduced bulk of 2013’s PureView could be slimmed down further again by junking the CMOS and replacing it with QuantumFilm sensors. Pair it up with advanced software processing, such as the Scalado technology Nokia acquired the rights to in 2012, and you have a new age of what Nokia calls “computational photography”, where the point where the image is captured is no longer the end of how the raw data is processed.
SlashGear 101: Nokia PureView is written by Chris Davies & originally posted on SlashGear.
Since the beginning of June, the public has been privy to an ever-expanding flower of information springing from the NSA tagged with the code name PRISM. This keyword is attached to a program that whistleblower Edward Snowden is said to have been the sole leaker of for reports leading to the Guardian story on the
SlashGear 101: Mac Pro 2013
Posted in: Today's ChiliApple doesn’t normally preview upcoming hardware, so when Tim Cook & Co. whipped out the new Mac Pro 2013 at WWDC 2013 this week you knew the company was particularly proud. Throwing away the old-style tower and completely rethinking not only the design, but the internal architecture, cooling, expansion, and ethos of a workstation, the
Today is the day Facebook Home is released for Android devices, and though it may seem possible to download the app for your smartphone or tablet, it won’t necessarily be in full working order this afternoon. Why would that be – you might ask? Because Facebook’s launch of Facebook Home is limited to just five devices – and one of them was just released to the market today.
With the HTC First you’ll have the full Facebook Home experience from top to bottom. If you download the Facebook Home app and load it onto a device that’s not an HTC First, you won’t have full notifications for apps in your News Feed – other than that, it’s basically the same experience. And what about your Motorola DROID RAZR HD? You’re out of luck – for now, anyway.
The Facebook Home app is working today for four devices other than the HTC First:
• Samsung Galaxy S III
• Samsung Galaxy Note II
• HTC One X
• HTC One X+
Why these four devices? The first two are among the best-selling smartphones of the past year. The HTC One X and the HTC One X+ are also among the highest-powered smartphones on the market – and all four are carried by AT&T along with the HTC First. Sound like a deal behind the scenes to you?
UPDATE: According to Facebook, the following devices will be available for Facebook Home compatibility very soon! Can’t wait!
• Samsung GALAXY S4 (Future)
• HTC One (Future)
If you’re hankering for a partial Facebook Home experience before Facebook updates their development to handle more than just the five (make that seven) devices above, you’ll want to update your regular Facebook app and download Facebook Messenger. With Facebook Messenger you’ll get what’s easily the best part of the Facebook Home experience without needing the Facebook Home launcher: Chat Heads. Hear all about it in our Chat Heads post from earlier today!
Have a peek below at some additional Facebook Home insight as well – don’t forget to check out the HTC First review we’ve got along with our full Facebook Home review too!
PSA: Why doesn’t Facebook Home work on my smartphone? is written by Chris Burns & originally posted on SlashGear.
This week at NVIDIA’s own GPU Technology Conference 2013, we’ve been introduced to no less than the company’s first end-to-end system: NVIDIA GRID VCA. The VCA part of the name stands for “Visual Computing Appliance”, and it’s part of the greater NVIDIA GRID family we were re-introduced to at CES 2013 earlier this year. This VCA is NVIDIA’s way of addressing those users – and SMBs (small-to-medium businesses) – out there that want a single web-accessible database without a massive rack of servers.
What is the NVIDIA GRID VCA?
The NVIDIA GRID VCA is a Visual Computing Appliance. In its current state, you’ll be working with a massive amount of graphics computing power no matter where you are, accessing this remote system over the web. As NVIDIA CEO Jen-Hsun Huang noted on-stage at GTC 2013, “It’s as if you have your own personal PC under your desk” – even though you’re in a completely different room.
You’re wireless, you’re in a completely different state – NVIDIA GRID VCA is basically whatever you need it to be. The first iteration of the NVIDIA GRID VCA will be packed as follows:
• 4U high.
• Sized to fit inside your standard server rack (if you wish).
• 2x highest-performance Xeon processors.
• 8x GRID GPU.
• 2x Kepler GPU.
• Support for 16 virtual machines.
You’ll be able to work with the NVIDIA GRID VCA system with basically any kind of computer, be it a Mac, a PC, mobile devices with Android, ARM or x86-toting machines, anything. With the NVIDIA GRID VCA, your remotely-hosted workspace shows up wherever you need it to. Each device you’ve got simply needs to download and run a single client going by the name “GRID client.” Imagine that.
If you’ve got a company using NVIDIA’s GRID, you’ll have access to mega-powerful computing on whatever machine you’ve got connected to it. One of the use-cases spoken about at GTC 2013 was some advanced video editing on-the-go.
Use Case 1: Autocad 3D and remote Video Editing
On-stage with NVIDIA’s CEO Jen-Hsun Huang spoke James Fox, CEO of the group Dawnrunner. As a film and video production company (based in San Francisco, if you’d like to know), workers at Dawnrunner use Adobe software and Autodesk. As Fox notes, “Earth Shattering is what gets talked about in the office.”
Fox and his compatriots use their own GRID configuration to process video, head out to a remote spot to show a customer, and change the video on the spot if the customer so wishes. Processing video at the monster sizes Dawnrunner works with still needs relatively large computing power – “Hollywood big”, we could call it – and NVIDIA’s GRID can make it happen inside the NVIDIA GRID VCA.
With the processing going on inside the VCA and shown on a remote workstation environment (basically a real-time window into the GRID), you could potentially show real-time Hollywood movie-sized video editing from your Android phone. In that one image of a situation you’ve got the power of this new ecosystem.
Use Case 2: Hollywood Rendering with Octane Render
Of course no big claim with the word “Hollywood” in it is complete without some big-name movie examples to go with it. At GTC 2013, NVIDIA’s CEO Jen-Hsun Huang brought both Josh Trank and Jules Urbach onstage. The former is the director of the upcoming re-boot (2015) movie The Fantastic Four (yes, that Fantastic Four), and the latter is the founder and CEO of the company known as Otoy.
Both men spoke of the power of GPUs: Trank spoke first about how people like him, the movie director, use CGI from the beginning of a film’s creation, with pre-visualization used to bid the project out to studios and get funding before there is any cash to be had. Meanwhile, Urbach spoke of how CGI like this can be rendered 40-100 times faster with GPUs than CPUs – and with that speed you’ve got a lot less energy spent and far fewer hours used for a final product.
With that, Urbach showed Otoy’s Octane Render (not brand new as of today, but made ultra-powerful with NVIDIA GRID backing it up). This system exists on your computer as a tiny app and connects your computer to a remote workstation – that’s where NVIDIA’s GRID comes in – and you’ll be able to work with massive amounts of power wherever you go.
Octane Render allows “hundreds or thousands” of GPUs in the cloud to be used by renderers. Shown on-stage was a pre-visualization of a scene from the original Transformers movies (which Otoy helped create), streamed in real time over the web from Los Angeles to the location of the conference: San Jose.
What they showed, it was made clear, is that the power of GPUs in this context cannot be denied. With the power of 112 GPUs at once, a high-powered, Hollywood-big scene could be rendered in a second where in the past it would have taken several hours. And here, once again, it can all be controlled remotely.
Cost
There are two main configurations at the moment for NVIDIA’s GRID VCA. The first works with 8 GPUs, 32GB of GPU memory, 192GB of system memory, a 16-thread CPU, and support for up to 8 concurrent users. The second is as follows – and this is the beast:
• GPU: 16 GPUs
• GPU Memory: 64 GB
• System Memory: 384 GB
• CPU: 32-thread CPU
• Number of users: up to 16 concurrent
If you’re aiming for the big beast of a model, you’re going to be paying $39,900 USD with a $4,800-a-year software license. If you’re all about the smaller of the two, you’ll be paying $24,900 USD with a $2,400-a-year software license.
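Using the prices quoted above, the first-year cost per concurrent seat actually favors the bigger box:

```python
# First-year cost per concurrent user: hardware plus one year of software license.
small = (24900 + 2400) / 8    # 8-user configuration
big = (39900 + 4800) / 16     # 16-user configuration

print(small, big)  # 3412.5 vs 2793.75 per seat
```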
Sound like the pocket-change salvation you’ve been waiting for? Let us know if your SMB will be busting out with the NVIDIA GRID VCA immediately if not soon, and be sure to let us know how it goes, too!
SlashGear 101: Remote Computing with NVIDIA GRID VCA is written by Chris Burns & originally posted on SlashGear.