104 Ways to Hilariously Ruin the Watchmen Movie

This is, without a doubt, the best Photoshop Contest we’ve ever run. We received over 300 entries for this one, and I pared it down to the 104 that really blew me away.

Because the entries were so amazing this week, I couldn’t pick just 3 winners. So, for this week only, we have six winners. Huzzah!

First Place — Frank Chezem
Second Place — Kaiser-Machead
Third Place — Jeff Fang
Fourth Place — Frank Chezem
Fifth Place — Joe Corsi
Sixth Place — Brook Boley
Thanks to everyone who entered! If your entry didn’t make it to the Gallery of Champions, don’t despair! We got a crazy number of entries this week, so try again next week!

Six Technologies That Passed America By

With America’s status as a technological superpower comes a tendency to occasionally straight-up ignore the rest of the world. For better or for worse, here are technologies we’ve all but completely missed out on.

Laserdiscs

When Laserdisc player production finally spun down a month or so ago, it wasn’t much of an occasion. I mean, aside from inspiring a little grade-school nostalgia and upsetting a hobbyist or three, the event wasn’t materially notable. For us, that is. It turns out that Laserdiscs were much more popular in Japan than America during their heyday—about 500% more popular.

Why? The Japanese success of the Laserdisc (or Videodisc, as they were marketed there) comes down to two things: money and anime. From launch, Laserdisc prices were lower in Japan than in most other markets, which accelerated adoption. Anime fans appreciated the format’s improved fidelity, which drove sales at the time and eventually led to the still-active secondhand LD market. Laserdisc players, though no longer produced, are still available in the shops of Akihabara and elsewhere. At a Best Buy in Akron? Not so much.

Nokia Phones

When Nokia does something interesting, we take notice. Otherwise, in the US the company exists in an awkward netherworld of ultra-high name recognition and almost infinitesimal relevance. To most Americans, Nokia looks like a budget-phone maker. To most of the rest of the world, they’re the undisputed king of cellphonery, and not just in name—they’re by far the largest manufacturer of handsets on the planet. They literally dwarf their competition, selling double the volume of their nearest competitor, Samsung.

By the numbers: Nokia moved 113 million mobile devices in the last quarter alone, their entry-level 1100 handset has sold over 200m units, and at one point the N95, a precocious, clunky do-it-all handset, topped the mobile phone sales charts in the UK. Where does the US stand in all of this? Of those 113 million mobile devices sold last quarter, just five million found their way to North America. Even the iPhone matched those numbers while RIM’s BlackBerry nearly doubled them. Nokia is the gadget equivalent of the BBC—most Americans know about it, but the rest of the world depends on it.

Mobile TV

I’m not talking about expensive, pixelated video-over-3G services here. No, I mean full-fledged digital TV streamed straight to your handset, PC or PMP. Brazil has it, South Korea has it, and of course, so does Japan. The tech used in Japan and Brazil is known as 1seg, and it broadcasts over UHF alongside regular HD content. In Japan, more than two thirds of new mobile phones support the standard, which is a part of daily life for many people. Here, it’s basically unheard of.

DMB is an alternative standard, targeted at a much wider audience. Developed in South Korea, the satellite and terrestrial versions of the tech (S-DMB and T-DMB, respectively) are already in widespread use there and T-DMB is being deployed across much of Western Europe—trials appear to be going fairly well. Unfortunately for us, the VHF and UHF bands used by the T-DMB standard have already been claimed by preexisting TV programming and the military, so don’t expect to see terrestrial TV on AT&T or Verizon phones anytime soon, though yours might be capable of the pay-for-play MediaFlo service that nobody uses.

Osaifu-Keitai, or, Your Phone Is Your Wallet

In much of the world, including the US of A, mobile payment systems have been ignored or abandoned after fitful starts. Not in Japan (if you’re noticing a trend here, good job!). Osaifu-Keitai, the e-wallet standard adopted by Japanese telecom heavyweights NTT DoCoMo, SoftBank and au, essentially renders wallets obsolete. Phones equipped with Osaifu-Keitai can be charged with money, download tickets for anything from a sporting event to a plane trip, serve as official identification or link to a credit card.

Due to uncertainties about demand for such a service and loads of red tape, no comparable standard has emerged stateside, and it’s a shame: If you can come to terms with the nebulous privacy issues associated with carrying so much private information on a losable device, it does seem like the plain, obvious and fundamentally good type of technological progress that is probably, with or without our assent, inevitable. Oh well.

Next-Gen Instant Messaging

AOL (emphasis on the A), burdened with decades-old stereotypes about its tech-illiterate users and a persistent association with both geriatrics and late-’90s Meg Ryan movies, doesn’t have the best public image. But they do still run the nation’s most popular messaging platform! AIM, despite being a vestige of a service that its parent company doesn’t really care much about anymore, is the de facto standard for messaging in the US (and Israel, strangely). As we saw earlier though, that doesn’t always mean much.

Worldwide AIM/ICQ/iChat numbers are massively outclassed by MSN, or Windows Live as it’s been called for the last few years. In China, the largest IM market, most people don’t bother with either, opting for the Tencent QQ service. Both were born a solid five years after AIM, and their extra features—mostly messaging add-ons meant to appeal to a younger set—are questionably useful. It’s not so much that sticking with AIM has left Americans on an inferior service, it’s that it has isolated us, in a small way, from the rest of the messaging world.

MiniDisc

The story of the MiniDisc epitomizes tech regionalism: A solid, capable contender for recordable audio format dominance, the MD was met with enthusiasm in Japan. It was extremely advanced for its time, combining fantastic, CD-like audio quality with the recording abilities of a cassette, all in a package that was more portable than either. Despite being introduced in the early ’90s, the format held up well against the first generation of MP3 players, which, with their limited capacities, slim feature sets and high prices, didn’t really provide a perceptible advantage over the venerable MD units. Sony had a solid product—and even a bit of a hit—on its hands.

At least, that’s how the story went in Tokyo. Despite Sony’s best efforts—and what seemed like an endless string of product revamps—the MiniDisc was never more than a marginal player in the US. Sure, it earned plaudits from audiophiles and musicians (check out the recording information for the thousands of concerts on Archive.org if you don’t believe me), but the format never took off, either as a recording medium or, due to risk-averse record companies and the high cost of the actual media, as a competitor for the CD. When MP3 players came of age, the MD’s door to America finally latched shut for good. Sony, of course, took a while to get the message, and Steve Jobs was laughing the whole time.

Kindle 2 Review: Sheeeyah, More Like Kindle 1.5

After spending a week with Amazon’s $360 Kindle 2, I’d like to say we were wrong about it not being a big step forward, but for better or worse, it’s the same Kindle as before.

The annals of gadgetry are littered with revisions that just aren’t meaningful, like the 3rd Gen iPod with its solid-state buttons, or the slimmer, lighter but substantially unchanged PSP-2000. But after waiting a year and change for Amazon to get serious about its Kindle platform—serious enough to keep the thing in stock—I was surprised at how banal the modifications were. Why didn’t they just lower the price of the $400 original to something like $300 or $250, and build more?

Let’s recap the new stuff:
• Slimmer rounded aluminum-backed body
• Smaller inward-clicking buttons
• Text-to-speech book reading
• A USB-based charger
• More memory and longer battery life
• A leather cover that locks on—now sold separately for $30

What’s not there:
• No SD card slot
• No rubber backing
• No sparkly sparklemotion cursor
• No free cover

Two Thanksgivings ago, I reviewed the first Kindle, calling it “lightweight, long-lasting, and easy-to-grip… in bed.” The same holds true for this Kindle. In fact, everything I liked about that Kindle is still the same: an E-Ink screen that’s easy on the eyes, fast EVDO downloads of books, super-long battery life (it really wasn’t a problem before), plenty of storage for books, and a nice service for buying new books, magazines and otherwise-free blog subscriptions.

Some people love the Kindle for all of the reasons above, and I still think it’s a marvelous product for a certain type of reader, a person who reads multiple books at once, and reads them in order, from page 1 to page 351, without skipping around.

Somewhere into my fourth or fifth book, I stopped reading Kindle 1, and the same basic issue hampered my enjoyment of literature in Kindle 2: You can’t jump around. There’s no way to read what actually counts as literature on a Kindle, because that takes the ability to leaf around, matching passages from different parts of the book, identifying key characters’ surreptitious first appearances, etc. This is something the codex lets people do very well, and it’s something no single-surface digital screen comes close to getting right, even when making it up partly with search, notes and bookmarks.

Amazon boasts 20% faster page turning on this new baby, but you can see in the video that page turning is still painfully slow, and would need to be 100 or 1000 times faster to mean anything. Going from Kindle 1 to Kindle 2, the experience stays the same—there are no new convenience features that actually help you read books more easily. The last one held several hundred books, this one holds well over 1000. The last one’s battery lasted nearly a week, this one lasts over a week. Big deal.

In the video below, you can see the most annoying features of the Kindle 2:

• It’s slow to wake from sleeping
• Page turning is slow and flashes inverted text every time
• The ridiculous computer voice with an Eastern European accent that is impossible to listen to for more than three paragraphs (at least you can stop and start it by pressing spacebar)

There’s no video for the best features of the Kindle 2 because they’re so apparent:
• The clear text on a non-flickering panel
• The compact size that can hold all the books you need
• The great battery life and internal storage for text-and-picture files
• The updated look meets even Jesus Diaz’s strenuous requirements for aesthetic awesomeness

You may be reading this as a slam on Amazon and Kindle, but the fact is, I am a proponent of pushing forward with the ebook concept. I think it’s still easier to read books on E-Ink screens than it is to read them on an iPhone’s LCD, and while there’s no perfect ebook reader, E-Ink and other electronic paper technologies do have an advantage in energy consumption.

Kindle remains by far the best dedicated ebook reader out there, and based on how often they sold out of original Kindles, Amazon will sell as many of these as they can make. I even think the soon-to-come ability to read Kindle content on phones will help Kindle sales rather than hurt them, because more affluent readers, finding more freedom to use their ebook purchases as they like, will want a Kindle as an option.

A mostly cosmetic upgrade, the Kindle 2 is just another step towards some revolution in reading that none of us, not even Amazon chief visionary Jeff Bezos, can yet see or understand. [Kindle 2 Product Page]

In Summary
• Still easy on the eyes
• Still nice and compact
• Even more internal storage and longer battery life
• No meaningful change from the first Kindle
• Still hard to read longer, more complex books
• Cost still too high for most people

How To: Rip Blu-ray Discs

Included digital copies are still the exception rather than the norm in the Blu-ray world. Lame. You’d like to rip those discs for playback elsewhere, right? But there is something you should know first.

And that is this: Ripping Blu-ray discs sucks. Hard. It takes forever, eats up a ton of hard drive space, and for all practical purposes requires software that isn’t free. It’s like trying to rip a DVD in 1999: computers still have a long way to go before this is easy.

But just because it’s hard doesn’t mean it’s impossible, and once your system is set up it’s something you can start before you go to bed and have finished for you in the morning. Here we’ve outlined exactly what you need to rip your 1080p Blu-ray discs (the ones you own, of course) and then convert the video into a more manageable file size for watching on a computer, phone, game console or PMP. Because hey, you own this movie, and you should be able to watch it on whatever device you want.

But you’ll have to earn that right. Let’s start this painful process, shall we?

What you’ll need:

• A Windows PC (the Blu-ray ripping process is, at the moment, Mac-unfriendly. I used Windows 7 Beta 64-bit and all the following software is Windows-only)

• AnyDVD HD (free fully-functional 21-day trial, $80 to keep) for ripping and decrypting BD discs

• RipBot264 (free) for transcoding from AVC (you’ll also need a few codecs to go along with it: .NET Framework 2.0, the avisynth and ffdshow codec packs, and the Haali media splitter)

• tsMuxeR (free) for muxing (may not be necessary)

• A Blu-ray drive (I used OWC’s Mercury Pro external)

• A ton of free hard drive space (80GB or so to be safe)

• A decent understanding of how video codecs and containers work (Matt’s Giz Explains has everything you need)

How it Works
AnyDVD HD is a driver that sits in the background, which automatically removes the AACS or BD+ security lock and the region code from any BD disc you load, allowing it to be ripped. The video on most Blu-ray discs is encoded in the MPEG4 AVC format in .m2ts files, so it will need to be transcoded from AVC to something else (like an H.264 MP4 file) for playback on other devices. MPEG4 AVC doesn’t have wide support in all of the best video transcoders we already love, like Handbrake. This makes finding a free and easy transcoding solution a little tougher, but thankfully RipBot264 seems competent.

You can then either transcode directly from the disc, or go the route I took and rip the disc to your hard drive before running it through the transcoder, which reduces the chance for errors. Give both a shot to find what’s easiest.

Thanks to poster Baldrick’s guide on the Videohelp.com forums and the folks at Doom9—these instructions are based on info found there. Check them out if you get stuck.

Rip Your BD Disc
Again, if you want to try transcoding directly from the disc, sacrificing speed and risking corruption, you can skip this part (except for step 1) and go to step 4.

1. First up, download and install all the necessary software: AnyDVD HD and RipBot264, which also requires .NET Framework 2.0, the avisynth and ffdshow codec packs, and the Haali media splitter. (All links lead to their Videohelp.com pages, a fantastic resource). These codecs, nicely enough, should give AVC decoding capabilities system wide, so apps like VLC and Windows Media Player should be able to play them without problems.

2. Fire up AnyDVD if it’s not running yet, and from the fox icon in the system tray, choose “Rip Video DVD to Harddisk.” Choose a save point where there’s a healthy 40-50GB free and start it a-rippin’. It’ll probably take around an hour.

3. When it’s done, open up the BDMV/STREAM directory and try to play the largest .m2ts in VLC or WMP. It should play fine with sound, but if anything’s fishy, you may want to try re-loading RipBot264’s required codecs or trying another AVC codec like CoreCodec’s CoreAVC. This is more paid software, but like AnyDVD, it comes with a free trial period. You need to be able to see and hear an .m2ts file normally during playback before you proceed.
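Since several of these steps come down to "find the biggest .m2ts in the rip," here’s a minimal sketch of that check in Python. The folder layout comes from the steps above; the example path is hypothetical.

```python
# Sketch: locate the main feature, i.e. the largest .m2ts stream
# in a ripped disc's BDMV/STREAM folder.
from pathlib import Path

def largest_m2ts(rip_root):
    """Return the biggest .m2ts file under BDMV/STREAM, or None."""
    streams = Path(rip_root) / "BDMV" / "STREAM"
    files = sorted(streams.glob("*.m2ts"), key=lambda f: f.stat().st_size)
    return files[-1] if files else None

# Hypothetical usage: largest_m2ts(r"D:\Rips\DarkKnight")
```

Whatever file this turns up is the one you’ll test in VLC and later feed to the transcoder.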

Transcode Your Rip
Now, the fun part.

4. Open up RipBot264. When you try to run RipBot264 the first time, it may say you haven’t installed ffdshow even if you have. If this is the case, open the RipBot264.ini file in Notepad and change “CheckRequiredSoftware=1” to “CheckRequiredSoftware=0” and save it.
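If you’d rather not hand-edit the file, the one-line change above can be scripted. The filename and key come from the step above; the install path in the usage comment is an assumption.

```python
# Sketch: flip CheckRequiredSoftware=1 to 0 in RipBot264.ini,
# as described in step 4. Point this at your actual install folder.
from pathlib import Path

def disable_software_check(ini_path):
    """Patch RipBot264.ini so it skips the ffdshow check."""
    ini = Path(ini_path)
    patched = ini.read_text().replace(
        "CheckRequiredSoftware=1", "CheckRequiredSoftware=0")
    ini.write_text(patched)
    return patched

# Hypothetical usage: disable_software_check(r"C:\RipBot264\RipBot264.ini")
```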

5. Click “Add” and select the largest *.m2ts file found in your ripped BD disc’s BDMV/STREAM folder. RipBot will then analyze it and find the various programs available to encode—you want the one that matches the runtime of your movie, and not one of the special features. RipBot will chew on this file for a long time, and hopefully when it’s done, will present you with this dialog:


6. If RipBot throws an error of any kind here, first make sure you’ve got a bunch of HD breathing room on the volume you’re using.

If errors still come up, you may have to mux your rip. To put that in English: Blu-ray discs have a lot of different files on them representing several different audio and video streams. The process of joining all of these disparate elements into a single stream (usually a .ts file) is called multiplexing, or muxing, and it’s necessary to do before transcoding. RipBot264 can do this on its own, but it has problems with certain discs. So if any of the above fails, download tsMuxeR, select the biggest .m2ts file in the BDMV/STREAM folder in your rip or on your disc, choose the appropriate language, and hit “Start Muxing.” You can then add the resulting .ts file to RipBot264 as the source.


7. Now you can choose how you want to convert the video. RipBot gives you presets for Apple TV, iPod or iPhone, PSP or a high-res file which can then be re-burned to a new BD disc. I chose the iPod/iPhone level.

8. Click “Properties”—here you can fine tune the output size of your video (I chose a nice 640×360 file) and preview it before you begin. MAKE SURE you preview your choices using the “Preview Script” button, because you don’t want to sit through the eternity of transcoding only to find that your dimensions are messed up and everything is in the wrong aspect ratio.

9. If all looks and sounds good, press OK, then “Start” and watch as your system transcodes the massive 1080p AVC stream into a new MP4 file. On my 2.53GHz MacBook Pro, it averages around 20fps, which is actually slower than real-time playback. Yuck. So you’ll want to set this and forget it.
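To see why "set it and forget it" is the right call, here’s the back-of-envelope math on a 20fps encode. The 152-minute runtime is an assumption (roughly a feature like The Dark Knight); film is typically ~24fps.

```python
# Back-of-envelope: how long a transcode takes at ~20fps when the
# source plays back at ~24fps. Runtime is an assumed movie length.
runtime_min = 152          # assumed feature-length runtime
fps_source = 24            # typical film frame rate
fps_encode = 20            # encode speed reported above

total_frames = runtime_min * 60 * fps_source
hours = total_frames / fps_encode / 3600
print(f"{total_frames} frames -> about {hours:.1f} hours to transcode")
```

In other words, a two-and-a-half-hour movie takes roughly three hours to crunch, before you even count the rip.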


10. Wake up the next morning, have your coffee, and check your output file. It should play beautifully in your media player of choice, and look crisp as a kettle chip. My 640×360 encode of The Dark Knight was around an even 1GB in the end, which is not bad at all. Copy it to your device of choice and enjoy.
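As a sanity check on that ~1GB figure, you can work out the average bitrate it implies. Again, the 152-minute runtime is an assumption.

```python
# What average bitrate does a ~1GB file over a ~152-minute movie
# imply? Runtime is an assumed feature-length figure.
size_gb = 1.0
runtime_s = 152 * 60
kbps = size_gb * 1024**3 * 8 / runtime_s / 1000
print(f"Average bitrate: ~{kbps:.0f} kbps")
```

Somewhere under 1 Mbps, which is right in the comfortable range for a 640×360 H.264 encode.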

As you can see, this process is a bitch. It takes an hour to rip the disc, another hour and change for all the software to read your rip and get ready, then an amount of time equal to or even longer than the movie itself to transcode it, depending on your system. So hey, movie studios: how about making digital copies standard features on your BD discs so we don’t have to go through this, mmkay?

Note to Mac Users
While the BD-ripping world is largely a Windows one, you may want to fiddle around with DumpHD, a ripping tool written in Java that supposedly works with OS X. I couldn’t get it to work, but you can read more here to try for yourself.

If you manage to rip your BD disc, you’ll then have to find an AVC converter that works with OS X. Most of these are paid and I haven’t used any, but they exist. If anyone has had luck with a particular tool, let us know.

This method was tested and worked perfectly for me, but if you’re a video jockey and know of any additional software or methods that I didn’t cover that may help, PLEASE tell us about it in the comments. The knowledge dropped in the comments of these Saturday how-tos is a huge help to everyone, so please be constructive and provide links to other tools you’ve had success with. Have a good weekend everyone!

Inside the Mind of Microsoft’s Chief Futurist

If I encountered Craig Mundie on the street, met his kind but humorless gaze and heard that slight southern drawl, I’d guess he was a golf pro—certainly not Microsoft’s chief futurist.

As chief research and strategy officer at Microsoft, Mundie is a living portal of future technology, a focal point between thousands of scattered research projects and the boxes of super-neat products we’ll be playing with 5 years, 20 years, maybe 100 years from now. And he’s not allowed to even think about anything shipping within the immediate 3 years. I’m pretty sure the guy has his own personal teleporter and hoverboard, but when you sit and talk to him for an hour about his ability to see tomorrow, it’s all very matter of fact. So what did we talk about? Quantum computing did come up, as did neural control, retinal implants, Windows-in-the-cloud, multitouch patents and the suspension of disbelief in interface design.

Seeing the Future
Your job is to look not at next year or next five years. Is there a specific number of years you’re supposed to be focused on?

I tell people it ranges from about 3 to 20. There’s no specific year that’s the right amount, in part because the things we do in Research start at the physics level and work their way up. The closer you are to fundamental change in the computing ecosystem, the longer that lead time is.

When you say 3 years, you’re talking about new UIs and when you say 20 you’re talking about what, holographic computing?

Yeah, or quantum computing or new models of computation, completely different ways of writing programs, things where we don’t know the answer today, and it would take some considerable time to merge it into the ecosystem.

So how do you organize your thoughts?

I don’t try to sort by time. Time is a by-product of the specific task that we seek to solve. Since it became clear that we were going to ultimately have to change the microprocessor architecture, even before we knew what exactly it would evolve to be from the hardware guys, we knew they’d be parallel in nature, that there’d be more serial interconnections, that you’d have a different memory hierarchy. From roughly the time we started to the time that those things will become commonplace in the marketplace will be 10 to 12 years.

Most people don’t really realize how long it takes from when you can see the glimmer of things that are big changes in the industry to when they actually show up on store shelves.

Is it hard for you to look at things that far out?

[Chuckles] No, not really. One of the things I think is sort of a gift or a talent that I have, and I think Bill Gates had to some significant degree too, is to assimilate a lot of information from many sources, and your brain tends to work in a way where you integrate it and have an opinion about it. I see all these things and have enough experience that I say, OK, I think that this must be going to happen. Your ability to say exactly when or exactly how isn’t all that good, but at least you get a directional statement.

When you look towards the future, there’s inevitability of scientific advancement, and then there’s your direction, your steering. How do you reconcile those two currents?

There are thousands of people around the world who do research in one form or another. There’s a steady flow of ideas that people are advancing. The problem is, each one doesn’t typically represent something that will redefine the industry.

So the first problem is to integrate across these things and say, are there some set of these when taken together, the whole is greater than the sum of the parts? The second is to say, by our investment, either in research or development, how can we steer the industry or the consumer towards the use of these things in a novel way? That’s where you create differentiated products.

Interface Design and the Suspension of Disbelief
In natural interface and natural interaction, how much is computing power, how much is sociological study and how much is simply Pixar-style animation?

It’s a little bit of all of them. When you look at Pixar animation, something you couldn’t do in realtime in the past, or if you just look at the video games we have today, the character realism, the scene realism, can be very very good. What that teaches us is that if you have enough compute power, you can make pictures that are almost indistinguishable from real life.

On the other hand, when you’re trying to create a computer program that maintains the essence of human-to-human interaction, then many of the historical fields of psychology, people who study human interaction and reasoning, these have to come to the fore. How do you make a model of a person that retains enough essential attributes that people suspend disbelief?

When you go to the movies, what’s the goal of the director and the actors? They’re trying to get you to suspend disbelief. You know that those aren’t real people. You know Starship Enterprise isn’t out there flying around—

Don’t tell our readers that!

[Grins] Not yet at least. But you suspend disbelief. Today we don’t have that when people interact with the computer. We aren’t yet trying to get people to think they’re someplace else. People explore around the edges of these things with things like Second Life. But there you’re really putting a representative of yourself into another world that you know is a make-believe environment. I think that the question is, can we use these tools of cinematography, of human psychology, of high-quality rendering to create an experience that does feel completely natural, to the point that you suspend disbelief—that you’re dealing with the machine just as if you were dealing with another person.

So the third component is just raw computing, right?

As computers get more powerful, two things happen. Each component of the interaction model can be refined for better and better realism. Speech becomes more articulate, character images become more lifelike, movements become more natural, recognition of language becomes more complete. Each of those drives a requirement for more computing power.

But it’s the union of these that creates the natural suspension of disbelief, something you don’t get if you’re only dealing with one of these modalities of interaction. You need more and more computing, not only to make each element better, but to integrate across them in better ways.

When it comes to solving problems, when do you not just say, “Let’s throw more computing power at it”?

That actually isn’t that hard to decide. On any given day, a given amount of computing costs a given amount of money. You can’t require a million dollars’ worth of computer if you want to put it on everybody’s desk. What we’re really doing is looking at computer evolutions and the improvements in algorithms, and recognizing that those two things eventually bring new problem classes within the bounds of an acceptable price.

So even within hypothetical research, price is still a factor?

It’s absolutely a consideration. We can spend a lot more on the computing to do the research, because we know that while we’re finishing research and converting it into a product, there’s a continuing reduction in cost. But trying to jockey between those two things and come out at the right place and the right time, that’s part of the art form.

Hardware Revolutions, Software Evolutions
Is there some sort of timeline where we’re going to shift away from silicon chips?

That’s really a question you should ask Intel or AMD or someone else. We aren’t trying to do the basic semiconductor research. The closest we get is some of the work we’re doing with universities exploring quantum computers, and that’s a very long term thing. And even there, a lot of work is with gallium arsenide crystals, not exactly silicon, but a silicon-like material.

Is that the same for flexible screens or non-moving carbon-fiber speakers that work like lightning—are these things you track, but don’t research?

They’re all things that we track because, in one form or another, they represent the computer, the storage system, the communication system or the human-interaction capabilities. One of the things that Microsoft does at its core is provide an abstraction in the programming models, the tools that allow the introduction of new technologies.

When you talk about this “abstraction,” do you mean something like the touch interface in Windows 7, which works with new and different kinds of touchscreens?

Yeah, there are a lot of different ways to make touch happen. The Surface products detect it using cameras. You can have big touch panels that have capacitance overlays or resistive overlays. The TouchSmart that HP makes actually is optical.

The person who writes the touch application just wants to know, “Hey, did he touch it?” He doesn’t want to have to write the program six times today and eight times tomorrow for each different way in which someone can detect the touch. What we do is we work with the companies to try to figure out what is the abstraction of this basic notion. What do you have to detect? And what is the right way to represent that to the programmer so they don’t have to track every activity, or even worse, know whether it was an optical detector, a capacitive detector or an infrared detector? They just want to know that the guy touched the screen.
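Mundie’s point about abstraction can be sketched in code: the application asks "did he touch it?" while a per-technology driver hides how the touch was detected. This is a hypothetical illustration; all class and function names are invented, not any real Microsoft API.

```python
# Hypothetical sketch of the abstraction described above: app code
# is written once against TouchSensor, while each driver hides the
# detection hardware (camera, capacitive, optical, infrared...).
from abc import ABC, abstractmethod

class TouchSensor(ABC):
    """One driver per detection technology."""
    @abstractmethod
    def poll(self):
        """Return the (x, y) of a touch, or None if nothing happened."""

class CapacitiveSensor(TouchSensor):
    """Invented example driver; a camera-based one would subclass too."""
    def __init__(self, pending_events):
        self.pending = list(pending_events)
    def poll(self):
        return self.pending.pop(0) if self.pending else None

def on_touch(sensor):
    """Application code: it never knows which hardware it's talking to."""
    point = sensor.poll()
    return f"touched at {point}" if point else "no touch"

print(on_touch(CapacitiveSensor([(10, 20)])))  # touched at (10, 20)
```

Swapping in a different sensor subclass changes nothing in `on_touch`, which is exactly the point: the programmer writes the touch application once, not "six times today and eight times tomorrow."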

Patents and Inventor’s Rights
You guys recently crossed the 10,000-patent line—is that all your Research division?

No, that’s from the whole company. Every year we make a budget for investment in patent development in all the different business groups including Research. They all go and look for the best ideas they’ve got, and file patents within their areas of specialization. It’s done everywhere in the company.

So, take multitouch, something whose patents have been discussed lately. When it comes to inevitability vs. unique product development, how much is something like multitouch simply inevitable? How much can a single company own something that seems so generally accepted in interface design?

The goal of the patent system is to protect novel inventions. The whole process is supposed to weed out things that are already known, things that have already been done. That process isn’t perfect—sometimes people get patents on things that they shouldn’t, and sometimes they’re denied patents on things they probably should get—but on balance you get the desired result.

If you can’t identify in the specific claims of a particular patent what is novel, then you don’t get a patent. Just writing a description of something—even if you’re the first person to write it down—doesn’t qualify as invention if it’s already obvious to other people. You have to trust that somehow obvious things aren’t going to be withheld from everybody.

That makes sense. We like to look at patents to get an idea of what’s coming next—

That’s what they were intended to do; that was the deal with the inventor: If you’ll share your inventions with the public in the spirit of sharing knowledge, then we’ll give you some protection in the use of that invention for a period of time. You’re rewarded for doing it, but you don’t sequester the knowledge. It’s that tradeoff that actually makes the patent system work.

Windows in the Cloud, Lasers in the Retina
Let’s get some quick forecasts. How soon until we see Windows in the cloud? I turn on my computer, and even my operating system exists somewhere else.

That’s technologically possible, but I don’t think it’s going to be commonplace. We tend to believe the world is trending towards cloud plus client, not timeshared mainframe and dumb display. The amount of intrinsic computing capability in all these client devices—whether they’re phones, cars, game consoles, televisions or computers—is so large, and growing larger still exponentially, that the bulk of the world’s computing power is always going to be in the client devices. The idea that the programmers of the world would let that lie fallow, wouldn’t try to get any value out of it, isn’t going to happen.

What you really want to do is figure out which components are best solved in the shared facility and which are best computed locally. We do think that people will want to write arbitrary applications in the cloud. We just don’t think that’s going to be the predominating usage of it. It’s not like the whole concept of computing is going to be sucked back up the wire and put in some giant computing utility.

What happens when the processors are inside our heads and the displays are projected on the inside of our eyeballs?

It’ll be interesting to see how that evolution takes place. It’s clear that embedding computing inside people is starting to happen fairly regularly. These are special-purpose processors, not general processors. But there are now cochlear implants, and people exploring ways to give those who’ve lost sight some kind of vision, or at least a way to detect light.

But I don’t think you are going to end up with some nanoprojector trying to scribble on your retina. To the extent that you could posit that you’re going to get to that level, you might even bypass that and say, “Fine, let me just go into the visual cortex directly.” It’s hard to know how the man-machine interface will evolve, but I do know that the physiology of it is possible and the electronics of it are becoming possible. Who knows how long it will take? But I certainly think that day will come.

And neural control of our environment? There’s already a Star Wars toy that uses brain waves to control a ball—

Yeah, it’s been quite a few years since I saw some of the first demos inside Microsoft Research, where people would have a couple of electrical sensors on their skull to detect enough brain-wave activity to do simple things like turn a light switch on and off reliably. And again, these are non-invasive techniques.

You’ll see the evolution of this come from the evolution of diagnostic equipment in medicine. As people learn more about non-invasive monitoring for medical purposes, what gets created as a byproduct is non-invasive sensing that people can use for other things. Clearly the people who will benefit first are those with physical disabilities—you want to give them a better interface than just eye-tracking on screens and keyboards. But each of these things is a godsend, and I certainly think that evolution will continue.

I wonder what your dream diary must look like—must have some crazy concepts.

I don’t know, I just wake up some mornings and say, yeah, there’s a new idea.

Really? Just jot it down and run with it?

Yeah, that’s oftentimes the way it is. Just, wasn’t there yesterday, it’s there today. You know, you just start thinking about it.

The 17-Inch MacBook Pro Review

When Apple grandly updated its notebook line to the new unibody design, the 17-inch MacBook Pro, Apple’s granddaddy of mobile computing, was left behind. Now the 17-inch model joins its siblings—with promising bonus features.

Design

With nothing to scale this image, it’s nearly impossible to tell the new 17-inch MBP from the 13- or 15-inch unibody Macs. From the outside, it’s the same thing, only bigger. At first it’s a little intimidating to see such a large, unadorned block of metal. But at 6.6 lbs, it’s actually not as heavy in your hands as you’d expect. And at 0.98 inches thick, it’s only ever so slightly thicker (by 0.03 inches) than the other two MacBooks.

Apple will tell you that the MBP17 is the thinnest, lightest 17-inch notebook in the world. We’ll tell you that for a monster of a laptop, it manages to not be too monstrous. The 17-inch (1920×1200) screen is a sharp, contrasty and colorful panorama, but it’s the little touches that make the MBP17 manageable: The system’s near-silent operation (using a 256GB SSD instead of a hard drive) is almost unnerving. Its underside gets warm, but never hot. And the unibody design makes particularly good sense in this larger size, as the wide chassis does not flex to your grip as you might expect.

The battery is one of the only components that’s significantly different from that of the smaller machines. Striving for 8 hours of battery life, Apple designed a power pack that screws right into the chassis. (Lots more on that topic below.)

What’s missing, however, is the underside hatch that made for easy hard drive and battery replacement. This smart design feature, recently introduced in Apple’s 13- and 15-inch unibody laptops, has been replaced by one series of screws to remove the bottom panel, and another to remove the battery. Removing a few screws is by no means a horrendous exercise, but we can’t help but feel that it’s a step in the wrong direction. The most spend-happy pro users will be the most likely to crack open their laptops—so this design choice will likely annoy a key part of the MBP17’s target audience.

What’s Different About It?

Compared to the MBP15

• Supports 8GB of RAM; the MBP15 only supports 4GB

• Includes a 256GB SSD option; the MBP15 tops out at 128GB

• The MBP17 includes five speakers with a wider frequency response

• There’s one extra USB port (3 total)

• Slightly faster processor options

(note: shot comparison of 13-inch model)

Compared to the old 17-inch MBP

• 40% larger battery (95WH vs. 68WH)

• Glossy and matte screen options are now available

• The screen has equal resolution, but a 60% wider color gamut

• Unibody structure, of course

Performance

The MBP17 features a 2.66 or 2.93GHz processor, up to 8GB of RAM and dual Nvidia 9400M (integrated) and 9600 (discrete) graphics cards. A 320GB 5400RPM hard drive comes standard, but that can be upgraded to a 320GB 7200RPM drive or a 128GB/256GB solid-state drive. (Note: There’s no option for a 500GB hard drive, though they are readily available if you want to swap one in.)

The model we tested was fully loaded, with a 2.93GHz processor, 8GB RAM and 256GB SSD.

Because the MBP17 is so similar to the 15 internally, we’re going to point you in the direction of our last review for benchmarks on the dual Nvidia 9400M and 9600 graphics cards. We also ran Xbench and uploaded the predictably impressive results to their database. However, one feature we wanted to be sure to check out was the new 256GB SSD option, a drive made by Toshiba. It’s a $750 upgrade that we were able to test in our review model.

SSD Speed Benchmarks:

Against the stock drive that comes with MBPs, the speed gains are obvious. However, the SSD market is still very young. There are only a handful of drives out there, so how do you know if Apple’s $750 offering is price competitive?

Searching through the Xbench results forums, we found a user who tested a G.Skill Titan 256GB SSD on a unibody Mac. It’s not rated to be as fast as Samsung’s $1000 SSD gold standard, but according to these benchmarks, it’s still considerably faster than the drive Apple will sell you. The catch? The Titan runs $500, or $250 less than Apple’s bundled Toshiba. In other words, as with most upgrades, you’re still better off going through a third party for your SSD.

The other bonus of SSDs is how quickly they boot. From the picture, you can see that our MBP17 booted in 31 seconds, despite my having a few hundred icons on the desktop. The MBP15 (normal hard drive, 4GB RAM) took about 90 seconds to load a similar configuration, or “three times longer” in marketing speak.

Battery Life

Everything so far about the new MBP17 is all well and good, but we think there’s one claim in particular that’s going to interest consumers the most: A 7-8 hour battery life*.

*Assuming screen at half brightness, Wi-Fi on, light browsing, light word processing (so no Bluetooth but otherwise a standard configuration). 8 hours on integrated graphics, 7 hours with more beefy discrete GPU.

Indeed, the MBP17’s battery is huge. It takes up roughly the whole bottom half of the computer’s underside. To make the battery as big as possible, Apple removed even the battery’s removal mechanism. Apple’s lithium polymer pack screws in and promises a lifespan of 1000 complete charge cycles—which also means 2000 half recharges or 4000 quarter recharges—before the battery depletes to 80% capacity.
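The cycle math works because Apple counts complete charge cycles, not plug-ins: two 50% top-ups, or four 25% top-ups, each add up to one full cycle. A quick sketch of that accounting (the function name is ours, just for illustration):

```python
def full_cycles(recharges, depth):
    """depth = fraction of capacity restored per recharge (0..1)."""
    return recharges * depth

print(full_cycles(1000, 1.0))   # 1000.0 complete charges
print(full_cycles(2000, 0.5))   # 1000.0 — same wear, per this accounting
print(full_cycles(4000, 0.25))  # 1000.0
```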

And while we didn’t have the time to test Apple’s 1000 recharge claim, we were able to run some battery tests.

First we put the system up against a day of blogging. This test was admittedly harder than Apple’s cushy benchmarking, but I wanted to see how it would stand up to true pro use. So with the screen just a hair above half brightness, Wi-Fi on, Bluetooth off, backlit keyboard on, discrete graphics on, heavy web browsing and occasional Photoshop work, we achieved 3 hours 57 minutes of run time.

Should we be pissed? After all, Apple offers 7-8 hours in their ads! That’s your call. In truth, we’ve found that most laptops hit about half their rated battery life under real world conditions (cough, netbooks, cough). If we can only cover our ears and hum through Apple’s latest marketing campaign, we’re actually fairly pleased with about 4 hours of heavy use from a fully loaded 17-inch laptop—especially since that metric includes no real compromises to our workflow.

We also wanted to simulate watching a movie on the plane. So we played back an MPEG4 with the screen at half brightness, discrete graphics off, backlit keyboard off, Wi-Fi off, Bluetooth off and headphones in. We received 4 hours 39 minutes of run time. That’s nearly two hours longer* than we received from the MBP15, and 2 hours 30 minutes longer than we received from the MB13. That’s basically the difference between watching one movie and watching two.

*The previous MBP tests had Wi-Fi on, the backlit keyboard on and speakers on. These alterations should account for a small amount of the increase, but by no means a majority. The 17-inch unit also has an SSD, but these non-spinning drives don’t necessarily mean power savings.

For the Lazy Readers Needing a Summary

A 17-inch notebook has never been designed for the mainstream consumer. But then again, nothing about this MacBook Pro is aiming for the mainstream. It’s a laptop that starts at $2800, and our fully loaded test model runs a hair over $5,000. Its screen is as big as most CRT monitors from just a few years back.

The thing should feel like a beast on the couch, but it actually doesn’t. It’s almost frightening how quickly you adjust, appreciating the extra screen space while forgetting that this system is a “laptop” in name only.

If you can get over the purported 8 hours of battery life and settle for longevity around half that number, you’ll be welcomed with a laptop that feels like a desktop but is actually a laptop. It’s a Cadillac that you can just about park, a triple cheeseburger in the bun of a double, a stocky man in a well-tailored suit. And we’re liking it. We just can’t help but ask, why can’t Apple fit a 256GB SSD or 8GB of RAM—or even a 4+ hour battery—into a 15-inch MBP?



• It’s a big honking computer in a smallish package

• As with the other unibody systems, the MBP17 runs cooler and quieter than past MacBooks

• Battery life is reasonable, but will fall short for pro users looking for a true day of use

• Apple’s Toshiba SSD upgrade is pricey for its performance

• There’s no easy pop-off bottom panel like in other unibody models

External Sources [ifixit, Xbench]

Giz Explains: Why Lenses Are the Real Key to Stunning Photos

When most of us talk digital cameras, we talk megapixels, ISO, image noise, shot-per-second speed and image processing. We’re tech geeks. But really, none of that stuff matters as much as your camera’s lens.

The lens is, after all, your camera’s eyeball—the image sensor or film can only record what comes in through the lens. It’s what defines the picture’s perspective, clarity and way more.

Lenses are actually a really complicated thing to talk about—if your job was to steer photons through tunnels of stretched glass, people would call you complicated too—so we’re gonna try to keep it to the field basics you should know to get around, rather than dive into the crazy physics and mathematical ratios and stuff.

Lens Terminology
Before we get into the basic lens types, you should know the two major numbers you’re looking at when you talk about lenses: focal length and aperture.

Focal length is the distance between the optical center of the lens and the point where it focuses the light coming into the lens (when a shot is in focus, that’s the image sensor or film). The diagram above, from Cambridge In Colour, shows, very simply, what focal length refers to, and how it affects your pitchas. Here’s another pretty excellent, easy-to-understand explanation, with pictures showing the results of using different focal lengths on the same shot.

Practically, what you need to know is that focal length is measured in millimeters, and that’s where you get, say, an 18-55mm lens, a 400mm telephoto or the 28-560mm lens found in a super-zoom camera. (You probably know this, but when you see “20x zoom lens,” the spec refers to the ratio of the longest focal length to the shortest—so 560 divided by 28.) Basically, the longer the focal length, the more magnified or “zoomed in” your photo can be.
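That zoom-spec math is simple enough to sanity-check yourself. A quick sketch (the function name is ours, just for illustration):

```python
# A "20x zoom" just means the longest focal length divided by the shortest.
def zoom_ratio(short_mm, long_mm):
    return long_mm / short_mm

print(zoom_ratio(28, 560))  # the 28-560mm super-zoom above: 20.0
print(zoom_ratio(18, 55))   # a typical kit lens: roughly 3x
```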

Aperture is the other major spec on a lens, and something you deal with most on DSLRs. The aperture is the hole that actually lets the light into the camera, and you can make it bigger or smaller. The size of the hole is expressed in terms of F-stops, which you’ll see written as F/2.8 or F2.8 or F8 or F11 or whatever.

The bigger the F number, the smaller the aperture, or hole. The smaller the number, the bigger the hole, and the more light it lets in. That’s good because you can shoot with a faster shutter speed, so you don’t get blurry photos, and it helps in low light: since more light can get through, you’re not forced to choose between shooting dark, blurry things and excessively grainy photos from cranking up the ISO (light sensitivity) to compensate. So, when someone’s talking about a “fast” lens, they’re talking about one with a big aperture, like F/1.8—easy to remember: you can shoot with faster shutter speeds in less light.
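To put numbers on the “more light” claim: the light a lens admits scales with the area of the aperture, which goes as one over the f-number squared. A hedged sketch of that relationship (our own helper functions, not anything from a camera spec):

```python
import math

def light_ratio(f_slow, f_fast):
    """How many times more light the faster (smaller) f-number admits."""
    return (f_slow / f_fast) ** 2

def stops_between(f_slow, f_fast):
    """The same gap in full stops (each stop doubles the light)."""
    return math.log2(light_ratio(f_slow, f_fast))

print(light_ratio(5.6, 2.8))    # 4.0x the light
print(stops_between(5.6, 2.8))  # 2.0 stops
```

This is why an F/1.8 prime feels so much brighter than an F/4 kit zoom: it's gathering roughly five times the light.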

With a big aperture, you also have a shallower depth of field—subjects in focus are sharp, but everything around them is soft and blurry. A tighter aperture (higher F-stop number) keeps more of the scene in focus at once, as you can see in the diagram above, combined from Wikipedia. There’s more on depth of field here. Overall, we’re staying on the easy-to-swallow side, but if you’ve really got a hankering for F-stop knowledge, here’s a crazy detailed explanation.

Lens Types
Having fun yet? There are a few basic types of lenses, and of course, a whole bunch of specialized ones beyond that, like macro or tilt lenses. But here are the basics.

A normal lens is one with a perspective that looks a lot like the perspective of the human eye. With a 35mm or full-frame camera, that’s about a 50mm lens, though it varies depending on the size of the film or image sensor. For instance, this 35mm Nikon lens is for their DX cameras, DX meaning the sensor isn’t “full” (equal to 35mm film). When that lens is attached to a DX camera, it’s equivalent to a 50mm lens on a full-frame camera—making it normal.
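That DX-to-full-frame conversion is just a multiplication by the sensor's crop factor. A sketch (the ~1.5x default is the usual figure for Nikon DX and is our assumption here; Canon's APS-C bodies use ~1.6x):

```python
# Full-frame equivalent focal length on a cropped sensor.
# crop_factor=1.5 is the assumed Nikon DX figure; Canon APS-C is ~1.6.
def full_frame_equivalent(focal_mm, crop_factor=1.5):
    return focal_mm * crop_factor

print(full_frame_equivalent(35))  # 52.5 — roughly "normal," like the 35mm DX lens above
print(full_frame_equivalent(18))  # 27.0 — a kit zoom's wide end isn't that wide on DX
```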

A wide-angle lens is, most basically, one with a focal length that’s way shorter than a normal lens (which, again, varies depending on the size of the film or sensor). Wide angles are useful for taking wide shots—panoramas, or squeezing a huge group of people into a single picture without standing 10 light years away. You can also do neato distortion tricks—a fisheye is just a crazy kind of wide-angle lens. Example Image: Ekilby/Flickr

A telephoto lens is one with a really long focal length (like 400mm). Since they’re designed like telescopes, they’re physically shorter than their focal lengths would suggest, but they can still get pretty damn massive. They’re good for shooting stuff far, far away. Example Image: Shiny Things/Flickr

A prime lens is just one with a fixed focal length—you can’t zoom in or out—and typically primes produce sharper pictures than all but the priciest zoom lenses. Any of the above lens types can be a prime lens or a zoom, covered below. This fisheye is a prime lens.

A zoom lens is one you can adjust the focal length on—zoom in and out—so you can shoot a variety of stuff with a single lens. The aperture tends to vary based on the focal length, unless you get a really pricey zoom lens that’s also “fast.”

Lens Brands and Compatibility
But even looking at one company at a time, lenses are complicated and sticky. Take Canon, for instance. They’ve got a million different kinds of lens mounts (where the camera and lens fit together) for their single-lens reflex cameras, depending on how far back in time you go. Currently they’ve got two major kinds of lens mounts: EF (electro-focus, because the focusing motor is built into the lens) and EF-S. The latter is for their entry-level to mid-range DSLRs only, because it’s made for their smaller (not full-frame, i.e., not 35mm-equivalent) image sensors. Standard EF lenses will work on cameras with an EF-S mount, but EF-S lenses won’t work on cameras with a regular EF mount. And before that, there was the FD mount, which totally doesn’t work on DSLRs without an adapter.

Nikon isn’t quite as bad here—they’ve had the same F-mount for over 40 years, so all their lenses will physically fit on the camera. But with their DSLRs, you’ve gotta watch out for their FX lenses (full-frame lenses, like for the D700) vs. their DX lenses (like Canon’s EF-S, meant for cameras with smaller APS-C sensors). When used on full-frame cameras, DX lenses will block out the corners of the picture, since they’re designed to cover a smaller image area. But overall, with Nikon you have the advantage of being able to use older lenses in a way you can’t with Canon gear. Ken Rockwell has a comprehensive tome about Nikon lenses and types for more.

The High Cost of Optics
Okay, great. Here’s a real question: Why are lenses so goddamned expensive? Well, as Steve Heiner, Nikon SLR-division technical marketing manager, told us, “You’re paying for materials and the process of creating the lens,” which, as you might guess, improves image quality. Faster apertures (which require larger glass elements in pro zoom lenses), heavier materials like metal for durability, and touches like a nano-crystal coating that minimizes reflections for low-light shooting are what make lenses cost hundreds or thousands of dollars. And as a rep from Canon told us, lenses don’t really get cheaper over time the way most other mechanical components do. Precision optical glass just doesn’t work that way.

Materials are also what separate crummy lenses from good ones, which is why cheap lenses in cellphones suck—they’ve gotta be cheap, really tiny and really light and well, you can’t change physics—and why even cheap DSLR lenses aren’t as good as expensive-as-hell ones. Update: Daniel pointed out this pretty excellent video showing how lenses are made, which shines more light on why they’re so damn pricey:

At the same time, there is a lot of progress happening in lens tech—look at all the ultra wide-angle lenses popping up in point-and-shoots now. Canon says that’s because you’ve got smaller image sensors (which, as we noted above, change the relation of the focal length), more aspherical lens elements (which are cheaper to make), a new kind of ultra-high-refractive-index aspherical optical glass (uhhhh, don’t ask me) and the miniaturization of mechanical parts like AF motors.

There’s a lot we had to leave out, like chromatic aberration and lens flare, but we hope we gave you a pretty good starting point to learn about lenses. Real camera pros, feel free to leave more in the comments.

Still something you wanna know? Send any questions about lenses, upskirts, or crazy weird Japanese photographers who swarm cosplayers to tips@gizmodo.com, with “Giz Explains” in the subject line. Also, thanks to Nikon for the lens diagrams!

Canon 5D Mark II vs. Nikon D700 Review Shoot-Out

For the last few months, we’ve been shooting with the two hottest cameras on the market. Lucky us. If you’ve been eyeing either one of these for purchase, here’s everything you need to know.

Camera makers love to invent new categories. And while that can often lead to endless bloat, the Canon 5D Mark II and the Nikon D700 represent a sweet spot that had never been hit before—the semi-pro body with a full-frame sensor. And it’s the category with the most bang for buck we’ve seen to date.

Yes, let’s just get this out of the way: Both the D700 and the 5D Mark II give you more for your dollar in terms of features, image quality and overall excellence than anything else we’ve used. Period. We know not everyone is prepared to drop $2,000 to $3,000 on a camera body these days, but if you’re thinking of investing for the long haul and, more importantly, have a good collection of either Nikon or Canon lenses, these are the two cameras you want to look at.

Why? Because they give you almost everything from Nikon and Canon’s uber-pro top end for a whole lot less, most importantly the full-frame sensor (FX in Nikon parlance). With a sensor the same size as a piece of 35mm film, your old Nikon or Canon glass will produce beautiful results on these new bodies (assuming they’re new enough to autofocus and couple to the cameras’ meters). And if you don’t have a collection built up already, your choices for new lenses will be significantly more exciting without an APS-C sensor’s crop factor (1.5x for Nikon’s DX, 1.6x for Canon) changing their effective focal lengths.

The sensors in these two cameras are also responsible for their absolutely stunning high-ISO sensitivity performance—if you would have told me a few years ago that I could get 100 percent usable and almost noise-free shots in the dark at ISO 4000 with hand-holdable shutter speeds, I would have laughed in your Nostradamus-looking face. But that’s the reality here, and it’s awesome.

But of the two, which to choose? Now that’s the question, isn’t it. Here we’ll share what we’ve learned from shooting with the 5D Mark II and D700, for work and for play, and hopefully you’ll be able to make your own call.

Image Quality/Sensor Sensitivity
Again, both of these cameras will blow your mind with their high-ISO performance. Both go up to a ridiculous 25,600 ISO rating. The magic does not lie in their gaudy top-range though, which as you can see in our galleries below is still prit-tay, prit-tay noisy. No, the crazy thing here is that with both of these DSLRs, you can shoot at 3200, 4000, even 6400 in the right light and still have photos that look practically noise-free on screen. That’s just crazy, and you can’t accurately describe what this means to you as a photographer until you’ve shot your friends—handheld at quick enough shutter speeds—around a candle-lit table, and gotten photos that look absolutely gorgeous. Before, it took a crazy expensive lens to even come close to this, and even then, sensors (or even high-ISO film) couldn’t keep up.

With the 5D Mark II and the D700, you’re basically shooting with night vision. Like I was doing here at Snowscrapers a few weeks back. As you can see, there are floodlights, but it’s dark. These guys are moving fast. But I can crank up the ISO high enough to pan with them without blurring them out, and grab stuff like this, without the sky turning into a snowstorm of noise.

Let’s compare the high-ISO range of both cameras head-to-head, shall we:

As you can see from these unprocessed (save for JPEG conversion and resizing with Aperture) RAW files from each camera, with high-ISO noise reduction at its highest setting on both, the D700 has a slight edge. The 5D Mark II’s higher resolution leaves lots of room for chroma noise, the bursts of mostly red and green you see in the full crops.

But still, unless you look at them at full-res, both cameras produced almost noiseless images up to ISO 3200. I exposed each shot at f/5.6 so the shutter speeds for the ISO 1600 and 3200 shots were upwards of a few seconds each, which makes the fact that they’re almost noise-free at any decent print or display size a phenomenal sign of both cameras’ noise-busting powers.

One big difference head to head in the imaging department is resolution. At 21 megapixels, the 5D Mark II has almost double the pixels of the 12.1 megapixel D700. As you well know by now, megapixels are not as important as sensor size/quality, but here, we’re dealing with two evenly matched, high-performance sensors, both of them full-frame. So in this case, an extra 9 million pixels does give you something: The added ability to heavily crop down shots without losing detail, like I did here with Mr. Shaun White.
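One way to quantify what those extra megapixels buy for cropping: perceived detail scales with linear resolution, the square root of the pixel count. A sketch of that math (our own helper, just to illustrate the point):

```python
import math

# Linear-resolution advantage of a higher-megapixel sensor over a lower one.
def linear_advantage(mp_big, mp_small):
    return math.sqrt(mp_big / mp_small)

print(linear_advantage(21, 12.1))  # ~1.32x more linear detail
```

So 21MP vs. 12.1MP is "almost double the pixels," but roughly a 1.3x gain in linear detail; put another way, you can crop to about three-quarters of the frame's width and still keep D700-level detail.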

Folks shooting in RAW will also notice the extra resolution with added RAW headroom (meaning, more detail can be salvaged in post-processing from highlights that would be blown out to flat white in a JPG). But on the other hand, a 21-megapixel RAW file from the 5D Mark II weighs in at around 30MB give or take, so unless you’re ready to buy a huge RAID drive to go along with it, the higher resolution may not be your choice in most situations. In all honesty, for 800-pixel-wide shots intended for Gizmodo pages, I never shot above the smallest JPG size, which is still a massive 2784×1856.

Advantage: Draw. The D700 does slightly better at high ISO, but the 5D Mark II has a significant upper hand in resolution.

Shooting Features
Here’s one area where there is a definitive leader, and it’s the D700. Its Multi-CAM 3500 autofocus sensor has 51 AF points, compared to the 5D Mark II’s nine (it inherited the same autofocus system as the original 5D, which was itself a bit outdated). With all of those focus points, the D700 is decidedly better at tracking moving objects, and it also tends to lock in to the correct focus considerably faster.

Even without shooting, it’s easy enough to spot the difference by looking through the viewfinders. The 5D Mark II’s focus points are concentrated mostly in the center of the frame in a diamond shape, whereas the D700’s central points cover far more ground, and zone points cover the outer areas of the frame. So with autofocus, this is cut and dried: Although the 5D Mark II’s AF is quite competent, the D700 wins if you frequently shoot fast-moving kids, animals (same thing?) or sports. The D700 also has a focus-assist lamp (the 5D Mark II doesn’t) to help it lock on in low light, though you can shut it off.

As far as metering and image processing goes, I also lean toward the D700. As most Nikons do, the D700 tends to saturate colors more in its default settings (which of course can be changed). I’m a fan of this look, but that’s all about settings, which should most likely be done on the computer. So a toss-up there, for the most part.

More important is my completely unscientific but still notable feeling that the D700 tends to meter scenes with more skill than the 5D. Here the difference is subtle, but I feel like I had to hit the exposure compensation knob a bit more frequently on the Canon to keep it from blowing out highlights, where the D700 would expose the frame more naturally.

Here are some D700 shots that Matt and I took:

Advantage: D700

Interface/Handfeel/Menus
Here’s where the Nikon vs. Canon flames start to get intense—both cameras take a decidedly different approach to menus and basic shooting controls. For me, I like a dedicated button wherever possible, even if this means the body is littered with switches and knobs of all sorts. This is the Nikon approach, more or less; ISO, file size, white balance, autofocus point selection, metering mode and even mirror lock-up get their own dedicated switches on the D700, which makes switching all of these things easier. On the 5D Mark II, all of these major settings share a button—press it, then rotate the thumb wheel to change one setting, and the index-finger wheel on the front for the other. I almost never remember which dial changes which setting, so that can be annoying.

On the other hand, I am a huge fan of Canon’s jumbo thumb wheel in general—something no Nikon has. Being able to always change EV with the thumbwheel is huge, and in manual mode, you can’t beat having that big knob down there. It’s also great for quickly scrolling through your images. I also much prefer the traditional Canon shooting-mode selector wheel; on the Nikon, you have to press down a button and turn a wheel at the same time. But these all come down mostly to personal preference. And in the on-screen menus, again, preference: Canon tends to split their menus out into multiple screens with every option on the screen at once without scrolling, where Nikon gives you long scrolling lists. Canon did provide a nice quick-access menu to most major settings via the LCD, which is an improvement for them.

And even though the D700 is a good 300 grams heavier than the 5D Mark II with lens, it feels a bit more balanced (almost gyroscopic) in your hand, so that’s good, if you don’t mind the extra weight.

Advantage: D700. Another tough call, but I’m giving it to the D700 by a nose for all the dedicated switches, even though I like several of Canon’s choices better.

Extras
Let’s not ignore the elephant in the room: The 5D Mark II is the first DSLR in the world to shoot 1080p (30fps) HD video through its live view mode (see here for more on how this works). By now you’ve thoroughly ogled Vincent Laforet’s amazing demo film—let me tell you, nothing I shot can come close to that. But what even the least video-inclined person will find is that videos look absolutely incredible shot with the limited depth of field of an SLR lens feeding a big full-frame sensor.

There are some drawbacks though, which will ensure your HD camcorder still has some time left: Autofocus is non-existent. When you press the autofocus button during video capture—and that’s the only way to activate it—you’ll need to be prepared to edit out the part in your video where either the mirror slaps up to expose the AF sensor or the contrast detection system cranks the exposure way up (accompanied by the sound of your lens squeaking into position after a good 4 to 5 seconds of hunting). If you’re cool with that, then you’ll be OK, because the autofocus does work, after mangling your videos for a few seconds. Thankfully, manual focus works just fine. You can zoom in with the LCD to make sure you’ve nailed the focus, although this can be kind of tricky to juggle while twisting the focus ring and trying to keep the shot framed at the same time.

Another drawback is that, aside from shifting exposure compensation (EV), you can’t change exposure or ISO settings while you’re filming.

But for grabbing quick 90-to-120-second clips of relatively stationary subjects, or things far enough away to be covered by your infinity focus (like the snowboarders here), you’re absolutely golden. Your clips, with their popped colors, low-light sensitivity and limited depth of field, will be far more beautiful than anything spit out by a Flip video cam or, in some cases, your dedicated camcorder.

Here’s a quick montage of some stuff I shot around town:


On top of the video shooting, the 5D Mark II also comes with a better kit lens than the D700, a 24-105mm L-series that’s f/4 throughout the range. I would normally feel the constraints of f/4 pretty hard and long for my 50mm f/1.4, but with the 5D Mark II you almost don’t notice, once you factor in the image stabilization (IS) and the crazy-bananas low-light sensitivity. The D700 comes with a capable but not nearly as performance-centric 24-120mm f/3.5-5.6 zoom.
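To put that f/4-versus-f/1.4 longing in numbers (again my arithmetic, not the review’s): the light a lens gathers scales with the square of the aperture ratio, so an f/1.4 prime passes about three stops, roughly eight times the light, of an f/4 zoom.

```python
import math

def stops_gained(f_slow: float, f_fast: float) -> float:
    """Full stops of light gained moving from the slower f-number to the
    faster one; gathered light scales with the square of the ratio."""
    return 2 * math.log2(f_slow / f_fast)

# f/4 kit zoom vs. an f/1.4 prime: about 3 stops, roughly 8x the light
print(round(stops_gained(4.0, 1.4), 2))  # -> 3.03
```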

Oh, and I almost forgot: the D700 has a built-in flash, and the 5D Mark II does not. That’s something I’d only ever think of using as fill in the daytime, so it’s not a big deal for me, but it might be for you. Update: As many of you have pointed out, the D700’s on-board flash can also wirelessly trigger Nikon’s CLS-compatible strobes. Cool feature.

Advantage: 5D Mark II. Easy call there.

Conclusion
Today in the DSLR world, wedged somewhere in the middle of entry-level APS-C, semi-pro APS-C, pro APS-C and pro full-frame, we now have a nice semi-pro full-frame option to consider. The categories may be piling up, but we are so happy this new one came along. Now that getting a full-frame sensor doesn’t require going into hock to get the same body that photojournalists are taking to Iraq, we serious-but-still-recreational shooters can use these full-framers to get great shots at ISO settings so high it’s ridiculous to think about. And on the 5D Mark II, we can film 1080p video. Holy crap.

In the end, for me, the trophy goes to the 5D Mark II for the 1080p video. It’s got its drawbacks, sure, but being able to switch seamlessly from stills to beautiful movies with my awesome 35mm DSLR lens is just too good to pass up. The D700 has an edge, albeit a slight one, in some categories like high ISO, but in the new world order, 1080p video from your DSLR is an ace that can’t be beat.

Things were even more clear-cut when both cameras were selling at MSRP: adding insult to injury after nearly matching Nikon’s ISO sensitivity, adding 1080p video and nearly doubling the resolution, Canon priced the 5D Mark II $300 lower than the D700 ($2,700 vs. $3,000, body only). The D700 has been on the market a few months longer, though, and prices are starting to come down; you can get the body for $2,450 at B&H right now. With the 5D Mark II still backordered just about everywhere, this disparity will probably last for a while.

So for you it may be an interesting decision. The option to save a few hundred bucks and get a smidge less noise at high ISO is surely attractive. Either way, you’ve got an absolutely amazing camera. I would imagine most people considering a $2,000 to $3,000 body already have a lens or two of one of these two systems—so in the end, you may go with the one you already have glass for. If you’re a Nikon person, this may mean holding off on the D700 and waiting a while for 1080p.

We based this review on real-world experience; we didn’t spend any time in a lab for testing. Consider supplementing our impressions here with the good work done by our friends at digitalcamerainfo.com (read their 5D Mark II review and their D700 review), as well as the lab-coated folks at DPReview (their D700 review and their 5D Mark II review). And while we focused here on Canon and Nikon, the Sony Alpha 900 is also a contender in this price range for a full-frame shooter, though in our experience it doesn’t touch either of these two, especially in high-ISO performance.

Panasonic’s latest Toughbook 30 unboxing and hands-on

In preparation for a feature on rugged, semi-rugged, and generally brawny laptops that should be appearing here in the coming weeks, Panasonic was kind enough to send us a little overnight love in the form of a Toughbook 30. That it was packaged in a box labeled “handle with care” that was itself bundled in another box full of foam peanuts didn’t exactly make us think “durable,” but as soon as we got our hands on that magnesium alloy case with its rubberized edges we knew we were dealing with a serious laptop. More pics and impressions of this and other macho machines coming soon.

Panasonic’s latest Toughbook 30 unboxing and hands-on originally appeared on Engadget on Wed, 25 Feb 2009 12:03:00 EST. Please see our terms for use of feeds.

Gizmodo’s Amazon Kindle 2 Review Matrix

You don’t wanna wear out your eyes reading superlong Kindle 2 reviews before you get one, right? Well here’s our review matrix for quick, easy-on-the-eyes digestion of reviews from tech’s biggest names.

We’ve got reviews here from the NYT’s David Pogue (no musical, sadly), Wired’s Steven Levy and USA Today’s Ed Baig. Mr. Mossberg is MIA, probably waiting until the regular run of his column tomorrow. Update: Fixed a quote accidentally swapped between Pogue and Levy.


As an alternative, there’s Jon Stewart’s take on it. [Wired, NYT, USA Today]