How To Install Windows 7 On Almost Any Netbook

Windows 7 is free for now, and works extremely well on netbooks. That said, installing the OS on these tiny laptops—especially low-end models—can be daunting. Here’s how to do it, the easy way:

If the Release Candidate is any indication (and it should be), then Windows 7 will be a nice upgrade for any Windows user. The new OS, however, is a huge step up for netbook users. Vista is notoriously poorly suited to netbooks; a buggy resource hog that subjects its users to incessant dialog boxes and requires far too many clicks to perform basic tasks, it’s kind of a nightmare to use on a 9-inch laptop with a 1.5-inch trackpad.

Windows XP has been given a boost by netbooks, as its system requirements—more-or-less decided in 2001—are more in line with the specs of hardware like the Eee PC and Mini 9. But let’s face it: XP is nearly a decade old. Its user experience is trumped by free alternatives like Ubuntu Netbook Remix and Linpus, and it’s not at all optimized for solid-state drives—especially cheap ones. This means that on low-end, SSD-based netbooks, it borders on unusable.

Hence, Windows 7. It’s noticeably faster than Vista on low-spec machines, properly optimized for netbook hardware, and, most importantly, free (for now). Thing is, installation isn’t quite as easy as it is on a regular PC—in fact, it can be a pain in the ass: netbooks don’t have DVD drives, which means you’ve either got to get your hands on an external drive or boot from a USB stick for a clean install. Furthermore, smaller SSDs, like the 8GB units in popular versions of the Dell Mini 9 and Acer Aspire One, make a default installation impossible, or at least impractically tight. Luckily, there are simple methods to deal with both of these problems. Let’s get started.

What You’ll Need

• A netbook (Minimum 1GB of RAM, 8GB storage space)

• A 4GB or larger USB drive

• A Windows 7 RC Image (details below)

• A Windows XP/Vista PC or a Mac to prepare the flash drive

• For low-end netbooks, lots (and lots) of time

Getting Windows 7

Downloading Windows 7 is a piece of cake. Just navigate to this page and download the 32-bit version. You’ll need to get a free Windows Live ID if you don’t already have one, but this takes about two minutes.

Microsoft will then give you your very own Windows 7 License key, valid until June 1st of next year. (Although after March 1st, it’ll drive you to the edge of sanity by shutting off every two hours. But that’s a different story, and March is a long way off). Microsoft will then offer up your ISO through a nifty little download manager applet, complete with a “resume” function. There are ways to sidestep this, but don’t: you’d be surprised how hard it is to keep a single HTTP connection alive for long enough to download a 2.36GB file.

Preparing Your Flash Drive

This is the annoying part, but it’s not necessarily that difficult. Here are some guides, by OS (some linked for length):
• Windows XP
• Windows Vista
• Mac OS X (courtesy of Ubuntu, funnily enough; a scripted version follows the steps below):

1. Open a Terminal (under Utilities)

2. Run diskutil list and determine the device node assigned to your flash media (e.g. /dev/disk2)

3. Run diskutil unmountDisk /dev/diskN (replace N with the disk number from the last command; in the previous example, N would be 2)

4. Execute sudo dd if=/path/to/downloaded.iso of=/dev/diskN bs=1m (replace /path/to/downloaded.iso with the path where the image file is located, for example ./windows7.iso; this is the slow step, and can take quite a while on pokier drives)

5. Run diskutil eject /dev/diskN and remove your flash media when the command completes
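If you’d rather not type those commands by hand, here’s a minimal Python sketch of the same steps. It assumes a Mac, an image named windows7.iso, and that /dev/disk2 is really your flash drive (triple-check with diskutil list, since dd will happily overwrite whatever disk you point it at):

import subprocess

ISO = "windows7.iso"   # path to your downloaded image (use your own)
DISK = "/dev/disk2"    # the node from diskutil list; get this wrong and you erase the wrong drive

subprocess.check_call(["diskutil", "unmountDisk", DISK])
# The raw write is the slow part; bs=1m matches step 4 above
subprocess.check_call(["sudo", "dd", "if=" + ISO, "of=" + DISK, "bs=1m"])
subprocess.check_call(["diskutil", "eject", DISK])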

As some commenters have pointed out, you can also make a flash drive bootable with utilities like LiveUSB Helper. Once you’ve done this, you can mount your Windows 7 ISO with a utility like DaemonTools Lite (for Windows) or MountMe (for Mac), and just copy all the files over to your newly bootable drive.

Starting Your Install

Ok! Now you’ve got a bootable flash drive, and you’re ready to start installing. It should go without saying, but once you start this process, you’ll lose all existing data on your netbook, so you should back up any important files before going through with anything from here forward.

Insert your USB drive and reboot your netbook. As soon as your BIOS screen flashes, you should see instructions for a) changing your netbook’s boot order or b) entering its BIOS setup. In the first situation, simply assign the USB drive as the first boot device. In the second, navigate through your BIOS settings until you find a “Default Boot Order” page, and do the same thing there.

From there, you should see the first Windows 7 installation screens. Anyone with a 16GB or larger storage device in their netbook can just follow the instructions until the installation completes, and skip the next step.

If your SSD is smaller than 16GB, or if you just want to save some space, do what they say, but only until the first reboot. After the Windows 7 installer has restarted your computer, you’ll need to modify the boot order again. Do not allow installation to continue! Manually change the boot order to prioritize the USB drive again, just as you did at the beginning of the installation.

Compression!

Once the Windows 7 installer has copied most of its system files to your drive, you’re going to tighten them up with Windows’ trusty old “Compact” command. Here’s what you do, as described by Electronic Pulp:

Choose “Repair” at the Windows 7 Setup screen, go to “Command Prompt” and enter the following commands (/c compresses the files, /s recurses through subdirectories, /i keeps going if it hits errors):

d: (or whatever drive letter is assigned to your SSD)
cd \windows\system32
compact.exe d:\*.* /c /s /i

And wait. And wait and wait and wait. This can take anywhere from eight hours to two days, so you’ll want to set your netbook down in a corner and forget about it for a while. [Note: compressing so many of your system files does have a performance cost, but in day-to-day use, it’s negligible]

Once this is done, reboot the netbook again and let it continue the installation as normal. That’s it!

All said and done, an 8GB SSD should have nearly 2GB of free space left—not much, but enough to work with. And given that most netbooks come with inbuilt, flush SD expansion slots, and that high-capacity SD cards are extremely affordable, having a small amount of space on your root drive isn’t at all prohibitive.

There are other ways to slim down a Windows 7 install—namely by using programs like vLite, which can strip out some of Windows’ fat directly from the ISO—but Windows’ built-in file compression is the easiest way to squeeze Windows 7 onto your skimpy 8GB SSD.

Setup and Customization Help
Windows 7 runs fairly well out of the box, but as with any new Windows installation, you’re going to need to download some drivers to get things working properly. Vista drivers usually do the trick, but sometimes workarounds are necessary. Thankfully, most popular netbooks have spawned helpful fan forums, many of which have active Windows 7 subforums. Some of the best:

Aspire One
ASUS Eee PC
Dell Mini
MSI Wind
HP Mini-Note

So there you go! Enjoy your new Windows 7 netbook! Please share your experiences in the comments; your feedback is a huge benefit to our Saturday guides. And of course, have a great weekend!

10 Examples of Summer Movie Merchandising Run Amok

The summer movie season is getting heated up, and the recent release of Star Trek has me thinking about all of the absurd merchandise that has come out over the years.

Star Trek certainly isn’t the only franchise that has gone way too far in the quest to make a buck—its bitter rival Star Wars also comes to mind. The battle between the two franchises has been fought on many fronts, but the question of which has the stupidest merchandise has yet to be settled.


The Great MP3 Bitrate Test: My Ears Versus Yours

There will be no judgment in this post. No sound snobbery. I’m simply asking the age-old question: At what bitrate should we encode MP3s? And I need your help.

This test occurs in two parts. In part one, I sample three songs chosen from vastly different genres, each ripped from CD and encoded at the popular MP3 bitrates (64, 96, 128, 160, 192, 256, and 320kbps, with VBR off). I tell you what I hear, then you sample the files yourself and tell me what you hear.
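For the curious, producing a test set like this is simple to script. Here’s a minimal sketch driving the open-source LAME encoder from Python; the filenames are hypothetical, and it assumes lame is on your PATH (its -b flag pins a constant bitrate, so VBR stays off):

import subprocess

BITRATES = [64, 96, 128, 160, 192, 256, 320]  # kbps, CBR

for rate in BITRATES:
    # -b sets a constant bitrate; input is a WAV rip straight from CD
    subprocess.check_call(["lame", "-b", str(rate), "woman.wav", "woman_%d.mp3" % rate])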

Part I – My Test
I’m sitting here with Pioneer’s brand-new VSX-1019AH-K receiver, a $500 model that actually pulls the MP3 data off of USB drives and iProducts for decoding within the receiver itself. (According to Pioneer, this “Advanced Sound Retriever” technology restores sound lost in the MP3 conversion process, so I figure it’s the best MP3 experience I’m gonna get.)

The sound is being sent through 14-gauge Monoprice speaker wire to twin Definitive Mythos STS Supertowers ($3,000/pair). We wanted to assemble a nice home audio system that could make MP3s sound their best, and we feel this combination of superb speakers and dedicated MP3 decoding makes a reasonable benchmark at the reasonable enough price of around $3,500. Since most readers (myself included) aren’t going to run out and buy anything nicer, it represents a decent ceiling of audio quality.

Pure Prairie League – Woman
On my first pass, I couldn’t hear a difference beyond 128. And it was a little worrisome. But no judgment, that’s the rule! I took another pass…things did seem to get better…but was I imagining it?

So I skipped from 128 to 192. Then I could hear an improvement as the instruments were unchanged but the vocals grew more lifelike. Songs encoded beyond 192 sounded different in terms of balance, but not necessarily any better. I wonder if, since the song was “digitally remastered,” studio technicians compressed the audio to begin with.
My conclusion: 192

Gorillaz – Feel Good Inc.
It was a total shock. I could hear the differences in bitrates, all the way to the top, on my first pass through the list. I had assumed: whatever, it’s some electronic-type music, it won’t matter. But even the jump from 192 to 256 was dramatic on my system, with every enhancement giving me more detail in the laugh and a richer, wetter bass line.
My conclusion: 320

Bizet – Carmen Suite #1
During my quick first pass, I didn’t hear a difference beyond 160. Skipping intervals, I found no improvement going from 160 to 192, but a noticeable improvement from 160 to 256. The middle just feels fuller, with a far more lifelike reverb to the low to mid horn section. I’d like to say that I heard a difference up to 320, but I’m willing to chalk that up to the power of suggestion.
My conclusion: 256

Also, I compared the 320kbps recordings to their uncompressed WAV counterparts. The only difference I could hear was in Pure Prairie League’s Woman: the vocals and high-level instrumentation felt ever so slightly less harsh. It’s a bit ironic, as that was the song in which I had the biggest problem distinguishing bitrates in the first place.

Back when I tested my ear in college, I found the cutoff to be 160, and have since encoded all of my music at that level (though it’s become less of an issue now that MP3s are more often downloaded than ripped from CDs). Now, however, it’s pretty apparent that with more hard drive space and a nicer audio system—my earlier testing was just on a set of decent computer speakers—it might be worth reassessing my encoding rates. In just these three songs, I found a huge fluctuation, and not in any way I intended. Honestly, I figured that Carmen would require the best bitrate to assuage my ear.

Now, I wouldn’t encode lower than 192kbps, and I’d be tempted to push the boundaries to 256kbps and 320kbps on the music I planned on listening to very closely, though my laptop’s hard drive would probably hate me for it.

Part II – Your Test
Enough with me talking, now it’s your turn. You’ll find the files you need below alongside an accompanying poll. Please don’t vote based upon past experience or my subjective impressions, and feel free to test on any system you like (as long as you note it in the survey).

Oh, and the easiest way to peruse the files quickly is to click the first audio link, let it load in your browser, then just change the bitrate number in the filename up in the address bar—fast and easy to do any side-by-side comparison you like. Well, at least on your crappy computer speakers.

TEST FILES

DOWNLOAD THEM IN ONE BIG ZIP HERE (MediaFire), or use individual links through your browser below:

Woman 64
Woman 96
Woman 128
Woman 160
Woman 192
Woman 256
Woman 320
Woman WAV

Feel Good 64
Feel Good 96
Feel Good 128
Feel Good 160
Feel Good 192
Feel Good 256
Feel Good 320
Feel Good WAV

Carmen 64
Carmen 96
Carmen 128
Carmen 160
Carmen 192
Carmen 256
Carmen 320
Carmen WAV

And here is the survey (CLICK THROUGH TO NEW PAGE)

Giz Explains: GPGPU Computing, and Why It’ll Melt Your Face Off

No, I didn’t stutter: GPGPU—general-purpose computing on graphics processor units—is what’s going to bring hot screaming gaming GPUs to the mainstream, with Windows 7 and Snow Leopard. Finally, everybody’s face melts! Here’s how.

What a Difference a Letter Makes
GPU sounds—and looks—a lot like CPU, but they’re pretty different, and not just ’cause dedicated GPUs like the Radeon HD 4870 here can be massive. GPU stands for graphics processing unit, while CPU stands for central processing unit. Spelled out, you can already see the big differences between the two, but it takes some experts from Nvidia and AMD/ATI to get to the heart of what makes them so distinct.

Traditionally, a GPU does basically one thing: speed up the processing of image data that you end up seeing on your screen. As AMD Stream Computing Director Patricia Harrell told me, they’re essentially chains of special-purpose hardware designed to accelerate each stage of the geometry pipeline, the process of matching image data or a computer model to the pixels on your screen.

GPUs have a pretty long history—you could go all the way back to the Commodore Amiga, if you wanted to—but we’re going to stick to the fairly recent past. That is, the last 10 years, when, Nvidia’s Sanford Russell says, GPUs started adding cores to distribute the workload. See, graphics calculations—the calculations needed to figure out what pixels to display on your screen as you snipe someone’s head off in Team Fortress 2—are particularly suited to being handled in parallel.

To illustrate the difference between a traditional CPU and a GPU, Nvidia’s Russell gave this example: If you were looking for a word in a book and handed the task to a CPU, it would start at page 1 and read all the way to the end, because it’s a “serial” processor. It would get there, but it would take time because it has to go in order. A GPU, which is a “parallel” processor, “would tear [the book] into a thousand pieces” and read them all at the same time. Even if each individual word is read more slowly, the book may be read in its entirety quicker, because the pieces are read simultaneously.
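You can get a feel for the idea on an ordinary multicore CPU with Python’s multiprocessing module. This is a toy sketch of the book analogy (the filename is hypothetical), not how a GPU actually schedules work:

from multiprocessing import Pool

def count_in_chunk(args):
    chunk, word = args
    return chunk.count(word)

if __name__ == "__main__":
    book = open("book.txt").read()  # any large text file
    word = "the"
    pieces = 8  # pretend each piece goes to its own core

    size = len(book) // pieces + 1
    chunks = [book[i:i + size] for i in range(0, len(book), size)]

    # CPU-style: one worker reads the whole book, front to back
    serial_count = book.count(word)

    # GPU-style: tear the book into pieces and count them all at once
    # (a real version would overlap chunk edges so no occurrence is split)
    with Pool(pieces) as pool:
        parallel_count = sum(pool.map(count_in_chunk, [(c, word) for c in chunks]))

    print(serial_count, parallel_count)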

All those cores in a GPU—800 stream processors in ATI’s Radeon 4870—make it really good at performing the same calculation over and over on a whole bunch of data. (Hence a common GPU spec is flops, or floating point operations per second, measured in current hardware in terms of gigaflops and teraflops.) The general-purpose CPU is better at some stuff though, as AMD’s Harrell said: general programming, accessing memory randomly, executing steps in order, everyday stuff. It’s true, though, that CPUs are sprouting cores, looking more and more like GPUs in some respects, as retiring Intel Chairman Craig Barrett told me.
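The flops math is easy to sanity-check yourself. Assuming the 4870’s 750MHz core clock and counting a multiply-add as two floating-point operations per stream processor per cycle, the back of the envelope looks like this:

stream_processors = 800
clock_hz = 750e6     # the Radeon HD 4870's core clock
ops_per_cycle = 2    # a multiply-add counts as two floating-point operations

print(stream_processors * clock_hz * ops_per_cycle / 1e12)  # about 1.2 teraflops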

Explosions Are Cool, But Where’s the General Part?
Okay, so the thing about parallel processing—using tons of cores to break stuff up and crunch it all at once—is that applications have to be programmed to take advantage of it. It’s not easy, which is why Intel at this point hires more software engineers than hardware ones. So even if the hardware’s there, you still need the software to get there, and it’s a whole different kind of programming.

Which brings us to OpenCL (Open Computing Language) and, to a lesser extent, CUDA. They’re frameworks that make it way easier to use graphics cards for kinds of computing that aren’t related to making zombie guts fly in Left 4 Dead. OpenCL is the “open standard for parallel programming of heterogeneous systems” standardized by the Khronos Group—AMD, Apple, IBM, Intel, Nvidia, Samsung and a bunch of others are involved, so it’s pretty much an industry-wide thing. In semi-English, it’s a cross-platform standard for parallel programming across different kinds of hardware—using both CPU and GPU—that anyone can use for free. CUDA is Nvidia’s own architecture for parallel programming on its graphics cards.
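To make that concrete, here’s roughly what the programming model looks like. This sketch uses the third-party pyopencl bindings to add two big arrays on whatever OpenCL device is available; treat it as an illustration, not a tuned example:

import numpy as np
import pyopencl as cl

a = np.random.rand(100000).astype(np.float32)
b = np.random.rand(100000).astype(np.float32)

ctx = cl.create_some_context()   # grab a CPU or GPU device
queue = cl.CommandQueue(ctx)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The kernel runs once per element, each instance on its own "core"
program = cl.Program(ctx, """
__kernel void add(__global const float *a, __global const float *b, __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

program.add(queue, a.shape, None, a_buf, b_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
print(np.allclose(result, a + b))  # True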

OpenCL is a big part of Snow Leopard, and Windows 7 will use some graphics card acceleration too (though we’re really looking forward to DirectX 11). Either way, GPU acceleration is going to be a big part of future OSes.

So Uh, What’s It Going to Do for Me?
Parallel processing is pretty great for scientists, but what about regular people? Does it make their stuff go faster? Not everything, and to start, it won’t stray far from graphics, since that’s still the easiest workload to parallelize. But converting, decoding and creating videos—stuff you’re probably doing more now than you did a couple of years ago—will improve dramatically soon. Say bye-bye to 20-minute renders. Ditto for image editing: there’ll be less waiting for effects to propagate on giant images (Photoshop CS4 already uses GPU acceleration). In gaming, beyond straight-up graphical improvements, physics engines can get more complicated and realistic.

If you’re just Twittering or checking email, no, GPGPU computing is not going to melt your stone-cold face. But anyone with anything cool on their computer is going to feel the melt eventually.

Apple’s Tablet: The Story So Far

With so many rumors about an Apple tablet buzzing around, it’s hard to believe Apple wouldn’t announce one this year. But what do we really know about this thing?

Apple fans are an expectant bunch, and one thing or another has gotten their hopes up nearly every year since the death of the Newton. But more recent—and especially post-iPhone—tablet rumors have become so intense, varied and inconsistent that it’s hard to come away with a coherent picture of what to expect. Here’s what we’ve got, and what it means.

Patents
Patent applications have kindled more bizarre Apple rumors than I can count, but there has been an undeniable cluster of activity around tablet-oriented tech as of late.

The earliest seeds of the current tablet frenzy can be traced back to 2004, when Apple filed for a European design trademark on a device that looked like “an iBook screen minus the body of the computer.” It was much larger than what people are expecting now, but in some ways the design prefigured the aesthetic of the next few generations of iMac, and even the iPhone.

Skip forward to 2006, when Apple filed for a patent for an onscreen keyboard, gesture recognition and a virtual scroll wheel. Again, some of these technologies would find their way into the iPhone before too long, but the application contained a telling mockup of a tablet-esque product, smaller than the 2004 version, but which fit most of its description.

A flurry of offhand “tablet” shout-outs in tangentially related patents followed, but none carried much weight. It wasn’t until August of ’08 that something truly momentous passed in front of the weary eyes of a Patent Office employee: a huge, generously illustrated filing describing how OS X could be adapted to touch input. In it were descriptions of iPhone-like interface element magnification, a full-sized multitouch onscreen keyboard, and finally, plenty of drawings of a tablet device being prodded by inexplicably troll-like horror-fingers. A hardware patent—kind of like the 2004 tablet patent—surfaced a few months later, outlining a keyboardless device not unlike the one sketched previously.

In a nutshell, even though an Apple touchscreen tablet doesn’t yet exist, your lawyer would probably still advise you against trying to knock one off.

Rumors (and Facts)
Companies file patents for all kinds of reasons, and when you’re as big as Apple, plenty of them go unused. Patents only provide context for the juicier rumors—employee leaks, coded statements from company leadership, hardware orders filtered through three layers of Taiwanese press—that can really grow legs. Apple tablet rumors have short lifespans: they either come true within a reasonable timeframe or they fizzle out. Point is, right now there’s a glut of them.

The current groundswell of wild speculation harks back to late 2007, when AppleInsider conjured a rumor that Apple was working on a slightly larger version of the iPhone. This was the first time in a while that anyone had talked about such a product, and it was exciting: Jesus mocked up a beautiful version himself, which led to a massively popular Photoshop contest.

In 2008, a loose-lipped German Intel executive let slip that Apple may be working on an Atom-based unit, which he referred to as a “version of the iPhone.” This odd outburst was quickly minimized, but was soon followed by a full-throated alert from MacDailyNews that an OS X-equipped MacBook Touch would drop by October.

Next came a NYT report in October that a “Macbook Nano or iPhone Slate” device had been discovered in the traffic logs of a major search engine. As was the tendency in those days, people homed in on the possibility of a Mac netbook, to which Steve Jobs cryptically replied that Apple would “wait and see” how sales held up, and that in the event that they entered the ultraportable market, they’ve “got some pretty interesting ideas…” Oh good gracious, what could that mean?

This is when things really picked up. TechCrunch stuck their necks out too, saying that they’d talked to “three different sources” close to Apple, all of whom confirmed an iPod Touch-like device. This means—counter to MacDailyNews’ talk of a fully operational tablet computer—that it would run a stripped-down mobile OS X like the one in the iPhone.

Just a few months ago, something resembling hard evidence emerged: The Commercial Times, Dow Jones news wire and Reuters all reported that Apple had ordered 9.7″ multitouch panels from Wintek. These would be the displays in a device set for a Q3 release. Shortly after, the WSJ reminded us that Steve Jobs was still pulling all the strings at Apple, and went out on a limb to say that he was working on something:

People privy to the company’s strategy say Apple is working on new iPhone models and a portable device that is smaller than its current laptop computers but bigger than the iPhone or iPod Touch.

BusinessWeek then put on its rumor-blog hat too, recently corroborating these reports with sourced rumors of its own and fingering Verizon as a potential carrier for a 3G-enabled “Media Pad”. They were even so bold as to peg the summer of ’09 as a possible release date.

Deja Vu?
Something striking about these rumors is how conceptually similar they are to rumors from 7 or 8 years ago. This is from a 2002 eWeek “hunch” post, the last time that Mac tablets seemed “inevitable”, mostly on account of Apple’s rival Microsoft, and its over-hyped promotion of all things tablety:

This pre-release hardware combines a next-generation, low-power Motorola PowerPC chip and formidable screen real estate into a typically impressive Apple industrial design. The hardware is lightweight and slender, and the battery life skunks comparable Tablet PCs…the software is homegrown, pairing Mac OS X with the company’s impressive handwriting-recognition technology

The writer, Matthew Rothenberg, later specified:

[It’s a] device that superficially resembles a large iPod with an 8-inch diagonal screen, lacks a keyboard, packs USB and FireWire ports and runs Mac OS X along with a variety of multimedia goodies

A large screen that serves as the primary input device, a minimalist design, a proprietary Apple input system and better-than-average battery life? That describes the theoretical devices of 2009 nearly as well as it does those of 2002. Anyway.

The Most Compelling Evidence
Hidden somewhere amidst all the patent-filing and reputation-staking are some legitimately convincing pieces of information:

• Steady allegations of Apple’s long, storied interest in tablets—buoyed by occasional patent filings—count for something, as does their consistent cynicism about netbooks (the only real alternative to tablets in the ultramobile computing space).
• The late 2008 patent app for a multitouch tablet interface is thorough, practical, timely and contains a plausible (if basic) mockup.
• The Wintek 9.7″ panel order is the closest thing to hard evidence that we’ve got. It’s a good bet that Apple has them, or will soon, and that they’re putting them to use—but not a sure one.
• That the device has no keyboard, is moderately sized, and is media-centric are all ideas shared by those who’ve separately floated sourced tablet rumors (TechCrunch, BusinessWeek, MacDailyNews).

It looks like there’s a good chance a tablet is on its way. Separate rumors point to similar launch dates: Some say Q3, some say June, but they all could be talking about the same date, or at least the same swath of time.

What to expect as an OS is more difficult to divine from the above speculation, but common sense is instructive: iPhone OS wouldn’t work on a larger device. It’d be more trouble than it’s worth to reconfigure the core interface for a 10″ screen, and the thousands of third-party apps written with the iPhone’s screen size and shape in mind would become all but useless. Barring some kind of app-in-a-window workaround—which doesn’t sound very Apple-like—or an entirely new version of OS X—which doesn’t seem necessary—desktop OS X with a modified shell, as shown in the 2008 interface patents, stands as the most likely candidate. It works pretty well on 9″ netbooks as is, so a 10″ screen with a smart multitouch interface would make for a solid user experience.

Another common thread running through most of these rumors is the sense that this device would (or will) be a disruptive, industry-altering product, like the iPhone or iPod. But it’s difficult to see exactly how: far from setting new standards for smartphones or revolutionizing the portable music player industry, an Apple tablet would be treading where many others have gone before. Sure, it would be smaller and thinner than older tablet PCs, skip the keyboard and pack OS X, but functionality-wise that’s not worlds different from MIDs and UMPCs like the OQO, and it might not be distinguishable enough from existing hardware to really shake things up.

On the other hand, the disruption could come from the way it is introduced. Wireless carriers are eager to expand revenue streams and keep people under contract, and many rumors and abstract executive comments focus around the idea that tablets—not just Apple’s—will be inherently wireless devices, and that they will be sold by carriers. That may seem far-fetched now, since we’re generally used to buying laptops without a service plan, but it could easily be the next revolution in wireless hardware.

There is plenty we don’t know, and very little we can depend on. In the end, we have a screen size, a likely form-factor, an OS and a probable release window. Past that, the info is all chaff, and your guess as to how this thing will look—or if it will ever come out—is as good as ours. And guess you have—over the past few years everyone and their mom has mocked up an Apple Tablet. Here are our favorites from readers and industry insiders alike:

Cineplexes Getting IMAX, But Is It IMAX or CONSPIRACY?

You’ve probably seen the new phenomenon with your own eyes: A cineplex IMAX that doesn’t have the monster screen you grew up with in science-museum IMAX theaters. Here’s the what, the how and the why.

Just last night, comedian Aziz Ansari (from Parks and Recreation) published this piece describing the conspiracy of paying an extra $5 to see an “IMAX” movie that really wasn’t much bigger than a normal screen.

I actually visited IMAX HQ a few weeks back, and a major point of discussion was the retrofitting process so lovingly described by Aziz. Basically, IMAX used to build its own massive theaters in its own buildings. But now, in order to expand, the company has made deals with major theater chains like AMC: IMAX provides and installs its proprietary mix of projectors, screens, speakers and hardware if the theater foots the bill for the necessary structural renovations.

This plan, for better or worse, is IMAX’s only current design for expansion in the US.

This conversion process, which has a patented geometry, involves installing a screen that’s only slightly bigger (as little as 10 feet wider than before). But the new screen is coupled with the removal of several rows of seats, which allows it to be scooted roughly 30 feet closer to the audience. The result is a sort of sitting-too-close-to-the-TV effect, with a screen that, I was told, is perceived as 75 feet wider than before.
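The geometry behind that “perceived as wider” claim is just trigonometry: moving a screen closer grows its apparent (angular) size much faster than adding physical width does. Here’s a quick sketch with made-up before-and-after numbers, since IMAX doesn’t publish the real ones:

import math

def angular_width(width_ft, distance_ft):
    return math.degrees(2 * math.atan(width_ft / 2.0 / distance_ft))

old = angular_width(60, 80)   # hypothetical original screen and seating distance
new = angular_width(70, 50)   # 10 ft wider, seats about 30 ft closer

# Width a screen would need at the OLD distance to look as big as the new one
equivalent = 2 * 80 * math.tan(math.radians(new / 2.0))

print(old, new, equivalent)   # roughly 41 deg, 70 deg, and a ~112 ft "perceived" screen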

When the process was described to me, I thought it all sounded a bit hokey. But walking into IMAX’s test multiplex, an otherwise typical AMC located in Canada, I was shown a side-by-side of the same theater before and after the retrofitting process.

I will say, the new screen looked much bigger and far more imposing—“night and day” would be a fair analogy. I wasn’t mentally prepped for such a tangible difference, though I’d agree that it still fell short of, say, the unbelievable, multi-story beast of a screen that I watched Star Trek on several days later at a classic, standalone IMAX.

But the change I didn’t expect (and I can’t pretend to have picked up on this tidbit on my own) was the remarkable difference made by acoustic paneling. Clapping in the original theater revealed a very live environment with a frightening amount of echo. The retrofit, however, absorbed the sound in a pleasant way, reminiscent of more than one acoustically planned stage I performed on back in my band days.

There are other improvements as well, including a specifically non-THX-certified sound system, reaching up to 14,000W, that offers 117dB of uncompressed digital sound without distortion. Engineers claimed that in a normal theater, the sweet spot for audio is in the dead center, and technicians make no effort to tend to those sitting in the back. IMAX’s system, meanwhile, promises the same surround experience anywhere in the theater.

I tested that theory during a screening of some Rolling Stones at the Max footage by moving from the center of the theater to the back right corner. And there’s absolutely no doubt: I lost a good deal of the side channels while the rear channel (in this case carrying the lead guitar, I believe) dominated the audio spectrum. I wouldn’t have expected IMAX to have achieved the impossible, except that, you know, they claimed that they had.
The other chief part of this retrofitting process is the new digital IMAX projector. Since its debut in the 70s, the Xenon-lamp-powered projector has stayed mostly unchanged. But with film prints reaching around $40,000 apiece, IMAX has embraced the digital revolution in its theaters (the cameras are still film, with no plans mentioned to change that).

With the digital installations, films arrive on a standard hard drive, encrypted with DRM provisions that state just when a theater is authorized to play a film…errr…video.

The projector is actually two 2K Christie projectors that spit out the same image at the same time. A camera is positioned between the projector lenses, tracking screen brightness in real time. An integrated server aggregates this and other data, adjusting both projectors for thermal shift and making sure the images don’t change as they play. There are also a slew of other top-secret, proprietary imaging adjustments going on at all times.

I know what you’re thinking: Why didn’t IMAX just use a 4K projector and save the hassle, especially with AMC announcing that all of its theaters will be equipped with 4K Sony projectors by 2012? IMAX believes its dual-projector rig offers a sub-pixel accuracy that, when combined with some extra image processing, looks better than Sony’s 4K.

You can see imperfections in IMAX’s digital projection system just like in any digital system. The screen-door effect, while minimized, can be noticed in bright spots of the image—if you’re looking as closely and skeptically as I was. And you only need to move back in the theater to realize that the picture appears sharper as you step away from the screen. In other words, it’s not hitting some theoretical maximum perceived resolution, or even the best of what IMAX film can show. (As IMAX archives its own films as 8K and 12K scans, you can assume the company feels the resolution of its product is much higher than its digital projectors can show.)

The good news is that IMAX’s digital projection system is “projector agnostic,” meaning if a more suitable base projector comes around (be it 2K, 4K or higher), the realtime syncing and adjustment system can scale accordingly. In other words, when every AMC is stocked with 4K projectors in a few years, hopefully IMAX will be upping the ante as necessary by dual wielding 4K+ projectors instead.

So is this new IMAX, with smaller screens, with digital projection, still IMAX? Honestly, there are probably only a small handful of technicians—who aren’t exactly sharing proprietary knowledge and decisions—capable of answering that question with scientific earnestness. To my eyes and my gut, it’s more IMAX Lite or Normal Theater Enhanced. Is a retrofitted theater worth your extra $5? For the movies most likely to make it to the screen (big budget action), I think so…though maybe not for a family of four.

The price probably shouldn’t be the same as at a standalone IMAX theater, but I think the point Ansari misses is that cineplexes already benefit from a pricing structure that makes viewers pay the same amount no matter what screen they see a movie on (how many times do beautiful art films get shunted off to a broom closet of a theater while summer blockbusters play on a plex’s largest screen?). At minimum, the $5 IMAX premium ensures you see a movie on a screen that’s better than the best AMC or whoever has in their building.

Personally, I hate to know that we will probably never see another 12,700-square-foot IMAX screen built (like the one in Mumbai), and that 70mm film projection is being traded for digital before digital reaches undeniable image perfection. But if the compromise is that more people will be seeing movies in theaters with bigger pictures and tighter quality control, then maybe it’s a compromise worth making.

Look for lots more on our IMAX visit in the coming weeks.

22 Dream Homes That Range From the Fanciful to the Insane

For this week’s Photoshop Contest, I asked you to design your dream home, no matter how impractical it may be. It turns out that a lot of you want to live on the Death Star.

First Place – Vince Versna
Second Place – Richard Green
Third Place – Jeff

All-American Tech: What’s Hot Here (and Nowhere Else)

People are always eager to point out cool technologies that America ignores, but what about the ones that we—and only we—use? Enough with the grousing: Here’s what we’ve got that they don’t.

TiVo
For a long while, TiVo was the undisputed king of TV recording. Other DVRs have come a long way in the last ten years, but they’re all late to the party, and still playing catchup: The TiVo name is now permanently tattooed into the public’s consciousness, synonymous with recording shows and backed up by still-impressive hardware.

But the fact that TiVo has attained a near-Kleenex level of brand recognition in the US doesn’t mean a thing overseas. As of writing, the service is only available in a few other places—Canada, the UK, Mexico, Taiwan and Australia—where it has been met with limited enthusiasm. While the US, with its huge, old, fragmented cable industry, offers a fantastic opportunity for a meta-service like TiVo, smaller countries with one or two dominant pay-TV providers—which have their own increasingly formidable DVR alternatives—are tougher nuts to crack.

The Kindle
This choice might seem odd—or at least inconsequential—on account of the steady stream of new e-reader hardware available all over the world, but Kindle exclusivity is actually a technological feather in America’s cap. Why? Because the source of the Kindle’s importance isn’t its hardware, but its connectivity and the service it’s tied to.

Anyone can slap a case around a panel of E-Ink and add an off-the-shelf Linux OS—and plenty of companies have. But being linked wirelessly to a massive library of legal downloads (bestselling books, magazines and newspapers) is what makes a reader great. For now, the only mainstream reader that can claim such a feature is the Kindle, and the only country that can claim the Kindle is the US. Not that it can’t go global—similar services for music and TV, like the iTunes store, have found ways to deal with tricky licensing—it’s just that it probably won’t for a while.

Push-to-Talk
Without a doubt, this is the technology that feels the most American on this list. Intended primarily for the workplace, push-to-talk technology has tragically seeped into the mainstream, subjecting millions of innocent mall shoppers to that incessant, inane chirping, and the shouting at the handset that accompanies it. Who hasn’t been inadvertently pulled into the middle of a heated, long-distance argument about novelty Jimmy Dean breakfast sandwich flavors while waiting in line at Walmart? Well, pretty much anyone who doesn’t live in America—and not just because they don’t have Jimmy Dean, or Walmart.

As it turns out, PTT’s Amerophilia can be explained by little more than poor marketing. According to ABI Research:

In other world regions MNOs have failed to market PTT successfully to business users or have opted to market to consumers, and it just hasn’t taken off.

Nextel, which was inherently crippled by a proprietary network technology that wasn’t built out in any other country but the US, found success with PTT by pitching handsets to businesses as turbocharged Walkie-Talkies, not by marketing them directly to consumers, most of whom would have trouble imagining a more efficient way to make themselves look like brash assholes.

Video On Demand
iTunes has gone worldwide and services like BBC’s iPlayer have brought the Hulu model overseas, but America still has the best VOD situation in the world, bar none. The problem is simple: Even countries with a healthy entertainment industry import a tremendous amount of American TV, often well after it was originally broadcast. This regional disparity seems kinda stupid in the age of the internet and VOD, but it’s just as severe as it ever was.

European or Asian viewers have to wait painful weeks or months for a domestic channel to license, schedule and dub American hits like Lost or Mad Men, and hope, assuming their stations have a VOD service, that the show eventually finds its way online. As an ad-supported service owned by the networks that profit from the above arrangement, Hulu’s reluctance to stream content to other countries is understandable, but the despair runs deeper than that: you can’t even pay for TV if you want to. People without American billing addresses are barred from VOD services like Amazon’s Unbox, and will find their iTunes video selections sorely lacking.

Satellite Radio
Since it smells distinctly like a waning technology, satellite radio might not do much to stir your techno-patriotism, but goddernit, it’s ours. The US has far more satellite radio subscribers than the rest of the world combined, all through the remains of Sirius and XM, now merged under the lazy moniker of “Sirius XM”. Why? We have lots (and lots) of cars.

Satellite radio actually has roots as a proudly international service—after all, it is broadcast from frickin’ space—having been developed in part by a humanitarian-initiative company called 1Worldspace, which was established to broadcast news and safety information to parts of the globe without reliable terrestrial radio infrastructure. They still exist today, but they broadcast to fewer than 200,000 subscribers, mostly in India and parts of Africa. Satrad’s American success can be solely credited to our auto manufacturers, who eagerly installed satellite units in new cars for years, healthily boosting subscription numbers (but not necessarily car sales). With no comparably pervasive car culture to take advantage of anywhere else in the world, satellite radio is a tough sell.

Why We Need to Reach the Stars (and We Will)

We reached the Moon in a tin can, built a humble space station, and have a plan to reach Mars in a bigger tin can. But we need to reach the stars. And we will.

Yes, I know what you are thinking: “It’s impossible.”

And right now, you are right. Our current propulsion engines are, simply put, pathetic. We are still in the Stone Age of space travel. As cool as they are, rocket engines—which eject gas at high speeds through a nozzle on the back of a spacecraft—are extremely inefficient, requiring huge volumes of fuel that run out faster than you can say “Beam me up, Scotty.”

We have cleared the tower

Solid boosters, hybrid, monopropellant and bipropellant rockets… all of these would be impossible to use for interstellar travel, with speeds topping out around 9 kilometers per second. Rockets won’t work even when using planetary gravity assists to gain speed. Voyager—the fastest man-made spacecraft out there, racing along at 17 kilometers per second—would need 74,000 years in deep space to reach Proxima Centauri, the red dwarf star 4.22 light-years away in the Alpha Centauri system, the closest to our Sun.

But even if we were able to build a massive spacecraft with today’s experimental—but feasible—propulsion technology, it would still take thousands of years to reach Alpha Centauri. Using nuclear explosions—like the ones proposed in the Orion project—would be more efficient than rockets, achieving a maximum of 60 kilometers per second. That’s still a whopping 21,849 years and a couple of months.

Using ion thrusters—which use electrostatic or electromagnetic force to accelerate ions that in turn push the spacecraft forward—would only reduce that amount marginally. Even theoretical technology—like nuclear pulse propulsion, with speeds up to 15,000 kilometers per second—won’t cut it. And that’s assuming we can find a way for these engines to last all that time. And let’s not even get into the resources and engineering needed to create a vessel capable of sustaining life for such a long period of time.
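Those travel-time figures are easy to sanity-check. Here’s a quick sketch using the rough speeds above and 4.22 light-years to Proxima Centauri:

LIGHT_YEAR_KM = 9.4607e12
SECONDS_PER_YEAR = 3.156e7
distance_km = 4.22 * LIGHT_YEAR_KM  # Proxima Centauri

for name, km_per_s in [("chemical rocket", 9), ("Voyager", 17),
                       ("Orion-style nuclear", 60), ("nuclear pulse propulsion", 15000)]:
    years = distance_km / km_per_s / SECONDS_PER_YEAR
    print("%s: about %.0f years" % (name, years))
# Voyager works out to roughly 74,000 years, matching the figure above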

All to reach a stupid red dwarf with no planets to explore. We may as well not go, really. You know, let’s just save Earth from our own destruction and colonize Mars or Titan or Europa (if the aliens let us do that.)

Our ignorance is our only hope

It gets even worse. Our current understanding of physics—which says that nothing can travel faster than light—basically establishes that we will never be able to achieve space travel in a way that is meaningful to Humanity. In other words, even if we are able to discover a propulsion method that could get a spacecraft close to the speed of light, it would still take hundreds of years to reach a star system with planets similar to Earth. By the time the news got back to us, we would all be dead.

And that’s precisely the key to our only hope to reach the stars: Our ignorance. As much as we have advanced, we are still clueless about many things. Physicists are still struggling to understand the Universe, discovering new stellar events that we can’t explain, and trying to make sense of it all, looking for that perfect theory that will make everything fit together.

The fact is that, since we don’t know how everything works, there may still be something that opens the way to faster-than-light space travel. Discovering the unknown—like physicists have been doing since the Greeks—and harnessing new math and theories into new technology is our only way to spread through the Universe in a way that makes sense to Humanity as a whole. You know, like Star Trek or Battlestar Galactica or Star Wars: travel across the Universe in hours or days, not centuries or millennia.

I’m giving her all she’s got!

One of those yet-to-be-unraveled things is the Big Bang, the origin of the Universe itself. Our origin, the final question that we have been trying to answer since we came out of the cave and looked up at the night sky. We still don’t know exactly what happened, but observations of the Universe from Earth and from space probes have led physicists to propose many different models. One of these models says that, during the initial inflation period of the Universe, space-time expanded faster than light. If this turns out to be the case, it would make the creation of warp drives possible.

Yes, the warp drives.

Warp drives were first proposed in a rigorous way by Mexican physicist Miguel Alcubierre. He theorized that, instead of moving something faster than the speed of light—which is not possible under Einstein’s relativity theory—we could move the space-time around it faster than the speed of light. The spacecraft would sit inside a warp bubble, a patch of flat space carried along by the expansion of the space behind it and the contraction of the space in front of it. The spacecraft wouldn’t move faster than light, but the bubble would. Inside the bubble, everything would be normal.
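For the mathematically inclined, the whole idea fits in one line. In Alcubierre’s 1994 paper (linked at the end of this post), the space-time around the ship is described by the metric

ds^2 = -dt^2 + (dx - v_s f(r_s) dt)^2 + dy^2 + dz^2

where v_s is the speed of the bubble’s center and f(r_s) is a smooth function equal to 1 inside the bubble and falling to 0 far away, so space is only warped in a thin shell around the ship.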

One way to understand the effect, as Marc Millis—former head of the Breakthrough Propulsion Physics Project at NASA’s Glenn Research Center—explains it, is to look at the way a toy boat reacts in the tub when you put some detergent behind it. The bubbles expand the space behind the boat, pushing it forward. In the same way, a spaceship with a warp drive would ride its own wave of expanding and contracting space-time.

But while there have already been laboratory experiments suggesting that this may indeed be possible, we are still far, far away from developing the technology that would make warp drives a reality. To start with, the amount of energy necessary to bend space like this is way beyond anything we can produce today. Some scientists, however, suggest that antimatter may be the fuel that makes it possible.

Again, there are a lot of question marks surrounding antimatter, but this is precisely part of our only hope: Somewhere, still hiding, is the breakthrough that will make interstellar travel possible. The possibility is still there.

Why should we go to the stars?

So call me an optimist if you have to. It may be all this sun shining in New York right now. Or maybe it’s because I saw Star Trek yesterday (and it was as good as I hoped it would be, and then some). The fact is that I’m convinced that interstellar travel will happen. You and I will probably not see it, but if Humanity can survive self-annihilation, I’m sure we will achieve it.

No, “will we reach the stars?” is not the question to answer. We will. The more important question is: why do we need to go?

The answer to this is the reason why we have celebrated humans in space all this week, now coming to its end. As I said when we started Get Me Off This Rock, space exploration is the most epic and most important adventure Humanity has ever embarked upon. When we travel to space we are opening the way to the preservation of Humanity. We are trying to contact other civilizations. We are trying to answer the biggest questions of them all: Who are we? Why are we here? How did we get here? Are we alone on this rock we call Earth?

But there is more. A lot more. Ultimately, the most important thing will not be getting the answers to these eternal questions. The most important thing will be the process of reaching for the stars itself. Because if we manage to get there, it would mean that we have managed to survive as a species. That is the only way we can develop the engineering and the resources needed to build something like the Enterprise: survive self-destruction, solve the problems we have here, collaborate, and work as a species, not as countries or corporations.

That’s what space exploration and interstellar travel are all about. Only if we manage to go beyond our petty fights and stupid wars, only if we work together towards a better future, will we be able to go where no one has gone before. And be back to tell the tale before dinner gets cold.

Recommended reading: Wikipedia; The Warp Drive: Hyper-fast travel within general relativity by Miguel Alcubierre (PDF); Assessing Potential Propulsion Methods (PDF)

From Earth To Moon Redux: How The Next Moonshot Will Happen

May 2019: Our scheduled return to the moon. There’s plenty of laboring to be done on the Constellation Program before then, but the foundation is set. Here’s how you—as an astronaut—would experience the mission:

Ares V Unmanned Cargo Rocket, EDS and Altair: The Gear Goes Up First
First up is the giant unmanned Ares V, carrying most of the real hardware you’ll need on your journey. You and the rest of your astronaut compadres walk around the pad hours ahead of the launch—a metaphorical kicking of Ares’ tires. Man, that thing seems hellishly big.

Six hours later you’re watching the countdown from the VIP bleachers, and all 360 feet of rocket look even more ominous. You’ve all got on the “spaceman” face for the news cameras—confident, professional, all smiles. But when the five RS-68 engines at the bottom of that rocket light up, followed by the two solid boosters, and that thundering noise finally reaches you, you’re all suddenly kids on Christmas morning. Literally tons of fuel are burned every second, pushing a blunt needle skywards. It makes a heck of a show, and the noise of Ares V racing to space barely covers your whoops. Quickly, you remember to use your crappy little digicam to snap the rocket’s launch—there’ll be thousands of official photos, but these will be yours.

Minutes later, you and the crew watch monitors in a nearby viewing room as the rocket makes it to orbit. Everyone’s quiet as they watch the final stage, the Earth Departure Stage, fire its engines. The huge aerodynamic nose cone isn’t needed any more, so it pops off, revealing the lunar lander, an Altair. Bolted to the top of the EDS, it looks more like a sci-fi fantasy than a real moon ship. Eventually, the instruments aboard the EDS all phone home to NASA with a digital OK, and the spacecraft pauses. It’s waiting for you to join it out in space.

Ares I Crew Rocket, Orion Capsule: Time For You To Hit the Road
Twelve hours later, it’s your turn to go up. All six of you are suited-up and sardined into an Orion capsule, 280 feet above the launch pad at the top of an Ares I rocket. While ticking off mission control’s checklist, you think about the imminent journey. If Ares V is a giant space truck, the smaller Ares I you’re strapped to is a crazy-ass custom-engined dragster—a dragster without a parachute brake, that is.

Eventually the time ticks down to T-Zero: The booster’s solid fuel is ignited, and acceleration slams you and the crew in the back as “The Stick” races skywards. Holy crap, it’s a wild ride: Pure rocket chemistry, raw chest-squeezing thrust from a giant Roman candle. The booster burns out in just 150 seconds, and detaches with a wrenching noise and a jolt—the external camera view you see of it tumbling away behind you is awesome. Then comes thrust from the liquid-fuel J2X engine—the first taste of Apollo-era tech, updated for the 21st century. The ride is now smoother, a little less like Aliens, a little more like 2001.

Rendezvous in Orbit: The Delicate Mating Dance of Spaceships
Switches are thrown and your ship’s computer matches the Orion’s orbit with that of the waiting Earth Departure Stage and its Altair moon ship. Your skin feels alternately hot and cold, which has nothing to do with the air conditioning or the sunlight stabbing through the capsule window—just excitement. And finally there it is: the EDS, clear in the sunlight, spinning gently as the laser-guided rendezvous process with your capsule begins. At one point the Altair’s given name is visible, hand-painted in copperplate by some techie a thousand miles away: Rama. That gives you a shiver. You hear the clunk of mating adapters as Orion joins the EDS, greeted by cheers from Houston over the radio and a bunch of zero-g handshaking with the rest of the crew.

Moon Shot: Leaving Earth’s Orbit
“The Stick” has become “The Stack,” and all is ready to leave Earth orbit and head out toward the moon. The mood is calm: no one aboard will let themselves believe it yet. But twelve hours later, when the long checklists are complete, and the magic words, “Go for lunar orbit burn,” come over the radio, emotion arrives with a rush. “Want a drink?” comes a request from behind you, and the accompanying wink makes you curious. Sipping at the plastic squeeze bag, you aren’t surprised to taste a tiny stab of whiskey: totally against the rules, but frankly the people who made those rules aren’t riding a flimsy steel, titanium and composite can mated to a couple dozen tons of explosive gases in outer space.

The EDS’s engine fires up again, this time pushing the Altair and the Orion forward and you—tucked inside—into a head-back, eyeballs-out position as you fly, backwards as it were, to your date with history.

When its fuel is gone, the EDS is ejected, leaving you racing to the moon for three days in the combined Altair/Orion moon ship at 25,000 miles per hour. You’re just desperate to take a walk.

The Lunar Landing: Pulling a Neil Armstrong, 50 Years Later
40m… 35m… The counter in the middle of the Altair’s hi-res display screen has simulated LEDs, like an old alarm clock, and it makes you smile. Those numbers are a serious wake-up call though: they’re exactly how far above the dusty surface of the moon this little spacecraft hovers. Altair—wasn’t that the name of an old computer? Probably had more CPU power than the original Eagle did, you think. Armstrong landed that old thing on a wing and a prayer. Now it’s your turn, and your mind’s free to wander because computers are largely in control, steering, firing the RL10 rockets and monitoring radar. Your job is just to watch, in case you need to intervene. Your hand hovers over that big red “LANDING ABORT” button, which you hope never to push.

25m… 20m… A lateral shove from a thruster shakes you and your fellow moonwalkers behind you, a minor course correction. 15m… “Kicking up a little dust,” you say over the radio, and you know the guys behind are grinning. “Aye captain!” quips back the mission’s chief engineer.

10m… 5m… And here comes history. Dust really does stream up in the bright sunlight past the windows as the final meters pass. At least you know the surface you’re arriving on—the Apollo guys had no idea if they were landing on concrete or cake icing.

0.8m… 0.6m… 0.4m… The Altair’s descent rocket shuts down so suddenly that the silence is a shock. With less of a jolt than you get riding a roller coaster, it’s touchdown. Velcroed to the control panel, the tiny nodding-dog trinket—a present from some young fan—had been wobbling broken-necked in zero gravity, but now it begins to behave properly, and nods its approval of the landing.

You’re on the moon.