Crank High Voltage Trailer Gives Jason Statham One Hour To Live

The brand new Crank High Voltage spot shows the not-dead-yet Chev Chelios sporting a robotic heart with an hour of battery life. When the power runs out, it’s time to taste a power cable.

Jason Statham has one hour to live in this exclusive spot, which can only be found at io9 right now…and it looks like he’ll rub up against anything to stay alive.

I am so ready for Crank High Voltage to come out; we desperately need more Chev. I especially enjoyed the moment where our dear anti-hero gets tasered by a group of cops and then proceeds to take down all five of them in one swift movement. Or is it Bai Ling running around in a bikini with guns? No, it’s definitely getting a glimpse of what I’m hoping is another segment of gettin’ busy in public with the delightfully dirty Amy Smart.

Crank High Voltage opens April 17th.

Best Buy Says Goodbye to Circuit City

Reader Sean sends in these photos taken outside his local Circuit City store in Amherst, as Best Buy’s Geek Squad pay their final respects to Circuit City. And by that, I mean they bought stuff.

Sean tells us that the store was empty down to 3 carts, which meant Circuit City did the only thing they could: They sold their fixtures.

Those yellow price tags you see in the image below are how much the shelves went for, which is what the BB people were there to buy. Everything was somewhere between $75 and $250, in case you were wondering.

Goodbye Circuit City. You were a store we went to before.

All of Giz’s Circuit City coverage

Thanks, Sean!

Watchmen’s Old School Macintosh SE/30

Here is the computer of Ozymandias—Watchmen’s Steve Jobs alter ego: a Macintosh SE/30. All in black, because in Nixon’s 1985, Macs are black. It is one of the many Apple references in Watchmen.

In the movie, the computer runs the classic Macintosh System in inverted video mode, white over black. Don’t forget to check out io9’s Watchmen review and coverage, as well as our Steve Jobs conspiracy theory and multiple babblings on the movie.

Update: VERY sorry for the spoilers. Took those out. – JC

First Hands On: Touch Book Is Part-Netbook, Part-Tablet

The Always Innovating Touch Book does something I’ve never seen from a netbook: it has a fully detachable keyboard dock and transforms from a standard-looking 8.9-inch netbook into a stand-alone tablet.

The Touch Book is the latest project from Gregoire Gentil, the man behind the Zonbu Desktop and Laptop, and a promising one at that. Gentil says the Touch Book’s hardware and software are fully open source and ready for modifications. While the device will come preloaded with a custom Touch Book OS, Gentil says this machine is capable of running mobile operating systems such as Android or Windows CE.

The hardware I saw wasn’t quite complete—the software was demoed on a prototype, and the final units shown above were just empty shells to give an idea of the design—so I can’t comment too much on how well the end product performs, but I saw enough to consider this thing more than vaporware.

The Touch Book is the first netbook powered by a 600 MHz TI OMAP3 processor (built around ARM technology), with 256 MB of RAM, a 3-axis accelerometer, an 8-gigabyte microSD card for storage and two batteries providing up to 15 hours of usage between charges. The 8.9-inch screen can display resolutions up to 1024×768 and uses a resistive touch panel. There’s also the usual offering of 802.11b/g/n Wi-Fi and Bluetooth.

As a standalone tablet, the Touch Book is roughly 9.5″x7″x1″ and weighs about a pound. When docked to the keyboard, it is about 1.4 inches thick and weighs 2 pounds. All of the Touch Book’s guts, except for one of the batteries, are housed in the tablet portion of the device, so it’s fully functional while detached from the keyboard.

The chipset fits on a motherboard about the size of an index card, and is heavily optimized to get the best performance out of the hardware. Part of this involves stacking the RAM directly on top of the processor in a package-on-package configuration. The lid of the Touch Book also pops off, so you have easy access to the hardware and its two internal USB ports, which you can use for dongles you don’t want hanging off the side of the tablet.

As far as software goes, the OS is based on the OpenEmbedded Linux platform, but fully customized for the Touch Book hardware. As such, the Touch Book has the power to handle full-screen video and render OpenGL 3D graphics. Gentil says the Touch Book can run some of the same games found on the iPhone and plans to offer them in the future.

The Touch Book UI design depends on what configuration the hardware is in. When docked to the keyboard, the Touch Book uses a standard, cursor-based UI that looks like other Linux desktops. However, when in tablet mode, it uses a custom-designed, touch-based UI. The touch UI is based around spherical icons that rotate in a circular fashion as you swipe to the next one. Content is divided into three categories: web, apps and settings.
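
To make that swipe-to-rotate behavior concrete, here’s a minimal sketch in TypeScript of how a ring of icons like this might be modeled. Everything in it (the IconRing class, the pixelsPerRadian mapping) is invented for illustration; Always Innovating hasn’t published the Touch Book UI code.

    // Hypothetical sketch only; not Always Innovating's actual code.
    interface Icon {
      label: string;
      angle: number; // current position on the ring, in radians
    }

    class IconRing {
      private icons: Icon[];

      constructor(labels: string[]) {
        // Space the icons evenly around the ring.
        const step = (2 * Math.PI) / labels.length;
        this.icons = labels.map((label, i) => ({ label, angle: i * step }));
      }

      // Rotate the whole ring in response to a horizontal drag.
      // swipePixels is the drag distance; pixelsPerRadian maps pixels to rotation.
      swipe(swipePixels: number, pixelsPerRadian: number = 200): void {
        const delta = swipePixels / pixelsPerRadian;
        for (const icon of this.icons) {
          // JavaScript's % can go negative here; the cosine below doesn't mind.
          icon.angle = (icon.angle + delta) % (2 * Math.PI);
        }
      }

      // The "front" icon is whichever one sits closest to angle 0,
      // i.e. whichever has the largest cosine.
      front(): Icon {
        return this.icons.reduce((a, b) =>
          Math.cos(a.angle) > Math.cos(b.angle) ? a : b
        );
      }
    }

    const ring = new IconRing(["web", "apps", "settings"]);
    ring.swipe(-419); // drag left roughly one slot: 419 px ≈ 200 px/rad * 2π/3 rad
    console.log(ring.front().label); // "apps"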

On the apps side, the Touch Book will ship with both Firefox and Fennec (Mobile Firefox), games that will make use of the accelerometer, plus various web and productivity apps, such as word-processor and spreadsheet-type programs.

Always Innovating plans to start shipping the Touch Book in late May or early June, priced at $300 for the tablet alone, or $400 for the tablet and keyboard dock combination. Pre-ordering will begin next week, and you can order the Touch Book in either red or dark grey. Gentil says he would also like to release future iterations that include support for GPS and 3G mobile broadband. [Always Innovating]

NEW TOUCH BOOK COMBINES NETBOOK AND TOUCHSCREEN TABLET; PROVIDES THREE TIMES THE BATTERY LIFE AT UNDER TWO POUNDS

PALM DESERT, Calif. March 2, 2009: Always Innovating today unveiled the Touch Book, a versatile new device that works as both a netbook and a tablet thanks to a detachable keyboard and a 3D touchscreen user interface. The Touch Book, previewed at DEMO 09, weighs less than two pounds as a netbook and has a battery life of 10 to 15 hours – three times longer than most netbooks.

“The Touch Book is perfect for these tough economic times because you can use it in so many ways,” said Gregoire Gentil, founder of Always Innovating and creator of the Touch Book. “You can use it as a netbook computer, a hand-held game device, or a video player. You can even reverse the keyboard to prop it up on a table in an inverted ‘V’. Finally, because it is magnetic, you can remove the keyboard and put the tablet on the fridge to serve as a kitchen computer or digital frame.”

The Touch Book combines the best of open source software and open hardware with a sleek industrial design by designer Fred Bould. The innovative design includes internal USB plugs. “I hate having dongles hanging from my laptop – I often end up disconnecting them accidentally – so we opted to put the USB inside,” said Gentil.

The Touch Book is the first netbook featuring an ARM processor from Texas Instruments, resulting in outstanding battery life and a fanless, heat- and noise-free system.

According to Chris Shipley, executive producer of the DEMO Conferences, the Touch Book’s innovative architecture and industrial design earned it a spot on the DEMO conference stage. “The longer battery life is a boon to netbook users. But the Touch Book’s versatility – its ability to function as a netbook as well as a standalone touchscreen tablet – makes it a breakthrough product,” said Shipley.

The Touch Book is expected to ship in late spring and will start at $299. Advance orders can be placed at http://www.alwaysinnovating.com/store/.

Inside the Mind of Microsoft’s Chief Futurist

If I encountered Craig Mundie on the street, met his kind but humorless gaze and heard that slight southern drawl, I’d guess he was a golf pro—certainly not Microsoft’s chief futurist.

As chief research and strategy officer at Microsoft, Mundie is a living portal of future technology, a focal point between thousands of scattered research projects and the boxes of super-neat products we’ll be playing with 5 years, 20 years, maybe 100 years from now. And he’s not allowed to even think about anything shipping within the immediate 3 years. I’m pretty sure the guy has his own personal teleporter and hoverboard, but when you sit and talk to him for an hour about his ability to see tomorrow, it’s all very matter of fact. So what did we talk about? Quantum computing did come up, as did neural control, retinal implants, Windows-in-the-cloud, multitouch patents and the suspension of disbelief in interface design.

Seeing the Future
Your job is to look not at next year or the next five years. Is there a specific number of years you’re supposed to be focused on?

I tell people it ranges from about 3 to 20. There’s no specific year that’s the right amount, in part because the things we do in Research start at the physics level and work their way up. The closer you are to fundamental change in the computing ecosystem, the longer that lead time is.

When you say 3 years, you’re talking about new UIs and when you say 20 you’re talking about what, holographic computing?

Yeah, or quantum computing or new models of computation, completely different ways of writing programs, things where we don’t know the answer today, and it would take some considerable time to merge it into the ecosystem.

So how do you organize your thoughts?

I don’t try to sort by time. Time is a by-product of the specific task that we seek to solve. Since it became clear that we were going to ultimately have to change the microprocessor architecture, even before we knew what exactly it would evolve to be from the hardware guys, we knew they’d be parallel in nature, that there’d be more serial interconnections, that you’d have a different memory hierarchy. From roughly the time we started to the time those things become commonplace in the marketplace will be 10 to 12 years.

Most people don’t really realize how long it takes from when you can see the glimmer of things that are big changes in the industry to when they actually show up on store shelves.

Is it hard for you to look at things that far out?

[Chuckles] No, not really. One of the things I think is sort of a gift or a talent that I have, and I think Bill Gates had to some significant degree too, is to assimilate a lot of information from many sources, and your brain tends to work in a way where you integrate it and have an opinion about it. I see all these things and have enough experience that I say, OK, I think that this must be going to happen. Your ability to say exactly when or exactly how isn’t all that good, but at least you get a directional statement.

When you look towards the future, there’s inevitability of scientific advancement, and then there’s your direction, your steering. How do you reconcile those two currents?

There are thousands of people around the world who do research in one form or another. There’s a steady flow of ideas that people are advancing. The problem is, each one doesn’t typically represent something that will redefine the industry.

So the first problem is to integrate across these things and say, are there some set of these when taken together, the whole is greater than the sum of the parts? The second is to say, by our investment, either in research or development, how can we steer the industry or the consumer towards the use of these things in a novel way? That’s where you create differentiated products.

Interface Design and the Suspension of Disbelief
In natural interface and natural interaction, how much is computing power, how much is sociological study and how much is simply Pixar-style animation?

It’s a little bit of all of them. When you look at Pixar animation, something you couldn’t do in realtime in the past, or if you just look at the video games we have today, the character realism, the scene realism, can be very very good. What that teaches us is that if you have enough compute power, you can make pictures that are almost indistinguishable from real life.

On the other hand, when you’re trying to create a computer program that maintains the essence of human-to-human interaction, then many of the historical fields of psychology, people who study human interaction and reasoning, these have to come to the fore. How do you make a model of a person that retains enough essential attributes that people suspend disbelief?

When you go to the movies, what’s the goal of the director and the actors? They’re trying to get you to suspend disbelief. You know that those aren’t real people. You know Starship Enterprise isn’t out there flying around—

Don’t tell our readers that!

[Grins] Not yet at least. But you suspend disbelief. Today we don’t have that when people interact with the computer. We aren’t yet trying to get people to think they’re someplace else. People explore around the edges of these things with things like Second Life. But there you’re really putting a representative of yourself into another world that you know is a make-believe environment. I think that the question is, can we use these tools of cinematography, of human psychology, of high-quality rendering to create an experience that does feel completely natural, to the point that you suspend disbelief—that you’re dealing with the machine just as if you were dealing with another person.

So the third component is just raw computing, right?

As computers get more powerful, two things happen. Each component of the interaction model can be refined for better and better realism. Speech becomes more articulate, character images become more lifelike, movements become more natural, recognition of language becomes more complete. Each of those drives a requirement for more computing power.

But it’s the union of these that creates the natural suspension of disbelief, something you don’t get if you’re only dealing with one of these modalities of interaction. You need more and more computing, not only to make each element better, but to integrate across them in better ways.

When it comes to solving problems, when do you not just say, “Let’s throw more computing power at it”?

That actually isn’t that hard to decide. On any given day, a given amount of computing costs a given amount of money. You can’t require a million dollars’ worth of computer if you want to put it on everybody’s desk. What we’re really doing is looking at computer evolutions and the improvements in algorithms, and recognizing that those two things eventually bring new problem classes within the bounds of an acceptable price.

So even within hypothetical research, price is still a factor?

It’s absolutely a consideration. We can spend a lot more on the computing to do the research, because we know that while we’re finishing research and converting it into a product, there’s a continuing reduction in cost. But trying to jockey between those two things and come out at the right place and the right time, that’s part of the art form.

Hardware Revolutions, Software Evolutions
Is there some sort of timeline where we’re going to shift away from silicon chips?

That’s really a question you should ask Intel or AMD or someone else. We aren’t trying to do the basic semiconductor research. The closest we get is some of the work we’re doing with universities exploring quantum computers, and that’s a very long term thing. And even there, a lot of work is with gallium arsenide crystals, not exactly silicon, but a silicon-like material.

Is that the same for flexible screens or non-moving carbon-fiber speakers that work like lightning—are these things you track, but don’t research?

They’re all things that we track because, in one form or another, they represent the computer, the storage system, the communication system or the human-interaction capabilities. One of the things that Microsoft does at its core is provide an abstraction in the programming models, the tools that allow the introduction of new technologies.

When you talk about this “abstraction,” do you mean something like the touch interface in Windows 7, which works with new and different kinds of touchscreens?

Yeah, there are a lot of different ways to make touch happen. The Surface products detect it using cameras. You can have big touch panels that have capacitance overlays or resistive overlays. The TouchSmart that HP makes actually is optical.

The person who writes the touch application just wants to know, “Hey, did he touch it?” He doesn’t want to have to write the program six times today and eight times tomorrow for each different way in which someone can detect the touch. What we do is we work with the companies to try to figure out what is the abstraction of this basic notion. What do you have to detect? And what is the right way to represent that to the programmer so they don’t have to track every activity, or even worse, know whether it was an optical detector, a capacitive detector or an infrared detector? They just want to know that the guy touched the screen.
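
To put that in code: below is a minimal TypeScript sketch of the kind of abstraction Mundie describes, where the application only ever learns “the guy touched the screen.” All of the names here (TouchDriver, TouchPoint, poll) are invented for illustration; none of this is a real Windows API.

    // Illustrative only: a made-up sketch of the abstraction Mundie describes,
    // where the app asks "did he touch it?" without caring about the sensor.
    // None of these names come from a real Windows API.

    type TouchSource = "camera" | "capacitive" | "resistive" | "optical";

    interface TouchPoint {
      x: number; // normalized screen coordinates, 0..1
      y: number;
      pressed: boolean;
    }

    // Every detection technology is wrapped behind the same driver interface.
    interface TouchDriver {
      readonly source: TouchSource;
      poll(): TouchPoint | null; // null when nothing is happening
    }

    // A stand-in for, say, HP's optical sensing: the driver translates its
    // hardware-specific readings into the common TouchPoint shape.
    const opticalDriver: TouchDriver = {
      source: "optical",
      poll: () => ({ x: 0.42, y: 0.77, pressed: true }),
    };

    // The application is written once, against the abstraction alone.
    function onFrame(driver: TouchDriver, handleTap: (p: TouchPoint) => void): void {
      const point = driver.poll();
      if (point && point.pressed) {
        // The app never learns whether a camera, capacitive overlay or
        // optical sensor produced this event.
        handleTap(point);
      }
    }

    onFrame(opticalDriver, (p) => console.log(`tap at ${p.x}, ${p.y}`));

The point of the design: each sensor vendor implements the driver once, and every touch app written against the interface works unchanged on all of them.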

Patents and Inventor’s Rights
You guys recently crossed the 10,000-patent line—is that all your Research division?

No, that’s from the whole company. Every year we make a budget for investment in patent development in all the different business groups including Research. They all go and look for the best ideas they’ve got, and file patents within their areas of specialization. It’s done everywhere in the company.

So, take multitouch, something whose patents have been discussed lately. When it comes to inevitability vs. unique product development, how much is something like multitouch simply inevitable? How much can a single company own something that seems so generally accepted in interface design?

The goal of the patent system is to protect novel inventions. The whole process is supposed to weed out things that are already known, things that have already been done. That process isn’t perfect—sometimes people get patents on things that they shouldn’t, and sometimes they’re denied patents on things they probably should get—but on balance you get the desired result.

If you can’t identify in the specific claims of a particular patent what it is that’s novel, then you don’t get a patent. Just writing a description of something—even if you’re the first person to write it down—doesn’t qualify as invention if it’s already obvious to other people. You have to trust that somehow obvious things aren’t going to be withheld from everybody.

That makes sense. We like to look at patents to get an idea of what’s coming next—

That’s what they were intended to do; that was the deal with the inventor: If you’ll share your inventions with the public in the spirit of sharing knowledge, then we’ll give you some protection in the use of that invention for a period of time. You’re rewarded for doing it, but you don’t sequester the knowledge. It’s that tradeoff that actually makes the patent system work.

Windows in the Cloud, Lasers in the Retina
Let’s get some quick forecasts. How soon until we see Windows in the cloud? I turn on my computer, and even my operating system exists somewhere else.

That’s technologically possible, but I don’t think it’s going to be commonplace. We tend to believe the world is trending towards cloud plus client, not timeshared mainframe and dumb display. The amount of intrinsic computing capability in all these client devices—whether they’re phones, cars, game consoles, televisions or computers—is so large, and growing larger still exponentially, that the bulk of the world’s computing power is always going to be in the client devices. The idea that the programmers of the world would let that lie fallow, wouldn’t try to get any value out of it, isn’t going to happen.

What you really want to do is find what component is best solved in the shared facility and what component is best computed locally. We do think that people will want to write arbitrary applications in the cloud. We just don’t think that’s going to be the predominating usage of it. It’s not like the whole concept of computing is going to be sucked back up the wire and put in some giant computing utility.

What happens when the processors are inside our heads and the displays are projected on the inside of our eyeballs?

It’ll be interesting to see how that evolution will take place. It’s clear that embedding computing inside people is starting to happen fairly regularly. These are special processors, not general processors. But there are now cochlear implants, and even people exploring ways to give people who’ve lost sight some kind of vision or a way to detect light.

But I don’t think you are going to end up with some nanoprojector trying to scribble on your retina. To the extent that you could posit that you’re going to get to that level, you might even bypass that and say, “Fine, let me just go into the visual cortex directly.” It’s hard to know how the man-machine interface will evolve, but I do know that the physiology of it is possible and the electronics of it are becoming possible. Who knows how long it will take? But I certainly think that day will come.

And neural control of our environment? There’s already a Star Wars toy that uses brain waves to control a ball—

Yeah, it’s been quite a few years since I saw some of the first demos inside Microsoft Research where people would have a couple of electrical sensors on their skull, in order to detect enough brain wave functionality to do simple things like turn a light switch on and off reliably. And again, these are not invasive techniques.

You’ll see the evolution of this come from the evolution of diagnostic equipment in medicine. As people learn more about non-invasive monitoring for medical purposes, what gets created as a byproduct is non-invasive sensing that people can use for other things. Clearly the people who will benefit first are people with physical disabilities—you want to give them a better interface than just eye-tracking on screens and keyboards. But each of these things is a godsend, and I certainly think that evolution will continue.

I wonder what your dream diary must look like—must have some crazy concepts.

I don’t know, I just wake up some mornings and say, yeah, there’s a new idea.

Really? Just jot it down and run with it?

Yeah, that’s oftentimes the way it is. Just, wasn’t there yesterday, it’s there today. You know, you just start thinking about it.

Terminator Ending “Might Piss Off A Lot Of People”

Last night we were treated to about 15 minutes of Terminator Salvation footage. Spoilery details of what’s going on below, plus McG’s confession that the ending might piss people off.

James Cameron loyalists, rest assured: it’s going to be a fun ride.

Before the screening, director McG sat us all down, and told us the tale of getting Terminator Salvation made. The producers approached him with the idea, and he was initially skeptical — as I would be if I heard someone pitch another Terminator movie.

But McG liked the angle, and went about procuring the best and the brightest, starting with seeking out James Cameron for a pseudo-blessing, and ending with Christian Bale telling him to rewrite the entire thing or he’s out. McG’s a smart director; he knew he needed someone with crazy acting chops to make fighting a giant robo-puppet believable these days. So he hired Jonah Nolan, and they wrote the story that we all love to watch: the creation of a hero.

“We started working on this becoming story of how Connor indeed became the leader of the resistance,” McG explained. “And we were both passionate about those genesis stories where you think, ‘I’m just a high school newspaper guy.’ ‘No, you’re Peter Parker. With great power comes great responsibility.’ ‘I’m just a computer hacker.’ ‘No, you’re Neo. You’re the one.’ We like that idea. This is the story about how Connor became the leader of the resistance.”

But on to the goods: first there was a quickie compilation that showed a cavalry of helicopters coming to the rescue, and general John Connor ass-kickery. But finally they got to the clips: the first scene was all about Sam Worthington’s character, Marcus Wright, meeting Kyle Reese. To put it McG style:

“When you meet Marcus in the beginning of the movie, he’s being put to death in the modern day. And he’s down on himself… He’s in this life of privilege that he only ever saw the bad side to. Then he wakes up in this world of duress, after the bombs have gone off. And he discovers what is worthwhile about humanity. The courage of this little kid, the kindness of an elderly woman…”

Marcus, Kyle Reese and an unknown floppy-haired kid descend upon the hollowed-out, worn-down 7-11 (that we’ve seen in set pictures before). The two scavenge for food as a bewildered Marcus looks on. Reese is practically telling Marcus what to do and what not to do, which jibes with what McG told us about Marcus waking up in another world in the future, unaware of what has happened to him, or around him. The look itself is silvery and dirty, thanks to the specially tinted film McG mixed up himself. Everyone has a bit of a silver gloss over the shadowy parts of their faces, and dark circles and wrinkles are amplified to the 1000th degree across the screen. It’s beautifully brutal.

The floppy-haired kid finds a small amount of milk, but before they can devour it, a group of Mad Max-looking types pop out of their hiding places, guns a-blazin’. They shout that this is their food and fuel. Reese tries to exit without setting off any itchy trigger fingers. Yet a wise old woman stops the gang and offers up some food to the kid with the bad hair. Everyone calms down for a minute, but you all know that when things are too quiet, there’s a big bad on the way. Sure enough, a massive metal Harvester rips through the ceiling and carries off the good-natured woman. All the others disperse to their assembly of beat-up Saabs and trailers, and speed away in fear, to their detriment of course. These Harvesters are wreaking destruction, “all in the interest of collecting humans so they can do nasty things to us, all in the spirit of creating the T-800,” the director explained. Run, humans, run to your dirty cars and grab your shotguns, but it will do you no good; that Harvester is damn near indestructible. The action scene was tightly filmed and, thank god, didn’t have an inch of shaky cam.

Since we got a good look at the really big bots in Terminator Salvation, let’s just nip this whole Transformers versus Terminator controversy in the bud. Even though the effects that I witnessed last night were by no means finished, you could see what McG and friends were trying to do.

Granted, the Harvester does shoot off the wheelie Mototerminators from its legs in a very Transformery manner, but it’s nothing like Transformers. The Harvesters rattle off a guttural moan so frightening, it’d make the Cloverfield monster piss his pants. It’s cold, calculating robots killing and abducting men the best way they know how. There’s no personality or sassy attitude; it is simply a gloriously intense moment of robots exterminating and capturing people. If anything, the few moments I caught felt more like the first 15 terrifying minutes of Planet Of The Apes than Transformers, especially with the fast pace and the ever-present fear of being dragged off by a robot into a pen filled with humans. It’s cruel and fast, just like a Terminator should be. No room for witty banter or “talk to the hand” references in this movie — it would be out of place in a world where milk is a luxury.

Someone asked McG if he was worried about the Transformers comparison, and he pretty much blew it off, saying the movies are so completely different that they just couldn’t be compared.

Most of the music was filler, since the great Danny Elfman has just signed on board for Terminator 4. But we got to hear McG’s original idea of using Gustavo Santaolalla for the human-interaction scenes, paired with Thom Yorke’s Radiohead robot-dream tunes for any Skynet-heavy moments. This idea got thrown by the wayside after sitting down and talking with the reliably spooky Elfman. Note to Yorke: it’s still totally okay to pursue this idea; in fact, I’ll send you my money now to see what kind of sounds you’d dream up for a Terminator flick.

The next scene pitted John Connor against a Hydrobot’s tentacles, which easily kill off his crew and sink Connor’s hovering helicopter. Finally, after a few more minutes of Hydrobot wrasslin’, we’re treated to a tiny taste of what McG described as a Faustian deal-with-the-devil moment. He was talking about Marcus, who’s been exposed as a Terminator, finding an uneasy truce with Connor so they can bust the young Kyle Reese out of the human containment facility. Yet another awesome action scene, and I admit I had a few “Oh no, look out behind you, J.C.” moments. But my appetite wasn’t sated by the back-and-forth. McG is staking this movie on stand-out dialog and acting, and I wanted to see a lot more of that.

In fact I have a feeling a lot of this movie may be full of Bale-face:

“Christian is so powerful,” McG said. “There are several times in the movie where I stay on him for one shot and I don’t cut. I’m talking 2 or 3 pages of dialogue where you stay on Christian Bale. He’s controlling his breathing, he’s choosing when he blinks, he’s in such command of his physicality that it doesn’t require cutting.”

So does the back-and-forth moment deliver? Sure. Is it the most amazing, intense, holy-shit-my-mind-is-blown moment? Not yet, but I feel like that is yet to come (probably in the final big reveal). Which is pretty much what I’m hinging this entire movie upon. So far, it is full of good-looking, adrenaline-inducing crazy that hits right in the gut, where a great action movie should. So if you believe what McG says about Bale’s command on set, you won’t be able to rip your eyes away from him.

Still, we all know about the big twist ending that’s been widely reported across the internet, which the director again insisted was completely untrue. But either way, we know there is a twist. A twist that may “piss off a lot of people,” quoth McG. This is what I’ll hang my final decision on, only because it should change the way I view everything about the past and future of the Terminator franchise.

“I’m just so happy to report to all of you that it’s really coming together nicely. Our goal was always to make a big movie, one of the best movies. Because for a long time there, I think the summer fare really fell off. And summer movies were becoming a little sloppy, a little disposable. I think with The Dark Knight this year, that’s an elegant, elegant artistic offering, and the second biggest picture in the history of cinema.

“So if we’re clever, we can make a big movie that’s a lot of fun and certainly a summer movie — but also an important movie, especially a movie in this genre. I think any good science fiction movie really works on two levels. The Matrix is a great example where you can watch it and say, ‘Hey wow, that’s fun, that’s really explosive,’ and then all of us can go to a graduate class at NYU and spend four years discussing the theological implications of what the Wachowskis were discussing.”

Me too, McG, me too. No longer shall I join in the “fuck that guy from Charlie’s Angels” chorus (which the director himself pointed out was one of the most hated-on things about him thus far, besides the name). God help me, after last night I’m really pushing for a McG victory here.