Xbox’s Aaron Greenberg: Hulu on 360 like “asking out a really hot chick on a date”

Hey, our main dudes over at Joystiq sat down with Xbox’s Aaron Greenberg recently, and there are some great bits of info in the interview. Aaron says the big Xbox price cut had been planned for months, so it’s “just a coincidence” that it happened right on top of the PS3 Slim — and he also says he’s an avid Joystiq reader, so if Redmond had wanted to get the jump on Sony, they’d have been prepared. Other nuggets: the $99 WiFi adapter isn’t coming down in price, the Netflix relationship is going well, and getting Hulu or Amazon Unbox on the 360 is like “asking out a really hot chick on a date, they don’t all say yes.” Yeah, it’s a pretty great interview — hit the read link for the whole thing.

Xbox’s Aaron Greenberg: Hulu on 360 like “asking out a really hot chick on a date” originally appeared on Engadget on Fri, 28 Aug 2009 21:22:00 EST.


EVGA Releases Innovative Dual-Display Monitor

Now this is a hard-working monitor. EVGA has just introduced the InterView 1700, a dual-display setup that lets you do more. The InterView offers two 17-inch flat screens, each with a 1440×900 pixel resolution. They’re both attached to a single center stand that contains a 1.3-megapixel webcam. The base includes the monitor controls, as well as three USB 2.0 ports.

Here’s where it gets fun: the displays can each swivel horizontally, so you can view both from a comfortable angle, or arrange one for a friend to view. The monitors can run in clone mode, where both screens show the same thing, or span mode, where they produce one continuous desktop.

But there’s more: the screens can also swivel vertically, so you can show your work to someone sitting on the other side of your desk. The image automatically rotates when a screen is flipped. Two people sharing a desk area can each use one of the screens, thereby saving space.

It’s a beautifully versatile system, easy to configure as needed. The company is pitching it to business users, but I’m betting plenty of home users will also want one. The InterView 1700 is available from the company for $649.99.

EVGA’s quirky InterView dual-LCD display reviewed

Much like Lenovo’s ThinkPad W700ds, we get the feeling that EVGA’s newly launched InterView Dual-Display will only cater to a select niche, but that’s not to say it can’t be a winner for at least a few individuals. The crew over at HotHardware took an in-depth look at the new rotatable, twin-LCD device, and while they certainly appreciated the 34-inch desktop in screen-spanning mode, the auto re-orientation and the stunning build quality, a few minor issues held it back from greatness. For starters, running both panels from a single machine requires dual VGA or DVI outputs, and the fact that each LCD is only 17 inches could also turn some folks off. The most egregious choice, however, was to equip each display with just a 1,440 x 900 resolution, which isn’t even enough to showcase 1080p material. At $650, the InterView is tough to recommend to all but those who are certain they’ll take advantage of the nuances, but you can hit the read link for a few more looks and a complete video walk-through before making up your mind either way.

EVGA’s quirky InterView dual-LCD display reviewed originally appeared on Engadget on Thu, 16 Jul 2009 09:26:00 EST.


EVGA introduces rotatable dual-LCD InterView system

Man, talk about falling into a black hole at the R&D lab. A staggering 1.5 years after we first caught wind of the altogether intriguing InterView system from EVGA, the company is finally bringing it to market here in the US. Put simply, the device features twin rotatable 17-inch LCD displays, both supported by a single desktop stand. It was conceived in order to suit presentation givers, financial consultants and the elusive “creative professional” crowd, with each panel rocking a 1,440 x 900 resolution. The screens can rotate 180 degrees horizontally, fold 90 degrees from closed to full width apart and can even be controlled by two keyboards and mice, ensuring that sibling arguments reach peaks they’ve never reached before. There’s also a built-in webcam, microphone and three-port USB hub, though it seems as if you’ll be shopping for this thing without an MSRP to go by. Full release is after the break.

Update: The display will list for $649.99.

EVGA introduces rotatable dual-LCD InterView system originally appeared on Engadget on Wed, 15 Jul 2009 16:14:00 EST.


Roger McNamee says Pre launch was a “dream come true,” hints that all Palm devices will have physical keyboards

There’s never a dull moment when Palm investor Roger McNamee sits down for an interview, and his latest chat with Fox Business is no exception — in addition to saying that the Pre launch was a “dream come true,” he more or less implied that all future Palm devices will have hardware keyboards: “Our goal is to address all of those people who say I cannot have a real life without a keyboard — I can’t live doing one thing at a time.” Yeah, it’s not much, but taken in context it seems like he’s saying that keyboards and multitasking will be Palm’s major differentiators against the iPhone. That’s not to say he thinks the Pre is destined to kill Cupertino’s baby — in addition to calling Apple “the most successful company in the history of Silicon Valley,” McNamee also reiterated Palm’s characterization of the iPhone as primarily a consumer-centric media phone: “If what you care about most is listening to music or playing back videos, the iPhone is probably the right phone for you.” That’s a pretty slickly-delivered backhanded compliment, if you ask us — although from El Rog we’d expect nothing less. Check the whole interview after the break — it’s a good one.

[Via Everything Pre]

Roger McNamee says Pre launch was a “dream come true,” hints that all Palm devices will have physical keyboards originally appeared on Engadget on Tue, 09 Jun 2009 11:08:00 EST.


Video: White PSPgo hands-on

Sure, speaking with Sony’s John Koller was great and all, but one of the real treats of the interview was some quiet hands-on time with a white PSPgo, unfortunately not turned on but with the same build quality and weight as its functional black sibling (also found tagging along to the Q&A). Our impression of the device is largely unchanged from the initial experience — a sturdy build that’s surprisingly light, although this time around we didn’t find the shoulder buttons any more comfortable. Still, our interest wanes pretty dramatically when we’re reminded of its $249 price tag, but enough with our chatter: hit up the break for an up-close video of all its nooks and crannies, and while you’re there, stick around for more tidbits from our interview.

Video: White PSPgo hands-on originally appeared on Engadget on Fri, 05 Jun 2009 08:54:00 EST.


Wired for War: Author Explains Revolution in Robotics, Scares Crap Out of Us

If you shrug off Terminator and Battlestar Galactica as never-gonna-happen impossibilities, PW Singer has news for you. His spine-tingling book, Wired for War, carefully explains the robotics revolution that’s gripped our military since 9/11.

If you believe Singer (shown at left with an unarmed robot), the biggest revolution happening in the world today is the one taking place in military robotics: unmanned fighting systems, which were next to non-existent before 9/11 and have multiplied exponentially since the Iraq invasion of 2003.

You don’t have to read Wired for War (or Gizmodo) to know why military robots are awesome: On the battlefield, they won’t hesitate to take a bullet for you, and when they bite it, you don’t have to go and tell their mama how sorry you are. But robots are no longer just an extra layer of protection for our flesh-and-blood warriors; they are a new fighting force—the US has 12,000 on the ground and 7,000 in the air—that is changing the way the generals see the battlefield, and the way soldiers define what it means to fight.

I got in touch with Singer after Wired for War was published, and the cool, calm way he explains how different the world will be from now on—how the extended conflicts in Iraq and Afghanistan have turned robots from novelty items to autonomous killing machines, how cute dormroom debates over Asimov’s Three Laws of Robotics have morphed into heated arguments at the Pentagon—has really got me convinced.

This week we’re celebrating the book with a series of posts on topics it covers, but first, it’s time for you to hear from Singer himself, and drink in some of that truth. As he himself would say, citing The Matrix, it’s time to swallow the red pill:

Giz: One of the biggest purposes of your book is to make, for the first time, a compelling argument for the reality of the scary sci-fi future, right?

PWS: There are a couple of points of the book. One, to sell lots of books. Two, to get our heads out of the sand when it comes to the massive changes happening in war, to say this is not science fiction but battlefield reality. Next, this is not the revolution that Rumsfeld and his people thought would happen. You may be getting incredible new capabilities, but you’re also getting incredible new human dilemmas to figure out. The fog of war is not being lifted. Moore’s law may be in operation, but so is Murphy’s law. Mistakes still happen. The final aspect is to give people a way to look at the ripple effects that are coming out of this, on our politics, the warrior’s experience, our laws, our ethics.

We’re experiencing something incredibly historic right now, and yet no one is talking about it. Think about the phrase “going to war.” That has meant the same thing for five thousand years. It meant going to a place where there was such danger that they may never come home again, may never see their family again. Whether you were talking about my grandfather’s experience in World War II or Achilles going off to fight the Trojans.

Compare that to what it means in a world of Predator drones, already. One of the pilots I interviewed says you’re going to war—for 12 hours. You’re shooting weapons at targets, killing enemy combatants. And then you get back in your car and you drive home. And 20 minutes later, you’re sitting at the dinner table, talking to your kids about their homework. So we have an absolute change in the meaning of going to war, in our lifetime right now, and nobody was talking about it.

Giz: That’s mind blowing. The thing you’re hitting on here is the role of humans in war. Many argue that you can’t take the human being out of war, but will there be a time when robots just fight robots? And what’s the point? Doesn’t there have to be a human target? If robots fight robots, who cares?

PWS: Basically you’re asking the question from the famous Star Trek episode [“A Taste of Armageddon,” TOS 1967], where two machines fight each other, they calculate what would happen, and then a set number of humans are killed based on the computer calculations. That’s how they do the wars.

If we do get to that scenario, is it war anymore? We’d have to reconfigure our definitions. This is something we do. Some people back in the day thought that the use of guns was not an act of war, it was murder. It was a crime to use guns. Only cowards used guns. Well, we changed our definitions.

Giz: But the human has always been the target of whatever murderous weapon—I’m asking what happens when Predator drones on our side go after Predator drones on their side over the Pacific Ocean.

PWS: It’s not a theoretical thing. Is that war anymore? Or does it take away the valor and heroism that we use to justify war, and just turn it into a question of productivity? Maybe that’s where war is headed.

But things don’t always turn out as you described. Every action has a counter-reaction. You develop these systems that give you this incredible advantage. But as one of the insurgents in Iraq says, you’re showing you’re not man enough to fight us [in person]. You’re showing your cowardice. You’ve also shown us that all we have to do is kill a few of your soldiers to defeat you.

Another one says that you are forcing my hand to become a terrorist. Say you get to drones vs. drones. Someone else will say, “A ha! That’s not the way to win. The way to win is to strike at their homeland.”

And with drones on drones, this very sophisticated technology, you’re also taking war in a whole ‘nother direction. Because now the most effective way of defeating drones may not be destruction, it may be wars of persuasion. That is, how do I hack into your drones and make them do what I want? That may be better than shooting them down.

Or, if they’re dependent on communication back to home, I’ve just pointed out a new vulnerability. The high-tech strategy may be to hack them and disrupt those communications, but of course there’s a low-tech response. What’s an incredibly effective device against the SWORDS system, a machine-gun-armed robot? It’s a six-year-old with a can of spray paint [says one military journalist]. You either have to be bloody-minded enough to kill an unarmed six-year-old, which of course will have all sorts of ripple effects, such as who else will join the war and how it’s covered, or you just let that little six-year-old walk up and put spray paint on the camera, and suddenly your robot is basically defused.

Of course, in a meeting with officers from Joint Forces Command, one of them responded, “We’ll just load the system up with non-lethal weapons, and we’ll tase that little six-year-old.” The point is, robotics are not the end of the story, they’re the start of the new story.

Giz: Okay, so if everyone can get their hands on a crate of AK-47s these days, will robots be traded like that, on the black market? How can countries without technological sophistication make use of robots?

PWS: There is a rule in technology as well as war: There’s no such thing as a permanent first-mover advantage. How many of your readers are reading this on a Wang computer? How many are playing video games on an Atari or Commodore 64? Same thing in war: The British are the ones who invented the tank, but the Germans are the ones who figured out how to use the tank better.

The US is definitely ahead in military robotics today, but we should not be so arrogant as to assume it will always be the case. There are 43 other countries working on military robotics, and they range from well-off countries like Great Britain, to Russia, to China, to Pakistan, to Iran. Just three days ago, we shot down an Iranian drone over Iraq.

The thing we have to ask ourselves is, where does the state of American manufacturing, and the state of our science and mathematics education in our schools take us in this revolution? Another way to phrase this is, what does it mean to use more and more “soldiers” whose hardware is made in China, and whose software is written in India?

A lot of the technology is commercial, off the shelf. A lot of it is do-it-yourself. For about $1,000, you can build your own version of a Raven drone, one of the hand-tossed drones [which you launch by throwing it into the air, shown at left] our soldiers use in Iraq and Afghanistan. What we have is the phenomenon that software is not the only thing that has gone open source. So has warfare. It’s not just the big boys that can access these technologies, and even change and improve upon them. Hezbollah may not be a state, may not have a military, but in its war with Israel, it flew four drones.

Just as terrorism may not be small groups but just one lone-wolf individual, you have the same thing with robotics and terrorism. Robotics makes people a lot more lethal. It also eliminates the culling power of suicide bombing. You don’t have to convince a robot that it’s going to be received by 70 virgins in heaven.

And about not being able to get it like an AK-47. Actually, two things. One, there’s a bit in the book about cloned robots. One of the companies was at an arms fair and saw a robot being displayed by a certain nation in their booth. And they’re like, “That’s our robot, and we never sold it to them. What the hell?” It’s because it was a cloned robot.

And two, there’s a quote, “A robot gone missing today will end up in the marketplace tomorrow.” We’ve actually had robots that have been captured. We actually had one loaded up with explosives and turned into a mobile IED.

Giz: So, in other words, only a few years after being deployed, they’re already being turned against us.

PWS: This is war, so of course it’s going to happen. It doesn’t mean the AK-47 is disappearing from war. War in the 21st century is this dark mix of more and more machines, but fights against warlords and insurgents in the slums. Those players are going to be using everything from high-tech to low-tech.

[Wired for War website; Wired for War at Amazon]

Futurama’s Creator Isn’t Afraid of Robots, Doesn’t Own a Roomba

I just bombarded Futurama’s co-creator David X. Cohen with some very important questions, including what he would name his Roomba, why he’s not afraid of robots and what Futurama’s chances are for renewal. (Spoiler: 50/50.)

Mouth: dry. Stomach: queasy. Head: racing. Not only is David X. Cohen the co-creator of one of my favorite shows of all time, he’s a fellow Berkeley computer science alum, fellow nerd, and a tremendously funny guy. He also holds the dream job—comedy writer and creator of a successful Sci Fi TV show. After fully preparing myself by watching the latest Futurama movie—Into the Wild Green Yonder—I had hours’ worth of questions for the man, but he only had 30 minutes.

I had to get the most important question on everyone’s minds out of the way: Will Futurama be coming back to Fox for a 6th season? Although Fox has indeed been making noises about the show’s return, Cohen said DVD sales of the fourth movie may be a deciding factor in whether or not the project would be profitable. Basically, we need to go out and buy the DVD and Blu-ray if we want to bring Futurama back. Cohen also revealed that although there is a fifty-fifty chance of the show returning, he has yet to hear more concrete details about it from Fox—according to him, though, “No news is good news.”

But how is the movie? In a word, good. In two words, very good. Into the Wild Green Yonder feels as if the Futurama writers used the first three movies as practice for getting back into the groove of writing Futurama episodes, and it plays like a final coda to the series. That’s not to say that the first three movies were bad—they were just different.

If the Bender-focused first half hour of the movie were its own episode, it would solidly land in any “top ten funniest Futurama episodes of all time” list, hands down. However, because the next 58 minutes covered some very familiar, classic Futurama-esque territory, it made Into the Wild Green Yonder feel like the one movie—out of the four—that connected the most with the series. But why this movie, why now?

Bringing this movie back to the feel of the series, as Cohen revealed, was somewhat intentional. For each one of the Futurama movies, the writers decided that they would cover one major area of Sci Fi. The latest one, like the series itself, is more of a large space opera that comfortably cradles you back into the company of the Futurama characters you grew to love. Cohen also pointed out that a scene in the newest movie—the one where Leela is giving out space coordinates—is probably one of the “most hardcore things they’ve done” in terms of showing respect for actual science.

It’s these science fans, as well as the more hardcore viewers, who would have noticed the Futurama writers giving shout-outs to real-world physics in their jokes—such as when the Professor invoked the observer effect after a horse race. This ability to mix humor with scientific intelligence is one of the greatest benefits of having so many smart writers on staff. The other benefit? The ability to actually have an interesting vision of the future.

And it’s this future that Fry’s trying to save once again. This could be why the Green Yonder felt like it was slightly retreading old territory. If you’ve seen some of Fry’s Nibblonian episodes, I’m sure you’re familiar with the basic premise—we get it: Fry’s special and he’s the only one who can save the universe. But that’s not to say there weren’t some great moments to be had during these 88 minutes. This is more akin to strolling down a familiar street you haven’t seen in years, examining which stores have changed and which haven’t, and reveling in the fact that you’re lucky enough to be back once more.

As the series draws to a (temporary) close, we wonder if we’ve learned the entirety of Fry’s origin story and how he came to be in the year 3000. Not to worry: Cohen assures us that he is not finished with that tale quite yet. When asked how much of it was left—after the Nibblonian saga was finished and the “Lars” adventure in the first DVD movie—he responded that there is “one sentence” uttered in the series that was left unaddressed. But it’s up to superfans to figure out which sentence, not to mention which episode, he is referring to.

Because David X. Cohen helped create the entire world and backstory of Futurama, he’s given a lot of thought to the future. Our future. Because he didn’t want to go to extremes and create either a utopia or a dystopia, Futurama’s universe is only about 50% realistic, according to Cohen. It does, however, borrow some ideas from our own world for both comedic and dramatic effect.

So what, if anything, in our real-world future is David X. Cohen most afraid of? It isn’t robots, surprisingly enough. It’s stuff like nuclear bombs. Wars. And technology that kills people, fast. Things that, when you take into account that Cohen grew up during the Cold War and studied physics at Harvard, make a lot of sense. But robots? Nope.

You would think that because Cohen is such a fan of robots, it would make sense that he’d own a Roomba. But he doesn’t. He laughs that Matt Groening gives him shit for this fact (if anyone should have a Roomba, it would be Cohen).

Is there any Futurama left to tell? Cohen thinks so. Besides further expanding on Fry’s origin story, he’s got plans to make the Planet Express crew exhibits in an alien zoo (among other things). However, beyond little ideas here and there, what’s currently occupying Cohen’s mind is how to escape from the crazy corner they’ve painted themselves into at the end of Green Yonder. Given Fox’s recent interest in bringing back the show for another season on television (50/50 chance!), it’s one mess Cohen will likely have to bend his way out of.

As for the Roomba, if Cohen ever were to get one, he’d name it Browser.

The Engadget Interview: Tom Glynn, the voice of the Kindle 2

It looks like Amazon and the Authors Guild have reached a compromise regarding text-to-speech — for now, at least. One person who’s been ironically silent during all of this is the voice of the e-reader itself, Tom Glynn. We’ve just had a little chat with the musician, broadcaster, hardcore Kindle fan, and voice of Nuance’s text-to-speech technology, which we’d like to share with you — and while you’re at it, be sure to check out some of his tunes on MySpace or at tomglynn.com.

The Engadget Interview: Tom Glynn, the voice of the Kindle 2 originally appeared on Engadget on Tue, 03 Mar 2009 17:00:00 EST.


Inside the Mind of Microsoft’s Chief Futurist

If I encountered Craig Mundie on the street, met his kind but humorless gaze and heard that slight southern drawl, I’d guess he was a golf pro—certainly not Microsoft’s Chief of the future.

As chief research and strategy officer at Microsoft, Mundie is a living portal of future technology, a focal point between thousands of scattered research projects and the boxes of super-neat products we’ll be playing with 5 years, 20 years, maybe 100 years from now. And he’s not allowed to even think about anything shipping within the immediate 3 years. I’m pretty sure the guy has his own personal teleporter and hoverboard, but when you sit and talk to him for an hour about his ability to see tomorrow, it’s all very matter of fact. So what did we talk about? Quantum computing did come up, as did neural control, retinal implants, Windows-in-the-cloud, multitouch patents and the suspension of disbelief in interface design.

Seeing the Future
Your job is to look not at next year or even the next five years. Is there a specific number of years you’re supposed to be focused on?

I tell people it ranges from about 3 to 20. There’s no specific year that’s the right amount, in part because the things we do in Research start at the physics level and work their way up. The closer you are to fundamental change in the computing ecosystem, the longer that lead time is.

When you say 3 years, you’re talking about new UIs and when you say 20 you’re talking about what, holographic computing?

Yeah, or quantum computing or new models of computation, completely different ways of writing programs, things where we don’t know the answer today, and it would take some considerable time to merge it into the ecosystem.

So how do you organize your thoughts?

I don’t try to sort by time. Time is a by-product of the specific task that we seek to solve. Since it became clear that we were going to ultimately have to change the microprocessor architecture, even before we knew from the hardware guys exactly what it would evolve to be, we knew they’d be parallel in nature, that there’d be more serial interconnections, that you’d have a different memory hierarchy. Roughly from the time we started to the time that those things become commonplace in the marketplace will be 10 to 12 years.

Most people don’t really realize how long it takes from when you can see the glimmer of things that are big changes in the industry to when they actually show up on store shelves.

Is it hard for you to look at things that far out?

[Chuckles] No, not really. One of the things I think is sort of a gift or a talent that I have, and I think Bill Gates had to some significant degree too, is to assimilate a lot of information from many sources, and your brain tends to work in a way where you integrate it and have an opinion about it. I see all these things and have enough experience that I say, OK, I think that this must be going to happen. Your ability to say exactly when or exactly how isn’t all that good, but at least you get a directional statement.

When you look towards the future, there’s inevitability of scientific advancement, and then there’s your direction, your steering. How do you reconcile those two currents?

There are thousands of people around the world who do research in one form or another. There’s a steady flow of ideas that people are advancing. The problem is, each one doesn’t typically represent something that will redefine the industry.

So the first problem is to integrate across these things and say, are there some set of these when taken together, the whole is greater than the sum of the parts? The second is to say, by our investment, either in research or development, how can we steer the industry or the consumer towards the use of these things in a novel way? That’s where you create differentiated products.

Interface Design and the Suspension of Disbelief
In natural interface and natural interaction, how much is computing power, how much is sociological study and how much is simply Pixar-style animation?

It’s a little bit of all of them. When you look at Pixar animation, something you couldn’t do in realtime in the past, or if you just look at the video games we have today, the character realism, the scene realism, can be very very good. What that teaches us is that if you have enough compute power, you can make pictures that are almost indistinguishable from real life.

On the other hand, when you’re trying to create a computer program that maintains the essence of human-to-human interaction, then many of the historical fields of psychology, people who study human interaction and reasoning, these have to come to the fore. How do you make a model of a person that retains enough essential attributes that people suspend disbelief?

When you go to the movies, what’s the goal of the director and the actors? They’re trying to get you to suspend disbelief. You know that those aren’t real people. You know Starship Enterprise isn’t out there flying around—

Don’t tell our readers that!

[Grins] Not yet at least. But you suspend disbelief. Today we don’t have that when people interact with the computer. We aren’t yet trying to get people to think they’re someplace else. People explore around the edges of these things with things like Second Life. But there you’re really putting a representative of yourself into another world that you know is a make-believe environment. I think that the question is, can we use these tools of cinematography, of human psychology, of high-quality rendering to create an experience that does feel completely natural, to the point that you suspend disbelief—that you’re dealing with the machine just as if you were dealing with another person.

So the third component is just raw computing, right?

As computers get more powerful, two things happen. Each component of the interaction model can be refined for better and better realism. Speech becomes more articulate, character images become more lifelike, movements become more natural, recognition of language becomes more complete. Each of those drives a requirement for more computing power.

But it’s the union of these that creates the natural suspension of disbelief, something you don’t get if you’re only dealing with one of these modalities of interaction. You need more and more computing, not only to make each element better, but to integrate across them in better ways.

When it comes to solving problems, when do you not just say, “Let’s throw more computing power at it”?

That actually isn’t that hard to decide. On any given day, a given amount of computing costs a given amount of money. You can’t require a million dollars’ worth of computer if you want to put it on everybody’s desk. What we’re really doing is looking at computer evolutions and the improvements in algorithms, and recognizing that those two things eventually bring new problem classes within the bounds of an acceptable price.

So even within hypothetical research, price is still a factor?

It’s absolutely a consideration. We can spend a lot more on the computing to do the research, because we know that while we’re finishing research and converting it into a product, there’s a continuing reduction in cost. But trying to jockey between those two things and come out at the right place and the right time, that’s part of the art form.

Hardware Revolutions, Software Evolutions
Is there some sort of timeline where we’re going to shift away from silicon chips?

That’s really a question you should ask Intel or AMD or someone else. We aren’t trying to do the basic semiconductor research. The closest we get is some of the work we’re doing with universities exploring quantum computers, and that’s a very long term thing. And even there, a lot of work is with gallium arsenide crystals, not exactly silicon, but a silicon-like material.

Is that the same for flexible screens or non-moving carbon-fiber speakers that work like lightning—are these things you track, but don’t research?

They’re all things that we track because, in one form or another, they represent the computer, the storage system, the communication system or the human-interaction capabilities. One of the things that Microsoft does at its core is provide an abstraction in the programming models, the tools that allow the introduction of new technologies.

When you talk about this “abstraction,” do you mean something like the touch interface in Windows 7, which works with new and different kinds of touchscreens?

Yeah, there are a lot of different ways to make touch happen. The Surface products detect it using cameras. You can have big touch panels that have capacitance overlays or resistive overlays. The TouchSmart that HP makes actually is optical.

The person who writes the touch application just wants to know, “Hey, did he touch it?” He doesn’t want to have to write the program six times today and eight times tomorrow for each different way in which someone can detect the touch. What we do is we work with the companies to try to figure out what is the abstraction of this basic notion. What do you have to detect? And what is the right way to represent that to the programmer so they don’t have to track every activity, or even worse, know whether it was an optical detector, a capacitive detector or an infrared detector? They just want to know that the guy touched the screen.
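
To picture what that abstraction looks like from the programmer’s side, here’s a minimal sketch in Python (our own illustration, not Microsoft’s actual API): two hypothetical detector stubs, one optical and one capacitive, each normalize their raw readings into a single TouchEvent type, and the application registers one handler without ever knowing which sensing technology fired.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Protocol


@dataclass
class TouchEvent:
    """The only thing the app sees: a normalized touch, whatever the sensor."""
    x: float       # 0.0-1.0 across the screen width
    y: float       # 0.0-1.0 down the screen height
    pressed: bool


class TouchDetector(Protocol):
    """Any sensing technology that can report its reading as a TouchEvent."""
    def poll(self) -> Optional[TouchEvent]: ...


class OpticalDetector:
    """Camera-based sensing (Surface-style): find a bright blob, map it to a point."""
    def poll(self) -> Optional[TouchEvent]:
        blob = self._find_blob()  # stub; real code would analyze a camera frame
        return TouchEvent(blob[0], blob[1], True) if blob else None

    def _find_blob(self):
        return None


class CapacitiveDetector:
    """Capacitance-overlay panel: scan the grid, map the strongest cell to a point."""
    def poll(self) -> Optional[TouchEvent]:
        return None  # stub; real code would read the touch controller


def pump_events(detectors: List[TouchDetector],
                on_touch: Callable[[TouchEvent], None]) -> None:
    """The application registers one handler and never learns which sensor fired."""
    for detector in detectors:
        event = detector.poll()
        if event is not None:
            on_touch(event)


if __name__ == "__main__":
    pump_events([OpticalDetector(), CapacitiveDetector()],
                on_touch=lambda e: print(f"touched at ({e.x:.2f}, {e.y:.2f})"))
```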

Patents and Inventor’s Rights
You guys recently crossed the 10,000-patent line—is that all from your Research division?

No, that’s from the whole company. Every year we make a budget for investment in patent development in all the different business groups including Research. They all go and look for the best ideas they’ve got, and file patents within their areas of specialization. It’s done everywhere in the company.

So, take multitouch, something whose patents have been discussed lately. When it comes to inevitability vs. unique product development, how much is something like multitouch simply inevitable? How much can a single company own something that seems so generally accepted in interface design?

The goal of the patent system is to protect novel inventions. The whole process is supposed to weed out things that are already known, things that have already been done. That process isn’t perfect—sometimes people get patents on things that they shouldn’t, and sometimes they’re denied patents on things they probably should get—but on balance you get the desired result.

If you can’t identify in the specific claims of a particular patent what it is that’s novel, then you don’t get a patent. Just writing a description of something—even if you’re the first person to write it down—doesn’t qualify as invention if it’s already obvious to other people. You have to trust that somehow obvious things aren’t going to be withheld from everybody.

That makes sense. We like to look at patents to get an idea of what’s coming next—

That’s what they were intended to do; that was the deal with the inventor: If you’ll share your inventions with the public in the spirit of sharing knowledge, then we’ll give you some protection in the use of that invention for a period of time. You’re rewarded for doing it, but you don’t sequester the knowledge. It’s that tradeoff that actually makes the patent system work.

Windows in the Cloud, Lasers in the Retina
Let’s get some quick forecasts. How soon until we see Windows in the cloud? I turn on my computer, and even my operating system exists somewhere else.

That’s technologically possible, but I don’t think it’s going to be commonplace. We tend to believe the world is trending towards cloud plus client, not timeshared mainframe and dumb display. The amount of intrinsic computing capability in all these client devices—whether they’re phones, cars, game consoles, televisions or computers—is so large, and growing larger still exponentially, that the bulk of the world’s computing power is always going to be in the client devices. The idea that the programmers of the world would let that lie fallow, wouldn’t try to get any value out of it, isn’t going to happen.

What you really want to do is figure out which component is best solved in the shared facility and which component is best computed locally. We do think that people will want to write arbitrary applications in the cloud. We just don’t think that’s going to be the predominating usage of it. It’s not like the whole concept of computing is going to be sucked back up the wire and put in some giant computing utility.
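
To put “cloud plus client” in concrete terms, here’s a toy sketch in Python of the routing decision Mundie is describing (everything here is made up for illustration: the endpoint, the threshold and the task itself): small jobs stay on the client device, while heavy or shared jobs get shipped to the remote service.

```python
import json
import urllib.request

CLOUD_ENDPOINT = "https://example.com/api/summarize"  # hypothetical shared facility
LOCAL_WORD_LIMIT = 500                                # arbitrary cutoff for this sketch


def summarize_locally(text: str) -> str:
    """Cheap work the client can do on its own: just grab the first sentence."""
    return text.split(".")[0].strip() + "."


def summarize_in_cloud(text: str) -> str:
    """Heavy or shared work gets shipped upstream to the cloud service."""
    payload = json.dumps({"text": text}).encode("utf-8")
    request = urllib.request.Request(CLOUD_ENDPOINT, data=payload,
                                     headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["summary"]


def summarize(text: str) -> str:
    """The routing decision: what is best computed locally vs. in the shared facility."""
    if len(text.split()) <= LOCAL_WORD_LIMIT:
        return summarize_locally(text)
    return summarize_in_cloud(text)


if __name__ == "__main__":
    # This sample stays under the limit, so it never touches the network.
    print(summarize("The client does the small stuff. The cloud does the rest."))
```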

What happens when the processors are inside our heads and the displays are projected on the inside of our eyeballs?

It’ll be interesting to see how that evolution will take place. It’s clear that embedding computing inside people is starting to happen fairly regularly. There’s special processors, not general processors. But there are now cochlear implants, and even people exploring ways to give people who’ve lost sight some kind of vision or a way to detect light.

But I don’t think you are going to end up with some nanoprojector trying to scribble on your retina. To the extent that you could posit that you’re going to get to that level, you might even bypass that and say, “Fine, let me just go into the visual cortex directly.” It’s hard to know how the man-machine interface will evolve, but I do know that the physiology of it is possible and the electronics of it are becoming possible. Who knows how long it will take? But I certainly think that day will come.

And neural control of our environment? There’s already a Star Wars toy that uses brain waves to control a ball—

Yeah, it’s been quite a few years since I saw some of the first demos inside Microsoft Research where people would have a couple of electrical sensors on their skull, in order to detect enough brain wave functionality to do simple things like turn a light switch on and off reliably. And again, these are not invasive techniques.

You’ll see the evolution of this come from the evolution of diagnostic equipment in medicine. As people learn more about non-invasive monitoring for medical purposes, what gets created as a byproduct are non-invasive sensing people can use for other things. Clearly the people who will benefit first are people with physical disabilities—you want to give them a better interface than just eye-tracking on screens and keyboards. But each of these things is a godsend, and I certainly think that evolution will continue.

I wonder what your dream diary must look like—must have some crazy concepts.

I don’t know, I just wake up some mornings and say, yeah, there’s a new idea.

Really? Just jot it down and run with it?

Yeah, that’s oftentimes the way it is. Just, wasn’t there yesterday, it’s there today. You know, you just start thinking about it.