let’s talk about jumping

You press a button; a character on the television screen jumps. You press the same button again, while the on-screen character is in the air; the character jumps again.

In video-gamer vernacular, this is called a “double jump.” To a gaming outsider, the concept might not make any sense. It might even be flagrantly ridiculous.

To a gaming outsider, actually, that a game character encounters so many situations wherein he needs to jump in order to proceed might in itself be foreign or bizarre. Before we can properly talk about double jumping, we probably have to talk about jumping.

The fact seems to be that, in games of ancient times, jumping was a chief mechanic because it offered a moment-long variation in what would otherwise be a game about getting from point A to point B.

The ancient game template calls for a hero beginning at an origin, seeking a destination, impeded by obstacles. “Obstacles” can be either dangerous objects or bottomless pits. “Dangerous objects” can either be barrels rolling down ramps in Donkey Kong, bullet-like projectiles of mysterious or known origin, or enemy characters. The player deals with dangerous objects by avoiding them. In a two-dimensional game presented at a side-on angle, this means jumping. In a two-dimensional game presented at a top-down angle, this means walking around them. “Enemy characters” can either be grunts, mini-bosses, or bosses. Grunts are easy to deal with. Mini-bosses are more harrowing experiences, and bosses are occasionally cinematic in the struggle they put the player through.

Super Mario Bros. allows the player to surmount obstacles mostly by jumping. In fact, you don’t have to kill a single enemy in Super Mario Bros. You can jump over or otherwise avoid every one of them. Super Mario Bros. is also elegant enough to make the jump function an attack in itself. Most enemies are killed or otherwise dealt with by jumping on top of them. Jump on top of a mushroom-like Goomba, and it flattens and dies. Jump on top of a turtle-like Koopa, and it retracts into its shell. Now you can touch the shell to send it flying in one direction. If it touches other enemies, they die. However, it can ricochet back and hit you, either hurting you or killing you. If you are Little Mario, it kills you. If you are Big Mario, it turns you into Little Mario. If Little Mario eats a mushroom, he turns into Big Mario. If Big Mario touches a Fire Flower, he can throw fireballs, which can kill all enemies on contact, except the black beetle things, which are an exception, because a game like this needs one little exception of every flavor in order to stay spicy. Oh, and there are also the Hammer Brothers, who jump near-randomly and throw hammer projectiles at an eclectic angle. You can kill them by jumping on them, by jumping up at a block they are standing on, thus hitting them from beneath, or by throwing fireballs.
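If you wanted to diagram all of those rules, by the way, they collapse into a tiny state machine. Here is a minimal sketch in Python (hypothetical names and tables; the actual NES code obviously looks nothing like this):

```python
# A minimal sketch of Super Mario Bros.' power-up rules as a state
# machine. Names and structure are hypothetical.

POWERUP_TRANSITIONS = {
    ("little", "mushroom"): "big",
    ("little", "fire_flower"): "big",   # for Little Mario, a flower acts as a mushroom
    ("big", "fire_flower"): "fire",     # "Big and White": fireball-throwing Mario
}

DAMAGE_TRANSITIONS = {
    "little": "dead",    # Little Mario dies in one hit
    "big": "little",     # Big Mario shrinks instead of dying
    "fire": "little",    # in the original game, Fire Mario drops straight back to Little
}

def take_item(state: str, item: str) -> str:
    """Mario's new state after touching a power-up."""
    return POWERUP_TRANSITIONS.get((state, item), state)

def take_hit(state: str) -> str:
    """Mario's new state after touching an enemy."""
    return DAMAGE_TRANSITIONS[state]

assert take_item("little", "mushroom") == "big"
assert take_hit(take_hit(take_item("big", "fire_flower"))) == "dead"
```

Three states, two tables. The entire vitality system fits on an index card.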

Super Mario Bros. is not the bad guy. In fact, Super Mario Bros. is the ultimate good guy. I might have mentioned it before, though I am adamant in my belief that Super Mario Bros. is a perfect video game design. It is not, however, an inimitable miracle. The problem is that game developers, including Nintendo themselves, have been trying to imitate it in the wrong ways.

What’s perfect about Super Mario Bros. is that it teaches you how to play in its opening ten seconds. We have a character on the left side of the screen, facing right. We instinctively know that we have to move to the right. We move to the right. A block with a question mark on it floats in the air above our character’s head. Two steps to the right of this block is a series of many blocks, some with flashing question marks, and some of solid, scary-looking brick. Floating above this series of blocks is another question mark block.

Just as we see this buffet of blocks, a mushroom-shaped monster-like thing becomes visible, moving from the right to the left.

This thing is moving, then, in the exact opposite direction of the hero. This subliminally communicates to us that this Other Moving Object is our enemy.

We simultaneously wonder: how do I reach that block? And how do I kill that enemy?

In the old times, we only had two buttons on the controller. We’d have plenty of time to press one of them before the enemy killed and/or ate us. This is how we’d learn that we can jump.

The hero’s head bonks into the question mark block. A deliciously crispy ping sound echoes as a golden coin pops up and disappears. Where did the coin go? Who knows! At the top of the screen, we see a coin symbol and a number: “01”. We realize we should get more of those.

The enemy is drawing nearer. The only thing we know how to do is jump. We press the jump button. We jump just high enough to jump over the enemy, and survive.

Or maybe we land on the enemy’s head, and he dies. Points are added to our score. We think, “Cool”.

By now, we know that question mark blocks give us good things. What about the ugly brown bricks? We jump at one. It throbs. It doesn’t break.

We hit the next question mark block, and a mushroom pops out and begins spontaneously sliding across the ground. Of course, we pick it up, because the last question mark block only gave us good things. Mario grows larger. If we jump at a solid brick again, we find that we can break it.

The player will learn, when he makes contact with an enemy, that being Big Mario means you can survive a hit: you shrink back to Little Mario instead of dying. If the player doesn’t pick up the mushroom and instead touches the first enemy, he will also learn that being Little Mario means you die after touching one enemy.

So what we have here is a game with a strictly limited move set (walk, run, jump) in which two of the three actions (walk, run) serve the short and long-term goals (get from point A to point B), albeit in different capacities (run to move quickly, walk to line yourself up minutely for tricky jumps) and the last of the three actions (jump) is linked to multiple (nearly all) immediate-term goals (avoid enemies, kill enemies, avoid obstacles, obtain items).

In this game, the character’s vital status is portrayed entirely through the game graphics: if Little, Mario dies in one hit; if Big, Mario survives a hit, shrinking back to Little; if Big and White, Mario survives a hit and can also throw fireballs.

What if the player manages to complete the entire game on his first play-through, without touching a single enemy or even falling into a bottomless pit? Even as a person who has seen or experienced many improbable events, I can’t imagine this ever actually happening. Has a human being of such impeccable reflex and intuition ever existed? This is a crucial question, and the answer is “I don’t think so”. If you or someone you know is this particular human being, plz upload something to YouTube plz. I’d love to see how gracefully you can tie your shoes (winking smiley face) (note to many curious readers who emailed me last month: no, “(winking smiley face)” is not a bizarre copy-editing tic; it’s just something I’ve been doing on the internet for a couple of years).



Let’s go one further, and say that this genius player doesn’t even ever pick up the Fire Flower. Let’s say he never uses the warp zone, and he isn’t particularly bent on picking up every last coin. He takes his time — though not too much time, of course, because his time is somewhat strictly limited. Let’s also say that he only kills enemies on accident, or when it feels as though it couldn’t be avoided (even though we’ve established that it can be avoided).

At any rate, if the player is good enough to play the game straight through, without dying or even being injured, on his first time ever playing through the game, would that necessarily make it impossible for him to feel tension, pressure, or even delight at his own success? In a bad game, he could probably get away with not enjoying the experience. In Super Mario Bros., I’m pretty confident that even the virtuoso / jackpotman / idiot savant would enjoy the experience thoroughly. The reason why is so difficult to articulate that even Nintendo has failed to replicate Super Mario Bros.’ universal success in the (nearly) twenty-five (!) long years since its release.

I am pretty sure that I haven’t gone a single column on this website without mentioning Super Mario Bros. and why it’s a great game, though for the sake of those who came in late, let’s go over it again:

1. It’s simple: you can proceed entirely to the end of the game by only walking, running, and jumping

2. It’s elegant: it provides the player with a robust, flowing experience that requires him/her to use every imaginable permutation of the small set of player character actions

3. It’s “artistically” confident: the game is about a man in overalls who grows to twice his normal size when he eats a mushroom, and it doesn’t dare to explain the reasons why

In 2005 — again, I realize I’m repeating myself — Nintendo proclaimed that many gamers had lost touch with the games of today, and that they sought to win those players back. Their proposed solution was to make games that appealed to people who didn’t play games. Someone at Nintendo must have said, during an important meeting, that people who didn’t play games had played Super Mario Bros. (proof: Super Mario Bros. was a lot of people’s first game). Nintendo’s knee-jerk reaction to this reality was to release Super Mario Bros., perfectly as-is, as a budget-priced Game Boy Advance cartridge. Okay, that worked a little bit. Eventually, Nintendo began to make games that weren’t games, like Brain Training. These sold hugely. In board rooms around Japan, game developers conspired to “think like Nintendo.” Instead of interpreting “think like Nintendo” as “think of good ideas that might appeal to people the way Brain Age appeals to people”, they instead seemingly interpreted it as “rip off Brain Age”.

I guess this mostly brings us to the present. People from outside the gamesphere climb over one another to inform us that games are not art. Maybe they will be, someday, some say, though by and large, at this very moment, they are not.

Gamers get all up in arms whenever someone says games are not art. Roger Ebert got truckloads of hate mail for daring to insinuate that games are not art. I read through a lot of blog posts or comments around the time Roger Ebert said that. Most of the enraged kids were quick to point out how the games industry is making so-and-so many billions of dollars more per year than the film industry, and that was a little weird. Since when does monetary worth translate to art? Whatever happened to artists being starving?

Some say, games don’t have an equivalent of “Citizen Kane.” I say, games don’t have an equivalent of “Ben Hur,” either. The problem is consistency, “artistic conscience”. Man, I feel like I talk about this all the time. People ask me about this on the street, sometimes. It’s weird. Sometimes they say that I only like Super Mario Bros. because I was a kid when I played it. No, I’m immune to the nostalgia thing. Also, I haven’t matured emotionally since about age five (I was a mature kid), and I’m not afraid to admit that. Super Mario Bros. had the formula perfect.

What happened, though? Super Mario Bros. took an industry that was basically a fat and ugly infant, and turned it into a toddler with a popsicle addiction. Twenty-five years later, games are making more money than movies. They are not, however, making All The Money In The World. So maybe people could be making better games.

It’s not vilification to portray game developers as money-grubbing businessmen. This is, after all, a business. However, maybe it is vilification (in the most awesome way) to insinuate that, in the videogame industry, the games are often being made by the types of general-entertainment “fanboys” that, in the movie industry, would consider it the highest honor to get paid a salary to fetch Steven Spielberg his coffee.

Maybe that sounds meaner than I intended it to. Oh well! Too late! It’s all typed out now, and god knows I still haven’t found my “delete” key.

I like to mention this interview with Tetsuya Nomura that I read in Weekly Famitsu a couple years ago, on the subject of Kingdom Hearts II. In this one issue of Famitsu, Mr. Nomura, a man who found fans after designing the characters for Final Fantasy VII, is interviewed twice — once on the subject of the upcoming Blu-ray release of a special edition of a film he had “directed” about what happens to the Final Fantasy VII characters after Final Fantasy VII ended, and once on the subject of the Final Mix re-release of Kingdom Hearts II. A few huge things pop out and touch index fingertips to the readers’ maybe-moist eyeballs:

In the interview re: “Advent Children” (the Final Fantasy VII movie special edition / re-release), he talks about how the computer graphics will be more detailed, to truly wield the power of a larger storage medium (Blu-ray, in this case). For example, the main character, Cloud, has dirt on his face during one particular scene, to “better illustrate the intensity of his struggle.” Mr. Nomura then quips that these are “details that only the true fan will appreciate.”

Furthermore, on the subject of “Advent Children”, Mr. Nomura explains, when asked if both language tracks (English / Japanese) will be available on the disc, that only English will be available, because of both “storage issues” and the fact that, and I quote, “we estimate that most of the buyers of this special edition will be people who already own the movie on DVD”, so they want to give them a language track they haven’t heard before.

On the subject of Kingdom Hearts II: Final Mix, Mr. Nomura was asked why they made a “Final Mix” after saying, in a Famitsu interview on the subject of the original Kingdom Hearts II, that the goal with Kingdom Hearts II was to make a game that would not need a “Final Mix”. Mr. Nomura’s answer is that, sadly, without actually releasing something and seeing how the world perceives it, it’s nearly impossible to tell what could be improved or added to it; “And, of course”, he adds, crucially, “I thought of my own ideas for new content, as well”.

The biggest fire alarm rings when Mr. Nomura says, of the joy of developing Kingdom Hearts II, that nearly every interviewee during the team-expansion phase clapped his or her hands with joy, reported that they were huge fans of the first Kingdom Hearts game, and/or (this is crucial), asked Tetsuya Nomura for his autograph.

What I’m saying is, can you trust such people to give an objective opinion during the game development process?

In short, games are made by fans, for fans. The would-be critics are, more often than not, fans. Despite being fans of games in general, game developers often fear the outside world of game fans to a point of near-absolute subservience.

I COULD PROBABLY SAY A LOT MORE ABOUT THAT

I’m not going to say any more about that. We’re going to leave the tip of that iceberg uncovered and do a little bit of stargazing.

LET’S TALK ABOUT JUMPING

(Jerry Seinfeld voice) What’s the deal with jumping? I mean, who does that? Who jumps everywhere they go? Have you ever noticed just how much you jump in most video games?

I’ve been over this before: characters like Nathan Drake and Lara Croft appeal to us because they are realistic human beings with flawed and amusing personalities or wicked pyramidal breasts. They draw us into their real-like-rule-having real-like worlds, and then they do maybe sixty-six explosive pull-ups in the space of three minutes, pulling themselves up so hard that they spring high into the air, grabbing onto the next ledge. We either see this the first time and lol, because we know this can’t happen in real life, or we keep playing the game without thinking about it, because we’ve never tried to do a pull-up in real life, and if we ever try and fail to do one, we will no doubt attribute our failure to the fact that we don’t have breasts like Lara Croft. For the former type of gamer, the games’ ridiculousness escalates until we obtain a shrewd enough detachment from the proceedings, at which point we are maybe-sanctimoniously able to “sit back” and “relax” and “enjoy” it because, after all, it’s “just a game.” In the case of the latter type of gamer, they will grow up to think that this is what games are, this is what happens in games, and if this sort of thing doesn’t happen in every single game on earth, the game is stupid and/or for losers.

So, why do action / adventure game characters jump so much?

In short, it’s because games since Super Mario Bros. tend to be about moving from point A to point B. Name me one action game where your long-, short-, or immediate-term goals at any given time don’t include a Point B. You can’t! It’s just not possible! (Tower defense games don’t count!)

Games are about moving. And not just any moving. No mainstream hit game has managed to perform the feat of being about merely walking or running at one speed along featureless terrain. Games need to give us terrain, obstacles, and opponents, in order to make our journeys interesting. If the journeys weren’t interesting, we wouldn’t be taking them. Games are entertainment. The journeys they present us are wholly optional activities, in the context of human living.

Games are wallpaper, or ornamentation, for the rooms of our lives. Films, too, are wallpaper for the rooms of our lives. Films contribute to the human maturation process so profoundly that modern philosophers (some of whom aren’t dead yet, so it wouldn’t be kind to name-drop them) have even taken to acknowledging their influence as vital. The best way to approach this point is to quote a minor character in Edward Yang’s film “Yi Yi,” who tells a girl, on a date, that his unseen, unnamed uncle says “People live four times longer these days because of movies.” How scary, and true.

Do we live longer because of videogames?

Games are wallpaper for bare rooms of our life-chapters. They’re a hobby (though, like all hobbies, they might also be a job). Games are entertainment. Movies play out in front of us. Games do not. Movies are passive entertainment. Games are active entertainment.

You can construct a film entirely out of scenes where characters sit at a table, talking to one another.

You can sit at a table in real life with your friends, talking about whatever. However, it will never be exactly like a scene in a film. Maybe, in the film, every character in a conversation knows something that one other character doesn’t know. Maybe someone’s life is at stake. Films have taught us how much it sucks when someone’s life is at stake.

You can’t make a game about people sitting at a table, talking. Buried deep in the psychology of games is this absolute, burning need to see something move. Remember the first time you ever used a television remote to turn a television on, or change channels? You might have been three years old, or you might have been nine. Games speak to the part of your psychology that was ecstatic at that precise moment. The need to move something that isn’t yourself, to actively participate in something, is so deeply buried in the psychology of games that no game controller since the Atari 2600 has lacked an input device whose purpose is innately understood as solely to move your main character.

Maybe it’s hard to think back this far: When was the first time you realized that the joystick moved Pac-Man, or that the control pad moved Mario? With Pac-Man, it was easy; you might have been in an arcade, and the joystick was the only input device. Well, there was also a start button. Arcade games were easy to understand, because the function of each button was written right there next to the button.

Or maybe you first played Pac-Man on an Atari 2600. The shape of the standard Atari 2600 controller communicated so much to the player. The joystick was this big tall skinny smokestack of a thing, and the only button was this little dime-sized red thing. It was red, a stark contrast against the black plastic of the controller, so as to stand out just enough, and tell you, “Hey, I’m important,” though it did not dare approach the monumental majesty of the joystick. And what a word, “joystick”! It’s a stick, which brings you joy. What is the joy? The joy is that you are remotely making something inside your television screen move. After all those years of watching news anchors discuss earthquake or fire death tolls, or watching game show contestants flub simple questions and miss out on Brand New Cars, now, you finally had the opportunity to direct the action inside the television screen.

It’s obvious that the joystick was the most important part of the controller, just as it was obvious that the button was absolutely essential. In many games, the button served only to start the game. In some games, it allowed you to shoot. So there we had the birth of most modern genres of game, in Space Invaders: You move, and you shoot. You can even take cover under little shields. Wow, Gears of War is such a Space Invaders rip-off (lol).

The Nintendo Famicom / Entertainment System arrived at a point when game designers were so good at making the same old games that the people started revolting at the sameness of them all. They call this the “crash” of the games market. It must have been right around Super Mario Bros. that Nintendo realized it was probably best to not make too many new franchises, to keep the cards close to their chest and only release new games when they were significantly big and ground-breaking enough. Too many games tired people out.

That said, around the time of Super Mario Bros., games were facing an evolve-or-die situation. The public reputation of games in general had fallen through the floor. People needed a really good reason to care again. Super Mario Bros. was that really good reason.

Twenty-five years into the future, we have lots of games, and even more people playing them. I have a friend whose daughter is old enough to play and enjoy Pokemon. We’re passing games on as a tradition, a habit, or a traditional habit. Well, hey, traditions are just habits we can share.

The games market is always looking for ways to get More Money. The two methods for getting More Money are:

1. Make people who like games buy more games
2. Make people who don’t like games like games

Accomplishing #1 is easy: Make games that people who like games would probably like.

#2, however, is something no one seems to have worked out a perfect formula for.

Meanwhile, twenty-five years in the past, we had Super Mario Bros. After nearly a decade of games where either movement or shooting was the chief joy, we finally had a game where the movement felt amazing (Mario accelerates, he slides to a stop, he squeaks when he turns around, he jumps higher the longer you hold the button), and where the shooting (of fireballs) felt genuinely unique (they bounce at such neat, quirky angles). What we had, in Super Mario Bros., was a game of “artistic conscience.”
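That “jumps higher the longer you hold the button” business is worth dwelling on, because the trick behind it is so small. A minimal sketch, assuming invented constants (real games tune these numbers by feel, forever):

```python
# A minimal sketch of a variable-height jump: while the button is
# held and the character is still rising, gravity is weaker, which
# stretches the arc. All constants invented for illustration.

GRAVITY = 0.5          # per-frame downward acceleration
HOLD_GRAVITY = 0.2     # weaker gravity while the button stays held
JUMP_VELOCITY = -8.0   # negative y is "up," screen-style

class Jumper:
    def __init__(self):
        self.y, self.vy = 0.0, 0.0
        self.on_ground = True

    def press_jump(self):
        if self.on_ground:
            self.vy, self.on_ground = JUMP_VELOCITY, False

    def update(self, button_held: bool):
        rising = self.vy < 0
        # Releasing the button early restores full gravity, cutting
        # the jump short; holding it stretches the jump taller.
        self.vy += HOLD_GRAVITY if (button_held and rising) else GRAVITY
        self.y += self.vy
        if self.y >= 0.0:   # crude floor at y = 0
            self.y, self.vy, self.on_ground = 0.0, 0.0, True
```

A couple of constants and an if statement, and the jump becomes a conversation between your thumb and the physics.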

A year later, we had Castlevania. Oh man. I’ve been waiting for this part!

I love the Castlevania games; I love them enough to pick them apart viciously. Castlevania games are invariably about a hero on a quest to kill Dracula or someone who is good enough friends with Dracula to be dangerous. Most of the time, the journey leads through Dracula’s castle, colloquially referred to by Transylvanian residents as “Castlevania,” because it’s a landmark and they have to be proud of something.

In the original Castlevania, the hero, Simon Belmont, did a lot of stair-climbing. Staircases were represented graphically as diagonal lines with little individual stairs etched into them. To ascend a staircase, you approach it and press up on the control pad. The hero begins walking up the staircase. He ascends the stairs slowly, “realistically”. He can’t jump while ascending a staircase. He can, however, swing his whip at monsters who might be flying or jumping in his direction. Some of the most ferocious memories of the original Castlevania aren’t of large set-pieces, or even any particular small set-pieces. They are of single repeated moments of dread, of when you’re climbing up some stairs and you see a bat flying at you from the right side of the screen. Bats don’t fly in a straight line. They kind of wobble up and down a bit. In your brain, you do a quick calculation: If the bat wobbles a little bit down or a little bit up before his flight path intersects the staircase, at your current speed of ascent, you would not be able to move, from your current location, to a position higher or lower than that bat’s trajectory. So you are going to have to either

#1 take the hit, or
#2 kill the bat

Now comes brain calculation #2: The business end of your whip is only so-and-so pixels high. The bat is about six times higher than your whip’s height. So where do you need to stand, on this staircase, to be able to destroy this bat with the highest probability?

Thankfully, in the original Castlevania, a shrewd player can always kill the bat. No evil variable will ever tumble out of nowhere and render the act impossible. The game is impeccably balanced, though it may seem cruel and unforgiving. The secret is that the bats, though wobbling in flight pseudo-randomly, only spawn from pre-determined parts of the screen. All you need to do is put two and two together, position yourself accordingly, and press the kill button.
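You can even model the whole staircase standoff, if you feel like it. A toy sketch, every number invented (the shape of the problem is the point, not the real game’s values):

```python
import math

# A toy model of the staircase-bat calculation. The bat rides a sine
# wobble on a steady right-to-left drift; the whip is a small fixed
# box at roughly chest height. All values invented.

def bat_position(t, spawn_x, spawn_y, speed=1.5, wobble=12.0):
    """Where the bat is, t frames after spawning."""
    return (spawn_x - speed * t, spawn_y + wobble * math.sin(t * 0.2))

def whip_hitbox(simon_x, simon_y, reach=48, height=8):
    """The whip's effective box: so far from Simon, so many pixels high."""
    chest_y = simon_y - 16
    return (simon_x, simon_x + reach, chest_y - height / 2, chest_y + height / 2)

def can_kill(simon_x, simon_y, spawn_x, spawn_y, frames=300):
    """Does the bat's path ever pass through the whip's box?"""
    x0, x1, y0, y1 = whip_hitbox(simon_x, simon_y)
    return any(
        x0 <= bx <= x1 and y0 <= by <= y1
        for bx, by in (bat_position(t, spawn_x, spawn_y) for t in range(frames))
    )
```

Since the spawn points are pre-determined, the shrewd player is effectively solving can_kill() for the one standing position that makes it come out True.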

The feeling of hesitating just too long on a staircase, performing some unnecessary calculations, thinking way too hard, and then failing to turn around in time, so that the bat whaps you in the back and Simon cringes, hops back, and falls like a sack of potatoes right off the side of the staircase — it’s burned into the classic old-school gamer’s brain.


Later years would equip us with such rollicking, frustrating, sticky friction-memories as: Trying to walk up a hill in Super Mario 64, trying to ascend a hill from a walking speed in any Sonic the Hedgehog game without pressing the jump button, et cetera. In Super Mario 64, it’s maddening. In Sonic the Hedgehog, it’s like we’re trying to make things hard on ourselves. Castlevania planted the seed in us. We want, we need, to feel that friction. And Super Mario Bros. planted the seed in Castlevania: Games aren’t just about moving, they are about rubbing against the surface of the world.

Castlevania released to a soon-to-be-devoted cult audience, wearing its flashy quirks on its sleeves and pants and on the back of its shirt. The whip was a weird weapon. Previous games had featured guns that fired projectiles out of a location in the middle of the character’s body (ship) in a straight line. Those projectiles generally continued until they hit something. In Castlevania, the whip fires at about the character’s chest height, it stretches only so far, and its effective area is only so high and so wide. In order to get anywhere in the game, the player had to master the knowledge of the precise tiny pixel block that represented his avatar’s offensive influence. Most crucial to this knowledge was how far said pixel block existed from his character.

Then there were the stairs. And the quirks of all the secondary weapons. And the jumping physics: Simon jumps up in what looks like a crouched position; you can control the length of the jump until a certain precise point, after which he falls straight down like an axe. It’s pretty safe to say that Castlevania was made this way on purpose. If they didn’t want friction and physics in their game, they would have made something bland and simple. Japanese side-scrolling games had existed for several years without friction, or even without specific quirks. Games like Dragon Buster, where your character floats mouse-cursor-like over a featureless world, swinging a sword at a boring angle. And some side-scrollers had remarkable quirks, like Legend of Kage, where your character can jump two screens straight up, albeit without any really interesting physics. (Also, Legend of Kage scrolls from right to left, instead of from left to right.) Castlevania must have been made the way it was made on purpose: Eschewing huge game-mechanic related gimmicks for general friction.
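Those jumping physics, sketched out, might look something like the following. This is a guess at the behavior described above, with invented constants:

```python
# A sketch of the Castlevania jump as described above: some control
# over the jump while rising, then, past the cutoff, Simon falls
# straight down like an axe. Constants invented.

GRAVITY = 0.3
JUMP_VELOCITY = -5.0
WALK_SPEED = 1.0

class Simon:
    def __init__(self):
        self.x, self.y, self.vy = 0.0, 0.0, 0.0
        self.airborne = False

    def jump(self):
        if not self.airborne:
            self.vy, self.airborne = JUMP_VELOCITY, True

    def update(self, direction: int):
        """direction is -1, 0, or +1 from the control pad."""
        if not self.airborne:
            self.x += direction * WALK_SPEED
            return
        self.vy += GRAVITY
        if self.vy < 0:
            self.x += direction * WALK_SPEED  # input still matters on the way up
        # once falling begins, input is ignored: the axe drop
        self.y += self.vy
        if self.y >= 0.0:   # crude floor
            self.y, self.vy, self.airborne = 0.0, 0.0, False
```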

What was the selling point of Castlevania? Well, it was a game about horror movie monsters. The schlock-smiths at Konami saw fit to craft a game that found a perfect excuse to fit Dracula, werewolves, Frankenstein’s Monster, Medusa, and the Grim Reaper into the same story. They figured that this level of instant familiarity would translate to perfect success.

Actually, wait, where did Castlevania really start? Konami seem to have the information under some kind of G-14 classification, so let’s make up theories: Maybe it was the music. Maybe the music composer was tooling around with the sound chip and was like, “Whoa, listen to this horror-movie-music shit I have going on right here. It sounds like a pipe organ.” Or maybe some game designer went, “Oh, man, this simple character I created has really weird jumping physics and his weapon is quirky as hellllllllll,” and some other guy was like, “Okay, let’s fit this into something. Hey, that dude over there got something to sound like a pipe organ, and that guy is drawing sprites of werewolves, so let’s see what we can do”. (Actually, I just looked up the music composer on Wikipedia, and it turns out she was a . . . she. I just added her as a friend on Facebook! I love Facebook friends! If you’re reading this, be my Facebook friend (so lonely).)

I’d like to think Castlevania began with the staircases. Simon Belmont is a more old-world, realistic kind of dude than Mario. Something tells me that a game programmer at Konami played Super Mario Bros. and found it funny that Mario jumps all the time. One of the more iconic images of Super Mario Bros. is the staircase at the end of each stage. It’s as high as the screen. There’s a flagpole on the other side of it. You climb the stairs, and then you jump off, trying to hit the top of the flagpole. Or, well, technically, if you are a kid with an imagination, you try to jump over the flagpole. Hell, twenty-five years later, if you are an adult who was once a child with an imagination, you still try to jump over the flagpole. Traditions are habits that so don’t hurt us, we just have to share them.

So, wait, why are the “stairs” on this staircase as high as Little Mario’s head? Even Big Mario, who is technically a giant, has to jump to climb each individual stair. Isn’t that a little weird? One answer would be that the “staircase” isn’t actually a “staircase.” “Staircases” are architectural features that facilitate movement between one floor of a building and another. Without stairs, we’d have to learn how to jump. I had a dream just two weeks ago, to be honest, where I got home to find that the landlord had demolished the staircase leading up to my apartment, and informed me that he’d drilled a hole in the middle of my living room floor, and that I would need to learn how to jump three stories straight up. In the dream, I kicked him in the nuts and started crying, though in real life I probably would have just started crying.

The staircases at the end of the stages in Super Mario Bros. are not actually staircases. They’re just the rare case of a Rorschach test in which everyone gives the exact same answer (you know, like that one that looks like a dead dog): this arrangement of blocks Looks Like a Staircase. The brain is triggered: Jumping, eventually, comes to feel weird and ridiculous. So maybe some hotshot at Konami goes, “You know, stairs exist in real life as real things. It feels like work to climb a staircase, just like it feels like work to walk or run long distances. Super Mario Bros. made running and walking into fun. Maybe we can make staircases fun?” Maybe this maybe-plan backfired, depending on your perspective. You might have thought the original Castlevania was a bone-crushing, too-difficult experience. You’d also be a rube, unable to wrap your head around simple genius!

Okay, that was mean. Let’s say something I don’t like about Castlevania: The very beginning of the game. You’re outside the castle. The first time you play the game, you don’t know who the hell you are, or why the hell you are who the hell you are. You press one button, and you jump. You press another button, and you lash out with your whip. Congratulations, you now know how to play. Now, right in front of you, there’s a candle. The candle is glowing, the flame flickering with a brilliant two-frame animation. Okay. Maybe you walk right past it. The game is, if nothing else, atmospheric. That much you can tell already, just standing outside the first level. Hell, you might play Castlevania for ten minutes, performing pretty poorly at the first stage, before your friend comes in and shows you that you can whip the first candle, and a power-up falls out. Touch it, and your character freezes for an instant. Now his whip is, like, twice as long. Wait, why would you give me that right away, before I have even had an occasion to use the whip to kill anything (aside from a candle)? That’s weird.

WALLPAPER

Did you know that, in a Japanese game development environment, the word used to describe “level design,” since the Famicom era, has been “haikei,” meaning “background” or “scenery”? This is very important to understanding the way the Japanese have historically approached making games.

I’ve personally had the postmodern pleasure of having to explain to a few Japanese game developers that “level design” does not, in the West, mean what the level looks like: It means the things that happen in the level, and how they are laid out. In Devil May Cry, “level design” is deciding that the first stage is going to look like a church.

Level design in modern Japanese games, by Western standards, is pretty abysmal. In the first stage of Devil May Cry 4, we have a part where the player comes into a circular plaza, which is then surrounded by a glowing red wall. Enemies then appear out of mostly nowhere. The “producers” — the “brains” behind this schlock — have seen fit to solve the puzzle of the moment by deciding that the enemies are demons, and everyone knows that demons have the power to basically materialize out of the ground or thin air. We fight them until they stop appearing. When they are dead, the glowing red wall vanishes. We are now free to move forward into the only path forward.

Eventually, we’re inside a building. We walk by a door. We look at it. A message tells us: “It’s locked by a mysterious power.” We continue down the hallway. There’s another door. We look at it. A message tells us: “It’s locked. You need a key.” Then the camera pulls back, and we see the mysterious-power-locked door slide open. There’s a box behind it. We walk back. We open the box. The box has a key in it. We go back into the hallway, unlock the door in front of us. We exit into a circular plaza. A red, glowing wall appears around us. Enemies materialize. We fight them until they die. The wall disappears. We are now free to move forward. Et cetera.
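That loop is so mechanical you could write the whole thing out, runnably, in about a dozen lines. A parody sketch (hypothetical everything):

```python
# The arena-gate formula, reduced to a runnable parody. All names
# and numbers invented; the joke is how little is actually going on.

def arena_encounter(waves=(5, 5, 7)):
    print("A glowing red wall surrounds the plaza.")
    for n, demons in enumerate(waves, 1):
        print(f"Wave {n}: {demons} demons materialize out of mostly nowhere.")
        while demons > 0:
            demons -= 1   # the player fights them until they stop appearing
    print("The glowing red wall vanishes.")
    print("You are now free to move forward into the only path forward.")

arena_encounter()
```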

Why are Japanese games able to get away with this? The reasons are simple, and huge:

1. The character designs / setting / story are meticulously researched to appeal to the target demographic

2. Something in the game (in Devil May Cry, combat) is polished / nuanced to a point where players will not complain if anything / everything outside that thing (combat) is bland

What’s alarming is that all signs point to Japanese games being made this way entirely on purpose. I once suggested to a Japanese game developer, many years ago, that we try to make the levels interesting, in addition to making the combat fun and the story appealing to the target demographic, and the immediate response was “Why?” The ultimate form of the refusal was, “Devil May Cry doesn’t bother, so we won’t bother.” In Devil May Cry, the tasks between battles really are tiny and asinine. Find an orb in a risk-free environment where the biggest challenge is figuring out where your character is on the screen and then use it to somehow open a door, et cetera. Your reward is more of the combat that you crave.

We can find hints of the Everything Disease in this, again. Many of the players playing these games might not care about the graphics or the character designs, though can you really blame developers for maintaining their conviction that the players all care about everything in the game? Some players merely appreciate the challenges posed by the battles. They may like the feeling that they have achieved a level of super-legitimate competence at the labyrinthine reflex-oriented tasks the game sets before them — revving up a motorcycle-sword while dodging enemy attacks, stringing together hits of a combo, avoiding detection from enemy AI patterns just long enough to execute unanswered strings of button-press-triggered combinations. Good money says that a decent percentage of Street Fighter IV players couldn’t care less about the characters or the graphics or the music — they just want to play online, and win, and know that they are doing something better than someone else, someone who exists in the real world. It helps to know that the “something” is difficult. We know something is difficult when we fail at it at least once. We know something is very difficult when we fail three or four times.

In Devil May Cry, the naturally emerging “level design” is basically a commercial break between delicious opportunities for failure. Usually, it’s the “scenery” department that slaps these segments together. That’s kind of how it was in the old days, though maybe not quite.

Back in the old days, games were smaller. Programmers, level designers, and game designers tended to be the same people as the play-testers. In the case of Castlevania, you’ve got a guy with a whip, and you’ve got some stairs, and some enemies coming at him, and he can either climb the stairs, stay and fight, or get hit. The challenge of developing a game like this is laying it out in a reasonable fashion, so that it gets harder as it goes along. You want the hardest bosses to be at the end. You want the hardest platform segments to be later than the segments obviously constructed to help the player learn how to jump.

In the original Castlevania, the team likely sat down once they knew the game was about a big castle full of horror movie monsters, and decided what the motif for each stage would be. Stage one would be a hallway. Stage five would be a clock tower.

Back then, the game design vocabulary was limited. You hear a lot of people talk about how the Nintendo sound chip was so primitive that a composer needed to make a really good track or the music would fall flat. The corollary is that “restriction forces ingenuity”.

Ingenuity is . . . probably better than “innovation.”

Yes, what I’m saying is that “older games were better” — though mostly by default! Castlevania was such a simple game that, unless all of the obstacles were laid out in a common-sensically escalating manner, people would have freaked the hell out and hated it. Super Mario Bros. was the same way, technically, though Castlevania is more interesting in a modern light because it cutely tried to tell a film-like story, and its stages were home to background art that grew increasingly more portentous of some bombastic finale.

The original Castlevania didn’t set the world on fire. It was quirky and well-made, though its horror-movie setting might have put some people off. It carved out a niche, and its developers no doubt felt that they could eat off it for a while. Other game developers saw Castlevania and knew that they couldn’t make games about horror movie castles themselves, and that a whip was too obvious a thing to imitate, so they resigned themselves to making their own games, each in its own unique kind of setting.

We ended up with lots of games. Some of them were great. After maybe five years, we had a lot of games with enough unique hooks to populate one really huge, bombastic, great game.

Eventually, what happened was the Feature Snipers showed up, and nobody ever needed to do anything original ever again. These trained eyes took aim at the whole of game history, and picked out the targets that could be separated neatly from their respective games’ settings and never be noticed.

LET’S TALK ABOUT DOUBLE JUMPING

One of the features to be famously and widely sniped was the double jump. Many snipers sniped it from Super Ghouls ‘n’ Ghosts, which was in itself a non-numbered Super Famicom sequel to a non-numbered MegaDrive sequel of a Famicom version of an arcade game. The truth is that Super Ghouls ‘n’ Ghosts actually sniped it from someplace else. It might have been Dragon Buster, which I conveniently mentioned earlier in this wall of text.

Why in the flaming hell could you double jump in Dragon Buster? Some old arcade game connoisseur is probably going to lecture me for this. I don’t care. The feature is superfluous. You jump, and then you jump again.

In Super Mario Bros. 3, you can land slowly by flapping a raccoon tail as you jump. Okay. That feature probably came out of the same psychological place as mushrooms that make a man instantly grow to twice his size. The feeling of using a raccoon tail to float is not without nuance. You need to press the button repeatedly; the desperation of Mario’s situation on the screen translates into the desperation of your fingers, translates into the desperate solution on the screen. The best games play desperation ping-pong with us. In Super Mario World, it’s arguably dumbed down: When Mario is wearing a cape, you just hold a button, and he floats slowly down.
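The input shape is the nuance here: the tail wants repeated presses, each press buying a brief moment of slowed descent, while the cape just wants the button held. A minimal sketch of the tail version, with invented constants:

```python
# A minimal sketch of the raccoon-tail flap: each press grants a few
# frames of capped fall speed, so staying aloft demands repeated,
# increasingly desperate tapping. Constants invented.

GRAVITY = 0.4
FLAP_FRAMES = 8        # how long one flap's effect lasts
FLAP_FALL_SPEED = 0.5  # maximum fall speed while a flap is active

class TailMario:
    def __init__(self):
        self.vy = 0.0
        self.flap_timer = 0

    def press_button(self):
        self.flap_timer = FLAP_FRAMES   # one flap per press; holding does nothing

    def update(self):
        self.vy += GRAVITY
        if self.flap_timer > 0:
            self.flap_timer -= 1
            self.vy = min(self.vy, FLAP_FALL_SPEED)   # the flap caps fall speed
        return self.vy
```

The cape version would replace press_button() with a check of whether the button is currently held down, which is exactly the dumbing-down in question.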

In Super Mario World, you can kind of double-jump, by jumping off Yoshi’s back. That doesn’t count as a true double jump, because you can’t do it any time, at will.

No, the first Real Double Jump in games was in Super Ghouls ‘n’ Ghosts, in which King Arthur fights zombies and monsters in an effort to rescue a princess who you see in the opening scene, so you don’t feel too bad if you give up without ever rescuing her (most people do). In Super Ghouls ‘n’ Ghosts, you jump, and then you jump again. You can do this whenever you want. The Japanese instruction manual calls this the “Harrier Jump”. The distinction is important. The Harrier Jump only allows you to increase the vertical element of your jump. It’s so full of nuance it’s almost sick. Before a certain point in your jump trajectory, you can’t initiate the Harrier Jump; past a certain point in your jump trajectory, you can’t initiate the Harrier Jump. You have a short window. And anytime you do it, all it does is boost you up vertically.
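Mechanically, all of that nuance is one timing window. A sketch, with invented values:

```python
# A minimal sketch of the Harrier Jump: the second jump is only legal
# during a middle slice of the first jump's arc, and all it does is
# reset vertical velocity. Window and velocity values invented.

JUMP_VELOCITY = -7.0
WINDOW_OPEN = 6     # frames after takeoff before the Harrier Jump is allowed
WINDOW_CLOSE = 22   # frames after takeoff when the chance is gone

class Arthur:
    def __init__(self):
        self.air_frames = None   # None means standing on the ground
        self.harrier_used = False
        self.vy = 0.0

    def press_jump(self):
        if self.air_frames is None:                        # first jump
            self.air_frames, self.harrier_used = 0, False
            self.vy = JUMP_VELOCITY
        elif (not self.harrier_used
              and WINDOW_OPEN <= self.air_frames <= WINDOW_CLOSE):
            self.harrier_used = True
            self.vy = JUMP_VELOCITY   # a purely vertical boost
        # outside the window, the press is simply ignored

    def update(self, gravity=0.35):
        if self.air_frames is not None:
            self.air_frames += 1
            self.vy += gravity
```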

It constitutes a huge risk. You can survive two hits before suffering scary death in this game. Enemies and their projectile spawn litter the screen. You need to master the precise feel of the Harrier Jump in order to use it effectively. “Effectively” means any manner that won’t get you killed.

Super Ghouls ‘n’ Ghosts represents an important point in the timeline of action game evolution because the idea of an idiot-savant playing entirely through it on his first try is damn near inconceivable to any self-respecting astropsychologist. Learning to come to grips with the “feel” of the character is even more essential in this game than in the earliest Castlevania titles. You could conceive of someone accidentally understanding all of the necessary skills of the first Castlevania game between the castle door and the first zombie. Super Ghouls ‘n’ Ghosts is too chaotic.

The funny thing is, after Super Ghouls ‘n’ Ghosts, people started generally making easier games.

They did not, however, stop making games with double jumps.

I said earlier that even if a savant played through Super Mario Bros. on one life on his first try, he would still understand and maybe appreciate the unsettling feelings of challenge. A psychological Grand Canyon separates Super Mario Bros. from Super Ghouls ‘n’ Ghosts in this regard, and a psychological Pacific Ocean separates Super Ghouls ‘n’ Ghosts from every other game that has sniped the double-jump feature.

LOCKS, KEYS, WALLPAPER

Sensing an opportunity to make “more money” on a “new game console”, Konami set about making Castlevania: Symphony of the Night “something different.” The market research apparently showed that Role-Playing Games were popular, and previous Castlevania games, such as the acclaimed Castlevania III: Dracula’s Curse, had flirted with non-linearity, multiple playable characters, and development of said characters. So Symphony of the Night emerged as a kind of lumbering RPG / action game chimera.

Symphony of the Night is one of the names that gets thrown around whenever fourteen-year-olds argue about the “best game ever.” One of the names it gets thrown against with great vigor is Super Metroid. If you ask me, both of these are very nice games, maybe even great games, though neither of them is the best game ever, because they are deceptive and insincere. You might as well just go ahead and declare The Jam’s cover of the “Batman” theme song the “best rock song ever,” if you’re going to say Symphony of the Night is the “best game ever”.

Maybe I confused someone when I said that. I’m sorry. Though my extemporaneous prose style might lead you to believe otherwise, I am actually a proponent of simple, clean, shimmering game mechanics. I like games to be about progress. About moving forward, either by chunking forward, clunking forward, frickting forward, crunching forward, or whatever have you. I don’t like when games glide on by, and I don’t like it when they throw me against the wall, nor do I like it when they waste my time. I especially don’t like when they piss on my lawn and say they’re the sprinkler repairman. My favorite game of all-time is Out of This World, which I understand a lot of people pretend to like. I am not pretending.

What’s disingenuous about Symphony of the Night? It has a double-jump. The double-jump is something it gives you later in the game. Prior to getting the ability that enables the double jump, you might have come across a wall or obstacle just too high for you to jump over. So the game gives you this supernatural power, tosses off an in-game-world explanation for its existence, and then you’re off. You get the double jump ability, you test it out. You sure can jump high! This should help you get somewhere previously unreachable. Maybe you get the double jump ability and turn the game off (unlikely — getting new abilities, market research shows, instantly renews a player’s interest in the game). Maybe you leave the game turned off for several weeks. When you turn it back on, you sure as hell can’t remember where any previously-too-tall obstacles had been. So you open the map, and check out where the “explored” areas end. You can “solve” the “overlying puzzle” of the game by going to every dead end and seeing if the reason for your previous failure to explore that area had anything to do with not possessing the double jump ability. Eventually, with every little ability the game gives you (the ability to turn into mist, et cetera), you can solve every level-design-centered “puzzle” in the game. It’s not so much solving a puzzle as unraveling a sweater. Now, of course, this game has neat little action challenges, too, and monsters to kill. It’s just — there are no Castlevania staircases. All the stairs are Super Mario Bros. stairs — you have to jump to get up each stair — or they’re bland inclines. The game takes the Castlevania wallpaper and transplants it into a wider-audience-friendly game.

People love this sort of stuff. They love watching numbers go up. Why? I guess I can understand. I have this tape measure that I can operate with one hand. It has a little fastener on it. I can put it around my waist or chest or upper arm, and measure the growth of my muscles. I measure them every Sunday night, to see if my week in the gym paid off, and how. Grinding is part of life. It can be for money or for health. Then it finds its way into our entertainment. People like watching numbers go up in RPGs, they like watching their dudes do more damage. They like getting new items in Zelda so that they can use them to defeat previously invincible enemies, or bridge previously impassable chasms. Symphony of the Night gives players the ability to double jump before giving them the ability to turn into mist, before giving them the ability to fly like a bat, all in the name of making different areas accessible. The player eventually comes to sense his “ownership” of the game environment. This type of lock and key game design, super-commonly called “gating,” is all over, and it works on lots of people.
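Strip the wallpaper off that pattern and what’s left is a graph search. A skeleton sketch, with hypothetical room and ability names:

```python
# Lock-and-key "gating" reduced to its skeleton: rooms are nodes,
# blocked passages are edges tagged with the ability that opens them.
# All names hypothetical.

GATES = {
    ("entrance", "high_ledge_room"): "double_jump",
    ("high_ledge_room", "grate_corridor"): "mist_form",
    ("grate_corridor", "far_tower"): "bat_form",
}

def reachable(start, abilities):
    """Every room you can 'own' with the abilities you currently hold."""
    seen, frontier = {start}, [start]
    while frontier:
        room = frontier.pop()
        for (src, dst), needed in GATES.items():
            if src == room and needed in abilities and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return seen

print(reachable("entrance", set()))                         # {'entrance'}
print(reachable("entrance", {"double_jump", "mist_form"}))  # two rooms deeper
```

Unraveling the sweater is just re-running this search every time the game hands you a new tag.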

Why doesn’t it work on me? I don’t know. I’m not pretending, here: It’s never worked on me. I also never believed in Santa Claus, the Easter Bunny, the Tooth Fairy, or God. I’ve never been scared during a horror movie. I guess I’m the target audience for the “Saw” movies.

It’s like, when I fight a really hard boss and am then rewarded with a double jump ability that allows me to jump over a tall wall way back earlier in the world of the game, all I can think about is how dumb it is that I’m backtracking through the easier segment of the game, only now it’s maybe a little easier because I can jump higher. I’m actually fairly confident in saying that the earlier areas of Symphony of the Night aren’t any easier because of the double jump, though let’s imagine for a second that they were. Now, when you jump over that tall wall and enter the next part of the castle, maybe the enemies in there are harder, meaner, or stronger than the enemies were in previous areas. Usually, they’re just stronger. That means, if they hit you, they do more damage. If you’ve already gotten good at avoiding taking damage, that might not matter so much. Why would you want to make the game easy after a hard part? That’s weird. That’s like shuffling a deck of cards, asking me to put them in order, and then taking them back when I’m done and immediately shuffling them again as soon as you’ve confirmed that they are indeed in the right order. It’s not sincere. It’s mean. If I need a break from the game, I’ll pause it. You know what’s a good way to handle this sort of thing in an action game? Split the game up into combat-heavy segments and then exploration segments. Uncharted 2 does this fairly well — it’s fairly evenly striped: action, exploration, action, exploration.

I said that escalating skill acquisition makes people feel like they “own” a game world or character. It never makes me feel that way. I know the game isn’t real. I know I’m just playing a game. When a Zelda game gives you the hookshot and teaches you how to use it over the course of a fairly long dungeon, it feels like something. And then, later in the game, you’re in a dungeon, fighting some monsters. There’s a spike pit. On the other side of the spike pit is a panel on a wall. You know that this panel is the kind of panel you can hookshot over to. You kill the monsters, open the menu, choose the hookshot, aim it, and pull yourself over. This is what bothers me: There’s no risk. There’s just a reward. You open the menu, choose the hookshot, and grapple over. The “challenge” is “remember what this thing is?” The “solution” is “open the menu and equip the hookshot”. The reward is invariably “Yay Unlocked: You Are Going Where the Game Designers Want You To Go”. In Super Mario 64, we have a righteous double jump of justice, its reward being twice the height of a single jump, its risk being that you need enough solid ground in front of you to properly land and use precise timing in order to execute it. Then you have the triple jump. And you have the amazing sliding long jump, and the excellent wall jump, and the delicio-awesome squatting super-high jump: these all have their own little unique physical quirks. They are available to you, the player, from the very start of the game. The game is so confident that just moving the character is fun and can provide for a million great level design situations that it gives you a playground to move around in freely right at the beginning of the game. (Quick aside: why doesn’t New Super Mario Bros. feature the sliding long jump? That’s the greatest, best, and tastiest jump, damn it!)

Then we have psychological accidents like Banjo-Kazooie, where you have to earn the double-jump, and Donkey Kong 64, where if I’m not mistaken you have to collect a few hundred items simply to unlock the regular jump and attack functions (that’s a joke (not a good one)). This is all in the name of “replay value,” all in the name of hinting at an illusion of depth. Though I tell you, man, every time a game like Zelda makes me take out the hookshot and use it because, what the hell, the dungeon designer figured that you might not have used it in a while, or the game designer might not want you to be doing the dungeons out of order, every time a so-totally-not-the-game-of-the-decade game like BioShock shows me a guy standing in water and then has a voice-over chew my ear off with hints about how I can equip my lightning ability to electrocute people standing in water, all it does is strip away a layer of the wallpaper, and reveal another layer of wallpaper underneath.

Playing games like these, I always get the impression that it’s all wallpaper, and no walls. All reward, no risk. And, most importantly, it’s never my choice. Moments like these reveal — to me — a weird little inferiority complex. The games are deadly desperate to mask their existences as merely simulations of making some character move. And you know what? Noticing how flimsy some games are kind of makes me hate all games. Uncharted 2 makes me press the X button just to step up onto any platform higher than the hero’s waist. Zelda: Ocarina of Time let me pull myself up just by walking toward such a platform. Why Uncharted 2 makes me press a button is wholly understandable: It makes me feel like I’m doing something. It’s great, and it works, though if I’m having a bad day (most days (so lonely)), I might be in a mood to sit there and think that’s dumb. It’s basically like, you’re making me press a button just to move. Isn’t that what the analog stick is for? And then, the devil on my left shoulder realizes that there isn’t an angel on my right shoulder and decides to say something halfway nice: Would you rather the game’s hero be a gray sphere in a world full of other gray spheres? And sometimes I’m feeling in the mood to say “Yes! I would!” And then there are those times in Uncharted when there’s a “puzzle”, when the character says “Hmm, I bet if we do something here in this room, we could open the way forward”, and I’m like, “I’m pretty sure all I have to do is look at the place I’m supposed to go and then scan the walls for hand-holds, and then try to find where the lowest hand-hold is so that I can then jump onto a box or something and start climbing over there.” Can’t I just have an “I get it” button? Sometimes, as Nathan Drake himself says halfway through Uncharted 2, “I’m so tired of climbing shit.” The “I get it” button — listen to me, over here. Before someone tells me I should go watch a movie, or something, I’ll tell myself.

(Quick Aside: Hey, why hasn’t EA Sports ever made a marathon running game?)

((Quick Aside #2: If Sega approached the next 3D Sonic the Hedgehog game from the jumping-off point of “Hold a button to accelerate”, maybe they could get somewhere.))

So here’s a big, ugly problem. Gating manifestations such as earned double jump techniques have become, in and of themselves, traditions. In The Legend of Zelda: A Link to the Past, you don’t actually have to get the blue or red clothes, which increase Link’s defensive power. However, in Super Metroid, you do have to get the Varia Suit in order to survive the high temperatures of Norfair. When and why did this shift happen?

A basic law of capitalism is that businesses need to grow every year, or they’re not doing it right. I have a friend who makes clothing by hand. Her business is growing steadily in popularity, gaining a larger helping of fame with every passing month due to her devotion to making original clothing by hand with a staff of fewer than ten. The more famous her brand gets, the more the demand grows. The more the demand grows, the more tempted she is to hire more staff and make more clothing. However, she knows she would be spreading herself too thin. World fame means people would be ordering her clothes from around the world, consistently depleting her stock to zero while removing the clothes entirely from the streets of Tokyo, the place where word-of-mouth spawned its original popularity. If her business stays the same size, then imitators, backed by larger companies, will step in to make similar clothes and sell them at lower prices to a wider audience. The decision is to either stick to your guns and keep doing what you do, until eventually someone else does it better, for more people, or “sell out” and grow, possibly alienating your friends and definitely inspiring competition. At the end of the day, it’s sheer luck of the draw that something like a fashion brand manages to survive as “the real thing” in a sea of imitators with all its original intentions intact.

Back in 2002, Sega released a followup to their Panzer Dragoon series. Panzer Dragoon Orta was developed by Smilebit, of Jet Set Radio, who were not the original developers of the Panzer Dragoon series. The previous shooting game in the series, Zwei, had been a masterpiece. It was all about shooting — pointing, aiming, shooting, locking on, firing crazy missiles. The challenge ramped up evenly, until eventually the game was coasting up into the stratosphere of sweetness. Depending on your play style, the dragon your avatar rides would change shape and color at the end of every stage. It was neat. The overall game didn’t change much: You were still flying, and shooting.

Orta was not nearly as good a game, probably because the new developers figured they had to “add” something to the game, or die. So they added this thing where you had three types of dragons: the big one, the medium one, and the little one. The big one was painfully slow and fired powerful shots and devastating lock-on shots. The medium one could move alright, and fire pretty good shots and normal lock-on shots. The little dragon was really fast, and could only fire weak regular shots. What you do in Panzer Dragoon Orta is you press a button to change the type of dragon you are at any given time. However, there are parts of the game that are literally impossible to pass without taking damage unless you change into the speedy little dragon, just as there are parts where only the big dragon’s lock-on fire can hurt a certain type of enemy. This is beyond the game designers saying, “Hey, check out this thing you can do” — it’s them saying, “Hey, we made this cool thing for you to do, and now you have to do it or die.” Super Mario Bros. never made you throw a fireball, and people loved it!

The train of thought seems to be that games are like businesses. They’re not. Games are much more like films than fast food chains. Like, well, the film industry is hooked on remakes, and sequels, too.

You know what is like a business, though? The games industry. You know, all Orta needed to be was Zwei with better graphics and maybe better level design.

Then again, we’ve established that some video game developers still don’t “believe in” level design. This would be like a film studio saying a movie didn’t need a script — just put the actors in front of a green screen, tell them they’re in the desert and that they’re thirsty, and see what they come up with.

SHOW THE PLAYER SOMETHING HE CAN’T DO

Many games show the player something he can’t do, only to let him do it later. Zelda games will usually show you doors you can’t reach, only to allow you to access them later via use of some item. Lots of the time, in Zelda games, the finally accessed room or cave leads to some optional item, like a piece of a heart container — collect four (or five, these days) to increase your maximum hit points by one. Zelda games manage to be quite thrilling in spite of themselves, sometimes, despite all the weird disconnects running rampant. You might be able to see the entrance to a cave, and just not possess the ability to hookshot over to it. Sometimes, you get the hookshot, swing over, enter the cave, and see a wall that requires a bomb for you to open. Sometimes it feels really good to throw down a bomb and uncover the loot — like a real good experience in the toilet.
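If you want that gate logic spelled out, here’s a toy sketch in Python (names all invented; this is nobody’s actual Zelda code): every destination is visible from the start, but each one demands the right inventory.

    # A toy sketch of Zelda-style item gating (hypothetical names throughout):
    # you can see each place early on, but you need certain items to get in.
    GATES = {
        "cave_across_the_gorge": {"hookshot"},
        "cracked_wall_inside": {"hookshot", "bombs"},  # a gate behind a gate
    }

    def reachable(place, inventory):
        # Subset test: you must own every required item before you can enter.
        return GATES[place] <= inventory

    items = set()
    print(reachable("cave_across_the_gorge", items))  # False: you can see it, you can't reach it
    items.add("hookshot")
    print(reachable("cave_across_the_gorge", items))  # True: swing on over
    print(reachable("cracked_wall_inside", items))    # False: now you need a bomb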

However, it strikes me that games, these days, use these weird gating techniques so that they don’t have to bother thinking about how their game is laid out, or even what’s in their game. Why can’t every game have one Really Fun Thing in it, like the Portal Gun in Portal?

Sometimes, though, gating the player beyond his will can still translate into a wonderful game experience. Dragon Quest famously shows you locked doors in the earliest stage of the game; you return to them every time you get a new key, just to be told that the key doesn’t fit. Eventually, when you get the key that fits, it feels like something huge. Dragon Quest games also have characters with personalities and histories. The story moves forward, and time ripens in the world of the game. When the time is right, the key is yours.

Then there are games like Final Fantasy VI. I loved that game. It was about something — a Dickensian, fantasy history opera-thing. It told you a story, and eventually it gave you keys to the world and told you to solve the puzzle. In the second half of the game, all the characters are spread apart. You play as one character bent on killing the despot ruling the world. You can go straight to him and try to kill him if you want. You probably won’t be able to. Or you can fly around the world in your big flying boat and try to reunite your party members. If you go for that, that’s literally tens of hours of game, right there. They’re not side-quests: They’re optional game segments.

LET’S TALK ABOUT SPEED

I’m going to begin to turn this cruise ship back around! Be prepared:

I’m going to copy and paste (and bold the interesting parts of) a paragraph from the Wikipedia page about Symphony of the Night, which is, among other things, longer than the Wikipedia entry about the film “Patton”, which was, incidentally, the only film that my father’s father, who loathed entertainment in all forms (except those tobacco-related) had ever watched in its entirety (and he watched it twice):

Symphony of the Night has a liberal control scheme compared to its predecessors in the Castlevania franchise. Aside from attacking, jumping, and basic movement, Alucard is inherently able to perform both a downward flying-kick and a back-dash. While the downward kick may never be discovered or employed by a player, the back-dash (activated by a single button press) is an easily employed method of evading enemy attacks. Because it is faster than Alucard’s normal walking speed, a player may back-dash as a slightly faster method of travel through the flatter areas of the castle. Yet another use of the back-dash is attack canceling, a technique common in fighting games: by activating the dash just after an attack lands Alucard’s attack animation is interrupted, allowing the player to bypass the attack’s recovery animation and instead perform another action. Evasive dash moves also appear in later Igarashi-produced Castlevania titles.
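In case “attack canceling” reads like jargon: it just means the dash input is allowed to interrupt the attack’s recovery frames. Here’s a toy state machine in Python (frame counts invented; this is emphatically not Konami’s code):

    # A sketch of attack canceling: normally an attack locks you into
    # "recovery" frames before you can act again; a back-dash input is
    # allowed to interrupt them, so you act sooner than intended.
    class Alucard:
        def __init__(self):
            self.state = "idle"
            self.recovery_frames = 0

        def attack(self):
            self.state = "attacking"
            self.recovery_frames = 12  # the wait you'd normally endure

        def update(self, dash_pressed):
            if self.state == "attacking":
                if dash_pressed:
                    # The cancel: dash skips the remaining recovery frames.
                    self.state = "back_dashing"
                    self.recovery_frames = 0
                elif self.recovery_frames > 0:
                    self.recovery_frames -= 1
                else:
                    self.state = "idle"

    a = Alucard()
    a.attack()
    a.update(dash_pressed=True)
    print(a.state)  # back_dashing, twelve frames ahead of schedule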

Do you find yourself doing things like this in videogames? I find myself doing it all the time. I might have mentioned before that my first priority when playing a new game is to make it look as ridiculous as possible. Usually, all this requires is twirling the analog stick in circles and marveling at the lack of a cornering animation (which, remarkably, Bayonetta has — maybe that’s why Famitsu gave it a 40/40).

I don’t make games look dumb because I am a mean person — no, I do it tentatively, like dipping my toe into a swimming pool. I want to know if it’s safe to let myself go ahead and be immersed into the experience. I don’t want to jump in only to find out later that I can conjure intense silliness out of thin air at a crucial part of the game.

Sometimes, like in Symphony of the Night or Super Mario Sunshine, there’s a weird little game mechanic like the back-step or the belly slide that allows you to move through the game at a far breezier speed. This makes the game look ridiculous; however, once you realize just how efficient it is, you will find yourself unable to stop doing it. Of course, the developers don’t intend you to do this, though it’d be hard to believe the testers didn’t discover it and do it themselves.

This sort of thing has built up to a weird crescendo, of late. Games like Star Ocean 4 give you a run ability partway through the game, and then let you use it all you want. The run ability happens to look ridiculous. If you keep jabbing the button, it looks like your guy is having a seizure at the speed of sound, just floating frictionlessly along the ground. I mean, this thing was programmed in intentionally. You will never be penalized for using it. And it looks ridiculous. Therefore, we must conclude that the developers wanted it to look ridiculous, maybe because ridiculous-looking fast methods of travel existed in popular games like Super Mario Sunshine.

This is getting weird. It’s way too weird, now. Why would you want a feature like that? Why would you think that the kids would be enraged if you didn’t have a feature like that?

Then we have games like Oblivion, where the player has the option to just open the menu at any time and warp anywhere in the game world at the touch of a button. They call this “quick travel” or “fast travel”. It’s the video-game-world equivalent of a word processor’s “search” function. Hey, consider this, genius game developers: When your game world is so large that fast-traveling within friendly areas is considered a necessary design element, maybe your slow travel sucks and/or your game world carries the tone of a theme park after hours, with you playing the role of a widowed octogenarian with a broom and dustpan.

THE ACTION BUTTON

Games are active forms of entertainment, mainly about movement. Movement is the point of games. It should be fun. Films are static, and can be about anything, really, so long as they have a beginning, a middle, and an end. My favorite game of 2009 might be Canabalt, which is about a guy running from an unseen threat. We never see the threat, though we don’t doubt its scary nature, because our guy is running so fast that he literally can’t stop. All we do is jump. The presentation is beautiful. Just looking at this game, you can say it’s a complete piece of work. You need only glimpse three seconds of it in action to know the full scope. You don’t even need to see the character die to know that death is imminent, and that, maybe, this guy’s attempt at escape is futile. It tells you a neat little story, and impresses you with a cute little catharsis, in about as much time as it takes you to tap your finger on an iPhone screen.

When you jump through windows, glass shatters with a perfect sound effect; when you land on a roof where white doves are perched, they scatter randomly and flutter away.
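That, by the way, is the entire control scheme. Sketched as code (every number here invented; the real tuning belongs to Canabalt’s developer), it’s barely a dozen lines:

    # A one-button auto-runner, reduced to its one decision per frame.
    # Forward speed isn't even a parameter: the player never controls it.
    GRAVITY = -0.8
    JUMP_VELOCITY = 12.0

    def step(y, vy, on_roof, jump_pressed):
        if on_roof and jump_pressed:
            vy = JUMP_VELOCITY
        vy += GRAVITY
        y = max(0.0, y + vy)  # 0.0 stands in for "roof level"
        return y, vy

    y, vy = 0.0, 0.0
    y, vy = step(y, vy, on_roof=True, jump_pressed=True)
    print(round(y, 1))  # 11.2: airborne, your only decision already behind you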

Then we’ve got Kingdom Hearts II, where you have two buttons: Yay and Awesome. Press Yay, and your hero just completely flips the fuck out all over the screen. Keep pressing it and eventually all the enemies will evaporate. Sometimes, a huge triangle appears on the screen. Press the Awesome Button to make Something Awesome happen. Then there’s Bayonetta, where, sometimes — it’s like a pachinko jackpot, really — you can summon a “torture attack”, where you press a button within a somewhat lenient window, causing an iron maiden or guillotine to materialize out of nowhere and destroy an enemy. How does this work? Well, of course, the story deals with supernatural things, so just take the game’s word for it — our heroine is a person who can materialize torture devices out of nowhere, just because, why the hell not?

In Shenmue, they called these Quick Timer Events. In the current Japanese games industry, they call them Action Button Events.

ABEs are a cancer in the duodenum of game design. They’re all over God of War and, well, anything else, really. Sometimes, even fantastic games like Uncharted 2 wedge these in, only in dull no-risk situations. Like, you have to crank a lever to open a door, and you have to crank it really hard, so you have to press a button a whole bunch of times. Maybe this is to keep players from getting through the door before killing all of the enemies in the area, in which case I guess it’s kind of neat, because it adds context to something (having to kill all the enemies in an area before being allowed into the next area) that many games don’t bother to give context for.

Developer Ninja Theory put an ABE about three seconds into their game Heavenly Sword, prompting Ninja Gaiden director Tomonobu Itagaki to call the game a load of bullshit. Itagaki declared that ABEs were a waste of time. Representatives of Ninja Theory said that ABEs are used to make the player feel like he is part of the dynamic cinematics happening on the screen, rather than sitting passively as an audience member. Itagaki didn’t comment further. He very well could have. I’m no Itagaki-worshipper, though I like to think his games exhibit a stellar sense of being in control of your character, and they manage to let you do all kinds of sweet little things like intuitively run up walls. Then again, Itagaki also once said that Resident Evil 4 sucked because your guy had to stand in place to shoot, and that was “unrealistic”; the man obviously learned the majority of his life lessons from John Woo films and/or Contra III: The Alien Wars, so maybe I’m giving him too much credit. (Just kidding, Itagaki! Call me! (Don’t say you don’t have my phone number (even though you don’t (it’s such a boring excuse))).)

Publisher From Software, in the same year that they released the excellent Demon’s Souls, put out a misunderstood little ABE-heavy ninja game called Ninja Blade. Don’t play it — it sucks. Oh, that was mean. Well, it does some neat things, at least. Early in the game, there’s a boss that spits shock waves at you. You have to run down a hallway, dodging the shock waves. Get to the end of the hall, and you can wail on the boss. Eventually, he does the shock wave thing again. Now the camera zooms into your character. He’s got his sword against the shock wave. Press the sword button repeatedly to push against the shock wave with your blade. When the camera zooms out, you find that your dude has been pushed down the hall. If you failed to hit the button enough, you might be pushed all the way back to the other end of the hall, meaning you’ll have to dodge all those shock waves again. This is neat — a progressive ABE.
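The “progressive” part, sketched as code (all numbers hypothetical; From Software’s tuning is their own): failure is a position, not a game-over screen.

    import random

    # A progressive ABE: mashing gains ground, idling loses it, and wherever
    # you end up when the window closes is simply where play resumes.
    def shockwave_struggle(hall_length=100):
        position = hall_length  # you start at the boss's end of the hall
        for _ in range(60):     # a one-second window at 60 frames per second
            mashed = random.random() < 0.5  # stand-in for the player's mashing
            position += 2 if mashed else -3
        # Worst case, you're back at the far end, re-dodging every shock wave.
        return max(0, min(hall_length, position))

    print(shockwave_struggle())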

Then there are moments like in Dead Space, where a monster grabs your leg with some tentacle. After a moment, you realize that despite the cinematic camera angle, you can still aim and shoot your gun. Uncharted 2 does the same thing a couple of times. It always manages to feel kind of neat.

“The ideal game”, I guess, would be a sublime mix of rock-solid game mechanics and lots of neat little interactive movie sequences where you’re basically doing only things that your character does repeatedly in the game (like shooting).

ABEs are designed to allow the player to “feel” (not “be totally”) “in control” of something far more nuanced and dynamic than what goes on in the game. Maybe we’ll be seeing a BioWare RPG at some point soon, where dialogue is all ABE-activated. That could be hilarious.

Right now, we have games like Left 4 Dead and Left 4 Dead 2, games centered on simple though deep play mechanics and an online community element. Though we In The Know know that the makers of Left 4 Dead are also genuine crafters of entertainment masterpieces like Half-Life. Valve is shrewdly using Left 4 Dead to boost their reputation. One day, they’ll make a big-scale Half-Life-level event again, and “artistic conscience” will enter the feature snipers’ list of “things to watch.” Maybe overnight — or maybe over a couple of nights — everything in the games industry will change.

RUBIK’S CUBING ON THE BUS

When I was in a hospital recently, I noticed lines of colored tape on the floor. At one junction, the red and purple tape veered off to the left, and the orange and blue tape veered off to the right; the black and green tape pointed the way forward. No doubt these lines of tape point out the way to various departments of the hospital, guaranteeing that those in charge of transporting patients get them to a doctor’s care as quickly as possible.

When I was in Shanghai last month, I noticed that every street sign contains a compass, pointing out which direction you are headed, and on which street.

One of the things that originally helped me decide to live in Japan was the feeling of being able to get lost. Tokyo has very few named streets, and everyone navigates by landmarks. I know my way around by now, and I rely on fabricated entertainment to fulfill my desire to get lost in something. Lately, it’s the old Russian novels. I read both Anna Karenina and War and Peace while riding trains, planes, taxicabs, or toilets during 2009. Why do games always have to have mini-maps and navigation arrows? Mini-maps are the game equivalent of dozens of dog-eared pages in a thousand-page book. Ten years ago, mini-maps felt like game-y touches; now, in the era of GPS, they feel too real. Games are an escape from a world where jumping is not a mode of transportation; why is movement always something handled in such a businesslike fashion? Why mini-maps? Why fast travel? Fast travel is a quiet admission that the Slow Travel isn’t always fun. Why can’t the Slow Travel always be fun? Did you ever play Breath of Fire III on the PlayStation? There was a part where you have to navigate through a desert by the stars. Okay, so that part probably infuriated a lot of people — not me, though! These days, we’ve got great graphics — why not give me big horizon-filling landmarks, and make them my only guide? Let me figure things out, let me enjoy moving.

Two more things: last year, I was on a subway train that stopped in a tunnel in mid-voyage. Across from me sat a woman who was busy tooling with a Rubik’s Cube. She had a knit in the center of her forehead and was biting her bottom lip so hard I was on edge, waiting for blood. She had no idea what she was doing. No rhythm, no reason. She was just clicking that thing around like crazy. At one point, the announcer came on to apologize — apparently someone had committed suicide in front of the train before ours, so we would be held up for a little while. The woman looked up, just then, and saw me looking at her. Instantly, her face turned red. She got up, ran to the next car of the train, in which there were no available seats, and stood with her back to the glass, continuing to click around on the Rubik’s Cube.

Then, two nights ago, I stood on the same subway train, going in the direction opposite the one I’d been going a year ago. Seated in a corner was a clearly autistic man, quickly solving and then unsolving a Rubik’s Cube using a tried-and-true method. He might have been a tournament-level Rubik’s Cube solver. The thing is, once you know the method, it’s just a matter of plugging away. Did you know there are kids who speed-run Portal? That’s so weird. I looked at that guy and I thought about the woman in the train roughly a year ago, and I thought about people’s grandmothers pretending to like Wii Sports simply because they relished the opportunity to converse with their grandchildren, and I realized that, really, any given one of us, at any given time on any given day, is a mere psychological molecule away from being that guy, repeatedly solving and unsolving a Rubik’s Cube on a bus or train.

tim rogers is the editor-in-chief of Action Button Dot Net (stay tuned this month for a big-time Action Button revival! lots of cool stuff coming; bookmark it asap, etc); he lives in tokyo; friend his band on myspace! mail him at 108 (at) actionbutton (dot net) if you have something to say or are a game developer and would like to arrange to send free games.

Illustration by HARVEYJAMES™. Buy prints of this illustration at attractmo.de/!

Jumping video: Performed by Jack Fields and Hannah Miller, Music by Ben Burbank.

Barnes & Noble Nook Review: Pretty Damn Good

It’s a relief to finally lay hands on the Nook. The dual-screen reader was just a prop at its unveiling, so I’m happy to report it works (pretty) well. It can’t kill Kindle yet, but it’s an alternative worth considering.

A Two-Horse Race

Do this now: Disregard all other ebook readers on the market besides Nook and Kindle. Unless you plan to get all of your books from back-alley torrents, or stick to self-published and out-of-copyright PDFs, you are going to need a reader with a good content-delivery system, one it connects to directly via wide-area network. And as long as you’re set on e-ink as your preferred means of digital reading—and it’s still the choice that’s easiest on the eyes and the battery—you’re going to need a reader that isn’t crapped up with gimmicks that supposedly compensate for the slow display.

But back to the Nook. The thing that makes it special is its two screens: one e-ink for reading books, one touch LCD for navigating and buying books on. More on that later, but basically, the setup works better than the single-screen setups of the competition. Sony messed up by putting a glare-inducing film over its screen to provide questionably beneficial touch controls; iRex avoided that, but made a “touch” interface that requires a stylus. Kindle plays it straight, developing a user interface that works well enough with physical buttons and e-ink (as long as you don’t use the “experimental” browser). Nook preserves the same pleasurable reading experience, but tucks in the capacitive-touch LCD screen for added control. In its 1.0 implementation, Nook is not as fast or as smooth as it should be, but already it’s showing that the second screen is not a gimmick.

Still, I need to get this out of the way: The second screen is not a sudden and miraculous cure for what ails ebook readers. It may prove to be, but B&N’s current implementation is conservative. As yet, there are too few occasions on the Nook when I notice an LCD feature and say “Kindle can’t do that.” In fact, the Kindle development team hasn’t been sitting on their asses—the latest firmware makes Kindle more sprightly than ever, with subtle but awesome user-interface improvements. But Barnes & Noble is itself promising round-the-clock enhancing, optimizing and debugging over the next few months, and I wouldn’t be surprised if there were three or four updates pushed to the Nook by March—the first possibly before Christmas.

Is It Good Enough Now?

Does that mean it’s not ready now? Let me put it this way: If you are lucky enough to have pre-ordered one in the first wave for the Dec. 7 shipping, or patient enough to wait until mid-January for the next wave, you are going to get a gadget worth being excited about.

And when Barnes & Noble gets its in-store offers and book-lending operation underway, Amazon will have to step up, or sit down.

Big Screen, Little Screen

The first thing I noticed about the LCD was that it was too bright. E-ink is all about eyeball comfort, and I hadn’t really thought about how the LCD underneath would compromise that. Because you don’t want your eyes to have to adjust every time you look down and back up again, it turns out you want that thing a lot dimmer than you would if it were a standalone device. The automatic brightness adjuster isn’t really up to the job, but I found that by dialing it all the way down when reading in bed, and bumping it up a tad, like to 20%, when reading in sunlight, my eyes could look up and down without any annoyance.

The second thing I noticed about the LCD was how nice its keyboard was. Unlike the Kindle, the Nook’s keyboard is only visible when you need it, and as an iPhone user, I found it natural and accurate. The capacitive touch is a real boon, especially on a screen so small.

Besides the keyboard and assorted lists of settings and files, the little screen can display a directional pad for moving around text when highlighting or looking up words in the dictionary; it can give you a search box and a place to type notations; it can pop up the music player without leaving the page; it flows book covers in your library and in the store. And when the screen goes dark, you can make horizontal swipe gestures to turn the pages of the e-ink screen above.

Between the LCD and the e-ink screens is a little upside-down U, actually an “N” from the Nook’s logo. This is covered with a capacitive-touch layer too, and serves as the “home” button, which wakes up the LCD with a tap, and takes you to the home screen with a double-tap. (There are physical buttons, too: Two page-turn buttons on each side, and a power button on the top, which work as billed and have no hidden features.)

I found the capacitive interface to be handy, but it also revealed the bugginess of the early software. Scrolling could be sticky, tapping the home button or the screen occasionally did nothing, and using the directional pad to navigate text made me yearn for the Kindle’s physical mini-joystick. The biggest disappointment was the page-turning swipe gesture. It failed to work half the time I tried it, and when it did work, I noticed that it responded slower than pressing the physical page-turn buttons.

I raised all of these issues with Barnes & Noble, and fortunately they are on top of this. Fixing bugs and speeding up the UI are the primary goals for the first software revision, and I have no doubt that they will achieve their goals in due time, probably before most people can even buy their Nooks.

While You Read

The Nook won’t beat the Kindle if all that LCD is for is facilitating navigation—the interface isn’t a bad one, but in its current implementation, it’s just an alternative, not an upgrade. The way B&N will beat Amazon is by making that damn screen do crazy stuff. It should start by targeting people who read while doing 12 other things.

Me, I require concentration to get through a page, and even music is a distraction. But for some people, it’s not hard to read a book while jamming to tunes, periodically glancing at news tickers, and responding to email or text messages. This is the promise of Nook’s second screen.

It already does this to some extent. The music player isn’t much yet—and has a few kinks B&N is still working out, like automatically and unpleasantly alphabetizing all your songs—but it’s a real applet, unlike the Kindle’s. On the Kindle, you type Alt-Space to get a song to play, and you click F to advance to the next song. That’s about it. With the Nook, you can load up songs and then scroll through them all, picking one you want to hear, or shuffling the tracks. There’s no physical volume button, but you can pull up a slider to adjust it, and another slider to jump around a song. And you can do all of this without leaving the page of your book.

But when you look up a word in the dictionary, the definition pops up on the e-ink screen, not the LCD. When you get an error message, again, the pop-up is on the e-ink. Barnes & Noble designated the e-ink as the place where all “reading” would be done, and that includes messages and sidebar content. I disagree with this, if only because the second screen seems tailor-made for alerts and other pop-up info.

The second screen is also a place for third-party developers to create fun and unexpected applets. Barnes & Noble loves to remind reviewers and customers alike that this baby is powered by Android: In other words, Nook may not look like a Motorola Droid, but developers could write apps for it just as easily.

Right now, the integrated Wi-Fi doesn’t feel like much of a bonus. (Though it offers certain benefits when abroad, it only works with Wi-Fi networks that don’t require a pop-up webpage. Free or not, those are few and far between.) But Wi-Fi means that developers could write internet apps without fearing a crackdown by AT&T, which provides the no-fee wireless connectivity. Paging Pandora!

Built on Bricks and Mortar

When it comes to shopping for books (and reading them), the Nook is the Kindle’s equal, and may soon leverage Barnes & Noble’s 800 physical locations to knock it out of first place. I was not able to test these features, because they are only starting to roll out this week, but when you take a Nook to a B&N, it will automatically jump on the store’s Wi-Fi network, and offer you free goodies—not just downloads but cookies from the café and other treats. Soon, there will be a way to skim an entire ebook while you’re in the store, too. You might say, “Big deal, if I’m in the store, I’ll just look at the real book.” But that’s just the point: How nice will it be to compare real and ebook editions before you buy? I asked B&N about bundles of real book and digital download, and they said discussions with publishers are underway.

Needless to say, one of the biggest advantages the Nook has over the Kindle is the chance for people to touch it before buying it. B&N will start showing off Nooks this week, and will add a few more ebook readers to its lineup, too. People who were afraid of taking the plunge will see the benefits and buy.

(My pet theory as to why Sony and others have sold any ebook readers at all in the US is that they appear in retail locations, unlike Kindle. Because if anything but the Nook were showcased side-by-side with the Kindle in a showroom, the decision to go with Amazon would be easy.)

Barnes & Noble has adopted a more natural attitude toward the books they sell, too, allowing you to access what you buy via ebook readers on Macs and PCs, iPhones and BlackBerrys (and in a few months, Android phones) as well as the Nook. Amazon has an iPhone app, but until now there has been no way to read your Kindle book purchases on your own computer; it is only at this late date (finally) rolling out PC and Mac Kindle clients, as well as a BlackBerry app.

Speaking of Kindle downloads, some noise has been made about Kindle books being cheaper than B&N ebooks, but Barnes & Noble says that they are in the process of correcting their prices, basically evening them all out so that they’re no higher than Amazon’s. In my own experience, I found David Foster Wallace’s Infinite Jest for $10 and George RR Martin’s A Game of Thrones for just $7. I was pretty pleased, though I was a tad annoyed that sales tax wasn’t included in the base price. Be warned there.

Lending is another non-Kindle function rolling out this week that I’ll be following up on. You select a book from your collection, lend it to someone listed in your Nook contacts, and they receive a message via email and on their Nook’s “Daily” screen, where periodicals, offers and other notices show up. When they accept, they can read the book for two weeks. During that time, you can’t read it, and when it reverts to you, they get a notice to buy. You can’t lend the same book to the same person twice.
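The lending rules reduce to a tiny bit of bookkeeping. Here’s a sketch of them as described, in Python (the names are my own invention, not B&N’s):

    # The Nook lending rules, as described above (hypothetical names).
    LOAN_DAYS = 14

    class Book:
        def __init__(self, title):
            self.title = title
            self.on_loan_to = None      # while set, you can't read it yourself
            self.past_borrowers = set()

        def lend(self, friend):
            if self.on_loan_to is not None:
                return "already lent out"
            if friend in self.past_borrowers:
                return "can't lend the same book to the same person twice"
            self.on_loan_to = friend
            self.past_borrowers.add(friend)
            return f"lent for {LOAN_DAYS} days"

    b = Book("A Game of Thrones")
    print(b.lend("alice"))  # lent for 14 days
    b.on_loan_to = None     # two weeks pass; the book reverts to you
    print(b.lend("alice"))  # can't lend the same book to the same person twice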

You can also lend books to someone who doesn’t have a Nook, to read on their computer or iPhone or BlackBerry, though the notification only comes via email. (Expect a radically redesigned iPhone client in January with lending and other features.) The new readers from iRex and Plastic Logic will include the Barnes & Noble store, and all your purchases will be accessible on those devices. However, at this point, those two devices won’t have the lending capability.

Work in Progress

If I haven’t said much about reading books on the Nook itself, it’s because it feels very much like a Kindle, right down to the page-turn buttons. The screen is the same—there’s no discernible difference whatsoever.

Aesthetically, the Nook is better looking, less busy, with a more proportionate bezel (and a wee bit more girth). I like the gray rubber backing as much as I loved it on the original Kindle—I still don’t know why Amazon abandoned that.

The only hardware bummer was the sound of the integrated speakers—Kindle beats Nook here (soundly?), but since both have a 3.5mm jack for headphones, it’s mostly a moot point.

The hardware is fully baked, but as I have mentioned, the software isn’t. Aside from the stickiness of the interface and the flaws in the music player, I found a definite bug in the highlights-and-notes system. I have already listed what feels like a hundred tiny gripes, but I still have more, like why isn’t there AAC playback? And why do I have to get to the home screen to see the clock? (Kindle now shows the time with a single tap of the Menu button, no matter where you are.) I do know why there’s no Audible DRM support—because even the devices that supposedly support Audible files don’t support the ones most people buy from iTunes, so it’s a confusing mess for customers. But I’d still expect the nation’s biggest bookstore chain to get serious about audiobooks.

The great thing is that the fixes will come fast and steady, and like the iPhone, this thing will grow. For those of you who took the plunge already, I don’t need to tell you to be careful with 1.0 software, because as early adopters you are prepared. And for those of you who missed out on the first batch, guess what? That just means you can wait for the key bugglies to get fixed before you pony up $259. And for those who went for the Kindle this season instead? Congratulations, you have a very nice ebook reader too—for exactly the same price.

In fact, if you have to pick one right now, stick with the Kindle. It’s a tough call, because I see a lot of potential in Nook that might not be in Kindle, but damn if the Kindle hasn’t grown to comfortably inhabit its e-ink skin. As long as you don’t expect apps and extras on a Kindle, it delivers the best ebook experience there is at this moment. And it just went international. But while the limitations of a Kindle are clear, the limitations of the Nook are hazier, presumably further out.

For now, no one will laugh at you for owning either, though you will now surely be ridiculed for spending $400 on a Sony with glare issues, or—pardon me, iRex—anything that requires a stylus. And since many third-party readers are going with the Barnes & Noble store, you’d be dumb to buy any of them instead of the Nook. That may change in the future (can you believe I made it this far without mentioning Apple Tablet?) but for now, in the ebook department, there’s just these two big dogs surrounded by a bunch of poodles.

In Brief

- Great all-around ebook reader
- Second screen serves useful purpose
- Expansion and evolution possibilities of this very device are great, especially with touchscreen and Android OS
- Lending and in-store Barnes & Noble action will be huge
- A little thicker than Kindle, but as a tradeoff, it has a slightly smaller footprint
- Wi-Fi doesn’t seem to matter now—hopefully it will prove to be an advantage later
- LCD and other features mean less battery life than Kindle, but still adequate, “measured in days”
- Many of the Kindle-killer functions, like lending and in-store perks, weren’t tested, as they are rolling out this week
- Current software is buggy and sluggish in spots; hopefully fixes and optimization will come soon
- Second-screen possibilities are great, but current implementation is cautious and conservative

Update 1: Unboxing pics, which I wanted to include because the packaging is just so classy.

Update 2: A word on PDF viewing, which was brought up in comments. Although PDFs are supported natively and use Adobe’s mobile PDF system, I can’t say I was terribly impressed. Page layout is easily mussed up, and instead of zooming, your only option is to change the font size, which re-flows the text and adjusts the picture size. In some ways this is better than on the Kindle, which appears to only offer a screen rotation option. (Tap the font size button and you’ll see what I mean.) In all truth, PDFs containing anything but text look pretty grim on either device, but for text-only ones, Nook seems to be the wiser pick.

Update 3: Re: discussions of who has the better catalog, B&N’s is being overhauled this week, so expect to see a lot of new pricing and perhaps some newly available titles. We’ll do some spot checking later on, but in the meantime, don’t be surprised if you see a lot of sudden changes to the lineup.

Update 4: Some of you have asked me about the ePub format, which the Nook does natively support. Third-party non-DRM ePubs can be downloaded from the internet, and side-loaded into the documents folder inside the Nook. When you look at your Documents screen, you’ll see them listed with the appropriate metadata. When on screen, they are as adjustable as B&N-purchased ebooks, and generally look just as nice.

A Romance Flowchart: When Is It Inappropriate to Use Your iPhone?

Does your significant other always yell at you for busting out your smartphone too much when you’re together? Follow this flowchart to determine if now really is a good time to fire that brick up:


Based in New York City, Shane Snow is a graduate student in Digital Media at Columbia University and founder of Scordit.com. He’s fascinated with all things geeky, particularly social media and shiny gadgets he’ll never afford.

How To Clean Your Filthy Gadgets

Hey, you, your gadgets are disgusting. And wiping them with your greasy shirt sleeve isn’t making things any better. Here’s how to clean your gadgets, the right way.

HDTVs and Monitors


This is the number one cleaning question I get from friends and family, and it’s one of the simplest to answer. HDTVs and monitors are the worst kind of dirt magnets, begging to be touched—by your boss who wants to show you something on your computer screen, by your greasy little cousin who’s getting restless during his umpteenth viewing of Finding Nemo, by your drunk old buddy from college who somehow still thinks it’s funny to grope actresses onscreen on his way to the bathroom—and sitting in total vulnerability: in the case of your LCD screen, within sneezing range; in the case of your flatscreen TV, in your dusty living room.

The tempting, nearly instinctual response to an oily, dusty, mucousy panel of glass or glasslike material is to reach under the sink, grab that bottle of Windex and the paper towels and spray that stuff down. Do not do this. There are some TVs and displays for which Windex will do the job—CRT televisions, for example, and some glass-paneled screens—and if you’ve been using Windex in the past without incident, don’t worry too much. But also, stop.

Spraying any kind of cleaner onto a screen isn’t a great idea. These panels aren’t weatherproof, so if your sprayed solvent runs into the crack between the panel surface and the display bezel, there will be tragedy. Furthermore, Windex is a glass cleaner: a lot of your screens’ outer layers aren’t glass, or have some kind of delicate coating. Ammonia-based cleaners, for example, can microscopically abrade some plastic surfaces, causing your screen to become slightly foggy over time. And for your cleaning tool, paper towels aren’t terrible, but they’re also somewhat risky—screen coatings can be extremely delicate, and paper towels can sometimes be a little rough. Plus, they’re prone to leaving streaks, no matter what liquid you’re using.

So, what’s the trick? Water. Water and a soft, lint-free (ideally microfiber, which is better at picking up greasy smudges) towel. To clean your panel, dampen your cloth and wring it out as best you can—you don’t want any drippage here—then run it, folded, gently across your screen, repeating until the screen has been thoroughly covered and any sticky residue has been removed. (For larger displays, perform cleaning in sections, so as not to let the water dry or collect and run.) Now do the same with a dry cloth, applying slightly more pressure, to lift away the dirt and moisture. Repeat if there are still grease deposits. That’s it! A few bucks for some soft cloths, a little bit of water, and your screen is as good as new.

And those specialty cleaning kits? They do work, for the most part, but they’re not necessary.

TV and Game Controllers


By the time your TV is in need of a deep cleaning, your remote—or your videogame controller—is probably in even worse shape. The kind of dirt a remote gathers is an order of magnitude more disgusting (and more human) than what’s on your panel, so you’re not just cleaning, you’re disinfecting. Interestingly enough, the cleaning method isn’t too far from the one above: A damp cloth, with some water. This time, though, you’ll want to throw a little isopropyl alcohol in the mix—a 40/60 booze and water split works—to help disinfect the buttons, and remove the oily brown buildups you can get between buttons. Again, soft cloth is better than paper towels; this time because it’s a bit better at reaching between buttons than stiff, thin paper. Use wooden toothpicks for reaching into cracks, but nothing harder.

These are unique in that they’re shared gadgets. And shared gadgets are, almost without fail, fantastic vectors for germs. So what I’m saying is, clean them or die.

Cameras


Body: Cleaning your camera body is like cleaning almost any other gadget—a very slightly damp towel will do the trick. (Though be gentle around openings, since point-and-shoot camera guts lurk awfully close to the surface, and any intruding water can wreak serious havoc.)

Lenses: Lenses are dirt magnets, and if they’re dirty, you simply don’t get good pictures. They’re also delicate and expensive, so you can’t just reach in there with a paper towel and be done with it. Lens cleaning kits are available at every camera store, and include a light cleaning solution and microfiber cloth. These are safe bets, but don’t spend more than $15 on them. Lens pens also work, but they’re a riskier proposition—there’s such a limited cleaning surface on those things, and I always get the sense that after a few uses, the cleaning element has been sort of tainted.

Again, though, stay safe with this one: Buy a microfiber cloth, and simply rub the lens with a circular motion until all visible smudges are gone. Never apply too much pressure—any dust or dirt on the lens can get picked up in your cloth and scratch your lens—and fold/refold your cloth to ensure you’re using a fresh surface at least once during a lens cleaning.

Two small notes on lenses: Don’t forget to clean the rear glass on any DSLR lens. There’s a lot less surface area there, and since it spends most of its time inside the camera or a locking lens cover it probably won’t be as dirty, so this shouldn’t take much effort. And if you can, treat each of your DSLR lenses to a UV filter. While this is called a filter, it only blocks light that humans can’t naturally see, meaning that in most photos, the effect will be generally unnoticeable. (More on that here) Point is, you don’t have much to lose by buying one of the dirt-cheap filters, and it will provide a layer of transparent protection from dirt and scratches over your lenses at all times. And since they’re flat and thin, they’re easier to clean than convex lenses.

UPDATE: I’ve gotten a couple of emails from photo pros about this, and I think it bears mentioning: Before rubbing your lenses, it’s good practice to blast them with a little air. Air pumps (like the one mentioned in the following subsection) and canned air will do the job, as will, in a bind, your lungs. The thinking here is that you should remove any potentially abrasive particles from the lens before rubbing it, so as not to drag them around, causing permanent damage. —Thanks, Jody and Ned!

Sensors: Point-and-shoot and bridge camera users don’t have to worry about this, but DSLR users, who give dirt a chance to enter their camera bodies every time they change a lens, may need to clean a sensor one day. It’s not as scary as it sounds!

First of all, you’ll never have to actually clean a sensor, since DSLR sensors all have some manner of filter, either IR or UV, built in. But still, the surface is delicate, so you’ll want to be cautious. Most cameras include some kind of sensor-cleaning function in their software; since most sensor taint consists of a stray speck of dust or two, a quick, severe vibration will usually do the trick.

If that doesn’t work, and your photos are showing persistent, faded, unmoving spots in every photo, it’s time for phase II: air. For this, I defer to Ken Rockwell:

After 17,000 shots I finally got a speck on my D70. Remember I also change lenses a lot. The Shop Vac wasn’t enough. This time I used an ear syringe (blower bulb) from the drug store which you can get here. I put the D70 on BULB and pounded the bulb with my fist to create a jarring blast of air. That worked.

Rockwell advises using an ear syringe; I’d say go with a purpose-designed lens blower, since they’re still only about $10, and you’ll get better results without running the risk of pulverizing your DSLR’s guts while trying to muscle enough airflow through a hard rubber earwax remover.

Beyond built-in sensor cleaning and a few blasts of air, there are plenty more methods for cleaning a sensor, but they’re all risky to varying degrees. Unless you’re supremely confident (and careful) it may be best to leave this one to the guys at your local camera shop, assuming you still have one. A ruined sensor, in most cases, is a ruined camera, so tread carefully.

Laptops


Screen grime is the most common cleaning problem with laptops, and with the display cleaning section of this guide, we’ve got that covered. That said, laptops collect filth in a variety of other ways, and they can get real microbial, real fast.

To clean a typical keyboard—that is, a non-chiclet design—you’ve got three steps to try. First, turn off the laptop, dampen a cloth with the aforementioned 40/60 alcohol/water mixture, and run it across the keys. Fold it a few times and use the edge to reach between the keys. You can use this same cloth to clean the rest of your laptop as well, excluding the screen, but including the touchpad. If that doesn’t do the trick, and you can spot some dust or hair in between keys, it’s time for some canned air. You can pick this stuff up at most big box electronics stores or online for $10 or less, and using it is as simple as tilting your laptop sideways, and blowing air in the cracks.

If this doesn’t work, it’s time to start popping off keys. Since you’re disassembling a keyboard that really isn’t meant to be taken apart, there’s a definite inherent risk here, but the results are practically guaranteed to be good. Here’s an extremely thorough guide, if you’re game for it. To give you an idea of what this entails, there’s a point in this tutorial at which all your laptop’s keys are swirling in a cereal bowl full of soapy water. It’s gruesome.

Another problem area for laptops is fans, air intake vents and heatsinks. These all stand in the pathway between outside air and your processor, which needs said air to keep cool. Any blockage can cause your laptop to run hot, your fans to run high, and consequently, your battery to run low. Disassembly instructions will vary from laptop to laptop, and typically will involve removing your entire keyboard. Once you’ve done this, though, removing the dust is a matter of blasting with air, scraping with a clean toothbrush or even just wiping with your finger. It’s not about total cleanliness here, it’s about clearing your computer’s windpipe.

Another helpful trick: Those white, last-gen MacBooks have a disgusting tendency to accumulate a beige (then brown, then black) residue where users’ palms touch the laptop. This discoloration is more of a stain than a buildup, so you can’t fix it with water or alcohol. The fix? Acetone. Seriously, the best way to wipe that crap off is with nail polish remover.

Desktops


We’ve covered how to clean most of the external pieces of a desktop already: any plastic surface gets a moist wipe-down; keyboards get compressed air. That’s it! Your desktop is sparkling clean! This feels so good! Now slide off your desktop’s side panel, and weep. If you’ve had your desktop for more than a few months, and particularly if you keep it in a carpeted room, it’s probably an absolute horror show.

The first thing to do is, you guessed it, pull out that microfiber cloth. Wipe down every surface that’s finished, which is to say covered in rubber (wires), painted (the inside of the case, the plastic shell of an internal optical drive, or the decorated exterior of a video card) or inert (the blades of a fan, or the exterior of your heatsink). You can slightly dampen the cloth to help pick up dust from the corners of the case, but you probably don’t need to, and it’s best to keep this a dry operation, beginning to end. Next, whip out that can-o-air, and have at it. Pay special attention to dust buildup areas, like the heatsinks on your processor and video card, and the fan inside your power supply. This will likely cause some dust to resettle elsewhere, so you may need to repeat your wipedown/blow process once more. Again—cleaning the inside of your tower is less about maintaining a spotless appearance than it is about making sure dirt, dust and hair buildup won’t negatively affect your computer’s performance, so don’t get too anal about it, cosmetically speaking.


Cellphones and Media Players


Cellphones, iPods and other media players are designed to be pocketed, so you can be a little rough on them during the cleaning process. A very slightly damp cloth or paper towel will remove whatever fingerprint or residue your shirt or jeans won’t.

As much as these gadgets are intended to live in pockets, they have an irritatingly high number of places for dust to hide itself. Cellphones have keypads, or, increasingly, sets of buttons at the base of a touchscreen or on the side of the handset, all of which give dirt a place to accumulate. The grilles over cellphones’ mics and speakers are another refuge for sludge, and they’re totally immune to simple wipedowns. For this, you’ve got to go one step further. Luckily, you’ve probably got all the supplies you need in your house already.

Wooden toothpicks and old toothbrushes help reach into cracks and crevices, like those around buttons or running around the perimeter of some display panels. (Samsung and HTC are particularly guilty of leaving spaces in places like that.)

Sometimes, as in the case of the tiny little mic/speaker grilles on some phones, you don’t want to push dirt in, but rather pull it out. For those situations, lay a strip of scotch tape over the afflicted area, run your finger over it a few times, and pull it off. If that doesn’t work, upgrade to duct tape—though you’ll want to be a bit more gentle with that, since applying too much pressure can leave adhesive on your device, which is a pain to wipe off.

Your Tips and Tricks

If you have more cleaning tips and tools to share, please drop some links in the comments; your feedback is hugely important to our Saturday How To guides.

And if you have any topics you’d like to see covered here, please let me know. Happy housekeeping, folks!

10 Of The Best Spaces For Kicking Back and Relaxing

We’ve been focusing on gadget gift guides lately, so I thought I would mix up the lists a bit for TGIF and focus on architecture. Here are some of the best places to just kick back and relax.

This stunning home is embedded in a hill in Vals, Switzerland, but it still has some amazing views. Seriously, you could just grab a chair, a beer and look at it all day. Hit the link to see what I mean. [Iwan Baan via Link]
If there ever was a house that lived up to the name “Universe,” this space in Roca Blanca, Mexico would do it. The design is based on the Jantar Mantar Astronomical Observatory, which was built in Jaipur, in 1724. The home has 360 degree open air views of the ocean with swimming pools and hammocks. In short, everything you could ever want in a place to relax. [Link]
If you had a treehouse when you were a kid, you probably considered it your own private sanctuary. Imagine what it would be like to have a treehouse that is 11 stories tall, with dozens of rooms for you to run away and hide in. [Link]
Spending a few nights in a hotel is a great way to escape from our miserable lives, but the Winvian in Connecticut is more exciting than most. It features themed rooms that would be so much fun you would have little reason to go out during the day. There are golf rooms with putting greens, a treehouse cottage, a music room with playable architecture and even a helicopter room with an actual Coast Guard chopper inside. [Winvian]
As much as I can’t stand the Cowboys, I have to admit that their ridiculously over-the-top stadium is probably the best place to watch a game on the face of the Earth. Super field-level luxury boxes, a mind-blowing assortment of concessions and an HDTV that measures 159 feet across. If you were Joey Fatone’s brother, you would have even had the privilege of playing Gears of War 2 on that gigantic screen. Of course, that would mean you would actually have to endure the shame of being related to Joey Fatone.
There should be a law against spending $2 billion on a private home, but I’m sure you could have a lot of fun hanging out in Mukesh Ambani’s pad. Needless to say, this 22-story monstrosity has every kind of entertainment and relaxation facility you could imagine…and then some. [Link]
Prison may not be the most desirable place to be, unless you happen to be staying at the Leoben Justice Centre in Austria. Seriously, take a look at the pics in the following link. It looks more like a resort than a correctional facility. [Damn Cool Pics and Link]
Electronic House’s Home of the Year for 2009 is short on taste, but high on gadgets. If you were hanging out here, you would be treated to beautiful views, the latest in home automation, racing simulators and an absurd amount of home theater equipment. [Electronic House via Link]
This list contains some extreme homes, but Russian billionaire Roman Abramovich’s yacht takes excess to the high seas with a price tag rumored to be in the billions. While on board, you would be treated to every kind of luxury imaginable—and you wouldn’t have to worry about intrusion because the yacht is fitted with a missile defense system and anti-paparazzi laser shield.
What if one of the best places to kick back and relax was your office at work? Google has taken that approach with the design of their Swiss headquarters. It features cozy seating, pool tables, foosball and a top notch lounge. [Link]

Netbooks: What You Need to Know About the Next 6 Months

A bunch of great netbook upgrades are on the way—next-gen Intel processors in January; smooth HD video playback—but to spare you the brain hemorrhage of keeping track, we’ve laid it all out. Here’s what you need to know.

Netbooks with Intel’s next-gen Pineview Atom N450 CPUs arrive in January, and the faster N470 chip may hit in March. There are also more netbooks with Ion graphics coming down the pipe, including the first Ion-based Eee PC. AMD is still kicking around the netbook space, too.

Little netbook keyboards will still make you feel like a basketball player driving a Mini Cooper, but the damn things are just so cute and cheap we can’t stay away. (It’s a love/hate relationship.) And though HD video is most definitely a reality for netbooks, not all the new models will give you that smooth HD Hulu loving you crave.

Next-Gen Intel Chips

As our breakdown of Intel’s lineup explains, “Pineview” Atom processors (like the single-core N450 or the eventual dual-core 510) integrate the CPU, GPU, and memory controller on the same chip. The benefits: Better graphics, and according to MSI, at least 20 percent better power consumption.

MSI previously gave us the scoop that Pine Trail-M netbooks, using Pineview processors, are slated for a big CES debut. Their upcoming 10-inch convertible touchscreen U150 with Windows 7 will use one. Though Intel still hasn’t set an official date (publicly at least), DigiTimes is reporting today that the launch date will be January 10. That means Asus, Acer, Lenovo and MSI, which had planned to launch Atom N450-based netbooks in December, are all now expected to make their new models available from January 11 onwards. As mentioned, we expect to preview them at CES the week before.

DigiTimes goes on to say that the follow-up N470 chip (likely 1.83GHz) is expected to land in March. That syncs with apparent leaks of the Pine Trail-M roadmap that have floated around. And even though netbook makers already ship machines with more than 1GB of RAM, word is that Intel will actually encourage 2GB of memory for the N470, an upgrade over previous Microsoft/Intel limitations imposed to prevent cannibalization of ultra-portable notebooks.

So will N450-based netbooks handle HD video? According to Engadget, not without an extra chip like the Broadcom Crystal HD video accelerator, which should add about $30 to the overall price. Apparently, native HD video is still a little down Intel’s roadmap path.

So What About Nvidia Ion Netbooks?

I’ll be very interested to see just how close Pine Trail-M netbooks get to Ion performance, and for those with an HD video chip, how well they handle high-definition video, too. The integrated nature of Pine Trail-M could give it an advantage in price. But will the price/performance ratio be enough?

Nvidia also has a little ace in the hole called Flash video acceleration. They recently demonstrated an Ion-powered HP Mini 311 playing stutter-free YouTube HD video on an external monitor. Watch the demo below. The final version of Flash 10.1 will make this an everyday occurrence sometime in the middle of next year, and you can try the beta now.

News also dropped today that Asus’ 12.1-inch Eee PC 1201N, its first Eee PC with Ion graphics, is finally up for pre-order over at Amazon for $500. It’ll be available in January, joining existing Ion-based netbooks like the HP Mini 311 (11.6-inch), the Lenovo IdeaPad S12 (12.1-inch) and the Samsung N510 (11.6-inch). But here’s the thing: they all use existing Diamondville-class Atom processors.

The good news is that Intel has actually pointed out that, despite having integrated graphics, Pineview processors are compatible with Ion. We’ve yet to see a netbook with both, but CES is just around the corner. Nvidia has also reportedly said that its Ion 2 (yep, gen 2) chipset for Atom netbooks will arrive by the end of the year. I’m betting we’ll see some Ion 2-based netbooks at CES in January, but my guess is we won’t be able to buy one until March or April at the earliest.

A netbook with Ion graphics and an Intel Pineview processor like the N450 sounds pretty sweet, right? Hopefully that’s what we have to look forward to.

[Video: Next-Gen Flash Runs 720p Movie Smoothly on a Netbook, Demo]

Distractions, Distractions

Real quick: I’m not ignoring AMD. AMD left it too late to join the netbook fray, so its upcoming Congo platform will instead mostly compete with Intel’s ultra-low voltage processors. We’re talking about notebooks with 12- to 13-inch displays. I say mostly because Asus is readying an AMD Congo-based version of that 12.1-inch Eee PC I mentioned above. The unit’s ATI Radeon HD3200 graphics will handle 1080p video.

It’s going to be one hell of an interesting Consumer Electronics Show. ARM and VIA are still trying to get inside netbook trousers: Asus has an Android-based “Smartbook” planned for early next year, and Nvidia is pushing its competing ARM-based Tegra chip. Asus also wants to be first with a Chrome OS netbook when Google completes the OS in the second half of 2010. Finally, there are a ton of interesting eReaders and touchscreen tablets on the horizon…and don’t even get me started on the Apple Tablet.

A number of these devices might replace what you thought would be your next netbook. Either way, whatever we see, you’ll hear about them here in almost pornographic detail. Personally, I expect the tech behind my next $500 netbook—still no small investment—to be something we first see under the bright lights of Vegas. Hopefully netbooks will be better-looking by then, too.

Secret CIA Manual Shows Magic Tricks Used By Spies

During the Cold War, the CIA hired a master magician to teach them deceptive maneuvers. Here are a handful of tricks, recovered from a super secret manual the government thought it had destroyed over 30 years ago.

Our spooky spy friends Bob Wallace and Keith Melton—the guys behind the amazing spy-gadget bible Spycraft—uncovered one of the supposedly incinerated “magic” journals. Their new book, The Official CIA Manual of Trickery and Deception, is in part a verbatim reproduction of that manual, but, thrillingly, it also shares the (declassified) history of CIA trickery from the beginning, including the formation of the double top-secret and sometimes sinister MKULTRA division. MKULTRA was supposed to have been erased from history in 1973, but—in true spy fashion—the few shreds of paperwork that remained ended up telling its whole story.

The discovered manual was penned by John Mulholland, the David Copperfield and/or Blaine of his day. Though Mulholland knew more than anybody since Houdini about pulling fast ones, his challenge was to teach people who were not necessarily pros to pull off tricks in front of an audience that didn’t know it was an audience. Perform a lousy trick, and you don’t get booed—you get beheaded.

Here Wallace and Melton have kindly shared some newly created illustrations of tricks from the book, CIA sleights you could employ to escape from a water-bottling plant, poison a friend, send messages with your shoelaces, steal single sheets of paper, look dumb, and of course, kill Castro. Not all of the tricks below come from Mulholland’s original manual, but they were all devised at Langley, and are all lovingly described in the book—a $16 thrill of a read for anyone with even a passing interest in spyology:

Thanks to Bob Wallace and Keith Melton for sharing their book’s illustrations with us. If you’d like to know more about the book, check out its sales page on Amazon (there’s a Kindle version too), or visit the authors’ new website, CIA Magic.

Gizmodo’s Essential iPhone Apps: November ’09 Edition

Each month, the best new iPhone apps—and some older ones—are considered for admission into Gizmodo’s Essential iPhone Apps Directory. Who will join? Who will live? Who will die?

For the full directory of Gizmodo’s Essential iPhone Apps, click here.

The Month’s Best

As gathered from our weekly roundups.

If you hate hate hate galleries, click here for a single post.

Essential App Directory Inductees

This month was BOUNTIFUL, as we welcome seven (7!) new apps to the fold. Here are your new inductees:

I Am T-Pain: This app was fun when it first came out, but now that you can sing over your iPod library, it’s priceless.

Waze: Because it’s getting to be good enough to depend on (in a few areas), because it’s free, and because their video-gamey plan to make the app better is totally charming.

Voices: Because when your iPhone isn’t acting as a tool, it’s a toy. And everyone loves some good voice modulation.

Snapture: Because full 3GS support, which Snapture recently added, was the only thing holding this app back from replacing the iPhone’s camera completely.

ShopSavvy: Because any iPhone deserves a good, free barcode-scanning app.

Chorus: Because finding new apps is hard, y’all.

Jailbreak: Kirikae: Because without a solid task switcher like Kirikae, fantastic jailbreak app Backgrounder is kind of useless. With it, your iPhone is a full-fledged multitasking smartphone, finally. (Don’t get defensive!)

And Farewell To…

Our current directory members are all safe this time around. But next month, expect hell. (Maybe!)

What counts as an essential iPhone app changes all the time, and so should our guide: If we’ve missed anything huge, or you’ve got a much better suggestion for a particular type of app, let us know, or say so in the comments. We’ll be updating this thing pretty frequently, and a million Gizmodo readers can do a better job at sorting through the app mess than a single Gizmodo editor. Enjoy!

Nokia N900 glitch leads to useful portrait mode, caught on video

File this under “it’s not a bug, it’s a feature” if true. According to Guyver at the maemo.org forums, some glitch in the OS caused his Nokia N900 to switch into portrait mode for everything, not just dialer and photo apps as previously allowed. We’d love to eliminate the need for two hands to run our favorite chunks of mobile software, but so far we haven’t been able to recreate his trick. Try it at home if you’d like by tilting the device to launch the phone app, then sliding up the screen and closing the app. Perhaps the gang at Espoo can turn this into a legit update — if they’re awesome people, of course. Video after the break.
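
For the curious, here’s roughly what “legit” portrait support involves on Maemo 5. Below is a minimal GTK+/X11 sketch in C, assuming the community-documented _HILDON_PORTRAIT_MODE_SUPPORT and _HILDON_PORTRAIT_MODE_REQUEST window properties; treat the property names and approach as assumptions on our part, not official Nokia guidance.

#include <gtk/gtk.h>
#include <gdk/gdkx.h>
#include <X11/Xatom.h>

/* Set a Hildon portrait-mode hint (1 = on) on the app's top-level X window.
 * These properties are assumptions based on community documentation. */
static void set_portrait_hint(GtkWidget *window, const char *prop_name)
{
    Display *dpy = GDK_DISPLAY_XDISPLAY(gtk_widget_get_display(window));
    Atom atom = XInternAtom(dpy, prop_name, False);
    unsigned long value = 1;

    XChangeProperty(dpy, GDK_WINDOW_XID(gtk_widget_get_window(window)),
                    atom, XA_CARDINAL, 32, PropModeReplace,
                    (unsigned char *)&value, 1);
}

int main(int argc, char **argv)
{
    gtk_init(&argc, &argv);

    GtkWidget *win = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    gtk_widget_realize(win); /* the X window must exist before setting hints */

    /* Advertise portrait support, then ask the window manager to rotate now. */
    set_portrait_hint(win, "_HILDON_PORTRAIT_MODE_SUPPORT");
    set_portrait_hint(win, "_HILDON_PORTRAIT_MODE_REQUEST");

    gtk_widget_show_all(win);
    gtk_main();
    return 0;
}

If that’s all it takes per app, a system-wide glitch that flips everything into portrait, like the one Guyver hit, seems entirely plausible.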

How a HS Dropout Became the Youngest Boss at Apple

James Bach, a legend in the software-testing field, just published Secrets of a Buccaneer-Scholar, the tale of how he dropped out of school, became a self-taught games programmer, and scored a sweet gig at Apple—all before turning 21.

The book’s main purpose, as illustrated by the excerpt James has kindly permitted us to publish, is to show that education is not about pieces of paper on the walls, but about the knowledge you cram inside your own head. His book is a discussion of his frame of mind as he embarked on a life of self-education, as he became what he calls a “buccaneer-scholar.” Here, in a riveting passage, he manages to swing a gig at the hottest company in the Valley, circa 1987:

In May of 1987, nearing my twenty-first birthday, I was down to my last hundred dollars, and the only marketable skill I had was for [programming video games,] something I could no longer force myself to do.

Then a recruiter called. She’d found a resume I had sent months before. Would I like a job in Silicon Valley?

“I thought the industry had taken a downturn. Aren’t there programmers starving in the streets of Sunnyvale?”

No, actually there’s lots of work available. Would I like a job at Apple Computer, for instance?

“Sounds wonderful. What kind of work is it?” All feelings of burn-out were instantly replaced by a blazing electric neon YES in my heart.

Apple Computer needs me. Needs me. I am being called to service.

The job was managing a team of testers.

“What do you mean, testers?” I asked the telephone.

The recruiter explained that testers examine a product someone else has created and find problems in it.

“They pay people to do that?” Interesting. I’d always tested my own work. Then again, I’d never worked on a team with more than two other people. In terms of the software industry, I was a crazy-eyed mountain man.

On the way to Apple I bought a copy of The One-Minute Manager. It looked thin enough for rapid learning. I skimmed it as well as I could in the hour before the interview.

Walking into Apple may have been the first time I ever set foot inside an office building. First time seeing cubicles and conference rooms. First time seeing a carnival-sized cart of free hot popcorn parked in a hallway. Imagine working near the smell of melted butter! (Your eyes sting and you come to hate the smell of butter, it turns out.)

I’d been worried about my clothes. I didn’t own a suit. But looking around, I fit right in. Everyone was dressed like me.

Two guys in a conference room asked me questions. I answered them and showed the portfolio of games I’d worked on. When they asked me about management, I repeated some of what I’d read in The One-Minute Manager. When they asked me about testing, I said what every programmer says: “I’ve tested my own stuff.” It’s not a good answer, but I didn’t know that. Neither did they. No one in that room knew much about software testing. There are no university degrees in it. It’s one of many new crafts that have emerged along with modern technology.

After the interview, I went outside and walked twice around the building. This is where I belong, I thought. I will rock this place. Please please please hire me.

A couple of days later, they did.

***

I was a nervous man on my first day at Apple. At twenty, I was the youngest manager in the building. In all the gatherings and reorganizations we went through during the four years I worked there, I never met a younger manager. I was younger than many of the interns.

Also, I was a contractor. That meant Apple could fire me without notice or severance. I had little money and no credit.

The worst thing was that nearly everyone around me had a university degree. A good many had graduate degrees.

I had to catch up to the college kids. I brooded on it every day. I came to work with desperate fire in my soul to learn. Learn everything. Learn it now.

As a manager, I supervised five testers, but no one closely supervised me. My boss, Chris, was in meetings most of the time. He needed me to get on with the work as best I could. This meant I could sneak away and read. I spent part of each afternoon in a donut shop across the street from my building, studying without interruption.

Chris was supportive. “You should not just read about software,” he suggested. “Try to find solutions to our problems in other disciplines.” Maybe Chris was more supportive than he ever knew. I treated that one casual suggestion as permission to spend work time to learn anything. I browsed many of the two hundred or so academic journals that came through the library. Even crazy stuff. I read “Anthropometry of Algerian Women,” and “Optimum Handle Height for a Push-Pull Type Manually-Operated Dryland Weeder.”

Of course I read every testing book I could find. I discovered software testing standards and studied those, too. I studied most evenings and weekends.

At first I thought I would learn a lot from the other testers. There were more than four hundred of them in my building. But talking to them revealed a startling truth: nobody cared.

The pattern I experienced at Apple would be confirmed almost everywhere I traveled in the computer industry: most people have put themselves on intellectual autopilot. Most don’t study on their own initiative, but only when they are forced to do so. Even when they study, they choose to study the obvious and conventional subjects. This has the effect of making them more alike instead of more unique. It’s an educational herd mentality.

I talked to coworkers who wanted to further their education, but they typically spoke in terms of getting a new piece of paper, such as a bachelor’s degree, a master’s, or a PhD. For them, education was about the doors they believed would open because of how they were labeled by institutions, not about making themselves truly better as thinkers. Buccaneers, on the other hand, don’t take labels too seriously. A buccaneer studies in the hope of unlocking Great Secrets! Wonder! Mastery! A buccaneer lives for the excitement of deciphering the mysteries of human experience. A buccaneer wants status, too, but only if that status is justly earned and sustained through the quality of his work.

The $13 book is a wonderful read, especially for people who take education into their own hands—or would like to. There are so many brilliant people for whom the structure of school simply doesn’t work, and it takes an eloquent geek like James to prove to people in similar situations that this isn’t their fault, and that they can do something about it. You can check out more on James’ website, and you can follow him on Twitter at @jamesmarcusbach. Thanks again, James—and yo ho ho, matey!