Relax, Apple’s new dock connector is A Good Thing

In many ways, Apple is an odd goose. The company has a habit of clinging to some ideas long after rivals have jumped ship, while rushing to shed others well ahead of the curve. Ditching floppy drives in favor of CDs was driven by Apple; more recently, the company switched to digital distribution and dumped optical drives in the process on machines like the MacBook Air and the new MacBook Pro with Retina Display. And then there’s the Dock Connector.

[Image credit: Fred]

The 30-pin port has lasted longer than many ideas, cropping up early in the iPod life-cycle and then hanging on through all generations of iPhone, iPod touch and iPad. As a result, it’s also ridiculously outdated: huge in comparison to the tiny microUSB ports on just about all other smartphones, media players and tablets.

Would Apple have kept the current Dock Connector for so long if it didn’t have a thriving third-party accessory ecosystem? That seems unlikely, and the company knew it was onto a gravy train of sorts with the plethora of aftermarket kit that soon flourished for the iPod, iPhone and iPad. Contrast the sheer number of cases, speaker-docks, cables, screen protectors, ridiculous and pointless keychain-charms and other ephemera for the iOS line-up with what third-party firms bothered creating for Zune, Creative’s PMPs or any of the other earnest but soon ejected alternatives to Apple’s media player.

A new port isn’t turnover for the sake of it, mind. Apple seldom dumps something until there’s a replacement lined up, and the Dock Connector will be no different. This time around, wireless has caught up to where wires once led, and so we have AirPlay and AirPlay for Video, along with Bluetooth for those less willing to pay licensing fees for the AirPlay radio tech. The new port might debut on the iPhone 5, but it’ll make a bigger difference to Apple’s iPod nano and iPod touch, which will be able to slim down even further thanks to the smaller connector assembly.

“The adapter will be a carbuncle in your Jony Ive-designed life”

It’ll also be a spur to upgrade. Oh sure, there’ll be an adapter to go from shiny new port to old-fashioned connection, but it’ll be a carbuncle in your otherwise sleek, Jony Ive-designed life. Accessories that require the iOS device to dock more completely will be out of luck too, as the adapter will presumably add some bulk overall. It’s a sop to existing owners, then, but the expectation is that they’ll upgrade sooner rather than later.

And, while it’s galling to be led by the credit card, that upgrade makes sense. The smaller port means Apple’s famed aesthetic imagination can run wild again, no longer limited by having to accommodate the 21mm-long connector. iOS device owners are arguably already used to replacing things like cases when they change iPod or iPhone, too; such is the price of staying on-trend and current. The more frustrating kit to replace will be expensive docking stations, though coincidentally those are the items likely to work most smoothly with the dock adapter.

There are even whispers of microUSB compatibility for recharging at least, something which in itself could make hundreds of thousands of people’s lives more straightforward. Yes, you might need the “proper” cable to sync or hook up with more complex accessories than a charger, but cloud- and wireless-based alternatives to that cord have already proliferated.

So, mourn your current iPod speaker dock if you must, but make no mistake: the Dock Connector as we know it has had its day, and its date with the chopping block was long overdue.


Relax, Apple’s new dock connector is A Good Thing is written by Chris Davies & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.


Is Microsoft’s Windows $0.99 app omission madness, money or moral?

When you’re trying to kick-start your tablet platform, apps are everything, so why has Microsoft decided to opt out of the most common price point in recent years: the $0.99 app? Confirmation this weekend that Windows 8 and Windows RT users would be offered paid apps as well as free (unsurprising) and that developers would be able to price their wares from $1.49 to $999.99 (surprising) is a distinct departure from Apple and Google’s strategy. According to the stereotypes, iOS users love paying for apps while Android users only download free ones (or steal them until the apps are made free out of exasperation), but what do Windows tablet owners do?

Microsoft makes no mention of the thinking behind the price tiers, though there are a couple of assumptions we could make. The first is purely motivated by greed: Microsoft gets 30 percent of each paid app sale (dropping to 20 percent should the app make more than $25,000). If a developer wants to make money from their software but opts for the lowest possible price to encourage downloads, Microsoft would take $0.45 of a $1.49 purchase, versus $0.30 of a $0.99 app.
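To make the sums concrete, here’s a minimal sketch of that split in Python, purely for illustration. The 30 percent and 20 percent rates and the $25,000 threshold are the figures described above; the microsoft_cut helper is hypothetical, and treating the threshold as an app’s lifetime revenue is an assumption rather than anything Microsoft has spelled out.

# Hypothetical sketch of the Windows Store revenue split described above.
# Rates and threshold come from the post; reading the $25,000 trigger as
# lifetime app revenue is an assumption for illustration only.

def microsoft_cut(price, lifetime_revenue=0.0):
    """Microsoft's share of a single sale at a given price point."""
    rate = 0.20 if lifetime_revenue > 25_000 else 0.30
    return round(price * rate, 2)

for price in (0.99, 1.49):
    cut = microsoft_cut(price)
    print(f"${price:.2f} app -> Microsoft takes ${cut:.2f}, developer keeps ${price - cut:.2f}")

# $0.99 app -> Microsoft takes $0.30, developer keeps $0.69
# $1.49 app -> Microsoft takes $0.45, developer keeps $1.04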

If that were entirely the case, though, then you might expect Windows Phone to also kick off with the $1.49 tier, and yet on Microsoft’s smartphone platform there are $0.99 apps. Perhaps, then, Microsoft simply believes that tablet apps should be more expensive than phone apps, reflecting some greater expectation of functionality in software designed for the bigger screen.

Such an expectation holds true for developers as much as users: Microsoft could be trying to gently persuade Windows 8/RT coders to up their game when they create tablet apps for the platform, and to stretch a little further than they might for a relatively “throwaway” dollar app. Similarly, users could grow to expect more from the software they buy, with the $1.49 price point acting as a mental graduation up from the assumptions made around cheaper software (even if that cheaper price point isn’t even available on that particular platform).

“Could Microsoft be taking a moral stand?”

Still, is it too much to hope that Microsoft might be taking a moral stand of sorts, and suggesting that it believes software simply should be more expensive? Plenty of developers have grown disillusioned with the app ecosystem and its race to ninety-nine cents, and while some software is certainly disposable enough to make the price tag fit, other coders find themselves stuck choosing between devaluing their hard work with a price that will get attention, or asking a little more and ending up ignored.

The reality is likely a combination of all three: a healthy dose of self-interest; a belief that tablet software should aim a little higher; and, yes, the worry that, for what is primarily a software company itself, seeing apps undervalued doesn’t bode well for the long term. It’s a potentially dangerous strategy given Microsoft’s position near the back of the tablet race, but it could be the wildcard that prompts developers to give Windows a second look.


Is Microsoft’s Windows $0.99 app omission madness, money or moral? is written by Chris Davies & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.


Don’t Shoot Your Food

Stop taking pictures of your food. You’re a lousy photographer, and I’m tired of looking at your photos. They are disgusting. While you may be excited about the delicious / unique / unfathomably fattening food you are about to consume, that does not mean you need to mark the occasion with an Instagram or Twitter post. Just don’t. Eat your 17 pound burger, or your pizza with a fried shrimp crust, or your bacon ice cream sundae, and keep it to yourself.

[Image credit: RJ Aquino]

I have been searching for accompanying images for this story for a while, and I finally found one. A Twitter friend posted this lovely image of Chicken Fried Bacon. You may have never heard of chicken fried bacon, but I live in Texas, and we invented frying things unnecessarily. We invented the corny dog, a hot dog dipped in cornmeal batter and deep fried. Chicken fried steak is our state bird. At the Texas State Fair, you can eat fried everything, from Ho Hos to Frito pie to beer to butter. Yes, someone dips frozen butter in batter and fries it in oil. It’s not just a heart attack waiting to happen. It’s an offense to God and cows alike.

[Image credit: Chris Sorensen]

So, anyone surprised by chicken fried bacon? No. In fact, it could actually be pretty good. I’m not endorsing its consumption, but bacon plus fried must equal delicious, right? Except, look at that photo. It looks like dried dog poop in a serving tray. That’s not an exaggeration. It is the most disgusting-looking food photo I have ever seen. And the hashtag accompanying this picture? #sowrongitsright. No. No. Just . . . No.

Food photography is hard. It takes hours and hours to get the perfect shot used in commercials and promotional materials. The ice cream melts. The burger gets dry. The cheese goes from melty-gooey to stiff and still. Food does not behave. Which is why food photographers often have to augment their photos with unnatural, inedible substances. Concrete is used to stiffen milkshakes. Soap is used to create frothy bubbles. Plastic is used with wild abandon.

If professional food photographers have trouble making food look good with all of these tools at their disposal, how could you possibly imagine you can make your food look good when you snap a shot with your cameraphone? You cannot. Your food looks awful. Instead of making me drool, or envy your experience, or marvel at the spectacle, it simply makes me nauseous. If that’s what you’re going for, congratulations, you have succeeded.

The first problem is lighting. Phone cameras require a great deal of light to take an excellent, appealing photo. That’s why most of your indoor shots look lousy, while brightly lit outdoor photos look much better. The color of the light also makes an enormous difference. Tiny cameraphone sensors tend to have more of a problem discriminating and balancing colors. Reds and greens are especially problematic. This leaves many food photos looking yellow indoors, and bluish outdoors.

The second problem is context. Sure, I can spot a burger from across a room. But how about a lobe of foie gras on a plate of lentils? How about slabs of gnarled, curly bacon fried dark brown in batter? How about a goulash of some sort, with ingredients I could hardly name if it were right in front of me, let alone the subject of a poorly lit, off-balance cameraphone photo?

“It’s time to stop glorifying food”

But my biggest problem of all with food photography is that it’s time to stop glorifying food. I hate the term “foodie,” but I do consider myself knowledgeable about food, food culture, and cooking. But I think we’ve taken a dangerous turn when it comes to an obsession with food on the Internet.

When people are extraordinarily happy with their food, they take a picture and share it. Why? Because they know their friends will relate. Because it makes them feel special and important to be eating something so tasty or unique. Because it’s a way of marking where you are and telling people what you are doing. Whenever we travel abroad, we always make special note of the food. When you think about it, that seems odd.

We spend 3-4 hours a day eating, at most. So, 1/8 – 1/6 of the day is spent at meals. What about the rest of the day? Sure, you can have a great night’s sleep, but you don’t take a picture of your bed afterward. When you have an easy commute, do you take a picture of the open road? When your boss is happy with a project you’ve completed, you don’t snap a picture and share it (confidentiality aside). I would fully expect to see photos of a movie poster if someone liked the film, but I’ve never seen that shot on social networks.

My problem with the current obsession with documenting our meals like photojournalists is that it only promotes more eating. And because we usually snap the most unique and unhealthy dishes, it promotes the worst type of eating. If I see a picture of a juicy, well-adorned burger before lunch time, I want a burger. If I see photos of the awesome dumpling shop you found, I want dumplings.

I don’t deny there’s a level of personal responsibility involved. Sure, it’s my job to make the right decisions for myself. I don’t have to open your photos. I don’t have to eat what you’ve photographed. But if our decision making process were so easy, weight problems would not be such an issue hanging around the waist of the public’s health. We’re already bombarded enough with photographs from professionals working for hours to make food look unnaturally appetizing.

Before you post photos of food, ask yourself what you’re trying to achieve. Is there really any positive benefit? At worst, you’re posting an ugly picture. At best, you’re showing off and glorifying your meal. If you don’t agree that it promotes an unhealthy obsession in our society, at least understand that it’s boring, unless you have the skill to do it right. Which you don’t. So stop shooting your food. You’re a horrible photographer, and I’m having enough trouble sticking to my diet without your help.


Don’t Shoot Your Food is written by Philip Berne & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.


Distro Issue 49: a visit to the craft and hackerspace at Artisan’s Asylum


While Yahoo was busy hiring its new CEO away from Google, we were hard at work crankin’ out this week’s edition of Distro to help feed your slate-reading desires. Let’s just say that if a CNC gantry router that uses a Wiimote piques your interest, you’re in for quite the treat this time around. We head north to Massachusetts to visit Artisan’s Asylum for a glimpse of the craftiness and general hackery that takes place in an old office supply warehouse. We throw down the review gauntlet for the Nintendo 3DS XL, LG Optimus 4X HD and Sony VAIO T13 and offer some detailed reactions on said trio. AllThingsD’s Mike Isaac has a go at the Q&A, “Switched On” discusses the next Office, Steam’s annual sale occupies “Reaction Time” and “IRL” returns. As you might expect, all of the requisite download links await your clicks below.

Distro Issue 49 PDF
Distro in the iTunes App Store
Distro in the Google Play Store
Distro APK (for sideloading)
Like Distro on Facebook
Follow Distro on Twitter


Distro Issue 49: a visit to the craft and hackerspace at Artisan’s Asylum originally appeared on Engadget on Fri, 20 Jul 2012 09:30:00 EDT.


Will public shame stem Apple’s patent aggression?

You could hardly make it up: Apple has not only had its patent arguments rejected by UK courts, but has been instructed to do some advertising on Samsung’s behalf to dismiss its rival’s “arch copyist” reputation. That’s a reputation Apple was instrumental in creating, of course, and while Samsung is throwing no small amount of money at its own defense, this latest spanking to its Cupertino rival/customer’s pride is only likely to bolster its unofficial stance that the ongoing phone and tablet war is nothing but good for brand awareness. The question is, will being forced to make a very public apology temper Apple’s appetite for litigation?

Make no mistake, this very public mea culpa marks a tipping point in how patent battles are being fought – and in how they impact consumers. Until now, spats between companies like Apple, Samsung, HTC, Google, Microsoft and Motorola have been the subject of discussion only among geeks and lawyers: for most end-consumers, it’s been a case of “what is there on store shelves for me to choose between?”

“Apple’s target is to cut down on the range of distractions from the iPad”

That’s been Apple’s target – and the goal of others – to cut down on the range of distractions from their own products. If you don’t see, say, a Samsung tablet or an ASUS one nuzzling up next to the iPad, you’re probably more likely to pick the tablet you can buy there and then. The tech community may be growing increasingly vocal about the patent-war tactics companies are utilizing, but as long as their majority counterparts in the consumer marketplace don’t get involved, this “do what you can to impact the shelves” approach is worth pursuing.

All that changes when not only are your attempts to game the shelves prevented, but you’re also told you have to tell your potential mass-market customers that you’ve been over-reaching in your arguments. The UK court’s demand that Apple take out prominent adverts dismissing suggestions that Samsung copied the iPad’s design takes the patent war out of the courts and the blogs and straight into consumer mindshare.

Apple is appealing the ruling but, assuming it is upheld, is the potential for such public shaming likely to make the company more hesitant to pull rivals into court? Perhaps not; patents are tricky things, and if you don’t sufficiently protest when your competitors are potentially infringing them, eventually you lose them altogether.

Until now, that battle process has been one companies are very willing to wade into. As the fall-out starts to spill over from the legal team to the marketing department, though, that could change. Apple and others might be more willing to consider licensing deals and settlements if the alternative is the risk of being forced to pay for complimentary advertising for their fiercest rivals. Hopefully that will take some of the wind from the sails of tit-for-tat patent spats until more comprehensive intellectual property reform arrives.


Will public shame stem Apple’s patent aggression? is written by Chris Davies & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.


What Would the Gaming Industry Look Like Without Mario?

I’m always interested in scenarios in which we examine the “what-ifs.” In some cases, that means discussing what might have happened to RIM if it had seen the touchscreen craze coming. In others, it’s a look at what Apple might have been without Steve Jobs. But this time around, I want to take it away from the real world and put it in the digital realm: what might the gaming industry look like today without Mario?

Nintendo haters will, of course, cringe at such a question. For years, they’ve been saying that Mario hasn’t improved all that much and his importance has been largely overblown. The gaming industry, they say, was going to end up at this point despite Mario’s presence, and to say otherwise is ludicrous.

But I’m not so sure I can agree. When Mario first made an appearance on the Nintendo Entertainment System, the gaming industry was in a state of disarray. Retailers weren’t sure that consoles could appeal to consumers, and the crash that preceded the mess was still looming in all gamers’ minds. It appeared to many that in-home gaming would die sooner rather than later.

But with the NES came the kind of innovation, thanks to Shigeru Miyamoto, that captivated gamers and made them realize that maybe there really was an opportunity to enjoy playing titles in the home again.

Without Miyamoto’s talent, the NES would have never succeeded. And Nintendo, a company that was once known for playing cards, likely wouldn’t have a place in the industry as we know it today.

“The NES proved to be the “Big Bang” moment of gaming”

I think a solid argument can be made that the NES proved to be the “Big Bang” moment of gaming. Sure, there was gaming before the NES, but its success prompted other companies to invest heavily in the market. Would there have been a Sega Genesis without Nintendo’s success? Would Sonic have ever existed? Would Sony even be a player in that market today?

It was the Super Mario franchise that kept Nintendo afloat over the years, and it’s the set of games that, even to this day, other companies would love to emulate. Mario played an integral role in keeping the gaming industry going in the 16-bit days and set off a 3D craze when he landed on the Nintendo 64. Super Mario titles through the 1990s were the benchmark by which all other games were judged.

Today, Nintendo is hurting, and there is some concern that the Wii U might not be able to get it out of its current mess. But chances are, its Super Mario title will be wildly popular and a key reason many people buy its console. A similar scenario has played out since the beginning, both in the console space and the portable market. When Nintendo launches a Mario game, it knows it’ll sell more hardware.

The funny thing is, all of those games might have also helped its competitors sell more hardware. Mario is the character that welcomed young people into gaming, and those people turned around and became hardcore players who bought every console out there.

So, perhaps we should thank old Mario more often. Without him, there’s a very good chance that the gaming industry wouldn’t look anything like it does today.


What Would the Gaming Industry Look Like Without Mario? is written by Don Reisinger & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.


The Movie Is Over When The Credits Roll

Now, I’m mad. At first, it was funny. I definitely didn’t stay to the very end of the movie when I saw “Ferris Bueller’s Day Off” in the theaters, but when I saw the movie at home, I saw the bonus scene at the end. The “stinger,” as it’s sometimes called. Roger Ebert called this the “Monk’s Reward,” because you need to have the patience of a monk to sit through the final credits for the payoff. But if you managed to make it through the scroll of names at the end of Ferris Bueller, Matthew Broderick appears on screen and tells the audience to go home. The movie’s over. Go home.

[Image credit: Lindsey Turner]

It’s time to stop creating these post-credit scenes. The joke is not only played out, it’s actually starting to hurt the movies. I recently wrote a column on spoilers here at SlashGear, and I focused on the horrible ending to the movie “The Grey.” A commenter pointed out something I hadn’t realized before. The movie ends abruptly, just before a fight that I would have expected to be the climactic moment. Imagine if Rocky had ended before the first punch was thrown against Apollo Creed. That’s how it felt.

So, one thrust of my column was that some endings are so bad, it’s almost better knowing about them in advance before you see the movie. But then a commenter on that column pointed out that there is a stinger scene after the credits that completely reverses my interpretation of how the film ended. Or at least it adds significant details.

This is just wrong. It’s time to stop the stinger scene. From now on, the movie needs to end when the credits start to roll.

It’s easy to see why movie makers would add this sort of scene. There are really two reasons. The first is that the credits are important… to the people credited. It’s actually a perk of the job. Whether you are listed, and how high you appear in the list, is a badge of honor for folks working in the movie business. This is why credits are getting longer and longer. This is also why there are four or five producer credits before the movie even opens. These listings are negotiated in advance, and they are part of the job.

“Hollywood needs to get over itself”

Hollywood needs to get over itself. I know, that’s probably the most redundant line I’ve ever typed. But I think, for the credits, it’s a real necessity, now that it’s causing problems.

Can you imagine if everything had credits? In my day job, I work for Samsung Mobile. Can you imagine if you turned off your phone and then had to sit through a list of all the names of everyone who worked on a phone? There are hundreds, if not thousands of people involved.

Can you imagine if you finished a Big Mac, then had to sit through the credits of everyone who helped make the burger? Even in the art world, there are almost no parallels. Video games are the only exception I can think of. When you see a painting, you don’t see a list of everyone involved. The person who stretched the canvas. The artist’s assistant. Sometimes you don’t even see the name of the subject. You just see the artist’s name. Are movies really claiming that every boom operator, every second assistant, is an artist? Feh.

The second reason is more legitimate for the viewing audience, but no less annoying. Those extra scenes make us feel like we are “in the know.” We’re the cognoscenti because we were tipped to sit through a movie to the very last flicker of light.

Now, however, that is not enough. How many of you saw the secret ending for The Avengers? No, not THAT secret ending, the secret ending AFTER the secret ending.

For all of the movies that led up to The Avengers, there was a stinger scene hinting at the upcoming ensemble film. At the end of The Hulk, Captain America, Iron Man, and Thor, there was an appearance by Samuel L. Jackson’s Nick Fury, or Clark Gregg’s Agent Coulson. There were whispers of an “Avengers Initiative,” so that comic fans would get hints of what was coming. Okay, I can accept that. The writers didn’t want to interrupt the main story line with hints about the big blockbuster to come, so fans get the secret ending. After the first one showed up at the end of The Hulk, fans knew to stay in their seats to the end.

“The Avengers screwed with the fans”

Then, The Avengers screwed with the fans. Sit through the credits and you get a scene with Thanos, a villain who appears to be the arch-enemy in possible sequel films. Then the credits keep rolling. If you stay even longer, you get a quiet little scene featuring the heroes eating shawarma. Seriously. They’re eating sandwiches. It’s a reference to a throwaway joke from the movie. It’s kind of funny, but I completely missed it when I saw the movie in the theater, and I was peeved.

I consider that Avengers scene to be the final flip-off for stinger scenes. It created striations of fandom. I was enough of a fan to know that something else would be coming, but not fan enough to stay even longer. Fine. Leave me out of the joke.

It has gotten to the point where I expect a stinger whenever I see a somewhat unsatisfying movie. I wonder if there will be a better resolution at the end of the credits. I wonder if the characters will be revived for a sequel that might deliver on the promise that the current film could not fulfill. Stinger scenes now work in the opposite way from how they were supposed to.

It used to be so much fun. It was Ferris telling us the movie was over. It was Animal from the Muppets telling us to go home. It was Darth Vader’s heavy breathing at the end of The Phantom Menace, reminding us that we have better Star Wars movies at home on DVD.

Now, it’s about exclusion. It’s about poor writing and directing. It’s about forcing us to pay attention to the people behind the scenes. I’m done. I just finished 44 ounces of Coke and I’ve been sitting for 2 hours. I’m tired of supporting this trend. When the credits start rolling, the movie is finished, and that’s the only chance you get to tell me the story.


The Movie Is Over When The Credits Roll is written by Philip Berne & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.


Editorial: Engadget on EyeTap, Project Glass and the future of wearable cameras


Summer in Paris — you can’t walk a block on the Champs-Élysées without locking eyes with at least one camera-equipped tourist. But Steve Mann’s shooter wasn’t dangling from his shoulder and neck; it was mounted on his head, with a design strikingly similar to Google’s Project Glass. Unlike that mainstream Mountain View product, however, Mann’s version has reportedly been around in one form or another for 34 years, and was designed with the objective of aiding vision, rather than capturing stills and video or providing a bounty of database-aided readouts. It’s also street-ready today. While on vacation with his family, the Ontario-based “father of wearable computing” was sporting his EyeTap as he walked down the aforementioned French avenue, eventually entering a McDonald’s to refuel after a busy day of sightseeing. He left without his ranch wrap, but with seriously damaged hardware.

What allegedly occurred inside the restaurant is no doubt a result of the increasing presence and subsequent awareness of connected cameras, ranging from consumer gear to professional surveillance equipment. As Mann sat down to eat, he writes that a stranger approached him and then attempted to pull off his glasses, which, oddly, are permanently affixed to his skull. The man, at that point joined by one other patron and someone who appeared to be a McDonald’s employee, then pushed Mann out of the store and onto the street. As a result of the attack, the eyewear malfunctioned, resulting in the three men being photographed. It wouldn’t be terribly difficult for police to identify those involved, but this encounter may have greater implications. McDonald’s has since launched an investigation into the matter and seems to be denying most of the claims, but it’ll be some time yet before the full truth is uncovered. Still, the whole ordeal got us at Engadget thinking — is the planet ready for humans to wear video recorders, and will it ever shake a general unease related to the threat of a world filled with omnipresent cameras? Join us past the break for our take.



Editorial: Engadget on EyeTap, Project Glass and the future of wearable cameras originally appeared on Engadget on Wed, 18 Jul 2012 13:41:00 EDT.


Wearable Worries: Glass could trigger more than just virtual violence

If you listened to the whoops and hollers at Google IO last month, you’d have thought the world was more than ready for wearable tech like Google Glass. Beyond the braying developers, though, the real world is showing every sign that the Brave New World of augmented reality headsets will cause more headaches than transparent eyepiece strain alone. The claims by wearables researcher Professor Steve Mann that he was physically assaulted in a French McDonald’s after staff suddenly took offense at his digital eyewear highlight the shadow side of the cutting edge: it can hurt more than just your wallet if the rest of society isn’t ready for it.

Mann’s story – which we covered more comprehensively earlier today – is perhaps as predictable as it is upsetting. The scientist was with his family in Paris, and while the first McDonald’s staff member he spoke to had no issues with his EyeTap wearable, when he sat down to eat he was challenged by three other employees, one of whom tried to pull the gadget from his head.

Mann knew there could be problems; he’d even brought along paperwork from his doctor that explained the nature of the EyeTap and how it’s permanently attached to his head and can only be removed with the appropriate tools. According to his account – and photos snapped by the headset itself – the McDonald’s employees ripped up that documentation, seemingly unimpressed by how Mann has been immersed in the mediated reality dream for the past few decades.

Outside of the geekosphere, there’s still a long way to go before sousveillance – the recording of an activity by a participant in that activity – is generally accepted. Tensions around photographers’ rights to shoot buildings and other public places, often at odds with the actual legality of the situation, and concerns over privacy have yet to be smoothed away. The rise of cellphone cameras increased such arguments exponentially; how much more troublesome will it be when we hang permanently active cameras from our faces?

There’s invariably a catch-up period with each new technology, as old schemas are challenged by fresh developments (and generally forced to adapt to accommodate them). Mediated reality isn’t simply a case of dropping your new phone into your pocket when firing off tweets or snapping Instagram images isn’t acceptable; the whole idea of digitally augmenting your world is that it’s persistent – just as much in the face of others as it is on your own. For all of Google’s protestations that “people don’t even notice it,” it’s undoubtedly going to add another degree of perceived separation and difference between you and those around you who aren’t wearing the technology.

To those who have been following the development of wearable technology for any length of time, Professor Mann is a pioneer. For everyone else, he’s a guy with a strange – and potentially suspicious – contraption, something unfamiliar and disconcerting. Google may find it easy to whip up developer enthusiasm for Glass, but we’re a world away from wearables being generally accepted among society as a whole.

More on Mann’s research – and augmented reality in general – in our full timeline.


Wearable Worries: Glass could trigger more than just virtual violence is written by Chris Davies & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.


OneNote MX should be Microsoft’s Windows 8 content creation hub

The importance of Office 2013 to Microsoft’s bottom line can’t be overstated, and yet the company faces no small amount of ridicule amid questions of whether the productivity suite is “relevant” any longer. With Windows 8 fast approaching, and long-standing arguments over whether tablets are for content creation or merely consumption, Office and its Metro-styled MX variant for Windows RT slates haven’t necessarily proved the selling point Microsoft may have hoped they would be. The company already has a wildcard, though, and it’s been fermenting away under Microsoft’s nose for a decade.

The reaction to Office 2013 – perhaps best described as “a necessary evil” – has been muted if only because it’s tough to get especially excited over word processing, spreadsheet, email and (take a deep breath) presentation software. Microsoft’s Metro UI is a nice touch, and in fact it’s been responsible (along with Office 365 and its cloud ambitions) for most of the positive chatter around the suite. Still, it’s tough to be too enthused when even Microsoft’s attention is elsewhere.

Microsoft is obviously more excited about tablets running Windows 8 than it is about regular desktops or notebooks. Slates may be expected to account for a minority of sales overall, but they’re attention-grabbing and – many assume – the future of computing, and so they get over-emphasized in Microsoft’s strategy. That’s already prompted the company to challenge its own OEMs with Surface, no less.

What it needs is the perfect software foil to go with that; something which not only demonstrates how ambitiously segment-stealing Surface is, but also how Microsoft is pushing tableteering into segments that iOS (and, to a lesser extent, Android) has only partially catered for.

“The sliding panes of Metro make perfect sense for a digital notebook”

OneNote MX could well be that “killer app”. Microsoft’s digital notetaking tool has been bubbling away since the Windows XP days, but it’s with tablets breaking into the mainstream that it’s finally ready for primetime. The preview that arrived in the Windows Store today is a good example of why. The sliding panes of Metro make perfect sense for leaves in a digital notebook, as does the Snap View split-screen layout that will allow, Courier-style, two apps to share Windows tablet screen-space simultaneously. (In fact, OneNote MX is crying out for a forward-thinking OEM to slap a couple of 7-inch screens together and do what Microsoft proved too gutless to attempt: give all those Courier enthusiasts the dual-display folding slate they were begging for.)

The radial pop-up menu is a perfect example of a UI that’s been percolating away in some third-party iOS apps, but which could tip over into the mainstream if Microsoft plays OneNote right. Sized to suit both fingertip and stylus control, it’s a simple and convenient hub for common controls and takes a welcome step away from the long, narrow strips of traditional Microsoft toolbars: less sweeping sideways movement, in favor of smaller, more contained button options.

If reaction to Office 2013 has proved anything, it’s that people don’t really care if their content creation tools are in the cloud, or local, or some hybrid of the two. What prompts enthusiasm is when the tools on offer are usable and intuitive: when they suit the device and the way it’s used. Microsoft has woefully underutilized OneNote in the past, but the time is ripe for the app to take its place as the hub of Windows content creation.


OneNote MX should be Microsoft’s Windows 8 content creation hub is written by Chris Davies & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.