At this year’s Google I/O developers conference, a Fireside Chat with several members of the core Google Glass team revealed much about not just the future of the device, but its origins as well. While earlier in the day a single slide had been shown depicting a set of six original prototypes of what was then called Project Glass, here lead industrial designer Isabelle Olsson had one key prototype on hand to show SlashGear in the plastic, as it were.
As Olsson made clear, this device was created as one of the very first iterations of what has since been reduced to a simple skeleton frame and a single, removable computer element. What you’re seeing here is a set of development boards attached to two full-eye glass lenses, white plastic, tape, exposed cords and all.
Isabelle Olsson: When the team started working on this, it was very clear that we’re not taking something that already existed and making incremental improvements to it. The team wanted to create something that’s much more intuitive, immediate, and intimate. But to create a new kind of wearable technology, that’s so ambitious, and very messy at points.
In addition to showing how this pair of glasses worked, with folding sides and a real, working set of innards (if you can call them innards, of course), Olsson showed one of the prototype pairs of prescription Glass glasses as well. They’re seen in the box below, and their design turned up on Google employees here and there during the week as well, live and active.
Olsson: I will never forget my first day on the team, when I walked into a room [full of people] wearing these CRAZY things on their heads. I brought the prototype so you could see what I walked in to. It comes in a fancy bag…
Olsson: I think like the colors of the board, maybe, fits my hair color, but I don’t know. It’s kind of heavy, though. I think I’m going to take it off now. So – but – how do you go from something like this to what we’re all wearing today?
Olsson: We took a reductionist approach. We removed everything that wasn’t absolutely essential. And then in addition to that, I formed three principles to guide the team through this ambitious, messy process. Those are:
• Lightness
• Simplicity
• Scalability
You’ll find this particular chat split across three different features, each built around Olsson’s fireside chat contributions. The one you’re reading now stays within the bounds of the prototype you’re seeing above and below. There are also posts on color choices for Glass, a bit about Modular Fashion, and another expanding on the design of the final product.
Olsson: When I joined the project, we thought we needed 50 different adjustment mechanisms, but that wouldn’t make a good user experience. So we scaled it down to this one adjustment mechanism.
This prototype works with a Glass projection unit nearly the same as the one we see in the Explorer Edition of Glass. It’s attached to one of two computer boards – the one on the right temple – which here also works with a camera, even in this early state.
That first development board is also connected to a second board, presumably reserved for storage connections and a battery. Though the unit is held together with tape, glue, and soldered bits and pieces here and there along the boards, it does work.
There’s a single button above the camera lens that activates the camera – a similar, hidden button sits in the same area on the Explorer Edition of Glass as well. This original prototype works with essentially every element available in the final release – it’s just a bit larger here, and not really made to look fashionable to the uninitiated.
Google I/O 2013 also played host to a chat we had with Sergey Brin – co-founder of Google and currently Director of Special Projects for the company. He also gave some insight into the way Glass was first tested, noting that while there were some non-functional bricks used to test form for Glass, it certainly all started with function:
Sergey Brin: We did have some non-functional models, but mostly we had functional, uglier, heavier models. Very early on we realized that comfort was so important, and that [led to] the decision to make them monocular.
We also made the decision not to have it occlude your vision – because we tried. We tried different configurations, because [it’s] something you’re going to [need] to be comfortable [with]. Hopefully you’re comfortable wearing it all day – [that’s] something that’s hard to make. You’re going to have to make a lot of other trade-offs.
Have a peek at the photos at a larger size in the gallery below and let us know if you see anything you recognize – it’s all there, piece by piece.
Google Glass Original Prototype eyes-on with Isabelle Olsson is written by Chris Burns & originally posted on SlashGear.
Starting to get bored of the ThinkPad’s classic look but not keen on the Edge series? Then we have good news for you! Earlier today we received a couple of photos that show off two upcoming Lenovo Ultrabooks: the 13-inch ThinkPad S3 (codename “Labatt”) and the 15-inch ThinkPad S5 (“Guinness”). As you can see above and after the break, both aluminum laptops feature a new “floating design” that might have taken a page out of Samsung and Vizio’s book: shaving off the front outer edges of the bottom side to create that slim and floating illusion. Also, these will apparently come with either a black or silver lid.
Some folks on Sina Weibo have received other teaser photos of the ThinkPad S5, with one confirming the presence of JBL stereo speakers. The funny thing is that Chinese website Yesky reported on a charity auction earlier this month that actually sold limited editions of the S3 and S5, but those unannounced Ultrabooks flew under everyone else’s radar. If you’re curious, Yesky speculates that a launch is due in China at the end of this month, but you’ll have to stay tuned for the prices and specs.
As Glass, Google’s first wearable computer, edges past its initial run of devices, members of the general public are beginning to ask: when will the device be delivered in a form that non-developers can get their hands on? At a Google I/O 2013 “Fireside Chat” with several members of the core Google Glass team, this question was addressed more than once. In short: soon, but not nearly as soon as they’d like.
It’s not a matter of being able to build the device and distribute it fast enough: Google has been clear that it’s perfectly able to create devices en masse and ship them to customers at speed. The company already produces and distributes plenty of hardware – the Chromebook Pixel, Nexus 4 smartphone, and Nexus 7 tablet, to name a few. Instead, it would seem, the team of creators simply wishes it could skip ahead: past the steps that remain between the device’s current developer infancy and the point at which Glass is ready for anyone to buy.
Product Director for Google Glass Steve Lee spoke to this point at length, noting the place where the project was today and where it’ll be going in the very near future.
Steve Lee: With the Explorer Program, where we’re at right now is we’re getting it into the hands of thousands of other people to see what exciting things they can do with Glass. The first group of people that we’re getting it in the hands of are developers. We know that to fully realize the potential of Glass, we need your help. We need innovators to develop on the platform.
About a month ago, we started distributing Glass to our Explorers. I’m happy to say that earlier this week, right before Google I/O, we’d invited all 2,000 people that signed up at last year’s I/O to come pick up their Glass device. We’re very happy about that.
The next group of people that will become Explorers are those that signed up for #ifihadglass. And there were 8,000 people selected from over 100,000 people who applied. And we will soon be rolling out invites to those folks to pick up Glass.
What’s exciting about that group of people – they’re not developers – but it’s a nice cross-section of people. We have folks who are educators, teachers, we have athletes. We have DJs, dentists, hair stylists. All kinds of different people.
And so we’re really excited to see a diverse set of folks – what are they going to do with Glass?
Also commenting on where Glass is today, and how long it will be before the product arrives in final consumer form, was Timothy Jordan, Senior Developer Advocate at Google for Project Glass. Jordan, who also moderated the fireside chat, made sure to let the audience know that Google isn’t holding the device back without good reason.
Timothy Jordan: We don’t have an updated timeline for Glass release. Where we’re at right now, is… lemme say this: I’ve had a number of people come up to me at Google I/O and be like: ‘I want Google Glass, and I have this amazing idea.’ And my first reaction is: ‘I want you to have Glass!’
And that’s our goal. It’s only a matter of time.
Right now we’re selling Glass to the Explorers who signed up at Google I/O last year. Next it’s the “If I had Glass” people. Next, it could be you.
SlashGear will continue to explore the Google Glass environment with our own up-close looks at the Developer Edition of the device – in updates, at pointed moments of opportunity, and whenever brilliance strikes – from here until the day Google decides to move forward with a general edition. Until then: courage.
At the moment it’s unclear what sort of price structure will be in place or how the device will be distributed to the public once the time is ripe. It’s likely they’ll be picked up in bags like the one you see here held by Glass lead industrial designer Isabelle Olsson – this bag contained an original prototype, just so you know.
Google Glass is at such an early point in its infancy that the company could still make major changes to every piece of the project – software and hardware included. Software updates will be pushed out to developers on a monthly basis from this point forward, with changes based on suggestions from the public. Suggestions made by developers and the public are also being taken under advisement by the team for hardware as well as software.
This went as far as Jordan literally writing down a Pantone color code suggested by a developer for the next wave of Glass hardware during the extended chat. The team appears serious about making a device that’s both by and for its future wearers.
It was in chats like these – and in breakout sessions that were more like lessons for developer attendees – that the Glass team represented itself in California during I/O 2013. Glass may not have been discussed at length in the day-one keynote, but it certainly had its fair share of attention during the week. Expect the chatter to get extra vibrant once the consumer edition arrives.
Google Glass creators talk of final consumer device release is written by Chris Burns & originally posted on SlashGear.
It’s a return to form here at Google I/O 2013, with none other than Google’s own Vice President of Android Product Management, Hugo Barra, letting us know that he’d personally fought hard for a single, more developer-focused keynote address. Past years were notably more consumer- and product-focused than 2013; this time the company hasn’t gone for flash-bang spectacle, but for Google I/O in its purest form.
Google’s developer conference is home to more than just developers, of course: press, analysts, students, and Google lovers of all stripes are invited, but this year the company had a more focused approach in mind. While the conference retained its three-day allotment of breakout sessions and fireside chats with Google’s own staff for developers of all types, the opening keynote was limited to one day instead of two.
That single keynote was also toned down – significantly – especially compared to last year’s explosion of content: new devices, a new version of Android, and a live skydive with what was then called Project Glass. Larry Page stepped on stage to address the developers and the public, and took part in an extended question-and-answer session as well, boldly fielding whatever random queries attendees might have.
These elements of the keynote – the most public and direct part of the convention, to be sure – gave the entire set of events what we suggested to Hugo Barra was a more “human” vibe for I/O. That, he said, was “exactly what we were aiming for.”
Google’s top guns stepped into the fray as well, with Googlers like Barra and Sergey Brin appearing for drinks and a chat with the press late on Day 1. There it was abundantly clear that this event was not simply made for developer training, but for person-to-person connectivity: another pillar the event was originally built on.
Our own Chris Davies lent some insight on this subject, his column “Google I/O and the year of the Context Ecosystem” speaking volumes about Google’s aim here in 2013.
“All of Google’s services are gradually interweaving. Google I/O 2013 is an ecosystem play, and it’s one of the biggest – and arguably most ambitious – we’ve ever seen. It’ll drag Google+ with it along the way, and it might even kickstart the “internet of things” when we start to see some legitimate advantages of having every device a web-connected node.
Google didn’t give us a new phone for our pocket or a new tablet for our coffee table; instead, it gave us so much more.” – Chris Davies
What did you think of Google I/O 2013 from a consumer perspective? And if you don’t consider yourself a consumer in this case, how did it all strike you from wherever you sit?
Google I/O 2013 on-site Wrap-up: Glass, Developers, and Services on tap is written by Chris Burns & originally posted on SlashGear.
This week the folks at the development studio known as Instrument have brought a virtual reality demonstration to Google I/O 2013, complete with a multi-display freefall drop from the upper atmosphere down toward the earth. The demonstration consists of seven 1080p displays, each driven by its own Ubuntu PC running a full-screen build of Chrome 25. A motion tracker follows the user, their arms, and the angle at which they’re standing – or leaning and falling, as it were.
Instrument handles user input and motion tracking with a custom C++ app built with OpenNI and an ASUS Xtion Pro 3D motion-sensing camera. As the tracker reads the angle of the player’s torso and the position of each arm, the avatar on the display array moves accordingly as they fall.
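Instrument hasn’t published the demo’s source, but the mapping from tracked joints to avatar pose conceptually boils down to a little vector math. Here’s a minimal sketch of that idea – written in TypeScript for consistency with the examples below, even though the real tracking app is a C++/OpenNI build – where the Vec3 type, the joint names, and the applyPose hook are illustrative assumptions, not Instrument’s actual code.

```typescript
// Minimal sketch: derive a lean angle from two tracked torso joints and
// copy it onto the avatar. All names here are illustrative assumptions.
type Vec3 = { x: number; y: number; z: number };

// Angle between the torso vector (bottom -> top) and straight-up, in radians.
// The sign convention depends on the tracking camera's coordinate system.
function leanAngle(torsoTop: Vec3, torsoBottom: Vec3): number {
  const dx = torsoTop.x - torsoBottom.x;
  const dy = torsoTop.y - torsoBottom.y;
  return Math.atan2(dx, dy);
}

// Each tracked frame, bank the on-screen skydiver the same way the player leans.
// `avatar` stands in for a THREE.Object3D in the rendered scene.
function applyPose(avatar: { rotation: { z: number } }, torsoTop: Vec3, torsoBottom: Vec3): void {
  avatar.rotation.z = -leanAngle(torsoTop, torsoBottom);
}
```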
The 3D game content is rendered with WebGL using THREE.js, with the WebGL layer drawn over a totally transparent background. That transparency lets the map layer underneath – generated by Google Maps – show through.
What the user sees below – the earth they’re plummeting toward – is a completely live HTML Google Map instance. It’s accurate – meaning you could potentially be diving toward your house, a national landmark, or perhaps somewhere that’d be useful for real-world training.
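That layering – a transparent WebGL canvas stacked on top of an ordinary Google Maps div – is straightforward to reproduce in a browser. Below is a minimal sketch using three.js and the Google Maps JavaScript API; the element IDs, the placeholder “diver” mesh, and the Moscone-area coordinates are assumptions for illustration, and the real Map Dive scene is of course far richer.

```typescript
import * as THREE from 'three';

// Transparent WebGL layer: alpha: true plus a fully transparent clear color
// lets whatever sits beneath the canvas show through.
const renderer = new THREE.WebGLRenderer({ alpha: true, antialias: true });
renderer.setClearColor(0x000000, 0);
renderer.setSize(window.innerWidth, window.innerHeight);
document.getElementById('webgl-layer')!.appendChild(renderer.domElement);

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.z = 5;

// Placeholder "skydiver" so something renders over the map.
const diver = new THREE.Mesh(new THREE.BoxGeometry(1, 2, 0.5), new THREE.MeshNormalMaterial());
scene.add(diver);

// The map layer beneath: an ordinary Google Maps instance in a sibling div,
// positioned under the (pointer-events: none) WebGL canvas via CSS.
declare const google: any; // assumes the Maps JavaScript API script is loaded
const map = new google.maps.Map(document.getElementById('map-layer')!, {
  center: { lat: 37.7842, lng: -122.4016 }, // roughly the Moscone Center, say
  zoom: 18,
  mapTypeId: 'satellite',
  disableDefaultUI: true,
});

// Render loop: the transparent scene is drawn each frame on top of the live map.
function animate() {
  requestAnimationFrame(animate);
  diver.rotation.y += 0.01;
  renderer.render(scene, camera);
}
animate();
```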
In addition to this setup being live and ready to roll here at Google I/O 2013 as a playable demo, Instrument has created a Dive editor. With it, an editor can build directly in the control node’s administrative console, with each change reflected instantly – live in the scene.
The editor’s user interface is itself a Google Map, with draggable markers acting as game objects. With this interface, developers and savvy users can use geocoding to center the map view on locations of their choice – anywhere Google Maps can see. Think of the possibilities!
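A stripped-down version of that editor idea – draggable markers standing in for game objects, plus geocoding to recenter the view – can be sketched with the standard Google Maps JavaScript API. The “ring” object type and the sendToScene hook below are hypothetical stand-ins for the control node’s live update channel, not part of Instrument’s published tooling.

```typescript
declare const google: any; // assumes the Maps JavaScript API script is loaded

const editorMap = new google.maps.Map(document.getElementById('editor-map')!, {
  center: { lat: 37.7842, lng: -122.4016 },
  zoom: 16,
});

// A draggable marker standing in for a game object (a hypothetical "ring" to fly through).
const gameObject = new google.maps.Marker({
  position: editorMap.getCenter(),
  map: editorMap,
  draggable: true,
  title: 'ring',
});

// When the editor drags the marker, push the new position toward the scene.
gameObject.addListener('dragend', () => {
  const pos = gameObject.getPosition();
  if (!pos) return;
  sendToScene({ type: 'ring', lat: pos.lat(), lng: pos.lng() });
});

// Geocoding lets the editor jump the view to any named place Google Maps knows.
const geocoder = new google.maps.Geocoder();
function centerOn(address: string): void {
  geocoder.geocode({ address }, (results: any[], status: string) => {
    if (status === 'OK' && results && results.length > 0) {
      editorMap.setCenter(results[0].geometry.location);
    }
  });
}

// Placeholder: in the real demo this would notify the control node so the
// change shows up live in the rendered scene.
function sendToScene(update: { type: string; lat: number; lng: number }): void {
  console.log('scene update', update);
}

centerOn('Moscone Center, San Francisco');
```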
Google Maps-driven Map Dive 3D-tracking hands-on is written by Chris Burns & originally posted on SlashGear.
This week at a fireside chat during Google I/O 2013, Mary Lou Jepsen – currently head of the Display Division at Google X – let it be known that “there’s no more silicon in Silicon Valley – it’s all iPhone apps.” She quickly added: “or Android apps, I should say.” An overarching theme of her remarks in the extended chat was clear: she’s not satisfied with the current atmosphere for hardware innovation, particularly when it comes to startup funding.
Jepsen was joined by serial entrepreneurs Julia Hartz, co-founder and president of Eventbrite, Slava Rubin, CEO and co-founder of Indiegogo, and Caterina Fake, founder and CEO of Findery and co-founder of Flickr. It was on this panel that Jepsen made the case not just that the hardware startup model is broken, but that new entrants into the startup world should be aiming for the moon. It was from within Google X, after all, that Google Glass originated.
“Assuming that you start big and swing for the fences – don’t do something small, first off. But assuming you do, and you get to that point where you’re taking on one of the largest companies in the world – even though you didn’t mean to – I never started out meaning to – be prepared to give away most of your stock so you can win that game, because otherwise you’re crushed.
Plan that early on, for what you’re going to do – at One Laptop Per Child, there’s this 60-minute exposé on some of the larger forces that we came up against – and there’s a lot of stories I’ve not yet told about Pixel Qi. When you get in that seat, you have to be able to figure out a way where it’s more attractive for companies not to crush you.
And that’s very difficult.” – Mary Lou Jepsen
She added that joining a big company is not for everyone – startups are great, she said, especially if you don’t want to get involved in the politics of working at a big company. You’ll be in a lifeboat, she explained, and though you’ll be dealing with holes in your boat here and there, you’ll be working with people who want to help you and are ready and willing to go that extra mile for you.
Meanwhile she warned that hardware funding, again, isn’t where it should be. Investors willing to push cash toward software startups are far easier to find at this point in history than those looking to back a hardware device.
“VCs (venture capital companies) don’t have the core competence anymore – Silicon Valley, pretty much, too – and I’m sure there’s exceptions, but by and large, to fund or even to [do] due diligence on hardware.
But there are places that do fund hardware, and you can find them depending upon your bent – you have to be creative. There are Angels, certainly, and Super Angels to fund it.
But there’s not this sort of – path – but there’s not much competition, so you have an advantage.” – Mary Lou Jepsen
Have a peek at the video below for additional insight from Jepsen and let us know how well you’re taking the news – or the advice, as it were. Are you encouraged that Jepsen – one of Time Magazine’s 100 most influential people in the world and ranked among the top 50 female computer scientists of all time – is suggesting that jumping in on a startup is something you should want to be a part of? Let us know!
Mary Lou Jepsen encourages Google X attitude in hardware engineering is written by Chris Burns & originally posted on SlashGear.
We went into Google I/O hoping for hardware and gadgetry; instead, we got three and a half hours of software and services – gaming, messaging, Larry Page wistfully envisaging a geeky utopia. You can perhaps excuse us for getting carried away in our expectations. I/O 2012 was a huge spectacle, with lashings of shiny new hardware only overshadowed by skydiving Glass daredevils and Sergey Brin looking moody on a rooftop. In contrast, 2013’s event brought things a whole lot closer to the developer-centric gathering the show was originally established as. Glass was conspicuous by its on-stage absence, and the new Nexus tablets that had been rumored were also no-shows; the emphasis was firmly on how the components of Google’s software portfolio were being refined as the mobile and desktop battles raged on.
A lot of people were disappointed by the absence of hardware. Google’s largely a software and services company, of course, but we’re still trained to expect shiny new gadgets first and foremost. What I/O proved to be was a reminder that the industry has moved on, and that it’s high time we recognized that.
“Specs are dead” is an opinion growing in prevalence among those following the cutting-edge of phones and tablets. There’s a limit to the usable resolution of a smartphone display, for instance – once your eyes can’t make out individual pixels, do you really need to step up to Ultra HD? – and to the speed of a tablet processor. The areas that still need real advancement, like high-performance batteries, are evolving too slowly to make a difference with each new generation.
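As a rough back-of-envelope check on that resolution claim – assuming roughly one arcminute of visual acuity (a common figure for 20/20 vision) and a 30 cm viewing distance, both assumptions rather than measurements – the pixel density beyond which individual pixels blur together works out to around 290 ppi, a threshold today’s 5-inch 1080p panels already clear comfortably.

```typescript
// Back-of-envelope: pixel density at which individual pixels become
// indistinguishable, assuming ~1 arcminute of visual acuity and a given
// viewing distance. Both figures are assumptions for illustration.
function indistinguishablePpi(viewingDistanceMm: number, acuityArcmin = 1): number {
  const acuityRad = (acuityArcmin / 60) * (Math.PI / 180); // arcminutes -> radians
  const smallestPitchMm = viewingDistanceMm * Math.tan(acuityRad); // smallest resolvable pixel pitch
  return 25.4 / smallestPitchMm; // mm per inch / pitch = pixels per inch
}

// A phone held at ~30 cm: roughly 290 ppi is the cutoff.
console.log(indistinguishablePpi(300).toFixed(0)); // ≈ "291"

// A 5-inch 1080p panel is ~440 ppi – already past that threshold.
const ppi1080pAt5in = Math.hypot(1920, 1080) / 5;
console.log(ppi1080pAt5in.toFixed(0)); // ≈ "441"
```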
“Now, hardware is just a question of badge-loyalty”
Hardware used to make a big difference to the usability of a device. Now, it’s just a question of badge-loyalty and aesthetics. What really makes the difference is the range of applications and services that are on offer; not solely the raw count of available apps that gets trotted out at every big press event, but whether the specific titles the user needs are on offer to them.
Software is at a tipping point, too, though. Android used to be clunky and ugly; now it looks great, and the gap between the instant usability of it, iOS, and Windows Phone is arguably nonexistent. The software race has moved on, away from silo’d applications and slick UIs to where our phones – and the companies that make them – are finally considering context alongside capability.
Context is a tricky thing to explain, certainly compared to the instant crowd-pleaser of a big OLED screen or a blisteringly-fast, multicore processor. Put simply, it’s a more intelligent way of your phone or tablet integrating itself into your life, whether that be more time-appropriate notifications, an awareness of the people around you, or of the other devices you might use. It’s about predicting rather than just reacting.
Google’s arguably doing the best at that of all the platform companies, and I/O was its opportunity to demonstrate that. Google Now is the most obvious expression of a system that offers up suggestions instead of waiting for you to go hunting for answers, but through the I/O keynote we saw signs of the disparate strands of Google’s products coming together in intelligent, time-saving ways.
Google Maps, for instance, won’t just autocomplete your recently-used addresses, but learn from your preferences in restaurants and other venues and make suggestions it thinks you’ll enjoy. Google Play Music All Access has a ridiculous name, but its ability to build dynamic playlists based on your favorite tracks will help cut down on one of the most common complaints about cloud-jukebox services: that they overwhelm with choice, and subscribers simply end up listening to the same playlists over and over again.
“It’s the cloud being clever, not just capacious”
The new Highlights feature in Google+ is another example of the cloud being clever, not just capacious. As many have discovered, thousands of photos quickly become unwieldy when it comes to sifting through them for the best shots, no matter whether you’re storing them locally or somewhere in the cloud.
Google’s ability to pick out the cream (and give them a little auto-enhancing along the way, just to make sure you’re looking tip-top) could mean you actually end up looking at them more, rather than feeling guilty because you’re not manually sorting them.
Google+ remains the big social network people love to slam, but it’s also the glue that looks set to hold all of these personalized services together. Just as Google hinted back in 2012, when it controversially changed its privacy policy to explicitly allow services to share information on the same registered user between themselves, the key here is the flow of data. That might not actually require people to actively embrace Google+ – indeed, they may well not even know they’re using it – but it will cement its relevance in a way that Facebook can’t compete with.
Make no mistake, context is the next big battleground in mobile. As our smartphones have become more capable, they’ve also become more voracious in their appetites for our time and attention. A prettier notifications drop-down is no longer a legitimate solution to information overload: pulling every possible alert into one place doesn’t make it any easier to cope with the scale of the data our phones and tablets can offer us.
The device which understands us better, and which handles our information in a way that’s bespoke, not one huge gush, will control the market. Google knows that; it also knows that hardware is basically just a way of getting a screen in front of users’ eyes, whether that be on a Chromebook like the Pixel, a phone or tablet from the Nexus series, or suspended in the corner of your eye like Glass.
In the same way, speech control – which also demonstrated marked improvements at I/O – is just another way to make sure people can engage with your products, on top of what touching, tapping, and clicking they’ve already been doing. More flexibility means more usage; more usage means more data to collate and customers that are further wedded to Google rather than any other company.
All of Google’s services are gradually interweaving. Google I/O 2013 is an ecosystem play, and it’s one of the biggest – and arguably most ambitious – we’ve ever seen. It’ll drag Google+ with it along the way, and it might even kickstart the “internet of things” when we start to see some legitimate advantages of having every device a web-connected node. Google didn’t give us a new phone for our pocket or a new tablet for our coffee table; instead, it gave us so much more.
Google I/O and the year of the Context Ecosystem is written by Chris Davies & originally posted on SlashGear.