Google+ and Glass just got the upgrade for lifelogging everything

If you’re still laughing at Google+, and at Google Glass, it might be time to stop; Google has just shown that the pair is its next route to digitally understanding everything about you, and it slipped that through in the guise of a simple photo gallery tool. Highlights is one of the few dozen new features Google+ gained at I/O this past week, sifting through your auto-uploads and flagging the best of them. Ostensibly it’s a bit of a gimmick, but make no mistake: Highlights is at the core of how Google will address the Brave New World of Wearables and the torrent of data that world will involve. By the end of it, Google is going to know you and your experiences even better than you know them yourself.

Google Glass headset

Lifelogging isn’t new – Microsoft Research’s Gordon Bell, for instance, has been sporting a wearable camera and tracking his life digitally since the early 2000s – but its component parts are finally coalescing into something the mainstream could handle. Cheap camera technology – sufficiently power-frugal to run all day, yet still high-resolution and bracketed with sensor data like location – has met plentiful cloud storage to handle the masses of photos and video.

More importantly, the public interest in recording and sharing memorable moments has flourished over the past few years, with Facebook over-sharing going from an embarrassment to commonplace, and Twitter and Tumblr evolving into stream-of-consciousness. For better or for worse, an event or occasion isn’t quite real enough for us unless we’re telling somebody else about it, preferably with the photos to prove it.

Into that arrives Glass. It’s not the only wearable project, and in fact it’s not even trying to immediately document your every movement, conversation, and activity. Out of the box, Glass doesn’t actually work as a lifelogger, at least not automatically. However, it didn’t take long for Explorer Edition users to tweak the wearable to grant it those perpetual-memory skills, though we need to wait for Google’s part of the puzzle before we see the true shift take place.

Memoto – the Kickstarter project that raised over half a million dollars for a wearable lifelogging camera firing off two frames a minute, all day, every day – isn’t really facing a hardware challenge but a software one, though the startup might disagree somewhat, given the slight delays caused by squeezing power-efficient camera tech into a tiny geek-pendant. The issue isn’t taking photos, or storing them: it’s organizing them in a way that’s anywhere near manageable for the wearer.

memoto_camera

Think about your last set of holiday photos. You probably took many more than you did in the days of traditional film cameras. Maybe you synchronized them with iPhoto, or uploaded them to a Dropbox or Picasa gallery. Perhaps they went on Facebook, either sorted through or – more likely, maybe – simply dumped en-masse. How many times have you looked through them, or shown them to somebody else?

Now, imagine having a whole day’s worth of photos to deal with. We’ll be conservative and assume you’re sleeping for eight hours – lucky you – and maybe have a couple of hours “privacy” time during which you’re showering, getting changed, or otherwise not camera-ready. Fourteen hours when you could be wearing your Memoto, then, or some other camera: 840 minutes, or 1,680 individual photos. In the course of a week, you’ve snapped 11,760 shots.
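That arithmetic is easy to verify – here’s the back-of-envelope version in a few lines of Python, using only the assumptions above:

```python
# Back-of-envelope photo counts for a two-frames-per-minute
# lifelogging camera, using the assumptions in the text.
HOURS_CAMERA_READY = 14   # 24h, minus 8h sleep and 2h "privacy" time
FRAMES_PER_MINUTE = 2     # Memoto's stated capture rate

minutes_per_day = HOURS_CAMERA_READY * 60
photos_per_day = minutes_per_day * FRAMES_PER_MINUTE
photos_per_week = photos_per_day * 7

print(minutes_per_day)   # 840
print(photos_per_day)    # 1680
print(photos_per_week)   # 11760
```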

“By the end of the year you’ve got over six hundred thousand photos”

By the end of the year, you’ve got over six hundred thousand of them. Sure, plenty of them will be of the same thing, or blurry because you were running across the road at the time, or too dark to make out details. Many, many of them will just be plain dull. But they’ll all be there, sitting in the cloud waiting to be looked at.

Nobody is going to sift through four million photos. And so the really clever thing the Memoto team is working on is the relevance processing all of those images are fed through. The exact details of the algorithm haven’t been confirmed – in fact it’s still something of a work-in-progress, and likely will be even when the first units start shipping out to Kickstarter backers – but it takes into account the location each image was taken at (there’s geotagging for each shot), the direction you’re facing, what interesting things are in the frame, and more.
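Memoto hasn’t published how its algorithm actually weighs those signals, so the sketch below is purely illustrative: a hypothetical scoring function combining sharpness, exposure, faces, and location novelty, with made-up weights.

```python
# Hypothetical sketch of multi-signal relevance scoring for lifelog
# photos. Memoto has not published its algorithm; the signals and
# weights here are illustrative only.
from dataclasses import dataclass

@dataclass
class Photo:
    sharpness: float      # 0..1, e.g. from edge-contrast analysis
    brightness: float     # 0..1, mean luminance
    faces: int            # number of faces detected in frame
    novel_location: bool  # geotag far from recently visited spots

def relevance(p: Photo) -> float:
    """Higher score = more likely to surface as a 'highlight'."""
    score = 0.0
    score += 2.0 * p.sharpness             # penalize motion blur
    # penalize frames that are too dark or blown out
    score += 1.0 - abs(p.brightness - 0.5) * 2
    score += 0.5 * min(p.faces, 4)         # people make photos interesting
    if p.novel_location:
        score += 1.0                       # somewhere new is noteworthy
    return score

dull = Photo(sharpness=0.2, brightness=0.1, faces=0, novel_location=False)
gem = Photo(sharpness=0.9, brightness=0.5, faces=2, novel_location=True)
assert relevance(gem) > relevance(dull)
```

Rank every frame by a score like this, keep the top thirty per day, and you have the shape – if not the substance – of what Memoto is promising.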

That way, you get the best of both worlds, or at least in theory. “All photos are stored and organized for you,” Memoto promises. “None are deleted, but the best ones are more visible.”

As Memoto sees it, that all amounts to about thirty frames per day. Thirty potentially review-worthy shots out of more than sixteen-hundred. Now, there’s no way of knowing quite how well the system will actually operate, and we’re bound to miss out on some gems and have our attention drawn to some duffers, but make no mistake: we need this layer of abstraction if lifelogging is to be more than just a boon for those selling hard-drives.

For a while, Google didn’t seem to have given managing the extra photos from wearables like Glass much consideration. In fact, the first evidence of photo sharing – automatically uploading to Google+, and being posted out with the generic #throughglass tag – was one of the more half-baked of the company’s implementations. That all changed, though, at I/O this week.

Google+ is the glue for Google’s ecosystem – what I call the “context ecosystem” – not least Glass; you may not want to use it as a social network, replacing or augmenting Facebook and Twitter, but if you want Google services or hardware you’re going to end up a Google+ user on some level. The new Highlights feature in Google+ is the key to unlocking Glass’ usefulness as a lifelogger.

“The Highlights tab helps you find photos you’ll want to share by automatically curating the images you upload to Google+ photos,” Google explained. “Highlights works by de-emphasizing duplicates, blurry images, and poor exposures while focusing on pictures with the people you care about, landmarks, and other positive attributes.”

For the moment, for most users, Highlights is a way of quickly cutting out duplicated shots. Take three or four pictures of your kids in the park, just to make sure they were all looking at the camera at the right time? Google+ Highlights will make sure you only see one of the nearly-identical frames, not all of them. No need to delete the others; just – as Gmail taught us with archive-not-delete email, a privilege of copious space and effective search – hide them from regular sight.
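Google hasn’t said how Highlights spots near-duplicates, but one classic technique is a perceptual “difference hash”: shrink each photo to a tiny grayscale grid, record whether each pixel is brighter than its right-hand neighbor, and compare the resulting bit strings. A minimal sketch, with toy 2×3 “thumbnails” standing in for real downscaled images:

```python
# One common way to spot near-duplicate shots (Google hasn't said how
# Highlights does it): a "difference hash" comparing adjacent pixels
# of a tiny grayscale thumbnail. Near-identical frames produce hashes
# that differ in only a few bits.

def dhash(pixels):
    """pixels: rows of grayscale values. Emits one bit per adjacent
    pixel pair: 1 if the left pixel is darker than the right."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left < right else 0)
    return tuple(bits)

def hamming(a, b):
    """Number of bit positions where two hashes disagree."""
    return sum(x != y for x, y in zip(a, b))

frame1 = [[10, 20, 30], [40, 35, 50]]
frame2 = [[11, 21, 29], [41, 36, 52]]   # almost the same scene
frame3 = [[90,  5, 80], [ 1, 99,  2]]   # a different scene

assert hamming(dhash(frame1), dhash(frame2)) <= 1   # near-duplicates
assert hamming(dhash(frame1), dhash(frame3)) >= 2   # distinct shots
```

Cluster frames whose hashes sit within a small Hamming distance, surface one per cluster, and you get exactly the “see one, hide the rest” behavior Google describes.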

google-plus_highlights

As the flow of photos into Google+ turns into a torrent, fueled not least by wearables, however, it’s those vague “other positive attributes” Google mentions that will become most important. Highlights is going to become a curator not only of your galleries, but of how you reminisce; how you look back on what you did, where you did it, and who you did it with.

Google can already identify buildings, and locations, and people. It knows who your friends are. Factor in Events, and the communal photo sharing feature, and that will help Google+ fill in even more of the gaps. If it knows you were with your best friend, and your best friend was in Paris at the time, and what a number of famous Parisian landmarks look like, it’ll be able to do a pretty good job at piecing together a curated “holiday memories” album that’s probably more detailed than your own recollection of the trip.

“The comfort levels reported at I/O show this is not just old- versus new-school”

If you’re clenching various parts of your anatomy over fears about privacy, you’re probably right to. Even with only about 2,000 Glass Explorer Edition headsets made, there’s already outsized controversy over the rights and responsibilities around having photos taken in public and in private. Those at Google I/O this past week are undoubtedly a tech-savvy, open-minded bunch, but the range of comfort levels reported about being in the Glass gaze is a telling sign that there’s more to this than just old-school versus new-school.

Google Glass in box

The discussion is going to be broader than Google, of course – a Memoto camera, clipped to your coat or shirt, is arguably more discreet, and it’s almost certainly not going to be the last wearable camera – but how the companies involved process the data created is likely to be the biggest factor, and Google has a track record of giving privacy advocates sleepless nights.

If Glass – and wearables along with lifelogging in general – is to succeed, however, this is a discussion that will have to be settled. We’re not talking about “how okay” it is for your email account to talk to your calendar account. If the EU decides there should be a clear division between those in the name of user privacy, then you might have to manually create appointments based on email conversations; but if the huge and inevitable rush of photos and video that wearables will facilitate isn’t addressed, then Glass and its ilk will stumble and fail. Our new digital brain needs permission to work its magic, but we’re still in the early days of seeing just how magical that might be.


Google+ and Glass just got the upgrade for lifelogging everything is written by Chris Davies & originally posted on SlashGear.
© 2005 – 2012, SlashGear. All rights reserved.

Meta 1 augmented reality headset fully detailed on Kickstarter

Earlier this morning, we posted about the Meta 1 augmented reality headset — a rather unique pair of glasses that lets you play around with virtual 3D objects in the real world. Being right on schedule, the project has officially hit Kickstarter, with the goal of raising 100 grand in just 30 short days.


Right off the bat you can tell that Meta 1 is a bit different from Google Glass, but that’s because it serves a rather different function. While Glass merely consists of a small display that shows you alerts and other information, Meta 1 shows you virtual 3D objects mixed in with the real world in front of you.

The device itself is still in the development stages, hence the Kickstarter campaign being for a dev kit of the Meta 1. As such, the glasses aren’t quite as compact as Google Glass: the Meta 1 features rather squared-off frames that look uncomfortable, with a 3D webcam mounted on top. Granted, it’s only meant for developers, so the final version should be much more geared toward consumers.

Essentially, the goal for the Meta 1 is to create HUDs similar to those seen in Iron Man and Minority Report, but once more developers join in and begin to make apps for the headset, the possibilities will most likely be endless. The video above gives some decent examples of what’s possible with the Meta 1.

The hardware specs of the Meta 1 are quite impressive at this point. You get a 960×540 resolution and a 23-degree field of view for each eye. The webcam that sits on top includes two cameras (one for each eye), and the glasses have HDMI and USB input. And despite looking a bit cumbersome to wear, they only weigh a little over 10 ounces.


The company plans to have these development kits shipped out starting in September of this year. As for price, the full development kit will cost $750, which is a bit steep compared to other headsets, like the Oculus Rift, but the Meta 1 does seem a bit more complex. Granted, it’s still half the price you’d pay for Google Glass Explorer Edition, so if your bank account is only allowing so much cash to be spent, the Meta 1 is the cheaper grab.


Meta 1 augmented reality headset fully detailed on Kickstarter is written by Craig Lloyd & originally posted on SlashGear.

Meta 1 true augmented-reality headset dev-kit presales inked in for today

It’s turning into a week of wearable computing, with Epson-partnered start-up Meta readying preorders for its true augmented reality headset. First revealed back in January, Meta offers a fully digitally-mediated view of the world – allowing for graphics, video, and text to be superimposed on real people and objects – rather than the Google Glass approach of floating a subdisplay in the corner of your eye. Sales for developers will kick off at 9am Pacific (noon Eastern) on Friday, May 17.

meta_ar_wearable_hero-580x380

The current developer device, the Meta 1, is admittedly somewhat less aesthetically-pleasing than Google’s Explorer Edition of Glass. Epson has brought its Moverio BT-100 to the party, a headset which projects information onto both lenses rather than just one eye. It also has integrated WiFi, runs Android, and lasts for an estimated six hours on a full charge (it’s worth noting that the battery and processing is housed in an external box, which connects to the headset via a cable).

Onto that, Meta bolts a low-latency 3D camera which is used to track hand movements. Resolution down to individual fingertips is supported, and so complex gestures – like a “thumbs up” movement to “Like” a post on Facebook – can be recognized.

Meta concept video:

Just as per Google’s intentions with the Explorer Edition, Meta is hoping to leverage developer interest in preparation for a far more aesthetically-pleasing consumer version of its headset. That could eventually look like a regular pair of sunglasses, with the twin-camera array neatly slotted into the bridge. Whether that sort of design could also accommodate sufficient battery capacity for any meaningful period of use remains to be seen, however.

meta_wearable_ar_concept (1)

Meta is also yet to confirm how much the Meta 1 dev-kit will cost. The unmodified Moverio headset has a list price of $700 (its street price is down to just $400), though of course that doesn’t take into account the added camera hardware, plus Meta’s external processing box and SDK. The first fifty dev orders will get a $200 discount, however, Meta revealed to pre-interest signups in an email this morning.

Google left its Glass discussion out of the opening I/O keynote, saving it for day two developer sessions where it showed off warranty-voiding Ubuntu installs and native app support with the Mirror API. However, it isn’t the only wearable we’ve been playing with this week. Recon Instruments brought along its Recon Jet headset, a sports-centric take on the concept, which is expected to begin shipping later in 2013.


Meta 1 true augmented-reality headset dev-kit presales inked in for today is written by Chris Davies & originally posted on SlashGear.

Google Glass lead industrial designer talks modular fashion at I/O 2013

This week at Google I/O 2013, the company’s yearly developer conference, the wearable technology device Glass was discussed as a scalable fashion platform by the project’s lead industrial designer. In a fireside chat with several other creators and key minds from Google on the Glass project, Isabelle Olsson let it be known that Glass has come a long way since its first day in the lab – she had one of the original prototypes on hand to show off.


Olsson showed a bulky and – according to her – rather heavy piece of hardware, a mix of geek-massive and hipster-odd. She described walking into the room at Google on the day the first prototypes had been mocked up as exciting, if not a little scary. One of the first changes the team had to make, she said, was in the unit’s ability to adjust.


“When I joined the project, we thought we needed 50 different adjustment mechanisms, but that wouldn’t make a good user experience. So we scaled it down to this one adjustment mechanism.” – Isabelle Olsson, Google Glass Lead Industrial Designer


Olsson also showed off Glass’ ability to be taken apart and moved. There’s one piece that acts as the most basic frame and the other – the computer – that can be attached to many different bits and pieces being built today.

“We make Glass modular. In this stage, this means you’re able to remove the board from the main frame. This is pretty cool. This opens up a lot of possibilities. It opens up possibilities for not only functionality but also scalability.” – Isabelle Olsson, Google Glass Lead Industrial Designer


Glass is still at a point where the team cannot tell the public when it will be ready to sell to consumers, and they wouldn’t comment much on the product’s future beyond that. This was called into question by a boisterous audience member who yelled:

Why not?!

To which the host of the chat – Timothy Jordan, Senior Developer Advocate at Google for Project Glass – replied: “because it’s Google’s policy not to comment on future unannounced products. And because I follow rules.” To which the same audience member, hilariously deflated, replied:

Ok.

This attitude reflected the thoughts and wishes of the entire audience – or at least those without the device on their temples. With more than 30 members of the audience wearing the developer “Explorer Edition” in full effect, we were in rare company without a doubt.



Google Glass lead industrial designer talks modular fashion at I/O 2013 is written by Chris Burns & originally posted on SlashGear.

Sergey Brin talks Glass: Camera stabilizer incoming

Walk the floors at Google I/O and if you’re lucky you’ll run into Sergey Brin, who spent some time telling us about the development process behind Google Glass as well as a teaser for the update roadmap. Surrounded by fans and sporting his own Glass, Brin explained some of the decisions around the use of a monocular eyepiece, and of its placement out of the line-of-sight rather than directly in front of the wearer, as you might expect from a true augmented-reality device. However, he also revealed that a future software upgrade will address one of our own issues with Glass: keeping video steady when you’re filming it from a wearable.


We’ve already been impressed by how Glass holds up as a wearable camera, particularly during situations – like when you’re playing with your kids or demonstrating a new gadget – when you need your hands to be free. However, keeping your head still when you’re recording a conversation takes more than a little getting used to. All too easily you end up with nodding video, as you unconsciously move and react to the person you’re talking to.

We mentioned that to Brin, and he confirmed that it’s something Google is actually working on addressing. “Stay tuned, we’re gonna have some software that helps you out” he told us; it’s unclear how, exactly, that will be implemented, but digital image stabilization is already available on smartphones, and Google might be using a similar system. Glass also comes equipped with various sensors and gyroscopes – some of which are only partially utilized in this early iteration – and so Google could tap into those to do image-shifting and compensate for head-shake.
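Google hasn’t detailed how that stabilization will actually work, but a plausible gyro-assisted approach is to capture a slightly larger frame than you output, then slide the crop window to counter the measured head rotation. A toy sketch, with made-up numbers:

```python
# Hedged sketch of gyro-assisted digital stabilization: capture a
# slightly larger frame than you output, then slide the crop window
# to counter measured head rotation. How Glass will actually do it
# hasn't been announced; all constants here are illustrative.

SENSOR_W, SENSOR_H = 1280, 720   # captured frame (illustrative)
OUT_W, OUT_H = 1200, 660         # output frame, leaving a margin
PIXELS_PER_DEGREE = 20           # rough optics-dependent constant

def crop_origin(yaw_deg, pitch_deg):
    """Top-left corner of the crop window, shifted opposite to the
    measured rotation and clamped to the sensor bounds."""
    cx = (SENSOR_W - OUT_W) // 2 - yaw_deg * PIXELS_PER_DEGREE
    cy = (SENSOR_H - OUT_H) // 2 - pitch_deg * PIXELS_PER_DEGREE
    cx = max(0, min(SENSOR_W - OUT_W, int(cx)))
    cy = max(0, min(SENSOR_H - OUT_H, int(cy)))
    return cx, cy

print(crop_origin(0, 0))     # (40, 30): centered when the head is still
print(crop_origin(1, -0.5))  # (20, 40): shifted to cancel a small nod
```

The trade-off of this approach is a slightly narrower output field of view, which is why smartphone makers using digital stabilization accept a small crop.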

As you might expect for a device named the “Explorer Edition” and aimed squarely at developers, Glass is still a work-in-progress. Google aims to translate what it learns from this relatively small-scale deployment to the eventual consumer version – tipped to arrive in 2014 – including both design and functionality refinements.

Google Glass Tangerine

We asked Brin about the style decisions Google made along the way, and at which point the aesthetics of Glass came into the process. “We did make some functional mockups,” he told us, “but mostly we made functional but uglier, heavier models – style came after that.”

Style is, when you’re dealing with a device you wear, distinct in a very particular way from design. Even if the work on physical appearance followed on after function, how Glass sits on the face did not.

Glass-chat with Sergey Brin at Google I/O 2013:

“Very early on we realized that comfort was so important, and that [led to] the decision to make them monocular,” Brin explained. “We also made the decision not to have it occlude your vision, because we tried different configurations, because something you’re going to be comfortable – hopefully you’re comfortable wearing it all day? – is going to be hard to make. You have to make a lot of other trade-offs.”

We’ll have more coverage from Google I/O all week, so catch up with all the news from the epic 3.5hr keynote yesterday!

Glass Video: Controlling AR.Drone with NVIDIA Shield


Sergey Brin talks Glass: Camera stabilizer incoming is written by Vincent Nguyen & originally posted on SlashGear.

Recon Jet hands-on

Announcing a product during a major event like Google I/O takes some real courage, especially when you’re revealing a device extremely similar to one Google itself is headlining with. That’s what Recon is doing with the Jet, a wearable that’s drawn instant comparisons to Google Glass. The device floats a virtual widescreen display below the wearer’s left eye and uses Android as the basis for its user interface.


Recon Jet isn’t yet ready to be sold – the version we’re having a peek at here at the Google developer event is a pre-production item – but once it’s ready, it’ll be largely the same as what we’re seeing on the inside. The device runs a dual-core mobile processor (the name of which we’re not allowed to reveal quite yet) powering Android 4.2 Jelly Bean with a custom Recon-made user interface on top.

You’ll control the machine with a miniature touch-sensitive optical pad that sits on the side of the device near the display. Tapping the pad, and swiping left, right, up, and down, gives you access to the device’s abilities and settings.


Inside you’ll be working with GPS, WiFi connectivity for web, Bluetooth 4.0, and ANT+. With ANT+ you’ll be able to connect to a variety of other sports sensors – this device is, after all, made for hardcore sporting enthusiasts. There’s also an HD camera, whose megapixel count hasn’t yet been revealed.


You’ll also get “gaze detection” for instant access to the machine’s abilities, the display turning on and off depending on whether you want to work with it. Your eyes will decide.


Have a peek at our brief adventure with the device, and note that the main aim of revealing it this week is to find developers who want to work with its SDK ahead of the final release. The machine will be released to the public before the end of the year – we’ve confirmed this once again in person with Recon – putting its appearance well before Google Glass hits the streets in a consumer edition. Pricing and release dates are coming soon.



Recon Jet hands-on is written by Chris Burns & originally posted on SlashGear.

Recon Jet takes Glass-style wearable computing to the slopes

Google has Glass, but Recon Instruments is kicking off Google I/O today with a wearable computer of its own, the Recon Jet. Integrating a microdisplay into a set of sports sunglasses, the Recon Jet floats a virtual widescreen in the lower corner of your right eye, with controls integrated into the side, and most of the connectivity you’d expect from a current phone or tablet.

Recon Jet_white

Inside, there’s an unspecified dual-core processor, WiFi, Bluetooth, and GPS, along with ANT+ fitness connectivity. Sensors include an accelerometer, gyroscope, altimeter, magnetometer, and a thermometer, while an HD camera pokes out the front.

Recon Jet_balanced engineering

There’s still some wiggle-room in the specs, though. For instance, Recon Instruments hasn’t said what resolution the display runs at, though our guess would be that the impressively-detailed renders in the demo video below aren’t quite what the Recon Jet itself can manage.

Obviously the glasses themselves are somewhat less discreet than Google’s Glass, though they’re intended to provide protection during sports like skiing and snowboarding. What the company is aiming to do at I/O is publicize its SDK, which will allow developers to hook their apps and services into the headset.

Recon Jet overview:

Potential uses include activity tracking and fitness monitoring, showing your health performance for instance, as well as streaming music and video. The Recon Jet could also be used as a remote display for apps running on your phone or tablet, and the company says it’s working with “some of the top fitness companies and communities” to cook up titles in time for the eventual launch.

Recon Jet will be available later in 2013, Recon Instruments says. Pricing and specific launch dates are not confirmed, though there’ll be a limited-availability initial production run which will seemingly be offered on an invitation basis to developers, similar to how Google has managed the Glass Explorer Edition roll-out.



Recon Jet takes Glass-style wearable computing to the slopes is written by Chris Davies & originally posted on SlashGear.

Withings Smart Activity Tracker hits the FCC with a catchier Pulse name


Withings introduced its Smart Activity Tracker at CES with many details regarding how it worked, but few hints of just when it would reach our belts and wrists. Courtesy of an FCC approval, we now know that it’s relatively close. The exercise and sleep sensor has gone through US testing with no real surprises in hardware, but a much simpler branding strategy: the manual suggests the tracker will just be called the Pulse, which could help in a market full of one-word rivals. About all that’s left is for Withings to say exactly where and when we can get its new wearable.


Source: FCC

MedRef for Glass adds face-recognition to Google’s wearable

If there’s one thing people keep asking from Google Glass and other augmented reality headsets, it’s facial-recognition to bypass those “who am I talking to again?” moments. The first implementation of something along those lines for Google’s wearable has been revealed, MedRef for Glass, a hospital management app by NeatoCode Techniques which can attach patient photos to individual health records and then later recognize them based on face-matching.

medref_for_glass_facial-recognition

Cooked up at a medical hackathon, the app is still in its early stages, though it does show how a wearable computer like Glass could be integrated into a doctor or nurse’s workflow. MedRef allows the wearer to make verbal notes and then recall them, as well as add photos of the patient and other documents to their records, all without using your hands.

However, it’s the facial-recognition which is arguably the most interesting part of the app. All the processing is done in the cloud, with the current demo using the Betaface API: first, Glass is loaded up with photos of the patient, and then a new photo is compared to the “facial ID” those source shots produce with the matching tech giving a percentage likelihood of it being the same person.
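Betaface’s internals aren’t public, but a generic face-matching pipeline reduces each face to a feature vector (an “embedding”) and reports similarity as a percentage. A hypothetical sketch with made-up vectors:

```python
# MedRef's match percentage comes from the cloud-side Betaface API,
# whose internals aren't public. This is a generic sketch of the idea:
# represent each face as a feature vector and report similarity as a
# 0-100 figure. All vectors below are made up for illustration.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def match_percent(enrolled, candidate):
    """Map similarity [-1, 1] to a 0-100 'match' figure."""
    return round(50 * (cosine_similarity(enrolled, candidate) + 1), 1)

patient_on_file = [0.2, 0.8, 0.1, 0.5]
same_person = [0.25, 0.75, 0.15, 0.5]   # slightly different photo
stranger = [0.9, 0.1, 0.7, 0.05]        # a different face entirely

assert match_percent(patient_on_file, same_person) > \
       match_percent(patient_on_file, stranger)
```

A real system would then apply a decision threshold to that figure – which is exactly where the 55-percent result in the demo below starts to look marginal.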

MedRef for Glass video demo:

There are still some rough edges to be worked on, admittedly. In the demo above, for instance, even with just two individuals known to Glass, the face-recognition system can only give a 55-percent probability that it has matched a person. Any commercial implementation would also need to be able to see past bruising or surgery scars, which could be commonplace in a hospital, and evolving over the course of a patient’s stay.

medref_match

Currently, the MedRef app data is limited to a single Glass. Far more useful, though, would be the planned group access, which would allow, say, multiple surgery staff or a group of doctors to more readily find notes for patients they might not have previously seen. Meanwhile, the form-factor of Glass would leave both hands free, something we’ve seen other wearables companies attempt, such as the HC1 running Paramedic Pro.

Nonetheless it’s an ambitious concept, and one which could come on in leaps and bounds as cloud-processing of face-matching gets more capable. “In the future,” NeatoCode suggests, “on more powerful hardware and APIs, facial recognition could even be written to run all the time.” That could mean an end to awkward moments at conferences and parties where someone remembers your name but you can’t recall theirs.

There’s more detail at the MedRef project page, and the code has been released as an open-source project on GitHub.

SOURCE: SelfScreens


MedRef for Glass adds face-recognition to Google’s wearable is written by Chris Davies & originally posted on SlashGear.

Sensoria Socks technology aims to prevent injury before it happens

As wearable computing technology continues to improve, companies are looking for more and more ways to use the data it gathers to better their products, and ourselves. With Sensoria Socks, Heapsylon is using new technology not only to track fitness like the Nike FuelBand and others, but also to prevent injuries before they happen.


Sensoria Socks, from Heapsylon’s Sensoria Fitness brand, are a patent-pending wearable technology that aims to do exactly that: bring a whole new level to our fitness and daily lives, and help athletes avoid injury. Products on the market like the Nike FuelBand, Fitbit, and Jawbone UP all track steps, speed, calories, and more, but imagine a product that can track weight distribution on the foot as you stand, walk, and run. Sensoria Socks rely on sensor-equipped textile materials, along with the accompanying anklet band pictured below.

Of the more than 25 million runners in the US alone, more than half are prone to some sort of running-related injury or pain – and that’s without even counting other athletes. Instead of dealing with injury, we should be looking at ways to prevent it before it happens. This is where Heapsylon comes into play: Sensoria Socks can identify poor running styles, then use a custom-designed app to coach the runner out of those tendencies, reducing the risk of injury. And like any other fitness app, it lets runners benchmark and analyze performance, limits, distance, and more.

According to Heapsylon’s demo, when an injury or issue does happen, Sensoria can also track patient adherence, progress, and much more. The accompanying application syncs the data over Bluetooth to your smartphone, letting users track anything and everything with the new technology. As mentioned above, the app will flag poor running technique, but everything else will be available too.

The anklet tracks activity type and level, heart rate, blood pressure, and breathing rate, then relays all of that to the app dashboard to show how far and how fast you run, calories burned, and more. Even those with good technique can study and learn better habits, reach higher goals, and train harder without strain.


This is the idea behind wearable computing being about more than just fun (read: Google Glass), and it really opens the door for products like Sensoria Socks. We’re hearing they’ll be available later this year, helping runners and athletes dodge injuries and up their game at the same time.


Sensoria Socks technology aims to prevent injury before it happens is written by Cory Gunther & originally posted on SlashGear.