Cute Japanese Robot Rides Bike, Plans Human Conquest

Two arms, two legs, two wheels and one diabolical, merciless robot brain

It would seem that putting a robot onto a bike would be a pointless exercise akin to putting gas-powered horses before a carriage. But the impracticality of this little Japanese bike-bot is easily outweighed by its charm.

Lest you think that this is just a remote controlled bike with a dummy perched on top, watch until the 45-second mark. Here you’ll see the little fellow take his hand off the bars to wave, and put both feet down on the floor.

The robot is called PRIMER-V2, and was built by Dr. Guero of the website AI & Robot. He pedals to move forward and balances like a human rider, steering into any falls. The balance in this case is provided by a PID (proportional-integral-derivative) controller, which uses a feedback loop to correct for error.
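
For the curious, here is a minimal Python sketch of what a PID balance loop of that sort looks like: steer toward the lean so the wheels roll back under the robot's center of mass. The lean-angle sensor, steering servo and gains are all hypothetical, since Dr. Guero hasn't published his control code.

```python
# Minimal PID balance-loop sketch. Sensor, servo and gains are hypothetical;
# this is not Dr. Guero's actual controller.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


pid = PID(kp=4.0, ki=0.5, kd=0.8)   # illustrative gains only

def control_step(lean_angle_deg, dt=0.01):
    """Steer into the fall: a rightward lean yields a rightward steer command."""
    return pid.update(lean_angle_deg, dt)   # hypothetical steering-servo angle
```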

However the PRIMER manages to stay upright, Dr. Guero has given us a valuable insight into how the robot apocalypse will inevitably unfold. It will come from Japan, and it will come — unstoppably — on two wheels.

Biped robot riding a bicycle [AI2000 via Kottke]

Bendy Nokia Phone Prototype and 8 Other Bizarro Cell Phone Concepts


Today’s smartphones all seem to share the same silhouette. You’ll find a large, flat touchscreen on the front, and maybe a few buttons across the bottom. The form factor will be thin enough to fit in your pocket, and it might include a slide-out QWERTY keyboard. Snooze. But it doesn’t have to be that way, as futuristic cell phone concepts constantly remind us.

At the Nokia World Conference in London — the location where Nokia’s Windows Phone handsets made their debut — a new flexible handset was being demoed. It’s called the Nokia Kinetic Device and, yes, the entire phone is being flexed in the photo above.

The entire device is made of plastic, right down to the AMOLED display on the front. Rather than using swipes and pinches to navigate the UI, you would use bends and twists. To zoom into a page, you bend the phone so its center buckles towards you; zoom out by doing the opposite. A twisting action is used to scroll through photos or adjust the volume.
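
To make that interaction concrete, here is a hypothetical sketch of how bend and twist readings might be mapped to those actions. Nokia hasn't published the Kinetic Device's software, so the sensor values, thresholds and the ui.zoom/ui.scroll calls below are purely illustrative.

```python
# Illustrative mapping of flex-sensor readings to UI actions; not Nokia's code.

def handle_flex(bend, twist, ui):
    """bend > 0 means the center buckles toward you; twist is degrees per frame."""
    BEND_DEADZONE = 0.05             # ignore tiny flexing of the plastic body
    if bend > BEND_DEADZONE:
        ui.zoom(1.0 + bend)          # bend toward you to zoom in
    elif bend < -BEND_DEADZONE:
        ui.zoom(1.0 / (1.0 - bend))  # bend away from you to zoom out
    if abs(twist) > 2.0:
        ui.scroll(twist * 0.5)       # twist to flick through photos or nudge volume
```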

Since it is all plastic, and all bendy, the prototype lacks a number of features that would allow it to be a true smartphone — or even a cell phone, if we’re being honest. The touchscreen isn’t capacitive, there’s no camera, no GPS and no actual phone functionality. We said it was a prototype, right?

So it clearly has a way to go before it starts landing in consumer hands. Here’s a collection of eight other concepts and prototypes that push cell phone design to the limit.


Microsoft Video Envisions a Touch-Based Future

Just relaxing with a paper-thin tablet and smartphone, at some point in the theoretical future. Image: Microsoft

How do you see yourself living a decade from now? For many of us, it’s difficult to picture how technologies will change and evolve in that time span. But for some folks at Microsoft, it’s their job to figure out where technology is headed, and how to make it happen. And a recently released video shows just what they envision.

The video, titled Productivity Future Vision, shows us a world much like our own, but cooler. Microsoft sees touchscreens and holographic displays dominating our daily experiences. Flat surfaces of any kind are transformed into useful, interactive displays as well.

“The video explores what productivity experiences might plausibly look like five to ten years in the future,” David Jones, director of Microsoft’s Envisioning Team, told Wired.com in an interview. The concepts presented in the video don’t necessarily communicate plans for future Microsoft products, though.

In an interview with GeekWire, Microsoft GM of technical strategy Chris Pratley said, “It would be relatively trivial to do a kind of Hollywood thing, where you just say what would be cool, and you whip it up and put it on the screen. But everything in the video, we could footnote everything about where it’s coming from, who’s working on it, why we think it’s going to happen.” Essentially, Microsoft’s video isn’t just a bunch of hot air.

The video was produced by the Microsoft Office team, and is a follow-up to a “Microsoft 2019” video that the company created in 2008. It builds on several themes established in the earlier video, even using a few of the same actors.

Microsoft heavily emphasizes how thin it thinks future displays will be. Smartphones, tablets and desktop monitors all measure in at wafer-like thicknesses: blank white slabs on which images and video can be pawed, swiped and manipulated.

On-screen images can be holographic, so tilting the angle of your phone, for instance, could show you a 3-D rendition of a bar graph. And images aren’t confined to the dimensions of the touchscreen you’re using.

And the technology goes even further in the kitchen: A tap on the refrigerator door reveals its contents, and tapping on a food item can bring up recipes relating to that item.

“Many of the technologies in the video, such as stereoscopic-3D displays … speech recognition, real-time collaboration, and data visualization are already part of products available today,” Jones told Wired.com. The video just expands on their capabilities to where they could be sometime in the next decade.

There’s one big thing that’s missing from the piece: Paper. A woman peruses a magazine on a large legal-pad sized tablet. A child seated at a kitchen table draws and plays a game on another touchscreen device. A dad moves a virtual Post-It note from one spot to another on an interactive wall calendar. Hand gestures pass data from a slate to the countertop. For all intents and purposes, papyrus is virtually extinct.

There are also a number of user-experience touches in the video that would make our computing lives more comfortable. For example, around the 3:30 mark, a man at a desk opens up a video (or video chat) with a woman, and as he scoots his chair back, her image enlarges proportionately. This could feasibly be accomplished using facial recognition and some IR sensor technology to measure the distance between the face and the screen.
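
As a rough illustration of that distance-proportional scaling, here is a short Python sketch. The face-distance measurement, reference distance and window dimensions are assumptions for the sake of the example; nothing here comes from Microsoft's video.

```python
# Sketch of distance-proportional video scaling: double the viewing distance,
# double the on-screen size, so the apparent (angular) size stays roughly
# constant. All values are illustrative assumptions.

REFERENCE_DISTANCE_CM = 60.0   # distance at which the video is shown at 1x

def scale_for_distance(face_distance_cm, base_width_px=640, base_height_px=360):
    factor = face_distance_cm / REFERENCE_DISTANCE_CM
    factor = max(0.5, min(factor, 3.0))   # clamp so the window stays sane
    return int(base_width_px * factor), int(base_height_px * factor)

# Scooting back from 60 cm to 120 cm doubles the window in each dimension.
print(scale_for_distance(120))   # -> (1280, 720)
```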

“In the future, productivity software will work to extend our human capabilities, transitioning from the role of a passive tool to that of an active assistant,” Jones said.

Active assistant, eh? That sounds familiar.

And Microsoft isn’t the only company to have released conceptual videos of what the future could be like. In the late ’80s, Apple famously released a set of videos illustrating a concept called Knowledge Navigator, one we’re moving closer to these days with Siri and touchscreen iOS devices.

“Microsoft understands the vision of what consumers need in the post-PC era. What they need to demonstrate is that they can execute this vision before their competitors do,” Forrester analyst Sarah Rotman Epps said of the video via email.

The video is below if you want to check it out. Would you enjoy this world? Is there anything missing? Sound off in the comments.


Apple Patent Uses 3D Gestures to Control an iPad

Forget relying solely on touch to control your Apple device. On future iPads, you may be able to control your tablet from across the room using 3D gestures, such as a swirl or swipe of the hand.

As suggested by a newly uncovered Apple patent, you would be able to manipulate and control graphical elements on your display, such as icons, media files, text and images. The gestures themselves could take many forms: geometric shapes (e.g., a half-circle or square), symbols (like a check mark or question mark), the letters of the alphabet, and other sorts of predetermined patterns.
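
The patent doesn't disclose how such gestures would be recognized, but a common approach is to compare a traced stroke against stored templates. The Python sketch below is purely illustrative, not Apple's method: it normalizes a captured stroke and picks the nearest predefined template.

```python
# Illustrative template matching for predefined gestures (circle, check mark,
# letters and so on). Assumes each stroke and template was sampled with the
# same number of points, e.g. 32 camera frames per gesture.

import math

def normalize(points):
    """Translate a stroke to its centroid and scale it to a unit box."""
    xs, ys = zip(*points)
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    return [((x - cx) / scale, (y - cy) / scale) for x, y in points]

def recognize(stroke, templates):
    """Return the name of the closest predefined gesture template."""
    probe = normalize(stroke)
    def distance(template):
        ref = normalize(template)
        return sum(math.dist(a, b) for a, b in zip(probe, ref)) / len(probe)
    return min(templates, key=lambda name: distance(templates[name]))

# templates = {"circle": [...], "check_mark": [...]}  # captured once, 32 points each
```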

One interesting application the patent highlights is video annotation and editing via a gesture-based toolbar. The toolbar would provide pre-set options for beginners, but would also allow more advanced users to customize their own gestures.

A previously discovered patent indicates that Apple could be working on an integrated projector for iDevices that would incorporate physical gestures as a method to manipulate a projected image. This newer patent, however, focuses more on the gestures themselves and other ways they could be used to control onscreen images and video. There’s no mention of Siri or combining voice control with physical gestures.

The 3D gesture-capturing method would employ a device’s front-facing camera. The iPad 2, iPhone 4 and iPhone 4S all include a front-facing camera, so if Apple, say, decided to integrate this feature in an upcoming version of iOS, it’s possible that legacy iDevice models could employ the technology as well. That said, the patent does suggest that older iPhones may not have enough processing power for the gesture-capturing workload, as it shows a way to transfer video from the iPhone to an iPad for more advanced editing options.

The patent pre-defines a number of gestures, such as ones for facial recognition, a selection gesture and a pointing gesture (to identify a specific section of an onscreen image).

The patent was originally filed in mid-2010.

Image: Patently Apple


Japanese Robot Zombie Walks Without Power, Brain, Mercy

Before reading this post, I recommend you set the scene by listening to the theme from Terminator 2. If you have Spotify, here’s the track. I’ll wait.

Now, watch this:

Researchers in the Sano Lab, at the Nagoya Institute of Technology in Japan, have built a robot that can walk forever without power, sensors or even an electronic brain. It is powered solely by the potential energy acquired by strolling downhill.

Instead of a regular bipedal gait, the robot has two sets of two legs, in inner and outer pairs. It moves forward by falling and then catching itself, powered by gravity.
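
The physics is pleasingly simple: each step trades a little height for the energy lost when the swing legs hit the ground. A back-of-the-envelope calculation, with made-up numbers rather than Sano Lab's specifications, shows how little slope it takes.

```python
# Why a passive walker needs no battery: energy recovered per step on a slope.
# All numbers are illustrative assumptions, not Sano Lab figures.

import math

mass_kg = 3.0          # assumed robot mass
g = 9.81               # gravity, m/s^2
step_length_m = 0.15   # assumed stride
slope_deg = 3.0        # assumed downhill grade

height_drop = step_length_m * math.sin(math.radians(slope_deg))
energy_per_step = mass_kg * g * height_drop   # joules recovered per step

print(f"{energy_per_step:.3f} J gained per step")
# As long as impact and friction losses per step stay below this figure,
# the robot keeps walking with no motor, sensor or controller at all.
```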

The researchers hope to make this commercially available in a couple of years as an aid for those who have difficulty walking. I have trouble seeing any further than total robot apocalypse, but I’ve always been cynical.

Then again, I guess it won’t be too hard to escape these malevolent marching hordes — just make sure you run uphill. In fact, it can’t be long before they all end up in the Dead Sea in Israel, whose shore — at 424 meters or 1,391 feet below sea level — is the lowest point on dry land.

Passive Walking Robot Propelled By Its Own Weight [DigInfo TV]

Skin-Like Sensors Could Bring Tactile Sensations to Robots, Humans

This transparent sensor can stretch to great lengths without getting deformed, all the while sensing pressure. Image: Stanford News

We’re used to tapping away at flat, glass-covered touchscreen devices like smartphones and tablets. A group of Stanford researchers have taken that capacitive touch concept and applied it to a completely new form factor, which could have wide-ranging applications in consumer technology, robotics and beyond.

The team created a transparent, super-stretchy sensor that can be used repeatedly without getting deformed, snapping back into shape after each use. The team hopes that their sensor could be used in medical applications like pressure-sensitive bandages, or even as an outer, skin-like layer to create touch-sensitive limbs or robots. Of course, it could also be used on touchscreen devices and computers.

The researchers sprayed carbon nanotubes onto a layer of silicone and then stretched the material a few times, which essentially organizes the nanotubes into “springs.” These springs can stretch in any direction and can be used to measure the force exerted on them over and over again without getting stretched out of shape.

The capacitive touch sensor works like this: There are two conductive parallel plates. When one or both are pressed, the distance between them gets smaller, increasing the capacitance of the sensor. That increase can be quantified and measured. In this case, the two conductive parallel plates facing one another are made of nanotube-coated silicone, with a middle layer of silicone that stores charge.
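
That relationship is just the parallel-plate formula, C = εA/d: squeeze the middle layer and the gap d shrinks, so the capacitance rises. Here is a tiny worked example with illustrative dimensions, not Stanford's actual figures.

```python
# Parallel-plate capacitance: C = eps0 * eps_r * A / d.
# Pad area, layer thickness and permittivity below are illustrative only.

EPS_0 = 8.854e-12   # vacuum permittivity, F/m
EPS_R = 2.7         # assumed relative permittivity of the silicone layer

def capacitance(area_m2, gap_m):
    return EPS_0 * EPS_R * area_m2 / gap_m

area = 1e-4                        # 1 cm^2 sensor pad
rest, pressed = 500e-6, 400e-6     # gap shrinks from 500 um to 400 um under load

c_rest, c_pressed = capacitance(area, rest), capacitance(area, pressed)
print(f"{c_rest*1e12:.1f} pF -> {c_pressed*1e12:.1f} pF "
      f"(+{(c_pressed/c_rest - 1)*100:.0f}%)")   # a 25% rise the electronics can read
```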

The stretchy sensor can detect a wide array of touches, according to Darren Lipomi, a postdoctoral researcher on the team: anything from something as light as a “firm pinch between your thumb and forefinger” up to double the pressure of a stamping elephant’s meaty foot.

So far the sensor isn’t as sensitive as previous projects the Stanford team has worked on (one of which was so responsive that the pressure exerted from a 20 milligram bluebottle fly carcass was well above what it could detect). However, the researchers can eventually use those previous techniques to calibrate this stretchy capacitive sensor. “We just need to make some modifications to the surface of the electrode so that we can have that same sensitivity,” said Zhenan Bao, associate professor of chemical engineering at Stanford.

Check out the video below to see researchers manipulating, testing and talking about their stretchy, skin-like capacitive sensor.

Thanks Steve!


Apple Patent Describes Easy-to-Disassemble iOS Device

If you need to fix your iPhone, in the future it may not be so hard to do. Image: Patently Apple

Apple products are notoriously difficult to repair. Just check out iFixit’s teardowns: You often need special proprietary tools to get inside the devices, which usually score pretty low on iFixit’s 10-point repairability scale. But perhaps future iOS devices won’t pose so much trouble for the do-it-yourselfer, as a recent Apple patent describes a new construction that would make cracking open Apple cases far less headache-inducing.

The patent, unearthed by Patently Apple, describes a few different rear cases that could be slid, hinged or tilted to reveal what’s underneath. The casings would be locked in place with screws, latches, hooks or a combination of the three to ensure they wouldn’t pop open when they’re not supposed to.

Many Android devices are fairly easy to disassemble and repair, and often feature a backplate that slides or pops off so you can replace your battery or insert a SIM card. That concept isn’t new. But Apple is notorious for wanting to prevent users from mucking around inside its devices, both software- and hardware-wise. For example, 2011 batches of the iPhone featured redesigned screws that require an uncommon Pentalobe screwdriver instead of that normal Phillips-head screwdriver you have sitting in your tool drawer.

But perhaps this patent points to a new direction for Apple mobile products.

Indeed, because the new iPhone 4S is a world phone that includes both GSM and CDMA functionality, a more accessible chassis design would make it easier for frequent international jet setters to swap out their microSIM cards to take advantage of cheaper wireless rates abroad. Sprint sells its iPhone 4S units with the SIM unlocked, and Verizon can unlock it at your request after 60 days of ownership.

Do you wish you could more easily crack apart your iPhone to repair it yourself or potentially swap out the battery or SIM? Let us know in the comments.


Sci-Fi Tech: New Adobe Plugin Removes Photo Blur

Adobe’s new deblurring algorithm is like something out of science fiction

There are two things you can do with photos in sci-fi movies that still don’t work in real life. One is saying “enhance” to your computer and having it magically zoom in and conjure new pixels from nowhere. The other is removing the blur from an image.

Thanks to the brainiacs at Adobe, the next version of Photoshop may actually take care of the second one. Above you see the before/after results of the new deblur tool (click to see it full-sized). The plugin — currently in the early prototype phase — first examines the image to work out what kind of blur it has. This generates a grayscale map of the blur kernel, which can be visualized as a line with a direction.

Then this information is used to correct the blur. The Photoshop team is keeping hush-hush on the details, but the main problem seems to be that combination blurs are very tricky to decipher. If you take a photo of a speeding car, the car may blur; if you shake the camera at the same time, that will blur everything, not just the car. Separating these blurs from each other requires a lot of processing power.
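
Adobe isn't saying exactly how the correction step works, but once a blur kernel is known, the textbook tool is deconvolution. The NumPy sketch below shows a basic Wiener deconvolution given a known kernel; real blind deblurring also has to estimate that kernel first, which is the hard part the plugin tackles.

```python
# Minimal non-blind Wiener deconvolution in NumPy, assuming the blur kernel is
# already known. This is a generic textbook technique, not Adobe's algorithm.

import numpy as np

def wiener_deconvolve(blurred, kernel, noise_to_signal=1e-2):
    """Recover an estimate of the sharp image given the blur kernel."""
    # Pad the kernel to the image size and center it at the origin.
    padded = np.zeros_like(blurred, dtype=float)
    kh, kw = kernel.shape
    padded[:kh, :kw] = kernel
    padded = np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))

    H = np.fft.fft2(padded)
    G = np.fft.fft2(blurred)
    # Wiener filter: H* / (|H|^2 + K), damping frequencies the blur destroyed.
    F_hat = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal) * G
    return np.real(np.fft.ifft2(F_hat))

# Example kernel for horizontal camera shake: a 1 x 9 line of equal weights.
motion_kernel = np.ones((1, 9)) / 9.0
```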

If you can stomach some idiot actor trying to be funny and heckling the poor technician who demoes the tool, you might like to watch the video of it in action. Deblur works especially well on text. This could certainly help with the shaky shots I take of menus and business cards with my cellphone camera.

The tech might be too far off to make it into the next version of Photoshop, but at least it has made it into the near future, instead of the far-future of movies.

Behind All the Buzz: Deblur Sneak Peek [Photoshop.com Blog]

SideBySide Uses Handheld Projectors for Multiplayer Games

When two handheld projectors can see each other, their separate images can interact. Photo credit: Disney Research

When I first read the emails about this project (after a Sunday evening tipple), I thought that it consisted of merely pointing two projectors at the same wall, connecting them to two gaming devices and playing head-to-head. That would be awesome enough, but the reality is even more awesomer.

The project is called SideBySide, and comes from researchers Ivan Poupyrev and Karl D.D. Willis. It combines a camera with a projector so that the two on-screen (or on-wall) images can actually interact with each other. Each unit consists of a modified DLP projector which outputs a single color of visible light and also an invisible infrared image. The IR image is detected by the camera of the second device, letting it know what the other device is up to, and where. Take a look:

Games are the obvious use-case, but if cellphones had this tech you could drag and drop files between them, for instance, using an IR 2-D barcode. Fittingly, SideBySide is a product of Disney’s research labs. And this might hint at the project’s true value: keeping the kids quiet, wherever you happen to be.
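
Disney hasn't released the SideBySide code, but the sensing step it describes, each unit watching the other's infrared channel, can be sketched in a few lines: threshold the IR camera frame, find the bright marker's centroid, and decide whether the two projected characters overlap. Everything below, from the frame format to the thresholds, is an assumption.

```python
# Toy sketch of the IR sensing step; not Disney Research code.

import numpy as np

def find_ir_marker(ir_frame, threshold=200):
    """Return the (x, y) centroid of the bright IR blob, or None if absent."""
    ys, xs = np.nonzero(ir_frame > threshold)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

def characters_touch(own_pos, other_pos, radius_px=40):
    """Trigger an interaction when the two projected sprites overlap."""
    dx, dy = own_pos[0] - other_pos[0], own_pos[1] - other_pos[1]
    return (dx * dx + dy * dy) ** 0.5 < 2 * radius_px
```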

Ad-hoc Multi-user Interaction with Handheld Projectors [Disney Research. Thanks, Karl and Ivan!]


Computer Scientists Build Wireless Bike Brake. What Could Possibly Go Wrong?

Holger Hermanns crouches proudly beside his proof-of-concept wireless-braking bike

Computer scientists at Germany’s Saarland University have worked long and hard to rid your bike of that pesky one foot of brake cable which used to curve, short and graceful, down to the front wheel. Instead of a simple system of super-reliable levers and cables, the Saarland team uses a wireless transmitter to brake the bike.

The wireless brake consists of a transmitting hand grip and a motorized disk-brake caliper. Squeeze the grip and a radio signal actuates the brake; squeeze harder and you brake harder — as long as your batteries are charged. Just watch out for pranksters with RF transmitters who could trigger your brakes from afar.

The Saarland team’s work isn’t intended to actually be used on bikes. Instead it is an experiment to see if wireless brakes can ever be made safe enough for use in trains, planes and automobiles. If you’re testing things out, obviously a slow-moving bike is a safer environment than a landing airliner.

The boffins have managed to get reliability up to a rather decent 99.999999999997 percent, which is a lot better than the reliability of an urban hipster on a brakeless fixed gear bike with a loose-fitting (and skip-happy) chain. This is in part achieved by redundancy: several transmitters send duplicate signals, but even this was found to fail if configured incorrectly.
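
To see why duplicate transmitters help, assume each radio link drops a braking command independently: the brake only misses a command if every link fails at once. The numbers below are illustrative, not the Saarland team's measurements, but they show how quickly the nines pile up, and why links that fail together (a bad configuration) wreck the math.

```python
# Redundancy arithmetic for independent radio links; illustrative numbers only.

def combined_failure(per_link_failure, links):
    """Probability that all redundant links drop the same command (assumed independent)."""
    return per_link_failure ** links

p = 1e-4   # suppose a single link loses a braking message 1 time in 10,000
for n in (1, 2, 3):
    print(f"{n} link(s): reliability {(1 - combined_failure(p, n)) * 100:.12f}%")
# Three links already lands around 99.9999999999 percent, the same flavor of
# arithmetic behind the quoted figure; correlated failures break the assumption.
```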

The system is still far from perfect, but brake-by-wire tech could also incorporate anti-lock functionality, which might prevent disastrous front-wheel skids in the wet, for example.

I’ll be sticking with cables for the foreseeable future. The last thing I want is to be forced to walk home because I forgot to charge my brakes.

Reaching 99.999999999997 percent safety: Saarland computer scientists present their concept for a wireless bicycle brake [Alpha Galileo via Gizmag]
