Patent App Shows How Apple Makes Touch Displays Fingerprint-Proof

Apple uses an oleophobic coating to keep fingerprints and oil from mucking up device displays. Photo: Jon Snyder/Wired.com

All that swiping and tapping on your iPhone takes a heavy toll on the screen, leaving it a streaked and grimy mess.

Apple’s been battling our greasy fingers for years, and a recently discovered patent application describes a new way to make sure they don’t mar future generations of gadgets from Cupertino.

In the application, Apple describes a way of depositing an oleophobic substance so that it bonds with the screen, using a liquid-fed process known as physical vapor deposition (PVD): a raw liquid is vaporized and the oleophobic ingredient condenses onto the surface.

Image: Patently Apple

Apple’s no stranger to oleophobic (oil-repelling) surfaces. The iPhone 3GS and iPhone 4 use them, as does the iPad.

It works well enough, but it’s also possible — though rare — to strip the coating if you clean the screen with anything abrasive. It can also wear off over time through normal use.

Patently Apple describes the process:

The oleophobic ingredient could be provided as part of a raw liquid material in one or more concentrations. To avoid adverse reactions due to exposure to air, heat, or humidity, the raw liquid material can be placed in a bottle purged with an inert gas during the manufacturing process.

The bottle could be placed in a liquid supply system having a mechanism for controlling the amount of raw liquid material that passes through the liquid supply system. Upon reaching the vaporizing unit, the liquid could be vaporized and the oleophobic ingredient within the liquid can then be deposited on the electronic device component surface. As the liquid supply is drained from the bottle, additional inert gas is supplied in its place to further prevent contamination.
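In rough code-sketch form, the supply sequence the application describes (purge the bottle with inert gas, meter the liquid to a vaporizer, deposit, backfill the bottle as it drains) might look like the toy simulation below. The class, the deposition rate and every number in it are invented for illustration, not taken from the patent.

# Toy simulation of the liquid-supply sequence described above.
# All names, rates and thresholds are hypothetical.
class CoatingLine:
    def __init__(self, bottle_ml, flow_ml_per_min, target_thickness_nm):
        self.bottle_ml = bottle_ml            # raw liquid carrying the oleophobic ingredient
        self.inert_gas_ml = 0.0               # inert gas backfilled as the liquid drains
        self.flow = flow_ml_per_min           # metered by the liquid supply system
        self.target_nm = target_thickness_nm
        self.deposited_nm = 0.0

    def step(self, minutes=1):
        # Meter one increment of liquid to the vaporizing unit and deposit it.
        draw = min(self.flow * minutes, self.bottle_ml)
        self.bottle_ml -= draw
        self.inert_gas_ml += draw             # replace drained liquid to keep air and humidity out
        self.deposited_nm += draw * 2.0       # hypothetical deposition rate, nm per ml

    def run(self):
        while self.deposited_nm < self.target_nm and self.bottle_ml > 0:
            self.step()
        return self.deposited_nm

line = CoatingLine(bottle_ml=500, flow_ml_per_min=5, target_thickness_nm=40)
print(f"coating thickness: {line.run():.0f} nm")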

The application was filed in February, so the method it describes may not be in use yet.

via Cult of Mac


Concrete Alternative Could Make For Stronger Buildings

A CO2 Structure sample undergoes a compressive stress test at Tokyo Denki University.

As Japan works to repair the damage caused by the recent earthquake and tsunami, a newly developed alternative to concrete could make for stronger structures built in a fraction of the time.

Japanese architectural design office TIS & Partners created CO2 Structure, a building material that outperforms brick and concrete in several ways. When combined with epoxy or urethane, CO2 Structure is twice as strong as regular concrete. While normal gray concrete takes up to 28 days to harden fully, CO2 Structure is ready within 24 hours. And it can support structures with almost no steel reinforcement.

The 8.9-magnitude tremor that struck 250 miles northeast of Tokyo triggered a tsunami that hit Japan’s Fukushima and Miyagi prefectures with 7-foot waves. With damages estimated at over $300 billion, CO2 Structure’s quick hardening will likely be an asset in reconstruction in Japan, and anywhere else prone to earthquakes and aftershocks.

“Areas that underwent subsidence in the East Japan Earthquake could be reinforced using this material,” said Norihide Imagawa, president of TIS & Partners, in an interview with DigInfo TV. Imagawa said structures built with CO2 Structure could have a lifespan of at least 50 years.

CO2 Structure will make its debut on September 25 when Tokyo Denki University students and TIS & Partners begin construction on a dome outside the UIA World Congress at the Tokyo International Forum.


Apple Patent Shows Plans for Integrated Projector

According to a recent Apple patent, embedded projector technology may be closer to a reality than previously thought. Image: Patently Apple

Maybe an iPhone with an embedded projector isn’t so far off after all.

A patent uncovered by Patently Apple reveals Apple’s intention to eventually include a mini projector in the iPhone and iPad and a pico projector-like accessory for MacBooks.

But the most novel and useful part of the patent description isn’t the projector itself. It’s the advanced gesture analysis that would be used in conjunction with the projector to interpret shadow or silhouette movements when presentations (or even workspaces) are displayed in a dark environment.

In Patently Apple’s words, “The level of detail associated with this patent would suggest that Apple’s development teams are moving full steam ahead on the projection system project.”

Just last week we saw the development of a new glass lens tiny enough that it could eventually be used in mobile devices like smartphones and tablets. And although interactive displays, typically in the form of holograms, have long been a staple of science fiction, such technology has moved steadily closer to reality in recent years. Intel researchers, for instance, have developed a projected display that behaves like a touchscreen.

The gesture-detecting technology would involve a library of gesture commands that could be used to easily share data. For instance, an image could be shared from one projected display to another. Figures in the patent show that a swipe-type motion, not unlike what’s already used in iOS, would trigger the transfer. Shadow or silhouette gestures would be detected with a camera, then analyzed with image-processing software.
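The patent doesn’t spell out an algorithm, but a minimal sketch of how swipe detection from silhouettes might work, assuming camera frames have already been reduced to binary silhouette masks, could look like the Python below. The function names and thresholds are hypothetical, not Apple’s.

import numpy as np

def silhouette_centroid(mask):
    # Centroid of the silhouette pixels in a binary frame, or None if empty.
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

def detect_swipe(masks, min_dx=120, max_dy=40):
    # Report "left" or "right" if the silhouette sweeps mostly horizontally
    # across the sequence of frames; otherwise report nothing.
    points = [c for c in (silhouette_centroid(m) for m in masks) if c is not None]
    if len(points) < 2:
        return None
    dx = points[-1][0] - points[0][0]
    dy = points[-1][1] - points[0][1]
    if abs(dx) >= min_dx and abs(dy) <= max_dy:
        return "right" if dx > 0 else "left"
    return None

# Usage: a short window of thresholded frames with a blob drifting rightward.
frames = [np.zeros((480, 640), dtype=np.uint8) for _ in range(5)]
for i, f in enumerate(frames):
    f[200:280, 100 + i * 60:180 + i * 60] = 1
print(detect_swipe(frames))   # prints "right"; this would trigger the transfer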

Since the projector lens would be mounted on the side of the iPhone or iPad, an image could be projected on a wall simply by placing the device on a flat surface. Alternatively, the device could be mounted on a tripod.

Two devices could also be used to display one single, larger, unified image in a “Unified Display Mode.”

Apple first revealed it was working on projector display technology in 2009 and has filed a series of related patents since then. This is by far the most detailed yet.


Power the Stereo by Driving Through Potholes

The quarter-sized energy harvester is small enough for use in car tires and home appliances. (Photo courtesy MicroGen Systems)

By drawing energy from vibrations, tiny sheets of piezoelectric material can provide free power to anything that moves or shakes. The battery-less sensors can draw on the motion of a car on bumpy asphalt or a loaded clothes dryer, for example, to supply supplemental energy to a device.

The piezoelectric unit in motion from a 60 Hz vibration. (Photo courtesy Cornell University)

Piezoelectric technology has been around since the late 19th century and works on a simple principle, familiar from microphones and phonograph pickups: under mechanical stress, the crystals in a sheet of piezoelectric material become electrically polarized, generating an electric charge.

The new sensors, developed by MicroGen Systems and Cornell University researchers at the school’s NanoScale Facility, pair with a thin-film battery to apply the same principle, giving a boost to any attached device.

Unfortunately, a vibration-only drive in a Nissan Leaf from L.A. to New York isn’t an option. The piezoelectric system produces only about 200 microwatts of power, but the added juice could reduce the drain on our gadgets’ precious power supply.
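For a sense of scale, here is a rough back-of-the-envelope calculation; the one-hour drive and the 5-watt-hour phone battery are assumptions for the example, not figures from MicroGen.

harvester_power_w = 200e-6          # 200 microwatts of harvested power
drive_seconds = 3600                # one hour of vibration-rich driving
energy_j = harvester_power_w * drive_seconds
energy_wh = energy_j / 3600
battery_wh = 5.0                    # assumed smartphone battery capacity
print(f"harvested: {energy_j:.2f} J = {energy_wh:.4f} Wh")
print(f"fraction of a {battery_wh} Wh battery: {energy_wh / battery_wh:.4%}")
# About 0.72 J, or roughly 0.004 percent of the battery: plenty for a
# trickle-powered wireless sensor, nowhere near enough to move a car.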


Tiny Glass Lens Could Turn Mobile Devices Into Projectors

A teeny, tiny lens like this could be the key to an embedded projector in tomorrow’s mobile devices.

A tiny aspherical glass lens could turn your smartphone or tablet into a Star Wars-style projector.

It may not be holographic, but at least you and your friends could sit around a table or look up at a wall rather than huddling around someone’s palm-sized smartphone to watch the latest hilarity on YouTube.

At 1mm x 1mm x 0.8mm, the teensy FLGS3 Series lens is barely the size of the tip of a mechanical pencil lead. In fact, it’s the smallest in the industry. Aspherical glass lenses like this are typically used in optical communication applications, where data is converted and transmitted as light signals. Projectors (including palm-sized projectors currently on the market) are one such application.

For now, if you want a tiny, portable projector, you have to opt for a dedicated pico projector that attaches to your smartphone or computer through a cable. Projector components are too large and power-hungry to be reasonably integrated into battery-powered portable devices like smartphones. They also generate too much heat.

Tiny components like this lens are key to making projectors small and lightweight enough to be included in the already-cramped innards of mobile devices.

This lens in particular has what’s called a coupling efficiency of 73 percent. Coupling efficiency is a measure of light-transmission efficiency: how much wideband light makes it through an aperture, taking into account losses like reflection and diffusion. Because this lens loses less light than previous models, which had a coupling efficiency of 68 percent, it consumes less power and creates less heat. The increase in coupling efficiency comes from the lens’ higher effective numerical aperture.
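As a rough illustration of what those five points buy, consider the arithmetic below; the 100-milliwatt output figure is invented for the example.

# Required input light scales inversely with coupling efficiency, so a higher
# efficiency means less input power and less light lost as heat.
output_mw = 100.0                     # hypothetical light output we want
for efficiency in (0.68, 0.73):
    input_mw = output_mw / efficiency
    wasted_mw = input_mw - output_mw
    print(f"{efficiency:.0%}: {input_mw:.1f} mW in, {wasted_mw:.1f} mW lost")
# Going from 68 to 73 percent trims the required input by about 7 percent and
# the lost light by roughly a fifth.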

The FLGS3 Series lens is also ideal for high-brightness projector situations.

I can’t wait for the future wave of mobile devices that include embedded projectors. Pico projectors are cool and all, but just like point-and-shoot cameras, it’s so much more convenient to have everything wrapped up in one little gadget than having to tote a phone, and a camera, and a … you get the picture. (I’m a girl, and I don’t even carry a purse — anything I carry with me fits in my jacket or pants pockets.)

Tiny projectors could be great for presentations and sharing information (particularly photo or video). And of course, the obligatory clip of “Help me Obi-Wan Kenobi, you’re my only hope.”

From Geek.com



Digital Tattoo Gets Under Your Skin to Monitor Blood

Bioengineering doctoral student Kate Balaconis shines the iPhone reader against her tattooless arm.

Maybe tattoos aren’t just for Harley riders or rebellious teens after all. In a few years, diabetics might get inked up with digital tats that communicate with an iPhone to monitor their blood.

Instead of the dye used for tribal arm bands and Chinese characters, these tattoos will contain nanosensors that read the wearer’s blood levels of sodium, glucose and even alcohol with the help of an iPhone 4 camera.

Dr. Heather Clark, associate professor of pharmaceutical sciences at Northeastern University, is leading the research on the subdermal sensors. She said she was reminded of the benefits of real-time, wearable health monitoring when she entered a marathon in Vermont: If they become mass-produced and affordable for the consumer market, wireless devices worn on the body could tell you exactly what medication you need whenever you need it.

“I had no idea how much to drink, or when,” said Clark, reflecting on her marathon run. “Or if I should have Gatorade instead.”

Clark’s technology could spell out the eventual demise of the painful finger pricks required for blood tests — assuming users have an iPhone, which Northeastern bioengineering grad student Matt Dubach has customized to read light from the tiny sensors to collect and output data.

Here’s how it works: A 100-nanometer-wide set of sensors goes under the skin, like tattoo ink — as for the size, “You can spot it if you’re looking for it,” Clark says. The sensors are encased in an oily agent to ensure the whole contraption stays together.

Within the implant, certain nanoparticles will bind exclusively to specific blood contents, like sodium or glucose. Thanks to an additive that makes the particles charge-neutral, the presence of a target triggers an ion release, which manifests as a fluorescence change. The process is detailed in an article published in the journal Integrative Biology.

Dubach designed the iPhone 4 attachment to use the phone’s camera to read the color shift and translate the results into quantifiable data. A plastic ring surrounding the lens blocks out ambient light while a battery-powered blue LED illuminates the sensors. The software uses the iPhone camera’s built-in RGB filters to process the light reflected off the sensors.

Why blue? Initial trials with lights of other colors were hindered by the camera’s built-in optical filter, but blue light works with the iPhone’s RGB setup, letting the software process the data accurately. That blue light, powered by a 9-volt battery attached to the phone, pairs well with the sensors’ red-shifted fluorescence because red light shines well through skin.
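A minimal sketch of the kind of processing the attachment’s software might do, normalizing the sensors’ red fluorescence against the blue excitation light, is shown below. The calibration numbers and the linear mapping are invented; the real system is calibrated against laboratory equipment.

import numpy as np

def fluorescence_ratio(rgb_frame):
    # Average red (sensor fluorescence) over average blue (LED leakage),
    # a simple way to normalize for illumination strength.
    red = rgb_frame[..., 0].astype(float).mean()
    blue = rgb_frame[..., 2].astype(float).mean()
    return red / max(blue, 1e-6)

def estimate_sodium_mm(ratio, slope=85.0, intercept=10.0):
    # Invented linear calibration from color ratio to concentration (mM).
    return slope * ratio + intercept

frame = np.random.randint(0, 255, size=(480, 640, 3), dtype=np.uint8)
r = fluorescence_ratio(frame)
print(f"ratio {r:.2f} -> roughly {estimate_sodium_mm(r):.0f} mM sodium (illustrative)")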

As of now, the data collected with the iPhone still requires processing on a separate machine, but Dubach says using the iPhone to do all the work is not far off, and an app is likely on the way.

Clark hopes to see the work of an entire clinical analyzer done by nanoparticles interacting with smartphones, which would mean a major step forward for personalized medicine. Diabetics and athletes alike could track and measure their own statistics without depending on big, pricey, exclusive medical equipment.

The testing is still in its early stages and hasn’t been tried on humans yet. Research on mice, which have thinner skin than humans, has shown promising results.

Readings of blood concentrations show up like this, with different colors indicating different sodium concentrations. Photo courtesy of Matt Dubach.

When Apple’s next iPhone comes out, the project will benefit, said Dubach, citing rumors that the iPhone 5 will include a more powerful camera sensor.

“I’m holding out for the iPhone 5,” Dubach said. “More megapixels gives you more for the average,” meaning the higher-resolution camera provides more data for analysis. Even bioengineers are waiting for Steve Jobs’ next move.

The technology is still years off, but Clark and Dubach’s developments are bringing medicine closer to a time when diagnostics are minimally invasive. Real-time feedback through subdermal circuits and smartphone cameras means you could know exactly when to slug that water.

Researchers tested the iPhone attachment against this plate reader, which independently measures the nanosensors' response. Photo courtesy of Matt Dubach


Patent Firm Targets Lawsuit at Angry Birds

Angry Birds-maker Rovio is the most recent target in Lodsys' patent trolling disputes. Photo: Jim Merithew/Wired.com

The mighty hand of Lodsys, a patent firm suing mobile app programmers, continues to come down on iOS and Android developers. Now it’s targeting a major and much-beloved player: Angry Birds.

In the lawsuit, Lodsys claims Rovio has infringed upon “at least claim 27” of its patent, which covers in-app billing technology. Lodsys wants 0.575 percent of any U.S. revenue obtained using the technology.

The lawsuit currently extends to 11 other defendants, including big-league app developers like Electronic Arts, Atari and Square Enix.

Lodsys began sending letters to iOS app developers in early May, targeting apps that include an “upgrade” button or let users make purchases within the app using Apple’s in-app billing infrastructure.

Patent disputes are common among large technology corporations fighting to defend their intellectual property, including Apple, Google and Microsoft. However, it’s rare to see a small patent firm such as Lodsys go on a lawsuit spree against a laundry list of companies big and small.

Lodsys explained the reasoning behind its actions in a blog post responding to Apple: “The scope of [Apple’s] current licenses does NOT enable them to provide ‘pixie dust’ to bless another (3rd party) business applications [sic],” Lodsys wrote. “From Lodsys’ perspective, it is seeking to be paid value for rights it holds and which are being used by others.”

Apple supported its developers with an official response from its general counsel Bruce Sewell (.pdf). In it, he says, “Apple is undisputedly licensed to these patents and the Apple App Makers are protected by that license. Apple intends to share this letter and the information set out herein with its App Makers and is fully prepared to defend Apple’s license rights.”

Texas-based Lodsys recently began filing lawsuits against Android developers for violating its patents as well.

Many developers have responded to the company’s patent trolling by removing the offending feature from their apps (the in-app purchasing ability) or pulling their apps from the market entirely.

The EFF explains that the patent system is intended to support innovation, but in instances such as this, it’s doing the opposite.


Stanford’s Lightsaber-Wielding Robot Is Strong With the Force

What better way to combine your nerdy loves of computer programming and Star Wars than with a robot that can actually battle with a lightsaber?

This is “JediBot,” a Microsoft Kinect–controlled robot that can wield a foam sword (lightsaber, if you will) and duel a human combatant for command of the empire. Or something like that.

“We’ve all seen the Star Wars movies; they’re a lot of fun, and the sword fights are one of the most entertaining parts of it. So it seemed like it’d be cool to actually sword fight like that against a computerized opponent, like a Star Wars video game,” graduate student Ken Oslund says in the video above.

The world of dynamic robotics and AI has been immensely aided by the affordable, hackable Microsoft Kinect. The Kinect combines a color camera with infrared depth sensing, which makes recognizing, analyzing and interacting with a three-dimensional moving object — namely, a human — much simpler than in the past. Microsoft recently released the SDK for the Kinect, so we should be seeing increasingly useful and creative applications of the device. The KUKA robotic arm in the video above is traditionally used in assembly-line manufacturing, but you may remember it from a Microsoft Halo: Reach light-sculpture video last year.

According to the course overview (.pdf) for the “Experimental Robotics” course, the purpose of the laboratory-based class is “to provide hands-on experience with robotic manipulation.” Although the other groups in the class used a PUMA 560 industrial manipulator, the JediBot design team, composed of four graduate students including Tim Jenkins and Ken Oslund, got to use a more recently developed KUKA robotic arm. This final project for the course, which they got to choose themselves, was completed in a mere three weeks.

“The class is really open-ended,” Jenkins said. “The professor likes to have dynamic projects that involve action.”

The group knew they wanted to do something with computer vision so a person could interact with their robot. Given the resources available, they decided a Microsoft Kinect was better suited to the task than a standard camera. The Kinect detects the position of the opponent’s green foam saber.

The robot strikes using a set of predefined attack motions. When it detects a hit (its foam lightsaber making contact with its opponent’s and putting torque on the robotic arm’s joints), it recoils and moves on to the next motion, switching from move to move every one or two seconds.
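That description suggests a fairly simple control loop, something like the sketch below; the robot and torque interfaces here are invented stand-ins, not Stanford’s code.

import random
import time

ATTACK_MOTIONS = ["overhead_chop", "left_slash", "right_slash", "thrust"]
TORQUE_HIT_THRESHOLD = 2.0      # invented units; a spike means the sabers met

def saber_contact(robot):
    # True if any joint torque spikes, i.e. the foam sabers collided.
    return max(abs(t) for t in robot.joint_torques()) > TORQUE_HIT_THRESHOLD

def attack_loop(robot, move_period_s=1.5):
    # Cycle through predefined attacks; recoil and switch moves on contact.
    while True:
        robot.play_motion(random.choice(ATTACK_MOTIONS))
        start = time.time()
        while time.time() - start < move_period_s:
            if saber_contact(robot):
                robot.recoil()          # back off, then pick the next attack
                break
            time.sleep(0.01)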

“The defense mechanics were the most challenging, but people ended up enjoying the attack mode most. It was actually kind of a gimmick and only took a few hours to code up,” Jenkins said.

The project utilized a secret weapon not apparent in the video: a set of C/C++ libraries developed by Stanford visiting researcher and entrepreneur Torsten Kroeger. Normally, the robot would need to plot out the entire trajectory of its motions from start to finish — preplanned motion. Kroeger’s Reflexxes Motion Libraries let the robot react to events, like collisions and new data from the Kinect, by simply updating the target position and velocity; the libraries compute a new trajectory on the fly in less than a millisecond.
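Conceptually, the difference is between planning a whole trajectory up front and re-solving for the next motion increment every control cycle. The sketch below illustrates the second idea for a single axis in plain Python; it is not the Reflexxes API, just the shape of an online retargeting loop.

def step_toward(position, velocity, target, dt=0.001,
                max_vel=1.0, max_acc=5.0, gain=3.0):
    # One 1 ms control cycle: chase the (possibly just-updated) target with a
    # proportional velocity command, under velocity and acceleration limits.
    desired_vel = max(-max_vel, min(max_vel, gain * (target - position)))
    dv = max(-max_acc * dt, min(max_acc * dt, desired_vel - velocity))
    velocity += dv
    position += velocity * dt
    return position, velocity

# Every millisecond the target can change (new Kinect data, a detected hit)
# and the next increment is recomputed from the current state.
pos, vel, target = 0.0, 0.0, 0.5
for cycle in range(2000):
    if cycle == 800:
        target = -0.2                   # e.g. the opponent's saber moved
    pos, vel = step_toward(pos, vel, target)
print(f"final position {pos:.3f}, velocity {vel:.3f}")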

This allows JediBot to respond to sensor events in real time, and that’s really the key to making robots more interactive.

Imagine a waiterbot with the reflexes to catch a falling drink before it hits the ground, or a karate robot you can spar against for practice before a big tournament.

I doubt anyone will be buying a KUKA robotic arm and building a sword-fighting robot like JediBot at home, but innovations like this, built on interactive controllers and, in particular, on real-time motion libraries like Reflexxes, could help bring about robots that interact with us better in daily life.

Video courtesy Stanford University/Steve Fyffe


Chocolate 3-D Printer Arrives At Last

Now yummy chocolate can be used to build 3-D objects.

It seems impossible, but apparently nobody has ever made a chocolate-laying 3-D printer until now. Thankfully, that oversight has been remedied by Dr. Liang Hao and his team at the University of Exeter in England.

The printer works like any other additive 3-D printer, building up the design one layer at a time; the difference is that this one works with delicious chocolate, which can be eaten afterward. The ultimate goal is to make the printer available to consumers, so they could go into a store with a design and print out a tasty treat to give as a gift. To that end, an easy-to-use interface for inputting designs is already in development.

Building the printer wasn’t as simple as swapping chocolate in for other materials. Chocolate demands tight temperature tolerances, both to keep it flowing and to let each layer set before the next is added.
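A toy sketch of that temperature gating is below; the printer interface and every temperature in it are invented for illustration, not the Exeter team’s settings.

FLOW_RANGE_C = (31.0, 33.0)   # extrude only while the nozzle is in this window
SET_BELOW_C = 26.0            # a layer must cool to this before the next goes down

def print_layer(printer, layer):
    # Wait for the nozzle to reach the flow window, extrude, then let the layer set.
    while not (FLOW_RANGE_C[0] <= printer.nozzle_temp() <= FLOW_RANGE_C[1]):
        printer.adjust_heater(target=sum(FLOW_RANGE_C) / 2)
    printer.extrude(layer)
    while printer.surface_temp() > SET_BELOW_C:
        printer.wait(seconds=1)

def print_object(printer, layers):
    for layer in layers:
        print_layer(printer, layer)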

This is great, although it will certainly make things like the amazing chocolate keyboard or working chocolate tools a lot less impressive.

The future of gift shopping – design and print your own 3D chocolate objects [EPSRC]



MIT Project Uses Smartphones to Detect Cataracts

A Brazilian man takes the CATRA test. Photo: Erick Passos

CATRA is an invention of MIT’s Media Lab that uses a cellphone and a cheap plastic eyepiece to detect cataracts. Not only is it cheaper and easier to use than existing solutions, it actually provides much better results.

Cataracts cause blindness by fogging the lens of the eye, scattering light before it reaches the retina. Normally, they are diagnosed using a “backscatter” device that shines light into the eye and measures how much of it is reflected by the cataract. This requires a skilled operator and a fancy machine, and it still doesn’t detect the problem early or tell the operator what the patient actually sees.

CATRA uses a smartphone with a custom app and a cheap eyepiece. The patient holds it up to one eye and the app fires light successively at each part of the eye. The patient uses “the phone’s arrow keys” to adjust the brightness of these beams until they match. The app logs the differences in intensity required to reach the retina and creates a map of the eye. Thus it can detect the problem early, and it also reflects the actual experience of the patient.
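In outline, the measurement loop the app performs might look like the sketch below; the display and input interfaces, the starting intensity and the step size are all invented for illustration.

def measure_attenuation(display, patient_input, positions, step=0.05):
    # For each position on the lens, raise the test beam's intensity until the
    # patient says it matches a fixed reference beam; the extra intensity
    # needed is a proxy for how much that part of the lens scatters light.
    lens_map = {}
    for pos in positions:
        display.show_reference_beam()
        intensity = 0.1                              # start dim
        display.show_test_beam(at=pos, intensity=intensity)
        while not patient_input.beams_match() and intensity < 1.0:
            intensity = min(1.0, intensity + step)   # patient nudges it brighter
            display.show_test_beam(at=pos, intensity=intensity)
        lens_map[pos] = intensity                    # higher value = more attenuation
    return lens_map
# A map with uniformly low values suggests a clear lens; localized high values
# point to early cataract formation in those regions.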

But most important, it requires no special hardware except for that simple eyepiece.

The product is about to undergo field testing ahead of a future launch. The market for this is clearly the developing world, which is also where cellphone usage is taking off. It might be time to forget about programs like One Laptop Per Child and concentrate on smartphones instead.

CATRA: Cataract Maps with Snap-on Eyepiece for Mobile Phones [MIT Media Lab via Cult of Mac]
