SOINN artificial brain can now use the internet to learn new things

A group at Tokyo Institute of Technology, led by Dr. Osamu Hasegawa, has succeeded in making further advances with SOINN, their machine learning algorithm, which can now use the internet to learn how to perform new tasks.

“Image searching technology is quite practical now. So, by linking our algorithm to that, we’ve enabled the system to identify which characteristics are important by itself, and to remember what kind of thing the subject is.”

These are pictures of rickshaws, taken in India by the Group. When one of these pictures is loaded, the system hasn’t yet learned what it is. So, it recognizes the subject as a “car,” which it has already learned. The system is then given the keyword “rickshaw.” From the Internet, the system picks out the main characteristics of pictures related to rickshaws, and learns by itself what a rickshaw is. After learning, even if a different picture of a rickshaw is loaded, the system recognizes it as a rickshaw.

“In the case of a rickshaw, there may be other things in the picture, or people may be riding in the rickshaw, but the system picks out only those features common to many cases, such as large wheels, a platform above the wheels, and a roof, and it learns that what people call a rickshaw includes these features. So, even with an object it hasn’t seen before, if the object has those features, the system can recognize it.”
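The idea of keeping only those features that recur across many example images can be sketched as a simple frequency filter. This is only an illustration of the principle: the feature names and the threshold below are invented for the example, and SOINN itself learns a self-organizing network of feature nodes rather than counting labeled features.

```python
from collections import Counter

def common_features(examples, min_fraction=0.8):
    # Count how often each feature appears across the example images,
    # and keep only those present in at least min_fraction of them --
    # e.g. "large wheels", "platform", "roof" for rickshaws.
    counts = Counter(f for feats in examples for f in set(feats))
    threshold = min_fraction * len(examples)
    return {f for f, c in counts.items() if c >= threshold}

# Hypothetical feature sets extracted from four rickshaw pictures:
examples = [
    {"large wheels", "platform", "roof", "person riding"},
    {"large wheels", "platform", "roof", "street"},
    {"large wheels", "platform", "roof"},
    {"large wheels", "platform", "roof", "luggage"},
]
print(sorted(common_features(examples)))
# ['large wheels', 'platform', 'roof']
```

Incidental features such as the people riding in the rickshaw appear in only some pictures, so they fall below the threshold and are discarded.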

“With previous methods, for example, face recognition by digital cameras, it’s necessary to teach the system quite a lot of things about faces. When subjects become diverse, it’s very difficult for people to tell the system what sort of characteristics they have, and how many features are sufficient to recognize things. SOINN can pick those features out for itself. It doesn’t need models, which is a very big advantage.”

The Group is also developing ways to transfer learned characteristic data to other things. For example, the system has already learned knives and pens, and possesses the characteristic data that they are “pointed objects” and “stick-shaped objects” respectively. To make the system recognize box cutters, it is shown the similarities between box cutters and the knives and pens it has already learned, and it transfers the basic characteristics of being stick-shaped and pointed. If characteristic data for box cutters can be obtained from other systems, SOINN can guess from the transferred data that the objects are box cutters.
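The transfer step described above can be sketched as taking the union of the characteristics of the known classes a new object resembles. The class names and attribute strings here are illustrative stand-ins for SOINN's actual learned characteristic data, which the article does not specify.

```python
# Characteristic data the system has already learned (illustrative):
# per the article, knives are "pointed objects" and pens are
# "stick-shaped objects".
KNOWN = {
    "knife": {"pointed"},
    "pen":   {"stick-shaped"},
}

def transfer_attributes(similar_classes, known=KNOWN):
    # A new class (e.g. "box cutter") inherits the union of the
    # defining characteristics of the known classes it resembles.
    attrs = set()
    for cls in similar_classes:
        attrs |= known[cls]
    return attrs

box_cutter = transfer_attributes(["knife", "pen"])
print(sorted(box_cutter))  # ['pointed', 'stick-shaped']
```

An object that matches this transferred description can then be tentatively recognized as a box cutter even before the system has seen one.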

“Here, you’ve seen how this works for pictures. But SOINN can handle other types of information flexibly. For example, we think we could teach it to pick out features from audio or video data. Then, it could also utilize data from robot sensors.”

“With previous pet robots, such as AIBO, training involved patterns that were decided in advance. When those possibilities are exhausted, the robot can’t do any more. So, people come to understand what it’s going to do, and get bored with it. But SOINN can keep accumulating new knowledge. So, in principle, it can develop without a scripted scenario.”

This Video is provided to you by DigInfo.tv, AkihabaraNews Official Partner.

Shibaful lush lawn iPhone case puts Yoyogi Park in your pocket

“Shibaful is the world’s first iPhone case modeled after a grassy park. This case is based on Yoyogi Park in Tokyo, and it’s the first in our World Parks series. For the next versions, we’re considering basing the grass on New York’s Central Park and London’s Hyde Park.”

“Regarding the technology, the case is made using electrostatic flocking. When fiber particles in five different colors are dropped from above, they form this kind of texture. There are all kinds of iPhone cases, but we think this is the first with a grassy texture. Also, it feels different when you stroke it and when you grip it. The green color is really fresh, and easy on the eyes, too. Another part of the concept is that you’ll sometimes want to turn your iPhone over, and rest your eyes by looking at the green.”

“The studio we work from, called co-lab Shibuya Atelier, is a shared office. We have shared access to 3D printers, laser cutters, and digital machines, so we can turn PC data into tangible objects. Here, we can try all kinds of ideas quickly and cheaply, taking those ideas closer to commercial production. In Japan, there are lots of small businesses with all sorts of technologies. We’ve produced this iPhone case to express our goal of creating new, exciting things, by combining small businesses’ technology with our ideas and prototyping abilities.”

“When we market this, we’ll initially do a limited run of 100. They’ll be available from the end of April, at eight stores throughout Japan. The price will be 3,980 yen. Meanwhile, we’re gearing up for mass production to meet future needs.”


Fujitsu Laboratories – Touchscreen interface for seamless data transfer between the real and virtual worlds

Fujitsu Laboratories has developed a next-generation user interface that can accurately detect the user’s finger and what it is touching, turning real-world objects into an interactive, touchscreen-like system.

“We think paper and many other objects could be manipulated by touching them, as with a touchscreen. This system doesn’t use any special hardware; it consists of just a device like an ordinary webcam, plus a commercial projector. Its capabilities are achieved by image processing technology.”

Using this technology, information can be imported from a document as data, by selecting the necessary parts with your finger.

This technology measures the shape of real-world objects, and automatically adjusts the coordinate systems for the camera, projector, and real world. In this way, it can coordinate the display with touching, not only for flat surfaces like tables and paper, but also for the curved surfaces of objects such as books.
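The article does not describe Fujitsu's calibration method, but a standard way to align camera and projector coordinates for a flat surface is a planar homography fitted from point correspondences. The sketch below uses numpy (assumed available) and four illustrative corner correspondences; real calibration would use measured points and, for curved surfaces like books, a more general 3D model.

```python
import numpy as np

def fit_homography(src, dst):
    # Direct Linear Transform with h33 fixed to 1: solve the 8x8
    # linear system built from 4 point correspondences
    # (camera pixels -> projector pixels).
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_homography(H, pt):
    # Map a point through H in homogeneous coordinates.
    u, v, w = H @ np.array([pt[0], pt[1], 1.0])
    return (u / w, v / w)

# Illustrative calibration: four corners of a unit square seen by
# the camera map to a 2x-scaled square in projector space.
H = fit_homography([(0, 0), (1, 0), (1, 1), (0, 1)],
                   [(0, 0), (2, 0), (2, 2), (0, 2)])
print(apply_homography(H, (0.5, 0.25)))  # ~ (1.0, 0.5)
```

Once H is known, any fingertip position detected in the camera image can be mapped into projector coordinates, so the projected display and the touch input stay aligned.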

“Until now, gesturing has often been used to operate PCs and other devices. But with this interface, we’re not operating a PC, but touching actual objects directly, and combining them with ICT equipment.”

“The system is designed not to react when you make ordinary motions on a table. It can be operated when you point with one finger. What this means is, the system serves as an interface combining analog operations and digital devices.”

To detect touch accurately, the system needs to detect fingertip height accurately. In particular, with the low-resolution camera used here (320 x 180), if fingertip detection is off by a single pixel, the height changes by 1 cm. So, the system requires technology for recognizing fingertips with high precision.
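Fujitsu's actual fingertip-detection method is not detailed in the article. One common way to get sub-pixel precision from a low-resolution image, sketched here as an assumption rather than the real implementation, is parabolic interpolation: fit a parabola through the discrete maximum of a response profile and its two neighbors, and take the parabola's peak as the refined position. At 1 cm per pixel, refining to a tenth of a pixel would correspond to roughly 1 mm of height.

```python
def subpixel_peak(values):
    # Locate the discrete maximum of a 1D response profile, then
    # refine it by fitting a parabola through (i-1, i, i+1) and
    # returning the fractional position of the parabola's apex.
    i = max(range(len(values)), key=values.__getitem__)
    if i == 0 or i == len(values) - 1:
        return float(i)  # peak at the border: no neighbors to fit
    a, b, c = values[i - 1], values[i], values[i + 1]
    denom = a - 2 * b + c
    if denom == 0:
        return float(i)  # flat plateau: keep the integer position
    return i + 0.5 * (a - c) / denom

# Symmetric profile: refined peak stays at the integer position.
print(subpixel_peak([0, 1, 3, 1, 0]))  # 2.0
# Asymmetric profile: peak shifts toward the larger neighbor.
print(subpixel_peak([0, 1, 3, 2, 0]))  # ~2.167
```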

“Using a low-res webcam gives a fuzzy picture, but the system calculates 3D positions with high precision, by compensating through image processing.”

This system also includes technology for controlling color and brightness, in line with the ambient light, and correcting for individual differences in hand color. In this way, it can identify fingertips consistently, with little influence from the environment or individual differences.

Also, in situations that don’t use touch, the system can be operated by gesturing. In this demo, when you move your fist, you can manipulate the viewpoint for 3D CAD data. So, there could be applications for this touch system by combining it with current gesture systems.

“For example, we think this system could be used to show detailed information at a travel agent’s counter, or when you need to fill in forms at City Hall.”

“We aim to develop a commercial version of this system by fiscal 2014. It’s still at the demonstration level, so it’s not been used in actual settings. Next, we’d like to get people to use it for actual tasks, see what issues arise, and evaluate usability. We want to reflect such feedback in this system.”


DNA testing chip delivers results in one hour, paves way for personalized drug treatments

Panasonic, together with the Belgium-based research institution IMEC, has developed a DNA testing chip that automates all stages of obtaining genetic information, including preprocessing.

This development is expected to enable personalized, tailor-made therapy to become widespread.

“This is the chip we’ve actually developed. As you can see, it’s less than half the size of a business card. It contains everything needed for testing DNA. Once a drop of blood is inserted, the chip completes the entire process, up to SNP detection.”

SNPs (single nucleotide polymorphisms) are variations in a single DNA base among individuals.

Detecting SNPs makes it possible to check whether genetically transmitted diseases are present, evaluate future risks, and identify genes related to illness.

“By investigating SNPs, we can determine that this drug will work for this person, or this drug will have severe side-effects on that person. Investigating SNPs enables tailor-made therapy. But with the current method, it has to be done in a specialized lab, so it actually takes three to four days. In the worst case, it takes a week from sending the sample to getting the result. Our equipment can determine a patient’s SNPs in just an hour after receiving the blood.”

Testing is done simply by injecting the blood and a chemical into the chip, and setting it in the testing system.

First of all, the blood and chemical are mixed. DNA is then extracted from the mixed solution. The regions containing SNPs are then cut out and amplified. DNA amplification uses a technology called PCR (polymerase chain reaction), which copies the desired sections many times over by cycling the temperature. With the conventional method, this process took two hours.

“Through careful attention to thermal separation design, we’ve achieved high-speed PCR, where 30 temperature cycles are completed in nine minutes. We think this is one of the fastest PCR systems in the world.”
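The figures quoted above work out as follows. The two-hour and nine-minute run times come from the article; the per-cycle arithmetic simply assumes 30 cycles in both cases.

```python
# Cycle-time comparison from the article's figures:
# conventional lab PCR takes ~2 hours; the Panasonic/IMEC chip
# completes 30 temperature cycles in 9 minutes.
conventional_min = 120
chip_min = 9
cycles = 30

conv_per_cycle = conventional_min * 60 / cycles  # seconds per cycle
chip_per_cycle = chip_min * 60 / cycles
speedup = conventional_min / chip_min

print(conv_per_cycle)        # 240.0 s per cycle, conventional
print(chip_per_cycle)        # 18.0 s per cycle, on-chip
print(round(speedup, 1))     # 13.3x faster overall
```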

The amplified DNA is then sent through a micropump to a DNA filter. Here, the DNA is separated for each section length. Then, a newly developed electrochemical sensor identifies SNPs while the DNA is dissolved in the chemical.

“To implement this system on one chip, and make detection easy, the first thing we focused on was the actuators. This system requires a very small, powerful pump. In our case, we used a conductive polymer for the actuators. A feature of these actuators is they’re powerful, yet extremely compact. They can exert a pressure of up to 30 MPa.”

“Ultimately, we’d like to make this system battery-powered. We think that would enable genetically modified foods to be tested while still in the warehouse.”


Via: Panasonic, IMEC

Double the brightness in low light photos with Panasonic’s new color filtering technology

Panasonic has developed a unique technology that doubles the brightness of color photography, by using micro color splitters instead of conventional color filters in the image sensor.
These two photos were taken using CCDs with the same sensitivity. The one on the right was taken with the color filter system used in nearly all digital cameras. The one on the left was taken with Panasonic’s new micro color-splitting system.
Until now, image sensors have produced color pictures by using …

Management Game from Japan Simulates Actual Business Experiences

In 1976, Sony CDI developed a kind of business game called the Management Game. The game simulates running a business, with each participant as a manager. Participants can compare their performance, by producing a financial report for their business. Many companies have introduced the Management Game to help staff learn about management.
Shigeto Takahashi is a leading expert in education and publicity regarding the Management Game. He’s the representative of BM Network, which holds …

Dummy cursors keep your passwords safe from prying eyes

This is a system for preventing password theft, by mixing several dummy cursors in with the real cursor.
The software keyboards used in online banking are effective against keyloggers, but your password could still be worked out from screen captures or by someone looking over your shoulder.
With this system, only the user knows which cursor is the real one, so there’s no concern about people stealing passwords just by being able to see the screen.
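The article does not explain how the dummy cursors move. One plausible scheme, sketched here purely as an illustration, is to give each dummy a fixed mirror or axis-swap transform of the real mouse motion, so that every cursor moves simultaneously and looks equally responsive to an observer.

```python
import random

# Motion transforms: index 0 is the identity (the real cursor);
# dummies get a fixed random mirror/swap of the real motion.
TRANSFORMS = [
    lambda dx, dy: (dx, dy),     # identity (real cursor)
    lambda dx, dy: (-dx, dy),    # mirror horizontally
    lambda dx, dy: (dx, -dy),    # mirror vertically
    lambda dx, dy: (-dx, -dy),   # mirror both axes
    lambda dx, dy: (dy, dx),     # swap axes
]

class CursorField:
    def __init__(self, n_cursors, rng=random):
        # Cursor 0 is real; each dummy keeps one transform for the
        # whole session, so its motion stays internally consistent.
        self.transforms = [TRANSFORMS[0]] + [
            rng.choice(TRANSFORMS[1:]) for _ in range(n_cursors - 1)
        ]
        self.positions = [(200, 200)] * n_cursors

    def move(self, dx, dy):
        # Every cursor moves at once in response to one mouse event,
        # so a shoulder-surfer cannot tell which one is real.
        updated = []
        for (x, y), t in zip(self.positions, self.transforms):
            tdx, tdy = t(dx, dy)
            updated.append((x + tdx, y + tdy))
        self.positions = updated
```

Only the user, who knows which cursor follows the mouse exactly, can pick out the real one and click the software keyboard with it.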
“At first sight, it looks as if …

Photoshop-like interior light control interface

“This is a lighting system, called Lighty. There’s a group of robotic lights on the ceiling, and their orientation and brightness can be controlled through this interface.”
“This feels just like Photoshop. To specify which places you want bright or dark, all you need to do is color in the corresponding areas.”
In this system, the interactive pen display is used to paint the room in light or darkness, with a camera placed in the ceiling returning the results in real …

NTT – Visual SyncAR – Using digital watermarking technology to display companion content in sync (from DigInfo.TV)

Another innovative application of technology reported by Don Kennedy and Ryo Osuga of DigInfo.TV.
Visual SyncAR, under development by NTT, uses digital watermarking technology to display companion content on a second screen, in sync with the content being viewed on the TV.
“For example, you can show a CG character dancing in sync with an artist like this. Or a CG character can jump into the picture, and things in the picture can jump out. In this way, the system enables new forms of …

DigInfo.TV – The award-winning Smart Trash Can moves autonomously to catch your trash

Don Kennedy and Ryo Osuga of DigInfo.TV in Tokyo bring this video and story of Japanese technology applied to one of the most unlikely of places. How useful this will be for Japanese and worldwide homes remains to be seen, but so far, it is award-winning technology.
The story from DigInfo.TV:
This Smart Trash Can, developed by Minoru Kurata, an engineer at a Japanese auto maker, won an Excellence Award at the Japan Media Arts Festival.
“When you toss trash at it, a sensor detects the …