Sony A4-sized digital paper notepad is light, durable and responsive

Sony has developed a 13.3″ digital paper notepad, equivalent in size to a sheet of A4 paper.

The display is the first in the world to use E Ink Mobius, a new flexible electronic paper display technology developed by E Ink in collaboration with Sony. Technology developed by Sony for forming high-precision thin-film transistors on plastic instead of glass has been used, making the display flexible and light. It is scheduled for mass production this year.

“We’ve succeeded in mass-producing these large flexible panels, by combining E-Ink’s flexible paper technology and Sony’s mass-production technology.”

“Usually, devices are made by sandwiching TFTs between glass sheets. But these panels use plastic instead of glass, so they’re much lighter. Another feature is that, unlike glass, these panels are very durable.”

This prototype digital notepad weighs 358 g and is 6.8 mm thick, with the 1200×1600 pixel display itself weighing around 60 g, 50% less than if glass were used. The prototype also features a battery life of approximately three weeks.

“This is a PDF document. You can page through it with your finger. Of course, you can also write comments and draw lines in the PDF document. Also, if you choose the marker, and move your finger over text, you can highlight text like this.”

“This is still at the prototype stage. But we’re designing it to work smoothly. Also, with paper, you can rest your hand on it while you write, but with a tablet, you can’t always do that. This digital paper makes it possible to write while resting your hand on the panel.”

“We’d especially like this to be used in universities. From the second half of this year, we’re planning to do trials with Waseda, Hosei, and Ritsumeikan Universities. We also plan to release a commercial version during this year.”

Event: Educational IT Solutions Expo

This Video is provided by DigInfo.tv, AkihabaraNews Official Partner.

Measure your pulse in real-time with Fujitsu’s facial imaging technology

Fujitsu has developed technology which can measure a person’s pulse in real time by analyzing video of their face.

“As blood circulates through the body, the amount of light absorbed by the face varies, depending on how much blood there is in it. The first point about this technology is, it identifies minute changes in light intensity on the face, and converts them to a pulse. Also, it accurately detects people’s movements, to distinguish noise. Consequently, it can make a measurement in as little as five seconds.”

When the user is sitting still, the system continuously detects changes in light intensity on the user’s face, as shown by the green waveform. The red waveform shows the resulting wave with noise associated with movement removed. Fujitsu has found that the accuracy of the system is within about three beats per minute.
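Fujitsu has not published the algorithm’s details, but the general idea described above (track small changes in facial brightness, filter out motion noise, and convert the remaining periodic signal to beats per minute) can be sketched roughly as follows. The frame rate, filter band, and function names are illustrative assumptions, not Fujitsu’s implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def estimate_pulse_bpm(face_brightness, fps=30.0):
    """Rough sketch: estimate a pulse rate from per-frame facial brightness.

    face_brightness : 1-D array of mean green-channel intensity of the face
                      region in each video frame (illustrative input).
    fps             : assumed camera frame rate.
    """
    signal = np.asarray(face_brightness, dtype=float)
    signal -= signal.mean()                      # drop the constant brightness level

    # Keep only frequencies plausible for a human pulse (~40-180 bpm, i.e. 0.7-3 Hz),
    # which discards slow lighting drift and faster movement noise.
    nyquist = fps / 2.0
    b, a = butter(3, [0.7 / nyquist, 3.0 / nyquist], btype="band")
    filtered = filtfilt(b, a, signal)

    # Treat each remaining peak as one heartbeat and use the median beat interval.
    peaks, _ = find_peaks(filtered, distance=fps / 3.0, height=filtered.std())
    beat_interval_s = np.median(np.diff(peaks)) / fps
    return 60.0 / beat_interval_s

# Demo on a synthetic 72 bpm (1.2 Hz) brightness signal with added noise.
t = np.arange(0, 10, 1 / 30.0)
demo = 0.05 * np.sin(2 * np.pi * 1.2 * t) + 0.01 * np.random.randn(t.size)
print(round(estimate_pulse_bpm(demo)))           # ~72
```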

“The main point about this technology is, it can make the measurements naturally. All the person needs to do is be in front of the camera, without operating a device. For example, when you’re working on a computer, you often stop moving for at least five seconds while you’re thinking. We think that, by detecting those moments and measuring your pulse rate, this system could be used to support health management, by recording changes throughout the day.”

“In the case of a security camera, it might be possible to detect suspicious persons, based on the assumption that people about to do something risky have a high pulse rate. However, we don’t think that can be done using this technology alone. We think it might be possible through all-round analysis, by combining this with other technologies.”

“We’d like to release this as a device embedded in our products. Right now, we’re working to bring such products out this year, including smartphones as well as PCs.”

This Video is provided by DigInfo.tv, AkihabaraNews Official Partner.

3D/4D ultrasound hologram printing service using Pioneer’s compact holographic printer

Pioneer has announced a service that prints the expressions of unborn babies as 3D holograms, using a compact full-color hologram printer developed by the company last year.

“When an expecting mother has a check-up, a 3D/4D echogram is made, and that contains 3D data. So, we suggest taking pre-birth photos of the baby, by skillfully processing that data.”

This device can record a full-color, card-sized Lippmann hologram in 120 minutes, with one-color holograms taking 90 minutes.

“Previously, holograms were produced by making a model of the subject, shining two lights on the model, and photographing it. That method involved a lot of work, because it required a darkroom, knowledge of techniques, and specialized equipment. But with the device we’ve developed, even if you don’t have the actual object, as long as you have a CG design, then that can be used to record a hologram easily.”

The recording medium is a high-performance film specifically for holograms, called Bayfol HX, from Bayer Material Science. The hologram is visible within a 23 degree viewing angle, and is 200 components high and 300 wide, with each component containing 60 points of view vertically and horizontally.

“This method works by shining light containing information about the object from one side of the recording material, and reference light from the other side, and recording the state of interference between the two light sources in the material. A hologram is created by regularly arranging the recordings on the medium.”
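Putting the quoted figures together gives a rough sense of the recording workload. The arithmetic below is a back-of-the-envelope illustration, reading “60 points of view vertically and horizontally” as a 60×60 grid per component; it is not a published Pioneer specification.

```python
# Back-of-the-envelope arithmetic from the figures quoted above (illustrative only).
components = 200 * 300              # 200 components high x 300 wide
views_per_component = 60 * 60       # assuming a 60 x 60 grid of viewpoints per component
full_color_minutes = 120            # quoted recording time for a full-color hologram

print(components)                               # 60000 components in the hologram
print(views_per_component)                      # 3600 viewpoints per component
print(full_color_minutes * 60 / components)     # ~0.12 s of recording time per component
```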

As these holograms can be used to commemorate births, and Lippmann holograms can be viewed clearly in white light, Pioneer is exhibiting holograms in card-case holders and jewel-boxes with white LEDs.

Related: OPTICS & PHOTONICS International Exhibition 2013 (OPIE ’13)

This Video is provided to you by DigInfo.tv, AkihabaraNews Official Partner.

SOINN artificial brain can now use the internet to learn new things

A group at Tokyo Institute of Technology, led by Dr. Osamu Hasegawa, has succeeded in making further advances with SOINN, their machine learning algorithm, which can now use the internet to learn how to perform new tasks.

“Image searching technology is quite practical now. So, by linking our algorithm to that, we’ve enabled the system to identify which characteristics are important by itself, and to remember what kind of thing the subject is.”

These are pictures of rickshaws, taken in India by the Group. When one of these pictures is loaded, the system hasn’t yet learned what it is. So, it recognizes the subject as a “car,” which it has already learned. The system is then given the keyword “rickshaw.” From the Internet, the system picks out the main characteristics of pictures related to rickshaws, and learns by itself what a rickshaw is. After learning, even if a different picture of a rickshaw is loaded, the system recognizes it as a rickshaw.

“In the case of a rickshaw, there may be other things in the picture, or people may be riding in the rickshaw, but the system picks out only those features common to many cases, such as large wheels, a platform above the wheels, and a roof, and it learns that what people call a rickshaw includes these features. So, even with an object it hasn’t seen before, if the object has those features, the system can recognize it.”
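SOINN itself is a self-organizing incremental neural network, and the article does not describe its internals, so the sketch below is only a toy illustration of the rickshaw example: a new category is learned by averaging feature vectors gathered from image search, and new pictures are matched against the stored categories. The three-dimensional features and all names are hypothetical stand-ins, not the group’s actual representation.

```python
import numpy as np

class IncrementalRecognizer:
    """Toy prototype-based recognizer (illustrative only, not SOINN itself)."""

    def __init__(self):
        self.prototypes = {}          # label -> mean feature vector

    def learn_from_examples(self, label, feature_vectors):
        """Learn a category from features of web-search images (assumed to be
        extracted elsewhere, e.g. wheel size, platform, roof cues)."""
        self.prototypes[label] = np.mean(feature_vectors, axis=0)

    def recognize(self, feature_vector):
        """Return the closest known category."""
        distances = {label: np.linalg.norm(feature_vector - proto)
                     for label, proto in self.prototypes.items()}
        return min(distances, key=distances.get)

# Hypothetical 3-D features: [large wheels, platform above wheels, roof]
recognizer = IncrementalRecognizer()
recognizer.learn_from_examples("car", np.array([[0.4, 0.1, 0.9]]))

photo_of_rickshaw = np.array([0.9, 0.8, 0.7])
print(recognizer.recognize(photo_of_rickshaw))      # -> "car" (closest category so far)

# Teach it "rickshaw" from features gathered via image search, then retry.
web_rickshaw_features = np.array([[0.9, 0.9, 0.8], [0.8, 0.7, 0.6]])
recognizer.learn_from_examples("rickshaw", web_rickshaw_features)
print(recognizer.recognize(photo_of_rickshaw))      # -> "rickshaw"
```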

“With previous methods, for example, face recognition by digital cameras, it’s necessary to teach the system quite a lot of things about faces. When subjects become diverse, it’s very difficult for people to tell the system what sort of characteristics they have, and how many features are sufficient to recognize things. SOINN can pick those features out for itself. It doesn’t need models, which is a very big advantage.”

The Group is also developing ways to transfer learned characteristic data to other things. For example, the system has already learned knives and pens, and possesses the characteristic data that they are “pointed objects” and “stick-shaped objects” respectively. To make the system recognize box cutters, it is shown the similarities between box cutters and the knives and pens it has already learned, and the basic characteristics of being stick-shaped and pointed are transferred. If characteristic data for box cutters can be obtained from other systems, SOINN can guess from the transferred data that the objects are box cutters.
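The transfer step can be illustrated in the same toy style: attributes already learned for knives and pens are combined to describe box cutters, which the system has not directly observed. Again, this is a hypothetical sketch, not the group’s actual transfer mechanism.

```python
# Toy illustration of attribute transfer (hypothetical, not the group's actual mechanism).
known_attributes = {
    "knife": {"pointed"},
    "pen":   {"stick-shaped"},
}

# Describe "box cutter" in terms of attributes the system already understands,
# as if that description had been supplied by another system.
transferred = {"box cutter": known_attributes["knife"] | known_attributes["pen"]}

def guess(observed_attributes):
    """Return the transferred label whose attributes best match the observation."""
    return max(transferred, key=lambda label: len(transferred[label] & observed_attributes))

print(guess({"pointed", "stick-shaped"}))   # -> box cutter
```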

“Here, you’ve seen how this works for pictures. But SOINN can handle other types of information flexibly. For example, we think we could teach it to pick out features from audio or video data. Then, it could also utilize data from robot sensors.”

“With previous pet robots, such as AIBO, training involved patterns that were decided in advance. When those possibilities are exhausted, the robot can’t do any more. So, people come to understand what it’s going to do, and get bored with it. But SOINN can remember any amount of changes. So, in principle, it can develop without a scripted scenario.”

This Video is provided to you by DigInfo.tv, AkihabaraNews Official Partner.

Shibaful lush lawn iPhone case puts Yoyogi Park in your pocket

“Shibaful is the world’s first iPhone case modeled after a grassy park. This case is based on Yoyogi Park in Tokyo, and it’s the first in our World Parks series. For the next versions, we’re considering basing the grass on New York’s Central Park and London’s Hyde Park.”

“Regarding the technology, the case is made using electrostatic flocking. When fiber particles of five different colors are dropped from above, they form this kind of texture. There are all kinds of iPhone cases, but we think this is the first with a grassy texture. Also, it feels different when you stroke it and when you grip it. The green color is really fresh, and easy on the eyes, too. Another part of the concept is that you’ll sometimes want to turn your iPhone over, and rest your eyes by looking at the green.”

“The studio we work from, called co-lab Shibuya Atelier, is a shared office. We have shared access to 3D printers, laser cutters, and digital machines, so we can turn PC data into tangible objects. Here, we can try all kinds of ideas quickly and cheaply, taking those ideas closer to commercial production. In Japan, there are lots of small businesses with all sorts of technologies. We’ve produced this iPhone case to express our goal of creating new, exciting things, by combining small businesses’ technology with our ideas and prototyping abilities.”

“When we market this, we’ll initially do a limited run of 100. They’ll be available from the end of April, at eight stores throughout Japan. The price will be 3,980 yen. Meanwhile, we’re gearing up for mass production to meet future needs.”

This Video is provided to you by DigInfo.tv, AkihabaraNews Official Partner.

Fujitsu Laboratories – Touchscreen interface for seamless data transfer between the real and virtual worlds

Fujitsu Laboratories has developed a next-generation user interface which can accurately detect the user’s finger and what it is touching, creating an interactive, touchscreen-like system using objects in the real world.

“We think paper and many other objects could be manipulated by touching them, as with a touchscreen. This system doesn’t use any special hardware; it consists of just a device like an ordinary webcam, plus a commercial projector. Its capabilities are achieved by image processing technology.”

Using this technology, information can be imported from a document as data, by selecting the necessary parts with your finger.

This technology measures the shape of real-world objects, and automatically adjusts the coordinate systems for the camera, projector, and real world. In this way, it can coordinate the display with touching, not only for flat surfaces like tables and paper, but also for the curved surfaces of objects such as books.
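The article does not detail how the coordinate systems are aligned, but for the simplest case of a flat tabletop the camera-to-projector mapping can be expressed as a homography fitted from a few matching points. The point coordinates, projector resolution, and use of OpenCV below are assumptions for illustration; the curved-surface case described above would need a full 3D shape measurement instead.

```python
import numpy as np
import cv2

# Four matching points on the table: where the camera sees them, and where the
# projector drew them (invented coordinates; camera assumed to be 320x180,
# projector assumed to be 1280x800).
camera_pts = np.array([[40, 30], [280, 35], [285, 150], [45, 145]], dtype=np.float32)
projector_pts = np.array([[0, 0], [1280, 0], [1280, 800], [0, 800]], dtype=np.float32)

# Fit the camera-to-projector homography for the flat surface.
H, _ = cv2.findHomography(camera_pts, projector_pts)

# Map a fingertip detected in the camera image to projector coordinates,
# so feedback can be projected exactly where the finger is touching.
fingertip_cam = np.array([[[160.0, 90.0]]], dtype=np.float32)
fingertip_proj = cv2.perspectiveTransform(fingertip_cam, H)
print(fingertip_proj.ravel())
```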

“Until now, gesturing has often been used to operate PCs and other devices. But with this interface, we’re not operating a PC, but touching actual objects directly, and combining them with ICT equipment.”

“The system is designed not to react when you make ordinary motions on a table. It can be operated when you point with one finger. What this means is, the system serves as an interface combining analog operations and digital devices.”

To detect touch accurately, the system needs to detect fingertip height accurately. In particular, with the low-resolution camera used here (320 x 180), if fingertip detection is off by a single pixel, the height changes by 1 cm. So, the system requires technology for recognizing fingertips with high precision.
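The article gives only the end result (a one-pixel fingertip error corresponds to roughly 1 cm of height), but the kind of triangulation geometry behind such a figure can be sketched with the standard depth-error relation. The field of view, baseline, and working distance below are invented round numbers, not Fujitsu’s setup.

```python
import math

# Illustrative triangulation-error arithmetic (assumed numbers, not Fujitsu's setup).
image_width_px = 320                 # low-resolution camera mentioned in the article
horizontal_fov_deg = 60.0            # assumed field of view
baseline_m = 0.20                    # assumed camera-projector baseline
distance_m = 0.80                    # assumed distance to the tabletop

# Focal length expressed in pixels for this sensor width.
focal_px = (image_width_px / 2) / math.tan(math.radians(horizontal_fov_deg / 2))

# Depth sensitivity of triangulation: dZ ~= Z^2 / (f * B) per pixel of error.
height_error_per_pixel_m = distance_m ** 2 / (focal_px * baseline_m)
print(f"{height_error_per_pixel_m * 100:.1f} cm per pixel")   # roughly 1 cm with these numbers
```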

“Using a low-res webcam gives a fuzzy picture, but the system calculates 3D positions with high precision, by compensating through image processing.”

This system also includes technology for controlling color and brightness, in line with the ambient light, and correcting for individual differences in hand color. In this way, it can identify fingertips consistently, with little influence from the environment or individual differences.

Also, in situations that don’t use touch, the system can be operated by gesturing. In this demo, when you move your fist, you can manipulate the viewpoint for 3D CAD data. So, there could be applications for this touch system by combining it with current gesture systems.

“For example, we think this system could be used to show detailed information at a travel agent’s counter, or when you need to fill in forms at City Hall.”

“We aim to develop a commercial version of this system by fiscal 2014. It’s still at the demonstration level, so it’s not been used in actual settings. Next, we’d like to get people to use it for actual tasks, see what issues arise, and evaluate usability. We want to reflect such feedback in this system.”

This Video is provided to you by DigInfo.tv, AkihabaraNews Official Partner.

Turn a Printed Page Into a Touchscreen With This Brilliant Concept

Realizing that the oft-promised ‘paperless office’ may never actually come to fruition, researchers at Fujitsu are working on a backup plan that gives printed documents similar tablet-like touchscreen functionality.

DNA testing chip delivers results in one hour, paves way for personalized drug treatments

Panasonic, together with the Belgium-based research institution IMEC, has developed a DNA testing chip that automates all stages of obtaining genetic information, including preprocessing.

This development is expected to enable personalized, tailor-made therapy to become widespread.

“This is the chip we’ve actually developed. As you can see, it’s less than half the size of a business card. It contains everything needed for testing DNA. Once a drop of blood is inserted, the chip completes the entire process, up to SNP detection.”

SNPs are variations in a single DNA base among individuals.

Detecting SNPs makes it possible to check whether genetically transmitted diseases are present, evaluate future risks, and identify genes related to illness.

“By investigating SNPs, we can determine that this drug will work for this person, or this drug will have severe side-effects on that person. Investigating SNPs enables tailor-made therapy. But with the current method, it has to be done in a specialized lab, so it actually takes three to four days. In the worst case, it takes a week from sending the sample to getting the result. Our equipment can determine a patient’s SNPs in just an hour after receiving the blood.”

Testing is done simply by injecting the blood and a chemical into the chip, and setting it in the testing system.

First of all, the blood and chemical are mixed. DNA is then extracted from the mixed solution. The regions containing SNPs are then cut out and amplified. DNA amplification uses a technology called PCR, which copies the desired sections many times over by cycling the temperature. With the conventional method, this process took two hours.

“Through careful attention to thermal separation design, we’ve achieved high-speed PCR, where 30 temperature cycles are completed in nine minutes. We think this is one of the fastest PCR systems in the world.”
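As rough arithmetic on the quoted figures, assuming the ideal case where every PCR cycle doubles the amount of target DNA (real reactions are somewhat less efficient):

```python
# Rough arithmetic from the quoted figures (ideal doubling per cycle assumed).
cycles = 30
total_minutes = 9

print(total_minutes * 60 / cycles)   # 18.0 seconds per temperature cycle
print(2 ** cycles)                   # 1073741824: about a billion-fold amplification
```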

The amplified DNA is then sent through a micropump to a DNA filter. Here, the DNA is separated for each section length. Then, a newly developed electrochemical sensor identifies SNPs while the DNA is dissolved in the chemical.

“To implement this system on one chip, and make detection easy, the first thing we focused on was the actuators. This system requires a very small, powerful pump. In our case, we used a conductive polymer for the actuators. A feature of these actuators is they’re powerful, yet extremely compact. They can exert a pressure of up to 30 MPa.”

“Ultimately, we’d like to make this system battery-powered. We think that would enable genetically modified foods to be tested while still in the warehouse.”

This Video is provided to you by DigInfo.tv, AkihabaraNews Official Partner.

Via: Panasonic, IMEC

Double the brightness in low light photos with Panasonic’s new color filtering technology

Panasonic has developed a unique technology that doubles the brightness of color photography, by using micro color splitters instead of conventional color filters in the image sensor.

These two photos were taken using CCDs with the same sensitivity. The one on the right was taken with the color filter system used in nearly all digital cameras. The one on the left was taken with Panasonic’s new micro color-splitting system.

Until now, image sensors have produced color pictures by using …

Panasonic’s Developed a Simple Sensor Tweak That Vastly Improves Low Light Photography

Researchers at Panasonic’s imaging division have found a way to increase the sensitivity of digital camera sensors, which in turn equates to almost double the brightness in photos taken in low-light conditions. But the discovery has nothing to do with the sensor itself; instead, the company has improved the color-processing filter placed in front of it.