Digital Content Expo 2013: Simulated Environments

Unlike some of the bigger trade shows or expos, which tend to exhibit relatively complete products, the Digital Content Expo puts the spotlight on prototype systems and technology that are still squarely in the R&D stage. Describing itself as a ‘bridge for digital innovation’, the event focuses on exhibiting the creative potential and possibilities of technology, and on communicating the efforts of the research laboratories and universities exploring it to a wider audience.

We’ve already blogged a little bit about the AquaTop Display, a system which turns ordinary bath water into an interactive screen, and to follow up on that we have a two-part post on some of the other technologies, systems and concepts being exhibited. This first post covers some of the displays exploring the potential of simulated environments, whether augmented reality (AR), mixed reality (MR) or virtual reality (VR).

The Oculus Rift started gaining attention as a Kickstarter project last year: a VR headset intended to immerse gamers in a simulated environment by combining an ultra-wide field of view, a high resolution display and a rapid head-tracking system. The hype surrounding the Oculus Rift was definitely palpable at DC Expo this year, and a number of displays were taking on the challenge of creating content for this immersive headset. OCULUS FESTIVAL exhibited an experience that incorporated touch through hand shaking and gripping actions. The movements of a mounted rubber hand are synchronised with the actions of any user wearing the Oculus Rift headset, creating a more dynamic, lifelike interaction with virtual characters like Hatsune Miku.

A group of students at Keio University also used the Oculus headset to showcase a project called the Virtual Rope Slider as part of IVRC 2013 (International Collegiate Virtual Reality Contest), a contest focused on student projects related to interactivity and robots. Users wearing the Oculus headsets sit on a motorised seat attached to a Tarzan-esque rope, and choose from five different simulated scenarios which are projected on a screen in front of them: Jungle, Edo, Space, Fantasy and City. Each simulation is synchronised with the movements of the seat to create the sensation of being propelled into the chosen environment.

Another take on the experience of flying or soaring through an environment was exhibited by Solidray with their “Flight Experience” setup, which aimed to create the sensation of soaring back and forth in mid-air on a swing. The experience is built around a system they call Duo Site: two white screens, each 2.2m x 2.9m in size, set up perpendicular to each other. Two projectors project 3D animations onto the two screens while the user stands facing the entire setup.

The user wears 3D glasses fitted with a sensor to track head movements. A sense of immersion is created using vection – the illusion of self-motion created when a large portion of a person’s visual field moves. Although the user remains stationary, watching a moving environment through the 3D glasses (in this case, what you would see if you were swinging back and forth in mid-air) confuses the visual system, allowing the user to experience different sensations of movement in a simulated environment.
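
To give a rough sense of how such a setup fits together, here is a minimal sketch of a vection-style render loop for two perpendicular screens. Everything in it is an assumption on our part – the swing parameters and the read_head_pose and render_view helpers are illustrative stand-ins, not Solidray’s actual code.

    # Minimal sketch of a head-coupled, vection-based render loop.
    # The viewer stays still while the virtual camera swings; the
    # tracked head pose offsets the view so the parallax stays right.
    import math
    import time

    SWING_PERIOD = 4.0     # seconds for one full back-and-forth swing
    SWING_AMPLITUDE = 1.5  # radians of maximum swing angle (assumed)
    ROPE_LENGTH = 3.0      # metres (assumed)

    def read_head_pose():
        # Stand-in for the sensor on the 3D glasses; a real system
        # would return the tracked head position here.
        return (0.0, 1.6, 0.0)

    def render_view(screen, camera_pos):
        # Stand-in for rendering a stereo 3D frame to one of the two
        # perpendicular 2.2m x 2.9m screens.
        print(f"{screen}: camera at {camera_pos}")

    while True:
        t = time.time()
        angle = SWING_AMPLITUDE * math.sin(2 * math.pi * t / SWING_PERIOD)
        # Virtual viewpoint swinging on a pendulum of length ROPE_LENGTH.
        swing = (ROPE_LENGTH * math.sin(angle), -ROPE_LENGTH * math.cos(angle))
        hx, hy, hz = read_head_pose()
        for screen in ("front", "side"):
            render_view(screen, (swing[0] + hx, swing[1] + hy, hz))
        time.sleep(1 / 60)  # target 60 fps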

The relatively simple setup of Duo Site makes for a flexible platform which could be combined with additional devices such as the Kinect, or other motion sensors and game controllers, to create any number of interesting augmented or virtual experiences that feel truly immersive.

On the other hand, the University of Tokyo’s Ishikawa Oku Laboratory showcased some of their work in Dynamic Image Control (DIC), a research theme exploring how dynamic phenomena – such as the wing patterns of a flying bee, or a red blood cell flowing through a vein – that are difficult to comprehend when observed with the human eye, or even with conventional imaging systems and their relatively slow frame rates, can be shown in a comprehensible and intelligible way. On display was “Lumipen” (るみぺん), a projection mapping system intended for use with fast-moving, high-speed objects.

Traditional projection mapping is mainly used with static objects and surfaces, as it is difficult to project an image or animation onto a surface moving at high speed without some misalignment between image and object, due to delays in the mapping systems used. To solve this problem, Lumipen uses a high-speed vision sensor that can capture a thousand images per second to detect the movement of an object, while a high-speed optical gaze controller called the Saccade Mirror ensures that the direction of the projected image is realigned within milliseconds.
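
As a rough illustration of the control loop described above, here is a minimal sketch in Python. The real system runs on dedicated 1000 fps vision hardware and the Saccade Mirror’s optics; the capture_frame, set_mirror_angles and project_frame helpers and the calibration constant below are all our own stand-ins, not the lab’s code.

    # Hypothetical sketch of a Lumipen-style track-and-project loop.
    def capture_frame():
        # Stand-in for the high-speed vision sensor (~1000 frames/second).
        # Returns (x, y, brightness) pixel samples of the tracked object.
        return [(310, 242, 255), (312, 240, 250), (308, 244, 252)]

    def set_mirror_angles(pan, tilt):
        # Stand-in for steering the Saccade Mirror's two axes.
        print(f"mirror -> pan={pan:.4f}, tilt={tilt:.4f}")

    def project_frame():
        # Stand-in for drawing the current video/graphics frame.
        pass

    IMAGE_CENTER = (320, 240)
    PIXELS_PER_RADIAN = 2000.0  # assumed optics calibration constant

    for _ in range(1000):  # one simulated second at 1000 Hz
        pixels = capture_frame()
        # Centroid of the detected object, weighted by brightness.
        total = sum(b for _, _, b in pixels)
        cx = sum(x * b for x, _, b in pixels) / total
        cy = sum(y * b for _, y, b in pixels) / total
        # Re-aim the projection at the object every frame, so any
        # misalignment lasts only milliseconds.
        set_mirror_angles((cx - IMAGE_CENTER[0]) / PIXELS_PER_RADIAN,
                          (cy - IMAGE_CENTER[1]) / PIXELS_PER_RADIAN)
        project_frame()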

The Lumipen system enables content-rich visual information such as videos and graphics to be projected onto fast-moving objects in real time, and could even be used in dynamic situations such as concerts and sports games in the future.

Another lab based at the University of Tokyo, Naemura Lab, presented a mixed reality interface called DeruChara which can form 3D images in mid-air that respond to user interaction. The system works through a depth sensor and projector set up above a flat surface laid with several physical blocks. The depth sensor maps the terrain of the surface and detects any changes and user interactions based on shadows. For the Expo display, moving the blocks caused a small chick to appear and move around.
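
For illustration, here is a minimal sketch of the depth-differencing idea: compare each new depth frame against a remembered baseline of the empty surface, and react wherever something has entered the scene. The helper names, grid size and threshold are all assumptions on our part, as the lab’s actual pipeline wasn’t shown.

    # Hypothetical sketch of detecting moved blocks with an overhead
    # depth sensor, DeruChara-style.
    BASELINE = [[800] * 4 for _ in range(4)]  # depth (mm) of the empty surface

    def read_depth_frame():
        # Stand-in for the depth sensor; a block has been placed near
        # the centre, so those cells read closer to the sensor.
        frame = [row[:] for row in BASELINE]
        frame[1][2] = 720
        frame[2][2] = 715
        return frame

    def spawn_character(x, y):
        print(f"chick appears at cell ({x}, {y})")

    THRESHOLD = 50  # mm of change that counts as a moved block or a hand

    frame = read_depth_frame()
    for y, row in enumerate(frame):
        for x, depth in enumerate(row):
            # Cells much closer than the remembered terrain mean
            # something (a block, a hand) has entered the scene.
            if BASELINE[y][x] - depth > THRESHOLD:
                spawn_character(x, y)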

There was generally a pretty good mix of displays, both of new systems being developed to support interactions and of displays that focused more on using existing systems to create interesting immersive experiences. While none of the displays were finished products per se, they do showcase some of the interesting developments we can expect to see expanded upon as content increasingly mixes simulation with “reality”. Stay tuned for next time, when we focus on some of the interesting interfaces that were exhibited at the Expo.

AquaTop Display for Interactive Bathing

Always in search of new things to blog about, we went out to the Digital Content Expo in Odaiba this Thursday, and we were not disappointed. This is our first post showing what we found.

The AquaTop Display from Koike Laboratory is one of a kind.

The idea came to one of its creators while in the bath, out of the difficulty of bringing an information device such as a personal computer or tablet into the bathtub. The project took six months to conceptualise and prototype, and it is currently still in the prototype phase.

The system is set up around a bathtub, or a 600mm x 900mm x 250mm rectangular plastic tank, full of water mixed with commercially available bath milk, onto which the display is projected. Both containers are equipped with waterproof speakers on the inside to provide greater interactivity. Suspended above the water are a projector and a depth camera (such as a Microsoft Kinect) connected to a personal computer. Currently two displays can be projected onto the water: a content viewer (for videos or photos) and a jellyfish shooting game.
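
As a rough sketch of how the interaction side might work: because the bath milk makes the surface opaque, the depth camera sees a flat plane everywhere except where a finger breaks through it, so touches can be picked out by comparing each depth pixel against the water plane. All of the numbers and helpers below are illustrative assumptions on our part, not Koike Laboratory’s code.

    # Hypothetical sketch of depth-based touch detection over the tank.
    WATER_DEPTH = 1000   # mm from the camera down to the water surface
    FINGER_MARGIN = 30   # anything this much above the surface is a touch

    def read_depth_pixels():
        # Stand-in for a Kinect-style depth frame: (x, y, distance_mm).
        # Two pixels here are closer than the water plane, i.e. fingertips.
        return [(100, 80, 1002), (101, 80, 998), (240, 160, 960), (241, 161, 955)]

    def send_touch_event(x, y):
        print(f"touch at ({x}, {y})")  # e.g. fire a shot in the jellyfish game

    for x, y, dist in read_depth_pixels():
        # Pixels noticeably closer than the water plane are fingers
        # poking up through the surface; treat them as touches.
        if dist < WATER_DEPTH - FINGER_MARGIN:
            send_touch_event(x, y)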

The liquid display enables the user to become one with it, making for a truly immersive experience, as the different functionalities of the display can be controlled both from above and below the surface.

Though, as said before, it is still at the prototype stage, it is not difficult to take a leap and imagine this system being implemented in homes…imagining how much easier it would make a parent’s life when it comes to getting their children to take a bath makes it an interesting innovation…until said children refuse to leave the bathtub, that is.