Remember 1995? Yeah, me neither. But to refresh our memories, we’ve got an "In and Out" list from the December 20, 1995 edition of USA Today. This strange artifact (found in the University of California-San Francisco tobacco document archives) gives a peek at how mainstream America was thinking about shifting trends in media, technology and, I guess, Mexican food in the mid-1990s.
The dream of wearing a lightweight headset, like the Oculus Rift, in order to simulate physical presence isn’t limited to the imaginary worlds of video games. One man’s vision is of immersive TV shows, movies and live sports. In fact, David Cole, co-founder of Next3D and an industry veteran who helps content creators and providers produce and deliver 3D, has been using his Rift dev kit to bring TV and film to life since the kits started shipping in March. The company is combining its video processing and compression technology with its experience in content production and stereoscopic delivery to offer what it calls Full-Court.
Next3D hopes to leverage its existing relationships with creators and providers to assist them in jumping into the world of live-action VR content. This includes both pre-recorded and live broadcasts. We wanted to see this firsthand, so we jumped at the opportunity to witness the creation of content and experience the results. This trial run of Next3D’s stereoscopic, 180-degree field-of-view camera rig, and the post-processing to adapt it to VR, was part of the production of the paranormal investigation show, Anomaly, at Castle Warden in St. Augustine, Fla. Being nearby, we braved the perils of the haunted surroundings to tell you about what we hope is only the beginning of virtual reality content.
The Oculus Rift virtual reality headset has turned some heads since its initial debut, and now that it has been in the hands of developers for a few months, the creators of Oculus Rift have received some incredible feedback. One of the biggest requests, however, was 1080p. And with the snap of a finger, that request has been granted.
Putting Your Finger in this Japanese Robot is a Step Toward Actual Virtual Reality
Welcome to Touchable TV!
In addition to showcasing their 8K, 7680×4320, Ultra-High-Def (Ridiculous-Def?) TV broadcasting kit last weekend, Japan’s NHK also demoed a haptic feedback device that simulates virtual 3D objects in real time. And the thing is, it’s really just a robot that, when you touch it, kinda touches you back.
NHK (Nippon Hōsō Kyōkai/Japan Broadcasting Corporation) is a public media organization somewhat analogous to the American PBS. However, entirely not at all like its American counterpart, the J-broadcaster’s got this: NHK Science & Technology Research Laboratories. Which is nice, because in cooperation with various corporate partners, NHK seriously delivers the tech.
Okay fine… so where’s the robot?
Haptic Virtual Reality that’s Actually Virtual – Just Put Your Finger in This Robotic Thingy!
In the image above, a brave test pilot is placing his index finger into the locus of a five-point artificial haptic feedback environment. The system analyzes and models a virtual 3D object, and that model in turn drives the movements and relative resistances of the five robotic arms controlling the five feedback points, generating a focused area of stimulus and response. It sounds complicated when you explain “robotic, artificial sense of touch” that way, but conceptually the idea is quite simple:
#1. Put your finger in here and strap on the velcro:
#2. It’ll feel like you’re touching something that doesn’t physically exist, like Domo-kun (Dōmo-koon) here:
Each of those shiny round points is the terminus of a robotic arm that either gives way or holds steady based on the relative position of the finger to the contours of the object being simulated. Each point’s position-resistance refreshes every 1/1000th of a second. Not bad.
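To make that one-millisecond refresh concrete, here’s a minimal sketch (Python) of the classic spring/penalty method for haptic rendering: if the fingertip has pushed inside the virtual surface, the arm resists in proportion to how deep it went; otherwise it gives way. NHK hasn’t published its algorithm, so the sphere stand-in for the virtual object, the stiffness constant, and the read_position/apply_force device callbacks are all hypothetical illustrations of the concept, not their implementation.

```python
import time
import numpy as np

# Hypothetical illustration only; NHK's actual control code isn't public.
SPHERE_CENTER = np.array([0.0, 0.0, 0.0])  # stand-in virtual object: a sphere
SPHERE_RADIUS = 0.05                       # meters
STIFFNESS = 800.0                          # N/m spring constant

def contact_force(fingertip: np.ndarray) -> np.ndarray:
    """Resistance for one feedback point via the spring/penalty method."""
    offset = fingertip - SPHERE_CENTER
    dist = float(np.linalg.norm(offset))
    if dist == 0.0 or dist >= SPHERE_RADIUS:
        return np.zeros(3)                    # outside the surface: give way
    normal = offset / dist                    # unit surface normal
    penetration = SPHERE_RADIUS - dist        # how far inside the surface
    return STIFFNESS * penetration * normal   # inside: hold steady, push back

def haptic_loop(read_position, apply_force, hz: int = 1000):
    """Refresh one point's position-resistance every 1/1000th of a second."""
    period = 1.0 / hz
    while True:
        start = time.perf_counter()
        apply_force(contact_force(read_position()))
        time.sleep(max(0.0, period - (time.perf_counter() - start)))
```

Run one such loop per feedback point and you have, in miniature, the five-arm arrangement described above.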
For practical, full-immersion VR to exist in a physical sense (that is, before VR becomes a direct neural interface a la The Matrix), our low-to-medium-resolution interactive haptic feedback interfaces will, for now and for a while, be intrinsically robotic. And for virtualizing entirely digital, non-real artifacts, NHK’s device is a step in that direction.
Of course five points of interactivity might not sound like much, but mindful of the generally leapfroggy nature of technological advancement, effectively replicating and surpassing the haptic resolution we now experience via the estimated 2,500 nerve receptors/cm² in the human hand doesn’t seem too tall an order.
If that does seem too tall, if that does sound too far out and overly optimistic, if it seems impossible that we’d ever be able to cram 2,500 sensory & feedback robots into a square centimeter – well, then your robo-dorkery score is low and you need to pay more attention. Because dude, we’re already building nanorobots atom-by-atom. Not an “if” question, this one.
Neat… But Anything Really New Here?
Of course, a wide variety of teleoperated force-feedback systems are either already in use or in-development (the da Vinci Surgical System; NASA’s Robonaut 2; etc.), so it’s important to emphasize here that NHK’s device is novel for a very particular reason: Maybe all, or nearly all, of the force-feedback haptic systems currently in use or development are based on an ultimately analog physicality. That is to say, whether it’s repairing a heart valve from another room, or, from a NASA building in Texas, tele-pushing a big shiny button on the International Space Station – what’s being touched from afar ultimately is a physical object.
So, what we might consider contemporary practical VR is more accurately a kind of partial VR. As the sense of touch is essential to our experience as human beings, incorporating that sense is a step toward interactive, actual factual, truly virtual virtual reality. Modeling and providing haptic feedback for non-physical objects, i.e., things that don’t really exist, in concert with other virtualization technologies – that’s a big step.
So What Can/Does/Will it Do?
NHK is kind of talking up the benefits for the visually impaired – which is good and noble and whatnot – but perhaps focusing on that is a bit of a PR move, because at least in theory this technology could go way, way beyond simple sensory replacement/enhancement.
An advanced version, incorporating the virtual touching of both simulated and/or real objects, could add layers of utility and interactivity to almost any form of work, entertainment, shopping… from afar we might discern how hard it is to turn a valve in an accident zone (partial VR), how bed sheets of various thread count feel against the skin (partial or full VR), the rough surface of the wall one hides behind in a videogame (proper VR), or even petting the dog, or petting… ummm, a friend (partial and/or proper VR – choose your own adventure)!
That’s a ways off, but in the near term, here’s how NHK envisions functionality for their touchable TV tech:
Matchmaker, Matchmaker, Make Me a Full-Immersion Omni-Sensory VR System!
Okay, so to get this ball rolling: NHK, meet VR upstart Oculus Rift. NHK & Oculus Rift, meet VR/AR mashup Eidos. NHK, Oculus Rift, and Eidos, meet UC Berkeley’s laser-activated pseudo-robotic hydrogels.
We’re all waiting for your pre-holodeck lovechild.
• • •
Reno J. Tibke is the founder and operator of Anthrobotic.com and a contributor at the non-profit Robohub.org.
Via: MyNavi (Japanese/日本語); DigInfo
Images: DigInfo; NHK
My brain hates first-person games, so most of the stuff I’ve read and seen about the Oculus Rift doesn’t really interest me. But I’d be willing to buy the Rift just for Joo-Hyung Ahn’s VR Cinema 3D. It’s an app for the Rift that lets you watch videos in 3D while you’re inside a virtual movie theater.
As Ben Kuchera described at the Penny Arcade Report, VR Cinema 3D doesn’t just let you watch videos on an enormous 3D screen. It lets you move around the theater. You can change seats and perhaps even faff about, fall asleep and ignore the movie altogether. The way Kuchera describes it, Joo-Hyung Ahn seems to have captured the perspective of being in a cinema remarkably well: your view of the screen changes as you move around, and the theater is believably dark and spacious. It’s a happy place for introverts and claustrophobes.
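Under the hood, that shifting perspective is just the standard look-at math: every frame, the camera’s view matrix is rebuilt from your seat position and the Rift’s reported head orientation while the virtual screen stays fixed in the world. A rough sketch of the idea (my own illustration, not Ahn’s code):

```python
import numpy as np

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Standard right-handed look-at view matrix (world -> eye space)."""
    eye, target, up = (np.asarray(v, dtype=float) for v in (eye, target, up))
    f = target - eye
    f /= np.linalg.norm(f)                      # forward
    s = np.cross(f, up)
    s /= np.linalg.norm(s)                      # right
    u = np.cross(s, f)                          # true up
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = -view[:3, :3] @ eye           # move the world around the eye
    return view

# The screen stays put in the world; changing seats only changes `eye`,
# so the screen looms large from the front row and recedes from the back.
front_row = look_at(eye=(0.0, 1.2, -6.0), target=(0.0, 2.5, -10.0))
back_row  = look_at(eye=(0.0, 2.0,  4.0), target=(0.0, 2.5, -10.0))
```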
Those of you who have a development unit of the Rift can use VR Cinema 3D for free. Head to Joo-Hyung Ahn’s website and click on the ‘Projects’ header to see the download link.
[via The PA Report]
Google Glass, Meta Wants Your Milkshake! …Do Consumers Want Either of Them?
Google Glass fever and upstart Meta’s rapidly financed US $100,000 Kickstarter campaign indicate #1. impending altered reality market maturity, or #2. everything new remixes the old, but still the geeks sing “Ohhhhh look, shiny!”
Google Glass: Loudest Voice in the Room
In development for several years and announced way back when, Glass finally got to developers and the geek elite about two months ago (for US $1500, plus getting oneself to a mandatory orientation meeting thingy). Glass is a kind of hybrid between a head-mounted display and augmented reality (AR) prosthetic outfitted with the internets. Really, if you’re reading Akihabara News you’re probably already hip, but if not there’s a search engine very ready to help you. Big G overlord Eric Schmidt indicated last month that a consumer-ready Glass product is about a year away. Realistically, at this point it’s unclear whether Glass is expected to be a viable consumer product or more of a proof-of-concept development platform.
Meta: Quickly Kickstarted, High-Profile Team Assembled – Working Man’s AR?
If you saw last year’s sci-fi short film “Sight” or the YouTube sci-fi series “H+,” you’re already hip to what Columbia University’s Meron Gribetz & pals are aiming for with Meta. While Glass is more of a HUD with some AR, Meta is less with the acronyms and more what the name suggests: information about information, i.e., Meta hopes to overlay manipulable imagery/data on the physical world, augmenting real reality and projecting virtual reality (VR) artifacts that you can fiddle with in real time.
For now, Meta has a slick video, a prototype, a crack team of engineers and advisors including professor Steven Feiner and wearable computing advocate guy, Steve Mann, and financing to get their dev kit into devs’ hands. To its credit, Meta does seem to aim less at generalized gee-whiz gimmickry and heads-up automated narcissism, and more toward getting actual work done.
Asian Alternatives:
First: POPSCI, very well done. The image on the above left melts one’s technosnarky heart.
In typical form, China has assimilated and excreted: the Baidu Eye is their Glass clone. There’s no indication of plans to bring it to market, so maybe they just wanted to say “Ha, ha, we can, too!” Or maybe they just wanted to do research and ride the Glass hype, which is understandable. But China, dude – might wanna think about doing some original stuff someday soon. That lack of intellectual capital is going to sting when “Designed in California” meets “Made in the U.S.A. With My 3D Printer.”
Over here in Japan we’ve got startup Telepathy One pushing a Glass-looking but, as they openly declare, not Glass-like AR headset (above right). Technology writers rhetorically speculating otherwise in a headline makes for good search engine optimization (other adjectives include: disingenuous, blithe, lame), but rather than compete with Glass, Telepathy One is focusing on social networking & multimedia. They too are clearly attempting to catch the contemporary current of AR hype, which is understandable. And hey, even if Telepathy One flashes and disappears, the fact that the phrase “Japanese Startup” can be used without the usual preface of “Why Aren’t There Any…” is a positive thing.
Okay Then, It’s Almost Doable – But Still…
Indeed, the apps, core software, computational capability, and the ubiquitous-enough network connectivity essential for decent AR are quickly ramping up. Along with innovative concepts like the AR/VR mashup Eidos Masks, alternatives to and more advanced versions of the above devices will likely continue to crop up. In fact, the never-even-close-to-being-vaguely-realized promises of VR are also showing signs of decreased morbidity. So…
We Actually Want It vs. They Want Us to Want It
Glass, the engine of the current AR hype machine, is of course conceptually nothing new, but it has the word “Google” in the name, so people are paying attention. Of course even Google gets ahead of itself from time to time (Buzz? Wave?), but lucky for them selling ads pays well, and they’ve got a boatload of cash to pour into whatever sounds cool. Millions have benefited from Google’s side projects and non-traditional ventures (Gmail much?), but the expectations leveled on Glass are… perhaps a bit much. Suffice it to say, Google absolutely nails search and software and web apps, but thus far big-G’s hardware projects have but limped.
But if we’ve got the cash, that probably won’t stop us! The soft tyranny of the tech elite is the ability to ring a shiny bell and then watch the doggies line up to pay. Luckily, actually useless products, products produced with too much hype, products produced with too much variety, products out of touch with the people who ultimately finance their creation – no matter how awesome they seem at first blush – will fail. Hard. (Note: Sony, if you’re here, please reread the last sentence!)
Until AR & VR technologies can out-convenience a smartphone, shrink into a contact lens, dispense with voice controls and the confusing non-verbal communication of fiddling with a touchscreen on your temple, i.e., until such devices can move beyond relatively impractical novelty, it’s unlikely they’ll amount to much more than narrowly focused research and demonstration platforms.
This is to say, along with inventing Google Glass, the search giant might also want to invent something for us to like, you know, do with it. Or maybe that’s not fair – so to be fair, one can concede that no new technology is perfect at 1.0, and any awesome innovation has to start somewhere…
Maybe it could start in 1995. Ask Nintendo about that.
• • •
Reno J. Tibke is the founder and operator of Anthrobotic.com and a contributor at the non-profit Robohub.org.
Props to io9 and Meta’s Kickstarter and Meta (but come on guys, tame that website – autoplay is really annoying). PopSci article/image; Watch the augmented reality-themed “Sight” and “H+” by clicking on those words.
Researchers with the University of Southern California Institute for Creative Technologies are working on a virtual therapist that appears lifelike and is aimed at helping those who need some type of counseling but aren’t yet ready or able to see a human counselor. In addition, the digital shrink, because of the way it is designed, can be used to monitor the minute details of a person’s body language over time, helping a live counselor track progress.
The virtual therapist is designed to look as much like a real human as possible, with realistic body movements and facial expressions similar to those you’d observe from a live in-person counselor. The therapist “observes” its client by means of a webcam and a gaming sensor that is mounted above the display, with the person’s various movements, facial expressions, and other such shifts being recorded in relation to the corresponding question.
The information gathered from the “clients” is fed into a computer running the virtual therapist’s software. That data guides how the virtual shrink approaches clients, what questions it asks, and how it interprets the body language of clients as they respond to those questions.
For example, a certain tone of voice and facial movements, such as averted eyes or a brief smile, each indicate different aspects of a client’s mental state, and can help the virtual therapist pinpoint whether the person is suffering from depression, anxiety, or another such disorder. One big purpose of the virtual therapist is in PTSD cases, helping soldiers address the issue and proceed toward live counseling.
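The researchers haven’t published their scoring model, but conceptually it’s a feature pipeline: per-question observations (gaze, smiles, vocal affect) get rolled up into indicator scores that software or a human clinician can read. A deliberately naive Python sketch follows; the features, weights, and thresholds are made up purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """Signals recorded while the client answers one question."""
    question_id: int
    gaze_aversion_secs: float    # time spent looking away
    smile_duration_secs: float   # brief vs. sustained smiles read differently
    voice_pitch_variance: float  # flattened affect shows low variance

def distress_score(observations: list[Observation]) -> float:
    """Roll per-question signals into a single 0-to-1 indicator.

    The weights here are invented for illustration; a real system would
    learn them from labeled interviews, such as the military-personnel
    data the article describes.
    """
    if not observations:
        return 0.0
    n = len(observations)
    avg_gaze = sum(o.gaze_aversion_secs for o in observations) / n
    avg_smile = sum(o.smile_duration_secs for o in observations) / n
    avg_pitch = sum(o.voice_pitch_variance for o in observations) / n
    score = 0.5 * min(avg_gaze / 5.0, 1.0)            # heavy gaze aversion
    score += 0.3 * (1.0 - min(avg_smile / 2.0, 1.0))  # few or short smiles
    score += 0.2 * (1.0 - min(avg_pitch / 1.0, 1.0))  # flat vocal affect
    return score  # above some threshold, flag for human follow-up
```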
The center responsible for the work has spent a lot of time collecting data from hundreds of military personnel, providing what the software needs to eventually identify the signs of PTSD. Other experiments being carried out by the researchers include the creation of a 3D human face hologram, and virtual full-size human projections that interact with real humans.
SOURCE: BBC News
Researchers create virtual therapist with webcam and game sensor is written by Brittany Hillen & originally posted on SlashGear.
The Eidos masks are really, really cool.
New toys like Google Glass are neat, but augmented reality (AR) has been around forever. Going back to WWII-era active gun sights, forward to the heads-up displays of modern aircraft & automobiles, to smartphone apps that overlay directions, traffic conditions, restaurant reviews, or, through facial recognition, the name of an acquaintance or colleague – throughout, the basic concept and implementation of AR have remained fundamentally unchanged.
The same could be said for virtual reality (VR), with one caveat: for practical purposes, it doesn’t really like, you know, exist. In 1992, when Lawnmower Man came out and hundreds of computer scientists died in tragic eye-rolling incidents, VR and enthusiasm for it pretty much crawled into the sci-fi/fantasy corner where it’s spent most of the last 20 years. VR is an endlessly fascinating concept, and it’s endlessly unavailable (though the Oculus Rift shows promise).
The novel and intriguing Eidos project, consisting of a pair of prototype sensory enhancement masks developed by students at the Royal College of Art in London, has very simply but cleverly shoehorned together both AR & VR into an indirect, yet direct augmentation of analog perception.
Basically, the system is digitally processing analog reality and then feeding it back to the user’s organic sight & sound receivers in real time – which is neither discretely AR nor VR, but kinda simultaneously both.
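In code terms, that’s a tight capture-process-redisplay loop. Here’s a toy OpenCV version that gives the camera feed a long-exposure-style motion trail; this is my guess at the sort of effect the vision mask produces, not the students’ actual implementation.

```python
import cv2
import numpy as np

def motion_trail_loop(camera_index: int = 0, decay: float = 0.15):
    """Capture analog reality, process it, feed it straight back to the eyes.

    Blending each new frame into a running average makes moving things
    smear into trails while static things stay sharp -- a crude stand-in
    for the Eidos vision mask's processed view.
    """
    cap = cv2.VideoCapture(camera_index)
    acc = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        f32 = frame.astype(np.float32)
        if acc is None:
            acc = f32.copy()
        cv2.accumulateWeighted(f32, acc, decay)            # processing step
        cv2.imshow("eidos-ish", cv2.convertScaleAbs(acc))  # feedback step
        if cv2.waitKey(1) & 0xFF == 27:                    # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    motion_trail_loop()
```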
Potential applications abound, so have a watch below, and the admittedly convoluted explanation above should make sense:
Via Mashable – Eidos Home
Akihabara News Contributor Reno J. Tibke is the founder and operator of Anthrobotic.com.
In the latest update to Valve‘s classic sci-fi first-person shooter Half-Life 2, the developer added official support for the Oculus Rift virtual reality headset, making it the second Valve game to support the new VR device; Team Fortress 2 was the first to get the green light.
Joe Ludwig, the Valve programmer who led development of the Oculus Rift-ified Team Fortress 2, announced Half-Life 2 support on the Oculus Developer Forums. The beta of the VR version is out now, and Ludwig says that the port should be shipping to everyone “in a few weeks,” letting Oculus Rift owners take full advantage of the Valve game.
However, Ludwig notes that this particular port is “a bit more raw” than the Team Fortress 2 port was when it first released for the Oculus Rift, so gamers will definitely experience some bugs, and Valve is already aware of a few, including problems with the HUD and the zoomed-in UI.
As for other Valve titles making their way through the Oculus Rift development process, Valve says that more of its games will be heading to the VR headset, including the Portal and Left 4 Dead series, which have already gone through some testing. However, no further details were given about upcoming games.
[via Polygon]
[Source: Oculus Developer Forums]
Half-Life 2 Oculus Rift support official is written by Craig Lloyd & originally posted on SlashGear.