Print your own blood vessels, no need for red toner

Barely 24 hours after we told you about printing your own bones, the franken-science continues with the announcement that blood vessels are next on the body-parts-you-can-print list. Unsurprisingly, you’ll need more than just regular toner if you want to start printing your own at home, but researchers at the application-oriented research organization Fraunhofer claim to have cracked it by adding some good old ‘two-photon polymerization’ into the mix — yeah, obvious once you know. The added photon special sauce is what makes the printed synthetic tubes biofunctionalized, which in turn enables living body cells to dock onto them — we’re guessing that’s important. Sounds cute, but how long until we can start printing whole people — Weird Science, anyone?


Print your own blood vessels, no need for red toner originally appeared on Engadget on Mon, 19 Sep 2011 13:46:00 EDT. Please see our terms for use of feeds.

Via: CNET | Source: Fraunhofer

Scientists attempt to give spark of life to all-synthetic metal cells

Just because it hasn’t happened yet doesn’t mean it can’t; at least that’s what a Scottish research group is hoping as it attempts to create reproductive synthetic cells made completely from metal. At this stage, the idea of sentient metallic life remains a distant sci-fi dream, but researchers at the University of Glasgow have already birthed iChells — inorganic chemical cells. These bubbles, formed from the likes of tungsten, oxygen and phosphorus, can already self-assemble, possess an internal structure, and are capable of the molecular in-and-outs expected of their biological counterparts. Researchers are still tackling how to give these little wonders the ability to self-replicate, and possibly evolve — further cementing our doom post-Robot Apocalypse. Check out our future synthetic overlord’s first steps in a video after the break.


Scientists attempt to give spark of life to all-synthetic metal cells originally appeared on Engadget on Mon, 19 Sep 2011 07:59:00 EDT.

Via: DVICE, New Scientist | Source: University of Glasgow

Dublin City University adopts Chromebooks — time to go streaking through the quad!

Instead of handing out cheap mugs (or iPads… or iPods) and sending students on their merry way, administrators at Dublin City University will be showering incoming freshmen with free Chromebooks — in doing so, it’ll become the first European higher-education institution to adopt the device. As you probably recall, Google’s always-connected laptops have gone through various incarnations throughout the years, but they’ve always included a dash of WiFi or 3G and a pinch of hasty boot — intentionally ditching local storage for the cloud. The Google Chromebooks for Education partnership is said to support DCU’s commitment to make 80 percent of its classes partially or fully online by 2013, allowing coeds to stay in their Scooby Doo pajamas or attend class from Pi Kappa Delta HQ. Now, if only the dining hall supported online ordering…

Dublin City University adopts Chromebooks — time to go streaking through the quad! originally appeared on Engadget on Thu, 15 Sep 2011 20:39:00 EDT.

Via: Silicon Republic | Source: Dublin City University

SUFFER ’11 farming robot plays a multitude of roles, takes commands via Wiimote (video)

So much for stereotypes, eh? The future of farming is being painted in a far different light here at NEXT Aarhus, where a team from the University of Southern Denmark brought in the largest Wiimote-controlled robot that we’ve ever seen. The heretofore unnamed beast (going by SUFFER ’11 for the time being) is a farming-centric machine that’s designed to take the load off the landowners (while providing a bit of enjoyment along the way). Put simply, this modular bot can have various apparatuses swapped into its midsection — one pop-in attachment could pick potatoes, while another could disperse pesticide, for example. There’s even a module that’ll enable it to detect rows and plow down the obvious routes, making it that much easier for farmers of the next millennium to take time off. Of course, the standout feature from our perspective was the inbuilt Bluetooth and WiFi, which allowed the demonstrator to operate the ‘bot with a standard Wii remote. Per usual, the vid’s after the break.


SUFFER ’11 farming robot plays a multitude of roles, takes commands via Wiimote (video) originally appeared on Engadget on Tue, 30 Aug 2011 14:08:00 EDT.


Cyclone Display exemplifies ‘multi-colored expression,’ totally heading to a nightclub near you (video)

Ever heard of Yoichi Ochiai? You have now. Hailing from Japan’s University of Tsukuba, this whizkid was on hand here at SIGGRAPH to showcase one of his latest creations — and it just so happened to be one of the trippiest yet. The Cyclone Display was a demonstration focused on visual stimulation; a projector positioned above interacted with a plate of spinning disks. Underneath, a cadre of motors was controlled by a connected computer, and as the rotation and velocity changed, so did the perceived pixels and colors. The next step, according to Ochiai, would be to blow this up and shrink it down, mixing textures in with different lighting situations. With a little help, a drab nightclub could douse its walls in leopard print one night, or zebra fur another. Interactive clubbing never sounded so fun, eh? You know the drill — gallery’s below, video’s a click beneath.


Cyclone Display exemplifies ‘multi-colored expression,’ totally heading to a nightclub near you (video) originally appeared on Engadget on Fri, 12 Aug 2011 22:37:00 EDT.

Source: Yoichi Ochiai

Researchers demo 3D face scanning breakthroughs at SIGGRAPH, Kinect crowd squarely targeted

Lookin’ to get your Grown Nerd on? Look no further. We just sat through an hour and a half of high-brow technobabble here at SIGGRAPH 2011, where a gaggle of gurus with IQs far, far higher than ours explained in detail what the future of 3D face scanning holds. Scientists from ETH Zürich, Texas A&M, Technion-Israel Institute of Technology, and Carnegie Mellon University, as well as a variety of folks from Microsoft Research and Disney Research labs, were on hand, with each group revealing a slightly different technique for solving an all-too-similar problem: painfully accurate 3D face tracking. Haoda Huang et al. revealed a highly technical new method that combines marker-based motion capture with 3D scanning in an effort to overcome drift, while Thabo Beeler et al. took a drastically different approach.

Those folks relied on a markerless system that used a well-lit, multi-camera setup to overcome occlusion, with anchor frames acting as the linchpin of its capture pipeline. J. Rafael Tena et al. developed “a method that not only translates the motions of actors into a three-dimensional face model, but also subdivides it into facial regions that enable animators to intuitively create the poses they need.” Naturally, that one’s most useful for animators and designers, but the first system detailed is obviously gunning to work on lower-cost devices — Microsoft’s Kinect was specifically mentioned, and it doesn’t take a seasoned imagination to see how in-home facial scanning could lead to far more interactive games and augmented reality sessions. The full shebang can be grokked by diving into the links below, but we’d advise you to set aside a few hours (and rest up beforehand).


Researchers demo 3D face scanning breakthroughs at SIGGRAPH, Kinect crowd squarely targeted originally appeared on Engadget on Wed, 10 Aug 2011 21:46:00 EDT.

Via: Physorg | Source: Carnegie Mellon University, Microsoft Research

PocoPoco musical interface box makes solenoids fun, gives Tenori-On pause (video)

Think SIGGRAPH‘s all about far-out design concepts? Think again. A crew from the Tokyo Metropolitan University IDEEA Lab was on hand here at the show’s experimental wing showcasing a new “musical interface,” one that’s highly tactile and darn near impossible to walk away from. Upon first glance, it reminded us most of Yamaha’s Tenori-On, but the “universal input / output box” is actually far deeper and somewhat more interactive in use. A grand total of 16 solenoids are built in, and every one of ’em is loaded up with sensors.

Users can tap any button to create a downbeat (behind the scenes, a sequencer flips to “on”), which will rise in unison with the music until you tap it once more to settle it (and in turn, eliminate said beat). You can grab hold of a peg to sustain a given note until you let it loose. There are a few pitch / tone buttons that serve an extra purpose — one that we’re sure you can guess by their names. Those are capable of spinning left and right, with pitch shifting and speeds increasing / decreasing with your movements. The learning curve here is practically nonexistent, and while folks at the booth had no hard information regarding an on-sale date, they confirmed to us that hawking it is most certainly on the roadmap… somewhere. Head on past the break for your daily (video) dose of cacophony.
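For the curious, the tap-to-toggle behavior described above boils down to a plain step sequencer. Here’s a minimal sketch of that logic in Python — all names are our own invention, since the actual PocoPoco internals haven’t been published:

```python
# A minimal model of PocoPoco's tap-to-toggle sequencer behavior, as
# described in the demo: one tap raises a solenoid and arms its beat,
# a second tap settles it and clears the beat. (Hypothetical names.)

class StepSequencer:
    """A 16-step sequencer where tapping a solenoid toggles its beat."""

    def __init__(self, steps=16):
        self.active = [False] * steps  # one on/off flag per solenoid

    def tap(self, step):
        """Flip the step: rising solenoid = beat on, settled = beat off."""
        self.active[step] = not self.active[step]
        return self.active[step]

    def pattern(self):
        """Indices of the steps that fire on the current loop."""
        return [i for i, on in enumerate(self.active) if on]

seq = StepSequencer()
seq.tap(0)
seq.tap(4)
seq.tap(4)  # second tap settles the peg and eliminates said beat
print(seq.pattern())  # [0]
```

The peg-hold and pitch-spin gestures would layer additional state on top of the same toggle core.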


PocoPoco musical interface box makes solenoids fun, gives Tenori-On pause (video) originally appeared on Engadget on Wed, 10 Aug 2011 18:50:00 EDT.


Microsoft’s KinectFusion research project offers real-time 3D reconstruction, wild AR possibilities

It’s a little shocking to think about the impact that Microsoft’s Kinect camera has had on the gaming industry at large, let alone the 3D modeling industry. Here at SIGGRAPH 2011, we attended a KinectFusion research talk hosted by Microsoft, where a fascinating new look at real-time 3D reconstruction was detailed. To better appreciate what’s happening here, we’d actually encourage you to hop back and have a gander at our hands-on with PrimeSense’s raw motion sensing hardware from GDC 2010 — for those who’ve forgotten, that very hardware was finally outed as the guts behind what consumers simply know as “Kinect.” The breakthrough wasn’t in how it allowed gamers to control common software titles sans a joystick — the breakthrough was the price. The Kinect took 3D sensing to the mainstream, and moreover, allowed researchers to pick up a commodity product and go absolutely nuts. Turns out, that’s precisely what a smattering of highly intelligent blokes in the UK have done, and they’ve built a new method for reconstructing 3D scenes (read: real-life) in real-time by using a simple Xbox 360 peripheral.

The actual technobabble ran deep — not shocking given the academic nature of the conference — but the demos shown were nothing short of jaw-dropping. There’s no question that this methodology could be used to spark the next generation of gaming interaction and augmented reality, taking a user’s surroundings and making them a live part of the experience. Moreover, game design could be significantly impacted, with live scenes able to be acted out and captured in real-time rather than built frame by frame within an application. According to the presenter, the tech created here can “extract surface geometry in real-time,” right down to the millimeter level. Of course, the Kinect’s camera and abilities are relatively limited when it comes to resolution; you won’t be building 1080p scenes with a $150 camera, but as CPUs and GPUs become more powerful, there’s nothing stopping this from scaling with the future. Have a peek at the links below if you’re interested in diving deeper — don’t be shocked if you can’t find the exit, though.
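To give a flavor of what “reconstructing a scene from a depth camera” involves at the lowest level: every depth pixel can be back-projected into a 3D point using the pinhole camera model, and KinectFusion-style systems fuse millions of such points over time. The sketch below shows only that first back-projection step; the intrinsics are rough Kinect-like values we’ve assumed for illustration, not official figures:

```python
# Back-projecting a depth pixel (u, v, depth) into a 3D camera-space
# point via the pinhole model -- the raw ingredient that dense
# reconstruction systems like KinectFusion fuse into surface geometry.
# fx, fy, cx, cy are assumed, ballpark intrinsics for a 640x480 sensor.

def deproject(u, v, depth_m, fx=585.0, fy=585.0, cx=320.0, cy=240.0):
    """Map a pixel plus its depth (meters) to (x, y, z) in camera space."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# A pixel at the optical center lands on the camera's forward axis:
print(deproject(320, 240, 2.0))  # (0.0, 0.0, 2.0)
```

The hard part of the research — aligning each new frame against the accumulated model and fusing it in real-time on the GPU — sits on top of this simple geometry.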

Microsoft’s KinectFusion research project offers real-time 3D reconstruction, wild AR possibilities originally appeared on Engadget on Tue, 09 Aug 2011 14:48:00 EDT.

Via: Developer Fusion | Source: Microsoft Research [PDF]

$1 chip tests for HIV in 15 minutes flat, fits in your wallet

Getting tested for STDs used to mean a doctor’s visit, vials of blood, and days, weeks, or even months of anxiously waiting for results. mChip aims to change all that, while simultaneously ridding your brain of viable excuses not to get tested. It works like this: one drop of blood goes on the microfluidics-based optical chip, 15 minutes pass, and boom, the AmEx-sized device will confirm whether or not you have syphilis and / or HIV. The bantam gizmo is practically foolproof, as reading the results doesn’t require any human interpretation whatsoever. Plus, it’s cheap — cheaper than a coffee at Starbucks. One dollar cheap. Researchers at Columbia University claim the mChip has a 100 percent detection rate, although there’s a four to six percent chance of getting a false positive — a stat similar to traditional lab tests. As you’d likely expect, there’s hope that the inexpensive mChip will help testing efforts in places like Africa to detect HIV before it progresses to AIDS. Next stop: the self-service pharmacy at CVS?
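Those two numbers — 100 percent detection, four to six percent false positives — are worth a quick back-of-envelope pass, because together with how common the disease is in the tested population, they determine how much a positive result actually means. The prevalence figure below is purely an assumption for illustration; it’s not from the study:

```python
# Bayes' rule applied to the quoted mChip figures: perfect sensitivity,
# ~5% false-positive rate. Prevalence is an assumed value for the sake
# of the arithmetic, not a number from the Columbia study.

def positive_predictive_value(sensitivity, false_positive_rate, prevalence):
    """P(infected | positive test)."""
    true_pos = sensitivity * prevalence
    false_pos = false_positive_rate * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# In a population where 5% carry the virus, with a 5% false-positive rate:
ppv = positive_predictive_value(1.0, 0.05, 0.05)
print(round(ppv, 3))  # 0.513 -- roughly half of positives warrant a retest
```

That’s why even near-perfect screening tests are normally paired with a confirmatory follow-up, and it’s the same math that makes the researchers’ “similar to traditional lab tests” comparison the right benchmark.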

$1 chip tests for HIV in 15 minutes flat, fits in your wallet originally appeared on Engadget on Thu, 04 Aug 2011 07:32:00 EDT.

Via: DVICE | Source: Nature Magazine

Gig.U hopes to bring Gigabit networks and straight cash, homey, to university communities

Familiar with Johnny Appleseed? He who traipsed ’round the country with a sack o’ seeds on his shoulder, planting trees hither and yon and leaving apple orchards blooming in his wake? Gig.U is similar, only it’s a project that aims to plant Gigabit networks in 29 collegiate communities to facilitate research, attract start-ups, and stimulate local economies. The plan is just getting underway, and the schools in question — including Virginia Tech, the University of Hawaii, and the University of Alaska — are asking private telcos and companies to help make their high-speed dreams a reality. In addition to benefiting the immediate areas, Gig.U sees these swift new networks functioning as hubs in a faster nationwide broadband system. The colleges claim that construction of these new information superhighways won’t start for several years, so it’ll be some time before they can help elevate us from our current state of broadband mediocrity. Chop, chop, guys.

Gig.U hopes to bring Gigabit networks and straight cash, homey, to university communities originally appeared on Engadget on Wed, 27 Jul 2011 23:07:00 EDT.

Via: New York Times | Source: Gig.U