Facebook developing brain-like AI to find deeper meaning in feeds and photos

Facebook News Feed diagram

Facebook’s current News Feed ranking isn’t all that clever — it’s good at surfacing popular updates, but it can miss lower-profile posts that are personally relevant. The company may soon raise the News Feed’s IQ, however, as it recently launched an artificial intelligence research group. The new team hopes to use deep learning, an AI technique loosely modeled on the brain’s neural networks, to determine which posts are genuinely important to each user. The technology could also sort a user’s photos, and it might even pick out the best shots. While the AI work has only just begun, the company tells MIT Technology Review that it should release some findings to the public; those breakthroughs could help society as a whole, not just social networking.
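
Facebook hasn’t said what model or signals it will actually use, but the gist of ranking-by-neural-network can be sketched in a few lines. The toy two-layer scorer below rates each story from a handful of invented features (likes, closeness to the poster, topic match, recency); every feature name, weight and story here is made up for illustration.

```python
# A toy sketch of "deep learning for feed ranking": a tiny two-layer network
# scores each story from a handful of hand-picked features. Feature names,
# weights and stories are invented; Facebook has not said what signals or
# architecture it actually plans to use.
import math

def relu(x):
    return max(0.0, x)

def score_story(features, w_hidden, w_out):
    """Run one story's feature vector through a two-layer network."""
    hidden = [relu(sum(w * f for w, f in zip(row, features))) for row in w_hidden]
    return 1.0 / (1.0 + math.exp(-sum(w * h for w, h in zip(w_out, hidden))))

# features: [likes (normalized), closeness to poster, topic match, recency]
stories = {
    "viral meme":            [0.9, 0.1, 0.2, 0.8],
    "close friend's update": [0.1, 0.9, 0.7, 0.6],
}
w_hidden = [[0.2, 1.5, 1.0, 0.3],   # "personal relevance" unit
            [1.2, 0.1, 0.2, 0.9]]   # "popularity" unit
w_out = [1.4, 0.6]                  # relevance weighted above raw popularity

ranked = sorted(stories, key=lambda s: score_story(stories[s], w_hidden, w_out), reverse=True)
print(ranked)  # the low-profile but personally relevant post outranks the viral one here
```

In practice the weights would be learned from billions of user interactions rather than hand-tuned, which is presumably what a dedicated deep learning group is for.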

Source: MIT Technology Review

Joaquin Phoenix finds real love with artificial intelligence in Spike Jonze’s ‘Her’ (video)

When sci-fi narratives explore artificial intelligence approaching a human level of sentience, they tend to focus on the negative (Skynet, anyone?). Not so with Spike Jonze’s new movie Her, a melancholy examination of what it means to be human in an increasingly inhuman world. The film stars Joaquin Phoenix as a social recluse who finds a friend in his smartphone’s Siri-inspired assistant, Samantha (voiced by Scarlett Johansson). The relationship blossoms in a way that manages to be both heartfelt and deeply unsettling, and Jonze’s take on a sort of technological animism feels culturally resonant. Her is set for a November release, and you can watch the trailer after the break.

Via: io9

Source: Annapurna Pictures

Study reveals AI systems are as smart as a 4-year-old, lack common sense

It’ll take a long time before we see a J.A.R.V.I.S. in real life — University of Illinois at Chicago researchers put MIT’s ConceptNet 4 AI through the verbal portions of a children’s IQ test, and rated its apparent intelligence as that of a 4-year-old. Despite an excellent vocabulary and a knack for recognizing similarities, a lack of basic life experience leaves one of the best AI systems unable to answer even easy “why” questions. Those sound simple, but not even the famed Watson supercomputer is capable of that kind of human-like comprehension, and research lead Robert Sloan believes we’re far from developing a system that is. We hope scientists get cracking and conjure up an AI worthy of our sci-fi dreams… so long as it doesn’t pull a Skynet on humanity.
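
The study itself is about test scores rather than code, but a toy example helps show why a knowledge graph like ConceptNet does well on verbal similarity questions yet stumbles on “why” questions. The hand-written relations below are illustrative only and are not pulled from the real ConceptNet 4 data or API.

```python
# Toy illustration of why a ConceptNet-style knowledge graph handles
# "how are X and Y alike?" far better than open-ended "why" questions.
# The tiny relation set below is hand-written, not pulled from ConceptNet 4.
relations = {
    ("apple", "IsA"): {"fruit", "food"},
    ("banana", "IsA"): {"fruit", "food"},
    ("apple", "HasProperty"): {"sweet"},
}

def alike(a, b):
    """Verbal similarity question: report the 'IsA' categories two things share."""
    return relations.get((a, "IsA"), set()) & relations.get((b, "IsA"), set())

def why(question):
    """Open-ended 'why' questions need causal common sense the graph simply lacks."""
    return "no answer: requires life experience, not just stored facts"

print(alike("apple", "banana"))       # shared categories, e.g. {'fruit', 'food'}
print(why("Why do we shake hands?"))  # the kind of question the study says stumps the AI
```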

[Image credit: Kenny Louie]

Via: Extremetech

Source: University of Illinois Chicago

Apple announces Anki Drive, an AI robotics app controlled through iOS

Apple is just starting its WWDC keynote this morning, but it’s already announcing something quite interesting: a new company called Anki and its inaugural iOS app, Anki Drive, which centers around artificial intelligence and robotics. The name is Japanese for “memorize,” and the game features smart cars that are capable of driving themselves (although you can certainly take over at any time) and of communicating with your iPhone using Bluetooth LE. When placed on a printed race track, these intelligent vehicles can sense the track up to 500 times a second. The iOS-exclusive game is available in the App Store today as a beta, which you’ll need to sign up for — the full release won’t be coming until this fall — and it’s billed as a “video game in the real world.” According to the developers, “the real fun is when you take control of these cars yourselves,” which we can definitely attest to — the WWDC demo cars had weapons, after all.
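
Anki hasn’t published how its cars actually work, so the following is a speculative sketch of the kind of sense-decide-act loop the announcement describes: poll the track sensor roughly 500 times a second, steer autonomously, and yield to the player whenever a command arrives over Bluetooth LE. Every function name and value here is invented.

```python
# Hypothetical sketch of an Anki Drive car's control loop: read the printed
# track ~500 times per second, steer itself, but defer to the player the
# moment manual input arrives. Anki has not published its control code or
# BLE protocol; everything below is a stand-in.
import time

def read_track_sensor():
    """Stand-in for the optical sensor reading the printed track markings."""
    return {"lane_offset": 0.03, "segment": 12}   # dummy values

def player_command():
    """Stand-in for a steering command arriving over Bluetooth LE, or None."""
    return None

def autopilot_steering(lane_offset, gain=2.0):
    """Simple proportional correction back toward the lane centre."""
    return -gain * lane_offset

def control_loop(hz=500, steps=5):
    period = 1.0 / hz
    for _ in range(steps):
        reading = read_track_sensor()
        cmd = player_command()
        steering = cmd if cmd is not None else autopilot_steering(reading["lane_offset"])
        print(f"segment {reading['segment']}: steering {steering:+.2f}")
        time.sleep(period)

control_loop()
```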

Follow our liveblog for all of the latest news from WWDC 2013.

Source: Anki

Google and NASA team up for D-Wave-powered Quantum Artificial Intelligence Lab

Google. NASA. Quantum computers. Seriously, everything about the new Quantum Artificial Intelligence Lab at the Ames Research Center is exciting. The joint effort between Mountain View and America’s space agency will put a 512-qubit machine from D-Wave at the disposal of researchers from around the globe, with the USRA (Universities Space Research Association) inviting teams of scientists and engineers to share time on the unique supercomputer. The goal is to study how quantum computing might be leveraged to advance machine learning, a branch of AI that has proven crucial to Google’s success. The internet giant has already done some work with quantum computing; now the aim is to see whether that experimentation can translate into real-world results. The idea, for Google at least, is to combine the extreme (but highly specialized) power of the quantum bit with its oceans of traditional data centers to build more accurate models for everything from speech recognition to web search. And maybe, just maybe, with the help of quantum computers your phone will finally realize you didn’t mean to say “duck.”
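
D-Wave’s machines minimize problems expressed as a QUBO (quadratic unconstrained binary optimization) via quantum annealing, so researchers at the lab would be mapping machine-learning tasks into that form. As a rough illustration of the format, here’s a classical simulated-annealing stand-in on a tiny, made-up QUBO; it shows the shape of the problem, not anything Google or NASA actually plans to run.

```python
# A D-Wave machine minimizes a QUBO (a quadratic cost over 0/1 variables) by
# quantum annealing. This classical simulated-annealing stand-in shows the
# problem format on a tiny, made-up QUBO; it is not the lab's actual workload.
import math, random

Q = {(0, 0): -1.0, (1, 1): -1.0, (2, 2): 2.0, (0, 1): 2.0, (1, 2): -1.5}

def energy(x):
    return sum(coef * x[i] * x[j] for (i, j), coef in Q.items())

def simulated_anneal(n_vars=3, steps=2000, t0=2.0):
    x = [random.randint(0, 1) for _ in range(n_vars)]
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-3           # simple cooling schedule
        candidate = x[:]
        candidate[random.randrange(n_vars)] ^= 1     # flip one bit
        delta = energy(candidate) - energy(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate                            # accept downhill or lucky uphill moves
    return x, energy(x)

print(simulated_anneal())   # typically ([1, 0, 0], -1.0) or ([0, 1, 0], -1.0)
```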

Via: New York Times

Source: Google Research Blog

Kimera Systems wants your smartphone to think for you

When Google took the wraps off Google Now, we all got pretty excited about the potential of a preemptive virtual assistant. Kimera Systems wants to build a similar system, but one that would make Mountain View’s tool look about as advanced as a Commodore 64. The company’s founder, Mounir Shita, envisions a network of connected devices that use so-called smart software agents to track your friends, suggest food at a restaurant and even find someone to paint your house. That explanation is a bit simplistic, but it gets to the heart of what the Artificial General Intelligence network is theoretically capable of. In this world (as you’ll see in the video after the break) you don’t check Yelp or text your friend to ask if they’re running late. Instead, your phone would recognize that you’d walked into a particular restaurant, analyze the menu and suggest a meal based on your tastes. Meanwhile, your friend has just reached the bus stop, but her bus is running a little behind. Her phone knows she’s supposed to meet you, so it sends an alert to let you know of the delay. With some spare time on your hands, your phone would suggest making a new social connection or walking to a nearby store to pick up that book sitting in your wishlist. It’s creepy, ambitious and perhaps a bit unsettling that we’d be letting our phones run our lives. Kimera is trying to raise money to build a plug-in for Android and an SDK to start testing its vision. You can check out the promotional video after the break and, if you’re so inclined, pledge some cash to the cause at the source.
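
Kimera describes its Artificial General Intelligence network only in broad strokes, so here’s a deliberately crude, rule-based stand-in for the restaurant scenario above: the phone notices which venue you’ve entered, filters the menu against your tastes and flags a friend’s delay. All of the data, names and rules are invented and bear no relation to whatever Kimera is actually building.

```python
# Toy, rule-based stand-in for the scenario described above: the phone notices
# which place you've entered, checks a friend's ETA, and reacts. All data and
# rules here are invented for illustration.
MENU = {"Cafe Nola": ["lentil soup", "mushroom risotto", "steak frites"]}
TASTES = {"vegetarian": True}

def on_enter_venue(venue, friend_eta_min, meeting_in_min=0):
    dishes = MENU.get(venue, [])
    if TASTES["vegetarian"]:
        dishes = [d for d in dishes if "steak" not in d]
    print(f"Suggested order at {venue}: {dishes}")
    if friend_eta_min > meeting_in_min:
        delay = friend_eta_min - meeting_in_min
        print(f"Heads up: your friend is running about {delay} minutes late.")
        print("Spare time: the bookshop next door has an item from your wishlist.")

on_enter_venue("Cafe Nola", friend_eta_min=10)
```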

Via: Read Write Web

Source: Kimera Systems (RocketHub)

Georgia Tech receives $900,000 grant from Office of Naval Research to develop ‘MacGyver’ robot

Robots come in many flavors. There’s the subservient kind, the virtual representative, the odd one with an artistic bent, and even robo-cattle. But, typically, they all hit the same roadblock: they can only do what they are programmed to do. Of course, there are those that possess some AI smarts, too, but Georgia Tech wants to take this to the next level and build a ‘bot that can interact with its environment on the fly. The project hopes to give machines deployed in disaster situations the ability to find objects in their environment for use as tools, such as placing a chair to reach something high, or building bridges from debris. The idea builds on previous work in which robots learned to move objects out of their way, and on developing an algorithm that allows them to identify items and assess their usefulness as tools. This would be backed up by programming that gives the droids a basic understanding of rigid body mechanics and of how to construct motion plans. The Office of Naval Research’s interest comes from potential future applications, such as robots working side by side with military personnel out on missions, which, along with the iRobot 110, forms the early foundations for the cyber army of our childhood imaginations.
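
Georgia Tech hasn’t published its algorithm yet, so the snippet below is only a rough sketch of the core idea it describes: rate nearby objects as candidate tools for one concrete task (reaching something high) using a few crude rigid-body checks. The objects, properties and thresholds are all invented.

```python
# Rough sketch of tool selection for a single task: which nearby object could
# serve as a step to reach something high? Objects and thresholds are invented;
# this is not Georgia Tech's actual algorithm.
objects = [
    {"name": "chair",  "height_m": 0.9, "flat_top": True, "supports_kg": 120, "rigid": True},
    {"name": "pillow", "height_m": 0.2, "flat_top": True, "supports_kg": 5,   "rigid": False},
    {"name": "crate",  "height_m": 0.5, "flat_top": True, "supports_kg": 80,  "rigid": True},
]

def tool_score(obj, needed_height_m, robot_mass_kg):
    """Higher is better; zero means 'not usable as a step' under the crude checks."""
    if not (obj["rigid"] and obj["flat_top"] and obj["supports_kg"] >= robot_mass_kg):
        return 0.0
    return min(obj["height_m"], needed_height_m) / needed_height_m

best = max(objects, key=lambda o: tool_score(o, needed_height_m=0.8, robot_mass_kg=60))
print(best["name"])   # 'chair' -- the tallest object that passes the stability checks
```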

Via: GizMag

Source: Georgia Tech

IBM debuts new mainframe computer as it eyes a more mobile Watson

Those looking for a juxtaposition of IBM’s past and future needn’t look much further than two bits of news out of the company this week. The first comes with IBM’s announcement of its new zEnterprise EC12 mainframe server — a class of computer that may be a thing of the past in some places, but which still serves a fairly broad range of companies. In addition to an appearance that lives up to the “mainframe” moniker, this one promises 25 percent more performance per core than its predecessor and 50 percent more capacity. The second bit of news involves Watson, the company’s AI effort that rose to fame on Jeopardy! and has since gone on to find a number of new roles. As Bloomberg reports, one of its next steps may be to take on Siri in the smartphone space. While there’s no indication of a broader consumer product, IBM sees a range of possible applications for a mobile Watson in business and enterprise — even, for instance, giving farmers the ability to ask when they should plant their crops. Before that happens, though, IBM says it needs to give Watson more “senses,” such as image recognition, so it can respond to real-world input — not to mention learn all it can about any given subject.

Source: Bloomberg, IBM

Scientists investigating AI-based traffic control, so we can only blame the jams on ourselves

Ever found yourself stuck at the lights, convinced that whatever is controlling these things is just trying to test your patience and that you could do a better job? Well, it turns out you might — at least partly — be right. Researchers at the University of Southampton have just revealed that they are investigating the use of artificial intelligence-based traffic lights, with the hope that it could be used in next-generation road signals. The research uses video games and simulations to assess different traffic control systems, and apparently we humans do a pretty good job. The team at Southampton hopes to emulate this human-like approach with new “machine learning” software. With cars already being tested with WiFi, mobile connectivity and GPS on board for accident prevention, a system such as this would certainly have a lot of data to tap into. There’s no indication as to when we might see a real-world trial, but at least we’re reminded, for once, that we’re not quite ready to be replaced by our robotic overlords entirely.
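
The Southampton system is described only in outline, so here’s a toy stand-in for the sort of judgement the human players apparently made well: an adaptive signal that extends the green phase for whichever approach has the longer queue, and only switches when it’s clearly worth it. The thresholds and timings below are arbitrary.

```python
# Toy adaptive traffic signal: extend the green for the busier approach and
# only switch phases when the other direction is clearly more backed up.
# Thresholds are arbitrary; this is not the Southampton team's software.
def choose_green(queues, current_green, min_extension=2, switch_margin=3):
    """queues: dict of approach -> waiting cars. Returns (green approach, extra seconds)."""
    busiest = max(queues, key=queues.get)
    if busiest != current_green and queues[busiest] - queues[current_green] >= switch_margin:
        return busiest, min_extension                                  # switch phases
    return current_green, min_extension + queues[current_green] // 2   # extend the green

state = "north-south"
for queues in [{"north-south": 4, "east-west": 2},
               {"north-south": 1, "east-west": 9}]:
    state, seconds = choose_green(queues, state)
    print(f"green: {state} for {seconds}s more")
```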

Via: PhysOrg

Robot stock traders lose $440,000,000 in 45 minutes, need someone to spell it out

Humans never learn and apparently neither do robots. Autonomous trading AIs went on a spending spree at Knight Capital Group in New Jersey this week, buying up shares in everything from RadioShack to Ford and American Airlines (ouch) in a 45-minute frenzy of disobedience. The company tried to offload the unwanted stock, but discovered it was already nearly half a billion dollars in the red — enough to wipe out its entire profit from 2011 and “severely impact” its ability to conduct business. If only it had protected itself with one of these.
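
The story doesn’t say what safeguards Knight Capital had in place, but the standard protection it hints at is a pre-trade risk check: a kill switch that rejects orders and halts the algorithm once losses or exposure cross a hard limit. The sketch below is generic, with arbitrary limits, and bears no relation to Knight’s actual systems.

```python
# Generic pre-trade risk check / kill switch: reject orders and halt trading
# once exposure or realized losses cross a hard limit. Limits are arbitrary;
# this is not a description of Knight Capital's systems.
class KillSwitch:
    def __init__(self, max_loss_usd, max_position_usd):
        self.max_loss = max_loss_usd
        self.max_position = max_position_usd
        self.realized_loss = 0.0
        self.position = 0.0
        self.halted = False

    def approve(self, order_value_usd):
        """Return True only if the order keeps exposure inside the hard limit."""
        if self.halted or abs(self.position + order_value_usd) > self.max_position:
            self.halted = True          # reject the order and stop trading
            return False
        self.position += order_value_usd
        return True

    def record_loss(self, loss_usd):
        self.realized_loss += loss_usd
        if self.realized_loss > self.max_loss:
            self.halted = True

guard = KillSwitch(max_loss_usd=1_000_000, max_position_usd=10_000_000)
print(guard.approve(8_000_000))    # True  -- within limits
print(guard.approve(5_000_000))    # False -- would breach the position cap, trading halts
```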

Via: New Scientist

Source: NY Times