The Celebrity Costumes That Were All Treat This Halloween

Heidi Klum, Beyoncé, Justin Timberlake and LeBron James embraced the spooky holiday with their wild outfits.

Toyota Tundra PIE Pro Has Pizza-Making Robots On Board

If you like your pizza hot, fresh, and delivered by robots, look no further: Toyota and Pizza Hut have you covered. The two brand giants recently collaborated on a high-tech pickup truck that carries all the gear needed to make pizza on the go, so each pie can be made to order on its way to the customer.

The build team at Toyota’s Motorsports Technical Center, at the company’s US headquarters in Plano, Texas, started with a full-size Tundra pickup truck, then kitted it out with all kinds of goodies. For starters, the Tundra PIE Pro runs not on fossil fuels but on hydrogen power, which it uses to generate electricity for its motor and its robotic pizza cookery.

While that might make treehuggers geek out, it’s the robots that get me all hot and bothered. The truck is equipped with two Nachi industrial robot arms, similar to the ones Toyota uses on its assembly lines – only smaller. Each robot is responsible for part of the pizza-making process – one doing prep, the other handling packaging and delivery.

The first robot opens one of the truck’s built-in pizza refrigerators and reaches inside to grab an uncooked pizza. It then carefully maneuvers the pizza into a high-speed professional pizza oven, where it rolls through as it bakes. The bake takes about seven minutes, and when the pizza pops out the other side of the oven, robot number two takes over.

It’s the second robot’s job to grab the freshly cooked pizza and place it onto a special rotating cutting board, at which point a mechanical pizza wheel rises and falls as the robot turns the pizza, resulting in perfectly triangular slices. Once the pizza is sliced, robot number two transfers it to a pizza box, closes the lid, then hands it over to the customer. When it’s all done, it rings a little bell as an added flourish.
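
If you want the sequence in one glance, here’s a minimal Python sketch of that two-robot hand-off as I’ve described it – the step names and the roughly seven-minute bake come from the demo, but the code itself is purely illustrative, not Toyota’s actual control software.

```python
# Illustrative model of the Tundra PIE Pro's two-robot workflow as
# described above -- my own sketch, not Toyota's control code.

PREP_STEPS = [
    "open built-in refrigerator",
    "grab uncooked pizza",
    "load pizza into conveyor oven",
]

FINISH_STEPS = [
    "lift cooked pizza onto rotating cutting board",
    "slice into triangles while turning the pizza",
    "transfer pizza to box and close lid",
    "hand box to customer",
    "ring bell",
]

BAKE_MINUTES = 7  # approximate time to roll through the oven


def run_order():
    for step in PREP_STEPS:
        print(f"[robot 1] {step}")
    print(f"[oven] baking for ~{BAKE_MINUTES} minutes")
    for step in FINISH_STEPS:
        print(f"[robot 2] {step}")


run_order()
```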

I watched the whole thing work like a seamless industrial ballet when Toyota revealed the truck at this year’s SEMA show in Las Vegas, filling the air around the booth with the aroma of fresh-baked pizza and making hungry journalists’ mouths water.

There’s a nifty making-of video below from Toyota’s Motorsports Technical Center showing off the design and build process behind the Tundra PIE Pro. Be sure to check it out.

It’s a very impressive build, though sadly, there are no immediate plans for robotic pizza delivery trucks to start hitting the streets. So for now, we’ll just have to rely on those pesky humans.

Watch this little robot transform to get the job done

Robots just want to get things done, but it’s frustrating when their rigid bodies simply don’t allow them to do so. Solution: bodies that can be reconfigured on the fly! Sure, it’s probably bad news for humanity in the long run, but in the meantime it makes for fascinating research.

A team of graduate students from Cornell University and the University of Pennsylvania made this idea their focus and produced both the modular, self-reconfiguring robot itself and the logic that drives it.

Think about how you navigate the world: If you need to walk somewhere, you sort of initiate your “walk” function. But if you need to crawl through a smaller space, you need to switch functions and shapes. Similarly, if you need to pick something up off a table, you can just use your “grab” function, but if you need to reach around or over an obstacle you need to modify the shape of your arm and how it moves. Naturally you have a nearly limitless “library” of these functions that you switch between at will.

That’s really not the case for robots, which are much more rigidly designed both in hardware and software. This research, however, aims to create a similar — if considerably smaller — library of actions and configurations that a robot can use on the fly to achieve its goals.
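
To make that “library” idea concrete, here’s a toy Python sketch of what such a lookup might look like – all of the configuration names and numbers are my own invention, not anything from the paper.

```python
from dataclasses import dataclass

# A toy "behavior library" -- my own illustration of the idea above,
# not the data structure from the paper. Names and numbers are invented.

@dataclass
class Behavior:
    name: str
    height_cm: float  # body height in this configuration
    action: str       # motion primitive this shape enables

LIBRARY = [
    Behavior("assembled", height_cm=25.0, action="drive"),
    Behavior("low-profile", height_cm=10.0, action="crawl"),
    Behavior("arm", height_cm=40.0, action="reach"),
]


def select_behavior(clearance_cm: float) -> Behavior:
    """Return the first configuration short enough for the clearance."""
    for behavior in LIBRARY:
        if behavior.height_cm <= clearance_cm:
            return behavior
    raise ValueError("no stored configuration fits this space")


print(select_behavior(clearance_cm=12.0).name)  # -> "low-profile"
```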

In their paper, published today in Science Robotics, the team documents the groundwork they undertook, and although the system is still extremely limited, it hints at how this type of versatility might be achieved in the future.

The robot itself, called SMORES-EP, might be better described as a collection of robots: small cubes (a popular form factor) equipped with wheels and magnets that can connect to each other and cooperate when one of them alone won’t do the job. The brains of the operation lie in a central unit equipped with a camera and a depth sensor, which it uses to survey the surroundings and decide what to do.

If it sounds a little familiar, that’s because the same team demonstrated a different aspect of this system earlier this year, namely the ability to identify spaces it can’t navigate and deploy items to remedy that. The current paper focuses on the underlying system the robot uses to perceive its surroundings and interact with them.

Let’s put this in more concrete terms. Say a robot like this one is given the goal of collecting the shoes from around your apartment and putting them back in your closet. It gets around your apartment fine but ultimately identifies a target shoe that’s underneath your bed. It knows that it’s too big to fit under there because it can perceive dimensions and understands its own shape and size. But it also knows that it has functions for accessing enclosed areas, and it can tell that by arranging its parts in such and such a way it should be able to reach the shoe and bring it back out.

The flexibility of this approach and the ability to make these decisions autonomously are where the paper identifies advances. This isn’t a narrow “shoe-under-bed-getter” function; it’s a general tool for accessing areas the robot itself can’t fit into, whether that means pushing a recessed button, lifting a cup sitting on its side, or reaching between condiments to grab one in the back.
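
In rough pseudocode terms, that general tool might look something like the sketch below – again, this is my own hypothetical rendering of the logic, not the authors’ code, and every name and number in it is invented.

```python
# Hypothetical sketch of the general "reach into a space the robot
# doesn't fit in" tool -- my own rendering, not the authors' code.

CONFIG_HEIGHTS_CM = {"assembled": 25.0, "flat-crawler": 10.0}  # invented


def plan_reach(target: str, clearance_cm: float) -> list[str]:
    """Build a step list for engaging a target inside a tight space."""
    plan = []
    reconfigured = False
    if CONFIG_HEIGHTS_CM["assembled"] > clearance_cm:
        # Too tall in the default shape: pick any configuration that fits.
        fitting = [n for n, h in CONFIG_HEIGHTS_CM.items()
                   if h <= clearance_cm]
        if not fitting:
            return [f"abort: {target} unreachable"]
        plan.append(f"reconfigure to {fitting[0]}")
        reconfigured = True
    plan += ["enter space", f"engage {target}", "back out"]
    if reconfigured:
        plan.append("reassemble")
    return plan


# The same tool serves very different goals:
print(plan_reach("shoe", clearance_cm=12.0))
print(plan_reach("recessed button", clearance_cm=30.0))
```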

A visualization of how the robot perceives its environment.

As with just about everything in robotics, this is harder than it sounds, and it doesn’t even sound easy. The “brain” needs to be able to recognize objects, accurately measure distances, and fundamentally understand physical relationships between objects. In the shoe-grabbing situation above, what’s stopping the robot from trying to lift the bed, leaving it floating above the ground while it drives underneath? Artificial intelligences have no inherent understanding of even basic concepts, so many of those concepts must be hard-coded, or algorithms must be created that reliably make the right choice.
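
One blunt way to paper over that gap is to encode such facts explicitly. Here’s a toy illustration of the idea – entirely my own, not from the paper – where a “movable” flag and a payload limit are what stop a naive planner from proposing to lift the bed.

```python
# Toy illustration of hard-coding physical common sense. Without the
# "movable" flag, nothing stops a naive planner from proposing to lift
# the bed out of the way. Entirely illustrative, not from the paper.

OBJECTS = {
    "bed":  {"movable": False, "mass_kg": 60.0},
    "shoe": {"movable": True,  "mass_kg": 0.4},
}

ROBOT_PAYLOAD_KG = 2.0  # invented capability limit


def can_move(name: str) -> bool:
    """A planner consults this before adding a 'move X' step to a plan."""
    props = OBJECTS[name]
    return props["movable"] and props["mass_kg"] <= ROBOT_PAYLOAD_KG


print(can_move("bed"))   # False -- plan around it instead
print(can_move("shoe"))  # True
```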

Don’t worry, the robots aren’t quite at the “collect shoes” or “collect remaining humans” stage yet. The tests to which the team subjected their little robot were more like “get around these cardboard boxes and move any pink-labeled objects to the designated drop-off area.” Even this type of carefully delineated task is remarkably difficult, but the bot did just fine — though rather slowly, as lab-based bots tend to be.

The authors of the paper have since finished their grad work and moved on to new (though surely related) things. Tarik Tosun, one of the authors with whom I talked for this article, explained that he’s now working on advancing the theoretical side of things as opposed to, say, building cube modules with better torque. To that end he helped author VSPARC, a simulation environment for modular robots. Although it’s tangential to the topic immediately at hand, the importance of this aspect of robotics research can’t be overstated.

You can find a pre-publication version of the paper here in case you don’t have access to Science Robotics.

Trump Appears to Fuel George Soros Conspiracy Theory About Migrant Caravan

President Donald Trump is once again throwing fuel on the fire of the George Soros conspiracy madness by seemingly insinuating that the former hedge fund manager and political activist is funding the migrant caravan currently making its way from Central America to the United States border.

Tesla's Summon upgrade turns vehicles into remote-controlled cars

Tesla’s self-parking Summon feature is getting an upgrade, and it’ll be ready in less than six weeks. In a series of tweets, chief executive Elon Musk revealed that the beefed-up feature will now allow vehicles to drive around parking lots and find empty parking spots.

Google & iRobot Team Up To Help Improve Smart Homes

In recent years, Google has managed to find its way into our homes through devices like Google Home, which works together with various smart home devices to provide voice-activated features and automation. Now it seems that Google will be teaming up with iRobot, the makers of the Roomba robot vacuum cleaner, to improve our smart homes.

This will rely on the indoor maps that a Roomba robot creates while cleaning your home. Since robots do not have “eyes”, the Roomba tries to map out your home as it goes about cleaning it, so that in the future it has a better idea of where to go. Using that mapping data, Google and iRobot will attempt to integrate their platforms so that, down the line, more complex tasks can be accomplished.

For example, with the Roomba able to map your home, Google could assign names to the rooms it discovers, so that you could set up a schedule for your Roomba to clean particular rooms, or command it with your voice via Google Home to clean an area of the house, like the kitchen or the living room.
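
Conceptually, the integration amounts to attaching human-friendly labels to regions of the Roomba’s map and routing voice commands through them. Here’s a speculative Python sketch of that flow – the regions, labels, and dispatch function are all hypothetical, since neither company has published an API for this.

```python
# Speculative sketch of room-aware cleaning commands. The map regions,
# labels, and dispatch function are all hypothetical -- neither iRobot
# nor Google has published this as an actual API.

# Regions the vacuum discovered while cleaning, keyed by an internal id
# (here just bounding-box corners in map coordinates).
MAP_REGIONS = {"region_1": ((0, 0), (4, 3)), "region_2": ((4, 0), (9, 5))}

# Labels a user (or the Google integration) assigns to those regions.
ROOM_LABELS = {"kitchen": "region_1", "living room": "region_2"}


def handle_command(utterance: str) -> str:
    """Route a 'clean the <room>' style request to a mapped region."""
    for room, region_id in ROOM_LABELS.items():
        if room in utterance.lower():
            bounds = MAP_REGIONS[region_id]
            return f"cleaning {room} (region {region_id}, bounds {bounds})"
    return "sorry, I don't know that room yet"


print(handle_command("OK Google, clean the kitchen"))
```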

Speaking to The Verge, iRobot’s CEO Colin Angle also mentioned how this information could eventually make our smart home devices more aware of their surroundings. “This idea is that when you say, ‘OK Google, turn the lights on in the kitchen,’ you need to know what lights are in the kitchen. And if I say, ‘OK future iRobot robot with an arm, go get me a beer,’ it needs to know where the kitchen and the refrigerator are.”
