Pete Davidson Slammed For Joke About Navy Veteran’s War Wound

Demands for the ‘SNL’ comedian to apologize to Dan Crenshaw flooded social media on Sunday.

In Georgia, 11th-hour daggers over voting system.

Republican gubernatorial nominee Brian Kemp made a hacking allegation against Democrats.

Watch this little robot transform to get the job done

Robots just want to get things done, but it’s frustrating when their rigid bodies simply don’t allow them to do so. Solution: bodies that can be reconfigured on the fly! Sure, it’s probably bad news for humanity in the long run, but in the meantime it makes for fascinating research.

A team of graduate students from Cornell University and the University of Pennsylvania made this idea their focus and produced both the modular, self-reconfiguring robot itself and the logic that drives it.

Think about how you navigate the world: If you need to walk somewhere, you sort of initiate your “walk” function. But if you need to crawl through a smaller space, you need to switch functions and shapes. Similarly, if you need to pick something up off a table, you can just use your “grab” function, but if you need to reach around or over an obstacle you need to modify the shape of your arm and how it moves. Naturally you have a nearly limitless “library” of these functions that you switch between at will.

That’s really not the case for robots, which are much more rigidly designed both in hardware and software. This research, however, aims to create a similar — if considerably smaller — library of actions and configurations that a robot can use on the fly to achieve its goals.
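
To make that “library” idea a bit more concrete, here’s a minimal sketch, in Python, of a controller that looks up named behaviors instead of being built around a single one. The behavior names and actions are made up for illustration; they aren’t the team’s actual code.

```python
# A minimal, hypothetical sketch of a "behavior library": the controller picks a
# named action instead of being hard-wired for one task. Names are illustrative,
# not the paper's actual API.
def drive(goal: str) -> None:
    print(f"driving toward {goal}")

def crawl(goal: str) -> None:
    print(f"reconfiguring into a low profile and crawling toward {goal}")

def grasp(target: str) -> None:
    print(f"extending an arm-like configuration to grasp {target}")

BEHAVIOR_LIBRARY = {"drive": drive, "crawl": crawl, "grasp": grasp}

# Switching behaviors is just a lookup, so new entries extend the robot's repertoire.
BEHAVIOR_LIBRARY["crawl"]("the space under the bed")
```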

In their paper published today in Science Robotics, the team documents the groundwork they undertook, and although it’s still extremely limited, it hints at how this type of versatility will be achieved in the future.

The robot itself, called SMORES-EP, might be better described as a collection of robots: small cubes (a popular form factor) equipped with wheels and magnets that can connect to each other and cooperate when a single unit can’t do the job on its own. The brains of the operation lie in a central unit equipped with a camera and depth sensor, which it uses to survey the surroundings and decide what to do.

If this sounds a little familiar, that’s because the same team demonstrated a different aspect of this system earlier this year, namely the ability to identify spaces it can’t navigate and deploy items to remedy that. The current paper focuses on the underlying system the robot uses to perceive its surroundings and interact with them.

Let’s put this in more concrete terms. Say a robot like this one is given the goal of collecting the shoes from around your apartment and putting them back in your closet. It gets around your apartment fine but ultimately identifies a target shoe that’s underneath your bed. It knows that it’s too big to fit under there because it can perceive dimensions and understands its own shape and size. But it also knows that it has functions for accessing enclosed areas, and it can tell that by arranging its parts in such and such a way it should be able to reach the shoe and bring it back out.
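
As a rough illustration of that size check, here’s a hypothetical Python sketch in which the robot compares the clearance it perceives against the dimensions of the shapes it knows how to assume. The shape names and numbers are invented; they aren’t from the SMORES-EP system.

```python
# Hypothetical sketch of the size check described above: compare the perceived
# clearance against the dimensions of shapes the robot knows how to assume.
# All names and numbers are invented, not taken from the paper.
ROBOT_SHAPES = {
    "default": {"height_cm": 24.0, "width_cm": 30.0},
    "low_profile": {"height_cm": 9.0, "width_cm": 12.0},
}

def shape_that_fits(clearance_height_cm: float, clearance_width_cm: float):
    """Return the name of a known shape that fits the opening, or None if none does."""
    for name, dims in ROBOT_SHAPES.items():
        if dims["height_cm"] <= clearance_height_cm and dims["width_cm"] <= clearance_width_cm:
            return name
    return None

# Roughly 12 cm of clearance under the bed rules out the default shape.
print(shape_that_fits(clearance_height_cm=12.0, clearance_width_cm=80.0))  # -> "low_profile"
```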

The flexibility of this approach and the ability to make these decisions autonomously are where the paper identifies advances. This isn’t a narrow “shoe-under-bed-getter” function; it’s a general tool for accessing areas the robot itself can’t fit into, whether that means pushing a recessed button, lifting a cup sitting on its side, or reaching between condiments to grab one in the back.

A visualization of how the robot perceives its environment.

As with just about everything in robotics, this is harder than it sounds, and it doesn’t even sound easy. The “brain” needs to be able to recognize objects, accurately measure distances, and fundamentally understand physical relationships between objects. In the shoe-grabbing situation above, what’s stopping the robot from trying to lift the bed and leave it floating above the ground while it drives underneath? Artificial intelligences have no inherent understanding of basic concepts like these, so each one must either be hard-coded or handled by an algorithm that reliably makes the right choice.
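
For example, a rule like “don’t try to relocate the bed” has to live somewhere. Here’s a hypothetical sketch of what such a hard-coded constraint might look like; the labels and plan structure are invented for illustration and aren’t from the paper.

```python
# Hypothetical illustration of a hard-coded "common sense" rule: without something
# like it, nothing stops a planner from proposing "lift the bed and drive under it."
# Labels and plan structure are invented, not taken from the paper.
from typing import NamedTuple

class Step(NamedTuple):
    action: str  # e.g. "drive", "reconfigure", "move_object"
    target: str  # the object the step acts on

IMMOVABLE = {"bed", "wall", "refrigerator"}  # objects the robot must never try to relocate

def plan_is_valid(plan: list) -> bool:
    """Reject any plan that tries to relocate an object flagged as immovable."""
    return all(not (s.action == "move_object" and s.target in IMMOVABLE) for s in plan)

print(plan_is_valid([Step("drive", "bedroom"), Step("move_object", "bed")]))  # -> False
```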

Don’t worry, the robots aren’t quite at the “collect shoes” or “collect remaining humans” stage yet. The tests to which the team subjected their little robot were more like “get around these cardboard boxes and move any pink-labeled objects to the designated drop-off area.” Even this type of carefully delineated task is remarkably difficult, but the bot did just fine — though rather slowly, as lab-based bots tend to be.

The authors of the paper have since finished their grad work and moved on to new (though surely related) things. Tarik Tosun, one of the authors with whom I talked for this article, explained that he’s now working on advancing the theoretical side of things as opposed to, say, building cube-modules with better torque. To that end he helped author VSPARC, a simulator environment for modular robots. Although it is tangential to the topic immediately at hand, the importance of this aspect of robotics research can’t be overestimated.

You can find a preprint of the paper here in case you don’t have access to Science Robotics.

Huawei Mate 20 Pro screen has strong green bleeding, owners complain

Halloween is long over, but something might continue to haunt Huawei Mate 20 Pro owners like an eerie green ghost. And no, we’re not talking about Ghostbusters’ Slimer. Just days after proud new owners got their hands on Huawei’s latest and greatest, a flood of complaints about screen problems came crashing into forums and social media. The complaints are clear: …

This Fan Trailer Turns DC's Silly Crossover Into a Delightful Movie

Doctor Manhattan vs. Superman. Why? I don’t know. Honestly, I don’t care. There’s some knowledge I just don’t need in my life. But I do know that this fan adaptation of Doomsday Clock is great.


Winklevoss twins claim cryptocurrency guru stole 5,000 bitcoins

The Winklevoss twins are back in the news for their involvement in the cryptocurrency world, but this time they likely wouldn’t want to celebrate. The New York Times has learned that the two sued crypto investor and ex-convict Charlie Shrem for alle…

Hearthstone Expansion ‘Rastakhan’s Rumble’ Announced

Fans of Blizzard’s Hearthstone might be pleased to learn that during BlizzCon 2018, the company officially announced the game’s next expansion, Rastakhan’s Rumble (for those unfamiliar, Rastakhan is the current ruler of the Zandalari Empire and played a key role on the Horde side of the World of Warcraft: Battle for Azeroth expansion).

As expected of this expansion and given its name, Rastakhan’s Rumble will focus on the troll empire. According to Blizzard, “It’s an event like none other! You’ll THRILL to the roar of the crowd and the incredible crash of thudding flesh and searing spells. You’ll savor the scent of fresh funnel cake on the air! Witness with breathless anticipation as world-renowned troll gladiators put their mojo on the line to clash with magic and might–in style! When the Loa call your name and team spirit fills your thundering heart, nothing’s gonna stop you from leaping into the arena to JOIN the gladiatorial mayhem!”

The expansion will feature 135 brand-new cards, a new play setting, and, as expected, a new mechanic in the form of Overkill, which according to Blizzard triggers a bonus effect when a card with the Overkill keyword deals damage in excess of its target’s health. The expansion will also introduce the Loa (the troll gods) and Legendary Champions.
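
To spell out how a rule matching that description might work, here’s a hypothetical sketch in Python; the class and method names are invented and don’t reflect Hearthstone’s actual implementation.

```python
# A hypothetical reading of the Overkill rule as described above: the bonus fires
# only when the attacker deals more damage than the target has health remaining.
# Class and method names are invented; this is not Blizzard's implementation.
class Minion:
    def __init__(self, attack: int, health: int, has_overkill: bool = False):
        self.attack = attack
        self.health = health
        self.has_overkill = has_overkill

    def attack_target(self, target: "Minion") -> None:
        excess = self.attack - target.health  # damage beyond what the kill required
        target.health -= self.attack
        if self.has_overkill and excess > 0:
            print("Overkill bonus effect triggers!")

Minion(attack=5, health=5, has_overkill=True).attack_target(Minion(attack=1, health=3))
```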

The expansion is currently set for release on December 4, and interested gamers can pre-purchase it now: for $50, you get 50 card packs, access to a new Shaman hero in the form of King Rastakhan himself, and the new “Ready to Rumble!” card back.
