Todd McFarlane’s Spawn Hires an Oscar-Nominated Screenwriter

James Gunn’s Superman gets an update. King Kong joins Monarch: Legacy of Monsters. Plus, the Dexter prequel series casts its leads. To the victor go the spoilers!

Read more…

These Are the Best Places to Find Free or Ultra-Cheap Games on PC and Phone

Games are expensive, and for some, gaming is far too expensive to be a regular hobby. That makes shopping for deals that much more of a necessity. I’m here to offer a few tried-and-tested ways to find good deals on games, including legitimate ways to find games to play for free.

Read more…

'Challengers' VFX artists show how they did that tennis ball POV scene

Challengers, the tennis movie starring Zendaya, Mike Faist and Josh O’Connor, is not the first movie you’d think of for visual effects. But the film uses them to a surprising extent: one shot in particular, a 24-second volley between two of the protagonists seen from the perspective of the ball, used extensive digital and practical effects, as VFX supervisor Brian Drewes explained on X.

According to Drewes, the live plates were shot with an Arri Alexa LF on a 30-foot technocrane over the course of five hours with stunt doubles. Twenty-three individual shots were stitched together to create the final sequence.

“Highly detailed LiDAR and photogrammetry scans of the tennis court environment were captured to help create the final models. 100+ actors and background extras were also photoscanned to populate the stands of our CG environment,” according to Drewes.

After that, CG was used to smooth camera motion and correct time-of-day changes. The stunt doubles’ faces were then replaced “with a combination of full CG heads and additional photography,” Drewes added.
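
Drewes doesn’t say which tools handled the cleanup, but “smoothing camera motion” generally comes down to filtering a jittery per-frame camera path. The sketch below is purely illustrative (the smooth_camera_path function, the nine-frame window and the synthetic ball’s-eye path are invented for this example, not anything from the Challengers pipeline), but it shows the basic idea of averaging camera positions over neighboring frames:

```python
import numpy as np

def smooth_camera_path(positions: np.ndarray, window: int = 9) -> np.ndarray:
    """Smooth a per-frame camera path with a centered moving average.

    positions: (N, 3) array of camera positions, one row per frame.
    window: odd number of frames to average over (hypothetical default).
    """
    kernel = np.ones(window) / window
    pad = window // 2
    # Pad with edge values so the first and last frames stay anchored.
    padded = np.pad(positions, ((pad, pad), (0, 0)), mode="edge")
    smoothed = np.empty_like(positions, dtype=float)
    for axis in range(positions.shape[1]):
        smoothed[:, axis] = np.convolve(padded[:, axis], kernel, mode="valid")
    return smoothed

# Hypothetical example: a jittery 120-frame path that roughly follows the ball.
t = np.linspace(0.0, 1.0, 120)
noisy_path = np.stack([t * 20.0,                        # x: travel down the court
                       np.zeros_like(t),                # y: lateral position
                       np.sin(t * np.pi) * 2.0 + 1.0],  # z: height arcing over the net
                      axis=1)
noisy_path += np.random.default_rng(0).normal(scale=0.05, size=noisy_path.shape)
smooth_path = smooth_camera_path(noisy_path, window=9)
```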

So why do all that? The sequence appears designed to convey the speed, chaos and passion of the sport, matching the movie’s overall themes. It’s also just a cool, exciting way to capture what would otherwise be a routine tennis match.

This article originally appeared on Engadget at https://www.engadget.com/challengers-vfx-artists-show-how-they-did-that-tennis-ball-pov-scene-120523596.html?src=rss

Engadget Podcast: Microsoft goes Copilot+ crazy

Microsoft is leaning even more into AI after launching a new Copilot+ AI PC initiative earlier this year. It’s a new set of standards for PCs with powerful neural processing units (NPUs), and it could be just as significant for Windows as Apple’s move towards its M-series chips. In this episode, Cherlynn and Devindra discuss Copilot+ and the potential rise of Arm-based Windows systems, and we dive into the new Surface Pro and Surface Laptop.


Listen below or subscribe on your podcast app of choice. If you’ve got suggestions or topics you’d like covered on the show, be sure to email us or drop a note in the comments! And be sure to check out our other podcast, Engadget News!

  • Microsoft announces a new chapter with Copilot+ and NPU-powered Surface Pro and Surface Laptop – 0:51

  • Scarlett Johansson vs. OpenAI is just getting started – 37:17

  • Sonos Ace headphones take aim at Apple’s AirPods Max – 42:15

  • US Department of Justice makes its first arrest for AI-generated CSAM – 45:50

  • Bloomberg Report: Humane AI seeks a buyer for $700M–$1B, but will they get it? – 47:21

  • Listener Mail: Could you port the new ARM-based Windows to your Android handheld? – 51:42

  • Working on – 53:11

  • Pop culture picks – 54:19

Hosts: Cherlynn Low and Devindra Hardawar
Producer: Ben Ellman
Music: Dale North and Terrence O’Brien

This article originally appeared on Engadget at https://www.engadget.com/engadget-podcast-microsoft-goes-copilot-crazy-113037153.html?src=rss

Linda H. Codega Tells io9 All About Their Queer Fantasy Debut Novel, Motheater

io9 has several acclaimed authors among its alumni, including Annalee Newitz, Charlie Jane Anders, Evan Narcisse, and Andrew Liptak. Another former io9-er, Linda H. Codega, will join their ranks early next year with the release of fantasy novel Motheater, and we’re thrilled to be debuting the cover and talking to the…

Read more…

OpenAI scraps controversial nondisparagement agreement with employees

OpenAI will not enforce any nondisparagement agreements former employees signed and will remove the language from its exit paperwork altogether, the company told Bloomberg. Vox recently reported that OpenAI was making departing employees choose between being able to speak out against the company and keeping the vested equity they had earned. Employees stood to lose millions if they declined to sign the agreement or later violated it. Sam Altman, OpenAI’s CEO, said he was “embarrassed” and didn’t know the provision existed, promising to have the company’s paperwork altered.

According to Bloomberg, the company notified former employees that “[r]egardless of whether [they] executed the agreement… OpenAI has not canceled, and will not cancel, any vested units.” It released them from the agreement altogether, “unless the nondisparagement provision was mutual.” At least one former employee said they had given up vested equity worth several times their family’s net worth by refusing to sign when they left; it’s unclear whether this change will restore it. The company also discussed the development with current employees, easing worries that they would have to watch everything they say to avoid losing their stock.

“We are sorry for the distress this has caused great people who have worked hard for us,” Chief Strategy Officer Jason Kwon said in a statement. “We have been working to fix this as quickly as possible. We will work even harder to be better.”

This isn’t the only controversy OpenAI has faced of late. The company recently revealed that it was disbanding the team it formed last year to help ensure humanity is protected from future AI systems powerful enough to cause our extinction. Before that, OpenAI chief scientist Ilya Sutskever, one of the team’s leads, left the company. Another team lead, Jan Leike, said in a series of tweets that “safety culture and processes have taken a backseat to shiny products” within OpenAI. In addition, Scarlett Johansson accused OpenAI of copying her voice without permission for ChatGPT’s Sky voice assistant after she turned down Altman’s request to lend her voice to the company. OpenAI denied copying the actor’s voice and said it had hired another actor long before Altman contacted Johansson.

This article originally appeared on Engadget at https://www.engadget.com/openai-scraps-controversial-nondisparagement-agreement-with-employees-043750040.html?src=rss

Robocaller behind AI Biden deepfake faces charges and hefty FCC fine

A political consultant who admitted to using a deepfake of President Joe Biden’s voice in a robocall scheme this year is facing several charges as well as a hefty fine from the Federal Communications Commission. Steve Kramer said his aim with the New Hampshire primary robocall was to warn people about the dangers of artificial intelligence, as The Hill notes.

Kramer previously worked for Dean Phillips, a long-shot Democratic presidential candidate who suspended his campaign in March. Kramer has called for “immediate action” on AI “across all regulatory bodies and platforms.”

He has now been charged with 13 felony counts of voter suppression and 13 misdemeanor counts of impersonation of a candidate. The phony Biden voice allegedly urged people not to participate in the primary and to “save your vote for the November election.” New Hampshire Attorney General John Formella, who announced the charges, said in February that the robocall reached as many as 25,000 voters.

The FCC has proposed a $6 million fine against Kramer, citing an alleged violation of the Truth in Caller ID Act as the robocall is said to have spoofed a local political consultant’s phone number. The agency also proposed a $2 million fine against Lingo Telecom, the telecom carrier that operated the phone lines, for allegedly violating caller ID authentication rules. The FCC banned AI-generated voices in robocalls soon after the Kramer incident.

“New Hampshire remains committed to ensuring that our elections remain free from unlawful interference and our investigation into this matter remains ongoing,” AG Formella said. “The Federal Communications Commission will separately be announcing an enforcement action against Mr. Kramer based on violations of federal law. I am pleased to see that our federal partners are similarly committed to protecting consumers and voters from harmful robocalls and voter suppression.”

Meanwhile, the FCC may soon require political advertisers to disclose the use of any AI in TV and radio spots. However, chairwoman Jessica Rosenworcel is not seeking to ban the use of AI-generated content in political ads. “As artificial intelligence tools become more accessible, the commission wants to make sure consumers are fully informed when the technology is used,” Rosenworcel said in a statement on Wednesday.

This article originally appeared on Engadget at https://www.engadget.com/robocaller-behind-ai-biden-deepfake-faces-charges-and-hefty-fcc-fine-201803214.html?src=rss
