The US Department of Justice arrested a Wisconsin man last week for creating and distributing AI-generated child sexual abuse material (CSAM). As far as we know, this is the first case of its kind, and the DOJ appears to be using it to establish a judicial precedent that exploitative materials are illegal even when no children were used to create them. “Put simply, CSAM generated by AI is still CSAM,” Deputy Attorney General Lisa Monaco wrote in a press release.
The DOJ says 42-year-old software engineer Steven Anderegg of Holmen, WI, used a fork of the open-source AI image generator Stable Diffusion to make the images, which he then allegedly used to try to lure an underage boy into sexual situations. That alleged grooming will likely play a central role in the eventual trial on the four counts of “producing, distributing, and possessing obscene visual depictions of minors engaged in sexually explicit conduct and transferring obscene material to a minor under the age of 16.”
The government says Anderegg’s images showed “nude or partially clothed minors lasciviously displaying or touching their genitals or engaging in sexual intercourse with men.” The DOJ claims he used specific prompts, including negative prompts (extra guidance for the AI model, telling it what not to produce) to spur the generator into making the CSAM.
Cloud-based image generators like Midjourney and DALL-E 3 have safeguards against this type of activity, but Ars Technica reports that Anderegg allegedly used Stable Diffusion 1.5, an older version with fewer built-in guardrails. Stability AI told the publication that the fork was produced by Runway ML.
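For readers unfamiliar with the mechanics the filing describes, here is a minimal, deliberately benign sketch of how both features work in practice. It uses Hugging Face’s diffusers library, which is an assumption on our part — the filing doesn’t say which tooling Anderegg actually ran. A negative_prompt steers the model away from listed concepts, and the reference Stable Diffusion 1.5 pipeline ships with a post-hoc safety checker that flags and blacks out outputs it deems NSFW:

```python
# Benign illustration only: generate an innocuous image while steering the
# model away from unwanted concepts, then inspect the built-in safety flag.
import torch
from diffusers import StableDiffusionPipeline

# The Stable Diffusion 1.5 weights Runway ML published on the Hugging Face Hub.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

result = pipe(
    prompt="a watercolor painting of a lighthouse at dawn",
    negative_prompt="blurry, low quality, watermark",  # extra guidance: what NOT to produce
    num_inference_steps=30,
)

# The reference pipeline runs a post-hoc safety checker; flagged images come
# back blacked out, with one boolean per image in this list.
print(result.nsfw_content_detected)
result.images[0].save("lighthouse.png")
```

Because that checker is just a component of the local pipeline rather than a server-side filter, forks can omit it entirely — which is the gap between cloud services and self-hosted models that this case highlights.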
According to the DOJ, Anderegg communicated online with the 15-year-old boy, describing how he used the AI model to create the images. The agency says the accused sent the teen direct messages on Instagram, including several AI images of “minors lasciviously displaying their genitals.” To its credit, Instagram reported the images to the National Center for Missing and Exploited Children (NCMEC), which alerted law enforcement.
Anderegg could face five to 70 years in prison if convicted on all four counts. He’s currently in federal custody ahead of a hearing scheduled for May 22.
The case will challenge the notion some may hold that CSAM is illegal only because real children are exploited in its creation. Although AI-generated CSAM doesn’t involve any live humans (other than the one entering the prompts), it could still normalize the material, encourage demand for it, or be used to lure children into predatory situations. That appears to be something the feds want to settle as the technology rapidly advances and grows in popularity.
“Technology may change, but our commitment to protecting children will not,” Deputy AG Monaco wrote. “The Justice Department will aggressively pursue those who produce and distribute child sexual abuse material—or CSAM—no matter how that material was created. Put simply, CSAM generated by AI is still CSAM, and we will hold accountable those who exploit AI to create obscene, abusive, and increasingly photorealistic images of children.”
This article originally appeared on Engadget at https://www.engadget.com/the-doj-makes-its-first-known-arrest-for-ai-generated-csam-201740996.html?src=rss
Super Mario Maker and its sequel are terrific games that let fans create and share their own Mario levels with ease. But it was a bit of a disappointment that Nintendo only drew from the 2D Mario games: none of the plumber’s 3D incarnations have made it to a Mario Maker title to date. So thank goodness for modders.
A pair of modders named Arthurtilly and Rovertronic have released Mario Builder 64, an open-source Super Mario 64 mod that aims to make it a cinch for players to create and share their own levels. You’ll need your own (legally obtained) Mario 64 game file and a separate piece of software to patch the mod into it. It’s even possible to run Mario Builder 64 on a real Nintendo 64 if you have a supported flashcart.
You’ll have more than 100 parts to build your levels with. The creation tool includes some custom parts from a previous mod, so you have extras like permanent powerups at your disposal. To share your creations and find those made by others, the recommended places to look are a website for Mario level modders and Rovertronic’s Discord server.
It’ll be interesting to see if ridiculous 3D kaizo-style levels start popping up, and the mod could also let speedrunners build custom training grounds to practice their strategies. Personally, I’m hoping creators build levels that require half-A presses to beat.
This article originally appeared on Engadget at https://www.engadget.com/a-super-mario-64-mod-may-be-as-close-as-we-ever-get-to-mario-maker-3d-204024562.html?src=rss