Do You Have a Constitutional Right to TikTok?

President Biden signed the TikTok ban into law on Wednesday, forcing China-based ByteDance to sell the app or face a ban in American app stores. TikTok tells Gizmodo it will fight the law in court, in a case likely to reach the Supreme Court, saying that Biden’s law “tramples” First Amendment protections. In interviews…

Read more…

Garry’s Mod faces deluge of Nintendo-related DMCA takedown notices

Facepunch Studios has announced on Steam that it’s removing 20 years’ worth of Nintendo-related workshop items for its sandbox game Garry’s Mod to comply with the Japanese company’s demands. Earlier this year, an X user with the name Brewster T. Koopa posted that a group of trolls was filing false DMCA claims against the game to get Nintendo add-ons removed and to get add-on makers to shut down. The perpetrators allegedly used a fake email address to impersonate Nintendo’s lawyers and send DMCA takedown notices. Facepunch Studios said in its new announcement that it believes the demands legitimately came from Nintendo, and that it has to respect the company’s decision and start taking down items related to its IPs.

“This is an ongoing process, as we have 20 years of uploads to go through,” the developer wrote. “If you want to help us by deleting your Nintendo related uploads and never uploading them again, that would help us a lot.”

Koopa said in a follow-up tweet that they sent an email to the company to let it know that the demands aren’t actually from Nintendo. They previously argued that the takedown notices couldn’t be from the Japanese gaming giant, because Nintendo add-ons have been around since 2005 and because the company would’ve contacted Valve, the publisher of Garry’s Mod, itself.

The announcement remains up, but Facepunch founder Garry Newman has since said that his team has received people’s emails and DMs and that the studio is investigating. “We need to take these things seriously (particularly from Nintendo), but we also can’t let people misuse DMCA takedowns,” Newman wrote. We’ve reached out to Nintendo to ask whether the takedowns Facepunch received truly came from the company, and we’ll update this post once it responds.

This article originally appeared on Engadget at https://www.engadget.com/garrys-mod-faces-deluge-of-nintendo-related-dmca-takedown-notices-123027589.html?src=rss

Hidden Google Maps Features You Should Know About

I have always used Google Maps but never cared enough to look beyond the basic features. It wasn’t until today that I made an effort to explore the app and ended up going down a rabbit hole of all the cool things it’s capable of. You might already know some of these, but you might have missed a handful of interesting…

Read more…

The Morning After: Testing the Rabbit R1's AI assistant skills

Back in January, startup Rabbit revealed its first device at CES 2024. The R1 is an adorable, vibrant orange AI machine with a camera, scroll wheel, and ambitious demos. Now, the device is being sent out to early adopters (and tech reviewers), and we’ve got some proper hands-on experience to tide you over until we’ve wrapped up a full review.

It’s definitely cute, designed by Teenage Engineering, which has put its design talents to use on the Playdate, Nothing’s most recent phones and its own music gadgets. Like all those things, it combines a retro-futuristic aesthetic with solid build quality, shiny surfaces and glass and metal accents.

Then again, the Humane AI Pin was a beautiful piece of tech too, but it was also… rubbish. The Rabbit R1 is a different device. First, it costs $199 — less than a third of the AI Pin’s $700. Humane also requires a monthly $24 subscription fee to use the thing — you don’t need a sub for the R1 at all. Immediately, that’s much better.

The category of AI assistant-centric devices is very new, however. Rabbit’s device is different to Humane’s in both hardware and features, but we know the R1 isn’t launching with all its features just yet. There are a few curiously simple tools missing, like alarms and calendar support.

Make sure you check out our first impressions here. Review incoming!

— Mat Smith

You can get these reports delivered daily direct to your inbox. Subscribe right here!

Mercedes-Benz quad-motor G-Class could be the ultimate EV off-roader

TikTok Lite axes ‘addictive as cigarettes’ reward-to-watch feature

The best ereaders for 2024

JetBlue’s in-flight entertainment system just got a watch party feature

That thing that’s been happening since Saturday is still happening. But, well, TikTok still isn’t banned. In a statement, the company said it would challenge the law in court, which could delay an eventual sale or ban.

Continue reading.

Threads is still growing. During the company’s first-quarter earnings call, Mark Zuckerberg shared the latest user numbers for Meta’s spin-off social network, saying the app “continues to be on the trajectory that I hope to see.”

Notably — but perhaps not surprisingly — Threads seems to outperform X (formerly Twitter), with analytics firm Apptopia indicating Threads has more daily users than X in the United States.

Continue reading.

The latest update to Windows 11 comes out this week and includes ads for apps in the recommended section of the Start menu. “The Recommended section of the Start menu will show some Microsoft Store apps,” the release notes say. The apps are apparently from a “small set of curated developers.” Thankfully, you can restore your previously ad-free Windows experience by going into Settings, selecting Personalization > Start and toggling off Show recommendations for tips, app promotions and more.

Continue reading.

This article originally appeared on Engadget at https://www.engadget.com/the-morning-after-testing-the-rabbit-r1s-ai-assistant-skills-111505087.html?src=rss

Adobe's new upscaling tech uses AI to sharpen video

Most new features and experiments Adobe has announced recently involve AI, like object addition and removal for Premiere Pro and text-based image generation in Photoshop. Now, the company has unveiled VideoGigaGAN, an experimental AI feature it says can upscale video by eight times without the usual artifacts like flickering or distortion, The Verge reported. 

VideoGigaGAN beats other video super resolution (VSR) methods because it avoids the artifacts and flickering usually introduced by GANs (generative adversarial networks), according to Adobe. At the same time, it adds sharpness and detail, where most other systems fail to do both of those things at once.

Of course, the system is making up detail that doesn’t exist out of whole cloth, so this wouldn’t be suitable for things like forensic video enhancement, à la CSI-style crime shows. But the detail it does add looks impressively real, like skin textures, fine hairs, swan feather details and more. 

The model builds on a large-scale image upsampler called GigaGAN, according to Adobe’s researchers. Previous VSR models have had difficulty generating rich details in their results, so Adobe married “temporal attention” (reducing artifacts that accumulate over time), feature propagation (adding detail where none exists), anti-aliasing and something called “HF shuttle” (shuttling high-frequency features) to create the final result.
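Adobe hasn’t released code, but its description follows a familiar video super resolution pattern: extract features from each frame, share information across frames so details stay consistent over time, then upsample. The sketch below is a minimal, hypothetical PyTorch illustration of just two of those pieces, a per-frame encoder plus temporal attention ahead of an 8x upsampler. The module names, shapes and scale factor are assumptions for illustration, not Adobe’s actual VideoGigaGAN architecture, and feature propagation, anti-aliasing, the HF shuttle and the GAN training loss are all omitted.

import torch
import torch.nn as nn


class TemporalAttention(nn.Module):
    """Attend over the time axis so each pixel position shares information across frames."""

    def __init__(self, channels, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, feats):
        # feats: (T, C, H, W) -> one length-T sequence per spatial location
        t, c, h, w = feats.shape
        seq = feats.permute(2, 3, 0, 1).reshape(h * w, t, c)
        out, _ = self.attn(seq, seq, seq)
        return out.reshape(h, w, t, c).permute(2, 3, 0, 1)


class ToyVideoUpscaler(nn.Module):
    """Per-frame encoder + temporal attention + 8x pixel-shuffle upsampler (a toy stand-in, not GigaGAN)."""

    def __init__(self, channels=64, scale=8):
        super().__init__()
        self.encode = nn.Sequential(nn.Conv2d(3, channels, 3, padding=1), nn.ReLU())
        self.temporal = TemporalAttention(channels)
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),  # rearranges channels into an 8x larger frame
        )

    def forward(self, frames):
        # frames: (T, 3, H, W) low-resolution clip -> (T, 3, 8H, 8W)
        feats = self.temporal(self.encode(frames))
        return self.upsample(feats)


clip = torch.rand(4, 3, 32, 32)        # four 32x32 low-resolution frames
print(ToyVideoUpscaler()(clip).shape)  # torch.Size([4, 3, 256, 256])

The real system’s value comes from the parts this sketch leaves out: the GAN-trained upsampler that hallucinates plausible texture and the propagation and anti-aliasing steps that keep those hallucinated details stable from frame to frame.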

If added to products like Premiere Pro or After Effects, it could allow video producers to make low-resolution footage look a lot better, though using AI to enhance people is a controversial practice. There’s no word yet on whether Adobe plans to do this, but plenty of companies (NVIDIA, Microsoft, Blackmagic Design and others) are working on upscalers as well.

This article originally appeared on Engadget at https://www.engadget.com/adobes-new-upscaling-tech-uses-ai-to-sharpen-video-103431709.html?src=rss

Airlines Will Now Have to Give You Cash When They Screw Up

On Wednesday, the Department of Transportation announced a final rule mandating that airlines automatically give cash refunds to passengers and inform them of their right to a refund. The policy change highlights the Biden Administration’s continued hardline support of consumer rights in the skies. The USDOT…

Read more…

Manhattan's DA wants to know why YouTube is pushing 'ghost gun' tutorials to kids

Alvin Bragg, Manhattan’s District Attorney, wants to meet with YouTube CEO Neal Mohan to discuss why the website allows the posting of videos on how to manufacture “ghost guns” and why its algorithm pushes them to underage viewers who watch video game content. Ghost guns are firearms assembled from 3D-printed parts or components purchased as kits. That means they have no serial numbers, making them nearly impossible to trace, and they can be acquired without any kind of background check.

In a letter sent to Mohan (PDF) requesting a meeting, Bragg referenced a study conducted by the Tech Transparency Project in 2023, wherein it created four test YouTube accounts and gave them the profiles of 14-year-old and 9-year-old boys. After the accounts had played at least 100 gaming videos, YouTube’s algorithm apparently started recommending them instructional videos on how to make ghost guns. It didn’t matter that they’d only watched, say, Call of Duty gameplay videos and had never interacted with any content featuring real guns. YouTube still pushed real gun content to their accounts, as well as other violence-related videos, such as those about school shootings and serial killers, even though the accounts were supposed to belong to minors. Bragg also called YouTube’s attention to the fact that there’s no way for guardians to switch off the website’s recommendations in parental controls.

Many young people being investigated for gun possession in New York City said they learned how to make ghost guns from YouTube, Bragg wrote. While the website does remove those videos when they’re flagged by gun safety groups, the DA said YouTube should be more proactive about removing them, should make sure they’re blocked from being uploaded again and should provide viewers a way to switch off recommendations, especially since the website already has a policy prohibiting videos that intend to sell firearms or instruct viewers on how to make them. YouTube told New York Daily News in a statement that it’ll “carefully review” videos the Manhattan DA shares with the company and that it remains committed to “removing any content that violates [its] policies.”

This article originally appeared on Engadget at https://www.engadget.com/manhattans-da-wants-to-know-why-youtube-is-pushing-ghost-gun-tutorials-to-kids-070219455.html?src=rss
