AMD's latest updates address 9000X desktop CPU performance issues

After arriving two weeks late, AMD’s Ryzen 9000-series desktop processors disappointed some buyers and reviewers due to lackluster performance. Now, the company has addressed those issues with several new updates. 

The biggest speed brake for Ryzen 9000 desktop CPUs was the lack of Windows 11 branch prediction optimizations. For relief, you previously needed to either wait for Windows 11 24H2 (currently in the release preview channel) or add the optional KB5041587 update. However, AMD announced that the fix is now included by default in both Windows 11 version 23H2 build 22631.44112 and the latest 24H2 builds. That should boost performance by 3 to 13 percent across various games, with the biggest gains on Zen 5-based Ryzen 9000 processors. 

On top of that, AMD released the AGESA PI 1.2.0.2 BIOS update for Ryzen 5 9600X and Ryzen 7 9700X processors. It extends the warranty on those chips to cover a TDP (max power) level of 105W, way up from the 65W launch TDP. That alone can boost speeds by up to 10 percent on AM5 motherboards, including the new X870 series, AMD said. 

It also introduced core-to-core latency optimization for Ryzen 9000 series multi-CCD (chiplet) models. Testers noticed that reads and writes sometimes took two transactions each when information was shared across cores on different CCDs. Though AMD called this a “corner case,” the latest BIOS update cuts the number of transactions in half, helping latency in that scenario. “Our lab tests suggest Metro, Starfield and Borderlands 3 can show some uplift, as well as synthetic tests like 3DMark Time Spy,” AMD wrote. 

Still on the speed theme, AMD noted that X870 and X870E motherboards are now available with support for PCIe Gen 5 graphics (i.e., the upcoming NVIDIA RTX 5000 GPUs), NVMe storage and USB4 as standard. AMD also introduced “enthusiast-class” DDR5-8000 EXPO memory support, with a 1- to 2-nanosecond latency improvement. While not for everyone, “it’s a great option for enthusiasts who want to push their systems to the limit,” AMD said. 
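For context, a memory kit’s real-world latency in nanoseconds follows directly from its data rate and CAS latency. Here’s a quick sketch of that relationship; the kit timings below are illustrative assumptions, not figures from AMD’s announcement:

```python
def cas_latency_ns(data_rate_mts: int, cl: int) -> float:
    # DDR transfers data twice per clock, so one cycle lasts
    # 2000 / data_rate nanoseconds; CAS latency is CL such cycles.
    return cl * 2000 / data_rate_mts

# Hypothetical kit timings for comparison (not AMD's figures):
print(cas_latency_ns(6000, 36))  # 12.0 ns
print(cas_latency_ns(8000, 38))  # 9.5 ns
```

The takeaway: a higher data rate only helps if CAS latency doesn’t grow proportionally, which is why a well-tuned DDR5-8000 EXPO kit can shave a nanosecond or two off a slower one.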

This article originally appeared on Engadget at https://www.engadget.com/computing/amds-latest-updates-address-9000x-desktop-cpu-performance-issues-130038015.html?src=rss

Tesla's Full Self-Driving is now available for some Cybertrucks

Buyers who paid at least $93,990 to be among the first to own (and beta test) Tesla’s Cybertruck are finally getting a key promised feature: Full Self-Driving (FSD). Several people on the Cybertruck Owners Club forum — including an Angeleno who posted a video — say that it has finally arrived in early access for select users, Electrek reported. 

After Tesla promised that FSD would arrive on Cybertrucks in September, the supervised version 12.5.5 (the latest available) is now shipping, but only to users in the early access program. That means the feature (included in the Tesla Cybertruck Foundation package) won’t be available to most buyers for at least another month, based on Tesla’s previous FSD rollout history.

FSD worked smoothly for the short time it was shown in the video from Cybertruck Owners Club forum user espresso-drumbeat. It guided the vehicle through an urban area, then onto a freeway ramp before merging onto the I-5 toward LA, all in relatively light evening traffic.

According to the update description, FSD (Supervised) v12 includes vision-based attention monitoring with sunglasses and merges city and highway into a single software stack. In other words, it’s the first version to fully manage driving using end-to-end AI. 

Cybertruck deliveries first started 10 months ago, so FSD has been a long time coming. Recent testing by the independent automotive testing group AMCI determined that Tesla’s FSD can only go 13 miles on average before requiring human intervention. 
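AMCI’s 13-miles-per-intervention average is easiest to grasp in trip terms. A trivial back-of-the-envelope sketch (the 41-mile trip below is a made-up example, not from AMCI’s testing):

```python
def expected_interventions(trip_miles: float, miles_per_intervention: float = 13.0) -> float:
    # 13 miles per intervention is AMCI's reported average for Tesla FSD.
    return trip_miles / miles_per_intervention

# A hypothetical 41-mile round-trip commute:
print(round(expected_interventions(41.0), 1))  # about 3 interventions
```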

Elon Musk recently promised unsupervised self-driving by the end of 2025, but he has been making that same claim for nearly 10 years and it’s still not here. There’s more pressure now than ever, though, as the company is set to reveal its FSD-dependent robotaxi product on October 10th. 

This article originally appeared on Engadget at https://www.engadget.com/transportation/evs/teslas-full-self-driving-is-now-available-for-some-cybertrucks-120055932.html?src=rss

Apple’s rumored smart display may arrive in 2025 running new homeOS

Apple is planning to debut a new operating system called homeOS with its long-rumored smart displays, the first of which is expected to arrive as soon as 2025, according to Bloomberg’s Mark Gurman. Reports of a HomePod-like device with a display have been swirling for over a year, and Gurman said just this summer that Apple is working on a tabletop smart display equipped with a robotic arm that can tilt and rotate the screen for better viewing. In his latest report, Gurman says there are two versions in the works: a low-end display that will offer the basics, like FaceTime and smart home controls, and the high-end robotic variant that’ll cost upwards of $1,000.

We’ll reportedly see the cheaper version first — possibly next year — followed by the high-end display. Gurman previously said the robotic smart display could be released in 2026 at the earliest. You won’t have to wait for the premium model to get a taste of Apple’s vision for home AI, though. According to Gurman, Apple Intelligence will be a key part of the experience for both devices. The new homeOS will be based on Apple TV’s tvOS, he notes.

This article originally appeared on Engadget at https://www.engadget.com/home/smart-home/apples-rumored-smart-display-may-arrive-in-2025-running-new-homeos-212401853.html?src=rss

California Gov. Newsom vetoes bill SB 1047 that aims to prevent AI disasters

California Gov. Gavin Newsom has vetoed bill SB 1047, which aims to prevent bad actors from using AI to cause “critical harm” to humans. The California state assembly passed the legislation by a margin of 41-9 on August 28, but several organizations including the Chamber of Commerce had urged Newsom to veto the bill. In his veto message on Sept. 29, Newsom said the bill is “well-intentioned” but “does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions – so long as a large system deploys it.” 

SB 1047 would have made the developers of AI models responsible for adopting safety protocols to prevent catastrophic uses of their technology. That includes preventive measures such as testing and outside risk assessment, as well as an “emergency stop” capable of completely shutting down the AI model. A first violation would have cost a minimum of $10 million, with $30 million for subsequent infractions. However, the bill was revised to eliminate the state attorney general’s ability to sue AI companies over negligent practices if a catastrophic event had not occurred; companies would only be subject to injunctive relief, and could be sued only if their model caused critical harm.

The law would have applied to AI models that cost at least $100 million and use 10^26 FLOPS (floating-point operations) during training. It also would have covered derivative projects in instances where a third party invested $10 million or more in developing or modifying the original model. Any company doing business in California would have been subject to the rules if it met the other requirements. Addressing the bill’s focus on large-scale systems, Newsom said, “I do not believe this is the best approach to protecting the public from real threats posed by the technology.” The veto message adds:
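As a rough sketch, the bill’s two coverage thresholds can be expressed as a simple predicate. The 6·N·D compute rule of thumb and the example figures below are illustrative assumptions, not language from the bill:

```python
def covered_by_sb1047(training_flops: float, training_cost_usd: float) -> bool:
    # The thresholds described above: 10^26 FLOPS of training compute
    # and at least $100 million in cost.
    return training_flops >= 1e26 and training_cost_usd >= 100e6

def approx_training_flops(n_params: float, n_tokens: float) -> float:
    # Common 6*N*D estimate for dense-transformer training compute --
    # a rule of thumb for illustration, not part of the bill's text.
    return 6 * n_params * n_tokens

# Hypothetical frontier-scale run: 1.8T params, 15T tokens, $200M cost.
flops = approx_training_flops(1.8e12, 15e12)  # ~1.6e26
print(covered_by_sb1047(flops, 200e6))  # True
```

Under these assumptions, only a handful of frontier-scale training runs would have cleared both bars — which is exactly the narrow focus Newsom’s veto message criticizes.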

By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 – at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.

The earlier version of SB 1047 would have created a new department called the Frontier Model Division to oversee and enforce the rules. Instead, the bill was altered ahead of a committee vote to place governance in the hands of a Board of Frontier Models within the Government Operations Agency. The nine members would have been appointed by the state’s governor and legislature.

The bill faced a complicated path to the final vote. SB 1047 was authored by California State Sen. Scott Wiener, who told TechCrunch: “We have a history with technology of waiting for harms to happen, and then wringing our hands. Let’s not wait for something bad to happen. Let’s just get out ahead of it.” Notable AI researchers Geoffrey Hinton and Yoshua Bengio backed the legislation, as did the Center for AI Safety, which has been raising the alarm about AI’s risks over the past year.

“Let me be clear – I agree with the author – we cannot afford to wait for a major catastrophe to occur before taking action to protect the public,” Newsom said in the veto message. The statement continues:

California will not abandon its responsibility. Safety protocols must be adopted. Proactive guardrails should be implemented, and severe consequences for bad actors must be clear and enforceable. I do not agree, however, that to keep the public safe, we must settle for a solution that is not informed by an empirical trajectory analysis of AI systems and capabilities. Ultimately, any framework for effectively regulating AI needs to keep pace with the technology itself.

SB 1047 drew heavy-hitting opposition from across the tech space. Researcher Fei-Fei Li critiqued the bill, as did Meta Chief AI Scientist Yann LeCun, for limiting the potential to explore new uses of AI. The trade group repping tech giants such as Amazon, Apple and Google said SB 1047 would limit new developments in the state’s tech sector. Venture capital firm Andreessen Horowitz and several startups also questioned whether the bill placed unnecessary financial burdens on AI innovators. Anthropic and other opponents of the original bill pushed for amendments that were adopted in the version of SB 1047 that passed California’s Appropriations Committee on August 15. 

This article originally appeared on Engadget at https://www.engadget.com/ai/california-gov-newsom-vetoes-bill-sb-1047-that-aims-to-prevent-ai-disasters-220826827.html?src=rss

What we’re listening to: Harlequin (or LG 6.5), Rack and more

In What We’re Listening To, Engadget writers and editors discuss some of the recent music releases we’ve had on repeat. This installment has everything from jazz standards to The Jesus Lizard.

I wasn’t even a minute into Harlequin before I had the realization, Oh, I am going to become so annoying in my love for this. Unfortunately for everyone in my life (and doubly so because I’m singing along), I’ve had it blasting all weekend since the surprise drop on Friday. Gaga is a powerhouse, and as much as I adore her take on pop, I’m always blown away when I hear her do jazz. And Harlequin is brimming with it. 

Harlequin is a companion album to a soon-to-be-released movie (Joker: Folie à Deux) and almost entirely comprises cover songs — a combination that might typically put me off. But Gaga’s breezy versions of classics like “World on a String” and “Smile” are almost chilling. Her energy in tracks like “Gonna Build a Mountain” is through the roof. I could have done without “Oh, When the Saints,” but I’m really just nit-picking now. There are only two original songs on the album and they are completely different beasts, each impactful in its own way. “Happy Mistake” is a clear standout, and I’ll be softly weeping to that one for years to come.

On the exact opposite end of the spectrum, I’ve been really into punk band Babe Haven’s most recent album, Nuisance, lately. It’s 25-ish minutes of queer femme rage and I can’t get enough of it. Check it out on Bandcamp.

— Cheyenne MacDonald, Weekend Editor

Even laudatory reviews of comeback albums lean on expectations tempered with preemptive apology or pity praise. A comparison to headier days of musical urgency is inevitable; it stings for the same reasons as hearing “you look great for your age.” I wish there were some way to take stock of Rack without that baggage, because The Jesus Lizard doesn’t merely sound better than a band that took three decades off has any right to; it simply does not sound as though time has passed at all.

Rack broods with baffling inconspicuousness amid their oeuvre. Sure, “What If?” doesn’t reach the slash and sprawl of earlier meanderings like “Rodeo in Joliet,” but “Lord Godiva” glides on the most Duane Denison of Duane Denison riffs, lightning and crude oil. The manic physicality of David Yow’s voice is unaltered — neither more harried after 60+ years of swinging at ghosts, nor attenuated by the effort. 

So many bands seemingly frozen in amber reemerge denuded, as though covering themselves. They’d be frantically recapturing their glory days, if they had the energy to do anything frantic anymore. Rack, through sheer ferocity, is instead a band continuing to do exactly what it always has, just as well as it always has, and sounding really fucking cool doing it.

— Avery Ellis, Deputy Editor, Reports

There’s a part of me that hates keeping up with pop music, and that’s the part of me that cringes when I realize the last few albums I’ve listened to have been the ones by pop princesses Ariana Grande, Billie Eilish, Taylor Swift and more. That’s also the part of me that resisted listening to Sabrina Carpenter’s latest album for months (and probably the part of me that refused to watch the incredible Schitt’s Creek until this year).

I say all that only to explain why I’m so late to appreciate the goodness that is Short n’ Sweet. And the non-self-judgy part of me has unabashedly loved Carpenter’s new music and been asking all my friends if they’ve listened to her songs. When I talked to my various friend groups about her, what became clear is that there’s something for everyone, regardless of the variety in our tastes.

I’m a fan of R&B, hip hop and basically anything I can dance or sing to. The tracks “bet u wanna,” “Taste” and “Feather” have become highly repeated items on my playlist and yes, I did go back into her older discography for some of those titles. However, my current absolute favorite is “Espresso.” It’s got a catchy hook, clever lyrics and a groovy beat that delicately straddles the line between upbeat and lowkey. I love the wordplay and how, when woven with the rhythm and melody, it initially sounded to me like Carpenter was singing in a different language. And as someone who works in tech and is occasionally a gamer, I especially adored the use of the words “up down left right,” “switch” and Nintendo. Truly, rhyming “espresso” with “Nintendo” wasn’t something I would have expected to work, but work it did.

But back to the point I was making earlier: Even if that sort of chill dance club vibe isn’t your thing, there’s plenty in Short n’ Sweet that might appeal to you. I wasn’t as huge a fan of “Please Please Please,” for example, but I know friends who love it. And while “Bed Chem” and “Good Graces” aren’t hitting my feels the same way “Espresso” is, those two are among her most-played songs on Spotify. I’m also starting to warm up to “Juno.”

All that is to say, we all have different tastes. Maybe you’re more of a Chappell Roan fan. I like some of her latest tracks too, just not as much as I’ve enjoyed Carpenter’s. I also really enjoy the brilliance that is “Die With a Smile” by Bruno Mars and Lady Gaga, which is something I’ll be adding to my karaoke duet repertoire, but am already playing less frequently nowadays. If you have a preference for music from the likes of Ariana Grande, NewJeans and Doja Cat, you’ll probably have a good time with Sabrina Carpenter. And since I’m so late to the party, you probably have already.

— Cherlynn Low, Deputy Editor, Reviews

This article originally appeared on Engadget at https://www.engadget.com/entertainment/music/what-were-listening-to-harlequin-or-lg-65-rack-and-more-003037241.html?src=rss

Spotify confirms it’s having service issues and is working on a fix

It’s not just you — Spotify has confirmed it’s experiencing problems that have made the app and web player temporarily unusable for some people starting late Sunday morning. “We’re aware of some issues right now and are checking them out!” the Spotify Status account posted on X. Spotify users on social media have reported a variety of issues, from songs repeatedly pausing on them to being locked out of the streaming platform entirely. 

The problems spiked a little before 11AM ET, per Downdetector, and users were still encountering issues as of 1PM. At the time of writing, the web player is opening intermittently for me but keeps cutting to a “request timeout” screen after a few minutes. 

This article originally appeared on Engadget at https://www.engadget.com/entertainment/streaming/spotify-confirms-its-having-service-issues-and-is-working-on-a-fix-164159110.html?src=rss

YouTube blocks songs from artists including Adele and Green Day amid licensing negotiations

Songs from popular artists have begun to disappear from YouTube as the platform’s deal with the performing rights organization SESAC (Society of European Stage Authors and Composers) approaches its expiration date. As reported by Variety, certain songs by Adele, Green Day, Bob Dylan, R.E.M., Burna Boy and other artists have been blocked in the US, though their entire catalogs aren’t necessarily affected. Videos that have been pulled, like Adele’s “Rolling in the Deep,” now just show a black screen with the message: “This video contains content from SESAC. It is not available in your country.”

In a statement to Engadget, a YouTube spokesperson said the platform has been in talks with SESAC to renew the deal, but “despite our best efforts, we were unable to reach an equitable agreement before its expiration. We take copyright very seriously and as a result, content represented by SESAC is no longer available on YouTube in the US. We are in active conversations with SESAC and are hoping to reach a new deal as soon as possible.” According to a source that spoke to Variety, however, the deal hasn’t even expired yet — it’ll reportedly terminate sometime next week — and the move on YouTube’s part may be a negotiation tactic. SESAC has not yet released a statement.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/youtube/youtube-blocks-songs-from-artists-including-adele-and-green-day-amid-licensing-negotiations-151741653.html?src=rss