30 Tons Of Explosive Chemicals Vanish In Train Shipment, Sparking Investigation

Approximately 60,000 pounds of ammonium nitrate pellets are believed to have fallen onto the tracks in small quantities throughout the two-week trip.

Indiana Jones and the Dial of Destiny Finally Reveals a Bit About the Actual Dial

Right now, at this very moment, you can buy tickets to see a new Indiana Jones movie. And that’s a very exciting thing. Sure, Indiana Jones and the Dial of Destiny doesn’t open for another five weeks, but with tickets on sale, the anticipation is starting to get very, very real, no matter what early reviews say.

Wes Anderson's Star-Studded Sci-Fi Film Asteroid City Looks Extremely Wes Anderson-y

Three new Asteroid City clips have been released, and each one shows Wes Anderson at his Wes Anderson-est. The director is treading new genre ground with a sci-fi film, and the results are strange and evocative (was Isle of Dogs sci-fi or fantasy? Who’s to say; genres are fake marketing tools anyway). Close, centered…

'Verified' Twitter Accounts Spread Pentagon Explosion Hoax

Breaking news: The U.S. Pentagon is NOT under attack. But that didn’t stop false reports of an explosion from circulating widely online along with a seemingly AI-generated image of billowing black smoke near the Department of Defense headquarters on Monday.

The first all-electric Escalade joins Cadillac’s EV lineup later this year

Cadillac confirmed today that the first all-electric Escalade will arrive “later this year.” However, the automaker didn’t reveal any details about the Escalade IQ, a name first trademarked in 2021. The new model’s “IQ” branding aligns with the Celestiq luxury sedan and Lyriq mid-sized SUV.

Earlier this year, Cadillac VP Rory Harvey said the company would reveal three new EVs in 2023. If you add that to the company’s previous comments to Car and Driver, stating all three will arrive for the same model year, we can assume the Escalade IQ will be a 2024 model. It is also expected to use GM’s Ultium battery tech.

We’ll have to wait until later this year to learn more about the first Escalade EV. But as for its mid-sized counterpart, Engadget’s Roberto Baldwin found the Lyriq ($60,000 and up with over 300 miles of range) to have “the fit and finish you’d expect” from Cadillac with “a polished ride and almost eerily quiet interior.”

This article originally appeared on Engadget at https://www.engadget.com/the-first-all-electric-escalade-joins-cadillacs-ev-lineup-later-this-year-160017592.html?src=rss

Andy Cohen On The Solo Parenting Moment That Brought Him To Tears

The “Watch What Happens Live” host is the father of two children, Ben and Lucy.

Novo Nordisk Weight Loss Pill Works as Well as Wegovy, Trial Data Shows

An effective pill for obesity treatment looks to be within reach. This week, Novo Nordisk announced the results of a Phase III clinical trial testing out an oral version of its in-demand drug semaglutide. The once-daily pill not only substantially outperformed a placebo but did as well as the injectable form of the…

Ray Stevenson, Star of Thor, RRR, Punisher, and Star Wars, Has Died

Ray Stevenson was having a moment. Mere weeks ago, the talented Irish actor was revealed to be starring as a new character in the upcoming Star Wars show, Ahsoka. Just months earlier, RRR, a film in which he plays the despicable villain, had been a box office smash and taken home an Oscar. The actor was…

TikTok Users Can Apply to Make $1,000 to Watch Trending Content

Watching endless TikTok videos will become a lucrative pastime for the three social media users who land a job that will pay them to continuously scroll through the app. Ubiquitous, an influencer marketing agency, announced it will pay $100 per hour for successful applicants to scroll through TikTok for 10 hours…

Meta’s open-source speech AI recognizes over 4,000 spoken languages

Meta has created an AI language model that (in a refreshing change of pace) isn’t a ChatGPT clone. The company’s Massively Multilingual Speech (MMS) project can recognize over 4,000 spoken languages and produce speech (text-to-speech) in over 1,100. Like most of its other publicly announced AI projects, Meta is open-sourcing MMS today to help preserve language diversity and encourage researchers to build on its foundation. “Today, we are publicly sharing our models and code so that others in the research community can build upon our work,” the company wrote. “Through this work, we hope to make a small contribution to preserve the incredible language diversity of the world.”

Speech recognition and text-to-speech models typically require training on thousands of hours of audio with accompanying transcription labels. (Labels are crucial to machine learning, allowing the algorithms to correctly categorize and “understand” the data.) But for languages that aren’t widely used in industrialized nations — many of which are in danger of disappearing in the coming decades — “this data simply does not exist,” as Meta puts it.

Meta used an unconventional approach to collecting audio data: tapping into audio recordings of translated religious texts. “We turned to religious texts, such as the Bible, that have been translated in many different languages and whose translations have been widely studied for text-based language translation research,” the company said. “These translations have publicly available audio recordings of people reading these texts in different languages.” Incorporating the unlabeled recordings of the Bible and similar texts, Meta’s researchers increased the model’s available languages to over 4,000.

If you’re like me, that approach may raise your eyebrows at first glance, as it sounds like a recipe for an AI model heavily biased toward Christian worldviews. But Meta says that isn’t the case. “While the content of the audio recordings is religious, our analysis shows that this does not bias the model to produce more religious language,” Meta wrote. “We believe this is because we use a connectionist temporal classification (CTC) approach, which is far more constrained compared with large language models (LLMs) or sequence-to-sequence models for speech recognition.” And although most of the religious recordings were read by male speakers, Meta says that didn’t introduce a male bias either: the model performs equally well on female and male voices.

After training an alignment model to make the data more usable, Meta used wav2vec 2.0, the company’s “self-supervised speech representation learning” model, which can train on unlabeled data. Combining unconventional data sources and a self-supervised speech model led to impressive outcomes: “Our results show that the Massively Multilingual Speech models perform well compared with existing models and cover 10 times as many languages.” Specifically, Meta compared MMS to OpenAI’s Whisper, and the comparison favored MMS: “We found that models trained on the Massively Multilingual Speech data achieve half the word error rate, but Massively Multilingual Speech covers 11 times more languages.”
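Word error rate, the metric behind that "half the word error rate" claim, is the word-level edit distance (substitutions + insertions + deletions) between a model's transcript and a reference transcript, divided by the reference length. A self-contained sketch of the standard computation (illustrative only, not Meta's evaluation code):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / reference word count,
    computed via Levenshtein distance over whitespace-split words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution ("the" -> "a") out of six reference words: WER = 1/6
print(word_error_rate("the cat sat on the mat", "the cat sat on a mat"))
```

So "half the word error rate" means MMS made roughly half as many of these word-level mistakes as Whisper on Meta's benchmark, while covering far more languages.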

Meta cautions that its new models aren’t perfect. “For example, there is some risk that the speech-to-text model may mistranscribe select words or phrases,” the company wrote. “Depending on the output, this could result in offensive and/or inaccurate language. We continue to believe that collaboration across the AI community is critical to the responsible development of AI technologies.”

Now that Meta has released MMS for open-source research, it hopes it can reverse the trend of technology whittling the world’s languages down to the 100 or fewer most often supported by Big Tech. It sees a world where assistive technology, TTS and even VR / AR tech allow everyone to speak and learn in their native tongues. It said, “We envision a world where technology has the opposite effect, encouraging people to keep their languages alive since they can access information and use technology by speaking in their preferred language.”

This article originally appeared on Engadget at https://www.engadget.com/metas-open-source-speech-ai-recognizes-over-4000-spoken-languages-161508200.html?src=rss