Sony’s WH-1000XM5 headphones are some of the most popular on the market, thanks to their improved sound quality, comfortable fit and highly effective active noise cancellation (ANC). If you’ve been looking to buy a pair, now is a good time to act: they’re currently on sale at Amazon in black, midnight blue and silver for $328, a solid 18 percent off the list price.
The WH-1000XM5 scored an excellent 95 in our Engadget review, improving in nearly every way on our previous favorite headphones, the WH-1000XM4. Perhaps the biggest gain is in fit and comfort, owing to better weight distribution, synthetic leather ear cups and a slightly lighter build.
Sound quality also went up, thanks to new 30mm carbon fiber drivers that deliver punchier bass. We also noted more clarity, which helps you hear fine detail, along with improved depth that makes music more immersive. And Sony’s DSEE Extreme processing recovers detail lost to compression without introducing noticeable artifacts.
The ANC is equally impressive. With double the number of noise-canceling microphones found on the M4, along with a new dedicated V1 chip, the M5 does a better job of minimizing background noise. It also offers superior call quality to its predecessor. And you get 30 hours of listening time with ANC enabled, enough for the longest of flights.
The main drawbacks of the WH-1000XM5 compared to the previous model are that they no longer fold up and they lack the granular ANC adjustment found on rivals like Bose’s QuietComfort Ultra. The other issue is the $400 list price, but at $328, they’re a solid deal, and that price applies to all the main colorways.
If there’s one thing we can all agree upon, it’s that the 21st century’s captains of industry are trying to shoehorn AI into every corner of our world. But for all the ways AI will be shoved into our faces without proving very useful, it might have at least one genuinely valuable purpose: dramatically speeding up the often decades-long process of designing, finding and testing new drugs.
Risk mitigation isn’t a sexy notion, but it’s worth understanding how common it is for a new drug project to fail. To set the scene, consider that each drug project takes between three and five years just to form a hypothesis strong enough to start tests in a laboratory. A 2022 study from Professor Duxin Sun found that 90 percent of clinical drug development fails, with each project costing more than $2 billion. That figure doesn’t even include compounds found to be unworkable at the preclinical stage. Put simply, a 90 percent failure rate means roughly nine failed projects for every success, so each approved drug has to prop up at least $18 billion in waste generated by its unsuccessful siblings. That all but guarantees that less lucrative cures for rarer conditions aren’t given as much focus as they may need.
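To see where that $18 billion comes from, here’s the back-of-the-envelope arithmetic as a quick sketch (these are the study’s round figures, not precise industry accounting):

```python
# Back-of-the-envelope: sunk cost of failed programs carried by each approved drug.
failure_rate = 0.90        # ~90% of clinical drug development fails (Sun, 2022)
cost_per_project = 2e9     # each project costs more than $2 billion

# At a 90% failure rate there are nine failures for every one success.
failures_per_success = failure_rate / (1 - failure_rate)
sunk_cost = failures_per_success * cost_per_project

print(f"{failures_per_success:.0f} failures per success -> at least ${sunk_cost / 1e9:.0f}B in waste")
# Output: 9 failures per success -> at least $18B in waste
```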
Dr. Nicola Richmond is VP of AI at Benevolent, a biotech company using AI in its drug discovery process. She explained that the classical approach tasks researchers with finding, for example, a misbehaving protein (the cause of a disease) and then finding a molecule that could make it behave. Once they’ve found one, they need to get that molecule into a form a patient can take, and then test if it’s both safe and effective. The journey to clinical trials on a living human patient takes years, and it’s often only then that researchers find out that what worked in theory does not work in practice.
The current process takes “more than a decade and multiple billions of dollars of research investment for every drug approved,” said Dr. Chris Gibson, co-founder of Recursion, another company in the AI drug discovery space. He says AI’s great skill may be to dodge the misses, helping researchers avoid spending too long running down blind alleys. A software platform that can churn through hundreds of options at a time can, in Gibson’s words, “fail faster and earlier so you can move on to other targets.”
CellProfiler / Carpenter-Singh laboratory at the Broad Institute
Dr. Anne E. Carpenter is the founder of the Carpenter-Singh laboratory at the Broad Institute of MIT and Harvard. She has spent more than a decade developing techniques in Cell Painting, a way of highlighting elements in cells with dyes to make them readable by a computer. She is also the co-developer of CellProfiler, a platform that lets researchers use AI to scrub through vast troves of images of those dyed cells. Combined, this work makes it easy for a machine to see how cells change when they are affected by disease or a treatment. And by looking at every part of the cell holistically (a discipline known as “omics”), there are greater opportunities for making the sort of connections that AI systems excel at.
Using pictures as a way of identifying potential cures seems a little left-field, since how things look doesn’t always reflect how things actually are, right? But Carpenter said humans have always made subconscious judgments about medical status from sight alone. Most people, she explained, can conclude that someone has a chromosomal condition just by looking at their face, and professional clinicians can identify a number of disorders by sight alone purely as a consequence of their experience. She added that if you took a picture of everyone’s face in a given population, a computer would be able to identify patterns and sort them based on common features.
This logic applies to pictures of cells, where it’s possible for a digital pathologist to compare images from healthy and diseased samples. If a human can do it, then it should be faster and easier to employ a computer to spot these differences at scale, so long as it’s accurate. “You allow this data to self-assemble into groups and now [you’re] starting to see patterns,” she explained. “When we treat [cells] with 100,000 different compounds, one by one, we can say ‘here’s two chemicals that look really similar to each other.’” And that similarity isn’t just coincidence; it appears to be indicative of how the compounds behave.
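To make the idea concrete, here’s a minimal, purely illustrative sketch of that kind of profile matching. It assumes each compound has already been reduced to a numeric feature vector, as a tool like CellProfiler can produce; the compound names and values below are invented:

```python
# A toy version of morphological profile matching: each compound is a feature
# vector summarizing how treated cells look (shape, texture, intensity, etc.).
# Names and values here are hypothetical, not real CellProfiler output.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

profiles = {
    "compound_A": [0.8, 0.1, 0.3, 0.7],
    "compound_B": [0.9, 0.2, 0.2, 0.6],  # looks a lot like compound_A
    "compound_C": [0.1, 0.9, 0.8, 0.1],  # a very different morphology
}

names = list(profiles)
sim = cosine_similarity(np.array([profiles[n] for n in names]))

# Compounds whose profiles "look really similar" may act on the same
# biological process, even if they were never intended for the same use.
pairs = [(sim[i, j], names[i], names[j])
         for i in range(len(names)) for j in range(i + 1, len(names))]
score, a, b = max(pairs)
print(f"Most similar pair: {a} and {b} (cosine similarity {score:.2f})")
```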
Carpenter offered one example: if two different compounds produce similar effects in a cell, they could, by extension, be used to treat the same condition. If so, it may be that one of the two, which may not have been intended for this purpose, has fewer harmful side effects. Then there’s the potential benefit of being able to identify something that we didn’t know was affected by disease. “It allows us to say, ‘hey, there’s this cluster of six genes, five of which are really well known to be part of this pathway, but the sixth one, we didn’t know what it did, but now we have a strong clue it’s involved in the same biological process,’” she said. “Maybe those other five genes, for whatever reason, aren’t great direct targets themselves, maybe the chemicals don’t bind, but the sixth one [could be] really great for that.”
FatCamera via Getty Images
In this context, the startups using AI in their drug discovery processes are hoping that they can find the diamonds hiding in plain sight. Dr. Richmond said that Benevolent’s approach is for the team to pick a disease of interest and then formulate a biological question around it. So, at the start of one project, the team might wonder if there are ways to treat ALS by enhancing, or fixing, the way a cell’s own housekeeping system works. (To be clear, this is a purely hypothetical example supplied by Dr. Richmond.)
That question is then run through Benevolent’s AI models, which pull together data from a wide variety of sources and produce a ranked list of potential answers, which can include novel compounds or existing drugs that could be adapted to suit. The results then go to a researcher, who decides what weight, if any, to give the findings. Dr. Richmond added that the model has to provide evidence from existing literature or sources to support its suggestions, even when its picks are out of left field, and that, at all times, a human has the final say on which of its results should be pursued and how vigorously.
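As a rough picture of that triage step, here’s a hypothetical sketch (not Benevolent’s actual pipeline or API; every name, score and citation is invented) of what ranking with an evidence requirement might look like:

```python
# Hypothetical sketch of the triage described above: model suggestions are
# ranked by confidence, but only surfaced if they carry supporting evidence.
# Names, scores and citations are all invented for illustration.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    score: float          # model confidence that this answers the question
    evidence: list[str]   # literature the model cites in support

candidates = [
    Candidate("novel-compound-17", 0.92, ["doi:10.0000/hypothetical-1"]),
    Candidate("repurposed-drug-3", 0.88, []),   # no sources: held back
    Candidate("novel-compound-41", 0.75, ["doi:10.0000/hypothetical-2"]),
]

# Rank by score, keep only evidence-backed picks; a human researcher still
# makes the final call on anything that survives this filter.
shortlist = sorted((c for c in candidates if c.evidence), key=lambda c: -c.score)
for c in shortlist:
    print(c.name, c.score, c.evidence)
```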
It’s a similar situation at Recursion, with Dr. Gibson claiming that its model is now capable of predicting “how any drug will interact with any disease without having to physically test it.” The model has now generated around three trillion predictions connecting potential problems to potential solutions, based on the data it has already absorbed and simulated. Gibson said that the process at the company now resembles a web search: researchers sit down at a terminal, “type in a gene associated with breast cancer and [the system] populates all the other genes and compounds that [it believes are] related.”
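One way to picture that “web search,” purely as an illustration, is a nearest-neighbor lookup over a shared embedding space of genes and compounds. The entities and vectors below are made up, and Recursion’s actual models are vastly larger and not public:

```python
# Toy illustration of the "web search" workflow: embed genes and compounds in
# a shared vector space, then return the nearest neighbors of a query gene.
import numpy as np

embeddings = {
    "GENE_QUERY":  np.array([0.90, 0.20, 0.10]),  # hypothetical query gene
    "GENE_X":      np.array([0.85, 0.25, 0.10]),
    "COMPOUND_17": np.array([0.80, 0.30, 0.20]),
    "COMPOUND_99": np.array([0.10, 0.10, 0.95]),
}

def nearest(query: str, k: int = 3) -> list[tuple[str, float]]:
    q = embeddings[query]
    scores = {
        name: float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
        for name, v in embeddings.items() if name != query
    }
    return sorted(scores.items(), key=lambda kv: -kv[1])[:k]

# "Type in a gene ... and the system populates related genes and compounds."
for name, score in nearest("GENE_QUERY"):
    print(f"{name}: {score:.3f}")
```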
“What gets exciting,” said Dr. Gibson, “is when [we] see a gene nobody has ever heard of in the list, which feels like novel biology because the world has no idea it exists.” Once a target has been identified and the findings checked by a human, the data will be passed to Recursion’s in-house scientific laboratory. Here, researchers will run initial experiments to see if what was found in the simulation can be replicated in the real world. Dr. Gibson said that Recursion’s wet lab, which uses large-scale automation, is capable of running more than two million experiments in a working week.
“About six weeks later, with very little human intervention, we’ll get the results,” said Dr. Gibson, and if they’re successful, it’s then that the team will “really start investing.” Until this point, those results have cost the company “very little money and time to get.” The promise is that, rather than a three-year preclinical phase, the whole process can be crunched down to a few database searches, some oversight and a few weeks of ex vivo testing to confirm whether the system’s hunches are worth a real effort to interrogate. Dr. Gibson said Recursion believes it has taken a “year’s worth of animal model work and [compressed] it, in many cases, to two months.”
Of course, there is no concrete success story yet, no wonder cure that any company in this space can point to as validation of the approach. But Recursion can cite one real-world example of how close its platform came to matching the results of a critical study. In April 2020, Recursion ran the COVID-19 sequence through its system to look for potential treatments, examining both FDA-approved drugs and candidates in late-stage clinical trials. The system produced a list of nine potential candidates warranting further analysis, eight of which would later be proved correct. It also predicted that hydroxychloroquine and ivermectin, both much-ballyhooed in the earliest days of the pandemic, would flop.
And there are AI-informed drugs undergoing real-world clinical trials right now. Recursion points to five projects currently finishing phase one (tests in healthy volunteers) or entering phase two (trials in people with the rare diseases in question). Benevolent has started a phase one trial of BEN-8744, a treatment for ulcerative colitis that may help with other inflammatory bowel disorders. Notably, BEN-8744 inhibits a target with no prior associations in the existing research which, if the trial succeeds, will add weight to the idea that AIs can spot connections humans have missed. Of course, we can’t draw any conclusions until at least early next year, when the results of those initial tests are due.
Yuichiro Chino via Getty Images
There are plenty of unanswered questions, including how much we should rely on AI as the sole arbiter of the drug discovery pipeline. There are also questions about the quality of the training data and the biases it may carry. Dr. Richmond highlighted the issue of bias in genetic data sources, both in terms of the homogeneity of cell cultures and how those tests are carried out. Similarly, Dr. Carpenter said the results of her most recent project, the publicly available JUMP-Cell Painting project, were based on cells from a single participant. “We picked it with good reason, but it’s still one human and one cell type from that one human.” In an ideal world, she’d have a far broader range of participants and cell types, but the limiting factors right now are funding and time, or rather, the lack of both.
But, for now, all we can do is await the results of these early trials and hope they bear fruit. Like every other potential application of AI, its value will rest largely on its ability to improve the quality of the work, or, more likely, to improve the bottom line for the business in question. If AI can make the savings attractive enough, however, then maybe the diseases unlikely to recoup their investment under the current system will stand a chance. It could all collapse in a puff of hype, or it may offer real hope to families struggling for help while dealing with a rare disorder.
This article originally appeared on Engadget at https://www.engadget.com/ai-is-coming-for-big-pharma-150045224.html?src=rss
A woman has a text chat with her long-dead lover. A family gets to hear a deceased elder speak again. A mother gets another chance to say goodbye to her child, who died suddenly, via a digital facsimile. This isn’t a preview of the next season of Black Mirror — these are all true stories from the Sundance documentary Eternal You, a fascinating and frightening dive into tech companies using AI to digitally resurrect the dead.
It’s yet another way modern AI, which includes large language models like ChatGPT and similar bespoke solutions, has the potential to transform society. And as Eternal You shows, the AI afterlife industry is already having a profound effect on its early users.
The film opens on a woman having a late night text chat with a friend: “I can’t believe I’m trying this, how are you?” she asks, as if she’s using the internet for the first time. “I’m okay. I’m working, I’m living. I’m… scared,” her friend replies. When she asks why, they reply, “I’m not used to being dead.”
Beetz Brothers Film Production
It turns out the woman, Christi Angel, is using the AI service Project December to chat with a simulation of Cameroun, her first love, who died many years ago. Angel is clearly intrigued by the technology, but as a devout Christian, she’s also a bit spooked by the prospect of raising the dead. The AI system eventually gives her some reason to be concerned: Cameroun reveals that he’s not in heaven, as she assumed. He’s in hell.
“You’re not in hell,” she writes back. “I am in hell,” the AI chatbot insists. The digital Cameroun says he’s in a “dark and lonely” place where his only companions are “mostly addicts.” The chatbot goes on to say he’s currently haunting a treatment center, and later suggests “I’ll haunt you.” That was enough to scare Angel and make her question why she was using the service in the first place.
While Angel was aware she was talking to a digital recreation of Cameroun, built from the information she provided to Project December, she interacted with the chatbot as if she were actually chatting with him on another plane of existence. That’s a situation many users of AI resurrection services will likely encounter: your emotional response can easily overwhelm rationality while “speaking” with a dead loved one, even if the conversation is just occurring over text.
In the film, MIT sociologist Sherry Turkle suggests that our current understanding of how AI affects people is similar to our relationship with social media over a decade ago. That makes it a good time to ask questions about the human values and purposes it’s serving, she says. If we had a clearer understanding of social media early on, maybe we could have pushed Facebook and Twitter to confront misinformation and online abuse more seriously. (Perhaps the 2016 election would have looked very different if we were aware of how other countries could weaponize social media.)
Beetz Brothers Film Production
Eternal You also introduces us to Joshua Barbeau, a freelance writer who became a bit of an online celebrity in 2021 when The San Francisco Chronicle reported on his Project December chatbot: a digital version of his late fiancée, Jessica. At first, he used Project December to chat with pre-built bots, but he eventually realized he could use the underlying technology (GPT-3, at the time) to create one with Jessica’s personality. Their conversations look natural and clearly comfort Barbeau. But we’re still left wondering if chatting with a facsimile of his dead fiancée is actually helping him process his grief. It could just as easily be seen as a crutch he feels compelled to pay for.
It’s also easy to be cynical about these tools, given what we see from their creators in the film. We meet Jason Rohrer, the founder of Project December and a former indie game designer, who comes across as a typical techno-libertarian.
“I believe in personal responsibility,” he says, after admitting that he’s not exactly in control of the AI models behind Project December, and right before we see him nearly crash a drone into his co-founder’s face. “I believe that consenting adults can use that technology however they want and they’re responsible for the results of whatever they’re doing. It’s not my job as the creator of the technology to prevent the technology from being released, because I’m afraid of what somebody might do with it.”
But, as MIT’s Turkle points out, reanimating the dead via AI introduces moral questions that engineers like Rohrer likely aren’t considering. “You’re dealing with something much more profound in the human spirit,” she says. “Once something is constituted enough that you can project onto it, this life force. It’s our desire to animate the world, which is human, which is part of our beauty. But we have to worry about it, we have to keep it in check. Because I think it’s leading us down a dangerous path.”
Beetz Brothers Film Production
Another service, Hereafter.ai, lets users record stories to create a digital avatar of themselves that family members can talk to now or after they die. One woman was eager to hear her father’s voice again, but when she presented the avatar to her family, the reaction was mixed. Younger folks seemed intrigued, but the older generation didn’t want any part of it. “I fear that sometimes we can go too far with technology,” her father’s sister said. “I would just love to remember him as a person who was wonderful. I don’t want my brother to appear to me. I’m satisfied knowing he’s at peace, he’s happy, and he’s enjoying the other brothers, his mother and father.”
YOV, an AI company that also focuses on personal avatars, or “Versonas,” wants people to have seamless communication with their dead relatives across multiple channels. But, like all of these other digital afterlife companies, it runs into the same moral dilemmas. Is it ethical to digitally resurrect someone, especially if they didn’t agree to it? Is the illusion of speaking to the dead more helpful or harmful for those left behind?
The most troubling sequence in Eternal You focuses on a South Korean mother, Jang Ji-sun, who lost her young child and remains wracked with guilt about not being able to say goodbye. She ended up being the central subject in a VR documentary, Meeting You, which was broadcast in South Korea in early 2020. She went far beyond a mere text chat: Jang donned a VR headset and confronted a startlingly realistic model of her child in virtual reality. The encounter was clearly moving for Jang, and the documentary received plenty of media attention at the time.
“There’s a line between the world of the living and the world of the dead,” said Kim Jong-woo, the producer behind Meeting You. “By line, I mean the fact that the dead can’t come back to life. But people saw the experience as crossing that line. After all, I created an experience in which the beloved seemed to have returned. Have I made some huge mistake? Have I broken the principle of humankind? I don’t know… maybe to some extent.”
Eternal You paints a haunting portrait of an industry already revving up to capitalize on grief-stricken people. That’s not exactly new; psychics and mediums claiming to speak with the dead have been around for as long as civilization itself. But through AI, we now have the ability to reanimate those lost souls. While that might be helpful for some, we’re clearly not ready for a world where AI resurrection is commonplace.
This article originally appeared on Engadget at https://www.engadget.com/sundance-documentary-eternal-you-shows-how-ai-companies-are-resurrecting-the-dead-153025316.html?src=rss
I’m not gonna pretend I’ve figured you, the Engadget reader, out. Trying to predict what’s gonna get your little nerd hearts all aflutter is kind of a crapshoot. But I’m pretty confident that a Tamagotchi in a guitar pedal is right up your alley. Ground Control Audio showed up to NAMM 2024 with the UwU virtual pet buffer pedal.
Now, buffer pedals are about the least exciting piece of gear you can buy for your pedalboard, probably even less exciting than a tuner. But if you’ve got a particularly large board or long cables, a buffer can dramatically improve your tone. Basically, all it does is take the incoming signal and give it a little boost so you don’t lose precious high end to tone suck. Like I said, not exciting.
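If you’re curious why that high end disappears in the first place: a passive guitar’s high output impedance and a long cable’s capacitance form a simple RC low-pass filter, and a buffer’s low output impedance pushes that filter’s cutoff far beyond hearing. Here’s a quick sketch with ballpark figures (the impedance and capacitance values are typical estimates, not measurements of any particular rig):

```python
# Why long cables "suck tone": source impedance R and cable capacitance C
# form an RC low-pass filter with cutoff f_c = 1 / (2*pi*R*C).
# Values below are ballpark estimates for illustration only.
import math

def cutoff_hz(source_impedance_ohms: float, cable_capacitance_f: float) -> float:
    return 1.0 / (2 * math.pi * source_impedance_ohms * cable_capacitance_f)

cable_c = 6 * 100e-12  # ~6 m of cable at roughly 100 pF per meter

# A passive guitar can present tens of kilohms of source impedance...
print(f"Passive pickup (~40 kOhm): {cutoff_hz(40_000, cable_c):,.0f} Hz")  # treble rolls off audibly
# ...while a buffer presents a very low output impedance.
print(f"Buffered (~100 Ohm): {cutoff_hz(100, cable_c):,.0f} Hz")  # far above the audible range
```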
What the UwU does is no different, except that it has a Tamagotchi-style virtual pet and a handful of mini-games built in. As you play, your new pedalboard buddy dances and gains experience points, and the little cat-like creature evolves over 30 levels with unique animations. As for what happens once you cross that 30-level threshold, well, the company hasn’t decided just yet. But there’s still time, since the pedal isn’t set to start shipping until March.
If simply having a new little virtual friend on your board isn’t enough whimsy for you, the UwU also has three mini-games built in: Long Cat (a Snake clone), Fishy Blox (vaguely Tetris-like) and Neko Invader. The tiny monochrome OLED and small buttons aren’t exactly ideal for playing games (and neither is hunching over a pedalboard, I might add), but it all feels true to its inspiration: old cellphone games.
Terrence O’Brien / Engadget
If you’re sitting there wondering, “why?” Well, first off, why not? Secondly, to keep you playing, obviously. Finding the time and drive to play or practice guitar can be tough, especially if you’re a teen with a hectic schedule of extracurriculars or, like me, a busy dad of two with a demanding day job. The UwU gives you a reason to play beyond just knowing you should: carving out a few minutes every day keeps your adorable little UwU happy and healthy. Frankly, if I’d had one of these when I was younger and stubbornly clinging to my belief that I didn’t need to know music theory or technique, maybe I’d be a more proficient guitarist.
Of course, none of this would matter if the UwU were a crappy buffer. But it’s got 18V of headroom and doesn’t color your tone at all. It’s also super tiny, so finding room for it on even the most crowded of pedalboards shouldn’t be too difficult.
The UwU virtual pet buffer is available now for preorder directly from Ground Control Audio for $139.
Terrence O’Brien / Engadget
This article originally appeared on Engadget at https://www.engadget.com/the-uwu-virtual-pet-buffer-is-a-tamagotchi-in-a-guitar-pedal-193633790.html?src=rss