Chrome's latest safety update will be more proactive about protecting you

Chrome is getting a series of safety updates that could improve your security while browsing online. In a release, Google announced the new features, which include protection against abusive notifications, tighter limits on site permissions and reviews of risky extensions.

Safety Check, Chrome’s security monitor, will now run continuously in the background so it can take protective steps more readily. The tool will tell you what it’s doing, which should include revoking permissions from sites you no longer visit and from sites Google Safe Browsing believes are deceiving you into granting permission. It will also flag notifications it thinks you don’t want and surface issues that require your attention. Plus, Safety Check on desktop should alert you to any Chrome extensions that might pose a risk.

Google is also shortening how long site permissions last in Chrome on desktop and Android. The new feature will let you approve mic or camera access for a single visit rather than indefinitely, so sites will have to request your permission again the next time they need it. Google is also expanding the one-tap button for unsubscribing from a site’s notifications beyond Pixel devices to more Android phones.

This article originally appeared on Engadget at https://www.engadget.com/chromes-latest-safety-update-will-be-more-proactive-about-protecting-you-160046221.html?src=rss

GM and Hyundai plan to work together on cars and clean-energy tech

It’s not totally uncommon for major automakers to buddy up on projects, share their knowledge and try to find ideas that benefit all parties. The latest to snuggle up are GM and Hyundai. Through their collaboration, they hope to improve their competitiveness while trying to reduce the costs and risks involved with developing new tech.

The two companies have signed a non-binding agreement and they’ll immediately start assessing joint opportunities and working toward binding agreements. According to GM CEO Mary Barra, the aim “is to unlock the scale and creativity of both companies to deliver even more competitive vehicles to customers faster and more efficiently.”

Projects that the two sides are looking at working on together include co-development and production of passenger and commercial vehicles, internal combustion engines and electric and hydrogen clean energy tech. They’ll also explore supply chain efficiency — combined sourcing for the likes of battery raw materials and steel could save them both a bundle. GM and Hyundai will look into ways that they can harness their scale and knowhow to do all of this while reducing costs.

It might be a while before we see any fruits of these labors, but it’s smart for automakers to team up and try to reduce costs, especially with the EV market being somewhat dicey. Ford’s EV division, for instance, is on track to lose around $5 billion this year.

There are other types of partnerships between automakers, of course. In June, Volkswagen and Rivian teamed up, with the former expected to invest $3 billion into the EV company and a further $2 billion on a joint venture between the two sides.

This article originally appeared on Engadget at https://www.engadget.com/transportation/evs/gm-and-hyundai-plan-to-work-together-on-cars-and-clean-energy-tech-162625133.html?src=rss

The FDA greenlights Apple’s Hearing Aid feature for AirPods Pro

The Food and Drug Administration has approved Apple’s over-the-counter Hearing Aid feature. Designed for people with mild to moderate hearing loss, it transforms the second-gen AirPods Pro into OTC hearing aids. This follows the FDA’s 2022 decision to allow adults with less-than-severe impairment to use corrective consumer hearing devices without a professional test, prescription or fitting.

The FDA says Apple’s software-based Hearing Test feature for AirPods Pro delivered benefits similar to those seen when the wireless earbuds were professionally fitted. “Results also showed comparable performance for tests measuring levels of amplification in the ear canal, as well as a measure of speech understanding in noise,” the FDA wrote in its announcement. The agency adds that it didn’t observe any “adverse events” from using the device as an OTC hearing aid.

Apple’s Hearing Aid feature, coming in iOS 18, starts with a hearing test on your paired iPhone or iPad. As the image above shows, the test begins by ensuring your earbuds have a good seal. After that, it activates active noise cancellation (ANC) and asks you to tap the screen when you hear tones in the left and right ears.

Once you finish, your results will live in the iOS Health app, where you can see how your results change (or not) over time. You can download your results and give them to an audiologist anytime. (If the test determines you have severe hearing loss, it will recommend you seek a professional assessment since the AirPods feature is only approved for those with mild to moderate impairment.)

Engadget’s Billy Steele got an early preview of the feature after Apple’s big iPhone 16 event earlier this week. “It seems to be as quick and easy as Apple describes,” our audio expert wrote. Although the demo was a simulation, it covered each step of the process, adding up to only about five minutes.

Apple developed the feature using 150,000 real-world audiograms and millions of simulations. The company’s FDA application was reviewed under the agency’s De Novo premarket pathway, which provides a runway for novel devices that don’t carry serious risk.

Apple’s Hearing Aid and Hearing Test features will arrive no earlier than iOS 18’s public launch on September 16, and the second-gen AirPods Pro are required to use them.

This article originally appeared on Engadget at https://www.engadget.com/audio/headphones/the-fda-greenlights-apples-hearing-aid-feature-for-airpods-pro-164912484.html?src=rss

I don't get why Apple’s multitrack Voice Memos require an iPhone 16 Pro

Apple’s recent iPhone event brought some nifty ideas, from the camera button to a reinvention of Google Lens and beyond. The company also announced that it’s bringing simple multitrack recording to Voice Memos. This was particularly exciting for me since, well, I use Voice Memos a lot. I have nearly 500 of these little recordings that were made during the lifetime of my iPhone 14 Pro and thousands more in the cloud. You never know when you’ll need a random tune you hummed while waiting for the subway in 2013. 

So this feature felt tailor-made for me. I write songs. I play guitar. I do everything that lady in the commercial does, including opening the fridge late at night for no real reason.

Image: A lady in front of a fridge. (Apple)

Then reality hit. This isn’t a software update that will hit all iPhone models. It’s tied to the ultra-premium iPhone 16 Pro, which starts at a cool $1,000. I don’t really want to upgrade right now, so the dream of singing over an acoustic guitar track right on the Voice Memos app is dead on arrival.

Why is this particular feature walled behind the iPhone 16 Pro? It’s a simple multitrack recording function. From the ad, it looks like the app can’t even layer more than two tracks at a time. This can’t exactly be taxing that A18 Pro chip, especially when the phone can also handle 4K/120 FPS video recording in Dolby Vision. 

Pro Tools, a popular digital audio workstation, was first introduced in 1991, two years before Intel released the Pentium chip. Computers of that era had no trouble layering tracks. For a bit of reference, last year’s A17 Pro chip had around 19 billion transistors, while an original Pentium had around three million. In other words, a modern smartphone chip packs roughly 6,300 times as many transistors as a 1993 Pentium-based PC.

So let us layer tracks on Voice Memos, Apple! It can’t be that complicated. I’ve been using dedicated multitrack apps ever since the iPhone 3G, and Apple throws GarageBand in with every iPhone. Both GarageBand and third-party recording apps have their place, sure, but nothing beats the quickness and ease of use of Voice Memos. It’d sure be great to be able to make a quick-and-dirty acoustic demo of a song and send it out to someone without having to navigate a fairly complicated interface.

Image: App in front of a refrigerator. (Apple)

Yeah. I see the elephant in the room. There’s a part of the ad that I’ve been avoiding. The woman records the vocal layer over the guitar track without wearing headphones. She just sang into the phone while standing in front of that refrigerator. Now, that’s something old-school Pentiums could not do. There’s some microphone placement wizardry going on there, along with machine learning algorithms that reduce unwanted ambient noise. The iPhone 16 Pro has a brand-new microphone array, so I get that older models might not be able to handle this particular part of the equation.

But who cares? That’s a really neat feature. It’s also completely unnecessary. If you’re reading this, you are likely already wearing earbuds/headphones or have some within reach. Record the first track without the headphones. Record the secondary layer while wearing headphones. That’s it. Problem solved. You can even do it in front of the refrigerator.

Also, both the base-level iPhone 16 and the Pro support Audio Mix, which lets people adjust sound levels from different sources after capturing video. This is done without the new Studio Mics on the iPhone 16 Pro and seems to reduce ambient noise in a similar way. So it’s possible there’s a software solution here to handle even that elephant in the room. After all, the company credits “powerful machine learning algorithms” for this tech. If it can erase environmental wind noise, surely it can handle music playing in the background?

So I am once again asking Apple to let the rest of us play around with multitrack recording in Voice Memos. There’s no reason every older iPhone model couldn’t compute its way to a simple guitar/vocal two-track WAV file. Pop the feature into a software update. I hear there’s one for iOS 18 coming really soon, and another for Apple Intelligence after that.

This article originally appeared on Engadget at https://www.engadget.com/mobile/smartphones/i-dont-get-why-apples-multitrack-voice-memos-require-an-iphone-16-pro-175134621.html?src=rss

A new report raises concerns about the future of NASA

A new report from the National Academies of Sciences, Engineering and Medicine (NASEM) expresses serious concerns about the future of America’s space exploration agency.

The NASEM report, written by a panel of aerospace experts, lays out what it sees as a possible “hollow future” for the National Aeronautics and Space Administration (NASA). It addresses underfunding due to “declining long-term national emphasis on aeronautics and civil space,” an assertion that NASA itself is aware of and agrees with. The report also notes that NASA’s problems extend far beyond whether it has enough funding to carry out its missions and operations.

Some of the report’s “core findings” highlight areas of concern that could affect the space agency’s future. These include a focus on “short-term measures without adequate consideration for longer-term needs and implications,” reliance on “milestone-based purchase-of-service contracts” and inefficiency due to “slow and cumbersome business operations.” The report also raises concerns about the current generation of talent being siphoned off by private aerospace companies, and the next generation of engineers not receiving an adequate foundation of knowledge due to underfunded public school systems. Finally, the report states bluntly that NASA’s infrastructure “is already well beyond its design life.”

These and other issues could lead to even more serious problems. Norman Augustine, a former Lockheed Martin chief executive and the report’s lead author, told The Washington Post that reliance on the private sector could further erode NASA’s workforce, reducing its role to one of oversight instead of problem-solving.

Congress could allocate more funds to NASA to address these concerns, but that’s unlikely given lawmakers’ constant struggles to prevent government shutdowns. Instead, Augustine says NASA could prioritize its efforts around more strategic goals and initiatives.

This article originally appeared on Engadget at https://www.engadget.com/science/space/a-new-report-raises-concerns-about-the-future-of-nasa-184643260.html?src=rss

OpenAI's new o1 model is slower, on purpose

OpenAI has unveiled its latest artificial intelligence model called o1, which, the company claims, can perform complex reasoning tasks more effectively than its predecessors. The release comes as OpenAI faces increasing competition in the race to develop more sophisticated AI systems. 

The o1 model was trained to “spend more time thinking through problems before they respond, much like a person would,” OpenAI said on its website. “Through training, [the models] learn to refine their thinking process, try different strategies, and recognize their mistakes.” OpenAI envisions the new model being used by healthcare researchers to annotate cell sequencing data, by physicists to generate mathematical formulas and by software developers.

Current AI systems are essentially fancier versions of autocomplete, generating responses through statistics instead of actually “thinking” through a question, which means that they are less “intelligent” than they appear to be. When Engadget tried to get ChatGPT and other AI chatbots to solve the New York Times Spelling Bee, for instance, they fumbled and produced nonsensical results.

With o1, the company claims that it is “resetting the counter back to 1” with a new kind of AI model designed to actually engage in complex problem-solving and logical thinking. In a blog post detailing the new model, OpenAI said that it performs similarly to PhD students on challenging benchmark tasks in physics, chemistry and biology, and excels in math and coding. For example, its current flagship model, GPT-4o, correctly solved only 13 percent of problems in a qualifying exam for the International Mathematics Olympiad compared to o1, which solved 83 percent.  

The new model, however, doesn’t include capabilities like web browsing or the ability to upload files and images. And, according to The Verge, it’s significantly slower at processing prompts than GPT-4o. Despite taking longer to consider its outputs, o1 hasn’t solved the problem of “hallucinations,” a term for AI models making up information. “We can’t say we solved hallucinations,” the company’s chief research officer Bob McGrew told The Verge.

O1 is still at a nascent stage. OpenAI calls it a “preview” and is making it available only to paying ChatGPT customers starting today, with restrictions on how many questions they can ask it per week. OpenAI is also launching o1-mini, a slimmed-down version that the company says is particularly effective for coding.

This article originally appeared on Engadget at https://www.engadget.com/ai/openais-new-o1-model-is-slower-on-purpose-185711459.html?src=rss

An artist says Nerf’s Destiny 2 hand cannon is a ripoff of their work

An artist who goes by @tofu_rabbit on X says that the look of Nerf’s Ace of Spades hand cannon from Bungie’s Destiny games came from a commissioned artwork they drew almost a decade ago.

Nerf and Bungie unveiled their newest foam dart gun collaboration on Tuesday: a limited edition version of Cayde-6’s iconic “Ace of Spades” blaster from Destiny 2, which is available for purchase on Bungie’s online store. The following morning, @tofu_rabbit posted images comparing Nerf’s newest foam dart launcher to a piece of art based on the same gun that they made in 2015 and posted on their DeviantArt page.

The artist pointed out 11 parts or design elements on the Nerf gun that allegedly line up perfectly with their original artwork. These include an upside-down spade on the handle, identically shaped cracks in a strip of paint on the bullet chamber and a paisley pattern etched on the gun just in front of the trigger. They claim the design of the Nerf gun “DIRECTLY lifts a commission” they did in 2015, and add that the likeness goes beyond just being “similar” or “coincidence.”

Bungie issued a statement on its official Destiny 2 X page saying it is investigating the artist’s claims and “will share more on what next steps we are taking once we have gathered more information.” We’ve also reached out to Nerf’s parent company Hasbro for comment.

This article originally appeared on Engadget at https://www.engadget.com/gaming/an-artist-says-nerfs-destiny-2-hand-cannon-is-a-ripoff-of-their-work-224824750.html?src=rss

Elgato's latest Stream Deck is a $900 rackmount unit for pros

Elgato has introduced the Stream Deck Studio, a new version of its creative control tech that’s firmly targeting professional broadcasters. This 19-inch rackmount console has 32 LCD keys and two rotary dials. The $900 price tag shows that this is not an entry-level purchase.

The company collaborated with broadcast software specialist Bitfocus on the Stream Deck Studio. The device can run the Companion software that works on other Stream Deck models, but also supports the company’s new Buttons software. The Buttons app allows for additional interface customization designed specifically for the Stream Deck Studio.

Elgato has been expanding its Stream Deck line, which began life as a simple sidekick for livestreamers, to reach a broader range of users. For instance, it introduced an Adobe Photoshop integration aimed at visual artists. This push to reach more pro-tier customers could put Elgato into more frequent competition with rival brands like Loupedeck, which Logitech acquired last year, along with established broadcast brands like Blackmagic.

This article originally appeared on Engadget at https://www.engadget.com/computing/accessories/elgatos-latest-stream-deck-is-a-900-rackmount-unit-for-pros-215003305.html?src=rss