Amazon takes a new brick-and-mortar approach with a stake in Neiman Marcus

Amazon changed the face of retail over the last 20 years but has failed miserably to make inroads in the luxury goods market. Now, it’s trying something new. The online retail giant has purchased a small stake in Neiman Marcus and will reportedly provide data and logistics services to the company and its new owner, Saks Fifth Avenue.

Yesterday, Saks Fifth Avenue and parent HBC announced the $2.65 billion acquisition of Neiman Marcus (which also owns Bergdorf Goodman), putting two of the largest US luxury retailers under the same roof, The Wall Street Journal reported. Amazon is a minority investor in the deal, which is still subject to regulatory approval.

“How do you future-proof a brand like Saks or Neimans or Bergdorf? You do that through technology,” Saks CEO Marc Metrick told Bloomberg. To that end, Amazon will gather high-quality customer data, analyze it to offer more personalized options and improve logistics. 

Amazon has tried to crack the luxury retail market over the years, but the major brands want nothing to do with it. “We believe the business of Amazon does not fit with LVMH, full stop, and it does not fit with our brands,” LVMH said back in 2016. LVMH (which owns Louis Vuitton, Dior, Givenchy and other labels) sells only through its own boutiques, at select retailers like Neiman Marcus or on its own websites.

In Europe, luxury brands won the right to block third-party sales of products online if they felt it damaged their image. In addition, the EU ruled in 2010 that brands with less than a 30 percent market share could prevent online retailers from selling their wares.

Amazon has tried to break into brick-and-mortar retail with varying degrees of success. Its ownership of Whole Foods is one positive example, but its cashierless Go stores have largely failed to take off.

With the acquisition of Neiman Marcus by Saks’ parent HBC, Amazon is getting involved in an organization expected to do a combined $10 billion worth of annual sales. There’s no word on the size of Amazon’s investment, but it seems a relatively safe bet compared to the more radical brick-and-mortar experiments it’s tried in the past.

This article originally appeared on Engadget at https://www.engadget.com/amazon-takes-a-new-brick-and-mortar-approach-with-a-stake-in-neiman-marcus-133019628.html?src=rss

Still Wakes the Deep is a modern horror classic

Don’t look down. Don’t look down. Don’t look down.

Waves the size of skyscrapers explode beneath me as I creep across a busted metal beam in the middle of the North Sea, suspended at the base of an oil rig that’s in the process of collapsing. I’m crawling swiftly but carefully, knees sliding on the wet metal and eyes locked on the platform in front of me. Don’t look down.

I look down. The cold sea is boiling just inches from my beam, white spray reaching up, threatening to pull me under miles of suffocating darkness and pressure. Fuck.

In Still Wakes the Deep, horror comes in multiple forms. Violent creatures stalk the walkways on thin, too-long limbs that burst from their bodies like snapping bungee cords. Human-sized pustules and bloody ribbons grow along the corridors, emitting a sickly cosmic glow. The ocean is an unrelenting threat, wailing beneath every step. And then there’s the Beira D oil rig itself, a massive and mazelike industrial platform supported by slender tension legs in the middle of a raging sea, groaning and tilting as it’s ripped apart from the inside. Each of these elements is deadly; each one manifests a unique brand of terror.

Still Wakes the Deep is a first-person horror game from The Chinese Room, the studio behind Amnesia: A Machine for Pigs, Dear Esther and Everybody’s Gone to the Rapture. The game is set in the winter of 1975 and its action is contained to the Beira D, a hulking metal maze that offers mystery, a growing familiarity and death at every turn. The rig is filled with a rich cast of characters from the British Isles, most of them Scottish. Players assume the role of Caz, an electrician on the rig whose best friend is Roy, the cook.

Still Wakes the Deep feels like a hit from the PS3 and Xbox 360 era, devoid of modern AAA bloat. It’s restrained like the original Dead Space, with a core loop that serves the narrative and vice versa. The mechanics steadily evolve without becoming repetitive or cumbersome. Its monsters are murderous but not overplayed. In Still Wakes the Deep, the horror is unrelenting but its source is constantly shifting — vicious eldritch beasts, the crumbling rig, the angry North Sea — and this diversity infuses the game with a buzzing tension until the breathtaking final scene.

The game is fully voice acted and its crew members are incredibly charming. An undercurrent of good-natured ribbing runs through every interaction, and the dialogue is earnest and legitimately funny, even in life-or-death situations. This deft character development only makes the carnage more disturbing once the monsters board the Beira D.

After the oil rig drills through a mysterious substance deep in the North Sea, a giant eldritch organism takes over the structure, crunching its metal corridors and infesting the bodies of some crew members. Caz is on a mission to survive the creatures and escape the rig — and help save Roy, whose body is fading fast because he can’t get to his insulin.

Gameplay in Still Wakes the Deep is traditional first-person horror fare, executed with elegance and expertise. The action involves leaping across broken platforms, balancing on thin ledges, running down corridors, climbing ladders, swimming through claustrophobic holes and hiding from monsters in vents and lockers. There are no guns on the Beira D, and Caz has just a screwdriver to help him break open locks and unscrew metal panels, placing the focus on pure survival rather than combat. Interactive materials tend to be highlighted in yellow, so it’s never a question of what to do or where to go, but rather how to get there without falling prey to the monsters, the sea or the rig.

Each input feels perfectly precise and responsive. Climbing a ladder, for instance, requires holding RT and pressing the analog stick in the proper direction — but if Caz slips, players need to suddenly press and hold LT as well, so he can regain his grasp in a quicktime event. In these moments of sudden panic, squeezing both triggers feels like the natural thing to do. It’s deeply satisfying to clasp the gamepad as tightly as Caz is holding the rungs of the ladder, player and character completely in sync in the aftermath of a sudden scare. Still Wakes the Deep is a prime example of intuitive game design.
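
For a sense of how that two-trigger check might be wired up, here’s a toy sketch in Python. Everything in it (the slip chance, the recovery window, the function itself) is a hypothetical illustration of the mechanic as described, not The Chinese Room’s actual code.

```python
import random

# Toy illustration of the slip-and-recover ladder check; all names and
# values here are invented for the example, not taken from the game.
SLIP_CHANCE = 0.1       # assumed chance that Caz slips on a given rung
RECOVER_WINDOW = 1.0    # assumed seconds the player has to react to a slip

def climb_rung(rt_held: bool, lt_held: bool, reaction_time: float) -> str:
    """Advance one rung: RT maintains grip, RT plus LT recovers from a slip."""
    if not rt_held:
        return "fell"                     # released the ladder entirely
    if random.random() < SLIP_CHANCE:     # Caz slips: the quicktime event begins
        if lt_held and reaction_time <= RECOVER_WINDOW:
            return "recovered"            # both triggers clasped in time
        return "fell"
    return "climbed"

# Player squeezes both triggers 0.4 seconds after the slip animation starts.
print(climb_rung(rt_held=True, lt_held=True, reaction_time=0.4))
```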

It’s also just a gorgeous game. I stopped short multiple times while playing Still Wakes the Deep simply to admire the crisp lines, complex lighting and photorealism of specific scenes, but every frame is dense with thoughtful and well-rendered details. The otherworldly structures littering the rig cause Caz’s vision to bubble like a melting film reel, and multicolored circles overtake the screen every time he passes too close to a pustule — it’s disorienting and eerily pretty, much like the rest of the game.

Still Wakes the Deep is an instant horror classic. It’s filled with heart-pounding terror and laugh-out-loud dialogue, and it all takes place in a setting that’s rarely explored in interactive media. Amid the sneaking, swimming, running and climbing on the Beira D, Still Wakes the Deep manages to tell a heartfelt and powerful story about relationships and sacrifice. Caz and Roy have a special friendship, but they also have family back on shore and returning to these people — alive, ideally — is a constant driving force.

Still Wakes the Deep is available now on PC, PS5 and Xbox Series X/S, and it’s included in Game Pass. It’s developed by The Chinese Room and published by Secret Mode.

This article originally appeared on Engadget at https://www.engadget.com/still-wakes-the-deep-is-a-modern-horror-classic-175304800.html?src=rss

Epic says that Apple rejected its third-party app store for the second time

Epic says that Apple has once again rejected its submission for a third-party app store, according to a series of posts on X. The company says that Apple rejected the latest submission over the design and position of the “install” button in the store, claiming that it too closely resembles Apple’s own “get” button. Apple also allegedly said that Epic’s “in-app purchases” label is too similar to the one Apple uses for the same purpose.

The maker of Fortnite suggests that this is just another salvo in the long-running dispute between the two companies. Epic says that it’s using the same “install” and “in-app purchases” naming conventions found “across popular app stores on multiple platforms.” As for the design language, the company states that it’s “following standard conventions for buttons in iOS apps” and that they’re “just trying to build a store that mobile users can easily understand.”

Epic has called the rejection “arbitrary, obstructive and in violation of the DMA,” referring to the EU’s Digital Markets Act. To that end, it has shared its concerns with the European Commission, which enforces the law. The company still says it’s ready to launch both the Epic Games Store and Fortnite on iOS in the EU in “the next couple of months” so long as Apple doesn’t put up “further roadblocks.”

This is just the latest news from a rivalry that goes back years. The two companies have been sparring ever since Epic started using its own in-app payment option in the iOS version of Fortnite, keeping Apple away from its 30 percent cut.

This led to a lengthy legal battle in the US over Apple’s walled-garden approach to its App Store. Epic sued Apple, and Apple banned Epic. A judge issued a permanent injunction meant to let developers steer customers around Apple’s 30 percent cut of sales. This didn’t satisfy anyone: Apple wasn’t happy, for obvious reasons, and Epic contested the language of the injunction, which didn’t call out Apple for having a monopoly. Both companies appealed, and the case eventually made its way to the Supreme Court, which declined to hear it. The justices must have had other things to do.

As the two companies continued bickering in the US, the EU passed the aforementioned DMA, which forced Apple to allow third-party storefronts on iOS devices in Europe. Since then, Epic has been trying to get its storefront going, but has been met with resistance from Apple.

This article originally appeared on Engadget at https://www.engadget.com/epic-says-that-apple-rejected-its-third-party-app-store-for-the-second-time-183914413.html?src=rss

YouTube upgrades its 'erase song' tool to remove copyrighted music only

YouTube is trying to make it easier for creators to remove songs from their videos and resolve copyright claims. In a new Creator Insider video, the company announced an upgraded “erase song” tool that can remove music from a video segment without deleting other audio, such as conversations.

When creators get a copyright claim for music, YouTube gives them the option to trim out the affected segment or to replace the song with an approved one from its audio library. Creators can’t monetize that particular video until they resolve the claim. YouTube has been testing its “erase song” tool for a while, but in the video, the company says it hasn’t been as accurate as it would like. To solve that problem, it redesigned the tool around an AI-powered algorithm that detects and removes only the copyrighted music from a video.
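
YouTube hasn’t said what model powers the redesigned tool, but the general concept, separating a mixed soundtrack into stems and keeping only the non-music ones, is a well-established technique called audio source separation. As a rough illustration of the idea (not YouTube’s actual pipeline), here’s how the open-source Spleeter library splits a clip into vocals and accompaniment; the filenames are invented for the example:

```python
# Conceptual sketch only: YouTube hasn't disclosed its method. This uses
# the open-source Spleeter library to illustrate audio source separation.
from spleeter.separator import Separator

# The pretrained "2stems" model splits audio into vocals and accompaniment.
separator = Separator("spleeter:2stems")

# Writes output/clip/vocals.wav (speech and singing, kept) and
# output/clip/accompaniment.wav (the music bed, which an "erase song"
# step would discard or subtract from the original mix).
separator.separate_to_file("clip.wav", "output/")
```

A production system would also have to match the separated music against the specific claimed recording and leave every other sound untouched, which is presumably where accuracy gets hard.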

Still, YouTube admits that the tool might not always work. If a song is particularly hard to remove, presumably due to audio quality or other sounds overlapping it, creators may have to fall back on the other options. In addition to trimming out the offending segment or replacing the song, they’ll also be able to mute that part of the video through the new erase tool.

The upgraded “erase song” tool will be available in YouTube Studio in the coming weeks.

This article originally appeared on Engadget at https://www.engadget.com/youtube-upgrades-its-erase-song-tool-to-remove-copyrighted-music-only-140032261.html?src=rss

OpenAI hit by two big security issues this week

OpenAI seems to make headlines every day, and this time it’s for a double dose of security concerns. The first issue centers on the Mac app for ChatGPT, while the second hints at broader concerns about how the company is handling its cybersecurity.

Earlier this week, engineer and Swift developer Pedro José Pereira Vieito dug into the Mac ChatGPT app and found that it was storing user conversations locally in plain text rather than encrypting them. The app is only available from OpenAI’s website, and since it isn’t distributed through the App Store, it doesn’t have to follow Apple’s sandboxing requirements. Vieito’s work was then covered by The Verge, and after the issue attracted attention, OpenAI released an update that added encryption to locally stored chats.

For the non-developers out there, sandboxing is a security practice that keeps potential vulnerabilities and failures from spreading from one application to others on a machine. And for non-security experts, storing local files in plain text means potentially sensitive data can be easily viewed by other apps or malware.
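
To make that concrete, here’s a minimal sketch of the difference between the two storage approaches, written in Python with the cryptography package’s Fernet recipe; the filenames and payload are invented, and this is an illustration, not how OpenAI’s Mac app actually implements its fix:

```python
# Minimal sketch of plain-text vs. encrypted-at-rest storage. This is an
# illustration, not how OpenAI's Mac app actually stores conversations.
from cryptography.fernet import Fernet

conversation = b'{"role": "user", "content": "something sensitive"}'

# Plain text on disk: any other app or malware with file access can read it.
with open("chat.json", "wb") as f:
    f.write(conversation)

# Encrypted at rest: the file is useless without the key, which in practice
# would live in the OS keychain rather than next to the data.
key = Fernet.generate_key()
cipher = Fernet(key)
with open("chat.enc", "wb") as f:
    f.write(cipher.encrypt(conversation))
```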

The second issue dates to 2023, and its consequences are still rippling today. Last spring, a hacker obtained information about OpenAI after illicitly accessing the company’s internal messaging systems. The New York Times reported that OpenAI technical program manager Leopold Aschenbrenner raised security concerns with the company’s board of directors, arguing that the hack implied internal vulnerabilities that foreign adversaries could exploit.

Aschenbrenner now says he was fired for disclosing information about OpenAI and for surfacing concerns about the company’s security. A representative from OpenAI told The Times that “while we share his commitment to building safe A.G.I., we disagree with many of the claims he has since made about our work” and added that his exit was not the result of whistleblowing.

App vulnerabilities are something every tech company has experienced. Breaches by hackers are also depressingly common, as are contentious relationships between whistleblowers and their former employers. However, given how broadly ChatGPT has been adopted into major players’ services and how chaotic the company’s oversight, practices and public reputation have been, these recent issues paint a worrying picture of whether OpenAI can manage its data securely.

This article originally appeared on Engadget at https://www.engadget.com/openai-hit-by-two-big-security-issues-this-week-214316082.html?src=rss
