If you’re part of the intersection of virtual reality enthusiasts and Major League Baseball fans, then there’s good news for you. MLB has launched Home Run Derby VR on the Meta Quest Store, making it available for Quest 2, Quest 3 and Quest Pro users. The game was previously on Meta’s App Lab.
MLB Home Run Derby VR gives gamers the chance to explore 30 different ballparks and play up to 100 different levels. “This upgraded game offers an exciting opportunity to experience each venue like never before and utilizing advanced motion controls and realistic batting mechanics, users can step into the virtual batter’s box to emulate their favorite sluggers from anywhere in the world,” MLB shared in its announcement.
The game also offers multiplayer mode for up to four people or tournaments for up to eight. Users can choose winners by score, fastest exit velocity or longest home run. Plus, achievements can unlock bat skins, batting gloves and more for players’ Meta avatars. MLB Home Run Derby VR is available for $30 in the Meta Quest Store, but non-Meta users can pick it up on Rift or Steam VR platforms.
This launch isn’t MLB’s first foray into VR: The organization hosted its first “virtual ballpark” regular-season game in September. The experience allowed viewers to “enter” the stadium and watch avatars correspond to real-time gameplay between the Tampa Bay Rays and the Los Angeles Angels.
The LEGO Group and Epic Games have introduced new LEGO tools in Unreal Editor for Fortnite (UEFN) and Fortnite Creative, enabling creators worldwide to design and publish their own LEGO Islands within the game. This collaboration, announced at the 2024 Game Developers Conference, aims to empower Fortnite creators with the creative potential of the LEGO System in Play.
Creators are provided with LEGO Templates to facilitate the creation process and offer a variety of play experiences for gamers of all ages. Additionally, the LEGO Group has unveiled three new LEGO Islands within Fortnite, expanding the game’s creative possibilities.
The integration of LEGO elements, styles, and brand assets into Fortnite’s creative tools allows for the development of diverse experiences, which must adhere to ESRB and PEGI rating guidelines to ensure accessibility and safety for players. Creators enrolled in the Fortnite Island Creator Program may also receive payouts for their creations.
Building all new islands, limited only by your imagination.
Create your own LEGO® Island with LEGO Elements and Island Templates in UEFN and Creative – starting now! pic.twitter.com/KJif56WKa7
This initiative marks another milestone in the ongoing partnership between Epic Games and The LEGO Group, following the launch of LEGO Fortnite in December 2023. The new LEGO Islands include LEGO Prop Hunt, LEGO Battle Arena, and LEGO Cat Island Adventure, each offering unique gameplay experiences.
Kari Vinther Nielsen, Head of Play & Creator Growth at LEGO GAME, expressed excitement about democratizing creativity and enabling creators to leverage the LEGO brand in unprecedented ways. The LEGO Group plans to introduce additional LEGO-themed experiences in Fortnite throughout 2024 and beyond.
A month after taking full ownership of Hulu last November, Disney started beta testing integration with Disney+. Today, Hulu on Disney+ is officially out of beta, making it easy for subscribers to access content for both services. It’s also a way for Disney to push its Hulu bundle, which starts at $9.99 a month with ads. And if you want to go ad-free and download content for offline viewing, there’s the Duo Premium bundle for $19.99 a month.
All your favorite Hulu content lives in its own tab, but the big shows (like Shogun) will feature in the main show carousel too. However, if you’re a longtime Hulu viewer, you’ll lose your viewing progress on anything you’ve already watched or half-watched.
GLAAD found plenty of policy violations where Meta took no action.
Surprise! Meta is failing to enforce its own rules against anti-trans hate speech on its platforms. GLAAD warns that “extreme anti-trans hate content remains widespread across Instagram, Facebook and Threads.” It reported on dozens of examples of hate speech from Meta’s apps between June 2023 and March 2024. Despite the posts clearly violating Meta’s policies, the company either claimed “posts were not violative or simply did not take action on them,” according to GLAAD. The group also shared two examples of posts from Threads, Meta’s newest app where the company has tried to tamp down “political” content and other “potentially sensitive” topics.
GLAAD’s report isn’t the first time Meta’s been criticized for not protecting LGBTQIA+ users. Last year, its own Oversight Board urged Meta to “improve the accuracy of its enforcement on hate speech towards the LGBTQIA+ community.”
Marvel Rivals is a third-person 6v6 team-based shooter that sounds very Overwatch-like. It’ll be free to play, and it’s set inside of a “continually evolving universe,” which probably means new levels, new characters and new gameplay modes over time. Testers will be able to play as Spider-Man, Black Panther, Magneto, Magik and eight or nine more unannounced characters. The developers added that Rocket Raccoon, Groot, Hulk and Iron Man would also eventually be playable. The alpha will be available in May for PC players. There’s no word on a console release.
Yes, No Man’s Sky is still getting major updates. Developer Hello Games’ next update, due Wednesday, adds procedurally generated space stations (so they’ll be different every time), a ship editor and a Guild system to the nearly eight-year-old space exploration sim. The stations’ broader scale will be evident from the outside, while their interiors will include new shops, gameplay and things to do, including interacting with all those guilds.
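For readers curious how “different every time” generally works, here is a minimal sketch of seeded procedural generation, the broad technique such systems rely on. This is purely illustrative and is not Hello Games’ code; the station attributes and seeds are invented for the example.

```python
# Minimal sketch of seeded procedural generation (illustrative only).
# A fixed seed always produces the same station; different seeds
# produce different but reproducible stations.

import random

def generate_station(seed: int) -> dict:
    rng = random.Random(seed)  # deterministic RNG for this seed
    return {
        "docking_bays": rng.randint(2, 8),
        "shops": rng.sample(
            ["outfitter", "exosuit", "trade", "cartographer"],
            k=rng.randint(1, 4),
        ),
        "layout": rng.choice(["ring", "spire", "cross"]),
    }

# In practice a seed might be derived from a station's coordinates,
# so the same location always regenerates the same interior.
print(generate_station(42))
print(generate_station(1337))
```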
Samsung is set to kick off the rollout of One UI 6.1 starting March 28, extending its reach to the Galaxy S23 series, S23 FE, Z Fold5, Z Flip5, and the Tab S9 series. This update marks a significant milestone in Samsung’s ongoing efforts to integrate Galaxy AI features into its ecosystem, benefiting millions of users worldwide.
Earlier this year, Samsung introduced Galaxy AI, ushering in a new era of innovation for Galaxy smartphones. With the Galaxy S24 series leading the charge, users have already experienced the transformative potential of Galaxy AI. Now, with One UI 6.1, these cutting-edge features are expanding across a broader range of devices.
Among the standout features introduced with Galaxy AI are Circle to Search with Google, Live Translate, Chat Assist, and Generative Edit. Circle to Search with Google redefines the search experience by seamlessly integrating search functionality into users’ daily interactions.
Live Translate breaks down language barriers in real-time, facilitating fluid conversations with two-way voice translations and live captions during calls. Chat Assist enhances communication by offering translation aids, style enhancements, and grammar tweaks in 13 different languages. Generative Edit empowers users to unleash their creativity in photo editing, providing unparalleled control over visual narratives.
The software update will be available to devices sold through major carriers such as AT&T, T-Mobile, Verizon, and USCC, as well as retail partners including Amazon and Best Buy, ensuring widespread accessibility to these exciting new features.
Microsoft may be working on a white version of its current all-digital Xbox Series X console, according to leaked images reported by Exputer and documents seen by The Verge. The design appears to be identical to the current black disc version (sans the disc slot) and has the same “robot white” finish as the white Xbox Series S. If accurate, the news may mean delays to a rumored Xbox Series X refresh that carries a different design.
It’s not the first time rumors of a white all-digital Xbox Series X have leaked out. Last month, Exputer also reported that Microsoft planned to release a white, all-digital Xbox Series X sometime between June and July 2024, with a retail price $50 to $100 lower than the current Xbox Series X.
Last year, a large leak indicated that Microsoft would launch an all-digital Xbox Series X with a new cylindrical design, arriving in November of 2024 for $500. The device, code-named Brooklin, was tipped to come with Wi-Fi 6E, Bluetooth 5.2, USB-C front port, an all-new southbridge and a 6-nanometer die shrink. That would allow for a reduced (15 percent) power draw, a new low-power standby mode and increased use of recycled plastic.
Much of the news around Brooklin was effectively refuted by Xbox boss Phil Spencer shortly after the leak, though. He implied that it was based on early planning and no longer accurate. “It’s hard to see our team’s work shared in this way because so much has changed and there’s so much to be excited about right now and in the future,” he stated in an X post. “We will share the real plans when we are ready.”
Philips Hue and Samsung SmartThings are expanding their collaboration to enhance the interaction between the Philips Hue Sync TV app, Samsung TVs, and the SmartThings ecosystem. This partnership aims to offer users a more immersive entertainment experience through synchronized lighting.
Starting in spring 2024, the Philips Hue Sync TV app will introduce a monthly subscription plan alongside the existing one-time purchase option. This subscription, priced at $2.99 per month, will allow users to unlock the app on up to three Samsung TVs within the same household. Additionally, the Sync TV app will expand its availability to Brazil, Hong Kong, Poland, the Czech Republic, and Slovakia.
Integration with the SmartThings mobile app will provide users with easier control over the Sync TV app and their lighting settings. Through the SmartThings app, users can adjust settings, choose different modes, and initiate or cease synchronization without interrupting their TV viewing experience. Advanced automations will enable users to create personalized entertainment routines with just a push of a button.
Music Mode
A new feature called Music mode will be introduced to the Philips Hue Sync TV app, allowing compatible 2024 Samsung TVs to synchronize lighting with any audio content played on the TV. This feature will transform the living room into a concert hall, enhancing the audio-visual experience.
Jasper Vervoort, Business Leader at Philips Hue, expressed excitement about the expanded partnership with SmartThings, emphasizing the aim to provide users with greater control over their entertainment experiences. Mark Benson, Head of SmartThings US, highlighted the partnership’s commitment to delivering seamless and innovative solutions for enriching users’ smart home experiences.
The Philips Hue Sync TV app is compatible with Samsung Q60 series or higher QLED TVs manufactured from 2022 onward. Integration with the SmartThings app and the introduction of monthly subscriptions will commence in spring 2024, alongside the launch of Music mode for compatible 2024 Samsung TVs. Compatibility with 2022 and 2023 Samsung TVs will follow later in the year.
It’s been five months since President Joe Biden signed an executive order (EO) to address the rapid advancements in artificial intelligence. The White House is today taking another step forward in implementing the EO with a policy that aims to regulate the federal government’s use of AI. Safeguards that the agencies must have in place include, among other things, ways to mitigate the risk of algorithmic bias.
“I believe that all leaders from government, civil society and the private sector have a moral, ethical and societal duty to make sure that artificial intelligence is adopted and advanced in a way that protects the public from potential harm while ensuring everyone is able to enjoy its benefits,” Vice President Kamala Harris told reporters on a press call.
Harris announced three binding requirements under a new Office of Management and Budget (OMB) policy. First, agencies will need to ensure that any AI tools they use “do not endanger the rights and safety of the American people.” They have until December 1 to put “concrete safeguards” in place ensuring that the AI systems they’re employing don’t impact Americans’ safety or rights. Otherwise, the agency will have to stop using an AI product unless its leaders can justify that scrapping the system would have an “unacceptable” impact on critical operations.
Impact on Americans’ rights and safety
Per the policy, an AI system is deemed to impact safety if it “is used or expected to be used, in real-world conditions, to control or significantly influence the outcomes of” certain activities and decisions. Those include maintaining election integrity and voting infrastructure; controlling critical safety functions of infrastructure like water systems, emergency services and electrical grids; autonomous vehicles; and operating the physical movements of robots in “a workplace, school, housing, transportation, medical or law enforcement setting.”
Unless they have appropriate safeguards in place or can otherwise justify their use, agencies will also have to ditch AI systems that infringe on the rights of Americans. Purposes that the policy presumes to impact rights include predictive policing; social media monitoring for law enforcement; detecting plagiarism in schools; blocking or limiting protected speech; detecting or measuring human emotions and thoughts; pre-employment screening; and “replicating a person’s likeness or voice without express consent.”
When it comes to generative AI, the policy stipulates that agencies should assess potential benefits. They all also need to “establish adequate safeguards and oversight mechanisms that allow generative AI to be used in the agency without posing undue risk.”
Transparency requirements
The second requirement will force agencies to be transparent about the AI systems they’re using. “Today, President Biden and I are requiring that every year, US government agencies publish online a list of their AI systems, an assessment of the risks those systems might pose and how those risks are being managed,” Harris said.
As part of this effort, agencies will need to publish government-owned AI code, models and data, as long as doing so won’t harm the public or government operations. If an agency can’t disclose specific AI use cases for sensitivity reasons, it will still have to report metrics.
Last but not least, federal agencies will need to have internal oversight of their AI use. That includes each department appointing a chief AI officer to oversee all of an agency’s use of AI. “This is to make sure that AI is used responsibly, understanding that we must have senior leaders across our government who are specifically tasked with overseeing AI adoption and use,” Harris noted. Many agencies will also need to have AI governance boards in place by May 27.
The vice president added that prominent figures from the public and private sectors (including civil rights leaders and computer scientists) helped shape the policy along with business leaders and legal scholars.
The OMB suggests that, under the safeguards, the Transportation Security Administration may have to let airline travelers opt out of facial recognition scans without losing their place in line or facing a delay. It also suggests that there should be human oversight over things like AI fraud detection and diagnostics decisions in the federal healthcare system.
As you might imagine, government agencies are already using AI systems in a variety of ways. The National Oceanic and Atmospheric Administration is working on artificial intelligence models to help it more accurately forecast extreme weather, floods and wildfires, while the Federal Aviation Administration is using a system to help manage air traffic in major metropolitan areas to improve travel time.
“AI presents not only risk, but also a tremendous opportunity to improve public services and make progress on societal challenges like addressing climate change, improving public health and advancing equitable economic opportunity,” OMB Director Shalanda Young told reporters. “When used and overseen responsibly, AI can help agencies to reduce wait times for critical government services to improve accuracy and expand access to essential public services.”
This policy is the latest in a string of efforts to regulate the fast-evolving realm of AI. While the European Union has passed a sweeping set of rules for AI use in the bloc, and there are federal bills in the pipeline, efforts to regulate AI in the US have taken more of a patchwork approach at the state level. This month, Utah enacted a law to protect consumers from AI fraud. In Tennessee, the Ensuring Likeness Voice and Image Security Act (aka the ELVIS Act, seriously) is an attempt to protect musicians from deepfakes, i.e. having their voices cloned without permission.
Oregon Governor Tina Kotek has signed the state’s Right to Repair bill into law, and it even comes with a provision that potentially makes it stronger than California’s and Minnesota’s versions. It’s the first to prohibit (PDF) a practice called “parts pairing,” which requires the use of certain proprietary components for repair. Parts pairing prevents third-party repair services from replacing a broken component with one that didn’t come from the original manufacturer, because the replacement wouldn’t work with the company’s software. People usually get error messages if they try to install an unauthorized part, forcing them to buy from the company itself.
Under the new rules, preventing an independent provider from installing off-brand parts is prohibited, as is reducing the performance of a device that has been fixed with an unauthorized component. Even those error messages and warnings are not allowed. The ban on parts pairing doesn’t cover devices that are already out, though; it will only apply to devices manufactured after January 1, 2025.
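To make the mechanism concrete, here is a minimal, hypothetical sketch of the kind of software check the law targets. The class, serial numbers and responses are invented for illustration and do not represent any real vendor’s firmware; it simply shows how pairing a part to a factory serial can trigger the warnings and feature lockouts Oregon now prohibits on newly manufactured devices.

```python
# Hypothetical illustration of a "parts pairing" check -- the kind of
# software gate Oregon's law bars for devices made after Jan 1, 2025.
# All names, serials and behaviors here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Component:
    kind: str    # e.g. "display", "battery", "biometric_sensor"
    serial: str  # serial number reported by the installed part

# Serials the device was "paired" with at the factory (hypothetical).
FACTORY_PAIRED_SERIALS = {"display": "DSP-0001", "battery": "BAT-0001"}

def check_replacement(part: Component) -> str:
    """Return the device's response to a newly installed part."""
    expected = FACTORY_PAIRED_SERIALS.get(part.kind)
    if expected is None or part.serial == expected:
        return "accepted"
    # Warnings, degraded features or outright refusal for third-party
    # parts are exactly the behaviors the new rules prohibit.
    return "warning: unknown part detected; some features disabled"

print(check_replacement(Component("battery", "THIRD-PARTY-123")))
```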
While manufacturers like Apple seem to have changed their tune in recent years and now generally support the Right to Repair movement, Oregon’s parts pairing provision was still a point of contention. Apple senior manager John Perry told lawmakers in testimony that his company “agrees with the vast majority of Senate Bill 1596.” However, the company is worried about the security implications of allowing unauthorized replacement parts, such as biometric sensors.
Regardless, the ban on parts pairing is now a rule under Oregon’s law, along with making compatible parts available to device owners through the company or authorized service providers for favorable prices and without any “substantial” conditions. Companies are also required to make documentation on how to fix their devices, as well as any special tools needed to repair them, available to repair shops. These rules will apply to all phones sold after July 1, 2021 and to other consumer electronic devices sold after July 1, 2015.