Amazon’s Zoox starts testing its robotaxis in Los Angeles

Amazon’s autonomous vehicle company Zoox has begun testing its robotaxis in Los Angeles. It has deployed a small fleet of retrofitted test vehicles throughout the city for mapping and data collection, which will eventually lead to paid rides for consumers.

The company is sending out manually driven Toyota Highlanders at first. These have been equipped with Zoox’s self-driving tech and will gather mapping data. Broader autonomous testing will start in LA this summer, once the Highlanders have gathered enough data regarding “driving conditions, potential roadwork, city events” and other potential surprises.

After that, Zoox plans to send in the actual robotaxis, which operate without a steering wheel or pedals. The vehicles are already being test-driven in Foster City, San Francisco and Las Vegas, though, again, not with commercial passengers.

Zoox says it should begin offering public rides in Las Vegas and San Francisco later this year. It’s also running tests in Miami, Austin and Seattle, using a Highlander with a human safety operator behind the wheel.

This expansion comes just a few weeks after the company issued a software recall for 258 of its vehicles due to an issue that caused the driving system to brake hard unexpectedly. The National Highway Traffic Safety Administration (NHTSA) received two incident reports of motorcycles crashing into the back of Zoox vehicles because of this hard braking.

Rival Waymo is currently the only autonomous vehicle company offering paid rides. The Alphabet-owned company provides the service in several cities, including San Francisco, Phoenix and Austin, and plans to expand to Atlanta, Miami and Washington DC in the next two years.

This article originally appeared on Engadget at https://www.engadget.com/transportation/amazons-zoox-starts-testing-its-robotaxis-in-los-angeles-172605497.html?src=rss

Google AI Mode Adds Multimodal Search To Mobile App

In early March 2025, Google introduced a new feature called “AI Mode” within its mobile search app. Today (April 7), the company announced a significant update to this feature, enabling multimodal search capabilities in the Google app for Android and iOS devices.

With the new update, users can now conduct searches by combining visual and textual inputs. This means a person can take a photo and ask a related question in the same search session. The results will not only provide answers but also include contextual information, relevant web links, and even comparisons between similar products when applicable. This aims to make the search process more dynamic, intuitive, and helpful for real-world scenarios.

To power this functionality, Google has implemented a technique known as “query fan-out.” This method analyzes the image as a whole while also identifying and separating individual objects within the picture. By doing so, it can generate a series of focused queries that return deeper and more specific results. For example, if a user takes a photo of a bookshelf, the tool can recognize the books shown and then recommend other titles based on the user’s apparent interests.
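Google hasn’t shared implementation details beyond that description, but a minimal sketch of what a fan-out step could look like under those assumptions is below. The detect_objects stub, the confidence threshold and the sample book titles are all hypothetical stand-ins for illustration, not Google’s actual API.

```python
# Hypothetical sketch of a "fan-out" style multimodal query.
# The object detector and the sample labels are stand-ins, not Google's real pipeline.
from dataclasses import dataclass


@dataclass
class DetectedObject:
    label: str        # e.g. a book title recognized on a shelf
    confidence: float


def detect_objects(image_path: str) -> list[DetectedObject]:
    """Stand-in for a vision step (Google Lens in the article's description)."""
    # A real pipeline would run image recognition here; this stub returns fixed examples.
    return [
        DetectedObject("Example Novel A", 0.97),
        DetectedObject("Example Novel B", 0.94),
    ]


def fan_out_queries(image_path: str, user_question: str) -> list[str]:
    """Turn one photo plus one question into several focused sub-queries."""
    objects = detect_objects(image_path)
    queries = [f"{user_question} (whole image)"]  # scene-level query
    for obj in objects:
        if obj.confidence > 0.9:  # keep only confident detections
            queries.append(f"{user_question}: {obj.label}")  # per-object query
    return queries


if __name__ == "__main__":
    for q in fan_out_queries("bookshelf.jpg", "recommend similar books"):
        print(q)
```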

This enhancement is the result of combining two technologies from Google’s ecosystem: the Gemini multimodal AI model and Google Lens. Together, they form the backbone of the improved AI Mode in Search, allowing users to interact with the app in a more natural and versatile way, and to get personalized suggestions rather than just factual answers.

The updated AI Mode is gradually being made available in the Google app for both Android and iOS users. Interested users can access it by downloading or updating the app through the official app stores.

This development represents a broader move by Google to enhance its search tools with artificial intelligence, making information retrieval more seamless and context-aware. By enabling people to combine images and questions in a single query, Google is reshaping how users interact with information in everyday situations.

Google AI Mode Adds Multimodal Search To Mobile App, original content from Ubergizmo. Read our Copyrights and terms of use.

Hades II will launch on Switch 2 and Switch before PlayStation and Xbox

When Hades II moves out of Early Access and into v1.0 later this year, Nintendo will have a prime seat at the table. A “Creator’s Voice” promo video published on Tuesday (via Kotaku) echoed what developer Supergiant Games posted in a FAQ last week: the Switch 2 and Switch will be the only consoles the roguelike is playable on out of the gate.

The video’s mention of Hades II “launching first for consoles on Nintendo Switch 2” further confirms that PlayStation and Xbox owners will at least have to wait a while before playing the highly anticipated sequel on their systems. That was already established by a Supergiant FAQ update from April 2: “While we haven’t ruled out bringing Hades II to any other platforms, our current focus is only on the versions listed above,” referencing its Early Access platforms (PC and Mac, via Steam and Epic) and Nintendo’s Switch 2 and OG Switch.

In addition, Supergiant clarified to Engadget on Tuesday that the game will launch simultaneously on those platforms, so the PC, Mac, Switch 2 and Switch versions will all be available on its release date.

This follows a similar pattern to the one the developer used with the original Hades, which initially launched on PC, Mac and Switch before later landing on PlayStation 5/4, Xbox Series X/S and Xbox One.

Screenshot from the game Hades II. (Supergiant)

The developer laid to rest any concerns that the roguelike won’t perform well on Nintendo’s platforms. “We have both versions running smoothly at a target 60 frames per second, with the Switch 2 version taking advantage of the bigger, higher-definition 1080p display,” Supergiant wrote last week.

We don’t know when Hades II, which arrived in Early Access last spring, will jump to v1.0 (apart from a general 2025 window). In February, Supergiant pushed the game’s Warsong Update, which added Ares, an updated Altar of Ashes and a final boss fight. A third big patch is also in the works before the full release.

Update, April 8, 2025, 12:54PM ET: This story has been updated with confirmation from Supergiant that the game will launch simultaneously on PC, Mac, Switch 2 and Switch.

This article originally appeared on Engadget at https://www.engadget.com/gaming/nintendo/hades-ii-will-launch-on-switch-2-and-switch-before-playstation-and-xbox-163757321.html?src=rss

The goofy multiplayer game What the Clash? hits Apple Arcade on May 1

Triband Games is back with another entry in its “What the” franchise. What the Clash? is an exclusive for Apple Arcade and will be available on May 1. Apple describes it as a “quirky, fast-paced multiplayer game” that features Triband’s take on popular minigames like table tennis, archery, racing and tag.

However, this isn’t just a simple multiplayer minigame collection. Players can use modifier cards to “create absurd combos.” This includes stuff like “giraffle, toasty archery, sticky tennis and milk the fish.” Remember, this is the developer behind the monumentally silly What the Car? and related titles.

The game offers simple touch controls, which makes sense given the platform, and there’s a solo mode for those who don’t want to goof on their friends and family. It includes leaderboards and tournaments. Also, everyone plays as a giant stretchy hand with legs (?!) that can be customized with clothes and accessories.

Triband also made the fantastic What the Golf? and the VR-focused What the Bat? Both are very good. Two of the company’s games ended up on our list of the best Apple Arcade titles. What the Car? also won mobile game of the year at the 2024 D.I.C.E. Awards.

This article originally appeared on Engadget at https://www.engadget.com/gaming/the-goofy-multiplayer-game-what-the-clash-hits-apple-arcade-on-may-1-155951496.html?src=rss

The goofy multiplayer game What the Clash? hits Apple Arcade on May 1

Triband Games is back with another entry in its “What the” franchise. What the Clash? is an exclusive for Apple Arcade and will be available on May 1. Apple describes it as a “quirky, fast-paced multiplayer game” that features Triband’s take on popular minigames like table tennis, archery, racing and tag.

However, this isn’t just a simple multiplayer minigame collection. Players can use modifier cards to “create absurd combos.” This includes stuff like “giraffle, toasty archery, sticky tennis and milk the fish.” Remember, this is the developer behind the monumentally silly What the Car? and related titles.

The game offers simple touch controls, which makes sense given the platform, and there’s a solo mode for those who don’t want to goof on their friends and family. It includes leaderboards and tournaments. Also, everyone plays as a giant stretchy hand with legs (?!) that can be customized with clothes and accessories.

Trident also made the fantastic What the Golf? and the VR-focused What the Bat? Both are very good. Two of the company’s games ended up on our list of the best Apple Arcade titles. What the Car? also won mobile game of the year at the 2024 D.I.C.E. Awards.

This article originally appeared on Engadget at https://www.engadget.com/gaming/the-goofy-multiplayer-game-what-the-clash-hits-apple-arcade-on-may-1-155951496.html?src=rss

Google AI Mode Adds Multimodal Search To Mobile App

In early March 2025, Google introduced a new feature called “AI Mode” within its mobile search app—Today (April 7th), the company announced a significant update to this feature, enabling multimodal search capabilities in the Google app for Android and iOS devices.

With the new update, users can now conduct searches by combining visual and textual inputs. This means a person can take a photo and ask a related question in the same search session. The results will not only provide answers but also include contextual information, relevant web links, and even comparisons between similar products when applicable. This aims to make the search process more dynamic, intuitive, and helpful for real-world scenarios.

To power this functionality, Google has implemented a technique known as “query distribution.” This method analyzes the image as a whole while also identifying and separating individual objects within the picture. By doing so, it can generate a series of focused queries that offer deeper and more specific results. For example, if a user takes a photo of a bookshelf, the tool can recognize the books shown and then recommend other titles based on the user’s apparent interests.

This enhancement is the result of combining two powerful technologies from Google’s ecosystem: the Gemini multimodal AI model and Google Lens. Together, they form the backbone of the improved AI Mode in Search, allowing users to interact with the app in a more natural and versatile way. In the example provided by Google, AI Mode was able to identify book covers in an image and suggest similar books to the user — demonstrating how the tool can offer not just factual answers but also personalized suggestions.

The updated AI Mode is gradually being made available in the Google app for both Android and iOS users. Interested users can access it by downloading or updating the app through the official app stores.

This development represents a broader move by Google to enhance its search tools with artificial intelligence, making information retrieval more seamless and context-aware. By enabling people to combine images and questions in a single query, Google is reshaping how users interact with information in everyday situations.

Google AI Mode Adds Multimodal Search To Mobile App

, original content from Ubergizmo. Read our Copyrights and terms of use.

Google AI Mode Adds Multimodal Search To Mobile App

In early March 2025, Google introduced a new feature called “AI Mode” within its mobile search app—Today (April 7th), the company announced a significant update to this feature, enabling multimodal search capabilities in the Google app for Android and iOS devices.

With the new update, users can now conduct searches by combining visual and textual inputs. This means a person can take a photo and ask a related question in the same search session. The results will not only provide answers but also include contextual information, relevant web links, and even comparisons between similar products when applicable. This aims to make the search process more dynamic, intuitive, and helpful for real-world scenarios.

To power this functionality, Google has implemented a technique known as “query distribution.” This method analyzes the image as a whole while also identifying and separating individual objects within the picture. By doing so, it can generate a series of focused queries that offer deeper and more specific results. For example, if a user takes a photo of a bookshelf, the tool can recognize the books shown and then recommend other titles based on the user’s apparent interests.

This enhancement is the result of combining two powerful technologies from Google’s ecosystem: the Gemini multimodal AI model and Google Lens. Together, they form the backbone of the improved AI Mode in Search, allowing users to interact with the app in a more natural and versatile way. In the example provided by Google, AI Mode was able to identify book covers in an image and suggest similar books to the user — demonstrating how the tool can offer not just factual answers but also personalized suggestions.

The updated AI Mode is gradually being made available in the Google app for both Android and iOS users. Interested users can access it by downloading or updating the app through the official app stores.

This development represents a broader move by Google to enhance its search tools with artificial intelligence, making information retrieval more seamless and context-aware. By enabling people to combine images and questions in a single query, Google is reshaping how users interact with information in everyday situations.

Google AI Mode Adds Multimodal Search To Mobile App

, original content from Ubergizmo. Read our Copyrights and terms of use.

Google AI Mode Adds Multimodal Search To Mobile App

In early March 2025, Google introduced a new feature called “AI Mode” within its mobile search app—Today (April 7th), the company announced a significant update to this feature, enabling multimodal search capabilities in the Google app for Android and iOS devices.

With the new update, users can now conduct searches by combining visual and textual inputs. This means a person can take a photo and ask a related question in the same search session. The results will not only provide answers but also include contextual information, relevant web links, and even comparisons between similar products when applicable. This aims to make the search process more dynamic, intuitive, and helpful for real-world scenarios.

To power this functionality, Google has implemented a technique known as “query distribution.” This method analyzes the image as a whole while also identifying and separating individual objects within the picture. By doing so, it can generate a series of focused queries that offer deeper and more specific results. For example, if a user takes a photo of a bookshelf, the tool can recognize the books shown and then recommend other titles based on the user’s apparent interests.

This enhancement is the result of combining two powerful technologies from Google’s ecosystem: the Gemini multimodal AI model and Google Lens. Together, they form the backbone of the improved AI Mode in Search, allowing users to interact with the app in a more natural and versatile way. In the example provided by Google, AI Mode was able to identify book covers in an image and suggest similar books to the user — demonstrating how the tool can offer not just factual answers but also personalized suggestions.

The updated AI Mode is gradually being made available in the Google app for both Android and iOS users. Interested users can access it by downloading or updating the app through the official app stores.

This development represents a broader move by Google to enhance its search tools with artificial intelligence, making information retrieval more seamless and context-aware. By enabling people to combine images and questions in a single query, Google is reshaping how users interact with information in everyday situations.

Google AI Mode Adds Multimodal Search To Mobile App

, original content from Ubergizmo. Read our Copyrights and terms of use.

Google AI Mode Adds Multimodal Search To Mobile App

In early March 2025, Google introduced a new feature called “AI Mode” within its mobile search app—Today (April 7th), the company announced a significant update to this feature, enabling multimodal search capabilities in the Google app for Android and iOS devices.

With the new update, users can now conduct searches by combining visual and textual inputs. This means a person can take a photo and ask a related question in the same search session. The results will not only provide answers but also include contextual information, relevant web links, and even comparisons between similar products when applicable. This aims to make the search process more dynamic, intuitive, and helpful for real-world scenarios.

To power this functionality, Google has implemented a technique known as “query distribution.” This method analyzes the image as a whole while also identifying and separating individual objects within the picture. By doing so, it can generate a series of focused queries that offer deeper and more specific results. For example, if a user takes a photo of a bookshelf, the tool can recognize the books shown and then recommend other titles based on the user’s apparent interests.

This enhancement is the result of combining two powerful technologies from Google’s ecosystem: the Gemini multimodal AI model and Google Lens. Together, they form the backbone of the improved AI Mode in Search, allowing users to interact with the app in a more natural and versatile way. In the example provided by Google, AI Mode was able to identify book covers in an image and suggest similar books to the user — demonstrating how the tool can offer not just factual answers but also personalized suggestions.

The updated AI Mode is gradually being made available in the Google app for both Android and iOS users. Interested users can access it by downloading or updating the app through the official app stores.

This development represents a broader move by Google to enhance its search tools with artificial intelligence, making information retrieval more seamless and context-aware. By enabling people to combine images and questions in a single query, Google is reshaping how users interact with information in everyday situations.

Google AI Mode Adds Multimodal Search To Mobile App

, original content from Ubergizmo. Read our Copyrights and terms of use.