Meta brings ‘teen accounts’ to Facebook and Messenger

Meta is bringing its “teen accounts” to Facebook and Messenger. Like on Instagram, the company will begin automatically moving younger teens to the new accounts, which come with mandatory parental control features and restrictions on who they can message and interact with.

The company first introduced the feature on Instagram last fall and says 54 million teens now use the more locked-down accounts. (Instagram requires teens between the ages of 13 and 15 to use a teen account and has in-app tools meant to catch those lying about their ages.) Teen accounts on Facebook and Messenger will operate similarly. Teens won’t be able to interact with unknown contacts or change certain privacy settings unless a parent approves the action. Parents will also be able to monitor their child’s screen time and friends list.

Meta is also adding new safety features to teen accounts on Instagram. With the change, teens under 16 will need parental permission to start a live broadcast. The app will also prevent younger teens from turning off nudity protection — the feature that automatically blurs images in direct messages that contain “suspected nudity” — unless they get parental approval.

Those may seem like obvious safeguards (they are), but they at least show that Meta is closing gaps in its teen-focused safety features. In recent years, the company has come under intense scrutiny over the effect its apps, particularly Instagram, have on teens. Dozens of states are currently suing Meta over alleged harms to younger users.

Google AI Mode Adds Multimodal Search To Mobile App

Google introduced a new feature called “AI Mode” in its mobile search app in early March 2025. Today (April 7th), the company announced a significant update that brings multimodal search capabilities to the Google app for Android and iOS devices.

With the new update, users can now conduct searches by combining visual and textual inputs. This means a person can take a photo and ask a related question in the same search session. The results will not only provide answers but also include contextual information, relevant web links, and even comparisons between similar products when applicable. This aims to make the search process more dynamic, intuitive, and helpful for real-world scenarios.

To power this functionality, Google uses a technique it calls “query fan-out.” This method analyzes the image as a whole while also identifying and separating individual objects within it, then generates a series of focused queries that return deeper, more specific results. For example, if a user takes a photo of a bookshelf, the tool can recognize the books shown and then recommend other titles based on the user’s apparent interests.
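
Google has not published implementation details, but the described behavior can be sketched roughly. Below is a minimal illustrative sketch in Python, assuming an earlier recognition step has already produced a whole-scene description and per-object labels; every name here (fan_out, SubQuery) is hypothetical and not part of any Google API.

```python
from dataclasses import dataclass

@dataclass
class SubQuery:
    subject: str   # the object (or whole scene) this query focuses on
    question: str  # the focused question issued for it

def fan_out(scene_label: str, object_labels: list[str], user_question: str) -> list[SubQuery]:
    """Expand one image-plus-text query into several focused sub-queries.

    scene_label and object_labels stand in for the output of an image
    recognition step (a whole-scene description and per-object labels);
    they are assumed inputs here, not results from any real Google API.
    """
    queries = [SubQuery(scene_label, user_question)]  # query the image as a whole
    for label in object_labels:                       # one focused query per detected object
        queries.append(SubQuery(label, f"{user_question} (specifically: {label})"))
    return queries

# Example: a photo of a bookshelf plus a question about what to read next.
for q in fan_out(
    scene_label="shelf of software engineering books",
    object_labels=["The Pragmatic Programmer", "Clean Code"],
    user_question="Which similar books would you recommend?",
):
    print(f"{q.subject}: {q.question}")
```

In this sketch, the fan-out simply issues one query for the scene and one per recognized object; the real system presumably runs such sub-queries concurrently and merges their results before responding.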

This enhancement is the result of combining two technologies from Google’s ecosystem: the Gemini multimodal AI model and Google Lens. Together, they form the backbone of the improved AI Mode in Search, allowing users to interact with the app in a more natural and versatile way and to receive personalized suggestions rather than just factual answers.

The updated AI Mode is gradually being made available in the Google app for both Android and iOS users. Interested users can access it by downloading or updating the app through the official app stores.

This development represents a broader move by Google to enhance its search tools with artificial intelligence, making information retrieval more seamless and context-aware. By enabling people to combine images and questions in a single query, Google is reshaping how users interact with information in everyday situations.

Waymo has 'no plans' to sell ads to riders based on camera data

Rumors circulated today that robotaxi company Waymo might use data from vehicles’ interior cameras to train AI and sell targeted ads to riders. However, the company has tried to quell concerns, insisting that it won’t be targeting ads to passengers.

The situation arose after researcher and engineer Jane Manchun Wong discovered an unreleased version of Waymo’s privacy policy that suggested the robotaxi company could start using data from its vehicles to train generative AI. The draft policy has language allowing customers to opt out of Waymo “using your personal information (including interior camera data associated with your identity) for training GAI.” Wong’s discovery also suggested that Waymo could use that camera footage to sell personalized ads to riders.

Later in the day, The Verge obtained comments on this unreleased privacy policy from Waymo spokesperson Julia Ilina. “Waymo’s [machine learning] systems are not designed to use this data to identify individual people, and there are no plans to use this data for targeted ads,” she said. Ilina said the version found by Wong featured “placeholder text that doesn’t accurately reflect the feature’s purpose” and noted that the feature was still in development. It “will not introduce any changes to Waymo’s Privacy Policy, but rather will offer riders an opportunity to opt out of data collection for ML training purposes.”

Hopefully Waymo holds to those statements. Privacy and security are huge concerns as AI companies try to feed their models as much information as possible. Waymo is owned by Alphabet, whose Google division is developing its own AI assistant, Gemini, along with other AI projects at DeepMind.
