National Security Council adds Gmail to its list of bad decisions

The Washington Post reports that members of the White House's National Security Council have used personal Gmail accounts to conduct government business. National security advisor Michael Waltz and a senior aide of his both used their own accounts to discuss sensitive information with colleagues, according to the Post's review and interviews with government officials who spoke to the newspaper anonymously.

Email is a poor channel for sharing information meant to be kept private. That applies to individuals' sensitive data, such as Social Security numbers or passwords, and even more so to confidential or classified government documents. It simply offers too many points where a bad actor can access information they shouldn't. Government departments typically use business-grade email services rather than consumer ones, and the federal government also has its own internal communications systems with additional layers of security, making it all the more baffling that current officials are being so cavalier with how they handle important information.

“Unless you are using GPG, email is not end-to-end encrypted, and the contents of a message can be intercepted and read at many points, including on Google’s email servers,” Eva Galperin, director of cybersecurity at the Electronic Frontier Foundation, told the Post.
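The point of end-to-end encryption is that intermediaries, such as a mail provider's servers, see only ciphertext; only the holder of the key can recover the message. As a toy illustration of that principle (a one-time pad, not GPG or any real mail protocol), the sketch below encrypts a message so that what a relay would observe is unreadable without the key:

```python
import secrets

def otp_xor(data: bytes, key: bytes) -> bytes:
    # One-time pad: XOR each byte with a fresh random key byte.
    # XOR is its own inverse, so the same function decrypts.
    assert len(key) >= len(data)
    return bytes(d ^ k for d, k in zip(data, key))

message = b"example sensitive message"
key = secrets.token_bytes(len(message))  # shared only with the recipient

ciphertext = otp_xor(message, key)   # what a mail server would see in transit
recovered = otp_xor(ciphertext, key) # only the key holder can do this

assert ciphertext != message
assert recovered == message
```

Ordinary email, by contrast, travels as plaintext (or is decrypted at each hop), which is exactly Galperin's point: without end-to-end encryption such as GPG, every server handling the message can read it.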

Additionally, there are regulations requiring that certain official government communications be preserved and archived. Using a personal account could allow some messages to slip through the cracks, accidentally or intentionally.
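One reason personal accounts undermine records laws is mechanical: archiving systems typically capture mail flowing through official infrastructure, so a message sent from an outside domain never enters the capture path. A simplified, hypothetical sketch (the domain names and logic are illustrative, not any agency's real system):

```python
# Hypothetical allowlist of official domains whose mail an archiver captures.
OFFICIAL_DOMAINS = {"nsc.eop.gov", "state.gov"}

def would_be_archived(sender: str) -> bool:
    """Sketch of a domain-based records capture check: mail is archived
    only if it originates from an official government domain."""
    domain = sender.rsplit("@", 1)[-1].lower()
    return domain in OFFICIAL_DOMAINS

assert would_be_archived("aide@state.gov")
assert not would_be_archived("aide@gmail.com")  # personal account slips through
```

The asymmetry is the problem: official-account mail is preserved automatically, while personal-account mail is preserved only if the sender forwards it, which is easy to neglect accidentally or avoid intentionally.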

This latest instance of dubious software use from the executive branch follows the discovery that several high-ranking national security leaders used Signal to discuss planned military actions in Yemen, then added a journalist from The Atlantic to the group chat. And while Signal is a more secure option than a public email client, even the encrypted messaging platform can be exploited, as the Pentagon warned its own team last week.

As with last week’s Signal debacle, there have been no repercussions thus far for any federal employees over these risky data-handling practices. NSC spokesman Brian Hughes told the Post he hasn’t seen evidence of Waltz using a personal account for government correspondence.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/national-security-council-adds-gmail-to-its-list-of-bad-decisions-222648613.html?src=rss

ChatGPT Image Generator Now Available For Free Users, But With Limitations

The popular chatbot ChatGPT became a trending topic recently thanks to its ability to generate images in the style of Studio Ghibli illustrations, a feature that was previously limited to paid subscribers (ChatGPT Plus, ChatGPT Pro, and ChatGPT Team users).

However, OpenAI has now made it available to all users, including those on the free plan. The announcement came from OpenAI CEO Sam Altman in a post on X (formerly Twitter). According to OpenAI’s official blog, free users can generate up to three images per day; once they reach this limit, they are invited to upgrade to a paid plan for additional image generation.
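OpenAI hasn't published how the three-images-per-day cap is enforced, but a per-user daily quota of this kind is commonly implemented as a counter keyed on user and calendar date. A minimal sketch under that assumption (not OpenAI's actual implementation):

```python
from datetime import date

class DailyQuota:
    """Toy per-user daily limit, mirroring a '3 images per day' cap.
    Illustrative only; not how OpenAI actually meters usage."""

    def __init__(self, limit: int = 3):
        self.limit = limit
        self.counts = {}  # (user, date) -> uses so far today

    def try_use(self, user: str) -> bool:
        key = (user, date.today())
        if self.counts.get(key, 0) >= self.limit:
            return False  # over the cap: prompt an upgrade instead
        self.counts[key] = self.counts.get(key, 0) + 1
        return True

quota = DailyQuota(limit=3)
results = [quota.try_use("free_user") for _ in range(4)]
assert results == [True, True, True, False]  # fourth request is refused
```

Keying the counter on the calendar date means the allowance resets automatically at the day boundary without any cleanup job.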

The rise in interest was largely driven by the appeal of Studio Ghibli-inspired illustrations, influenced by the studio’s renowned films such as Princess Mononoke, Spirited Away, and My Neighbor Totoro. The ability to create artwork in this beloved style led to a surge in new users, with ChatGPT gaining over one million new users in just a single hour.

This feature quickly became a trending topic on social media as users eagerly experimented with AI-generated illustrations. However, the excitement was accompanied by discussions about the potential consequences for artists. Many raised concerns that AI-generated art could devalue the work of human illustrators and impact the creative industry.

Despite these debates, ChatGPT continues to expand its user base at a rapid pace. According to OpenAI, the chatbot now has over 500 million weekly active users. The decision to make image generation available to everyone further cements ChatGPT’s role as a widely used AI tool, blending accessibility with advanced creative capabilities.

As AI-generated art becomes more common, discussions about its ethical and professional implications are likely to continue.

Original content from Ubergizmo. Read our Copyrights and terms of use.

Arkansas social media age verification law blocked by federal judge

An Arkansas law requiring social media companies to verify the ages of their users has been struck down by a federal judge who ruled that it was unconstitutional. The decision is a significant victory for the social media companies and digital rights groups that have opposed the law and others like it.

Arkansas became the second state (after Utah) to pass an age verification law for social media in 2023. The Social Media Safety Act required companies to verify the ages of users under 18 and get permission from their parents. The law was challenged by NetChoice, a lobbying group representing the tech industry whose membership includes Meta, Snap, X, Reddit and YouTube. NetChoice has also challenged laws restricting social media access in Utah, Texas and California.

In a ruling, Judge Timothy Brooks said that the law, known as Act 689, was overly broad. “Act 689 is a content-based restriction on speech, and it is not targeted to address the harms the State has identified,” Brooks wrote in his decision. “Arkansas takes a hatchet to adults’ and minors’ protected speech alike though the Constitution demands it use a scalpel.” Brooks also highlighted the “unconstitutionally vague” applicability of the law, which seemingly created obligations for some online services, but may have exempted services which had the “predominant or exclusive function [of]… direct messaging” like Snapchat.

“The court confirms what we have been arguing from the start: laws restricting access to protected speech violate the First Amendment,” NetChoice’s Chris Marchese said in a statement. “This ruling protects Americans from having to hand over their IDs or biometric data just to access constitutionally protected speech online.”

It’s not clear whether state officials in Arkansas will appeal the ruling. “I respect the court’s decision, and we are evaluating our options,” Arkansas Attorney General Tim Griffin said in a statement.

Even with NetChoice’s latest victory, it seems that age verification laws are unlikely to go away anytime soon. Utah recently passed an age verification requirement for app stores. And a Texas law requiring porn sites to conduct age verification is currently before the Supreme Court.

This article originally appeared on Engadget at https://www.engadget.com/social-media/arkansas-social-media-age-verification-law-blocked-by-federal-judge-194614568.html?src=rss
