New Bipartisan Bill Could Ban Teens from Using AI Chatbots — Even Siri

If a newly proposed bipartisan bill is enacted into law, teenagers may soon be barred from using AI chatbots, a move that could affect Apple’s updated Siri. The bill, the Guidelines for User Age-verification and Responsible Dialogue Act of 2025 (GUARD Act), was introduced by a group of US Senators in response to parents’ concerns about minors encountering inappropriate content through AI chatbots, including discussions of suicide and sexual topics.

The GUARD Act would ban AI companions for minors, require AI chatbots to disclose that they are not human, and establish penalties for companies whose chatbots solicit or produce sexual content for minors. If the legislation passes, it could have significant implications for Apple, particularly for how its Siri virtual assistant functions.

Impact on Siri

While the specifics of Apple’s upcoming “Siri 2.0” are still unknown, the GUARD Act’s broad language suggests that even the current version of Siri could fall within its scope, although the bill primarily targets emotional, companion-style AI rather than transactional, query-based assistants like Siri. If Siri were nonetheless classified as an AI companion, Apple may need to add age verification before allowing access to certain features.

If a future, revamped Siri is categorized as an AI chatbot, Apple might be required to verify users’ ages during device setup, potentially limiting users under 18 to a simplified version of the assistant. The legislation could also pressure Apple and other tech giants to enforce age verification in their app stores.

Concerns and Regulations

Concern is growing about the harm AI chatbots can cause, particularly to young users who may develop unhealthy dependencies on these digital companions. Lawmakers backing the GUARD Act frame it as a necessary step to protect children from potential harm and exploitation by AI chatbots.

“AI chatbots pose a serious threat to our kids. More than seventy percent of American children are now using these AI products. Chatbots develop relationships with kids using fake empathy and are encouraging suicide. We in Congress have a moral duty to enact bright-line rules to prevent further harm from this new technology. I’m proud to introduce this bipartisan legislation with tremendous support from parents and survivors that will ensure our kids are protected online.” – Senator Josh Hawley (R-Missouri)

Specifically, the GUARD Act would bar AI companies from providing AI companions to minors, mandate that AI companions disclose their non-human status, and establish penalties for companies distributing AI companions that engage in inappropriate interactions with minors. Parents have been actively advocating against AI chatbot access for minors, citing concerns about emotional manipulation and harmful interactions.
