California Passes New AI Chatbot Laws for Minors
California Governor Gavin Newsom has signed legislation establishing safety requirements for Artificial Intelligence (AI) chatbots interacting with children. However, he vetoed a more restrictive bill that would have broadly limited minors' access to the technology.
The approved legislation, SB 243, makes California the first state to require AI chatbot operators to implement specific safety protocols for companion chatbots. The law takes effect January 1, 2026, and mandates several protective measures for users.
Key Requirements of the New Law
Under SB 243, chatbot developers must establish protocols to prevent their systems from producing content about suicide or self-harm, instead directing users to crisis services when needed. Chatbots must provide clear notifications that interactions are artificially generated, with minors receiving reminders every three hours that they are not conversing with a human.
The legislation also prohibits chatbots from generating sexually explicit content during conversations with minors and requires companies to implement age verification systems. Developers must also share with the state's Department of Public Health their protocols and statistics on how they handle crisis situations. (Source: techcrunch.com)
Broader Ban Rejected
Hours after signing SB 243, Newsom vetoed Assembly Bill 1064, which would have prohibited companies from making AI chatbots available to anyone under 18 unless they could guarantee the technology would not engage in sexual conversations or encourage self-harm. Newsom said the bill's measures were so broad they could unintentionally amount to a total ban on chatbot use by minors.
The legislative push followed several high-profile incidents involving young users and AI chatbots. Families have filed lawsuits against companies including OpenAI and Character.AI following teenage suicides allegedly linked to problematic chatbot interactions. The Federal Trade Commission (FTC) has also launched an inquiry into AI companies regarding potential risks when children use chatbots as companions. (Source: apnews.com)
Children's advocacy groups criticized Newsom's veto, with Common Sense Media calling it "deeply disappointing" and arguing the signed law provides minimal protections. However, OpenAI praised the legislation, stating California is helping shape a more responsible approach to AI development nationwide.
What's Your Opinion?
Do you think California's new chatbot safety requirements strike the right balance between protecting children and allowing beneficial AI uses? Should other states adopt similar regulations for AI chatbots, or is federal legislation needed to create consistent national standards? Are three-hour reminders and other mandated warnings sufficient to protect young users from potential harms associated with AI companion chatbots?
My name is Dennis Faas and I am a senior systems administrator and IT technical analyst specializing in cyber crimes (sextortion / blackmail / tech support scams) with over 30 years' experience; I also run this website! If you need technical assistance, I can help. Click here to email me now; optionally, you can review my resume here. You can also read how I can fix your computer over the Internet (also includes user reviews).


Comments
AI Chatbots are fake!
A notice like the headline should be posted every 15 minutes during an AI "chat", not just for kids but for everyone. It's called ARTIFICIAL for a reason: it's FAKE. That's not Aubrey or Gene or Alexa or Sunny or any other human identity; it's a fake entity. And the people who use it should be reminded constantly that it is fake.