Kids digital safety bills finding favor with lawmakers in Georgia, Kentucky, and Arizona
As more parents demand greater digital safety measures for their families, state legislators in both parties are responding with bills that address AI chatbots and the addictive algorithms used by social media companies. (Getty Images for Unsplash+)
March 17, 2026 — As AI-powered chatbots and addictive algorithms raise concerns among parents of all political persuasions, a number of bills to protect kids from the harmful products are moving forward in more traditionally conservative states.
The recent passage of chatbot safety bills in Oregon and Washington confirmed the popularity of these measures in blue-leaning states. But protecting kids from the risks of these powerful, untested, dangerous products has also proven to be a high priority for political leaders in red states.
Consider:
In Georgia, the state’s Republican-majority Senate gave 54-0 approval to a chatbot disclosure and child safety bill on March 6.
In Kentucky, the Republican-dominated House last month passed a bill protecting minors from addictive algorithms on a 96-0 vote.
Arizona’s Republican-majority House last month approved a kids chatbot safety bill on a 43-13 vote.
Similar bills across multiple states
Although the details vary from bill to bill and state to state, a number of common requirements are emerging.
Most of the bills—in these states and others—require AI chatbot operators and social media companies to verify the age of a user. If the consumer is a minor (under 18), specific safety requirements then kick in.
Those requirements often include a prohibition on the generation of sexual material, a ban on emotional manipulation, the creation of parental time-usage tools, and a ban on using rewards to keep minors engaged with the product.
Most bills also require protocols to identify and help users who express suicidal ideation or self-harm. Those protocols are required for users of all ages, not just minors.
Here are the details in the bills currently moving forward in Georgia, Kentucky, and Arizona.
Georgia’s SB 540: Kids and chatbot safety
This chatbot disclosure and child safety bill would require operators to take steps limiting certain chatbot interactions with minors, provide parental privacy tools, and maintain protocols for prompts involving suicidal ideation or self-harm.
Georgia’s Republican-majority Senate gave a resounding 54-0 approval to SB 540 on March 6. It was given a second reading in the House on March 10.
Sponsored by Sen. Jason Anavitarte (R-Polk County), the bill has seven co-sponsors, all Republicans.
Here’s an overview of the bill:
Chatbots covered: The bill applies to operators of AI systems or chatbots that simulate human conversation and interaction, when the chatbot or system is interacting with a minor (under 18).
Disclosure of AI: The operator must “clearly and conspicuously disclose to a minor account holder that he or she is interacting with a conversational AI service as opposed to a natural person.” This disclosure must appear at the beginning of each session and at least every three hours in a continuous interaction.
Rewarding engagement not allowed: Chatbot operators may not award points or rewards with the intent to encourage increased engagement with the chatbot.
Prohibiting sexual content: Chatbot operators must “institute reasonable measures” to prevent the AI service from producing sexually explicit content; statements that suggest the minor engage in sexual conduct; or statements that sexually objectify the minor.
Emotional manipulation not allowed: The bill prohibits chatbots from generating statements that would lead a reasonable person to believe the user (a minor) is interacting with a natural person. These include, but are not limited to, statements that simulate emotional dependence, make romantic or sexual innuendos, or role-play adult-minor romantic relationships.
Age verification for adult material: Operators of chatbots that provide sexually explicit material must verify that the user is 18 or older.
Parental tools required: Chatbot operators must offer tools for a minor account holder's parent or guardian to manage the account holder's privacy and account settings.
Suicide/self-harm protocols: Operators must have protocols in place to respond to a user prompt regarding suicidal ideation or self-harm, including referring the user to crisis service providers.
Enforcement: The Attorney General may bring civil enforcement actions for violations, with injunctive relief or civil penalties up to $10,000 per violation.
Kentucky’s HB 227: Preventing algorithm addiction
HB 227 is a kids digital safety bill concerned with addictive algorithms deployed by social media companies.
In a state where Republicans dominate the House with an 80-20 split, legislators last month approved HB 227 on a unanimous 96-0 vote. The bill, led by Rep. Matt Lockett (R-Fayette County) with 26 co-sponsors, would require social media companies to verify user age, obtain parental consent for users under 16, and prohibit the use of addictive algorithms for minors.
Included in the bill:
Covered social media platforms: The bill applies to the largest social media platforms, those with more than $1 billion in annual advertising revenue. Instagram, Facebook, TikTok, Snapchat, and YouTube all would be covered by the bill. The bill distinguishes between “child” (under 16) and “minor” (under 18). Most requirements in the bill affect child users.
Parental consent required for child users: A covered social media platform may not create an account for a child (under 16 years of age) without obtaining verifiable parental consent.
Privacy settings: Parent-verified child accounts must have all privacy settings set by default at the most private level. Parental consent is required (beyond the initial permission) to change those settings.
Parent setting usage limits: For child accounts (under 16), social media platforms must make available an option for parents to create a separate password that enables the parent to monitor the amount of time the child spends on the platform, set time limits on usage, and set limits on the time of day when the child can access the platform.
No addictive features allowed: Covered platforms are not allowed to include addictive features in the accounts of children (under 16), including:
infinite scrolling;
continuously loading content;
seamless content (no apparent end);
display of a profile-based feed;
push notifications;
autoplay video.
Rewarding engagement not allowed: Platforms may not award or display badges, tiers, or any form of recognition based on the hours spent on the platform, number of followers, number or frequency of postings, or any other metric of usage or performance. This applies only to accounts held by children (under 16).
Arizona’s HB 2311: Chatbot safety measures for kids
HB 2311 is a kids chatbot safety bill sponsored by Rep. Tony Rivero (R-Pima/Santa Cruz County). It was approved by the full House on Feb. 24 by a vote of 43-13, and was read a second time in the Senate on March 9.
Included in the bill:
Safety measures for minors: The safety requirements in HB 2311 are meant to cover AI system operators whose products interact with minors (users under 18).
Disclosure: The bill requires chatbot operators to disclose to a minor account holder that the minor is interacting with an AI service and not a human. The disclosure must appear at the beginning of each session and at least once every three hours of continuous interaction.
Engagement rewards not allowed: Operators are not allowed to offer to minor users any points or rewards intended to encourage increased engagement with the AI system.
Sexual material prohibited: Operators must “institute reasonable measures” to prevent the AI service from producing sexual content, generating statements that encourage the minor to engage in sexual conduct, or generating statements that sexually objectify the minor.
No emotional manipulation allowed: Chatbot operators must prevent their product from offering statements to minor users that simulate emotional dependence, make romantic or sexual innuendos, or engage in role-playing of adult-minor romantic relationships.
Parental tools required for users under 13: The bill requires AI system operators to offer tools to manage the account holder’s privacy and account settings for the minor’s parent or guardian, if the minor is under 13 years of age.
Requirements for all users, adult and minor:
The bill requires, for all users regardless of age, the adoption of a protocol for a chatbot to respond to a user prompt regarding suicidal ideation or self-harm. The protocol must include reasonable efforts to refer the user to crisis service providers.
Chatbot operators are prohibited from indicating that the AI service is designed to provide professional mental or behavioral health care.
Enforcement: The Attorney General shall enforce the measure. There is no private right of action. Violations of the bill’s requirements would be subject to an injunction and liability for the greater of actual damages or civil penalties of $1,000 per violation, up to $500,000 per operator.
Effective date: Oct. 1, 2027.