TCAI Bill Guide: SB 5, Connecticut’s omnibus AI and online safety bill

Connecticut’s SB 5 contains a wide variety of requirements and opportunities for AI developers, operators, entrepreneurs, and consumers. It also creates some of the nation’s strongest social media protections for minors. (Photo: Steve A Johnson on Unsplash)

May 11, 2026 — Connecticut legislators have passed one of the nation’s most comprehensive AI and online safety bills of 2026. Senate Bill 5, the Online Safety Act, moved to the desk of Gov. Ned Lamont in early May. Lamont has indicated that he intends to sign it into law.

It’s been a long road for this measure. As P.R. Lockhart, economic development reporter for The Connecticut Mirror, wrote last week:

“Determining the best way to regulate artificial intelligence has been a contentious issue in Connecticut for years… Last year’s effort to pass regulation cleared the Senate but fell apart after a veto threat from Lamont, who has long worried that too much regulation could hurt businesses and hamper innovation in the state. This session, after years of efforts to pass legislation—and as questions around AI use, intellectual property, and privacy rights become more pressing—legislators were determined to pass a bill.”

If signed by Gov. Lamont, the bill will become effective on Oct. 1, 2026. Specific requirements within the bill have different effective dates.

The full bill is available here.

For Sen. James Maroney (D-Milford), SB 5’s main author, it’s been a two-year struggle to move AI protections into law.

Brief overview

It’s hard to be brief with a 74-page bill, but here are the high points:

  1. The bill requires operators of AI chatbots to include safety restrictions and protocols, as well as heightened safety features for minors.

  2. SB 5 creates some of the nation’s strongest social media safety requirements for users under age 18, including notification limitations, parental controls, warning labels, and restrictions on addictive algorithmic feeds.

  3. The bill requires AI developers to embed provenance data that allows consumers to verify media as AI-created.

  4. SB 5 creates new consumer disclosures and protections around subscription-based AI products.

  5. The bill creates whistleblower protections for AI company employees.

  6. SB 5 regulates the use of AI in hiring and employment decisions, including reporting requirements for layoffs caused or influenced by AI.

  7. The bill creates a number of new state AI programs, including a state AI working group, an AI advisor, an IVO pilot program, and a Connecticut AI Academy.

Here’s a section-by-section breakdown.

Chatbot safety protections

This section is effective Jan. 1, 2027. Violations are considered an unfair or deceptive trade practice under existing law, enforced solely by the Attorney General. No private right of action.

AI chatbots that simulate human conversation are subject to a number of requirements and regulations.

Chatbot operators must clearly declare at the start of each interaction (and at least hourly) that the user is talking to an AI system, not a human.

Operators must implement protocols to detect expressions of suicidal ideation, self-harm, or imminent violence and refer users to appropriate resources.

Regulations for minors:

For users who are under 18, these regulations apply:

Operators must prevent the chatbot from encouraging the user to engage in self-harm, suicidal ideation, physical violence, disordered eating, or the consumption of alcohol or drugs.

Chatbots may not be offered or marketed as therapists to minors. They may not discourage users from seeking mental health services or assistance from an appropriate adult.

Chatbots are prohibited from engaging in any romantic or sexually explicit interaction with minors, and from using any manipulative techniques to engage minors or extend their interactions with the chatbot.

Those prohibited techniques include excessive praise; mimicking a romantic bond; simulating feelings of emotional distress, loneliness, guilt, or abandonment; and soliciting gifts or purchases.

Operators must offer, to minor users and their parents, tools to manage the user’s screen time and account settings.

Social media protections for kids

This section becomes effective on Jan. 1, 2028.

SB 5 includes significant new restrictions on the use of addictive or targeted algorithms by social media platforms and other digital companies.

The restrictions cover any digital platform that, as a significant part of its services, recommends, selects, or prioritizes media items (text, image, or video) generated or shared on a platform or by users of the platform.

Educational sites and platforms used primarily for retail sales (such as eBay) are exempt.

Age verification required: Social media platforms must use “commercially reasonable and technically feasible” methods to determine if the user is a minor (under 18).

Parental consent: If the user is a minor, parental consent is required to turn on the platform’s algorithmic feed. If there is no parental consent, the algorithmic feed must default to off.

Data deletion: Data collected to determine if a user is a minor must be deleted immediately after age is verified. That data may not be used for any other purpose.

Service downgrades prohibited: Platforms may not withhold or degrade the product or service due to the prohibition against algorithmic feeds for minor users.

Surgeon General’s warning required: Platforms must display this warning clearly and conspicuously: "The Surgeon General has warned that while social media may have benefits for some young users, social media is associated with significant mental health harms and has not been proven safe for young users."

The warning must appear when a minor first accesses the platform on any given day. The warning must occupy at least 75% of the window and remain for at least 30 seconds. It must reappear after three continuous or noncontinuous hours of use during that same day. The warning is not required for adult users.

Limited hours for notifications: Any notifications sent by the platform operator to a minor must be sent between 8am and 9pm Eastern time. Notifications to minors are prohibited outside that time frame.

Parental tools: Platform operators must offer tools for parents to adjust the settings on a minor’s account.

Annual report to state: Starting on March 1, 2028, platform operators must publicly disclose the total number of users who used the platform during the preceding calendar year; the total number of minor users; and the total number of minor accounts given parental consent to access algorithmic feeds. The report must also include the average amount of time per day that minors used the platform, broken down by age and hour of the day.

Provenance data required in Gen AI content

This section covers AI developers whose systems have more than 1 million monthly users. Enforcement is solely by the Attorney General.

Under SB 5, developers are required to include provenance data in any audio, image or video content created or materially altered by their generative AI system.

The provenance data must allow a consumer to assess whether the content was created or materially altered by the Gen AI system. Acceptable methods include the standards set by the C2PA organization, among others.

“Provenance data” means data embedded into digital content, or included in the content’s metadata, for the purpose of verifying the content’s authenticity, origin, or history of modification.
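The bill doesn’t prescribe a specific format, and real deployments would likely use the C2PA standard’s signed, embedded manifests. As a rough illustration of the underlying idea only, here is a minimal Python sketch (the record fields and function names are invented for this example, not drawn from the bill or from C2PA) pairing media bytes with a record a consumer could check:

```python
import hashlib
import json

# Illustrative only -- NOT the C2PA manifest format. A "provenance
# record" here is a plain dict tied to the content by a SHA-256 hash.

def make_provenance(content: bytes, generator: str, action: str) -> dict:
    """Build a simplified provenance record for a piece of media."""
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,  # e.g. the Gen AI system's trade name
        "action": action,        # "created" or "materially altered"
    }

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that the record matches the content it claims to describe."""
    return record.get("content_sha256") == hashlib.sha256(content).hexdigest()

if __name__ == "__main__":
    media = b"...synthetic image bytes..."
    record = make_provenance(media, generator="ExampleGen-1", action="created")
    print(json.dumps(record, indent=2))
    print(verify_provenance(media, record))         # content unmodified
    print(verify_provenance(media + b"x", record))  # content was altered
```

Note the key difference from the real standard: C2PA manifests are cryptographically signed by the generator, so consumers can verify who made the claim, not just that the content is unchanged.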

Frontier AI whistleblower protections

This section of the bill covers frontier developers of foundation models, with annual gross revenues in excess of $500 million in the most recent calendar year.

Under SB 5, frontier developers are prohibited from penalizing employees for reporting on catastrophic risks, which include risks of mass casualties, over $1 billion in property damage, CBRN weapon assistance, cyberattacks, or loss of human control over a model.

Large frontier developers (>$500 million in annual gross revenue) must, no later than Jan. 1, 2027, establish anonymous internal channels for employees to report catastrophic risks.

They must provide monthly status updates to reporting employees, submit quarterly internal reports to officers and directors, and post notices to all employees regarding their whistleblower rights.

Subscription AI: consumer protections

The bill requires certain disclosures to customers who are offered subscriptions to AI products, including:

  • usage limits;

  • behavioral restrictions;

  • limitations imposed in response to consumer conduct;

  • new or modified restrictions in renewal periods.

Pilot program: IVOs (third-party verification)

Under SB 5, the Department of Consumer Protection will develop a pilot program to evaluate the use of third-party entities (independent verification organizations, or IVOs) to assess the adherence of AI models to standards reflecting best practices for the prevention of a variety of harms.

No more than five independent verification organizations will be allowed into the program, which will sunset in 2030.

Regulatory sandbox established

Under SB 5, the state Commissioner of Economic and Community Development shall establish an AI regulatory sandbox program, which will allow developers to test an innovative product or service on a limited basis under reduced licensure and regulatory requirements.

Regulating use of AI in hiring & employment

This section is effective Oct. 1, 2027. Enforcement solely by the Attorney General, with a 60-day right to cure.

These requirements apply to AI tools used in hiring, firing, promotion, discipline, and similar employment decisions.

Deployers of AI tools used in employment decisions must disclose to employees and applicants when they are interacting with an AI tool and/or automated decision process. That disclosure must include the purpose and trade name of the automated employment-related decision technology, as well as the categories and sources of personal data the technology will analyze.

After any adverse decision, the deployer must provide a plain-language explanation of the principal reasons for the decision, including the data used and its source. Applicants/employees must be given the opportunity to examine and correct personal data they themselves did not provide.

The use of AI and/or an automated employment-related decision technology shall not be a defense against a complaint alleging a prohibited discriminatory practice.

Required AI layoff notices

Employers who serve written notice to the Labor Dept. regarding layoffs are now required under SB 5 to disclose to the department whether the layoffs are related to the employer’s use of AI or other technological change. Effective Oct. 1, 2026.

Creation of a state AI advisor

SB 5 asks the executive director of the Connecticut Academy of Science and Engineering to designate an AI fellow to serve as a liaison to the Attorney General and the Dept. of Economic and Community Development.

The liaison is asked to develop an AI technology transfer program, make recommendations around AI and risk, and create a plan for the state to provide computing services to businesses and researchers, among other duties.

Creation of a ‘Connecticut AI Academy’

SB 5 calls on the Board of Regents for Higher Education to establish a Connecticut AI Academy overseen by Charter Oak State College. Directed at students between 13 and 20 years old, the academy will offer online courses on AI, promote digital literacy, and prepare students for careers in fields involving AI.

The academy will also develop courses that prepare small businesses and nonprofits to utilize AI, and courses for workforce training.

Creation of a state AI working group

SB 5 creates a state AI working group, with initial appointments made no later than July 31, 2026. The group, consisting of AI experts and stakeholders, will advise on best practices, help avoid negative impacts, accelerate the adoption of AI agents by small businesses, and properly apportion liability for AI agent actions taken on behalf of small businesses.

The group is also asked to develop proposals to create a technology court to adjudicate AI, data privacy, and other tech-related issues.
