TCAI Bill Guide: SB 1142, California’s Digital Dignity Act
California State Sen. Josh Becker (D-Menlo Park) testifying on behalf of SB 1142, which would extend digital deepfake protections to all individuals in California.
April 7, 2026 — A new bill designed to extend digital dignity protections to California’s 40 million residents was given its first hearing before the Senate Privacy, Digital Technologies, and Consumer Protection Committee yesterday.
The proposed Digital Dignity Act, SB 1142, would require an AI product, service, internet website, or application to implement and maintain a mechanism by which users can, at any time, revoke access to a digital replica of themselves created by other people using the large online platform’s generative AI tool.
Users would not have to obtain a court order prior to exercising their rights under this law.
Sponsored by Sen. Josh Becker, the author of 2024’s groundbreaking AI Transparency Act, SB 1142 would clarify that false impersonation includes the use of a digital replica with the intent to impersonate another for purposes of prescribed criminal provisions.
The Transparency Coalition’s 501(c)(4) arm, the Transparency Coalition Action Fund, is partnering with Sen. Becker in bringing this bill to the legislature.
The full and most recent version of the bill is available here: SB 1142.
The full committee staff analysis is available here.
The bill in brief
The Act would require an AI product, service, internet website, or application to implement and maintain a mechanism by which users can, at any time, revoke access to a digital replica of themselves created by other people using the large online platform’s generative AI tool.
What problem does the bill address?
Existing law protects performers, celebrities, and deceased individuals from the unauthorized use of their digital likeness. SB 1142 would extend those protections to all individuals within the state of California.
From the California legislative analysis:
Given the transformative capabilities of generative AI to produce realistic digital replicas, aka deepfakes, there is a need to fortify existing laws to protect Californians from the invasive and nonconsensual use of their own likeness.
Recent laws have created protections for digital replicas used post-mortem and those used pursuant to contracts for the performance of personal or professional services.
However, continuing concerns about the creation and distribution of nonconsensual digital replicas, including nonconsensual intimate imagery, highlight that existing laws may not adequately protect individuals from false impersonation resulting from the use of digital replicas created using AI.
What SB 1142 would require
The bill concerns the providers of generative AI (GenAI) tools that allow users to create digital replicas.
Under SB 1142, an operator would be required to create a mechanism for users to revoke access to digital replicas of themselves created using the platform’s tool. The bill would also require the platform’s terms of service to prohibit unlawful digital replicas.
Platforms would be required to establish a mechanism to report unlawful digital replicas and a process to respond to such reports. The bill also provides enhanced liability for those using, with actual knowledge, a digital replica that violates specified criminal laws or defamation law.
Bill Sponsor’s overview
Excerpts from Sen. Josh Becker’s testimony before the Senate Privacy, Digital Technologies, and Consumer Protection Committee on April 6, 2026:
“We’ve entered an era where anyone’s face, voice, or identity can be replicated and weaponized. This bill is about protecting Californians’ dignity in a digital world where their likeness can be weaponized against them.
The Digital Dignity Act establishes commonsense guardrails for large online platforms, gives consumers added control over their likeness, and deters bad actors who seek to defame or defraud Californians using their digital replica.
As AI becomes integrated with social media, some companies are reverting to a growth-at-all-costs mindset at the expense of the dignity, reputation, and privacy of Californians.
At the beginning of this year, Elon Musk’s social media company, X, and its accompanying AI system, Grok, were used to generate pornographic content of women and children using actual people’s likenesses. These product design choices were made without regard for the well-being of their users or the legality of the content being generated, which harmed vulnerable populations.
Without the protections in SB 1142, we risk allowing platforms to publish tools that allow for unprotected access to our likeness, violating both our privacy and dignity.”
Transparency Coalition testimony
Excerpts from the testimony of Jai Jaisimha, co-founder and COO of the Transparency Coalition:
“This bill isn't about technology. It's about the fundamental right of every Californian to control their own identity, their voice, their face, and their very likeness in an era where generative AI can steal and weaponize these attributes with a single click.
We're currently facing an epidemic of non-consensual digital likeness abuse. Bad actors are increasingly using AI to create digital replicas, highly realistic deepfakes used for harassment, extortion, and misinformation. A staggering number of deepfake victims have been women and girls who have been targeted with non-consensual intimate imagery.
Victims describe the experience as a digital haunting, where they must spend thousands of dollars on lawyers just to ask a platform to take down a fake image, only to be met with silence or slow-moving bureaucracy while the content goes viral. Under current law, they often cannot even pursue private legal action for the creation and use of their likeness in the synthetic format.
This bill closes many of these gaps by establishing a more balanced and enforceable framework. It mandates that large online platforms provide users a mechanism to revoke access to digital replicas of themselves. It provides clear penalties and holds platforms accountable with a forty-eight-hour takedown regime.”