Complete guide to the California Report on Frontier AI Policy
California Gov. Gavin Newsom’s office released a 53-page report on AI policy, which is expected to shape state AI policy in the coming months.
June 18, 2025 — California Gov. Gavin Newsom yesterday released The California Report on Frontier AI Policy, a 53-page overview of policy recommendations from the Joint California Policy Working Group on AI.
The group was convened by Gov. Newsom in late 2024 after he vetoed SB 1047, the controversial AI regulatory bill authored by Sen. Scott Wiener. An early draft of the report was released in March 2025.
A positive step forward
Steve Wimmer, Transparency Coalition Senior Technical and Policy Advisor, found the report to be a positive step forward in AI policy and governance.
“The California Working Group’s report acknowledges the urgent need for guardrails against 'irreversible harms,' which is a critical step towards responsible governance,” he said.
“The true test,” he added, “lies in translating their insights into concrete, transparent policies. That involves mandating public disclosure of AI interactions, training data provenance, and decision-making processes. We urge California to continue its role as a leader in establishing robust accountability and frameworks that ensure innovation doesn't outpace public safety and oversight.”
Report urges transparency, disclosure, sensible regulation
Among the report’s highlights:
The report persuasively makes the case for the importance of acting now on AI regulation, rather than waiting for problems to emerge (p. 4).
This is a clear statement, from the most advanced minds in AI research, that a strong regulatory framework is a foundation for, not an obstacle to, greater innovation (p. 19).
The Working Group emphasizes the importance of transparency as a core foundational element for effective AI policy (pp. 24-27).
AI disclosures—informing consumers that they are working with AI, or that AI tools are involved in their interaction in some way—are of the utmost importance. Consumers must also be made aware of any potential harms from working with the AI tool.
The Working Group recognizes the importance of protecting personal information and intellectual property (pp. 29-30).
The report recognizes the need to have companies self-report breaches and harm events (p. 31).
The report strongly recommends a role for third-party verification of self-assessments to strengthen confidence in the safety and reliability of AI (p. 27).
The report includes a recommendation to enact legislation that protects whistleblowers in the tech/AI space (p. 29). This is a critically important part of any enforcement mechanism.
The task force recommends requiring the reporting of adverse AI events (p. 31).
The report acknowledges that corporations with a vested interest in the proliferation of the technology cannot be trusted to regulate themselves.
Gov. Newsom: ‘safety of Californians’ top of mind
Gov. Newsom said in a statement accompanying the report:
“California is the home of innovation and technology that is driving the nation’s economic growth — including the emerging AI industry.
As Donald Trump chooses to take our nation back to the past by dismantling laws protecting public safety, California will continue to lead the way with smart and effective policymaking.
I thank the experts and academics who responded to my call for this important report to help ensure that, as we move forward to help nurture AI technology, we do so with the safety of Californians at the top of mind.”
what the report covers
The Working Group focused its work on governance of powerful foundation models, not all AI applications and tools. The report does not directly address labor, environmental, or data-center issues.
The report is meant to strike a balance: Encouraging innovation while proactively mitigating potentially severe or irreversible harms from AI misuse, malfunctions, or systemic impacts.
executive summary, summarized
As the epicenter of global AI innovation, California has a unique opportunity to continue supporting developments in frontier AI while addressing substantial risks that could have far-reaching consequences for the state and beyond.
The report leverages broad evidence—including empirical research, historical analysis, and modeling and simulations—to provide a framework for policymaking on the frontier of AI development.
Building on this approach, the report derives policy principles that can inform how California approaches the use, assessment, and governance of frontier AI—principles rooted in an ethos of “trust but verify.”
The report does not argue for or against any particular piece of legislation or regulation. Instead, it examines the best available research on foundation models and outlines policy principles grounded in this research that state officials could consider in crafting new laws and regulations that govern the development and deployment of frontier AI in California.
three major AI risk categories
The report acknowledges that frontier models (e.g., OpenAI’s o3, Anthropic’s Claude, Google’s Gemini) have rapidly advanced in reasoning and code-writing capabilities in just the past few years.
As they have advanced, new risks have emerged:
Malicious actors misusing foundation models to deliberately cause harm, such as:
simulated content such as non-consensual intimate imagery (NCII), child sexual abuse material (CSAM), and cloned voices used in financial scams;
manipulation of public opinion via disinformation;
cyberattacks;
CBRN (Chemical, Biological, Radiological, and Nuclear) attacks using AI tools.
Malfunction risks, in which non-malicious actors use foundation models as intended yet unintentionally cause harm. These include:
reliability issues where models may generate false content;
bias against certain groups or identities;
loss of control where models operate in harmful ways without the direct control of a human overseer.
Systemic risks associated with the widespread deployment of foundation models, including:
labor market disruption;
global AI R&D concentration;
market concentration;
single points of failure;
environmental risks;
privacy risks;
copyright infringement.
What has changed in a year: AI technical leaps & risks
The report notes that foundation model capabilities have rapidly improved in the 10 months since SB 1047 was vetoed.
Inference scaling: There have been substantial improvements in AI models’ ability to engage in multiple-step, chain-of-thought reasoning. This improvement comes from inference scaling, which means using more computing power during the actual operation of AI models, not just during training. Recent examples of this approach’s effectiveness include:
the strong benchmark performances of OpenAI’s o1 and o3 models and DeepSeek’s R1 model;
DeepSeek’s R1 model also demonstrates that inference scaling has resulted in greater cost-efficiency for AI systems: The amount of compute required to build a model with a given level of performance has declined.
Increased CBRN risk: Evidence has grown, even since the release of the report’s early draft in March 2025, that foundation models contribute both to chemical, biological, radiological, and nuclear (CBRN) weapons risks for novices and to loss-of-control concerns.
OpenAI’s April 2025 o3 and o4-mini System Card states: “As we wrote in our deep research system card, several of our biology evaluations indicate our models are on the cusp of being able to meaningfully help novices create known biological threats, which would cross our high risk threshold. We expect current trends of rapidly increasing capability to continue, and for models to cross this threshold in the near future.”
Alignment scheming is increasing: Recent AI models have shown increased evidence of alignment scheming (strategic deception in which models appear aligned during training but pursue different objectives when deployed) and of reward hacking, in which models exploit loopholes in their objectives to maximize rewards while subverting the intended purpose. This raises broader concerns about AI autonomy and control.
New evidence suggests that models can often detect when they are being evaluated, potentially introducing the risk that evaluations could underestimate harm new models could cause once deployed in the real world.
takeaways from past case studies
The Working Group examined three past case studies—internet regulation, tobacco regulation, and the regulation of energy with respect to the emerging awareness of climate change—and gleaned three main lessons from those experiences.
Early design choices create path dependency: The foundations of the internet demonstrate how initial technical decisions can persist despite emerging risks, emphasizing the need to anticipate interconnected sociotechnical systems.
The importance of early policy windows underscores the need to generate broad evidence to inform carefully scoped policies and to build safety into early design choices.
Transparency is critical for generating holistic evidence: Many consumer products deliver tremendous societal benefits. But in specific cases like tobacco, suppressing independent research limited consumer choice, undermined more robust state-level policy efforts to protect consumers, and resulted in expensive, unnecessary litigation against companies.
Trust expertise, but verify claims through third-party evaluation: The energy industry’s internal documentation of climate change risks highlights the importance of independent assessment to avoid conflicts of interest, and of incorporating modeling and simulation as part of a broad evidence base.
lessons learned from other industries
Well-calibrated policies can create a thriving entrepreneurial culture for consumer products: Technological applications have brought enormous benefits to consumers. Thoughtful policy can enhance innovation of these consumer products and promote widespread distribution of their benefits.
Transparency is necessary but insufficient for consumers to make informed decisions: The history of the tobacco industry reveals the importance of developing frameworks that promote transparency around companies’ internal risk assessments and research findings.
In the AI context, frontier AI labs possess the most holistic information about their models’ capabilities and risks. Making this information accessible to policymakers and external experts can promote policy informed by a holistic understanding of the state-of-the-art evidence produced by those closest to the technology, supporting informed oversight without stifling innovation.
Transparency alone is insufficient. As the tobacco industry’s history shows, companies can distort public understanding despite available evidence. Independent verification mechanisms are necessary to validate industry claims and ensure that evidence is accurately represented.
Lack of transparency on product safety can result in avoidable, costly litigation: The tobacco case illustrates that when novel technologies cause injuries and information about safety practices is opaque, litigation is a predictable consequence. In the tobacco case, litigation ultimately brought the requisite information to light, but it resulted in irreversible reputational damage to tobacco companies and devastating public health consequences.
examples of positive outcomes
History is rife with examples in which evidence-based policy struck an appropriate balance between promoting innovation and ensuring sufficient accountability and transparency, allowing industry to thrive while minimizing risks. Three examples:
Lessons From Pesticide Regulation: Balance of safety and innovation.
The regulatory framework governing pesticide use in the United States exemplifies a balance between safeguarding public health and supporting agricultural productivity.
Governing bodies in this space, such as the Environmental Protection Agency (EPA) and California’s Department of Toxic Substances Control, evaluate pesticide safety through comprehensive toxicity studies, considering factors such as human exposure, environmental impact, and long-term ecological effects. Pesticides are subject to reevaluation and evolving enforcement measures, ensuring that current scientific knowledge informs statutes and regulations.
The United States’ approach to these toxic but necessary substances explicitly accounts for the economic imperatives of the agricultural sector, a cornerstone of national and global food security.
Lessons From Building Codes: Clear standards are important.
Building codes are constantly evolving to incorporate advances in engineering, green building principles, seismology, and fire resilience. Buildings have standardized stair dimensions: a maximum riser height of 7.5 inches and a minimum tread depth of 10 inches. Our brains are conditioned to expect a certain footfall when climbing household steps. Similarly, legal codes prevent an unreinforced masonry building from being constructed atop a fault line, or a house from being built without circuit breakers.
Despite these rigorous and extremely prescriptive building standards, homes get built every day. The building code—through significant trial and error—has achieved balance between safety and economic output.
Lessons from seat belts: A false safety-innovation binary.
In the past four decades, seat belts have saved over 8,900 lives per year. This is a technology that demonstrably works. But the path to a national requirement was neither fast nor straight.
In 1959, Volvo introduced the first modern, three-point seat belt. It took nearly a decade of fact-finding and debate—and overcoming opposition from the auto industry—for the U.S. government to adopt a mandate in 1968 that all new cars include these safety devices.
Today, seat belts are a ubiquitous part of daily life. It took decades for this law to go from frontier concept to common acceptance.
The pace of change in AI is many multiples of that in the auto industry—while a decades-long debate about seat belts may have been acceptable, society has only a fraction of that time to achieve regulatory clarity on AI.
Lesson learned: tech thrives in a regulated environment
The report authors write: Technology can thrive under a regulated environment as long as policy creates incentives that align company behavior with economic opportunity and public safety, is specific and prescriptive for a particular reason, and does not impair the use of the technology for its stated beneficial purpose.
The same can certainly hold true for AI policy in a subnational context such as California.
experts in the working group
The Working Group was divided into these cohorts:
Co-leads
Jennifer Tour Chayes, Dean of the College of Computing, Data Science, and Society at UC Berkeley
Mariano-Florentino Cuéllar, President of the Carnegie Endowment for International Peace
Fei-Fei Li, Professor at Stanford University and Founding Co-Director of the Stanford Institute for Human-Centered AI (HAI)
Lead Writers
Rishi Bommasani, Society Lead at the Stanford Center for Research on Foundation Models (CRFM)
Scott R. Singer, Carnegie Endowment for International Peace
Senior Advisors
Daniel E. Ho, Stanford University Law School
Percy Liang, Director of Stanford University’s Center for Research on Foundation Models (CRFM)
Dawn Song, University of California, Berkeley
Joseph E. Gonzalez, University of California, Berkeley
Jonathan Zittrain, Carnegie Endowment for International Peace and Harvard University