Guide to ‘America’s AI Action Plan’ released by the Trump Administration

President Trump released the White House’s ‘AI Action Plan,’ a document that lays out his Administration’s guidance for AI development and governance. (Image: Getty Images for Unsplash+)

July 23, 2025 — President Trump this morning released “America’s AI Action Plan,” a comprehensive initiative that outlines measures the White House intends to take regarding the development and governance of artificial intelligence.

The New York Times reported that the president is scheduled to deliver his first major speech on AI on Wednesday afternoon. The president is also expected to sign new executive orders related to AI.

This Guide is intended to offer an accurate, concise, nonpartisan overview of the Trump Administration’s AI Action Plan.

Three areas of focus

The AI Action Plan is structured around innovation, infrastructure, and international diplomacy.

  • Innovation focuses on accelerating AI development and removing regulatory barriers.

  • Infrastructure emphasizes building AI capabilities and energy resources.

  • International diplomacy aims to establish American technology as the global standard.

Read the full report

Trump Administration AI Plan


Actions affecting AI regulation

The Action Plan includes elements of the AI moratorium that was stripped out of the budget bill that passed in Congress earlier this month.

The Plan says:

To maintain global leadership in AI, America’s private sector must be unencumbered by bureaucratic red tape. President Trump has already taken multiple steps toward this goal, including rescinding Biden Executive Order 14110 on AI that foreshadowed an onerous regulatory regime. AI is far too important to smother in bureaucracy at this early stage, whether at the state or Federal level.

The Federal government should not allow AI-related Federal funding to be directed toward states with burdensome AI regulations that waste these funds, but should also not interfere with states’ rights to pass prudent laws that are not unduly restrictive to innovation. 

Recommended policy actions include:

  • Launch a Request for Information from businesses and the public at large about current Federal regulations that hinder AI innovation and adoption, and work with relevant Federal agencies to take appropriate action. 

  • Work with Federal agencies to identify, revise, or repeal regulations, rules, memoranda, administrative orders, guidance documents, policy statements, and interagency agreements that unnecessarily hinder AI development or deployment.

  • Work with Federal agencies that have AI-related discretionary funding programs to ensure, consistent with applicable law, that they consider a state’s AI regulatory climate when making funding decisions and limit funding if the state’s AI regulatory regimes may hinder the effectiveness of that funding or award. 

  • Evaluate whether state AI regulations interfere with the Federal Communications Commission's (FCC) ability to carry out its obligations and authorities under the Communications Act of 1934.

  • Review all Federal Trade Commission (FTC) investigations commenced under the previous administration to ensure that they do not advance theories of liability that unduly burden AI innovation. Furthermore, review all FTC final orders, consent decrees, and injunctions, and, where appropriate, seek to modify or set aside any that unduly burden AI innovation.

Encouraging open-source, open-weight AI development

The Action Plan notes that open-source and open-weight AI models, made freely available by developers for anyone in the world to download and modify, have unique value for innovation because startups can use them flexibly without being dependent on a closed model provider. Therefore, the Plan states, the Federal government should create a supportive environment for open models. 

Recommended policy actions include:

  • Ensure access to large-scale computing power for startups and academics by improving the financial market for compute. Currently, a company seeking to use large-scale compute must often sign long-term contracts with hyperscalers—far beyond the budgetary reach of most academics and many startups. America has solved this problem before with other goods through financial markets, such as spot and forward markets for commodities.

    Through collaboration with industry, NIST at DOC, OSTP, and the National Science Foundation’s (NSF) National AI Research Resource (NAIRR) pilot, the Federal government can accelerate the maturation of a healthy financial market for compute.

  • Partner with leading technology companies to increase the research community’s access to world-class private sector computing, models, data, and software resources as part of the NAIRR pilot. 

  • Continue to foster the next generation of AI breakthroughs by publishing a new National AI Research and Development (R&D) Strategic Plan to guide Federal AI research investments. 

Federal encouragement of AI adoption

The Action Plan asserts that “many of America’s most critical sectors, such as healthcare, are especially slow to adopt due to a variety of factors, including distrust or lack of understanding of the technology, a complex regulatory landscape, and a lack of clear governance and risk mitigation standards. A coordinated Federal effort would be beneficial in establishing a dynamic, ‘try-first’ culture for AI across American industry.”

Recommended policy actions include:

  • Establish regulatory sandboxes or AI Centers of Excellence around the country where researchers, startups, and established enterprises can rapidly deploy and test AI tools while committing to open sharing of data and results.

  • Launch several domain-specific efforts (e.g., in healthcare, energy, and agriculture), led by NIST at DOC, to convene a broad range of public, private, and academic stakeholders to accelerate the development and adoption of national standards for AI systems.

  • Regularly update joint Department of Defense (DOD) - Intelligence Community (IC) assessments of the comparative level of adoption of AI tools by the United States, its competitors, and its adversaries’ national security establishments.

  • Prioritize, collect, and distribute intelligence on foreign frontier AI projects that may have national security implications.

Free speech issues

The Action Plan states:

AI systems will play a profound role in how we educate our children, do our jobs, and consume media. It is essential that these systems be built from the ground up with freedom of speech and expression in mind, and that U.S. government policy does not interfere with that objective. We must ensure that free speech flourishes in the era of AI and that AI procured by the Federal government objectively reflects truth rather than social engineering agendas. 

Recommended policy actions include:

  • Led by the Department of Commerce (DOC) through the National Institute of Standards and Technology (NIST), revise the NIST AI Risk Management Framework to eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change.  

  • Update Federal procurement guidelines to ensure that the government only contracts with frontier large language model (LLM) developers who ensure that their systems are objective and free from top-down ideological bias. 

  • Led by DOC through NIST’s Center for AI Standards and Innovation (CAISI), conduct research and, as appropriate, publish evaluations of frontier models from the People’s Republic of China for alignment with Chinese Communist Party talking points and censorship. 

Combat synthetic media in the legal system

The Action Plan specifically targets malicious deepfakes in the legal system.

The Plan states:

While President Trump has already signed the TAKE IT DOWN Act, which was championed by First Lady Melania Trump and intended to protect against sexually explicit, non-consensual deepfakes, additional action is needed. In particular, AI-generated media may present novel challenges to the legal system. For example, fake evidence could be used to attempt to deny justice to both plaintiffs and defendants. The Administration must give the courts and law enforcement the tools they need to overcome these new challenges. 

Recommended policy actions include:

  • Consider developing NIST’s Guardians of Forensic Evidence deepfake evaluation program into a formal guideline and a companion voluntary forensic benchmark. 

  • Led by the Department of Justice (DOJ), issue guidance to agencies that engage in adjudications to explore adopting a deepfake standard similar to the proposed Federal Rules of Evidence Rule 901(c) under consideration by the Advisory Committee on Evidence Rules. 

  • Led by DOJ’s Office of Legal Policy, file formal comments on any proposed deepfake-related additions to the Federal Rules of Evidence. 

We will continue to update this post with more information throughout the day.
