
Learn
This is a resource for legislators, parents, policy makers, journalists, thought leaders, and researchers.
Artificial intelligence can be confusing. We aim to provide clarity and understanding.
These modules explain fundamental concepts in artificial intelligence and AI governance in accurate and non-technical language. New articles will be added as the technology and language of AI evolve—and they’re evolving quickly.
Topics
AI 101
Your startup guide to understanding artificial intelligence.
Companion Chatbots
What they are, how they work, where the risks and dangers lie.
AI Safeguards
Exploring the foundations of AI safeguards and mitigation.
The AI Developer’s Duty of Care
Learn about duty of care, product liability, and how these concepts apply to artificial intelligence products.
Training Data Transparency
Learn about the foundational ingredients of AI models, and why and how they should be disclosed.
Disclosing AI Use
Understand the importance of AI disclosure laws, and how content provenance makes disclosure possible.
TCAI Bill Tracker
Discover what’s happening in your state.
TCAI Research: Further Resources
TCAI guides to AI lawsuits, state data privacy laws, and more.
Complete Resource Library
Companion Chatbots 101
Companion chatbots are digital characters created by powerful AI systems, designed to respond to a consumer in a conversational, lifelike fashion.
Growing Niche: Romantic Companion Bots
One of the fastest growing sectors within the companion chatbot industry is the romantic companion chatbot. These are also called intimate chatbots.
The Risk: ‘Unsafe Products’ for Kids and Teens
A recent assessment of companion chatbots, including products offered by CharacterAI and Replika, concluded that the products present a real risk of harm to children and teenagers.
What is a ‘duty of care’ and how does it apply to artificial intelligence?
AI developers and deployers have a duty of care no different from that of any other product manufacturer. Here’s what that means.
TCAI Guide to Search Tools: Was Your Data Used to Train an AI Model?
Search engines have emerged recently that allow individuals to check specific types of content—books and images—for use as AI training data.
We link to the search tools, and include tips on preventing your data from being used to train AI models.
AI Safeguards: Where to Start
At the Transparency Coalition we believe AI policy discussion and legislative action happen at many levels simultaneously. Our mission is to address known AI safety and privacy risks with practical solutions. We’re focused on bringing transparency to both AI inputs and AI outputs.
Input Safeguards: Require Transparency in AI Training Data
Transparency in AI training data is the foundation of ethical AI.
State legislatures should consider measures that require developers of AI systems and services to publicly disclose specified information related to the datasets used to train their products.
TCAI Guide to AI Lawsuits
The hailstorm of AI-related lawsuits over the past year can make the litigation space feel chaotic and confusing. In fact, the lawsuits can be roughly sorted into two buckets: copyright infringement and harmful AI-driven outcomes.
This TCAI curated guide offers a clear and concise overview of today’s AI legal battlefield.
TCAI Guide to State Data Privacy Laws
The United States has no national data privacy law.
In the absence of a national regulatory mechanism, many individual states have adopted their own digital privacy laws to protect their citizens from the misuse of personal data.
We’ve gathered information on individual state laws, as well as national and local bills, in this TCAI guide.
Understanding Synthetic Data
In today’s AI ecosystem there are two general types of training data: organic and synthetic.
Organic data describes information generated by actual humans, whether that’s a piece of writing, a numerical dataset, a song, an image, or a video. Synthetic data is created by generative AI models using organic data as a base material.
Synthetic Data and AI ‘Model Collapse’
Just as a photocopy of a photocopy can drift away from the original, when generative AI is trained on its own synthetic data, its output can also drift away from reality, growing further apart from the organic data that it was intended to imitate.
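The photocopy analogy can be sketched as a toy simulation: repeatedly fit a very simple statistical "model" (a normal distribution) to its own synthetic output, generation after generation, and track how the distribution drifts away from the original organic data. Everything below (the seed, sample sizes, and number of generations) is an illustrative assumption, not a description of any real AI system.

```python
import random
import statistics

def train_and_generate(data, n_samples, rng):
    """'Train' a toy model by fitting a normal distribution to the data,
    then 'generate' synthetic samples from that fitted model."""
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    return [rng.gauss(mu, sigma) for _ in range(n_samples)]

rng = random.Random(42)

# Organic data: stands in for varied, human-generated content.
organic = [rng.gauss(0.0, 1.0) for _ in range(10)]

# Each generation trains only on the previous generation's synthetic output.
data = organic
spreads = []
for generation in range(50):
    data = train_and_generate(data, n_samples=10, rng=rng)
    spreads.append(statistics.stdev(data))

# With small samples, the fitted distribution typically drifts and its
# spread tends to decay over generations -- a toy analogue of model collapse.
print(f"organic spread:  {statistics.stdev(organic):.3f}")
print(f"generation 50:   {spreads[-1]:.3f}")
```

Each re-fit introduces sampling error that the next generation inherits, which is why the simulated distribution wanders away from the organic original rather than faithfully reproducing it.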
Transparency and Synthetic Data
The use of synthetic data isn’t inherently good or bad. In medical research, for example, it’s a critically important tool that allows scientists to make new discoveries while protecting the privacy of individual patients.
At the Transparency Coalition, we are not calling for limitations on the creation or use of synthetic data. What’s needed is disclosure: Developers should be transparent in their use of synthetic data when using it to train an AI model.
Training Data: What the Machine Learns
Training data is the foundation of artificial intelligence. It’s what AI systems like ChatGPT use to provide answers to the prompts we provide. It’s what generative image systems like Midjourney and DALL-E use to conjure AI-created art.
Why and How to Disclose the Use of AI
With the emergence of generative AI, it now takes just a few button-clicks for anyone to create or manipulate data and convince others that fake content is real.
Why Training Data Is Not a Trade Secret
Emerging Standards in Disclosure
Today most media and tech companies are coalescing around the standard created by the Coalition for Content Provenance and Authenticity (C2PA).
How to Format an AI Training Data Declaration
Developers of AI systems should be required to provide documentation for all training data used in the development of an AI model.
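As a purely hypothetical illustration, such documentation might take the form of a machine-readable declaration. Every field name and value below is an assumption made for illustration, not a schema drawn from any existing law or standard.

```python
import json

# A hypothetical training-data declaration, expressed as a Python dict.
# All field names and values are illustrative assumptions, not a mandated schema.
declaration = {
    "model_name": "ExampleModel-1",        # hypothetical model
    "developer": "Example AI Labs",        # hypothetical developer
    "datasets": [
        {
            "name": "example-web-corpus",  # hypothetical dataset
            "source": "publicly crawled web pages",
            "license": "mixed; see per-document records",
            "collection_period": "2021-01 to 2023-06",
            "contains_personal_data": True,
            "synthetic": False,
        },
    ],
    "synthetic_data_used": False,
    "date_of_declaration": "2025-01-15",
}

# A JSON rendering makes the declaration easy to publish and audit.
print(json.dumps(declaration, indent=2))
```

Structuring the declaration as data rather than free-form prose would let regulators, researchers, and consumers compare disclosures across models.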
Legislating the Disclosure of AI Use
Legislative policies requiring the disclosure of AI use are developing alongside emerging standards in AI provenance. They’re not perfectly in sync, and that’s okay.
Data Privacy in the Age of AI
With the rise of artificial intelligence systems like ChatGPT and Copilot, data privacy has emerged as one of the most urgent consumer protection issues of the 2020s.