Investigation Finds AI Image Generation Models Trained on Child Abuse

A Stanford Internet Observatory (SIO) investigation identified hundreds of known images of child sexual abuse material (CSAM) in an open dataset used to train popular AI text-to-image generation models, such as Stable Diffusion. Read the full article here.
