The White House has announced that several major AI vendors, including OpenAI and Microsoft, have committed to taking steps to combat nonconsensual deepfakes and child sexual abuse material.
Adobe, Cohere, Microsoft, OpenAI and data provider Common Crawl said that they'll "responsibly" source the datasets they create and use to train AI, and safeguard those datasets against image-based sexual abuse. These organizations, minus Common Crawl, also said that they'll incorporate "feedback loops" and strategies into their development processes to guard against AI generating sexual abuse imagery. And Adobe, Microsoft and OpenAI (but not Cohere) said they'll commit to removing nude images from AI training datasets "when appropriate and depending on the purpose of the model."
The commitments are voluntary, it's worth noting. Many AI vendors, including Anthropic and Midjourney, opted not to participate. And OpenAI's pledges in particular are suspect, given that CEO Sam Altman said in May that the company would explore how to "responsibly" generate AI porn.
Still, the White House touted them as a win in its broader effort to identify and reduce the harm of deepfake nudes.
Source: https://techcrunch.com/2024/09/12/white-house-extracts-voluntary-commitments-from-ai-vendors-to-combat-deepfake-nudes/
Author: Kyle Wiggers