Coalition Takes Action Against AI-Generated Child Abuse Material


The Fight Against Child Sexual Abuse Material

To combat the proliferation of child sexual abuse material (CSAM), a coalition of leading generative AI developers, including Google, Meta, and OpenAI, has committed to implementing safeguards around the emerging technology. The coalition was assembled through the efforts of two non-profit organizations: Thorn, a children's tech group, and New York-based All Tech is Human. Thorn, initially known as the DNA Foundation, was established in 2012 by actors Demi Moore and Ashton Kutcher.

A New Approach to AI Development

The joint initiative was unveiled on Tuesday alongside a new Thorn report advocating a "Safety by Design" approach to generative AI development, one that aims to prevent the creation of CSAM throughout the entire life cycle of an AI model. Thorn urged companies that develop, deploy, and use generative AI technologies to adopt these Safety by Design principles, demonstrating their commitment to preventing the spread of CSAM, AIG-CSAM, and other forms of child sexual abuse and exploitation.

AIG-CSAM, as defined in the report, refers to AI-generated CSAM, which the document says is relatively straightforward to create.

The Role of Thorn in Child Safety

Thorn develops tools and resources dedicated to shielding children from sexual abuse and exploitation. In its 2022 impact report, the organization disclosed the discovery of 824,466 files containing child abuse material, and it reported that more than 104 million files of suspected CSAM were identified in the U.S. alone last year.

Amid the rising prevalence of deepfake child pornography online as generative AI models have become widely accessible, Thorn highlighted how the technology makes it easy to generate vast volumes of content. According to Thorn, a single perpetrator could produce substantial amounts of CSAM by repurposing original images and videos into new material.

Thorn's report elaborates on a set of principles that generative AI developers should follow to prevent their technology from being misused to produce child pornography. These include responsibly sourcing training datasets, integrating feedback loops and stress-testing mechanisms, incorporating content history or "provenance" with adversarial misuse in mind, and hosting AI models responsibly.

Other signatories of the commitment include Microsoft, Anthropic, Mistral AI, Amazon, Stability AI, Civit AI, and Metaphysic, each of which issued separate statements affirming its dedication to child safety in generative AI technology.

Industry Voices on Responsible AI Development

In a statement, Metaphysic’s chief marketing officer Alejandro Lopez underscored the company’s commitment to responsible AI development, particularly in protecting vulnerable members of society, such as children, from the exploitation of AI technology for nefarious purposes.

OpenAI expressed its support for the initiative through a statement from its child safety lead, Chelsea Carlson. Carlson emphasized that OpenAI prioritizes safety and responsible use in tools such as ChatGPT and DALL-E, and affirmed the company's collaboration with Thorn, All Tech is Human, and the broader tech community to uphold the Safety by Design framework and mitigate potential risks to children.

Other members of the coalition did not immediately respond to Decrypt's requests for comment.

Meta and Google highlighted their ongoing efforts to proactively detect and remove child sexual abuse and exploitation (CSAE) material, including AI-generated CSAM, through a combination of technological solutions and human review, reaffirming their commitment to combating child exploitation online.

As the Internet Watch Foundation warned in October, AI-generated child abuse material has the potential to inundate the internet, underscoring the urgency of collective action to safeguard children from digital harm.


About Post Author

Chris Jones
