Enhancing Online Safety: Twitter’s Collaboration with Thorn
Social media giant Twitter recently unveiled a new system designed to combat the spread of child sexual abuse material (CSAM) on its platform. The technology, developed in collaboration with the non-profit organization Thorn, integrates into Twitter’s existing infrastructure.
AI-Powered Solution to Safeguard Vulnerable Users
The Twitter Safety team announced its participation in a beta test of Thorn’s AI-powered Safer solution, a tool engineered to proactively identify, remove, and report text-based child sexual exploitation, reinforcing Twitter’s commitment to online safety.
Through a longstanding partnership with Thorn, Twitter has intensified its efforts to provide a secure environment for its users. By deploying the Safer solution during its beta phase, the social media giant aimed to enhance its capacity to detect and combat child sexual exploitation, particularly focusing on content posing an imminent threat to minors.
According to Twitter Safety, the self-hosted solution integrates into the platform’s existing detection mechanisms, enabling a more targeted approach to identifying high-risk accounts. The move underscores Twitter’s dedication to combating online exploitation and safeguarding vulnerable users.
Thorn’s Ongoing Initiative to Protect Children
Founded in 2012 by actors Demi Moore and Ashton Kutcher, Thorn has been at the forefront of developing tools and resources dedicated to shielding children from sexual abuse and exploitation. In a recent development, tech giants Google, Meta, and OpenAI pledged their support to Thorn’s mission, agreeing to impose stringent safeguards around their AI models.
Rebecca Portnoff, Thorn’s VP of data science, highlighted the invaluable insights gained from the beta testing phase of the Safer AI model. Through this rigorous testing, the team observed the tangible impact of machine learning and AI in detecting and addressing child sexual abuse within textual content. Portnoff emphasized the model’s effectiveness in generating multi-label predictions for text sequences, offering a scalable solution to combat child safety violations online.
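The multi-label prediction pattern Portnoff describes can be illustrated in general terms: a classifier that can assign several labels at once to a single text sequence, rather than forcing one category per message. The sketch below is purely illustrative and is not Thorn’s Safer model; the labels, training texts, and scikit-learn pipeline are all assumptions chosen to show the technique on harmless placeholder data.

```python
# Illustrative multi-label text classifier (NOT Thorn's actual Safer
# model). OneVsRestClassifier trains one binary classifier per label,
# so a single text sequence can receive zero, one, or several labels
# at once -- the "multi-label prediction" pattern described above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Harmless placeholder data; real systems train on policy-specific labels.
texts = [
    "win a free prize now",
    "meeting moved to 3pm",
    "free prize, reply urgently",
    "urgent: server is down",
]
labels = [["spam"], [], ["spam", "urgent"], ["urgent"]]

# Encode the label sets as a binary indicator matrix.
mlb = MultiLabelBinarizer()
y = mlb.fit_transform(labels)

clf = make_pipeline(
    TfidfVectorizer(),
    OneVsRestClassifier(LogisticRegression()),
)
clf.fit(texts, y)

# Each prediction is a binary vector over all known labels.
pred = clf.predict(["free prize meeting urgent"])
print(mlb.inverse_transform(pred))
```

At moderation scale, the same shape of output lets a platform route one message to several review queues simultaneously, which is what makes multi-label prediction attractive for prioritizing reports.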
While Portnoff refrained from disclosing the specific social media platforms involved in the Safer suite beta test, she expressed optimism regarding the positive reception from industry partners. Several companies acknowledged the model’s utility in identifying and addressing harmful child sexual abuse activities, prioritizing reported messages, and supporting investigations of malevolent actors.
Challenges in the Age of AI-Powered Content Generation
The rise of generative AI tools, exemplified by ChatGPT’s debut in late 2022, has raised concerns among internet watchdog groups such as the UK-based Internet Watch Foundation. These organizations warn of a surge in AI-generated CSAM circulating on dark web platforms, posing a significant threat to online safety.
Amid these challenges, the European Union recently called on Twitter to explain reports of shrinking content moderation resources. Concerns that Elon Musk’s cost-cutting measures have reduced both the size of the platform’s moderation team and the number of languages it monitors underscore the need for robust measures to uphold online safety standards.
The EU’s formal proceedings against Twitter, opened in December 2023 under the Digital Services Act, reflect the gravity of ensuring effective risk management, content moderation, and adherence to regulatory directives. Twitter now faces stringent demands to address these concerns promptly and to strengthen its regulatory compliance.