Brazilian Kids’ Photos Misused to Power AI


Unauthorized Use of Brazilian Children’s Photos in AI Tools Raises Privacy Concerns

Human Rights Watch (HRW) has issued a warning about the unauthorized use of photos of Brazilian children in AI tools, including popular image generators like Stable Diffusion. The organization’s report highlighted the urgent privacy risks faced by these children, as their images are being used without consent.

An HRW researcher, Hye Jung Han, discovered that photos of Brazilian children, sourced from personal and parenting blogs, were included in the LAION-5B dataset. The dataset, compiled by the German nonprofit LAION, contains image-text pairs drawn from billions of images and captions posted online since 2008.

Although LAION has removed the links to the children’s images, HRW’s report suggests that this action may not fully address the issue. There are concerns that the dataset still references personal photos of children from around the world, posing a significant privacy threat.

Risks of Non-Consensual AI-Generated Images

HRW’s analysis revealed that many of the Brazilian children were easily identifiable in the dataset because the image captions included their names and locations. This raises concerns about potential misuse of the images, particularly for generating non-consensual AI likenesses of the children.

At a time when students are increasingly vulnerable to online bullying and exploitation, AI tools could be used to generate explicit content from these images. HRW’s report described the captured childhood moments as intimate and potentially exploitable.


Efforts to Address the Issue

LAION has taken down all publicly available versions of the LAION-5B dataset as a precaution following the discovery of illegal content, including instances of child sexual abuse material. The organization is working with various entities to remove references to illegal content before republishing a revised dataset.

In Brazil, reports have emerged of girls facing harassment through AI-generated deepfakes created using their social media photos. HRW emphasized the long-lasting harm such content can cause and called for urgent government action to protect children’s data from AI misuse.

Stability AI, the maker of Stable Diffusion, has not yet commented on the issue. The implications of using children’s photos in AI tools without consent remain a pressing concern that requires further attention and regulatory action.

