Today, tech giants such as Google, Meta, OpenAI, Microsoft, and Amazon pledged to review their AI training data for child sexual abuse material (CSAM) and keep it out of future models. The companies have endorsed a new set of principles aimed at curbing the spread of CSAM: they commit to ensuring that training datasets are free of CSAM, avoiding datasets with a high risk of containing such material, and removing CSAM imagery or links to it from data sources. They also vow to stress-test their AI models to confirm they do not generate CSAM imagery, and to release only models that have been evaluated for child safety.
Additional signatories to the principles include Anthropic, Civitai, Metaphysic, Mistral AI, and Stability AI. The rise of generative AI has heightened concerns about the proliferation of deepfake images online, including AI-generated CSAM. A December report from Stanford researchers revealed that LAION-5B, a popular dataset used to train AI image generators, contained links to CSAM imagery. Researchers also found that the tip line run by the National Center for Missing and Exploited Children (NCMEC), already struggling to handle the volume of reported CSAM, is increasingly inundated with AI-generated CSAM images.
The principles were developed by the non-profit organization Thorn in collaboration with All Tech Is Human. The groups warn that AI image generation can hinder efforts to identify victims, fuel demand for CSAM, open new avenues for victimizing and re-victimizing children, and make it easier to disseminate problematic material.
Google said that in addition to abiding by the principles, it has increased its ad grants to NCMEC to support the organization's initiatives. Susan Jasper, Google's vice president of trust and safety solutions, wrote in a blog post that backing these campaigns raises public awareness and gives people tools to identify and report abuse.