Google Partners with StopNCII to Combat Non-Consensual Images

Alphabet Inc.’s Google is set to collaborate with StopNCII, a nonprofit organization dedicated to preventing the spread of non-consensual intimate images online. The partnership represents a notable advance in Google’s efforts to address image-based abuse, a problem that has drawn increasing attention in recent years.

StopNCII’s technology enables victims of image-based abuse to create digital fingerprints, known as hashes, of intimate images. These hashes are shared with partner platforms such as Facebook, Instagram, Reddit, and OnlyFans, which use them to block re-uploads of the images without requiring victims to actively report each one. Google announced the partnership during the NCII summit held at its London office, where it hosted SWGfL, StopNCII’s parent charity.
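The key property of this design is that hashing happens on the victim’s device, so only the fingerprint, not the image itself, is ever shared. The sketch below illustrates the general approach rather than StopNCII’s actual pipeline (which, as noted later, builds on detection tools developed by Meta); it uses the open-source Python imagehash library’s perceptual hash as a stand-in, and the file paths and stored hash value are hypothetical.

    # Illustrative sketch only: StopNCII's real pipeline is not public in detail.
    # The open-source `imagehash` library is used here as a stand-in to show the
    # general idea: hash the image on the victim's device, share only the hash,
    # and let platforms compare new uploads against the stored hashes.
    from PIL import Image
    import imagehash

    def fingerprint(image_path: str) -> str:
        # The image itself never leaves the device; only this string is shared.
        return str(imagehash.phash(Image.open(image_path)))

    # Hypothetical set of hashes a partner platform has received via StopNCII.
    blocked_hashes = {"c3d1e2f4a5b60718"}

    def should_block(upload_path: str, max_distance: int = 8) -> bool:
        # Perceptual hashes of near-identical images differ by only a few bits,
        # so a small Hamming distance is treated as a match.
        upload_hash = imagehash.phash(Image.open(upload_path))
        return any(upload_hash - imagehash.hex_to_hash(h) <= max_distance
                   for h in blocked_hashes)

Because only hashes are exchanged, a platform can screen uploads without ever holding a copy of the original image.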

For victims, the implications are profound. David Wright, the chief executive officer of SWGfL, emphasized the importance of this initiative, stating, “Knowing that their content doesn’t appear in search — I can’t begin to articulate the impact that this has on individuals.” Although Google will not immediately be listed as an official partner of StopNCII, a spokesperson indicated that the company is currently testing the technology and anticipates implementing it within the next few months.

Delays and Criticism

Google’s slower pace in adopting this technology has faced criticism from advocates and lawmakers. StopNCII launched in late 2021, leveraging detection tools initially developed by Meta. Facebook and Instagram were among the first platforms to adopt these measures, with TikTok and Bumble joining in December 2022. Microsoft integrated the system into its Bing search engine in September 2023, significantly ahead of Google.

In April 2024, Google informed UK lawmakers that it had “policy and practical concerns about the interoperability of the database,” which has hindered its participation until now. Critics argue that the company, given its extensive resources, should take more proactive steps to eliminate non-consensual images without placing the burden on victims to create hashes.

Addressing AI-Generated Content

Some advocates believe that Google’s initiative, while a positive step, does not go far enough. Adam Dodge, founder of the advocacy group End Technology-Enabled Abuse, argued that the system still relies on victims to come forward: “It’s a step in the right direction. But I think this still puts a burden on victims to self-report.”

Notably absent from Google’s announcement was any mention of AI-generated non-consensual imagery, commonly referred to as deepfakes. The technology employed by StopNCII focuses on known images, which means it cannot preemptively block synthetic or entirely different images. Wright explained, “If it’s a synthetic or an entirely different image, the hash is not going to trap it.”
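The reason is visible in how hash matching works: a hash can only flag images that are near-duplicates of one already fingerprinted, so a freshly generated synthetic image has no stored hash to match against. Continuing the hypothetical sketch above (the file names are made up), the comparison might look like this:

    # Hypothetical continuation of the sketch above.
    from PIL import Image
    import imagehash

    stored = imagehash.phash(Image.open("known_image.jpg"))         # fingerprinted earlier
    re_upload = imagehash.phash(Image.open("known_image_copy.jpg")) # re-encoded copy
    synthetic = imagehash.phash(Image.open("ai_generated.jpg"))     # new, never-hashed image

    print(stored - re_upload)   # small Hamming distance: the re-upload is caught
    print(stored - synthetic)   # large distance: no stored hash can "trap" it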

In 2023, Bloomberg reported that Google Search was the leading traffic source for websites hosting deepfakes or sexually explicit AI-generated content. Since then, Google has moved to downrank such content and reduce its visibility in search results.

As Google moves forward with this partnership, it faces the challenge of addressing both the immediate needs of victims and the evolving landscape of digital abuse, including the rise of AI-generated imagery.
