Google AI Flags Parent’s Account Over Potential Abuse Of Child

A concerned father says that after he used his Android smartphone to take photos of an infection on his toddler’s groin, Google flagged the images as child sexual abuse material (CSAM), according to a report from The New York Times. The company closed his accounts and filed a report with the National Center for Missing and Exploited Children (NCMEC), which prompted a police inquiry. The case underscores the complexity of distinguishing potential abuse from an innocuous photo once it becomes part of a user’s digital library, whether on a personal device or in cloud storage.

In 2021, Apple announced its Child Safety plan, which attempted to draw a line between what should and should not be treated as private. Under the plan, Apple would locally scan images on Apple devices before they were uploaded to iCloud and match them against the NCMEC’s hashed database of known CSAM. If enough matches were found, a human moderator would review the content and lock the user’s account if it contained CSAM.
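The flow Apple described can be sketched roughly in Python. The hash function, database contents, and threshold below are illustrative stand-ins only: Apple’s actual system used a proprietary perceptual hash called NeuralHash, not a cryptographic one.

```python
import hashlib

# Illustrative stand-in: perceptual hashes like NeuralHash tolerate
# resizing and re-encoding, whereas SHA-256 only matches byte-identical
# files. Treat this purely as a sketch of the matching flow.
def image_hash(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

# Hypothetical hashed database of known CSAM (placeholder values only).
KNOWN_HASHES = {image_hash(b"known-image-1"), image_hash(b"known-image-2")}

MATCH_THRESHOLD = 2  # escalate only after enough independent matches

def needs_human_review(library: list[bytes]) -> bool:
    """True if enough library images match the database to warrant review."""
    matches = sum(1 for img in library if image_hash(img) in KNOWN_HASHES)
    return matches >= MATCH_THRESHOLD
```

The threshold is the key design choice: a single coincidental match would not flag an account, and only a library that crosses the threshold would reach a human moderator.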

The Electronic Frontier Foundation (EFF), a nonprofit digital rights group, denounced Apple’s plan, saying it could “open a backdoor to your private life” and that it represented “a sharp fall in privacy for all iCloud Photos users, not an enhancement.”

The incident took place in February 2021, when some doctor’s offices were still closed owing to the Covid-19 pandemic.

According to the report, Mark (whose last name was not disclosed) observed swelling in his child’s genital area and, at a nurse’s request, sent images of the issue ahead of a video consultation. The doctor ended up prescribing antibiotics that cured the infection.

Mark got a text notification from Google two days after taking the photos, saying that his accounts had been locked due to “harmful content” that was “a severe infringement of Google’s policies and might be illegal.”

Like many internet firms, including Facebook, Twitter, and Reddit, Google uses hash matching with Microsoft’s PhotoDNA to scan uploaded images for matches with known CSAM.
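Unlike a cryptographic hash, PhotoDNA is a perceptual hash: visually similar images produce similar signatures, so a match is a small distance between hashes rather than exact equality. As a toy illustration only (PhotoDNA itself is proprietary and far more robust), an average-hash over a small grayscale grid works like this:

```python
# Toy perceptual hash (an "average hash") over an 8x8 grayscale grid.
# This only shows why perceptual matching survives small edits that
# would break exact, cryptographic hashing.
def average_hash(pixels: list[int]) -> int:
    """Each pixel contributes one bit: 1 if at or above the mean, else 0."""
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= avg else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits; small distances indicate near-duplicates."""
    return bin(a ^ b).count("1")
```

A re-encoded or slightly brightened copy of an image shifts few pixels across the mean, so its hash stays within a small Hamming distance of the original, while an unrelated image does not.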

In 2012, this scanning led to the arrest of a registered sex offender who had used Gmail to send images of a young girl.

By Awanish Kumar

I keep abreast of the latest technological developments to bring you unfiltered information about gadgets.
