Google Suspends Parents' Accounts for Nude Photos of Children, YouTube Removes Tesla Autopilot Test Videos of Children

Several recent incidents demonstrate Google’s growing focus on child safety, and show that in some cases this focus can become excessive and harmful.

In one case, a concerned father took photos of his child’s groin infection on an Android smartphone, after which Google flagged the images as child sexual abuse material (CSAM). The company subsequently closed his accounts and filed a report with the National Center for Missing and Exploited Children (NCMEC), which led to a police investigation.

The incident took place in February 2021, when many doctors’ offices were still closed due to the COVID-19 pandemic. Mark (the father’s name has been changed) noticed swelling in his child’s genital area and, at a nurse’s request, sent photos of the affected area ahead of a video consultation with a doctor. Using the images, the doctor prescribed antibiotics that cleared up the infection.

But the photos created serious problems for the father. Two days after taking them, he received a notice from Google saying that his accounts had been suspended due to “harmful content” which was “a serious violation of Google policy and may be illegal”.


As a result, the father lost access to his emails, contacts, photos and even his phone number, since he used the Google Fi service. He immediately appealed Google’s decision, but the company denied the request. The San Francisco Police Department opened an investigation into Mark in December 2021 and obtained all the information he had stored with Google. The investigator on the case ultimately determined that the incident “did not meet the criteria for a crime, and there was no crime.”

This situation has once again highlighted the difficulty of distinguishing between potential child abuse and an innocent photograph in a user’s digital library, whether it be local storage on a device or cloud storage.

Like many internet companies, including Facebook, Twitter and Reddit, Google uses Microsoft’s PhotoDNA hash-matching technology to scan uploaded images for matches against known CSAM. In 2018, Google announced the launch of its own Content Safety API, an AI toolkit that can “proactively identify never-before-seen CSAM images so they can be reviewed and, if confirmed as CSAM, removed and reported as quickly as possible.” Google uses this toolkit in its own services, along with the CSAI Match video hash-matching solution developed by YouTube engineers, and makes both tools available to other companies.
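Neither PhotoDNA nor the Content Safety API is publicly documented in detail, but the underlying idea of hash matching can be sketched with the open-source imagehash library: compute a perceptual hash of an uploaded image and compare it against a database of hashes of known material. This is only a minimal sketch; the hash value, threshold and file name below are made up for illustration.

```python
# Illustrative sketch only: PhotoDNA and Google's Content Safety API are
# proprietary, so this uses the open-source "imagehash" library to show the
# general idea of matching a perceptual hash against known hashes.
# Requires: pip install pillow imagehash
from PIL import Image
import imagehash

# Hypothetical database of hashes of known flagged images (values made up).
KNOWN_HASHES = [imagehash.hex_to_hash("d1c4b2a1e0f3c4d5")]

# Hamming-distance threshold: re-encoded or slightly edited copies still match.
MAX_DISTANCE = 5


def matches_known_image(path: str) -> bool:
    """Return True if the image's perceptual hash is close to any known hash."""
    h = imagehash.phash(Image.open(path))
    return any(h - known <= MAX_DISTANCE for known in KNOWN_HASHES)


if __name__ == "__main__":
    # "uploaded_photo.jpg" is a placeholder file name for this example.
    print(matches_known_image("uploaded_photo.jpg"))
```

Classifier-based systems such as the Content Safety API go a step further, flagging previously unseen images rather than only matching hashes of known ones.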

A Google spokesperson said that Google only scans users’ private images when the user takes “affirmative action,” which apparently can include backing up their images to Google Photos. When Google flags potentially exploitative images, federal law requires the company to report the potential offender to NCMEC’s CyberTipLine. In 2021, Google reported 621,583 potential CSAM cases to the CyberTipLine, and NCMEC alerted authorities to 4,260 potential victims, a list that includes Mark’s son.

Concerns about blurring the line on what should be considered private began to emerge last year, after Apple announced its Child Safety plan, which involved scanning images locally on Apple devices and matching them against NCMEC’s hashed database of known CSAM. Following criticism of the plan, Apple put it on hold but began rolling out some of its elements in iOS 15.2.

While protecting children from abuse is undoubtedly important, critics argue that the practice of scanning users’ photos needlessly infringes on their privacy. Jon Callas, director of technology projects at the Electronic Frontier Foundation (a non-profit digital rights advocacy group), called Google’s practices “intrusive.” The organization has also previously criticized Apple’s initiative.

This is not the only recent case of Google acting in the name of children’s safety on its platforms. YouTube has removed a video of Tesla drivers conducting their own safety tests to determine whether the Full Self-Driving (FSD) feature would automatically stop the car when children are on the road. The video contains several test segments. In one, Tad Park drives a Tesla Model 3 toward one of his children standing in the road; in another, he drives down the road while his other child crosses it. In both cases, the car stopped automatically before reaching the children.

The video was removed for violating the platform’s rules. YouTube’s support page has specific policies on content that “threatens the emotional and physical well-being of minors,” including “dangerous stunts, challenges, or pranks.” YouTube spokesperson Ivy Choi said the video violated the service’s policy on harmful and dangerous content: the platform “does not allow content that shows minors engaging in dangerous activities or encourages minors to engage in dangerous activities.”

In the video, Park says he was in control of the car during the tests and could brake at any time, and that the vehicle never traveled faster than 8 miles per hour (about 13 km/h). The video was posted on both YouTube and Twitter; YouTube removed it (by August 18 it had collected more than 60,000 views), while on Twitter it remains available. Twitter has not yet responded to a request for comment on whether the video violates any of the platform’s rules.

In response to the video, the National Highway Traffic Safety Administration (NHTSA) issued a statement warning against using children to test automated driving technology.

The idea of testing the FSD system with real children came to the video’s author after ads and videos circulated on Twitter showing Tesla electric vehicles failing to detect child-sized dummies and colliding with them.

Note that the Tesla FSD software does not make the vehicle fully autonomous. When using this feature, drivers should still keep their hands on the wheel and be ready to take control at a moment’s notice.

Source: The Verge (1, 2)
