Apple, Google Caught Pushing Deepfake Apps


Apple and Google are under fire after a report said both companies were helping people find apps that can generate sexualized AI images from ordinary photos. The concern is not limited to adults being targeted: the report says children could encounter these apps as well.

According to 9to5Mac, citing findings the Tech Transparency Project published in January, both the Apple App Store and Google Play surfaced apps that create deepfake nude images of women. The report said the stores were also promoting some of these apps and even autocompleting search terms tied to them.

The issue centered on searches for terms like “nudify,” “undress,” and “deepnude.” The Tech Transparency Project said roughly 40 percent of the top 10 results for those searches were apps that could “render women nude or scantily clad.” These apps take an ordinary photo and a sexual image, then blend them into a fake picture that sexualizes the person in the original photo.

9to5Mac also said it reached out to one app developer, who replied that they “had no idea it was capable of producing such extreme content.”

Apple later told the outlet the apps were not allowed under its review rules, which ban sexual content. The company said it had removed 15 apps and warned that others would be taken down if they kept breaking the rules.

The backlash comes at a time when AI abuse is getting more attention. In January, California Democratic Gov. Gavin Newsom attacked Elon Musk’s xAI over a similar issue.

“xAI’s decision to create and host a breeding ground for predators to spread nonconsensual sexually explicit AI deepfakes, including images that digitally undress children, is vile,” he said.

“I am calling on the Attorney General to immediately investigate the company and hold xAI accountable.”

That criticism put a spotlight on a single company, but the latest report raises a broader question about the major app stores themselves. If these tools are this easy to find, the problem extends beyond any one platform: it is about how quickly harmful AI content can spread, and how little friction stands between it and anyone who goes looking.

For parents, that is the real warning sign. For Apple and Google, it is a reminder that app store rules mean little if dangerous apps keep slipping through.
