A new investigation by the Tech Transparency Project has exposed Apple and Google for actively directing users to apps that create deepfake nude images. Despite policies banning nonconsensual sexualized content, both tech giants’ search and advertising systems promote these dangerous tools.
Tech Giants Enable Digital Abuse
The investigation reveals that Apple’s App Store and Google Play not only host dozens of so-called nudify apps but also use their own search algorithms and autocomplete features to steer users toward them. These apps use artificial intelligence to digitally remove clothing from photographs, create pornographic deepfakes, and transform real people into explicit chatbots without their consent. After Bloomberg News confronted the companies with these findings, Apple removed fifteen apps and Google deleted seven—a tacit admission of the problem.
Lawmakers Move to Stop the Threat
The exposure comes amid growing legislative action. Minnesota lawmakers are reportedly close to passing an outright ban on artificial intelligence nudification apps. In the United Kingdom, the Children’s Commissioner has demanded immediate prohibition of these tools, warning that they enable what she calls deepfake sexual abuse of children. The Tech Transparency Project first reported in January that both platforms hosted these apps, but this new investigation shows the problem goes deeper—the companies are not just passive hosts but active promoters.
Advertising Revenue Over Safety
The investigation found that Apple and Google displayed paid advertisements for nudify apps within their own search results, turning violations of their stated policies into revenue streams. Both companies have explicit rules prohibiting apps that create nonconsensual sexualized images, yet their advertising and search suggestion systems guided users directly to those very applications. The companies acted only after journalists exposed the practice, and even then removed just a fraction of the available apps. Critics argue this reactive approach shows the tech giants prioritize profits over protecting users, particularly women and children, who are overwhelmingly the targets of these deepfake tools.
