Meta Targets AI 'Nudify' Apps in New Crackdown Effort
- The New York Editorial Desk - Arif
- Jun 12
- 2 min read

Meta Sues Nudify App Maker For Violating Ad Rules
Meta has filed a lawsuit against Joy Timeline HK Limited, a Hong Kong-based company behind AI-powered “nudify” applications such as Crush AI. These apps digitally remove clothing from images, creating non-consensual explicit content.
According to Meta, the company behind Crush AI placed thousands of ads on Facebook and Instagram, many of which evaded the platform’s content review system. Meta stated the app developer repeatedly tried to bypass ad policies after prior removals for violations.
“This follows multiple attempts by Joy Timeline HK Limited to circumvent Meta’s ad review process,” Meta wrote in an official blog post.
Joy Timeline HK Limited has not yet responded to public requests for comment.
Background: Alarming Rise Of AI-Fueled Explicit Image Tools
Nudify apps have drawn increasing concern from researchers and journalists. A CBS News investigation found “hundreds” of ads promoting such apps across Meta platforms, including offers to remove clothes from images of celebrities.
Crush AI alone had reportedly run more than 8,000 ads since late 2024, according to Alexios Mantzarlis, Director of Cornell Tech’s Security, Trust and Safety Initiative. His research, first published in January, called out Meta for allowing such ads to run for months.
Meta Introduces New Detection Tools
In addition to the lawsuit, Meta has announced technical steps to prevent similar abuse of its platforms in the future. The company says it has deployed new AI tools to detect nudify ad content—even when the ads do not display nudity.
Key updates from Meta include:
- Improved ad detection: New matching technology identifies and quickly removes repeat or copycat ads.
- Expanded term lists: Meta updated its databases to flag more words, emojis, and phrases associated with nudify content.
- External collaboration: Meta says it is sharing information about rule-violating entities with other tech platforms, including app store operators.
Broader Issue: AI Manipulation In Advertising
The crackdown on nudify apps is part of a wider struggle by Meta to police AI-generated content on its platforms. Beyond non-consensual nudes, the company has faced criticism for failing to block deepfake videos of celebrities promoting scams.
Meta’s own Oversight Board recently faulted the company for under-enforcing its rules on manipulated media. It called for clearer policies and stronger enforcement to prevent abuse of public figures and protect users from deceptive content.
Meta Under Scrutiny Amid Ongoing Content Policy Gaps
The company's current actions follow a wave of criticism from both internal and external watchdogs. Experts say Meta’s moderation systems have repeatedly been too slow to respond to the spread of harmful AI content.
While the lawsuit and detection improvements mark progress, critics argue that Meta allowed thousands of violative ads to run before taking meaningful steps. Meta's shift comes as global tech regulators and advocacy groups call for stricter rules around AI-driven content generation and platform accountability.