NEW YORK, Jan 29 (ABC): As low-quality artificial intelligence content spreads rapidly online, major platforms are introducing new tools to help users limit what many now call “AI slop.”
Easy-to-use AI tools now allow anyone to create realistic images and videos using only short text prompts. As a result, social media feeds are increasingly filled with synthetic content.
Images of animals doing human tasks, fake celebrity scenes, and cartoon characters promoting products have become common across platforms.
Concerns grow over quality and authenticity
YouTube chief executive Neal Mohan said the spread of AI-generated material has raised concerns about low-quality, mass-produced content.
Many users share that concern. Yves, a Swiss engineer, described AI slop as cheap and bland, saying it feels repetitive and lacks purpose. Similar views appear widely on online forums.
Meanwhile, some brands have turned frustration into marketing. Companies such as Equinox gyms and Almond Breeze have promoted themselves as real and human alternatives to synthetic content.
Tech leaders defend AI creativity
Not everyone agrees that AI-generated content lacks value.
Microsoft chief executive Satya Nadella has urged people to focus less on labels and more on how AI can support creativity and productivity.
Some creators also defend the technology. Bob Doyle, a YouTube creator who works with AI tools, said criticism of AI slop often dismisses early creative ideas.
He added that what looks useless to one person may be the starting point for someone else.
Platforms respond with user controls
Still, online platforms are responding to user demand for more control.
Pinterest introduced a filter late last year that allows users to limit AI-generated images. The company said users asked to see fewer synthetic visuals.
TikTok rolled out a similar option, allowing users to reduce how often AI-created videos appear in their feeds.
YouTube, along with Instagram and Facebook, offers ways to lower exposure to synthetic imagery. However, these platforms do not provide a single filter dedicated only to AI content.
Earlier efforts focused on labels to warn viewers about AI-made videos. However, much synthetic content still appears without clear markings.
Smaller platforms take tougher action
Some smaller platforms have adopted stricter rules.
Streaming service Coda Music lets users report AI-generated tracks. Once a report is confirmed, the platform labels the account as an AI artist.
Coda founder Randy Fusee said many users actively help identify such content. He added that most listeners prefer music made by humans.
Coda also allows users to block AI-generated music from playlists entirely.
Artists seek human connection
Cara, a social network for artists and designers, also limits AI-generated content. The platform uses both automated tools and human moderators.
Cara founder Jingna Zhang said users value intention and emotion in creative work. She added that people connect more easily with human-made art, even when it is imperfect.
Balancing innovation with user trust
As AI tools grow more powerful, platforms face rising pressure to balance innovation with user expectations.
While some creators embrace AI, many users want clearer choices and stronger filters. As a result, content controls are becoming a key feature of online platforms.