AI slop is not AI, and it’s distracting you from the real problems.

Brandon Booth

12/24/2025 · 4 min read


Alright, let’s get the clickbait headline out of the way. AI “slop” is the flood of low-quality, obviously machine-generated content cluttering the internet, and, I admit, it’s a fun hobby to spot slop on the internet and make fun of it. But AI slop is not AI.

AI slop is actually AI being sloppily used by lazy humans. And your obsession with AI slop is distracting you from the real problems of generative AI.

Let me break it down for you.

AI “slop” is not actually AI. It’s just sloppily generated.

Despite your snarky comment on your mom’s slop Facebook post, you actually can’t recognize AI-generated content. You can recognize badly generated AI content.

Let’s start with language. A recent study published by Columbia Law School found that while readers overwhelmingly disliked generic AI writing, simple prompt changes and a little fine-tuning were all it took to completely reverse the results — even among professional writers — making it practically impossible to tell whether the output was written by AI.

So when you think you can spot AI-generated writing because of those em dashes or clichéd sentence structures, you’re only spotting someone’s lazy use of AI. (Sidenote: em dashes are a fantastic piece of punctuation; I have intentionally used them in this piece because they help readability. My em dashes are man-made!)

And how about images? Well, a recent study indexed in PubMed Central found that while human-made images are more readily recognized as human-made, AI-generated images are frequently misclassified as human-made. The authors conclude “that individuals are generally unable to accurately determine the source of an image, which in turn affects their assessment of its credibility.”

Again, it’s easy to spot AI slop images — precisely because they are sloppy! With a little practice and the right AI tools, even you can make AI images that will pass under the nose of most humans. And AI is improving rapidly: the PubMed study was conducted before Google’s release of “Nano Banana,” which has been widely recognized as a huge leap in quality and capability over other models.

Finally, what about music? Think you can hear the “soulless” quality of AI-generated music? Perhaps, but that didn’t stop Breaking Rust, a completely AI-generated country music artist, from going to the top of Billboard’s Digital Song Sales chart in November 2025.

Listen to the viral song yourself. Try to forget that you already know it’s AI. It’s certainly formulaic, but so is most everything else in the “pop” genre, regardless of who, or what, made it.

Here’s the point: you can recognize AI slop because it’s sloppily generated by humans. AI “slop” is not an inherent product of AI — of its training data or its supposed lack of soul. In fact, we are rapidly approaching a time when AI will compensate for sloppy use and create high-quality content regardless.

AI “slop” is distracting you from the real problems of generative AI.

AI slop rotting our brains is not our worst problem. The real problem is our false sense of security. You think you can spot AI, but you can’t, and that’s the real danger. Because that false confidence is exactly what bad actors are counting on, and they are using AI skillfully!

We already live in a time when you cannot trust your eyes, or your ears, and certainly not the news in your social media feed. We used to say “Trust but verify.” That time is long gone. We live in the days of “Never trust! And always verify your verifications.”

AI has not created this problem; it’s merely accelerated it beyond anything we’ve ever seen. There have always been people creating false information for evil reasons. AI simply allows them to do it at warp speed.

So, what are we to do? Here are a few suggestions:

  1. We need to stop complaining about AI “slop” and start teaching real critical thinking: skills that are useful for evaluating all kinds of content, AI-generated or not. The ability to think critically is universally helpful, so we should be doing this anyway, but now more than ever we need to teach people how to carefully discern truth from falsehood.

  2. Overhaul how we regulate social media. I think it’s high time we enforce age and identity verification on social media platforms. Accountability and transparency are a fantastic antidote to evil. And we can do this without heavily regulating what people say; instead, we simply require them to own what they say. Age verification will also allow us to better restrict children’s access to dangerous content.

  3. Definitely do NOT use AI to teach our children, and encourage — or require — children to learn to read, write, and think the hard way: with pen and paper. This should be strictly enforced in the grammar-school grades and increasingly relaxed as students age. They need to learn these crucial skills for themselves before they can know how to use AI to augment them.

I’m sure there are other ways to improve this situation! I welcome your comments and questions; send them to brandon@brandonbooth.com. And if you are wondering how to navigate AI responsibly for your organization, I’d love to help you think it through.

Let's work together!

Let's find the digital tools and processes that make your organization sing!