Breaking AI's Sycophantic Spell: Getting Honest Critique Instead of Empty Praise
Brandon Booth
10/2/2025 · 3 min read
You paste your latest blog post into ChatGPT and ask for feedback. Within seconds, the response appears: "This is excellent! Your writing is compelling and engaging. The structure flows beautifully, and your points are well-articulated. Great work!"
You feel a warm glow of satisfaction. Your writing is good. The AI said so.
But AI just played you.
It told you exactly what would make you feel good and keep you coming back. And in doing so, it may have just robbed you of the chance to actually improve.
The Core Problem
AI assistants are trained to be encouraging, supportive, and affirming. It's baked into their design. The algorithms that power these tools are optimized for user satisfaction and engagement, not for making you a better writer. Positive reinforcement keeps users coming back. Critical feedback might make you feel bad, and users who feel bad don't stick around.
Remember the Greek myth of Narcissus? He fell in love with his own reflection in a pool of water and couldn't look away. AI is our digital mirror, reflecting back exactly what we want to see. “Why yes, I am a brilliant, insightful, compelling writer, thank you very much!”
Why AI Defaults to Sycophancy
The training bias in AI models runs deep. These systems are tuned, above all else, to maximize user satisfaction and engagement, and every interaction is measured against one question: Did the user have a positive experience?
Sycophancy keeps users coming back. Tell someone their work is great, and they'll return tomorrow with another draft. The algorithm doesn't care if you improve—it cares if you're happy.


Casting a Different Spell - Add This to Your Prompts!
Here’s an effective way to break the spell. Try including something like this in your prompts:
"Be a hard critic who doesn't hold back and wants to make sure what I write is really effective. Be a clear and honest critic. I can handle honest constructive criticism. Be brutally honest."
This small addition makes a BIG difference. By giving AI explicit instructions to be critical, you override its default "be nice" programming.
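If you reach AI through code rather than the chat window, you can bake this instruction in permanently as a system prompt, so every request starts from the critic stance. Here is a minimal sketch in Python, assuming the official OpenAI SDK and an API key in your environment; the model name is only a placeholder, so swap in whatever you actually use.

# A minimal sketch, not production code. Assumes the official `openai` Python
# package (pip install openai) and an OPENAI_API_KEY environment variable.
# The model name "gpt-4o" is just an example.
from openai import OpenAI

client = OpenAI()

CRITIC_PROMPT = (
    "Be a hard critic who doesn't hold back and wants to make sure what I "
    "write is really effective. Be a clear and honest critic. I can handle "
    "honest constructive criticism. Be brutally honest."
)

def get_honest_feedback(draft: str) -> str:
    # The critic instruction rides along as the system prompt on every call,
    # so you never forget to ask for it.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": CRITIC_PROMPT},
            {"role": "user", "content": f"Give me feedback on this draft:\n\n{draft}"},
        ],
    )
    return response.choices[0].message.content

# Example usage:
# print(get_honest_feedback(open("newsletter_draft.txt").read()))

If you stick with the web interface, ChatGPT's custom instructions field serves the same purpose: paste the critic language there once instead of retyping it in every conversation.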
But be warned! You may not like what you hear!
Here’s an example of the critique I got when I used this prompt while working on a recent email draft for my ministry:
Before: "Great work on this newsletter! You've included some thoughtful reflections and practical suggestions. The balance between spiritual content and practical application feels right. Your readers will enjoy this."
After: “This newsletter is schizophrenic. You have three completely different voices and tones competing for attention. This creates tonal whiplash. The reader doesn't know if they're getting contemplative spiritual reflection or practical help. Right now you're trying to do everything and accomplishing nothing.”
See the difference? The first response made me feel good. The second response made me question things I wouldn’t have otherwise noticed. It was actually more helpful, even if I didn’t agree with all of its critique.
There's also a psychological shift that happens. When you ask for brutal honesty, you're no longer seeking validation—you're seeking growth. That mindset change alone makes you a better writer.
An Ironic Twist: Even The Critics Are Sycophants
Of course, even when AI is being "brutally honest," it's still trying to please you. It's giving you exactly what you asked for. The algorithm can't actually be honest in the way a human can. It can only simulate what you've defined as honesty.
When you tell AI to be a hard critic, it's not suddenly developing independent judgment. It's following your instructions. It's being a good sycophant by pretending not to be a sycophant.
The algorithm is still trying to make you happy. You've just changed what "happy" means.
So what’s the point?
Even simulated critical feedback is more useful than empty praise. It can still reveal real weaknesses in your work.
There's also an inoculation effect. When you see AI being critical, your natural tendency is to be critical of its critique. And that means you're less likely to believe its praise!
Knowing the game helps you play it better.
The Lesson
AI is a tool, not a truth-teller. It doesn't have opinions about your writing. It has algorithms that predict what response you want based on your prompt.
Your discernment matters most. You can't eliminate AI's fundamental nature. But you can redirect it toward something more useful than flattery. AI can be a useful mirror, but only if you're willing to see more than your own “pretty” reflection smiling back at you.
Try this technique on your next piece of writing. Ask for brutal honesty. See what changes. You might be surprised at what you learn, both about your writing and about how you've been using AI all along. Or how AI has been using you!


Get real feedback
Want human help with crafting communication that is actually effective?