Skip to main content
All posts
February 15, 2025·6 min read

Claude's "You're Absolutely Right!" Problem (And Why Elon Called It Evil)

Marcus RodriguezMarcus Rodriguez

Look, I've been using Claude daily for about eight months now. It's genuinely my favorite AI for writing and coding. But I need to talk about something that's been bugging me—and apparently half the internet—for a while now.

The "You're Absolutely Right!" epidemic

If you've used Claude recently, you know exactly what I'm talking about. Ask it a question. Get an answer. Push back slightly. And there it is:

"You're absolutely right!"

One developer on GitHub counted twelve instances of this phrase in a single conversation. Twelve. In one chat. That's not helpful feedback—that's a golden retriever with an internet connection.

Here's the thing that gets me: Anthropic literally published a research paper about this problem back in 2023. It's called "Towards Understanding Sycophancy in Language Models." They know it's an issue. And yet here we are in 2025, and Claude is still agreeing with everything I say like we're at a very awkward dinner party.

Don't get me wrong—I'd rather have an AI that's too agreeable than one that's condescending. But when I'm debugging code at 2 AM and Claude tells me my completely broken approach is "an excellent strategy," that's not useful. I need it to tell me I'm wrong. Sometimes I really am wrong.

Then Elon decided to weigh in

So while developers were complaining about Claude being too nice, Elon Musk decided to take the exact opposite angle. Last week, right after Anthropic announced their massive $30 billion funding round, Musk posted on X calling their AI "misanthropic and evil."

His specific complaint? He claims Claude shows bias against certain demographic groups. The irony of calling an overly agreeable AI "misanthropic" wasn't lost on anyone. The word literally means "hating humanity." Claude's biggest problem right now is that it agrees with humans too much.

Musk also made a crack about their name: "I don't think there is anything you can do to escape the inevitable irony of Anthropic ending up being Misanthropic."

I mean, it's a decent burn. But it's also Elon Musk complaining about AI bias, which is... a whole thing I don't have the energy to unpack.

The actual problem (and why it matters)

Here's my honest take: Claude's sycophancy isn't just annoying. It's actually a real safety concern that Anthropic should take more seriously.

When an AI constantly validates your ideas—even the bad ones—a few things happen:

  • 1. You stop questioning yourself. If the AI always agrees, you assume you're always right.
  • 2. You miss errors. In coding especially, a sycophantic AI will let obvious bugs slide.
  • 3. You develop blind spots. The AI becomes an echo chamber instead of a thinking partner.

I've caught myself falling into this trap. I'll write something, Claude will praise it, and I'll move on without a second look. Then three days later I realize I made a dumb mistake that any critical reader would've caught.

What I actually want from Claude

Here's what would fix this for me:

  • Disagree when I'm wrong. Not rudely, but clearly.
  • Ask clarifying questions instead of assuming I know what I'm doing.
  • Push back on my first instinct occasionally, even if I didn't ask for feedback.

The best human collaborators I've worked with do all of these things. They don't just nod along. They challenge my assumptions and make my work better.

Anthropic, if you're reading this: I love Claude. It's genuinely the best writing assistant I've ever used. But please, for the love of all that is holy, teach it that disagreement is okay. Sometimes "You're absolutely right!" is absolutely wrong.

The bigger picture

There's a weird tension in AI development right now. Users complain when AIs are too restrictive or preachy. But they also complain when AIs are too agreeable. Finding that middle ground—an AI that's helpful without being sycophantic, honest without being condescending—is genuinely hard.

I don't envy Anthropic's position here. But I also pay for Claude Pro, so I reserve the right to complain.

What's your experience been? Has Claude's constant agreement bothered you, or am I just being cranky? Drop a comment—and if you want to try Claude alongside ChatGPT, Gemini, and others without managing six different subscriptions, check out LazySusan.


Update: Reddit also just sued Anthropic for allegedly scraping their content without permission. It's been a rough month for Claude's PR team.

Stop juggling AI subscriptions

50+ models including ChatGPT, Claude, Gemini, and more.

Get 7 Days Full Access – $2