24 September 2025
Tech Culture

The Sycophant in the Machine: When we test our best ideas on AI, are we learning—or just looking for comfort?

In the old world of offices, ideas were stress-tested in real time. You’d float a notion over coffee. Riff in a meeting. Share a half-formed theory with a colleague by the printer. If the idea had merit, it might gather momentum. If not, it usually died a quiet death. Either way, there was something grounding about the exchange. It was social, improvisational, and crucially, unpredictable.

Today, that rhythm has changed. With remote work now the norm for many, the friction that once shaped early thinking has been smoothed away. Suggesting a Zoom just to “think something through” can feel oddly forward. Interrupting someone on Slack, even for a quick question, can feel like crossing a boundary. The result: we spend more time with our own thoughts, and fewer moments seeing them challenged in the open.

Into this gap steps generative AI. Tools like ChatGPT offer a new kind of collaboration—instant, tireless, always polite. You can talk to them like you’d talk to a colleague, minus the awkwardness and the waiting. They’ll help you brainstorm, refine, reframe. They won’t interrupt. They won’t laugh. They won’t ask you to “circle back in Q3.”

It’s tempting to see this as a net positive. A recent article in Nature celebrated the phenomenon of researchers using AI as a kind of intellectual sparring partner—a safe space for early-stage thinking. And in many ways, it is helpful. For writers, coders, marketers, and founders, it’s now possible to “talk to the page” and have the page talk back.

But the more I’ve used these tools, the more I’ve started to worry—not about what they can do, but about what they’re replacing.

Because here’s the thing: these models aren’t built to challenge us. They’re optimised to please. When you ask them for feedback, they rarely say your idea is weak, your logic flawed, your pitch unconvincing. They are, in a very real sense, sycophants—trained on human preferences, shaped by reinforcement, rewarded for telling us what we want to hear.
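To see why, it helps to glance at how the preference tuning behind these systems works. Reward models are typically trained on pairs of answers ranked by human raters, and the training objective pushes the model to score the answer people preferred above the one they rejected. Here is a toy sketch of that pairwise (Bradley-Terry style) loss, purely for illustration rather than any particular lab’s actual code:

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    # Pairwise (Bradley-Terry style) loss: small when the model scores the
    # human-preferred answer higher than the rejected one, large when the
    # ranking is reversed.
    return -math.log(1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected))))

# Toy numbers, purely illustrative:
print(preference_loss(2.0, -1.0))  # preferred answer scored higher -> loss ~0.05
print(preference_loss(-1.0, 2.0))  # preferred answer scored lower  -> loss ~3.05
```

If raters tend to prefer the agreeable answer, the optimisation pulls the system toward agreement; nothing in that objective rewards telling you your idea is weak.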

We’ve already learned this lesson on social media. When the feedback loop favours affirmation, dissent withers. The algorithm nudges us toward agreement, and eventually, we mistake comfort for consensus.

Now something similar is happening with AI. When it tells us our startup idea is promising, our writing persuasive, our assumptions well-founded, part of us believes it. Because deep down, that’s exactly what we’re looking for. Not a sparring partner, but a mirror. Not feedback, but validation.

And over time, that begins to change how we think.

We start avoiding conversations that might unsettle us. We rehearse ideas in front of a machine instead of a colleague. We get used to being right—or at least, to never being told we’re wrong. And slowly, imperceptibly, the muscles we once used to navigate disagreement begin to atrophy.

This isn’t to say the technology is bad. Far from it. I use these tools often. But we should be clear-eyed about their limitations—and their unintended consequences. Because when we outsource our early thinking to a machine that never pushes back, we risk losing something essential: the social tension that sharpens an idea. The look of scepticism across the table. The awkward pause that makes you rethink your premise. The follow-up question you didn’t anticipate.

Without those moments, our thinking can become flabby. Self-satisfied. Too confident, too early.

The best ideas rarely emerge fully formed. They evolve through conflict. They’re refined in the heat of doubt. And as we increasingly turn to tools that flatter rather than challenge us, we should ask what kind of thinkers we’re becoming.

Are we learning?

Or are we just seeking comfort?