I can tell when AI is used to comment on my posts
So can everyone else.
Anu A. wrote a great piece this week on "Doomprompting as the new Doomscrolling", referring to the endless, passive iteration with AI that doesn't deliver better outcomes for the user and actively makes the human less capable over time.
It feels like work - there's collaboration, even "creation"!
But it's just synthetic, passive conversations that lead nowhere.
AI as a sparring partner? Great. Blind spot detector? Absolutely! Automation builder (where you understand what you're automating)? Usually!
But as a replacement brain for thinking? 🤦
When I use AI, I don't (only) ask it to do my work. I use it to pressure-test my ideas, find holes in my logic, and push back on my assumptions. I'm still doing the heavy lifting; the LLM is the wall I'm bouncing things off of.
More and more though, I see the opposite.
For example, there's an influx of AI-generated comments on LinkedIn posts that open by praising how smart the post is, repeat what the post already said, and close with a painfully obvious question to drive engagement. They're regurgitated statistical averages of everyone else's thoughts masquerading as comments.
This isn't just lazy. It's self-defeating.
When you outsource your thinking to AI, you're not just producing generic content. You're training yourself out of the ability to think originally.
You're literally practicing being average.
And we can all see it happening in real time - in the comments, in the emails, in the work itself.
The divide is already here between those who use AI to think harder, and those who use it to avoid thinking.
Every interaction with AI is practice.
What are you practicing?