Field Notes

Is AI making us lazy thinkers?

If AI makes us lazy thinkers, that's our fault, not the technology's.

Anthropic's new Claude for Education gets this right: it prompts students to reason through a problem before handing over answers, which is exactly what professionals should be doing anyway.

Back when ChatGPT first came out, I confidently shared an AI-generated explanation with a vendor, boldly critiquing why they weren't complying with some standard. They gently explained that what I'd shared was complete bullshit. I hadn't verified a single fact, assuming the AI was correct because the answer looked polished.

That embarrassing moment made something critical painfully obvious to me: AI doesn't absolve us of responsibility—it magnifies it.

Here's how we should be approaching AI tools in the workplace:

Stop using LLMs as replacement brains. They're sparring partners for refining your thinking, not a way to outsource it.

Own what you publish. When you put your name on AI-generated content without verification, you're staking your reputation on a random number generator.

Push for precision. LLMs are tuned to please you, not necessarily to be right. Explicitly instruct them to find reasoning errors and challenge your assumptions.

Fact-check. Twice. If an LLM cites a statistic or makes a claim that matters, verify it. Owning up to blindly copied LLM output plays very differently in 2025 than it did in 2023.

Schools banning AI tools are missing the point entirely. We should be teaching students how to use these tools effectively while maintaining critical thinking—exactly what the future workplace demands.

The divide won't be between those who use AI and those who don't. It will be between those who use it thoughtfully and those who blindly trust whatever it produces.