Field Notes

AI and Psychological Safety

It’s not safe to say you don’t know about AI.

That’s dangerous.

If it’s not safe to say, people will pretend to know or keep quiet rather than stand out.

If it’s not safe to say, they’ll rapidly fall behind those who are leveraging AI, to the point where they can’t compete in the job market.

If it’s not safe to say, companies can’t provide support and training to those who need it to get up to speed.

These are new tools that often require a new way of thinking and approaching problems to maximize their value.

This puts both employees and companies in a bad spot, where fear of AI becomes a self-fulfilling prophecy with poor outcomes for everyone.

How can you prevent this?

Make it OK, and even celebrated, to highlight what you don’t know: not to glorify ignorance, but to show that we all have gaps and are learning as we go.

Ensure you’re providing ongoing training and knowledge sharing. Too many companies hand AI-based tools to their employees without ongoing training on how to use them effectively.

With traditional software, you just need to learn how to push the right buttons in the right order to maximize value.

With LLM-based tools, getting value often requires sharing best practices and nuances learned over time. That ongoing discussion keeps the team learning together and prevents anyone from being left behind.

Actively solicit feedback on how much value people are getting from various AI powered tools.

If you see a wide range, that’s a clear signal you’re losing productivity because some people don’t know how to use the tool effectively. Efforts to close those gaps benefit your employees, your team’s morale, and their overall performance.

How are you creating a culture where everyone can safely learn, admit knowledge gaps, and grow their AI skills together?