AI Risks Lurking in Your Organization (and How to Tackle Them)
If you aren't already helping your users understand how to safely use AI tools to drive efficiency in the workplace, two things are happening, neither of which is likely aligned with your goals:
Half are already using them on personal accounts where they’re sharing company and customer data without any protections or control over where it’s going or what it might be used for.
The other half are waiting for this whole AI thing to blow over so they can get back to their regularly scheduled meetings without having to learn yet another new thing.
Read on for practical tips on moving AI tools from unquantified risk to a foundational element of your operational strategy.
Understand how it’s already being used
Find out which tools and use cases are already in play - Is marketing using AI to write copy? Are engineers pasting code into ChatGPT? You're already aware of your users' browser extensions sharing Google Docs content with third-party providers, right?
An accurate inventory lets you focus on the most impactful use cases first. It also tells you which types of risks you need to consider, as well as where you can help teams maximize the value they derive from these tools.
Guidance and policy you develop for the biggest use cases will often generalize to other tools in use, so start where you can make the largest impact.
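If you have egress, DNS, or proxy logs available, even a quick script can bootstrap that inventory. Here's a minimal sketch assuming a CSV export with user and domain columns; the file name, column names, and domain list are placeholders for whatever your logging stack actually produces.

```python
import csv
from collections import Counter, defaultdict

# Hand-maintained map of known AI tool domains; extend it as you discover more.
AI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "chatgpt.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
    "api.openai.com": "OpenAI API",
}

def inventory(log_path):
    """Tally which users hit known AI tool domains, grouped by tool."""
    usage = defaultdict(Counter)  # tool name -> Counter of users
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # expects 'user' and 'domain' columns
            tool = AI_DOMAINS.get(row["domain"].lower())
            if tool:
                usage[tool][row["user"]] += 1
    return usage

if __name__ == "__main__":
    for tool, users in inventory("proxy_logs.csv").items():
        print(f"{tool}: {len(users)} users, {sum(users.values())} requests")
```

Even rough counts like these tell you which tools to prioritize for managed accounts and guidance.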
Work with teams to understand specific risks
Work with peers across leadership to help them understand the risks associated with AI tool usage.
The risks they imagine are often not the real ones - Just because someone pasted an internal doc into ChatGPT doesn't mean your competitor can later ask ChatGPT about it and retrieve the contents, even if the data is eventually used for training.

Ensure both you and they understand the real and practical risks to your organization, based on your use cases, not just the generalized fears you read on the internet.
If you're not sure what those risks are, don't fret!
There are lots of great resources available to build a basic understanding of the fundamentals behind AI and give you a base to work from. Many of them are free for anyone willing to put in a bit of time. Don't be overwhelmed or intimidated. It's something everyone in tech needs to do to stay relevant, so just dive in.
Build your strategy
Based on the use cases you’ve identified and the risks and risk appetite of the company, work with company leadership to define an overall strategy that most effectively balances the risk and value of AI tool usage.
AI-in-the-product efforts and AI usage by employees are two very different use cases with very different sets of concerns.
While there may be common guidance that affects both, the policies and thinking that govern each should be considered separately.
The goals and risk appetite associated with these different use cases will more often than not lead to different strategies and policy outcomes, so be mindful of where they're aligned and where diverging makes more sense.
You did budget for that, right?

[Image: A manager communicating physically to his team why they have to continue on the free version of ChatGPT]
This new class of tools and features will cost money if you don’t want people on free, unmanaged accounts. What will the budget need to look like? Is there a path for people to move to managed corporate accounts for the tools in use?
From a practical standpoint, not accounting for these costs means that unless you’re tightly managing usage with a Cloud Access Security Broker (CASB) or similar, you’re implicitly taking on increased risk from unmanaged AI tools in your organization (see the second sentence of this post).
If there isn’t a budget allocated already, consider this as an opportunity to build relationships with departmental leaders in your company.
By helping coordinate a collaborative pitch describing the value AI brings to their departments, you take a key role in informing the overall strategy: you shine a light on the aggregate benefits to their teams and the company at large, and you connect use cases where there may previously have been only isolated efforts.
Develop and communicate guidance and policies

[Image: A security professional so well versed in policy writing she can do it with her eyes closed]
Using the information you've gathered above, collaborate with the relevant internal teams to build out your usage policies and guidance documentation, focusing on what's important based on the risks you identified.
Don't make it a laundry list covering every possible problem or every possible case. If it's too big, people either won't read it or won't know what's important, so focus on what they can do to address the most significant risks you've identified.
Help users understand why there’s risk associated with the actions you flagged rather than just telling them what to do. This helps them make smarter decisions for all those edge cases we’re purposely not addressing in the docs.
If you’ve built a strong culture of trust, your users will know they can come to you with questions and they’ll get help rather than judgment.
Generally, important issues will be things like:
- Ensuring people understand they are still responsible for any outputs they submit or use, regardless of the AI tools that assisted them
- In line with the above, the need to validate outputs (famous failures like lawyers citing fictitious cases can be useful here)
- When it comes to integrating AI into products they're building, the importance of understanding what can go wrong, how to mitigate those failures, and why it's important to do this thinking up front. Ensure they're trained on risks like the OWASP LLM Top 10 and have threat modeling as part of the development/design process (see the sketch after this list)
- Ensuring awareness of the risks specific to certain classes of tools. Poor-quality code written by AI tools, bias in HR tools, and inaccurate facts in marketing copy are all things you're more likely to catch and prevent once you're aware of the potential issues.
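To make the product-integration point concrete, here's a minimal sketch of one pattern behind several OWASP LLM Top 10 items: treating model output as untrusted input rather than executing it blindly. The call_llm function, the action names, and the JSON shape are hypothetical placeholders, not any specific library's API.

```python
import json

# Illustrative allowlist: the only actions this application will ever execute,
# no matter what the model suggests (limits prompt-injection-driven behavior).
ALLOWED_ACTIONS = {"summarize_ticket", "draft_reply", "tag_ticket"}

def call_llm(prompt):
    """Placeholder for whatever model API you actually use; returns a JSON string."""
    raise NotImplementedError

def run_model_action(prompt):
    """Parse and validate model output instead of trusting it blindly."""
    raw = call_llm(prompt)
    try:
        action = json.loads(raw)  # model output may be malformed; parse defensively
    except json.JSONDecodeError:
        raise ValueError(f"model returned non-JSON output: {raw[:80]!r}")
    name = action.get("name") if isinstance(action, dict) else None
    if name not in ALLOWED_ACTIONS:
        # Anything outside the allowlist is refused, never executed.
        raise ValueError(f"refusing unlisted action: {name!r}")
    return action
```

The same mindset - validate, constrain, and assume the model can be manipulated - applies whether the output is an action, code, or text shown to users.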
The adoption of AI tools today isn't a question of if or when. Your users have almost certainly already started using them, so the real question is what you're doing about it now.
An aligned AI security strategy creates a framework that enables your organization to maximize the benefits of these tools while managing the risks and potential pitfalls in service of your broader business goals.