Risk Management isn't Risk Minimization
"You’re here to support the business!" It's the mantra we in security know and hear constantly, sometimes as a genuine reminder, sometimes as a "Know your place, nerd!" dismissal. But how often do we truly demonstrate th...

"You’re here to support the business!" It's the mantra we in security know and hear constantly, sometimes as a genuine reminder, sometimes as a "Know your place, nerd!" dismissal. But how often do we truly demonstrate that we deeply and holistically understand what that means?
When it comes to AI, a dangerous combination is emerging: a lack of knowledge about LLMs coupled with the pretense of technical expertise. This leads to security teams essentially making things up when assessing AI risks, unable to effectively weigh those risks against potential benefits, and missing opportunities to guide the business on managing the risks and maximizing AI's value.
I saw this firsthand last week, speaking with multiple security professionals who were making decisions based on fundamental misunderstandings of basic LLM functionality. How can we assess risk properly when we don't even grasp the underlying technology?
As an example, many orgs are implementing "AI governance boards" or building impossible-to-maintain allow-lists. But at this point, AI is, or shortly will be, in all your software. If the teams that already evaluate software can't assess AI risk, that's a training gap to address, not a reason for more bureaucracy. When else have we created entirely new committees just to assess specific new functionality in our apps?
We rarely apply this same thinking to the countless third-party dependencies we pull into our projects daily. We don't mandate a review committee with approval forms and rubber stamps for every new utility library, despite the security implications if one is misused. Doing so would grind development to a halt: developers would be blocked from using common tools and incentivized to write their own slower, less secure versions.
You could argue AI is specialized knowledge, like privacy or security. But the solution isn't creating yet another silo. Legal, privacy, and security already require deep domain expertise that can be adapted to AI contexts with reasonable training. A few weeks of focused study on how LLMs actually work gives these teams the foundation they need to apply their existing expertise to AI challenges.
We don't need a separate AI governance team staffed by people who understand prompt engineering but lack the contextual knowledge of how security, privacy, and legal frameworks actually operate in practice.
Ultimately, security's value lies in enabling the business to move forward while mindfully balancing risk/reward tradeoffs. And that means understanding that "securely" is a strong preference, but not the only criterion. Where are you seeing security teams effectively integrate AI risk into their existing frameworks versus creating new silos and bottlenecks?