What changes in your security program when you assume attackers have better AI than you do?
Model providers are tightening safeguards as attackers start using AI in real attacks. That's good, but it doesn't change the underlying reality.
Attackers aren’t limited to “safe” models. And even when they are, guardrails only raise the bar until someone asks in iambic pentameter.
Near term, attackers get the advantage because their AI has a single, unconstrained objective: find a way in.
It can draw from every known vulnerability, probe endlessly, and adapt until something breaks.
Defenders will have AI too. Long term, that’s where the advantage shifts. But defensive AI has a harder job:
It needs broad, accurate context across messy systems and incomplete (also messy) data. And no one is ready to let it freely take actions like rotating credentials, disabling systems, or reconfiguring infrastructure on the fly in response.
So attackers benefit first from simplicity.
Defenders will benefit later from context and control.
Until then...
Know what software and services you actually run
Start building data capabilities in your teams - security is fast becoming a big data problem
Measure how quickly you can turn around changes
Design systems for limited blast radius (zero trust!)
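On measuring turnaround time: one concrete starting point is tracking how long a change takes from commit to deploy. A minimal sketch, assuming hypothetical deploy records pulled from your CI/CD system (the data and function names here are illustrative, not any specific tool's API):

```python
from datetime import datetime
from statistics import median

# Hypothetical deploy records: (commit_time, deploy_time) pairs.
# In practice these would come from your CI/CD system's API or logs.
deploys = [
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 15, 0)),   # 6 hours
    (datetime(2024, 5, 2, 10, 0), datetime(2024, 5, 3, 10, 0)),  # 24 hours
    (datetime(2024, 5, 4, 8, 0), datetime(2024, 5, 4, 11, 0)),   # 3 hours
]

def median_lead_time_hours(records):
    """Median time from commit to deploy, in hours."""
    lead_times = [(deployed - committed).total_seconds() / 3600
                  for committed, deployed in records]
    return median(lead_times)

print(median_lead_time_hours(deploys))  # 6.0
```

Tracked over time, a number like this tells you whether you could actually ship a fix faster than an automated attacker can exploit the gap.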
AI will eventually favor defenders. But defenders will have to earn it by improving context and tooling first.
Until then, attackers get cheap, parallel exploration and instant concentration on any foothold they find, against defenders who are still stitching reality together.