EU AI Act has been ratified!

The EU formally adopted the AI Act last week, with potential for GDPR-sized fines of up to 7% of global revenue. What does it mean for businesses building AI-based tools?
There was fear that the EU would regulate away innovation in the space. I don’t see the Act meaningfully lessening innovation out of the EU, and I’m happy to see the extra controls on certain use cases with material potential to harm individuals.
The Act itself is fairly commonsensical, and most businesses won’t see much direct impact to their strategy or implementation unless they’re working in areas defined as either unacceptable or high risk, which I'll touch on a bit here:
☠️ Unacceptable risk is fairly narrowly scoped. Your product is unlikely to be classified here unless your TAM includes authoritarian regimes requiring assistance suppressing the proles.
That leaves high risk AI systems as the main area with requirements. These would be systems like those described below:
🛫 Safety systems based on AI - Autopilot, either the real kind or the “Definitely need you to still keep your eyes on the road” type, means you’re high risk. Industrial automation AI that monitors for safe operating levels could be another case.
🕵️ Systems that profile individuals automatically to assess aspects of a person’s life are high risk due to the potential impact of mistakes.
🧑‍⚖️ Legal assistance - assessing evidence reliability, assessing individuals, profiling, any uses involving researching or interpreting the law, and immigration assessments. It should go without saying that unattended ChatGPT would make a pretty awful defense counsel.
🏗️ Critical infrastructure - If your system controls the flow of water, electricity or cars, you’re in!
🧑‍💼 Employment and education - AI powered admissions, hiring, firing, promoting and deciding who gets the best snacks based on performance are all high risk.
If your product fits any of the above high risk use cases, you have some work to do.
Below are some of the major points:
📋 Risk management (Article 9) - You need a continuous process to assess and address the risk that the product potentially poses, including cases of misuse. NIST AI RMF anyone?
🏛️ Data governance (Article 10) - Do you know where your data came from? Thought through potential biases? Are your training and validation sets sufficient to identify bias? Can you prove it?
🪵 Auditability and logging (Article 12) - Can you show why your system made the decision it did? Are you holding those logs for long enough (6 months!)? Is there human oversight? Do you have an escalation path back to the vendor in case of issues?
👁️ Oversight capabilities (Article 14) - Can humans tell what the system is doing and why?
🔒 Security (Article 15) - It says you need security appropriate to the risk. What a terrific idea!
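To make the auditability point concrete, here's a minimal sketch of the kind of decision logging Article 12 gestures at. The field names, the hashing approach, and the function itself are my own assumptions for illustration, not anything the Act prescribes:

```python
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = []  # stand-in for an append-only log store with a retention policy

def log_decision(model_version, features, decision, confidence, reviewer=None):
    """Record one automated decision with enough context to reconstruct it later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash rather than store raw inputs when they contain personal data.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "decision": decision,
        "confidence": confidence,
        "human_reviewer": reviewer,  # None = no human in the loop for this call
    }
    AUDIT_LOG.append(record)
    return record

rec = log_decision("credit-model-v3.2", {"income": 52000, "tenure": 4},
                   decision="approve", confidence=0.87)
```

The point isn't the logging itself, which most teams already do, but capturing the model version and inputs alongside each decision so a specific outcome can be explained months later.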
Overall, I expect this to have a much lower impact than something like GDPR; the EU showed more restraint here, focusing on what's important.