Article
Apr 10, 2026
The EU AI Act: A Love Letter to Paperwork
A candid deep dive into the friction between noble regulatory ambition and the messy reality of shipping code. While the EU aims to lead the world in "trustworthy AI," the 500-page framework often prioritizes compliance theater over technical clarity. From the "high-risk" catch-all to the "warm body" oversight requirements, this piece explores why the Act feels like it was written for Big Tech boardrooms while leaving small-scale innovators in a regulatory fog.

I'll be honest — I spent way too long reading a document no one asked me to enjoy.
The EU AI Act landed with the energy of someone who once heard about Terminator and immediately formed a committee. Five hundred pages of regulatory ambition, written by people who genuinely mean well but may have never deployed a model in production.
I'm not here to bash Brussels. I've been operating in Europe long enough to know that American-style "move fast and break things" isn't the vibe here, and honestly, sometimes that's the right call. But there's a gap between thoughtful governance and what this thing actually does — and that gap is where small operators like me live.
"High-Risk" AI: The Catch-All
The Act flags anything impacting people's rights or safety as high-risk. Noble intent. Then you realize that definition is wide enough to catch hiring software, student assessment tools, loan decisions, and probably your HR chatbot that mostly answers questions about vacation days. The category isn't wrong — it's just so broad that it stops being useful.
Meanwhile, the things most people actually worry about are scattered all over the map: manipulation at scale does sit in a separate "prohibited" bucket, but deepfakes only get a labeling requirement, and autonomous weapons aren't covered at all because military AI is out of scope. The ban list exists; it just gets carved out faster than it gets enforced.
Transparency Rules That Miss the Point
There's a rule saying users must be told when they're talking to an AI. Good idea, bad execution. Most consumers already know, or don't care, or both. The populations who genuinely don't know — older adults, people less digitally fluent — aren't protected by a disclosure buried in a UI footer. A legal checkbox isn't the same as actual transparency.
I've built chatbots for clients. The conversation about how to communicate AI's role clearly is worth having. The compliance form isn't.
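For what it's worth, the mechanics are the easy part. Here's a minimal sketch, in Python, of what putting the disclosure in the conversation itself rather than a footer can look like; the wording and function names are mine, not anything the Act prescribes or anything lifted from a real client project.

```python
# Minimal sketch: put the "you're talking to an AI" disclosure in the first
# turn of the conversation instead of a footer. All names are illustrative.

DISCLOSURE = (
    "Hi! I'm an automated assistant. If I can't help, I'll hand you over "
    "to a human colleague."
)

def open_conversation(history: list[dict]) -> list[dict]:
    """Start a new session with the disclosure as the first assistant turn."""
    if not history:  # only on a fresh conversation, not on every message
        return [{"role": "assistant", "content": DISCLOSURE}]
    return history

if __name__ == "__main__":
    session = open_conversation([])
    print(session[0]["content"])
```

The point isn't the ten lines of code. It's that the disclosure is something the user actually reads before they start typing, which no compliance checkbox will verify for you.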
Human Oversight, a.k.a. The Warm Body Requirement
"Humans in the loop" sounds reasonable. The implementation is where things get weird. The Act pushes for meaningful human oversight without defining what "meaningful" means operationally. So companies end up hiring someone to technically exist near a dashboard and call it compliant. That's not oversight. That's theatre.
The harder question — how do you build systems where the human can actually understand what they're supervising? — doesn't get answered.
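The part I can speak to as a builder: if a person is supposed to supervise an automated decision, the system has to hand them something they can argue with. Below is a minimal, purely illustrative sketch of that idea; the field names, thresholds, and review band are my assumptions, not anything the Act defines.

```python
# Minimal sketch of a reviewable decision: every automated call is logged with
# enough context for a human to second-guess it, and borderline cases are
# routed to a person. All fields and numbers are illustrative assumptions.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    applicant_id: str
    score: float            # model output the decision was based on
    threshold: float         # cutoff in force when the decision ran
    top_factors: list[str]   # human-readable reasons, not raw weights
    decision: str             # "approve" / "refer" / "decline"
    needs_review: bool        # True when a human should take over
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def decide(applicant_id: str, score: float, factors: list[str],
           threshold: float = 0.7, review_band: float = 0.1) -> DecisionRecord:
    """Auto-decide clear cases; refer anything near the threshold to a human."""
    if abs(score - threshold) <= review_band:
        decision, needs_review = "refer", True
    else:
        decision = "approve" if score >= threshold else "decline"
        needs_review = False
    return DecisionRecord(applicant_id, score, threshold, factors,
                          decision, needs_review)

if __name__ == "__main__":
    record = decide("a-123", score=0.68,
                    factors=["income stable", "short credit history"])
    print(record.decision, record.needs_review)  # -> refer True
```

Borderline cases land on a person's desk with the score, the cutoff, and the reasons attached. That's the difference between oversight and a warm body near a dashboard.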
The Biometric Carve-Out
Real-time biometric surveillance is prohibited. Except for law enforcement. With prior authorization. In serious cases. Defined by member states.
So it's banned, except when it isn't. Which, depending on your politics, is either pragmatic governance or a loophole you could drive a bus through. I'll let you decide.
Risk Tiers: Unacceptable, High, Limited, Minimal
Four categories. No bright lines between them. Plenty of room for legal interpretation and expensive lawyers. The tier system isn't useless — it's an attempt at proportionality, which is the right instinct. But it creates more uncertainty than clarity for anyone trying to build something without a dedicated compliance team.
The Conformity Assessment
Want to release a high-risk AI product? You'll need an audit. Possibly a third-party audit. The audit may take longer than it took you to build the thing. By the time you're certified, there's a solid chance a newer model has made your approach obsolete.
The goal isn't wrong. Accountability for high-stakes systems makes sense. The process just wasn't designed by anyone who ships software.
The Innovation Sandbox
The Act creates regulatory sandboxes — controlled environments where startups can test AI with some regulatory flexibility. Genuinely useful concept. In practice, access varies by member state, timelines are unclear, and the sandbox doesn't come with a path to market.
It's like getting a trial gym membership with no information about how to sign up for the full one.
Look, I'll say something slightly unpopular: the EU is right that AI needs governance. The US is winging it. The UK is somewhere in between, hoping politeness counts as a strategy. Europe stepping in and saying "no, we're going to write the rules" isn't the problem.
The problem is that the rules were written by people optimizing for coverage and political consensus, not by people who understand what it actually costs a small operator to comply. When you're a big tech company, compliance is a line item. When you're running a one-person AI agency in Rotterdam, it's a different conversation.
That's the part that needs to change — not the ambition, but the proportionality.
We'll keep navigating it. That's literally what we do.
NeoInsent AI helps businesses understand what AI can actually do for them — and what the fine print really means. Reach out if you want to talk through it.