The US controls 74% of global AI supercomputer capacity. Europe holds 4.8%. OpenAI and Anthropic raised $130 billion in a single month. A European founder asks: are we regulating ourselves into irrelevance?

I'm writing this from Copenhagen, in the heart of Europe, at a moment when I feel the weight of a strategic failure settling over the continent. Not failure born from lack of talent or innovation. Europe's engineers are world-class. Our researchers publish more AI papers than any other region. But strategy? Commitment? Vision? That's where we're losing.

Three weeks ago, OpenAI and Anthropic closed a combined $130 billion in funding, raised within a single month. A month. A year earlier, that sum would have amounted to the entire annual venture capital allocation to European AI startups. Meanwhile, our regulators are debating whether a chatbot needs a new compliance framework. This isn't a difference in speed. It's a difference in conviction.

The Numbers Don't Lie

Let's start with the ugliest statistic: the United States controls 74% of global AI supercomputer capacity. Not software. Not ideas. Raw, physical compute—the fundamental ingredient that everything else depends on. China is building aggressively, currently running at about 9% and climbing. Europe? 4.8%.

This isn't just a number for a spreadsheet. This is the constraint that determines whether you can build the next GPT or whether you're forever building products on top of someone else's foundation. When you don't have compute, you don't have agency.

And the funding gap tells the same story. In 2026, a handful of American AI companies have raised more capital than the entire European AI ecosystem combined. Meta just announced a $200 billion infrastructure spend. Google is matching that. Meanwhile, the biggest European AI funding round this year was Mistral's Series B, which—while impressive—is still dwarfed by what happens in Silicon Valley on an ordinary Tuesday.

Europe's regulators are debating whether a chatbot needs a new compliance framework. The US is building the infrastructure of the future. These are not equivalent activities.

The EU AI Act: Noble Intentions, Strategic Disaster

I need to be careful here because I believe in responsible AI. The EU AI Act was born from genuine concern about fairness, bias, and accountability. These are serious topics. But here's what I've learned as a founder building in both Europe and internationally: regulation is not the same as responsible innovation. In fact, excessive regulation often prevents the responsible actors from building while barely slowing the irresponsible ones.

The EU AI Act added two years of delay and millions in legal costs to every serious AI project in Europe. We have to classify models. We have to document training data. We have to establish audit trails. None of this is wrong in principle. But while our companies were hiring lawyers, American companies were hiring researchers. While we were building compliance frameworks, they were building training clusters.

Here's what troubles me most: the regulation isn't selective. It applies to the 10-person startup in Berlin with the same force that it applies to Google. We've effectively regulated the market into centralization. Only the largest companies can afford the compliance burden. The second-order effect? Fewer European AI startups, less competition, less innovation. We're solving the wrong problem.

The US didn't ban powerful models. OpenAI and Anthropic are simultaneously the most capable and the most cautious AI companies in the world. They chose safety because they believed it was right, not because they were forced to. Europe chose to legislate where America chose to lead. One approach creates products. The other creates consultants.

China's Unflinching Advantage

If Europe is sleepwalking, China is sprinting. China has unified industrial policy around AI dominance. The government allocates capital with complete coherence. No debate about whether to invest. No regulatory framework slowing deployment. Just: build faster, build bigger, overtake America.

This isn't to say China's approach is morally superior. But it is strategically coherent. When the Chinese government wants AI to be a national priority, compute allocations happen. Infrastructure gets deployed. Talent gets concentrated. Europe discusses it at quarterly meetings.

The uncomfortable truth: China doesn't have to be better regulated than America. It only has to be faster. And right now, it is.

The Talent Exodus

The best European AI researchers and engineers are working for American companies. This is partly economic: salaries at OpenAI and xAI dwarf European offers. But it's also about opportunity. You go where the problems are hardest and the resources are biggest.

I know extraordinary AI engineers in Copenhagen, Berlin, Paris, Amsterdam. When I ask them about staying in Europe versus going to the US, the conversation is short. The US has the compute, the capital, and the projects worth working on. Europe has better work-life balance and better bread.

This is a vicious cycle. Less capital means fewer ambitious projects. Fewer ambitious projects mean less talent retention. Less talent means weaker companies. Weaker companies struggle to attract capital. The spiral accelerates.

What We've Built Versus What We Could Have

Europe has built some good things. Mistral is genuinely impressive. There are serious AI research groups at universities across the continent. But look at what it cost us: millions in regulatory burden, years of delay, and the constant underlying message that we don't really trust our own builders.

Meanwhile, OpenAI went from concept to ChatGPT to its share of that $130 billion without waiting for permission. Anthropic put safety at the center of its approach and still built a $20 billion company in four years. DeepSeek in China is shipping products at a pace no Western company matches.

The difference isn't innovation. It's permission. It's conviction. It's betting that your engineers can handle responsibility without a compliance bureaucracy explaining how.

You cannot build a world-changing AI company while genuinely concerned about whether you're filling out the right regulatory forms. The two things are incompatible.

Europe's Real Problem Isn't Regulation. It's Belief.

I'll say something that might sound harsh: Europe doesn't actually believe it can win the AI race. If we did, we'd be acting differently. We'd have unified compute infrastructure. We'd have venture capital flowing into AI with American-style conviction. We'd have founders pushing for more ambitious projects, not fewer.

Instead, we regulate. Regulation is what you do when you've decided someone else will lead and you want to manage the damage. Regulation is the tool of followers, not builders.

This doesn't mean abandoning our values. It means refusing to treat responsible AI and AI dominance as competing goals. They are not mutually exclusive. America has shown this. Anthropic and OpenAI are simultaneously the most capable and most thoughtful AI companies on Earth. Not because they're regulated into it, but because their founders believe it matters.

What This Means for European Builders

If you're building AI in Europe right now, you have three paths: accept the handicap, leave, or find the niche where European constraints don't matter as much.

The first path—accepting the handicap—works for some companies. If you're building localized applications, if regulatory compliance is actually a moat rather than a burden, if your customers demand EU jurisdiction for data protection reasons, then you can compete. These are real niches. There's real value here. But you're not building the next foundation model. You're not raising $10 billion. You're building a solid business in a constrained market.

The second path is what most of our best people are doing: going to the US or Singapore or anywhere else where the game isn't handicapped. This is rational. It's why Mistral's founders considered moving operations out of France. It's why every serious European founder I know has an escape plan.

The third path is what I'm trying to do at yellow3: find where the constraints don't bind as much, and build something real anyway. At naffe.ai, we're focused on making AI accessible to non-technical people. That's a problem that exists everywhere. Our approach to safety isn't forced by regulation—it's core to our product. And in this specific niche, we can build something extraordinary without needing to be in California.

But I won't pretend this is easy or that I'm not constantly watching for the moment when we need to make the harder choice.

The Lesson from This Moment

History will judge whether Europe made the right call with the AI Act. Maybe in 10 years, careful regulation will have prevented serious harms while maintaining European competitiveness. But I doubt it. More likely, we'll look back at 2024-2026 as the moment when Europe decided to be a careful passenger rather than an aggressive driver.

The data already tells this story. I wrote last week about Europe falling behind in the AI race. The cost data makes it worse. When you add up the capital costs, the compute costs, the regulatory overhead, and the talent opportunity cost, Europe isn't just behind. The gap is widening at an accelerating rate.

The worst part? It doesn't have to be this way. Europe has the talent. We have the capital—most of it is just sitting in traditional industries waiting for permission to move. We have the governance frameworks if we actually want them. What we lack is belief that we can win.

In five years, we won't regret being too cautious about AI. We'll regret being too late.

The US has chosen to build. China has chosen to dominate. Europe has chosen to regulate and hope that turns out to be the clever play. I've never seen that script work out in any era of technological change. The careful player always loses to the bold one, eventually.

That's not a reason to abandon our values. It's a reason to pursue them differently—through responsible building instead of careful regulation. Through capital allocation instead of bureaucratic oversight. Through founders with conviction instead of lawyers with forms.

We have maybe two more years before the die is cast. The compute gap will become insurmountable. The talent exodus will become a flood. The cultural message that "Europe does things carefully" will calcify into "Europe doesn't do things at all."

I'm still betting on Europe. I'm building here. But I'm betting faster now, and I'm betting alone.