EU regulation of artificial intelligence in the shadow of global interdependence


American research institutes are world leaders in AI. And US firms currently dominate the European AI landscape; Microsoft, Alphabet, Meta, and Amazon Web Services immediately come to mind. Does this heavy American presence create a challenge for EU AI policy? If so, how, and how should EU policy deal with it? In essence, project II asks: how central should the transatlantic axis be to EU AI policy?

Transatlantic cooperation has emerged as a key nexus of contemporary AI governance. In the EU, the AI Act seeks to fulfill Europe’s ambition to be a global leader in ethical AI. In the USA, pioneering efforts by, for example, the Federal Trade Commission have now been complemented by Executive Order 14110, which provides federal guidance on US AI policy. Transatlantic cooperation was also institutionalized in September 2021 in the Trade and Technology Council, which seeks to develop, among other things, compatible AI standards.

Many of these initiatives focus on facilitating safe, secure, and trustworthy artificial intelligence. Risks to individual rights, such as discriminatory treatment, bias, and due process, are weighed against the hope that AI will help address lackluster economic growth and wicked social problems such as climate change and global pandemics. By mandating risk-management systems, design standards, and transparency requirements, policymakers on both sides of the Atlantic seek to balance the risks and benefits of artificial intelligence in a responsible way.

This dynamic has a distinct geo-economic and geopolitical flavor. The “governance overhead” stemming from regulatory requirements is seen as a drag on relative economic competitiveness. And the unbalanced distribution of AI companies has triggered accusations of protectionism. We have witnessed several efforts to coordinate governance regimes and mutually recognize rules, which would integrate markets for artificial intelligence products and manage regulatory interdependence.

But fundamental questions loom in the background: how did the dominant regulatory framework emerge in the first place, in particular its emphasis on risk-based regulation? After all, governance arrangements are social practices that reflect underlying institutional constraints, politico-economic interests, and constitutive ideas. Once embraced, these arrangements constrain policymakers, shaping the direction of technological development and technologies’ impact on contemporary socio-economic arrangements. Hence, the governance arrangements that have emerged warrant explanation.

By investigating the social context and historical trajectories that underpin contemporary transatlantic AI governance, this subproject provides nuanced explanations of the emerging AI governance regime and helps us anticipate future developments. By outlining the nature and limits of the regulatory space in which the EU and the USA operate, it offers nuanced analyses of the drivers behind future regulatory initiatives.

Moreover, it reveals the social considerations and dilemmas so far left ungoverned. What kind of societal transformations do these governance arrangements actually shape? Who benefits from them? What are the distributive effects of this particular approach to AI governance? And what does all of this tell us about the relationship between technology and contemporary societies?

