What’s up EU had a conversation with Professor Anu Bradford about her latest book, Digital Empires: The Global Battle to Regulate Technology (OUP). It was published in September and features in the Financial Times’ selection of the Best Books of 2023.
The Finnish-American lawyer-turned-academic is the Henry L. Moses Professor of Law and International Organization at Columbia Law School. She is the director of the European Legal Studies Center at Columbia University.
Her previous book — The Brussels Effect: How the European Union Rules the World — is routinely quoted by European institutions seeking to define the EU’s role in the world as a regulatory superpower. Prior to academia, Professor Bradford practiced EU competition law in Brussels and worked as an economic policy adviser at the Finnish Parliament.
Digital Empires describes the global scramble for regulation in the digital sector, in which the US, China, and Europe try to expand their spheres of influence via technological leadership and/or regulation.
Her book describes three competing models of digital regulation: the U.S. market-based model that runs on private initiative and self-regulation; the EU rights-driven model that centers on ex-ante regulation and risk assessment; and the Chinese state-driven model that revolves around a strong techno-autocracy.
The conversation took place on 15 November 2023.
The EU’s Role in Digital Regulation
While it may lack technological leadership in the digital world, the European Union is certainly ahead of the regulatory curve. The EU was a trailblazer in introducing far-reaching privacy rules such as the General Data Protection Regulation (GDPR), which took effect in 2018.
Under Ursula von der Leyen’s presidency, tech regulation has been a priority for the European Commission, with landmark legislation such as the Digital Markets Act, the Digital Services Act, the Data Governance Act, the Data Act, and the AI Act, which is currently under negotiation.
“First mover advantage […] can help. If you think about the GDPR, there were not really any other competing regulatory frameworks that [other] countries could have looked at. [There are other] conditions for the Brussels effect: you need to have the market size. So if you have Costa Rica go ahead with the first move on GDPR-type regulation or competition regulation, that’s not going to become a global regulation. [The EU was] a first mover as a large market that has the capacity to write a credible, important, strict enough regulation that will then be followed by these companies. But certainly, being a first mover does help,” Bradford says.
Will Europe manage to remain a regulatory innovator if it stays a technological laggard? More fundamentally, with an economy that is declining relative to the rest of the world, isn’t the EU poised to lose its status as a rulemaker?
Bradford says: “The relative size of the European market will decline over the years, and that also diminishes the leverage that the EU has, because it may no longer be an inevitable market. But here I think there are a couple of things that we would need to consider. […] One is that it is not just the GDP that matters, it’s also the GDP per capita […]. If you think about a social media company, or any of those platforms that rely on advertising, the per-user advertising revenue is much higher in wealthy markets than in developing markets. That’s why Facebook makes much more money when it’s advertising something [in] Europe versus when they’re advertising [in] India, or Africa, or other parts of the world. It will still take a while for the GDP per capita in other parts of the world to catch up to the level where consumer demand is as significant in some of those markets. But I am not disputing the trend at all. And I think what potentially comes to the EU’s rescue is the de jure Brussels effect — if the EU has managed to export its regulatory philosophy and frameworks to many of these growing markets, they would then one day be leveraging their market size and further entrenching the European regulatory rules and the values underpinning them.”
And Then Came Generative AI
The spectacular progress in generative AI has been a catalyst for policymakers worldwide. 2023 has thus far shown that there is indeed a “battle to regulate technology” — one fought with unilateral and potentially extraterritorial legislation (such as the EU’s AI Act) and large international gatherings such as the G7 Leaders’ Summit in Hiroshima, Japan (19-21 May) or the AI Safety Summit (1-2 November) at Bletchley Park, U.K.
“The biggest effect of the UK’s AI Safety Summit was that it was building momentum. The default is now that AI needs to be governed. The default is no longer that we’re going to let innovation run its course and tech companies will have this. There’s a greater global political momentum that gives the narrative for governments in their own jurisdictions to take a stronger stand and regulate AI”, says Bradford.
The AI Safety Summit’s guest list was initially drafted to include only “like-minded countries”, i.e. not China. The UK eventually extended an invitation to China — a move that caused some unease given the PRC’s track record of digital authoritarianism.
However, Bradford notes: “I certainly commend the U.K. for convening countries, including inviting China. There’s a trade-off. If you invite China, there are many issues that you cannot agree on […] like using AI towards mass surveillance or facial recognition. […] But China is a massive part of the world and AI development. […] It would seem very futile to try to have a truly international dialogue about AI that would exclude China. So I think it was significant to get a declaration, even though it was a shallow declaration. It doesn’t impose any binding obligations; it affirms that we all wanted to have more accountability, but we are going to do it recognizing our differences. […] I think it’s not nothing, it’s still important to have these conversations, but we are very far from pursuing an international binding AI treaty.”
The AI Safety Summit made it clear that the U.S. too had joined the battle to regulate AI. The White House released an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence just two days before the summit took place. At the summit, Vice President Kamala Harris was on the offensive, stressing America’s leading role in AI regulation as the world’s foremost technology power. With this in mind, what is the risk of regulatory fragmentation?
“Fragmentation is never good […] When you have gaps in regulation, that can mean that even well-developed regulatory frameworks in some markets can be offset by gaps elsewhere. The more fragmentation there is, the harder it is to offer these AI systems globally. […] Ideally we would have more of a globally harmonized approach, but that’s just not going to happen. At least when we understand the differences, we can manage them better. The EU is the furthest along with its AI Act (assuming it is still adopted this year or early next year); it’s the only one that is comprehensive, horizontal, and binding. China has legislated already, but it’s more of a piecemeal approach which incrementally allows China to build on existing legislation. China is doing it differently than the EU, but it is pursuing binding legislation […] The US Executive Order is very much a manifestation of what I talk about in the book, that the US is moving towards the EU rights-based model. And [the US] seems to be endorsing a very similar worldview [to the EU], with concerns about privacy, non-discrimination and safety risks. But it’s still an Executive Order. It’s not the same as passing a law; it’s more vulnerable to being revoked. The US is not yet the same kind of player as the EU in regulation,” Bradford says.
Tech Wars And The EU-US-China Triangle
While Digital Empires is about three competing models, it is hard not to think of the China-US competition in tech and tech regulation. The U.S. is targeting the Chinese technology sector as a whole, trying to stave off China’s rise as a technological superpower, whether it be in 5G, nanochips, dual-use items, or artificial intelligence.
Bradford says: “The tech war can further escalate. We’ve seen the US gradually ramping up and expanding the set of technologies that fall within the scope of export controls. […] We see two forces battling. We see a dynamic towards escalation, because I think the tech war will continue — I don’t really see a truce anytime soon. There is too much at stake in terms of geopolitical battles, ideological disagreements, China’s ambitions, the US’s discomfort with that. I think that the tech war will continue to escalate and we will see pressure towards decoupling. But at the same time the economies are so intertwined […] hence we also see pressures towards restraint. Escalation alternates with de-escalation.”
On tech regulation, the EU and the U.S. have often seemed to be at odds in recent years. Several high-profile American officials — including Commerce Secretary Gina Raimondo — chastised the EU’s Digital Markets Act as unfairly targeting U.S. companies. The same seemed to be happening with the AI Act, with OpenAI threatening to leave the EU and Google delaying the release of its chatbot Bard.
Until recently, it appeared that the EU was once again racing ahead with regulation. The U.S., for its part, was concerned about legislating too fast and hindering its AI leadership in the race with China. On the DMA, Bradford says: “I think [the U.S. Administration was] responding to some degree of corporate pressure that was concerned that this was an anti-American campaign […]. But at the same time, if you look at many of the bills pending in the US Congress, they are very similar to the DMA. […] There are many lawmakers who would like to see exactly DMA-type legislation being enacted in the US. If you look at some of these cases by the heads of enforcement, like Lina Khan at the FTC and [Jonathan] Kanter at the DOJ, it’s a pretty strong pro-enforcement approach. […] The US was a little bit divided in how it thought about the DMA because many in the US actually support it.”