Global AI governance and the Brussels effect

Last week, the #GLOBE webinar hosted a presentation by Prof. Anu Bradford on her new book "The Brussels Effect: How the European Union Rules the World." In it, the author convincingly demonstrates how the EU achieves global regulatory reach through the setting of industry standards and norms for policy areas such as consumer safety, environmental protection, and competition rules.

Now, as the EU finds itself squeezed between China and the US in their race for artificial intelligence (AI) leadership, many European policymakers are hoping to replicate the Brussels effect in the domain of algorithms, big data and automated decision-making technology.

This blog post provides an overview of recent developments in global AI governance. It also lays out the path ahead, shining some light on plausible next steps for the EU as it struggles for greater strategic autonomy and future tech leadership.

THE ROAD SO FAR

In 2020, the – still nascent – global AI governance landscape is characterized by a patchwork of initiatives from a wide range of stakeholders. Multinational corporations and interest groups, individual governments, international organisations, and multilateral alliances all compete for authority as the world nudges towards a more solidified legal and governance framework. Notable recent developments at the international level include the creation of the Global Partnership on AI, an alliance of democracies that will be hosted by the OECD; its (older) private-sector namesake, the Partnership on AI, which gathers some 100 NGOs, businesses, and academic institutions from 13 countries; the Council of Europe's Ad Hoc Committee on AI (CAHAI), which is exploring options for an international AI treaty; as well as ongoing work in various UN organisations and fora.

In this evolving global AI governance landscape, a driving force is the geopolitical rivalry between China and the US, which have come to see AI mastery as a defining edge in their 21st-century power struggle. The US has long dominated the development and production of many cutting-edge technologies, from personal computers and smartphones to internet platforms and cloud computing. Thus, it comes as no surprise that it is also leading the AI revolution. In recent years, though, China has successfully embarked on an aggressive catch-up and leapfrogging strategy, making enormous gains in sectors such as network technology, robotics and even quantum communications. The geopolitical implications of this race for technology leadership have been exposed by the fallout from the US ambition to exclude the Chinese telecoms suppliers Huawei and ZTE from its supply chains – and to convince its allies to follow suit.

In all this, Europe - despite its strengths in fundamental research and several niche technologies - has found itself relegated to an observer spot on many of the most relevant emerging technologies. Not a single one of the twenty largest internet companies comes from Europe. When looking at investment activity, patent applications, or other common metrics, Europe has been lagging behind both China and the US in digital innovation. This is especially true for AI, a transformative enabling technology with monumental economic, societal, and military implications.

Recognizing the importance of maintaining a strong scientific and industrial base and of not becoming dependent on others in yet another key technology, the EU and its Member States have turned up the heat over the last few years. Consultations were conducted, strategies written, and investments channelled. While these measures are slowly beginning to bear fruit, Brussels still has an ace up its sleeve: the regulatory and legislative power that gave the Brussels Effect its name.

As a preparatory step, the European Commission published a White Paper earlier this year proposing possible approaches to regulating and governing the technology's development and application. Building on previous communications and reports, the goal is to chart a European third way for AI development, framed as "human-centric", "ethical", and "trustworthy". If (or rather, when) translated into hard law, this will undoubtedly have repercussions well beyond the EU's direct jurisdiction.

With a report approved last week, lawmakers in the European Parliament signalled their general support for the Commission's approach. Some ideological differences exist among Member States as to what exactly the legislation should look like - with some countries advocating a more innovation-friendly soft touch, and others demanding a strict and comprehensive regulatory framework. Nevertheless, the overall mood seems to be that the EU can afford neither to miss out on the enormous economic value of the AI industry nor to be a mere rule-taker when it comes to defining global AI standards and norms.

Thus, it seems likely that the EU will make good on its promise and, within a few months, introduce a legislative package that would propel it to the forefront of international stakeholders able and willing to shape global AI governance.

THE WAY FORWARD – AT HOME…

For this to work out, the EU and its Member States need to deliver on several fronts. First, they have to do their homework when it comes to reviewing and updating national strategies in line with the coordinated plan on AI - an exercise first carried out in 2018 to align Member States with each other and with the Commission's actions supporting AI development. Here, the critical question will be whether approaches converge and whether individual countries' initiatives are sufficiently geared towards complementarity and the harvesting of cross-border synergies.

Second, the Commission will use the extensive feedback gathered in response to its White Paper to propose a legislative package in early 2021, which then has to be adopted through the ordinary legislative procedure. While the European Parliament seems generally in line with EU action on this file, last year's controversy around the Copyright Directive and other contentious digital policies has shown that one should always expect surprises. Before turning into law, the proposal would also have to be agreed upon by the Council, though it appears that the preceding exercise of coordinating national action plans, as well as ongoing informal ministerial exchanges, will give the Commission a good indication of what it can and cannot get past national governments. Hence, it seems less likely that the Council would act as a deal-breaker at that stage.

Something much harder to predict, but equally important for defining the EU's fate as a rule-maker in AI, is how markets will react. As Anu Bradford argues, the Brussels Effect only works because of the global reach of European industrial and commercial champions. These are almost non-existent in the AI domain. However, the recent experience with the GDPR, which influenced data protection discussions and privacy policies around the globe despite the scarcity of globally dominant European online platforms, may give policymakers in Brussels reason to be optimistic. Ultimately, investors and entrepreneurs tend to avoid legal uncertainty. By being an early mover and providing a clear regulatory framework, the EU could attract investments, foster innovation, and generate consumer trust, thus offsetting some of the harms of red tape.

Moreover, the Commission has chosen to engage with industry and other stakeholders through various channels, the most promising being the European AI Alliance. This participatory approach – mirrored in the extensive public consultations of the White Paper process – is likely to shape market reactions positively for several reasons: it pre-emptively aligns business interests with policymakers' preferences, gives industry time to anticipate and prepare for future measures, lends additional legitimacy and acceptance to the Commission's efforts, and generally improves the quality of legislative proposals.

… AND ABROAD

In addition to the internal arena, Brussels must not lose sight of the international level if its ambitions to become a rule-maker in global AI governance are to be realised. The Commission will likely continue its constructive engagement and collaboration with other international organisations such as the OECD and the UN. Yet it is paramount that large Member States such as Germany and France also promote a European (rather than a German or French) vision of AI in international fora such as the G7 and G20. While it is a positive sign that the EU is amongst the signatories of the G7-born Global Partnership on AI, it would send a strong signal if more individual Member States also signed up (to date, Slovenia is the only EU Member State to have joined the G7 signatories France, Germany and Italy). By the same token, it is high time that the three remaining Member States - Bulgaria, Croatia, and Cyprus - signed up to the OECD's AI Principles, adopted in 2019 to promote AI that is trustworthy and respects human-centred and democratic values.

Furthermore, as the EU's sister organisation, the Council of Europe, makes advances towards an international treaty on AI, policymakers on both sides should avoid a false sense of competition or jurisdictional rivalry. Instead, they need to work hand in hand towards the creation of a stable, rules-based and fair governance system that is conducive to the promotion of human rights, the rule of law, and democracy.

Lastly, the future relationship with the UK after Brexit will be crucial. The UK has been the single biggest contributor to European investment in AI, boasts a large number of digital unicorns and AI-related start-ups, and is home to some of the world's leading AI research centres. Any future EU laws on AI need to take this into account and look for avenues that allow for fruitful cooperation and partnerships, while at the same time attracting more AI-related investment and research activity to the EU's single market.

If the EU and its Member States manage to agree on a common, ambitious AI strategy that is flanked by sensible regulation, the early-mover benefits of providing legal certainty, coupled with its giant consumer market, may make up for some of its current disadvantages in AI research and deployment. In such an optimistic scenario, the Brussels effect may well apply to global AI governance. If, on the other hand, Member States fail to reach consensus or to provide the necessary resources, fragmentation, legal confusion, and a lack of funding will cement the EU's third place in the AI race.

Lewin Schmitt is Pre-Doctoral Fellow at Institut Barcelona d’Estudis Internacionals (IBEI) and a former Policy Analyst at the European Political Strategy Centre (EPSC), the European Commission’s in-house think tank.