
The Virus and the Machine: Why We Need to Talk About Global AI Governance


The growing use of artificial intelligence (AI) tools and technologies in public health interventions raises novel challenges for global governance.

As the Covid-19 pandemic has swept the globe, the use of cutting-edge artificial intelligence (AI) technologies and tools by governments (and contracted private organisations) in their fight to contain virus outbreaks has received glowing headlines. STEM research in AI, data science and machine learning will no doubt be a major beneficiary of this renewed interest in applying AI to major global challenges, and rightly so.

However, the use of AI to protect public health and safety also raises vitally important questions for political, social and ethical inquiry: above all, what global regulations should govern the use of AI technologies and who should set and enforce them? As Covid-19 accelerates the embedding of AI within the critical substrate of our modern social, political and economic systems, what are the prospects for regulatory coordination on end-user privacy protections?  Will policymakers underwrite public trust in these new technologies by ensuring that AI is not deployed for private gain at the expense of larger societal goals?

Early signs are not good. Governing AI is a formidable task amid revolutionary change across multiple scales and systems. The global governance landscape for AI is fragmented and underdeveloped, facing the triple challenge of facilitating cooperation between states, ensuring the participation of powerful private companies and keeping up with rapid technology development. Despite its potential to profoundly affect our ability to address other global challenges, AI is not explicitly considered in global policy agendas such as the Sustainable Development Goals (SDGs).

Risk scholarship has long flagged misaligned artificial general intelligence (AGI) as a global catastrophic risk, as illustrated by Bostrom’s famous “paperclip maximiser” thought experiment, which highlights the existential dangers posed by AI absent machine ethics. Elon Musk has declared AI “more dangerous than nukes.” The Bulletin of the Atomic Scientists recently expressed concern at the “frenzied pace” of AI development and the emergence of new destabilising technologies facilitating “cyber-enabled information warfare.” Underneath the sometimes sensationalist claims of existential threat lie more subterranean risks, including that AI may copy and “bake in” human biases. This could be particularly consequential for public health, where implicit biases help explain disparities in health outcomes. Indeed, the Director-General of the World Health Organization (WHO) has warned that “we’re not just fighting an epidemic; we’re fighting an infodemic” of fake news, which threatens to undermine public trust in scientific advice and government guidance.

Research shows that, unlike more immediate or emotionally compelling dangers, “creeping” risks such as large-scale climate change or misaligned AI are not registered as urgent moral imperatives, resulting in wishful thinking, self-defensive reactions or hostile denial of scientific findings. The techno-utopianism that permeates the computer engineering and corporate culture surrounding the development of AI may also contribute to the minimisation of its long-term risks. When it comes to autonomous AI weapons systems, for example, experts on the ground voice concern over time already wasted, the short-term limitations of current regulation, the absence of preventive action, and the lack of effective containment. More insidious threats posed by AI tools, such as “deepfakes” – synthetic media in which a person in an existing image or video is replaced with someone else’s likeness – often go under the radar. EU regulators are only now scrambling to respond to the use of intrusive facial recognition technology by member states.

Circling back to Covid-19, AI has spearheaded rapid responses to the pandemic which would have been unthinkable even a decade ago, such as speeding up genome sequencing, forecasting the evolution of outbreaks, and accelerating the slow and costly process of vaccine research through modelling and simulations. In China, law enforcement agencies have used AI-assisted “smart helmets” – headgear equipped with an infrared camera, augmented reality glasses and facial recognition technology that can detect anyone with a fever within a radius of five metres. Police in Dubai and Italy have also begun to use these surveillance helmets, leading some to conclude that this may be the “future normal.”

It does not take much imagination to see that many of these technologies could easily be repurposed as tools for population control and mass surveillance. The ability of AI-empowered technologies to track people’s movements and monitor compliance with government directives underpins the striking success of early Covid-19 containment policies in countries such as Israel and Singapore. However, these applications are setting off alarm bells regarding their intended and unintended consequences for individual freedoms, control of critical public infrastructure, and potential abuses of governmental authority, especially in illiberal regimes with weak checks and balances.

What the future holds depends upon the actions of informed and engaged citizens and opportunities for meaningful civic engagement on steps to reduce the risks posed by AI technologies and tools. When it comes to global AI governance, related ground-clearing work has been undertaken by major organisations, such as the Chatham House Global Commission on Internet Governance. Urgent questions include: To what standards should we hold governments when it comes to how they use AI-powered tools on their populations? What kinds of collective discipline mechanisms should be triggered when states use these tools oppressively? What educational resources should be provided to make sure that new technology is accessible to all? What is the role of the global multilateral apparatus in monitoring and evaluating AI governance practices?

The UN Secretary-General, António Guterres, has recently highlighted the “new and dangerous risks” posed by AI, which demand long-term technology education strategies, along with social protections and flexible regulatory frameworks. However, his call for “a multipolar world with solid multilateral institutions” capable of coordinating an effective response to global risks looks increasingly untenable against a backdrop of antagonistic great power rivalry, which some observers have dubbed Cold War 2.0. The Covid-19 crisis has exposed and exacerbated existing weaknesses in global governance systems, with the WHO failing to serve a vital global coordination function and powerful states actively undermining collaborative capacity.

Covid-19 has given us all a vivid reminder of the dangers posed by global systemic risks to the protection and safeguarding of human life in our ever more global civilisation. However, it has also unleashed powerful new AI-driven technological forces which are rapidly changing social relations on a vast scale, the effects of which are impossible to know in advance. The age-old debate on liberty versus security looms large over this critical historical juncture for AI technology, with decisions made today on its deployment – absent robust safeguards – likely to reverberate for decades to come.

Tom Pegram is an Associate Professor in Global Governance at the University College London (UCL) Department of Political Science/School of Public Policy and Deputy Director of the UCL Global Governance Institute.

Buğra Süsler is a Teaching Fellow in International Organisations & International Conflict and Cooperation at UCL.

Article published on the UCL website on July 27, 2020