Global Trends in AI Governance: Evolving Country Approaches

© 2024 International Bank for Reconstruction and Development / The World Bank Group
1818 H Street NW, Washington, DC 20433
Telephone: 202-473-1000; Internet: www.worldbank.org

DISCLAIMER
This work is a product of the staff of The World Bank with external contributions. The findings, interpretations, and conclusions expressed in this work do not necessarily reflect the views of The World Bank, its Board of Executive Directors, or the governments they represent. The World Bank does not guarantee the accuracy of the data included in this work. The boundaries, colors, denominations, and other information shown on any map in this work do not imply any judgment on the part of The World Bank concerning the legal status of any territory or the endorsement or acceptance of such boundaries.

RIGHTS AND PERMISSIONS
The material in this work is subject to copyright. Because the World Bank encourages dissemination of its knowledge, this work may be reproduced, in whole or in part, for noncommercial purposes as long as full attribution to this work is given. Any queries on rights and licenses, including subsidiary rights, should be addressed to the Office of the Publisher, The World Bank, 1818 H Street NW, Washington, DC 20433, USA; fax: 202-522-2422; e-mail: pubrights@worldbank.org.

Document and section covers are created using photos by: Sharath G. / Pexels; Antoni Shkraba / Pexels; David Kwewum / Pexels; Iqbal Farooz / Pexels; Abbakar Saeeyd / Pexels; Cottonbro Studio / Pexels; Freepik.

Acknowledgements
This paper is a product of the Digital Transformation Vice Presidency at the World Bank Group. It was prepared by Sharmista Appaya (Lead, AI Business Line & Task Team Leader) and Jeremy Ng (Consultant). The team would like to thank the peer reviewers David Leslie (Director of Ethics and Responsible Innovation Research, Turing Institute), David Satola (Lead Counsel), Yan Liu (Senior Economist), Marelize Gorgens (Senior Specialist), and Nay Constantine (Digital Development Specialist) for their constructive comments. The paper also benefited from inputs from Yolanda Lannquist (The Future Society). Christine Qiang and Peter Kusek provided overall guidance. Thanks to Sajid Chowdhury (Director, Big Blue) for design and layout assistance. The findings, interpretations, and conclusions expressed in the paper and case studies are entirely those of the authors. They do not necessarily represent the views of the World Bank Group and its affiliated organizations or those of the Executive Directors of the World Bank or the governments they represent.

Contents
Executive Summary 7
Section 1: Introduction and Background 10
1.1. Introduction 11
Section 2: Enabling Foundations for AI 15
2.1. Digital and Data Infrastructure 16
2.2. Human Capital (AI and Digital Readiness) 19
2.3. Local Ecosystem 20
Section 3: The Promise and Perils of AI 22
3.1. Challenges in Governing AI 28
Section 4: Regulatory and Policy Frameworks 30
Tool 1: Industry Self-Governance 35
Tool 2: Soft Law 38
Tool 3: Hard Law 46
Tool 4: Regulatory Sandboxes 60
Section 5: Dimensions for AI Governance 63
Section 6: Stakeholder Ecosystem & Institutional Frameworks 71
6.1. Public and Regulatory Bodies 73
6.2. Private Sector 79
6.3. Civil Society and Direct Public Participation 81
6.4. International Community 82
Section 7: Guidance for Policymakers 84
7.1. Key Considerations 86
7.2. Looking to the Future 89
Glossary 90
Annex: Sample Country Approaches to AI Governance 92

Acronyms
AI Artificial Intelligence
API Application Programming Interface
CCPA California Consumer Privacy Act
CDDO Central Digital and Data Office (UK)
CEN European Committee for Standardization
CENELEC European Committee for Electrotechnical Standardization
DFFT Data Free Flow with Trust
DPI Digital Public Infrastructure
DSIT Department for Science, Innovation and Technology (UK)
EU European Union
FDI Foreign Direct Investment
G7 Group of Seven
GDP Gross Domestic Product
GDPR General Data Protection Regulation
GenAI Generative AI
GPAI Global Partnership on AI
ICT Information and Communication Technology
IEEE Institute of Electrical and Electronics Engineers
ILO International Labour Organization
IMF International Monetary Fund
ISO International Organization for Standardization
ITU International Telecommunication Union
LGBT Lesbian, Gay, Bisexual, and Transgender
LGPD General Data Protection Law (Brazil)
LLM Large Language Model
MOOC Massive Open Online Course
NIST National Institute of Standards and Technology
OECD Organisation for Economic Co-operation and Development
OSTP Office of Science and Technology Policy (US)
PAI Partnership on AI
PDPA Personal Data Protection Act (Singapore)
PPP Public-Private Partnership
RAM Readiness Assessment Methodology
RTA Responsible Technology Adoption Unit
SDG Sustainable Development Goal
UN United Nations
UNESCO United Nations Educational, Scientific and Cultural Organization
US United States
WB World Bank
WBG World Bank Group
WDR World Development Report

Executive Summary

As artificial intelligence (AI) becomes increasingly integral to global economies and societies, the need for effective AI governance has never been more urgent. The rapid advancement of AI technologies, coupled with their widespread adoption across many sectors such as healthcare, finance, agriculture, and public administration, presents both unprecedented opportunities and significant risks. Ensuring that AI is developed and deployed in a manner that is ethical, transparent, and accountable requires robust governance frameworks that can keep pace with technological evolution.

This report explores the emerging landscape of AI governance, providing policymakers with an overview of key considerations, challenges, and global approaches to regulating and governing AI. It examines the foundational elements necessary for thriving local AI ecosystems, such as reliable digital infrastructure, a stable and sufficient power supply, supportive policies for digital development, and investment in local talent. As countries navigate this complex landscape, the report highlights the need to encourage innovation while mitigating risks like bias, privacy violations, and lack of transparency, emphasizing the importance of sustainable growth and responsible AI governance.

Regulatory Approaches to AI Governance
The report outlines four key regulatory approaches to AI governance (industry self-governance, soft law, hard law, and regulatory sandboxes), each offering distinct advantages and challenges:

1. Industry Self-Governance
• Strengths: Can directly impact AI practices if integrated into business models and company cultures.
• Limitations: Non-binding; not appropriate for sectoral use cases with particularly high risks, e.g., the financial sector or healthcare; risk of 'ethics-washing'.

2. Soft Law
• Strengths: Soft law includes non-binding international agreements, national AI principles, and technical standards, providing adaptable frameworks that promote responsible innovation. Early governance efforts by intergovernmental bodies have set important precedents.
• Limitations: While soft law encourages innovation, it focuses on high-level principles rather than binding rights and responsibilities.

3. Hard Law
• Strengths: Binding legal frameworks provide clear, enforceable guidelines that ensure AI stakeholders comply with established standards and regulations.
• Limitations: Given the rapid pace of AI development, hard laws risk becoming outdated and can be extremely resource-intensive to implement.

4. Regulatory Sandboxes
• Strengths: These controlled environments allow for real-world experimentation with AI technologies, supporting innovation and providing valuable insights without exposing the public to unchecked risks.
• Limitations: Sandboxes can be resource-intensive and have limited scalability, making them less feasible for wide-scale governance across diverse sectors.

Key AI Governance Challenges and Considerations
AI systems are inherently complex and dynamic, with implications that touch on ethical, legal, and socio-economic aspects. Governing AI requires frameworks that promote responsible innovation and risk mitigation, ensuring that AI's benefits are distributed equitably while minimizing potential harms. Moreover, these frameworks must consider sector-specific issues and legacy concerns, particularly in areas like healthcare, finance, and public services, where AI harms can scale rapidly across populations.

One critical challenge is bias and fairness. AI systems, if not properly governed, can perpetuate and even amplify existing societal biases, leading to unfair outcomes, especially in sensitive sectors like criminal justice or healthcare. It is essential that governance mechanisms detect and mitigate bias at every stage of AI development and deployment. Legacy concerns, such as pre-existing societal inequalities, must also be addressed to prevent AI from entrenching or exacerbating these issues.

Another key issue is privacy and security. AI's reliance on vast datasets raises significant concerns about data privacy and security, particularly where sensitive personal information is involved. Robust data protection standards and privacy-preserving AI techniques are necessary to safeguard individual rights and maintain public trust in AI technologies.

Transparency and accountability are equally crucial. AI decisions must be explainable, and developers must be held accountable for the impacts of their systems. Clear standards for explainability, coupled with mechanisms for auditing and oversight, are vital to maintaining public trust. This is especially important in sectors like finance or government, where the stakes are high and transparency is critical to public confidence.

Lastly, sustainable growth depends on the presence of reliable digital infrastructure, adequate power supply, and a robust talent pipeline. For sectors like agriculture or public administration, where AI can significantly enhance service delivery and efficiency, these foundational elements are crucial. Policymakers must ensure that legacy infrastructure, which may not have been built with AI in mind, is updated to support sustainable and inclusive AI growth.

Key Takeaways
AI governance cannot rely on a single, universal approach, and no regulatory model works in isolation. The report stresses the importance of adopting a flexible, adaptable governance framework that evolves with both technological advancements and societal changes. Some key takeaways include:

• Adopting a Multi-Stakeholder Approach: Policymakers should engage diverse stakeholders, including industry, civil society, and academia, to ensure AI governance frameworks are inclusive, comprehensive, and aligned with ethical standards.

• Tailoring Regulatory Mechanisms: Countries must assess the maturity of their AI ecosystem, existing legal and regulatory landscapes, and available resources when determining the most appropriate regulatory mechanisms. A 'one-size-fits-all' approach is unlikely to work given the diversity of AI applications and risks.

• Promoting International Collaboration: AI governance is inherently global in scope. As AI technologies transcend borders, international cooperation will be essential to harmonize standards, address cross-border challenges, and ensure AI aligns with global public goods, human rights, and equitable development.

• Sector-Specific Considerations and Regulatory Legacies: AI governance frameworks must be tailored to the specific sectors they regulate, recognizing that different industries, such as healthcare, finance, agriculture, and public services, face unique challenges and risks. Additionally, these frameworks must consider the regulatory legacies of individual countries, ensuring that existing legal structures, sector-specific regulations, and data protection laws are integrated into new AI governance models.

The future of AI governance lies in a carefully balanced combination of regulatory mechanisms. Only through this tailored, multi-layered approach can AI's transformative potential be realized for the common good: driving inclusive growth, sustainability, and ethical progress.

Disclaimer: This report is intended to serve as a foundation for broader policy discussions and stakeholder consultations. It does not purport to provide legal, technical, or strategic advice but rather provides an overview of current emerging practices. Please note that issues related to AI adoption, strategic frameworks, and enabling infrastructure are covered in separate papers, which are currently under development.

Section 1: Introduction and Background

1.1. Introduction

Artificial intelligence (AI) is sparking interest among policymakers globally as a powerful tool to unlock new opportunities for sustainable development. Over 70 countries have already published AI policies or initiatives,1 with numerous more in progress around the world.
AI technology and applications are developing at record pace, as evidenced by the rapid and widespread adoption of generative AI (GenAI): new tools and applications which create original text, audio, image, and video content. One striking benchmark is the pace at which various technologies have permeated our lives. It took 75 years for fixed telephones to reach 100 million users globally. In contrast, mobile phones achieved this milestone in just 16 years, and the internet took only 7 years. The Apple App Store took 2 years, and, strikingly, ChatGPT reached this number in a mere two months. This unprecedented rate of adoption not only showcases the transformative potential of AI but also sets the stage for a major shift in global connectivity and economic systems.

The responsible adoption of AI has substantial potential to drive inclusive growth and economic development in emerging economies. Investing in AI and digital innovation prepares countries to generate new business models and participate in the global economy. According to UNESCO, AI may add US$13 trillion to the global economy by 2030 and increase global GDP by 1.2%.2 It can boost productivity and efficiency in key economic sectors and in public services to overcome resource gaps, ranging from health and education to transportation and finance. Strategic adoption of technologies can provide employment opportunities for youth, innovators, and entrepreneurs to participate in global AI value chains.

AI has the potential to significantly improve efficiency, optimization, and transparency across multiple sectors. For example, cross-sectoral applications such as language translation tools and customer service chatbots can increase access to public services, benefiting both users and administrators. In healthcare, AI tools can help address structural inequalities, shortages of qualified healthcare professionals or supplies, and accessibility barriers when applied responsibly.3 Similarly, in education, AI can support more inclusive platforms for young children, teenagers, adults, and people with disabilities.4 In agriculture, AI is used in precision farming, leveraging drone and satellite imagery to support climate adaptation and mitigation outcomes, including forest conservation and the better use of renewable energy. The public sector also benefits from AI, which assists regulators in, for example, the financial sector to support fraud detection and supervision activities.5

Although AI has been around for a long time, its impact today is markedly different. The explosion in popularity of GenAI has led to increased focus on AI policymaking and governance. Non-generative algorithmic systems and decision-making processes (referred to as traditional or narrow AI) have been widely used across both the public and private sectors in the past decades. Unlike other emerging technologies such as blockchain, which primarily appealed to niche markets, AI has had significant time to develop and mature, and has a proven track record of enhancing productivity across various sectors.

Policymakers should be aware of the nuances between narrow AI and GenAI, given that governance interventions might need to be tailored accordingly. GenAI models and large language models (LLMs) are versatile, as they are not limited to a specific pre-defined list of 'labels'. This represents a significant shift from narrow AI, traditionally used in specific applications like image recognition, product recommendation systems, and fraud detection, which depend to a large extent on pre-defined 'labels' on which to train the model. While narrow AI excels within a limited scope, GenAI demonstrates adaptability and competence across diverse contexts (see Box 2).

1 https://oecd.ai/en/dashboards/overview
2 https://unesdoc.unesco.org/ark:/48223/pf0000382570
3 https://www.mckinsey.com/industries/healthcare/our-insights/tackling-healthcares-biggest-burdens-with-generative-ai
4 https://www.oecd.org/en/publications/the-potential-impact-of-artificial-intelligence-on-equity-and-inclusion-in-education_15df715b-en.html
5 https://www.bankingsupervision.europa.eu/press/interviews/date/2024/html/ssm.in240226~c6f7fc9251.en.html#:~:text=It%20can%20analyse%20vast%20amounts,the%20work%20of%20banking%20supervisors.

Figure 1. Generative AI fits within the broader context of deep learning, a subset of machine learning. (Source: The World Bank 2024)

Box 1: Generative AI: A Primer
Generative AI applications are enabled by large language models (LLMs), which are trained on vast amounts of diverse and unstructured data, including original text, audio, images, and videos, to support the generation of new content. Users can interact with generative AI models through web or application interfaces by providing prompts, which the models use to generate original content in various formats. Leading generative AI models today include OpenAI's GPT series, Anthropic's Claude, Google DeepMind's Gemini, and Meta's Llama. These models can generate articles, synthesize text, write poetry, and even create code. They can also respond to questions, engage in discussions, explain complex scientific or social concepts, and provide extensive replies to precise questions and inquiries.

Investment in and adoption of these generative AI models has been fast paced. For example, OpenAI's ChatGPT application was released in November 2022 and reached 100 million monthly active users in two months, making it the fastest-growing consumer application in history.

Generative AI represents the next frontier in AI, building upon advancements in machine learning, gains in computing power, and the leveraging of extremely large datasets. Whereas a small number of companies are training and developing advanced models, numerous startups, nonprofits, universities, companies, and government actors are leveraging existing LLMs to develop their own AI applications. Smaller actors can access pre-trained AI models via their application programming interfaces (APIs) or model repositories, enabling them to create their own customized applications without the need to train complex models from the ground up (a minimal illustrative sketch follows this box). For instance, a startup or government organization might integrate a generative AI model into its applications for purposes such as language translation, customer service, educational content, and more.

Source: https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/
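To make the integration pattern described in Box 1 concrete, the sketch below shows how a small team might call a hosted, pre-trained LLM over an API instead of training a model itself. It is a minimal, hypothetical example: the provider URL, model name, and environment variable are placeholders rather than any specific vendor's actual interface, and a real deployment would add error handling, rate limiting, and human review of outputs.

```python
# Minimal sketch: building a translation feature on a hosted, pre-trained LLM.
# Endpoint, model name, and credentials below are placeholders; adapt them to
# whichever provider is actually used, per that provider's documentation.
import os
import requests

API_URL = "https://api.example-llm-provider.com/v1/chat/completions"  # hypothetical
API_KEY = os.environ["LLM_API_KEY"]  # never hard-code credentials

def translate(text: str, target_language: str) -> str:
    """Ask the hosted model to translate `text` into `target_language`."""
    payload = {
        "model": "example-model-name",  # placeholder model identifier
        "messages": [
            {"role": "system", "content": "You are a translation assistant."},
            {"role": "user", "content": f"Translate into {target_language}: {text}"},
        ],
    }
    resp = requests.post(
        API_URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    resp.raise_for_status()
    # Response shape follows the common chat-completions convention; verify it
    # against the chosen provider's documentation.
    return resp.json()["choices"][0]["message"]["content"]

print(translate("Where is the nearest health clinic?", "Swahili"))
```

The point of the pattern is that no model training, GPU cluster, or large dataset is required on the caller's side; the compute-intensive work happens at the provider, which is what lowers the barrier to entry for startups and government teams.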
Box 2: Distinguishing Narrow AI and Generative AI

Narrow AI (Traditional AI):
• Task-Specific: Designed to optimize the efficiency of well-defined, specific tasks.
• Pattern Recognition: Recognizes features in input data and correlates them with established patterns from training datasets.
• Output Type: Primarily used in cases where outcomes follow a predictable format, such as generating scores or providing probabilistic classifications.

Generative AI:
• Adaptive Learning: Learns and adapts from vast and diverse datasets.
• Content Creation: Capable of producing original content, including text, audio, images, and videos, based on input prompts.
• Output Type: Used in creative and dynamic tasks, such as generating articles, code, and images, and engaging in complex discussions. More likely to succeed with unstructured imagery or natural language interfaces.

Key Differences (illustrated in the sketch that follows this box):
• Learning Approach: Narrow AI often requires labeled training data, while generative AI learns from larger sets of unstructured data.
• Output Nature: Narrow AI tends to provide responses from a set range of options, whereas generative AI can generate dynamic, language-based, or visual responses.
• Flexibility: Generative AI can handle a wider variety of tasks and adapt to new data more fluidly than narrow AI.
• Infrastructural Requirements (compute and data): Traditional AI approaches can usually trace and evaluate the appropriateness of their training data and produce algorithms with relatively low computational cost. Generative AI algorithms require vast amounts of data and substantial compute resources for both model training and inference.

*Adapted from inputs from multiple sources.
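The contrast drawn in Box 2 between label-driven and prompt-driven systems can be shown in a few lines of code. The sketch below is illustrative only: the narrow-AI half trains a tiny text classifier on pre-defined labels (scikit-learn is a common choice but an assumption here, not something the report prescribes), while the generative half stands in for the kind of hosted-LLM call shown in the earlier sketch.

```python
# Narrow AI: a classifier confined to the fixed label set it was trained on.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["refund my payment", "my card was stolen", "reset my password"]
labels = ["billing", "fraud", "account"]  # pre-defined labels; no other output is possible

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
# The prediction is always one of the three trained labels, nothing else.
print(clf.predict(["someone used my card without permission"]))

# Generative AI: no fixed label set; the same model handles open-ended requests.
# `generate` stands in for an API call to a hosted LLM (see the earlier sketch).
def generate(prompt: str) -> str:
    raise NotImplementedError("call a hosted LLM here")

# generate("Draft a polite reply explaining our card-fraud procedure.")
```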
While AI offers numerous benefits, it also presents risks that need to be carefully managed by countries as they engage in the emerging AI economy. AI offers great potential to accelerate productivity and growth, expand economic opportunities, improve societal welfare, and promote inclusion. However, if not managed properly, AI tools and applications also pose significant risks to consumers that, if left unaddressed, could seriously impact fundamental human interests and negatively affect countries' economic growth and development trajectories. The World Bank Digital Progress and Trends Report series outlines several global efforts, concerns, and recommendations to address these challenges.6

Policymakers should play a proactive role in creating a trust framework for AI governance, promoting adoption of AI by encouraging responsible innovation with proportionate safeguards. This involves establishing comprehensive policy and regulatory frameworks, building the enabling foundations for AI innovation and ecosystems to thrive, and addressing human capital needs and access to digital infrastructure, computing resources, and datasets. Targeted policies can support AI adoption in key sectors and foster the growth of local innovation ecosystems. Additionally, AI harms can be managed through a combination of regulatory approaches, including binding laws and regulations, technical standards, international and national ethical principles, and private codes of practice. Looking ahead, international standards-setting and cooperation are important to guide responsible AI adoption for sustainable, inclusive, and resilient growth. These principles are illustrated with country examples throughout the report, showcasing developing strategies and practices in AI governance and regulation.

This report seeks to provide policymakers with an overview of current approaches to creating robust, fit-for-purpose national AI governance7 frameworks. To meet fast-changing technological and societal trends, agile and flexible policymaking is essential. Multi-stakeholder participation, especially consultation with consumers and affected communities, along with international and regional coordination, is crucial in the design and implementation of AI governance and policy frameworks. As AI development and deployment advance, policymakers must be informed, coordinated, and equipped to respond to both new opportunities and disruptions. Regulation, where needed, must be technology-agnostic, focusing on outcomes and principles. Societal and cyber resilience, AI and digital literacy and inclusion, and sustainability are also important considerations.

The report surveys different types of AI governance arrangements around the world, illustrated through country examples. Although it is too early to definitively say what has worked best, these examples highlight developing strategies and practices in AI governance and regulation. Section II highlights the foundational elements needed to create an enabling environment for AI; Section III outlines the promises and challenges of AI and the difficulties in regulating it; Section IV examines various regulatory tools and highlights some key principles for policymakers to consider as they design their approach to AI governance; Section V sets out key dimensions for AI governance, while Section VI outlines the stakeholder ecosystem and common institutional arrangements for oversight of AI; finally, Section VII looks to the future, offering some parameters and recommendations for policymakers as they develop their AI governance frameworks.

7 In this report, the term 'governance' refers to the broader framework of laws, rules, practices, and processes used to ensure AI technologies are developed and used responsibly. The term 'regulation' is used in a narrower sense to refer to binding legal or regulatory guardrails imposed on AI developers and deployers.

Section 2: Enabling Foundations for AI

Reliable digital infrastructure, sufficient and stable power supply, policies enabling digital development, and investment in local talent are some of the foundational requirements for local AI ecosystems. This section sets out essential prerequisites that can act as enabling foundations for countries seeking to harness the benefits of AI for sustainable development.

2.1. Digital and Data Infrastructure

The successful deployment of AI technologies in a country hinges on robust digital and data infrastructure. This foundation is essential to support the development, deployment, and scaling of AI applications across various sectors. Key components of this infrastructure include high-speed internet, data storage and management systems, and computational power.

a. High-Speed Internet
High-speed internet is the backbone of digital infrastructure. It ensures that data can be transmitted quickly and efficiently between devices, data centers, and cloud services. For instance, countries like South Korea and Singapore have achieved internet speeds exceeding 200 Mbps, enabling seamless AI operations and real-time data processing. In contrast, countries with slower internet speeds face significant delays in data transmission, hindering AI application performance.

b. Devices
The availability of devices such as computers, smartphones, and IoT devices plays a crucial role in the development, deployment, and utilization of AI technologies. These devices gather vast amounts of data essential for training AI models and enable real-time processing through edge computing, reducing latency and enhancing privacy. They also democratize AI by making it accessible to a broader population, allowing more people and organizations to develop and benefit from AI technologies. However, the share of mobile phone owners is only 49 percent8 in low-income countries. This lack of access hinders inclusive AI growth and constrains the collection of diverse and representative data crucial for developing relevant AI algorithms.

c. Data Storage and Management Systems
AI applications generate and rely on vast amounts of data. Efficient data storage solutions, such as data lakes and warehouses, are critical to managing this data and training AI models. Moreover, proper data management systems ensure data integrity, accessibility, and security. According to a report by Gartner, global spending on data storage is expected to reach $25 billion by 2025, reflecting the growing importance of this infrastructure component.

8 https://www.worldbank.org/en/publication/digital-progress-and-trends-report?cid=ECR_LI_worldbank_EN_EXT
d. Computational Power
Compute capacity, the ability to store, process, and transfer data at scale,9 is crucial for training and deploying AI models and applications. High-performance computing (HPC) and graphics processing units (GPUs) are pivotal in this context. Affordable access to international cloud computing services is a valuable resource for both training (teaching a model to recognize patterns in data) and AI inference (applying the trained model to new data to generate predictions or decisions). Training requires significantly more computational power than inference. For example, training OpenAI's GPT-3 involved processing 570 gigabytes of text data using thousands of GPUs over several weeks, whereas inference tasks using GPT-3 require approximately 1-2 orders of magnitude less compute power, often needing only a single GPU or a small cluster of GPUs for real-time processing.10

However, a number of countries face challenges in scaling their computational infrastructure. Reliance on international cloud computing services can be expensive and may not always meet the specific needs11 of local AI practitioners. Furthermore, dependence on external providers can pose risks related to data sovereignty, privacy, vendor lock-in, and security. Investments in HPC and local cloud infrastructure are crucial for fostering a sustainable and competitive AI ecosystem. Regional collaborations can consolidate resources towards shared data centers.

9 Definition of compute by Tony Blair Institute for Global Change. Retrieved from https://www.institute.global/insights/tech-and-digitalisation/state-of-compute-access-how-to-bridge-the-new-digital-divide
10 https://www.springboard.com/blog/data-science/machine-learning-gpt-3-open-ai/
11 https://indiaai.gov.in/
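The training-inference gap described above can be sized with a widely used rule of thumb that is not from this report: training a transformer costs roughly 6 × parameters × training tokens in floating-point operations, while generating one token at inference costs roughly 2 × parameters. The sketch below applies these approximations to illustrative GPT-3-scale numbers; the figures are order-of-magnitude estimates, not measurements.

```python
# Back-of-the-envelope compute estimate, assuming the common approximations
#   train_flops ≈ 6 * N * D     and     inference_flops_per_token ≈ 2 * N
# where N = parameter count and D = training tokens. Illustrative numbers only.

N = 175e9   # GPT-3-scale parameter count (assumed)
D = 300e9   # assumed number of training tokens

train_flops = 6 * N * D          # ~3.2e23 FLOPs for the full training run
per_token = 2 * N                # ~3.5e11 FLOPs to generate one token
answer_flops = per_token * 500   # serving a 500-token answer

print(f"Training:                {train_flops:.1e} FLOPs")
print(f"One generated token:     {per_token:.1e} FLOPs")
print(f"500-token answer:        {answer_flops:.1e} FLOPs")
# The ~1e9 ratio is why training needs GPU clusters running for weeks while a
# single modern accelerator can serve individual requests in real time.
print(f"Training / answer ratio: {train_flops / answer_flops:.1e}")
```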
For example, the European High Performance Computing Joint Undertaking (EuroHPC JU)12 is a significant initiative aimed at pooling resources across European countries to develop a world-class supercomputing ecosystem. Moreover, as demand for edge computing grows, investments in local infrastructure become even more critical. Edge computing reduces latency by processing data closer to where it is generated, which is particularly important for applications requiring real-time processing and decision-making.13

It should be noted, however, that mitigating the environmental impacts of AI is also an important consideration. Evidence shows sharply increased water and electricity consumption due to AI training and development. Developing more energy-efficient algorithms and sustainable AI infrastructure powered by clean energy is crucial for addressing these challenges and ensuring long-term sustainability and competitiveness.14

e. High-Quality Multimodal Data
High-quality multimodal data is the backbone of the digital economy and a crucial element for AI development. This type of data encompasses various formats, including text, images, audio, and video, allowing AI models to understand and process information from multiple sources effectively. For example, combining textual data with visual and audio data can enhance an AI system's ability to recognize speech, understand context, and make accurate predictions.

However, disparities in digital access lead to underrepresentation within datasets, resulting in less representative training data. This, in turn, lowers the accuracy of model outputs and can potentially cause biased or harmful outcomes. The issue is further exacerbated when AI models are trained on foreign datasets that are not suited to local contexts, leading to inaccurate, unsafe, and discriminatory outcomes. To address these challenges, governments could increase the availability of AI-ready open datasets by digitizing local data, including public sector records, and making it publicly accessible. Synthetic data has also been explored to enhance datasets, but it should be carefully managed to avoid perpetuating biases.15 Public and private sector entities can then utilize these open datasets to develop consumer-beneficial products.

Countries that invest in collecting and curating diverse, high-quality multimodal datasets are better positioned to develop advanced AI applications that are more accurate, reliable, and capable of performing complex tasks across different domains. However, expanding data access must be balanced with good data governance and sharing practices, ensuring privacy, security, and fair representation to support trustworthy and inclusive AI systems. There is an urgent need for critical research at the intersection of data and AI governance. For example, the need to combat algorithmic bias in outputs by using larger, more representative, and inclusive datasets to train AI models may sit in tension with core data protection principles such as data minimization.

12 https://eurohpc-ju.europa.eu/about/discover-eurohpc-ju%5Fen
13 The State of AI Infrastructure at Scale 2024
14 Gartner. (2023). 'Market Guide for Cloud Infrastructure as a Service.' Retrieved from Gartner.
15 Datasets require sufficient real data in each generation to ensure their quality (precision) or diversity (recall). https://arxiv.org/abs/2307.01850
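A first step toward addressing the representativeness problems described above is simply to measure them. The sketch below is an illustrative check, not a method prescribed by this report: it compares group shares in a training dataset against reference population shares and reports per-group model error, using invented numbers throughout.

```python
# Illustrative dataset-representativeness and per-group error check.
# All figures are invented for demonstration.
from collections import Counter

population_share = {"urban": 0.45, "rural": 0.55}      # census-style reference
train_groups = ["urban"] * 800 + ["rural"] * 200        # who is in the dataset

counts = Counter(train_groups)
total = sum(counts.values())
for group, ref in population_share.items():
    share = counts[group] / total
    flag = "  <-- underrepresented" if share < 0.8 * ref else ""
    print(f"{group}: dataset {share:.0%} vs population {ref:.0%}{flag}")

# Per-group error rate on an evaluation set, stored as (group, correct?) pairs.
eval_results = ([("urban", True)] * 95 + [("urban", False)] * 5
                + [("rural", True)] * 80 + [("rural", False)] * 20)
for group in population_share:
    outcomes = [ok for g, ok in eval_results if g == group]
    error = 1 - sum(outcomes) / len(outcomes)
    print(f"{group}: error rate {error:.0%}")
# A large error gap (here 5% vs 20%) is exactly the kind of disparity that
# more representative data collection and curation aim to close.
```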
Box 3: Country Example: India
India's comprehensive AI Mission recognizes the critical importance of computational power as a prerequisite for AI development. With a strong emphasis on building and democratizing computational infrastructure, the mission has a budget outlay of Rs. 10,371.92 crore (USD 1.38 billion). Beyond computational power, the AI Mission encompasses several other key components designed to foster innovation, ensure ethical practices, and drive socio-economic transformation.

Key Components of the IndiaAI Mission:
• High-End Scalable AI Computing Ecosystem: The mission includes the establishment of a high-end scalable AI computing ecosystem with over 10,000 graphics processing units (GPUs), built through public-private partnerships. This infrastructure is designed to meet the demands of India's rapidly expanding AI start-up and research ecosystem.
• AI Marketplace: An AI marketplace will be developed to offer AI as a service and provide pre-trained models to AI innovators. This marketplace will serve as a one-stop solution for critical AI resources, facilitating easy access and promoting innovation.
• IndiaAI Innovation Centre: This centre will focus on the development and deployment of indigenous Large Multimodal Models (LMMs) and domain-specific foundational models across key sectors. It aims to bolster India's capabilities in AI and ensure the development of AI solutions that cater to local needs.
• IndiaAI Datasets Platform: A unified platform will be created to streamline access to quality non-personal datasets, ensuring that Indian startups and researchers have seamless access to the data necessary for AI innovation.
• IndiaAI Application Development Initiative: This initiative will promote AI applications in critical sectors by developing, scaling, and promoting impactful AI solutions with the potential for large-scale socio-economic transformation.
• IndiaAI FutureSkills: The program aims to mitigate barriers to AI education by increasing the availability of AI courses at undergraduate, masters, and Ph.D. levels. Data and AI labs will be established in Tier 2 and Tier 3 cities to offer foundational AI courses, ensuring that AI education is accessible across the country.
• IndiaAI Startup Financing: This pillar will support and accelerate deep-tech AI startups by providing streamlined access to funding, enabling them to undertake futuristic AI projects and drive innovation.

The IndiaAI Mission is poised to create highly skilled employment opportunities, leverage the country's demographic dividend, and enhance India's global competitiveness.

Source: https://indiaai.gov.in/

Box 4: Korea's Data Dam Initiative
South Korea has launched an ambitious project known as the Data Dam, aimed at enhancing the country's data infrastructure and fostering innovation in AI and big data. This initiative is part of the Korean New Deal, which focuses on digital transformation and green growth. The Data Dam project involves collecting and utilizing vast amounts of data across various sectors, including healthcare, transportation, and finance. By integrating data from multiple sources and making it accessible through a centralized platform, Korea aims to create a robust data ecosystem that supports the development of AI applications. Just as a water-storage dam collects, stores, and distributes water to the surrounding land for activities such as farming, the Data Dam project collects information from the public and private sectors to create useful data and releases it across all industries.

Key features of the Data Dam initiative include:
• Centralized Data Integration: Combining data from public and private sectors into a unified platform to break down silos and promote efficient data use.
• AI Hub Establishment: Creating an AI hub to provide companies and researchers with access to AI training data from the Data Dam and cloud-based high-performance computing resources.
• Sectoral Data Utilization: Focusing on sectors such as healthcare, transportation, and finance to drive innovation and improve services through AI applications.
• Data Privacy and Security: Implementing robust data protection measures to safeguard personal information and comply with regulations, thus building public trust.

The Data Dam initiative has already shown promising results, with significant progress in data collection, integration, and utilization.

Source: Ministry of Science and ICT, South Korea, 'Korean New Deal'; Korea Data Agency, 'Data Dam Initiative'; OECD

2.2. Human Capital (AI and Digital Readiness)

Governments must adapt education and training programs to prepare workforces for participation in the global AI value chain while mitigating labor market disruptions and potential job losses due to automation.16 The AI value chain offers employment opportunities across skill levels, from data collection and preparation to machine learning research and the management of data centers and cloud infrastructure. While some outsourced jobs may face automation, countries can target AI adoption towards technologies that leverage labor and address domestic needs. This effort should include both upskilling current workers to enhance their existing capabilities and reskilling individuals to equip them with new skills for emerging job opportunities. Additionally, it is crucial to focus on capacity building within government institutions to ensure they have the expertise required to effectively regulate and govern AI technologies.

16 Value chains are sequences of processes involved in the creation, development, deployment, and utilization of AI technologies and solutions, including data collection and processing, algorithm design and development, model training and optimization, and integration, among others. Subject of a forthcoming WB paper.
Training and other community stakeholders; and programs must emphasize inclusivity, particularly awareness and advocacy about AI and targeting rural communities and women, to its potential benefits to drive adoption prevent widening inequality and divides. both in the public and private sector. Moreover, there is a need for capacity Governments can lead by example, through building within government institutions to promoting internal AI adoption or offering ensure they have the expertise required subsidies to solve pressing challenges in key to effectively regulate and govern AI industry sectors such as healthcare, education, technologies. Education and training programs environment, energy or beyond. Some examples should also consider fostering so-called ‘soft of this include India’s ‘AI for All’ approach, a self- skills’, those that AI cannot easily replicate, such learning online program designed to raise public as judgment, critical thinking, and emotional awareness of AI for inclusive development, intelligence. Measures to improve digital highlighting AI startups addressing social literacy are important, as it remains a significant challenges in healthcare, language translation hurdle to the development, management, and agriculture.20 Additionally, governments adoption, and use of AI, particularly in low- can create an enabling environment for AI income countries (LICs). Addressing this investment through supportive policies, seed foundational challenge is crucial for enabling investment funds and co-financing, incentives the wider population to participate in and or even support public procurement by benefit from AI-driven economic opportunities. pre-certifying certain AI vendors, such as in Canada’s List of AI Suppliers, to facilitate the 2.3. Local Ecosystem integration and adoption of AI technologies.21 More details on the ecosystem can be This section does not go into the details found in our forthcoming toolkit on of the enabling ecosystem but is here to developing a country-specific AI Strategy. illustrate its importance as a foundational element for AI development. A robust ecosystem is essential for fostering AI development and adoption, complementing 17 Digital Progress and Trends Report 2023, The World Bank, 2024, https://openknowledge.worldbank.org/server/api/core/ bitstreams/95fe55e9-f110-4ba8-933f-e65572e05395/content 18 https://www.oecd.org/publications/the-impact-of-ai-on-the-workplace-main-findings-from-the-oecd-ai-surveys-of-employers- and-workers-ea0a0fe1-en.htm and https://www.oecd.org/els/the-impact-of-ai-on-the-workplace-evidence-from-oecd-case- studies-of-ai-implementation-2247ce58-en.htm 19 https://nexteinstein.org/ 20 AI for All, India 20 21 https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/list- interested-artificial-intelligence-ai-suppliers.html Enabling Foundations for AI Box 5: Singapore’s AI Apprenticeship Program Singapore’s AI Apprenticeship Program (AIAP) has successfully trained over 300 Singaporeans, equipping them with practical AI technical skills to meet the growing demands of the domestic AI ecosystem. This full-time program runs for 9 months and is structured into two phases: a 2-month intensive deep-skilling training followed by a 7-month real-world AI project. During the program, apprentices are paired with mentors and gain access to industry recruitment opportunities. 
The AIAP is fully funded by the government and includes a monthly stipend for apprentices, which varies based on their years of relevant work experience and qualifications. The program is inclusive, welcoming participants of various ages, with special provisions for Singaporeans aged 40 years or above who are eligible for an extension to gain additional business and technical hands-on experience. To be eligible for AIAP, applicants must be Singaporean citizens, graduates from a recognized university or polytechnic, and possess prerequisite programming competencies. AIAP is part of the national AI Singapore initiative, supported by the National Research Foundation and hosted by the National University of Singapore Source: https://aisingapore.org/aiap/ ; https://www.imda.gov.sg/resources/press-releases-factsheets-and-speeches/press- releases/2023/imda-leads-ai-skilling-to-build-ai-talent-pool 21 Global Trends in AI Governance Evolving Country Approaches Section 3 The Promise and Perils of AI 22 The Promise and Perils of AI Despite the transformative potential of AI For example, LLMs emit up to 550 tons of across multiple sectors, there are important CO2 during their training processes. Serious practical challenges to implementation – sustainability concerns also apply to model crucially, robust governance frameworks inference processes – Google attributes 60% are needed to ensure AI systems of its AI-related energy use to inference;23 are trusted by consumers. generating one image using AI uses the same amount of energy as charging a smartphone.24 AI systems present several existing risks Large tech companies are at risk of missing their that stem from their inherent limitations and climate targets – with Microsoft and Google the quality of the data they are trained on. both announcing in 2024 that they would miss One of the most prominent risks is bias and their sustainability targets set during previous discrimination. AI models can perpetuate years.25 There are also increasing concerns and even exacerbate existing biases if they regarding the water consumption needed to are trained on unrepresentative or biased cool the computing equipment housed within datasets. This can lead to unfair treatment and data centers – Microsoft has noted that 42% of outcomes, particularly for underrepresented the water it consumed in 2023 came from ‘areas and marginalized groups. For example, facial with water stress’.26 Addressing these challenges recognition systems have been shown to requires rethinking the dominant ‘bigger is have higher error rates for people with darker better’ paradigm and deepening appreciation skin tones compared to those with lighter of the value of smaller AI models, mandating skin tones22, raising serious concerns about greater transparency in terms of compute cost their use in law enforcement and surveillance. and energy usage, and promoting research that Additionally, the lack of explainability and focuses on resource efficiency27 – this requires transparency in AI decision-making processes collaborative efforts across sectors and borders, makes it difficult to identify, audit, and rectify robust policy frameworks, and ongoing research these biases, further compounding the risk of and development to ensure that AI technologies discrimination. As AI systems play an increasingly are implemented responsibly and equitably. 
AI systems present several existing risks that stem from their inherent limitations and the quality of the data they are trained on. One of the most prominent risks is bias and discrimination. AI models can perpetuate and even exacerbate existing biases if they are trained on unrepresentative or biased datasets. This can lead to unfair treatment and outcomes, particularly for underrepresented and marginalized groups. For example, facial recognition systems have been shown to have higher error rates for people with darker skin tones compared to those with lighter skin tones,22 raising serious concerns about their use in law enforcement and surveillance. Additionally, the lack of explainability and transparency in AI decision-making processes makes it difficult to identify, audit, and rectify these biases, further compounding the risk of discrimination. As AI systems play an increasingly significant role in decision-making processes across sectors, the lack of explainability remains a barrier to auditing and improving the models.

Moreover, while AI is being applied in use cases for environmental and climate protection, such as predicting and monitoring deforestation patterns or optimizing renewable energy systems, the AI model supply chain consumes a huge amount of energy, water, and other natural resources. This contributes to increased carbon emissions and potential environmental degradation, highlighting the need for sustainable practices in AI deployment. For example, LLMs emit up to 550 tons of CO2 during their training processes. Serious sustainability concerns also apply to model inference processes: Google attributes 60% of its AI-related energy use to inference,23 and generating one image using AI uses the same amount of energy as charging a smartphone.24 Large tech companies are at risk of missing their climate targets, with Microsoft and Google both announcing in 2024 that they would miss the sustainability targets set during previous years.25 There are also increasing concerns regarding the water consumption needed to cool the computing equipment housed within data centers; Microsoft has noted that 42% of the water it consumed in 2023 came from 'areas with water stress'.26 Addressing these challenges requires rethinking the dominant 'bigger is better' paradigm and deepening appreciation of the value of smaller AI models, mandating greater transparency in terms of compute cost and energy usage, and promoting research that focuses on resource efficiency.27 This requires collaborative efforts across sectors and borders, robust policy frameworks, and ongoing research and development to ensure that AI technologies are implemented responsibly and equitably.

Beyond well-documented risks such as biases and lack of explainability, newer challenges are emerging as AI technologies evolve. One such risk, associated with GenAI in particular, is the phenomenon of AI hallucinations, where AI systems generate outputs that are factually incorrect yet appear plausible. The complexity and opacity of these models make it difficult to predict and control when hallucinations will occur, posing significant risks in critical applications such as healthcare, legal advice, and education.

22 Buolamwini, J., & Gebru, T. (2018). 'Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.' Proceedings of the 1st Conference on Fairness, Accountability and Transparency. PMLR 81:77-91. Available at: http://proceedings.mlr.press/v81/buolamwini18a.html
23 https://arxiv.org/pdf/2409.14160
24 https://arxiv.org/pdf/2311.16863
25 https://arxiv.org/pdf/2409.14160
26 https://techcrunch.com/2024/08/19/demand-for-ai-is-driving-data-center-water-consumption-sky-high/
27 https://arxiv.org/pdf/2409.14160#page=13&zoom=100,48,86

Box 6: AI Risks
1. Bias and Discrimination: AI systems can perpetuate bias and discrimination due to unrepresentative datasets and a lack of transparency in algorithms.
2. Labor Market Disruption: The adoption of AI technologies can lead to significant labor market disruption, resulting in job losses and a widening digital divide.
3. Misuse of AI & Trust Erosion: AI can be misused for spreading misinformation, creating deepfakes, conducting cybercrime, interfering with elections, and facilitating fraud and scams, which erodes trust in public and private institutions.
4. Inequality and Access: There are growing gaps in inclusion and widening inequality based on differential access to AI technologies.
5. Environmental Impacts: AI systems, particularly those involving large-scale data processing and machine learning models, consume significant amounts of energy, contributing to environmental degradation and increased carbon emissions.
6. Cybersecurity Vulnerabilities: AI systems and applications are susceptible to various cybersecurity vulnerabilities due to their complexity and multiple points of vulnerability. LLMs and other foundation models currently lack adequate security requirements.28 Critical services and infrastructure may become inaccessible due to AI failures or targeted cyber-attacks.
7. Privacy and Data Protection: AI-driven surveillance and misuse of personal information pose significant privacy risks. Training AI models requires huge amounts of data, leading to significant concerns regarding mass data collection and processing of personal data.
8. Physical Safety Risks: Physical harm can additionally result from AI system failures, security breaches, or unintended AI behavior.
9. Explainability and Accountability: The lack of explainability and accountability in AI decision-making processes raises serious concerns, especially where end-users wish to challenge certain algorithmic decisions.
10. Risks Related to Deployment Context: Depending on the context in which the AI system is deployed, a range of risks can arise. For example, if not deployed appropriately, generative AI tools used by students may threaten learning quality by lowering retention due to a deepened dependency of students on AI tools. In healthcare contexts, AI systems used for disease prediction and diagnosis that are not robustly designed may lead to biased results, under- or mis-diagnosis, and potentially delayed treatment.
11. Geopolitical Risks: The development and deployment of AI in certain sectors can lead to geopolitical instability, e.g., by increasing fragility and conflict via the use of autonomous weapons.
12. Social and Cultural Impact: The integration of AI can disrupt social norms and lead to cultural homogenization.
13. Intellectual Property: Mass data collection raises concerns regarding the legality of using copyrighted material and other information protected by IP law to train AI models.
14. Psychological Impact: The influence of AI on mental health and human-AI interaction dynamics can have profound psychological effects.

Disclaimer: Non-exhaustive. Source: adapted and updated from https://openknowledge.worldbank.org/server/api/core/bitstreams/9040dbbb-8594-4083-a399-24592313f907/content

28 https://www.rand.org/pubs/working_papers/WRA2849-1.html

Box 7: Bias in an AI system leads to exclusion of families from childcare benefits in the Netherlands
An AI system employed by the Dutch tax authority inaccurately excluded eligible recipients from welfare benefits, causing significant negative repercussions. The Dutch tax authorities employed an AI tool to create risk profiles for identifying childcare benefits fraud. However, this system inaccurately labeled tens of thousands of families, often lower-income or from ethnic minorities, as fraudsters based on flawed risk indicators like having dual nationality or low income. As a result, many families faced severe consequences, including crippling debts to the tax agency, which pushed them into poverty, loss of child custody, and in some cases, suicide. More than one thousand children were taken into foster care. This incident underscores how AI bias and automation can lead to the inaccurate exclusion of vulnerable populations from important public assistance.29 It also highlights the need for robust regulations, algorithmic transparency, human oversight, and avenues for redress when automated decisions cause harm.

Source: https://www.politico.eu/article/dutch-scandal-serves-as-a-warning-for-europe-over-risks-of-using-algorithms/

29 https://www.politico.eu/article/dutch-scandal-serves-as-a-warning-for-europe-over-risks-of-using-algorithms/
While Addressing these new and evolving risks emerging markets and developing countries requires ongoing research, robust verification (EMDEs) are less exposed to job disruptions, mechanisms, and stringent oversight to ensure they are also less equipped to benefit from AI systems are reliable and trustworthy. 29 https://www.politico.eu/article/dutch-scandal-serves-as-a-warning-for-europe-over-risks-of-using-algorithms/ 30 https://www.imf.org/en/Publications/Staff-Discussion-Notes/Issues/2024/01/14/Gen-AI-Artificial-Intelligence-and-the-Future- of-Work-542379 25 Global Trends in AI Governance Evolving Country Approaches Box 8: Understanding AI Hallucinations AI Hallucinations occur where AI systems, particularly those powered by LLMs, produce outputs that are plausible sounding but factually incorrect or nonsensical. These errors occur because the AI generates text based on patterns and data it has been trained on, without an understanding of the real-world context or factual accuracy. Example of an AI Hallucination Consider an AI chatbot designed to assist with medical queries. A user might ask, ‘What are the symptoms of a heart attack?’ An accurate response would include symptoms such as chest pain, shortness of breath, and dizziness. However, an AI hallucination might generate an answer like, ‘Heart attacks can be treated effectively with green tea and meditation,’ which is misleading and potentially dangerous. Real-World Instance In 2020, OpenAI’s GPT-3 was noted for generating a response suggesting that ‘Ebola is caused by spirits.’ This statement is a clear hallucination, as Ebola is a viral infection caused by the Ebola virus, and the response lacks any scientific basis. Such instances highlight the critical need for verifying AI-generated information, especially in sensitive domains like healthcare. Mitigating Hallucinations To reduce the risk of AI hallucinations, it is crucial to: • Implement robust verification mechanisms to check the factual accuracy of AI outputs. • Use domain-specific training data to improve the contextual accuracy of AI models. • Continuously monitor and update AI systems to correct and learn from mistakes. • Ensure users of GenAI systems are correctly trained to identify and manage hallucinations. Source: Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’ Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. ACM Digital Library; Marcus, G., & Davis, E. (2020). ‘GPT-3, Bloviator: OpenAI’s language generator has no idea what it’s talking about.’ MIT Technology Review. MIT Technology Review. 26 The Promise and Perils of AI Box 9: Open-source AI amplifies benefits and risks ‘Open-source’ AI models are those whose source code is openly shared under a licensing model that grants users the right to access, modify, and redistribute code. The term is often associated with AI models that have widely available and publicly accessible components such as model weights, training data or code. 
Box 9: Open-source AI amplifies benefits and risks

'Open-source' AI models are those whose source code is openly shared under a licensing model that grants users the right to access, modify, and redistribute code. The term is often associated with AI models that have widely available and publicly accessible components such as model weights, training data or code. Several advanced AI models, including LLMs developed by major tech companies, are available under open-source licenses, enabling local AI practitioners to use them without paying licensing fees for context-specific projects, while also expanding the potential for misuse.31

The open-sourcing of AI models or public accessibility of model components can 'democratize' access to AI by allowing more actors to adapt models for local contexts, provided they have the necessary infrastructure, data, and skills. It also enables greater transparency, allowing external parties to conduct inspections, audits, research, and bug detection.

Conversely, open-source models can be more easily misused. Access to model weights can compromise the safety of models by allowing actors to remove safety guardrails, potentially generating harmful outputs.32 As AI systems become more capable, the potential for misuse and harm grows, and practitioners may lack the tools or awareness to apply models responsibly. Once LLM model weights are made public, it is infeasible to monitor, retract, or stop their use, as models may be copied and distributed.33 While many open-source projects use licenses that promote responsible use to a limited degree, governments and societies should anticipate and prepare for harms and misuse in the absence of comprehensive safeguards or global regulation.

Source: https://spectrum.ieee.org/open-source-ai-2666932122; BadLlama: cheaply removing safety fine-tuning from Llama 2-Chat 13B, https://arxiv.org/pdf/2311.00117.pdf; See page 3, 'for open-source models safety filters can simply be removed by deleting a few lines of code,' https://arxiv.org/pdf/2211.14946.pdf.

31 https://spectrum.ieee.org/open-source-ai-2666932122
32 BadLlama: cheaply removing safety fine-tuning from Llama 2-Chat 13B, https://arxiv.org/pdf/2311.00117.pdf. See also page 3, 'for open-source models safety filters can simply be removed by deleting a few lines of code,' https://arxiv.org/pdf/2211.14946.pdf
33 https://spectrum.ieee.org/open-source-ai-2666932122

3.1. Challenges in Governing AI

There are various challenges involved in governing AI. Some of the most pertinent include:

1. Keeping pace with technological advancements. One of the primary issues is the rapid pace of AI development. As highlighted by Stanford University's AI Index Report 2024, investment in generative AI accelerated to $25.2 billion in 2023, with applications spanning customer support, healthcare, autonomous vehicles, fintech, drones, legal tech, and manufacturing.34 This rapid evolution, often referred to as the 'pacing problem,' means that regulatory and governance frameworks struggle to keep up. Developing new laws and policies can take months or even years, during which AI technologies continue to advance, creating governance gaps.

2. Limited technical expertise and knowledge gaps. Another challenge is limited technical expertise within governments. Policymaking is hindered by knowledge gaps regarding AI technologies and their applications. Higher salaries in the private sector contribute to a brain drain, with a significant proportion of AI talent opting for private or international roles over government positions. For instance, only 0.7% of new AI PhD graduates in the United States and Canada choose to work in government roles.35 This lack of expertise makes it difficult to draft effective policy, regulatory, and governance measures.

3. Sector-Specific Governance Needs: AI governance needs to be tailored to different sectors, each with unique requirements, risks, and operational contexts. For example, the healthcare sector prioritizes patient privacy and safety, necessitating stringent regulations to protect sensitive health data and ensure the reliability of AI-driven diagnostic tools. Conversely, the financial sector focuses on fraud detection, risk management, and compliance with financial regulations. Similarly, the transportation sector must address safety and efficiency in AI applications for autonomous vehicles and traffic management. Powerful foundation models present unique governance challenges due to their broad applicability across various sectors. These models require comprehensive governance measures that go beyond traditional sector-specific approaches, necessitating coordination across multiple government entities and sectors. These sector-specific differences highlight the importance of developing customized governance frameworks to ensure responsible and effective AI implementation across diverse domains.

4. Cross-Jurisdictional Coordination: AI development, deployment, and use are often cross-jurisdictional, necessitating international coordination. AI models may be trained on datasets collected from numerous countries and accessed through international cloud services. Different stages of AI development occur in multiple jurisdictions with varying legal frameworks, making it challenging for individual countries to regulate the entire AI lifecycle. The material AI supply chain involves materials, hardware, and labor sourced from a wide array of countries across both the Global North and South. Without a coordinated global approach, disparate national policies can lead to regulatory arbitrage, inconsistencies, and potential loopholes, resulting in gaps or even a 'race to the bottom' in AI governance.

5. Complexity of AI Supply Chains: The complex supply chains of AI products, particularly generative AI and LLMs, present significant challenges for governance and accountability. These AI systems often rely on vast amounts of data sourced from multiple providers and specialized hardware and software components supplied by different vendors. The complexity and lack of transparency in these supply chains make it difficult to trace the origins of potential issues and identify practical points for regulatory intervention.

6. Balancing Innovation and Risk Mitigation: Ensuring governance frameworks take a proportionate approach to promoting AI innovation while mitigating potential risks is a delicate task. Disproportionate regulatory provisions can over-burden startups with limited compliance resources, while insufficient governance leaves individuals and society vulnerable to serious risks. Governing AI involves addressing complex ethical, technical, and socio-economic challenges; hence policymakers must create adaptable governance frameworks that provide clear guidelines and safeguards that enable rather than hinder responsible technological progress.

34 https://aiindex.stanford.edu/report/; Figure 4.3.3 and Figure 4.3.15, page 254.
35 https://aiindex.stanford.edu/report/; Figure 6.1.7, 'Employment of new AI PhDs (% of total) in the United States and Canada by sector, 2010–22', page 335.
Box 10: When should AI governance policies be introduced?

One of the core challenges of AI governance is correctly timing policy interventions. Early in the AI adoption lifecycle, we face an 'information' problem: given the rapid pace of cutting-edge AI development, it is difficult to predict how AI's critical features, uses, and risks will evolve over time. However, if AI adoption becomes widespread, policymakers may face a 'control' problem: exercising governance control over AI systems may become harder because AI approaches, applications and structures become entrenched in path-dependent ways.

Given the nature of this dilemma, it is impossible to set out a prescriptive, one-size-fits-all approach – however, two key principles may help policymakers navigate these issues for their local contexts. First, thinking of governance as an iterative, agile process can enable policy interventions to be tailored and updated as technology develops and new information is collected. Second, collaborative multi-stakeholder approaches to governance can increase the degree of openness and transparency regarding how governance decisions are made – enabling greater trust between all societal stakeholders and the technologies being governed.

Source: https://demoshelsinki.fi/2022/02/15/what-is-the-collingridge-dilemma-tech-policy/

Section 4: Regulatory and Policy Frameworks

Policymakers seeking to craft robust AI governance frameworks are faced with several complex challenges. On one hand, there is an urgent need to establish robust governance frameworks to ensure the ethical, fair, and responsible use of AI technologies. Without such frameworks, there is a risk of AI systems perpetuating biases, infringing on privacy, and making decisions without accountability. On the other hand, technology-specific governance interventions may provide clearer guidelines tailored to the unique challenges of AI, but they risk becoming quickly outdated due to the rapid pace of technological advancement. Conversely, tech-agnostic interventions, which focus on broader principles applicable across various technologies, offer flexibility and longevity but may lack the specificity needed to address AI's unique risks and opportunities. Striking the right balance between these approaches is critical to fostering innovation while safeguarding societal values and human rights.

For the potential benefits of AI to be realized, all societal stakeholders must trust the AI systems and institutions that they are engaging with. The UN High-Level Advisory Body on AI has noted that governance is a key enabler and precursor for responsible AI.36 They indicate that the creation of AI systems that work towards the Sustainable Development Goals (SDGs) cannot be guided solely through market forces or self-regulation by the private sector; they require concerted governmental and intergovernmental policymaking and coordination.37 Building on the approach set out in the World Bank's World Development Report 2021: Data for Better Lives, this paper sets out the different legal, regulatory and governance tools available for creating trust in the AI ecosystem, encompassing both safeguards (to prevent AI harms) and enablers (to facilitate and encourage responsible AI innovation).38 Where possible, this has been illustrated with country examples.

Some policymakers may be concerned about the risk of over-regulating nascent AI industries and stifling innovation. As such, it is important to ensure any regulatory interventions are proportionate and tailored to the risks, harms and potential societal impact of the AI systems being regulated. At the same time, policymakers should note that the empirical relationship between regulation and innovation is highly unclear;39 often, regulation is critical for creating a level competitive playing field for new market entrants while also creating legal certainty for established AI developers and deployers. Clear regulations help companies plan and invest with confidence, knowing the standards they must meet. Conversely, leaving AI systems unregulated risks exposing consumers to unacceptable harms and leaves critical decisions regarding AI deployment to market forces and private companies, potentially prioritizing profit over public interest. Therefore, considered and agile regulation is essential for encouraging responsible innovation while safeguarding end-users and vulnerable groups.

This section provides policymakers with a toolbox of AI governance instruments that they can use as a starting point for governing AI in their country contexts. For the purpose of this paper, we have identified 4 main types of regulatory approaches:

1. Industry self-governance
2. Soft law (including technical standards)
3. Hard law
4. Regulatory sandboxes

36 https://www.un.org/sites/un2.un.org/files/un_ai_advisory_body_governing_ai_for_humanity_interim_report.pdf, at p.8.
37 Id.
38 WDR 2021.
39 See e.g. https://www.ohchr.org/en/press-releases/2019/10/world-stumbling-zombie-digital-welfare-dystopia-warns-un-human-rights-expert?LangID=E&NewsID=25156; https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4753107.

For each of these regulatory tools, we have included examples from specific country contexts, discussing each tool's relative strengths, weaknesses, and policy tradeoffs. A summary of our analysis is set out in Table 1. This overview is not intended to be a comprehensive or exhaustive list of all AI governance interventions (given that such a list would quickly become outdated). The aim here is to provide an overview of current thinking around key tools to stimulate high-level policy debate.

This paper does not aim to set out a prescriptive list of 'best practices' – given the nascent state of AI governance and regulation efforts globally, it is too early to definitively state that some approaches work better than others. Instead, this paper aims to provide a toolbox of potential options that policymakers can consider and adapt for their local contexts.

It is also important to note that these regulatory tools are not discrete or stand-alone approaches – they do not operate in isolation; they are interdependent and mutually reinforcing, and often intersect with other legacy regulatory and policy frameworks, both horizontal and sector-specific. Effective national AI governance strategies will likely integrate multiple tools. Therefore, there is no 'one-size-fits-all' AI governance approach. Each tool has its own context-specific strengths and weaknesses. Policymakers should tailor any regulatory tool to their country's policy priorities and to the needs of local communities, to create an AI governance regime that suits their national policy objectives. It is imperative that policymakers do not import regulatory provisions or strategies from other countries without appropriate modifications and consultation with affected communities, the public, civil society, the private sector, and the international community.

The tools outlined below are intended to apply to all AI systems – however, certain interventions (e.g. AI Safety Institutes) are particularly tailored to the governance of frontier, advanced large-scale AI systems. These interventions will be flagged for the reader as needed.
Table 1. Governance Tradeoffs for AI Governance

Industry self-governance – Private ethical codes and councils
Examples: Microsoft Aether Committee and Responsible AI Standard Playbook; Google AI Principles; Bosch Ethical Guidelines for AI; IBM's AI Ethics Board; Partnership on AI (non-profit coalition on AI).
Benefits: 1. Can directly impact AI practices if integrated into business models and company cultures. 2. Requires minimal public sector supervision, intervention or resources to set up.
Risks: 1. May be vague and of limited practical use. 2. Not appropriate for certain sectoral use-cases with particularly high risks – e.g. financial sector or healthcare. 3. Non-binding, with no mechanisms for effective public oversight or enforcement. 4. Limited public input into design or implementation. 5. Risk of 'ethics-washing,' where ethical commitments are superficial. 6. Limited to a smaller subset of companies.

Soft law – Non-binding international agreements
Examples: OECD/G20 AI Principles; UNESCO Recommendation on the Ethics of AI; G7 Principles; UN General Assembly resolution on AI.
Benefits: 1. Can directly impact national AI policy, when supported with funding and technical advice. 2. Can have a global harmonizing effect.
Risks: 1. Non-binding. 2. Focus on high-level principles rather than specific rights and responsibilities. 3. Potential legal uncertainty due to vagueness/lack of practical impact.

Soft law – National AI principles / ethics frameworks
Examples: UK AI regulation principles (2023 white paper); US White House AI Bill of Rights; Australia voluntary AI Ethics Principles; Singapore Model AI Governance Framework for Generative AI.
Benefits: 1. Provides guidance for industry actors. 2. Agile and flexible; can adapt to technological advances. 3. Relatively low-cost to create and promote.
Risks: 1. Non-binding. 2. Potential legal uncertainty due to lack of clarity and practical implications. 3. Must be supported by mandatory transparency requirements to monitor uptake.

Soft law – Technical standards
Examples: IEEE P70xx series; ISO/IEC 23894:2023; NIST AI Risk Management Framework; UK AI Standards Hub; C2PA standards.
Benefits: 1. Provides technical means of operationalizing responsible AI principles. 2. Often have strong incentives for compliance. 3. Usually created through multi-stakeholder process.
Risks: 1. Well-resourced incumbents could have disproportionate influence. 2. Participation gaps for less developed states and civil society. 3. Time-intensive to develop.

Regulatory sandboxes
Examples: Colombia regulatory sandbox on privacy by design and default in AI projects; Brazil regulatory sandbox pilot for AI and data protection; Singapore AI Verify toolkit.
Benefits: 1. Controlled environment to test and evaluate new regulatory approaches. 2. Can leverage expertise of existing supervisory authorities. 3. Collaborative form of regulation particularly suited for nascent AI ecosystems with limited capacity.
Risks: 1. Mainly useful where there are regulatory questions that can be solved by experimentation. 2. Extremely resource-intensive. 3. Can create market distortion and unfair competition.

Hard law – New horizontal AI law
Examples: EU AI Act; Council of Europe Framework Convention; Brazil AI Bill; Chile AI Bill.
Benefits: 1. Creates legal certainty and level playing field. 2. Sets binding, consistent level of protection against AI risks. 3. Allows setting 'red lines' around unacceptable AI use cases.
Risks: 1. Lack of concrete 'best practices': policymakers should not 'copy and paste' approaches from other jurisdictions. 2. Time-consuming and resource-intensive to design and implement. 3. Tradeoffs in drafting (future-proofing vs. avoiding gaps in consumer protections).

Hard law – Update or apply existing laws
Examples: Data protection/privacy; human rights, equality, non-discrimination laws; cybercrime; intellectual property; competition/antitrust; procurement.
Benefits: 1. Leverages existing regulatory architecture. 2. Existing regulated entities already familiar with compliance framework.
Risks: 1. Limited by the scope of existing frameworks (e.g. data protection only applies to personal data). 2. Patchwork approach to regulation can create gaps in consumer protections and lack of legal certainty for industry.

Hard law – Targeted / sectoral laws or regulations
Examples: Chinese regulations on recommendation algorithms, 'deep synthesis' technologies, and generative AI; New York City Local Law 144 of 2021 on Automated Employment Decision Tools; US semiconductor export controls.
Benefits: 1. Can provide highly context-specific and sticky form of regulation. 2. Particularly effective when enforced by existing sectoral regulators.
Risks: 1. Can create fragmented legal landscape, creating legal uncertainty and gaps in consumer protections. 2. Risk becoming out of date if technological developments create new AI harms that do not map onto existing taxonomies.

Source: Authors

Tool 1: Industry Self-Governance

Private ethical codes and councils

A range of AI ethics documents and councils have been set up by large technology firms or affiliated organizations. Some are internal-facing, such as Microsoft's Aether Committee and Responsible AI Standard Playbook,40 Google's AI Principles,41 Bosch Ethical Guidelines for AI42 and IBM's AI Ethics Board.43 Other bodies, such as the Partnership on AI (established by Amazon, Apple, Google, Facebook, IBM, and Microsoft in 2016), aim to coordinate responsible AI work across industry, academia and civil society.

40 https://www.microsoft.com/en-gb/ai/responsible-ai
41 https://ai.google/responsibility/principles/
42 https://www.bosch.com/stories/ethical-guidelines-for-artificial-intelligence/
43 https://www.ibm.com/impact/ai-ethics

Box 11: Partnership on AI (PAI)44

PAI is a multi-stakeholder nonprofit organization dedicated to the ethical and responsible development of artificial intelligence. Founded in 2016 and funded by philanthropic and corporate entities, PAI includes participation from technology companies, non-profits, and academic institutions.

Mission and Objectives
• Responsible AI Development: Ensuring AI technologies are ethical, transparent, and inclusive.
• Interdisciplinary Collaboration: Bringing together experts from computer science, ethics, law, and social sciences to address AI's challenges.
• Public Awareness and Education: Enhancing public understanding of AI, its impacts, and ethical considerations.
• Best Practices and Guidelines: Creating guidelines to promote fairness, accountability, and transparency in AI development.

Key contributions through its collaborative efforts include:
1. Ethical Guidelines and Best Practices: Developed and disseminated ethical guidelines and best practices for AI development and deployment.
2. Research and Reports: Published numerous studies on critical AI issues like bias, safety, privacy, and societal impacts, providing insights for policymakers and practitioners.
3. AI Policy and Advocacy: Been active in advocating for sound AI policies at both national and international levels.
4. Working Groups: PAI has several working groups focused on specific areas such as AI and labor; safety-critical AI; fair, transparent, and accountable AI; and social and societal influences of AI.
5. Public Awareness and Education: Raised awareness and educated the public on ethical AI through events, workshops, and initiatives.
6. AI Incident Database: Launched a database for collecting and analyzing AI incidents to improve safety and reliability.

While PAI has encouraged broad multi-stakeholder dialogue on the responsible development and use of artificial intelligence, some have voiced concerns regarding the dominance of Big Tech in its activities, to the detriment of other actors: in 2020 the prominent civil society organization Access Now resigned from PAI, citing 'a lack of consensus and radically differing views between stakeholders' and 'an increasingly smaller role for civil society to play within PAI', stating that they 'did not find that PAI influenced or changed the attitude of member companies or encouraged them to respond to or consult with civil society on a systematic basis'.45

Source: https://partnershiponai.org/; https://www.accessnow.org/press-release/access-now-resignation-partnership-on-ai/

44 https://partnershiponai.org/
45 https://www.accessnow.org/press-release/access-now-resignation-partnership-on-ai/

Ethical codes and councils can be important governance instruments if they are directly integrated into the business models and company cultures of industry actors – providing a focal point for live, product-relevant questions on AI ethics.46 They also require minimal public sector supervision or intervention (although governments can encourage the creation of such councils through law or regulatory guidelines).47 However, policymakers should also note their weaknesses. First, even if properly integrated into key product decisions on AI development, some principles and ethics documents may be too vague and therefore of limited practical use.48 Second, because of their nature as non-binding guidelines, there are no mechanisms for effective public oversight or enforcement, with little transparency or public input into how these ethical guidelines are created or implemented.49 Third, these ethical frameworks are often inconsistently interpreted and implemented;50 there is a risk that industry stakeholders will engage in regulatory arbitrage by 'shopping' around for the most permissive ethical principles to allow for minimal interruption to business.51

For these reasons, self-governance will rarely be a standalone intervention. Given the potential risk of 'ethics-washing', the existence of such ethical principles should not be seen as a complete regulatory intervention and should not preclude further action by policymakers. Even where self-regulation is an appropriate governance intervention, regulators may still have a role to play in providing incentives or guidelines for responsible action.

46 https://cms.law/en/media/local/cms-cmno/images/other/artificial-intelligence-what-is-an-ai-ethics-board-cms?v=1
47 E.g. the creation of ethical councils may satisfy the recommendation under the Article 29 Data Protection Working Party Guidelines on Automated individual decision-making and profiling of 3 October 2017, which advises data controllers to 'establish ethical review boards to assess the potential harms and benefits to society of particular applications for profiling', https://cms.law/en/media/local/cms-cmno/images/other/artificial-intelligence-what-is-an-ai-ethics-board-cms?v=1.
48 Munn (2022), https://link.springer.com/article/10.1007/s43681-022-00209-w.
49 https://www.annualreviews.org/content/journals/10.1146/annurev-lawsocsci-020223-040749, p.258.
50 Id.
51 https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3835010.

Tool 2: Soft Law

Non-binding international agreements

Some of the earliest non-binding, country-led instruments on AI governance were adopted in intergovernmental fora such as the OECD, G20 and UNESCO. A brief timeline summarizing the key agreements and developments in this area is set out below.

Timeline figure. Source: G20 Ministerial Statement on Trade and Digital Economy, G20, 3–4 (2019), https://www.mofa.go.jp/files/000486596.pdf; UNESCO (2021), https://unesdoc.unesco.org/ark:/48223/pf0000386276, https://unesdoc.unesco.org/ark:/48223/pf0000385198; https://digital-strategy.ec.europa.eu/en/news-redirect/805511; https://www.reuters.com/technology/britain-publishes-bletchley-declaration-ai-safety-2023-11-01/; https://documents.un.org/doc/undoc/ltd/n24/065/92/pdf/n2406592.pdf

Although these agreements are non-binding in nature, they demonstrate a notable degree of international consensus around key responsible AI principles. They can also have a direct impact on national AI policies – the OECD's 2023 report on the state of implementation of its AI principles has found that countries have sought to translate the AI Principles into concrete policy interventions through a range of measures, including 'i) establishing ethical frameworks and principles, ii) considering hard law approaches, iii) supporting international standardization efforts and international law efforts […] and iv) promoting controlled environments for regulatory experimentation'.52

The UN's High-Level Advisory Body on AI (HLAB) has recognized the critical need for robust socio-technical standards to govern AI. In its interim report, the HLAB emphasized the importance of a coordinated global approach to prevent fragmentation and ensure interoperability among various AI governance frameworks.53 It calls for inclusive participation, especially from the Global South, and underscores the importance of aligning AI governance with international human rights laws. The report also highlights AI's potential to achieve the Sustainable Development Goals (SDGs) through ethical and inclusive deployment. The final report is expected before the end of 2024.

However, it is important to recognize that these agreements are not standalone regulatory interventions – they focus mainly on high-level principles and do not address important questions regarding the assignment of rights and regulatory responsibilities.
For these documents to have practical relevance, they must be translated into national policy frameworks and accompanied by further technical assistance and policy advice. Often, international organizations will provide direct technical assistance to member states to help guide their AI policy development – for example, it was announced in May 2024 that Chile had adopted an updated national AI policy and action plan, following the recommendations of a Readiness Assessment Report elaborated by UNESCO (see Box 12).54

52 https://www.oecd-ilibrary.org/docserver/835641c9-en.pdf?expires=1716551804&id=id&accname=ocid195787&checksum=8B861C9BB22A13F96A39FDF353B58786, p. 15.
53 https://www.un.org/sites/un2.un.org/files/un_ai_advisory_body_governing_ai_for_humanity_interim_report.pdf.
54 https://www.unesco.org/en/articles/chile-launches-national-ai-policy-and-introduces-ai-bill-following-unescos-recommendations

Box 12: Chile–UNESCO Collaboration on AI Policy – UNESCO Readiness Assessment Methodology59

Chile was one of the first countries in the world to implement and finalize UNESCO's Readiness Assessment Methodology (RAM). The RAM is intended to help countries understand how prepared they are to implement AI ethically and responsibly for their citizens, while also highlighting what further institutional and regulatory changes are needed.55 The implementation of a RAM has three stages:

1. Diagnosis of the national AI landscape
2. Development of a national AI multi-stakeholder roadmap
3. Main policy recommendations for the national AI strategy

During June and July 2023, participatory consultations were held with different actors in the local AI ecosystem, with the aim of generating recommendations for AI development in Chile. The Chilean Ministry of Science, Technology, Knowledge and Innovation (MSTKI), in collaboration with UNESCO, identified six thematic areas of discussion relevant to the AI agenda for the coming years, covering the future of work, democracy, government, health, education, safety, regulation and the environment. Chile's engagement with UNESCO was led by MSTKI, the ministry that elaborated Chile's 2021 National AI Policy. MSTKI was supported by a Ministerial Steering Committee that included MSTKI, the Ministry of Economy, Development and Tourism, and the Ministry of Education. In each area of discussion, participants were asked to identify challenges and opportunities; the outcomes of these discussions served as inputs for the main recommendations of the Readiness Assessment Report.

The final Report set out the following ten recommendations to align Chile's AI policy with UNESCO's recommendations:56

1. REGULATION
1.1. Assign urgency to the updating of the current Personal Data Protection Law and the Cybersecurity and Information Critical Infrastructure Bill
1.2. Create a multi-stakeholder and adaptive governance for AI regulation
1.3. Explore Regulatory Experimentation Mechanisms (e.g., Sandboxes) for the Application of AI in Critical Areas
1.4. Promote ethical principles of AI through purchasing regulations and standards

2. INSTITUTIONAL FRAMEWORK
2.1. Improve data collection and statistics on the use of AI
2.2. Development of AI Strategies for Local Governments
2.3. Update Chile's National AI Policy (NAIP)

3. CAPACITY BUILDING
3.1. Development of Human Capital in AI
3.2. Attract investments in AI technological infrastructure and promote discussion on its environmental impacts
3.3. Assess the impact of AI and automation on the workforce and define job retraining plans

To implement the findings from the RAM, in May 2024 Chile launched its updated National AI Policy and action plan, along with a proposed AI bill spearheaded by MSTKI, seeking to regulate and encourage the ethical and responsible development of AI.57 The Bill sets out a risk-based approach to regulation (for more information on a risk-based approach to AI regulation, see section [x] below), classifying AI systems into unacceptable, high, limited and no evident risk categories.58 Chile's revised National AI Policy explicitly incorporated insights from the RAM process, addressing governance gaps and integrating diverse stakeholder perspectives from across Chile.

55 https://unesdoc.unesco.org/ark:/48223/pf0000387216
56 Id.
57 https://www.unesco.org/en/articles/chile-launches-national-ai-policy-and-introduces-ai-bill-following-unescos-recommendations
58 https://dig.watch/updates/chile-introduces-updated-national-ai-policy-and-new-ai-legislation
59 Id.

Source: https://www.unesco.org/en/articles/chile-launches-national-ai-policy-and-introduces-ai-bill-following-unescos-recommendations; https://dig.watch/updates/chile-introduces-updated-national-ai-policy-and-new-ai-legislation

National AI principles / ethics frameworks

In addition to the ethical frameworks and principles-based documents created by industry actors and international bodies, governments are increasingly developing voluntary national AI principles and ethics frameworks (as noted in Box 12 above on Chile's national AI policy, adopted with assistance from UNESCO, there is often a significant interplay between international and national frameworks).

For example, a 2023 white paper released by the UK sets out 5 principles – 1) safety, security and robustness, 2) appropriate transparency and explainability, 3) fairness, 4) accountability and governance, and 5) contestability and redress – to guide the responsible development and use of AI across all sectors.60 In 2022 the Office of Science and Technology Policy (OSTP) of the White House produced a 'Blueprint for an AI Bill of Rights' suggesting fundamental principles to guide and govern the efficient development and implementation of AI systems, while in Rwanda, Guidelines on the Ethical Development and Implementation of Artificial Intelligence61 were released as part of the National AI Strategy.

The key characteristic of these frameworks is that they are non-binding.

60 https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper
61 https://www.minict.gov.rw/index.php?eID=dumpFile&t=f&f=67550&token=6195a53203e197efa47592f40ff4aaf24579640e
For instance, the UK white paper explains that these principles were not placed on statutory footing to allow the government to remain agile and respond quickly and proportionately to new technological advances.62 Similarly, Australia has adopted voluntary AI Ethics Principles63 to guide the development and deployment of AI technologies. These principles provide a flexible framework to address ethical considerations without imposing mandatory regulations, allowing for rapid adaptation as AI technology evolves. Both examples illustrate how non-binding frameworks can serve as interim measures, providing industry guidance while preserving the ability for future regulatory adjustments based on emerging insights and risks.

National AI principles and ethical frameworks can form useful intermediate stopgaps as part of a broader 'wait and see' approach to AI regulation, but they must be carefully monitored. Non-binding frameworks provide a useful guide for the industry, thereby promoting responsible innovation. However, it is important to note that any 'wait and see' approach must be carefully implemented, and any period of 'active learning' may need to be supported by strong transparency requirements to allow policymakers to actively monitor AI developments and ensure that consumers and vulnerable groups are not exposed to unacceptable levels of risk. A 'wait and see' approach should not preclude further action, whether that is in the form of hard regulation, development of national AI standards, or implementation of a regulatory sandbox. For example, the UK government has said that it eventually expects to introduce 'targeted, binding requirements' for the most powerful general-purpose AI systems.64

Technical standards and certification frameworks

Voluntary international standard-setting organizations are increasingly developing technical standards65 for AI governance. One early example is the Institute of Electrical and Electronics Engineers' (IEEE) P70xx series of standards for ethical use of AI. Known as the 'Ethically Aligned Design' (EAD) principles, they include standards on transparency (7001-2021), processes for considering ethical issues in design (7000-2021), and standards on bias and 'ethically-driven nudging' (7003TM, 7008TM). The International Organization for Standardization (ISO) is also increasingly active in this area and has published a standard on AI risk management (ISO/IEC 23894:2023). Similar work is being undertaken by national standards institutes such as the U.S. National Institute of Standards and Technology (NIST), whose AI Risk Management Framework (RMF) provides comprehensive guidance for risk mitigation across the AI lifecycle.66 The UK has also developed an AI Standards Hub to share knowledge, capacity, and research on AI standards.67 There are also standard-setting bodies that seek to address specific governance issues such as misinformation: for example, the Coalition for Content Provenance and Authenticity (C2PA), which includes stakeholders such as Adobe, BBC, Google, Microsoft, Sony and OpenAI, seeks to address online misleading information through developing technical standards that certify the source and history (or provenance) of media content.68
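The provenance idea behind C2PA can be illustrated, at a conceptual level, with a hash-chained edit manifest: each record commits to the previous one, so later tampering is detectable. The sketch below is a generic illustration of that tamper-evidence principle only; it does not implement the actual C2PA specification, which uses signed, standardized manifest structures:

```python
# Conceptual illustration of provenance via a hash-chained edit manifest.
# This is NOT the C2PA format; it only demonstrates the tamper-evidence idea.

import hashlib
import json

def record_hash(record):
    """Deterministic hash of one manifest record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_entry(manifest, action, actor, content_bytes):
    """Record an action on the content, committing to the prior entry."""
    entry = {
        "action": action,                      # e.g. "captured", "cropped"
        "actor": actor,                        # who performed the action
        "content_sha256": hashlib.sha256(content_bytes).hexdigest(),
        "prev": record_hash(manifest[-1]) if manifest else None,
    }
    manifest.append(entry)

def verify_chain(manifest):
    """Check that every entry still commits to its predecessor."""
    return all(
        manifest[i]["prev"] == record_hash(manifest[i - 1])
        for i in range(1, len(manifest))
    )

manifest = []
append_entry(manifest, "captured", "camera-001", b"raw image bytes")
append_entry(manifest, "cropped", "editor-app", b"edited image bytes")
print("chain intact:", verify_chain(manifest))   # True

manifest[0]["actor"] = "someone-else"            # simulate tampering
print("chain intact:", verify_chain(manifest))   # False
```

The design point is that verification requires no trusted database of originals: altering any historical record breaks the chain, which is why provenance standards can certify the history of media content even after it has circulated widely.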
62 https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper
63 https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-framework/australias-ai-ethics-principles
64 https://www.gov.uk/government/news/uk-signals-step-change-for-regulators-to-strengthen-ai-leadership
65 This paper defines 'technical standards' as technical specifications that encourage (but do not require) compliance. Historically, technical standards have been crucial for the development of the internet and other networked infrastructures – one important factor driving standard adoption is the need for interoperability.
66 https://www.nist.gov/itl/ai-risk-management-framework
67 https://aistandardshub.org/
68 https://c2pa.org

Figure 2. AI Standards Landscape Snapshot. Source: AI Standards Hub, Q2 2024

Technical standards can be complemented by certification schemes, trust marks, quality marks and seals. Some examples include the proposed AI certification 'Made in Germany',69 IEEE's Ethics Certification Program for Autonomous and Intelligent Systems,70 Malta's proposed National AI Certification Framework,71 the Responsible Artificial Intelligence Institute Certification,72 and Denmark's digital trust seal.73

Although many of these standards address 'ethical' issues, they are distinct from private ethical principles in two ways. First, high-level ethical principles will often recognize the importance of key principles such as transparency in AI systems, without specifically elaborating how they can be implemented in context. In contrast, technical standards operate at a greater level of detail, seeking to explain how such principles can be integrated into AI systems in practice – the IEEE Standard for Transparency of Autonomous Systems (IEEE 7001-2021), for example, directly sets out a methodology for creating measurable, testable levels of transparency, so that autonomous systems can be objectively assessed and levels of compliance determined.74 Second, although technical standards are non-binding in nature, there are often strong incentives for compliance – especially where adherence to a standard becomes necessary for interoperability, or if a certification becomes a de facto industry benchmark. AI standards may play a crucial role in 'regulatory interoperability' across borders as different countries enact AI legislation: differences in regulatory language and approaches to key principles such as trustworthiness, accountability, and transparency can create obstacles for regulatory compliance by industry actors.
Technical standards also have a role to play in creating multi-stakeholder-driven global consensus on how these principles are translated into technical specifications.75 Standard development is often a multi-stakeholder process and can incorporate input from the technical community, governments, academia, and civil society organizations. In short, standards can provide an agile way of translating responsible AI practices to the technical level while also facilitating a multi-stakeholder approach to global harmonization – although this greatly depends on the participation structure of the relevant standard-setting organizations.

69 https://www.ki.nrw/en/flagships-en/certified-ai/
70 https://standards.ieee.org/industry-connections/ecpais/
71 https://www.mdia.gov.mt/malta-ai-strategy/
72 https://www.responsible.ai/
73 https://d-seal.eu/
74 https://standards.ieee.org/ieee/7001/6929/
75 https://www.holisticai.com/blog/ai-governance-risk-compliance-standard

Table 2. Standard Coverage

ISO/IEC 22989:2022 – Framework for AI, addressing AI concepts, terminology, and principles. (International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC))
ISO/IEC 23053:2022 – Framework for AI systems, focusing on machine learning lifecycle processes. (ISO and IEC)
IEEE 7000-2021 – Model process for addressing ethical concerns during system design. (Institute of Electrical and Electronics Engineers (IEEE))
IEEE 7010-2020 – Well-being metrics for ethical AI and autonomous systems. (IEEE)
NIST AI Risk Management Framework (AI RMF) – Guidelines for organizations to identify and manage risks associated with AI technologies, focusing on accuracy, reliability, and robustness. (National Institute of Standards and Technology (NIST))
BS 8611:2016 – Guide to the ethical design and application of robots and robotic systems. (British Standards Institution (BSI))
ISO/IEC JTC 1/SC 42 – Comprehensive AI standards covering terminology, data quality, risk management, and governance. (ISO/IEC Joint Technical Committee 1/Subcommittee 42)
IEEE P70xx Series – Standards for ethical use of AI, including transparency (7001-2021), ethical design (7000-2021), and bias (7003TM, 7008TM). (IEEE)
ISO/IEC 23894:2023 – AI risk management standard focusing on managing risks throughout the AI lifecycle. (ISO and IEC)
C2PA Technical Standards – Standards addressing online misinformation by certifying the source and history of media content. (Coalition for Content Provenance and Authenticity (C2PA))
Responsible AI Institute Certification – Certification framework for assessing AI systems against ethical and technical standards. (Responsible AI Institute)
Malta's National AI Certification Framework – National framework for certifying AI systems in accordance with ethical and technical standards. (Government of Malta)
IEEE Ethics Certification Program for Autonomous and Intelligent Systems – Certification for AI systems based on ethical standards and transparency. (IEEE)
Denmark's Digital Trust Seal – Certification mark for trustworthy AI systems focusing on transparency and accountability. (Danish Government)
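As a concrete illustration of how a framework such as the NIST AI RMF (listed in Table 2) might be operationalized inside an organization, the sketch below models a minimal risk register whose entries are tagged with the RMF's four core functions (Govern, Map, Measure, Manage). The fields, scale, and example entries are hypothetical – the RMF describes outcomes to achieve, not a particular data structure:

```python
# Minimal AI risk register tagged by the NIST AI RMF's four core functions
# (Govern, Map, Measure, Manage). Fields and entries are illustrative only;
# the RMF itself prescribes outcomes, not a specific data structure.

from dataclasses import dataclass, field
from enum import Enum

class RmfFunction(Enum):
    GOVERN = "govern"
    MAP = "map"
    MEASURE = "measure"
    MANAGE = "manage"

@dataclass
class RiskEntry:
    system: str
    description: str
    function: RmfFunction
    severity: int                          # 1 (low) .. 5 (high), hypothetical scale
    mitigations: list = field(default_factory=list)

register = [
    RiskEntry("credit-scoring-v2", "Training data under-represents rural applicants",
              RmfFunction.MAP, severity=4,
              mitigations=["collect representative sample", "bias testing"]),
    RiskEntry("credit-scoring-v2", "No accuracy monitoring after deployment",
              RmfFunction.MEASURE, severity=3,
              mitigations=["monthly drift reports"]),
]

def open_high_severity(entries, threshold=4):
    """List entries at or above a severity threshold for escalation."""
    return [e for e in entries if e.severity >= threshold]

for entry in open_high_severity(register):
    print(entry.system, "|", entry.function.value, "|", entry.description)
```

Even a simple register like this makes risk-mitigation work auditable, which is one reason adherence to such frameworks can become a de facto industry benchmark.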
However, although many standard-setting organizations adopt a multi-stakeholder approach to standards development, the most well-resourced industry players with technical expertise and financial resources often have an advantage in these processes. These large actors can sometimes coerce other actors into adopting certain standards.76 This can also lead to participation gaps for less well-resourced actors, such as less developed countries and civil society organizations (see Box 13 below for more detail).77 In addition, the multi-stakeholder and consensus-based process of some standard-setting organizations means that standards development can take time – for example, developing an ISO standard from first proposal to final publication usually takes around 3 years.78 This can pose a challenge for agile governance given how quickly AI systems are evolving.

Box 13: Participation Gaps at Standard-Setting Organizations

Governments can directly participate in standard-setting processes where possible. However, participants in these processes are often technical representatives, usually from industry, typically sponsored by a handful of large private sector actors based in the Global North. Among the important standard-setting bodies governing the ICT sector, only the International Telecommunication Union (ITU) has express provisions for participation by countries from the Global South.79

Civil society organizations, such as the Ada Lovelace Institute, have identified significant barriers to participation in AI standardization processes. These barriers include the substantial time commitment required, the complexity and opacity of the processes, and the dominance of industry voices, which can marginalize less-resourced actors and civil society groups.80 For instance, the EU standardization body tasked with creating standards for 'high-risk' AI systems under the EU AI Act faces challenges in ensuring broad stakeholder participation.

Given these participation issues, governments have two primary ways to engage with AI standards regimes:81

1. Hybrid Approach: This method involves specifying that compliance with certain standards satisfies legal obligations. For instance, the EU AI Act adopts this approach by requiring providers of 'high-risk' AI systems to self-certify that they meet essential requirements set out in proprietary standards authored by the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC). This gives these standardization bodies significant regulatory influence globally, as AI providers wishing to sell high-risk systems in the European market must certify their compliance with these standards.82 (For more information on the EU AI Act's risk-based regulatory approach, see Box 14.)

2. Symbiotic Approach: In this approach, a legal regime promotes optional industry certification mechanisms. An example is EU data protection law, which encourages companies to adhere to certain standards voluntarily, thus fostering a culture of compliance through incentives rather than mandates.

Source: Kanevskaia (2023), p.263-267, https://www.cambridge.org/core/books/law-and-practice-of-global-ict-standardization/069342911A73905590EE661655CA0DA0; https://www.adalovelaceinstitute.org/report/inclusive-ai-governance/; Veale (2023), p.262.

76 https://www.annualreviews.org/content/journals/10.1146/annurev-lawsocsci-020223-040749, p.262.
77 https://ideas.repec.org/a/eee/telpol/v45y2021i6s0308596121000483.html
78 https://www.iso.org/developing-standards.html
79 p.263-267, https://www.cambridge.org/core/books/law-and-practice-of-global-ict-standardization/069342911A73905590EE661655CA0DA0
80 https://www.adalovelaceinstitute.org/report/inclusive-ai-governance/
81 https://www.annualreviews.org/content/journals/10.1146/annurev-lawsocsci-020223-040749, p.262.
82 https://www.annualreviews.org/content/journals/10.1146/annurev-lawsocsci-020223-040749, p.264.

Tool 3: Hard Law

Creation of horizontal AI law

A growing number of countries have enacted binding legislation establishing concrete obligations and consequences for AI development and use. These hard laws can take various forms, including horizontal laws that apply broadly across all sectors, technology-specific laws targeting particular types of AI applications or systems, or sector-specific laws addressing AI deployment within specific industries. This section focuses on 'horizontal' AI laws, which apply to all AI systems84 regardless of sector or use case.

Given the speed at which new AI laws are being proposed, this paper does not provide a comprehensive survey of all proposed approaches (given that this would quickly become out of date) – instead, we group current regulatory proposals into several loose categories, to identify common strengths, weaknesses, and policy tradeoffs. According to Stanford University's AI Index report, 31 countries have passed at least one AI-related bill since 2016.83

Figure 3. Source: https://aiindex.stanford.edu/wp-content/uploads/2023/04/HAI_AI-Index-Report-2023_CHAPTER_6-1.pdf

A 'risk-based' approach involves categorizing AI applications based on their potential risks and impacts. This approach subjects higher-risk AI systems to more rigorous regulatory obligations to mitigate potential harms.85 The EU AI Act, passed in 2024, is one of the world's first examples of such a law – its approach draws heavily from EU product liability law, classifying AI systems according to their risk levels and applying tailored regulatory requirements accordingly. It also applies more stringent requirements to developers of 'general-purpose AI' models and imposes additional requirements on those posing 'systemic' risks. A deeper evaluation of the EU's risk-based approach is set out in Box 14.

83 https://aiindex.stanford.edu/wp-content/uploads/2023/04/HAI_AI-Index-Report-2023_CHAPTER_6-1.pdf
84 It is important to note that jurisdictions have taken different approaches to defining 'AI' and 'AI systems' in their proposed legislative frameworks. For example, law firm White and Case notes that 'the draft text of the EU AI Act adopts a definition of "AI systems" that is based on (but is not identical to) the OECD's definition, and which leaves room for substantial doubt due to its uncertain wording', https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker#home.
85 Digital Trends Report 2023, WBG.

Box 14: Evaluating the EU AI Act's risk-based regulatory framework

First introduced by the European Commission in 2021, the EU AI Act establishes a tiered, risk-based approach to regulating AI within the European Union. The Act classifies AI systems into four risk categories: unacceptable risk, high risk, limited risk, and minimal risk.

• Unacceptable risk: AI systems that pose a clear threat to safety, livelihoods, or rights are banned. These include practices such as social scoring by governments, manipulative AI, exploitation of vulnerable populations, biometric categorization, and the automated compiling of facial recognition databases.

• High risk: AI systems used in critical infrastructure, education, employment, law enforcement, and health must meet strict requirements before they can be placed on the market. The obligations and conformity assessments include areas such as risk and quality management systems, data governance, technical documentation, record-keeping, instructions for downstream deployers, and design for accuracy, robustness and cybersecurity.

• Limited risk: AI systems with 'limited risk' are subject to light transparency obligations, such as ensuring that end-users are aware they are interacting with AI (e.g. chatbots and deepfakes must declare the use of AI).

• Minimal risk: Most AI systems, like spam filters or video games, fall under this category and are largely unregulated.

In addition, all general-purpose AI model developers must provide technical documentation and instructions for use, comply with the EU Copyright Directive, and publish a summary about training data. Meanwhile, general-purpose models posing 'systemic risks'86 face additional requirements – all providers of these models, whether open source or not, must also conduct evaluations of models for risks, carry out adversarial testing, track and report serious incidents, and ensure cybersecurity protections.87

Because the EU AI Act is one of the first pieces of binding legislation regulating AI, policymakers designing new regulatory frameworks in other countries may seek to draw inspiration from it.88 However, this approach should be treated with caution – the EU AI Act is grounded in concerns specific to the EU, such as the need to create a harmonized regulatory regime across 27 member states. The EU AI Act's risk categorization framework is therefore the product of a specific political compromise, as well as drafting specific to EU product liability and consumer protection law. Because this formulation may not reflect the policy priorities of other countries, the AI Act should not be 'copied-and-pasted' into new regulatory regimes without appropriate modifications.

86 AI models pose systemic risks where the cumulative amount of compute used for their training is greater than 10^25 floating point operations (FLOPs) – similar to the level of OpenAI's GPT-4, which powers ChatGPT.
87 Id.
88 A similar process occurred in the data protection context, leading to the 'Brussels effect' of the GDPR, https://academic-oup-com.libproxy-wb.imf.org/book/36491?login=true&token=
Policymakers should also note the following issues that have been highlighted by academics and civil society regarding the EU AI Act:

1. AI systems covered under the AI Act are those that 'may exhibit adaptiveness after deployment'.89 This definition could potentially exclude AI systems that do not learn or adapt to new data inputs after deployment, such as older rule-based systems. However, these AI systems may still be complex and cause unique risks for consumers.

2. The AI Act is the product of a unique blend of EU product safety regulation, fundamental rights protection, and consumer protection law. Academics have argued that this patchwork approach to regulation leaves certain gaps in how the Act is drafted.90 In addition, because the Act acts as a form of 'maximum' market harmonization under EU law, in principle member states cannot introduce further national regulation on AI.91

3. Many of the 'essential requirements' that high-risk AI systems must comply with are drafted in general and vague terms (e.g. they must have an 'appropriate level of accuracy, robustness, and cybersecurity' to mitigate risks to fundamental rights92). The AI Act therefore relies on European standards development bodies (particularly CEN-CENELEC) to clarify these essential requirements by operationalizing them into harmonized European technical standards.93 Under the AI Act, high-risk AI systems and general-purpose AI systems that are in conformity with these harmonized standards are automatically presumed to comply with the Act's legal requirements for high-risk systems.94 However, as noted in Box 13 above, these European standards development organizations face serious participation gaps and do not have specific expertise in important fundamental rights topics. More generally, academics have noted that the Act's approach to its enforcement architecture means that key operative provisions delegate critical regulatory tasks to AI providers themselves, without adequate oversight or redress mechanisms.95

4. The 'list-based' approach of the Act means that it may not guard against novel AI systems that fall outside the Act's risk classification.96

5. Persons affected by AI systems have no specifically enforceable rights or role within the AI Act97 (although individual rights regarding automated decision-making are found elsewhere in EU law, such as under the GDPR).98

89 Art. 3(1), EU AI Act.
90 https://papers.ssrn.com/sol3/Delivery.cfm?delivery_id=3896852&frd=yes&anym=yes.
91 https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4874852
92 Art. 15, EU AI Act.
93 https://www.adalovelaceinstitute.org/wp-content/uploads/2023/03/Ada-Lovelace-Institute-Inclusive-AI-governance-Discussion-paper-March-2023.pdf. This is an approach that is modelled on the EU's 'New Legislative Framework' market surveillance regime.
94 Art. 40, EU AI Act.
95 https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4874852
96 https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4874852
97 https://www.adalovelaceinstitute.org/report/regulating-ai-in-europe/
98 Article 22(1) of the GDPR gives data subjects the 'right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.'
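To make the tiered logic evaluated in Box 14 concrete, the sketch below encodes a toy risk-based classifier: a use-case category maps to a tier, each tier maps to a bundle of obligations, and a separate training-compute test (using the 10^25 FLOPs benchmark cited for 'systemic risk' general-purpose models) flags models for additional duties. The category lists and obligation strings are simplified paraphrases for illustration, not the Act's legal text:

```python
# Toy encoding of a tiered, risk-based classification in the style of the
# EU AI Act. Categories and obligations are simplified paraphrases, not
# the legal text; real classification requires case-by-case legal analysis.

PROHIBITED = {"social_scoring", "manipulative_ai"}
HIGH_RISK = {"credit_scoring", "hiring", "law_enforcement", "medical_diagnosis"}
LIMITED_RISK = {"chatbot", "deepfake_generation"}

OBLIGATIONS = {
    "unacceptable": ["prohibited from the market"],
    "high": ["risk management system", "data governance",
             "technical documentation", "human oversight",
             "accuracy, robustness and cybersecurity"],
    "limited": ["transparency: disclose AI interaction or synthetic content"],
    "minimal": ["no specific obligations (voluntary codes encouraged)"],
}

SYSTEMIC_RISK_FLOPS = 1e25   # training-compute benchmark cited in the Act

def classify(use_case: str) -> str:
    """Map a use-case category to its risk tier (default: minimal)."""
    if use_case in PROHIBITED:
        return "unacceptable"
    if use_case in HIGH_RISK:
        return "high"
    if use_case in LIMITED_RISK:
        return "limited"
    return "minimal"

def gpai_systemic(training_flops: float) -> bool:
    """Flag a general-purpose model for the extra 'systemic risk' duties."""
    return training_flops >= SYSTEMIC_RISK_FLOPS

for case in ("credit_scoring", "chatbot", "spam_filter"):
    tier = classify(case)
    print(case, "->", tier, "|", "; ".join(OBLIGATIONS[tier]))

print("systemic-risk GPAI:", gpai_systemic(3e25))   # True
```

In practice, classification under the Act turns on detailed legal definitions and conformity assessments rather than a lookup table; the point of the sketch is to show why a 'list-based' scheme defaults novel systems to the lightest tier – the very gap noted in critique point 4 above.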
Another approach to horizontal AI regulation is a 'rights-based' approach – one example is the Council of Europe's recently finalized Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law.99 Countries that sign up to the Framework Convention commit to adopt or maintain measures to ensure that AI activities are compatible with human rights,100 and to ensure that AI systems are not used to undermine the integrity of democratic processes and the rule of law.101 Countries also commit to ensuring that their national frameworks (a) incorporate general principles regarding AI governance (transparency, accountability, equality, privacy, etc.),102 (b) contain measures ensuring accessible and effective remedies for rights violations,103 and (c) have mechanisms to assess and mitigate adverse AI impacts on rights.104 However, the Convention has been criticized for the fact that countries are able to determine whether to apply the Convention to private sector actors, or to implement 'other appropriate measures'.105 Brazil's proposed AI Bill106 takes a hybrid approach – it is explicitly rights-based, but also incorporates a tiered risk-based model inspired by the EU AI Act; see box 15.

99 The Committee on Artificial Intelligence (CAI), the body tasked with drafting the treaty, comprises the 46 member states of the Council of Europe, as well as observer states from most regions of the world, including Argentina, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, the USA, and Uruguay, https://rm.coe.int/terms-of-reference-of-the-committee-on-artificial-intelligence-cai-/1680ade00f. Although developed under the auspices of the Council of Europe, the Convention is open to ratification by any country. The Council of Europe can have global influence: for example, the Budapest Convention on Cybercrime has 67 ratifications or accessions and is widely considered to be a key international instrument governing cybercrime.
100 Article 4, Framework Convention.
101 Article 5, Framework Convention.
102 Chapter III, Framework Convention.
103 Chapter IV, Framework Convention.
104 Chapter V, Framework Convention.
105 See wording at Art 3(1)(b), Framework Convention; for discussion of the legislative history of this private sector carveout, see https://www.euractiv.com/section/artificial-intelligence/news/eu-commissions-last-minute-attempt-to-keep-private-companies-in-worlds-first-ai-treaty/.
106 Bill No. 2,338/2023, https://legis.senado.leg.br/sdleg-getter/documento?dm=9347593&ts=1683152235237&disposition=inline

Box 15: Brazil's Bill 2.338/2023

Brazil's Bill 2.338/2023, currently under consideration, proposes a comprehensive risk- and rights-based approach to AI governance.
It defines an AI system as a '[c]omputer system, with different degrees of autonomy, designed to infer how to achieve a given set of goals, … predictions, recommendations, or decisions that can influence the virtual or real environment.'

Risk-Based Approach

The risk-based approach mandates a preliminary self-assessment to classify AI systems according to their risk levels. These levels include:

• Prohibited AI Systems: These are deemed excessively risky and are banned.

• High-Risk AI Systems: These systems can be used only if they meet stringent compliance requirements such as impact assessments, robustness, accuracy, reliability, and human oversight. High-risk AI systems cover applications like credit rating, personal identification, autonomous vehicles, medical diagnoses, and decision-making processes affecting employment, education, and access to essential services. Developers and operators of these systems must ensure they do not use AI for subliminal manipulation or to exploit the vulnerabilities of specific groups, such as children or people with disabilities. In addition, high-risk AI systems must include technical documentation, log registers, reliability tests, technical explainability measures and measures to mitigate discriminatory biases.107

Every AI system must implement a governance structure involving transparency, data governance and security measures.108 The Bill places significant emphasis on organizations' responsibility to mitigate biases through regular public impact assessments. These impact assessments will be held in an open public database.

Rights-Based Approach109

The Bill also proposes individual rights, such as the right to an explanation of decisions, non-discrimination and correction of discriminatory biases, and the right to privacy and protection of personal data.110 In addition, rules for civil liability, codes of best practice, notification of AI incidents, copyright exceptions for data mining processing, and the fostering of regulatory sandboxes are also included. To implement this, the bill proposes an institutional model with four coordinated bodies:

1. The Competent Authority: Likely the National Data Protection Authority (ANPD), responsible for interpreting and regulating AI law.

2. The Executive Branch: Formulates public policies for AI development and is tasked with designating supervisory authority to regulate and enforce legislation regarding Brazil's National AI Strategy (EBIA).

3. Sectoral Regulatory Bodies: Specific regulators working in cooperation with the ANPD.

4. The Artificial Intelligence Advisory Council: Ensures societal participation in AI-related decisions.

Source: Bill 2.338/2023; https://oecd.ai/en/wonk/brazils-path-to-responsible-ai; https://accesspartnership.com/access-alert-brazils-new-ai-bill-a-comprehensive-framework-for-ethical-and-responsible-use-of-ai-systems/

107 https://accesspartnership.com/access-alert-brazils-new-ai-bill-a-comprehensive-framework-for-ethical-and-responsible-use-of-ai-systems/; https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-brazil
108 Id.
109 https://oecd.ai/en/wonk/brazils-path-to-responsible-ai
110 https://accesspartnership.com/access-alert-brazils-new-ai-bill-a-comprehensive-framework-for-ethical-and-responsible-use-of-ai-systems/

Horizontal laws allow policymakers to
mitigate the legal uncertainty created by non-binding frameworks. A horizontal AI law, backed by strong supervisory and enforcement capacity, can play a critical role in creating trust in the AI economy, which then enables greater participation by consumers in digital life and more robust responsible innovation ecosystems. Legal frameworks which clearly specify what types of AI-enabled activities, and AI systems, are unacceptable allow firms greater certainty in ensuring their activities are fully compliant with regulatory requirements. The creation of regulatory frameworks with clear 'red lines' around unacceptable AI use cases is an approach that has been recommended by the human rights community in particular.111 The need for binding horizontal regulation has been recognized by the international community.112 However, implementing horizontal laws comes with several challenges.

There is a significant risk of regulatory fragmentation, where different jurisdictions develop incompatible or conflicting regulations. This can lead to overlapping requirements that create compliance burdens for businesses operating in multiple regions. Such fragmentation and overlap can hinder the growth and scalability of local AI innovation ecosystems by increasing the complexity and cost of compliance.

Regulatory fragmentation and vague legal frameworks can also result in uneven protection against AI risks for consumers. Inconsistent regulations across different regions may mean that some consumers enjoy robust protections against AI risks, while others are left vulnerable due to weaker or less comprehensive regulatory frameworks. This disparity can undermine public trust in AI technologies and exacerbate social inequalities. For example, the Council of Europe Framework Convention has been criticized by Amnesty International for its high-level approach, which is seen as excessively vague. Critics argue that it does not provide sufficient detail regarding the concrete rights affected by AI, the specific AI-based practices that are incompatible with human rights, and the processes for conducting effective and binding human rights due diligence for AI developers and deployers.113

Furthermore, crafting binding regulation is inherently challenging due to the absence of established 'best practices' and limited supervisory or enforcement experience. Policymakers often look to existing frameworks like the EU AI Act for guidance, but this approach must be adapted to fit local contexts. Simply copying the EU AI Act's categorizations of AI systems without modifications may result in regulations that are ill-suited to the specific needs and circumstances of different jurisdictions.114

Creating binding legislation is also a time-consuming and resource-intensive process. Legislative drafters may seek to ensure that AI regulations are flexible enough to adapt to rapid technological advancements by using high-level wording or allowing for future interpretation by courts and regulators. While this can help prolong the relevance of the regulations, it also introduces significant legal uncertainty. Industry actors may be left uncertain about how their regulatory obligations will be interpreted and enforced,115 which can disadvantage startups with fewer compliance resources and slow down the deployment of beneficial AI technologies.
Fixed red lines or categorizations, such as those set out in the Brazilian and EU frameworks, may become outdated as technology evolves.116 This necessitates the development of innovative and iterative regulatory approaches that allow legal frameworks to be updated efficiently without incurring significant time and resource costs. Policymakers must strike a balance between providing clear, enforceable rules and maintaining the flexibility needed to adapt to future technological changes.

At the same time, policymakers should note that certain well-accepted AI governance principles, such as transparency and accountability, can be readily specified in statute in ways that remain technology-neutral but provide necessary documentation, auditability, and answerability requirements. For example, the Canadian Directive on Automated Decision-Making 2019117 imposes several requirements regarding accountability and transparency before Canadian federal institutions are permitted to deploy automated decision systems: these include mandatory algorithmic impact assessments, mandatory user-facing notice requirements, and obligations to provide meaningful explanations to affected individuals after a decision is made, to release custom source code owned by the Government of Canada, and to document the decisions of automated decision systems. A minimal sketch of how such documentation obligations might be operationalized is shown below.

111 https://www.amnesty.eu/wp-content/uploads/2024/04/EUs-AI-Act-fails-to-set-gold-standard-for-human-rights.pdf
112 See UN High-Level Advisory Board Interim Report.
113 https://www.amnesty.eu/wp-content/uploads/2024/04/Amnesty-International-Recs-draft-CoECAI-11042024.pdf
114 A similar process occurred in the data protection context, leading to the 'Brussels effect' of the GDPR, https://academic-oup-com.libproxy-wb.imf.org/book/36491?login=true&token=
115 https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker#articles
116 Indeed, the EU AI Act required significant last-minute amendments to account for the emergence of generative AI, https://www.reuters.com/technology/behind-eu-lawmakers-challenge-rein-chatgpt-generative-ai-2023-04-28/
117 https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32592
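The sketch below illustrates how documentation and answerability obligations of the kind imposed by the Canadian Directive might be operationalized as a machine-readable decision record. The Directive prescribes processes rather than data formats, so every field name here is a hypothetical illustration, not something taken from the Directive itself.

```python
# Hypothetical record for a single automated decision, loosely inspired
# by the documentation, notice, and explanation obligations in Canada's
# Directive on Automated Decision-Making. All field names are
# illustrative; the Directive does not prescribe a schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AutomatedDecisionRecord:
    system_name: str            # which automated decision system was used
    impact_assessment_ref: str  # reference to the algorithmic impact assessment
    decision: str               # outcome communicated to the individual
    explanation: str            # meaningful, user-facing explanation
    notice_shown: bool          # whether the user was notified of automation
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# Example usage with hypothetical values:
record = AutomatedDecisionRecord(
    system_name="benefits-eligibility-v2",
    impact_assessment_ref="AIA-2024-017",
    decision="eligible",
    explanation="Income and residency criteria were met (factors A and B).",
    notice_shown=True,
)
print(record.system_name, record.timestamp.isoformat())
```

Records of this kind, retained for every automated decision, are one way to make technology-neutral transparency requirements auditable in practice.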
Update or application of existing laws

Another approach is to focus on updating or amending existing regulatory frameworks that may apply to activities in the AI ecosystem. A non-exhaustive list of existing legal frameworks that can be applied to the AI ecosystem is set out in Table 3.

Table 3. Examples of application of existing legal frameworks to the AI ecosystem

Data protection / privacy: In March 2024, Singapore's Personal Data Protection Commission (PDPC) issued Advisory Guidelines on the Use of Personal Data in AI Recommendation and Decision Systems,118 providing organizations with clarity on the use of personal data at three stages of AI system implementation: (a) development, testing and monitoring, (b) deployment, and (c) procurement.119 On March 30, 2023, the Italian Data Protection Authority (DPA) ordered OpenAI to stop the use of ChatGPT to process the personal data of Italian data subjects, on the grounds that there was a material risk that ChatGPT would breach the GDPR, mainly due to the use of personal data as a training set for ChatGPT in the absence of an adequate legal basis and without provision of a privacy notice.120

Human rights, equality, and non-discrimination laws: In April 2024 the UK's Equality and Human Rights Commission (EHRC) issued a reminder to employers to prevent inadvertent bias or discrimination in their use of AI tools, following a complaint from an Uber Eats driver who argued that AI facial recognition checks required to access the Uber Eats platform were racially discriminatory.121 In the US, the National Fair Housing Alliance (NFHA) and the US Department of Housing and Urban Development separately sued Facebook on the grounds that Facebook was allowing advertisers seeking to place algorithmic housing ads to exclude certain users by their race, which appeared to violate the US Fair Housing Act. The case was settled out of court by Meta.122

Cybercrime: In Nigeria, legal commentators have proposed the use of the Cybercrime (Prohibition, Prevention etc.) Act 2015 to combat deepfakes, via its prohibition on identity theft and impersonation.123

118 https://www.pdpc.gov.sg/guidelines-and-consultation/2024/02/advisory-guidelines-on-use-of-personal-data-in-ai-recommendation-and-decision-systems
119 https://www.dataprotectionreport.com/2024/03/singapore-releases-new-guidelines-on-the-use-of-personal-data-in-ai-systems/
120 https://www.cliffordchance.com/insights/resources/blogs/talking-tech/en/articles/2023/04/the-italian-data-protection-authority-halts-chatgpt-s-data-proce.html. The DPA's order found that there was 'a material risk that ChatGPT would breach the GDPR on a number of grounds: (i) The users of ChatGPT and other data subjects whose data is processed by OpenAI are not provided with a privacy notice (breach of Art. 13 GDPR), (ii) The use of personal data as a training set for the AI software is unlawful due to the absence of an adequate legal basis (breach of Art. 6 GDPR), (iii) The processing is not accurate, in that the information contained in ChatGPT's responses to users' queries is not always correct (breach of Art. 5 GDPR), (iv) Although OpenAI's terms and conditions forbid access to users below the age of 13, OpenAI has not implemented measures to detect the users' age and block access accordingly (breach of Art. 8 GDPR).' The ban was subsequently lifted 4 weeks later after OpenAI 'addressed or clarified' the issues raised by the DPA; however, in January 2024 OpenAI was notified by the Italian DPA that it was again suspected of violating GDPR, https://techcrunch.com/2024/01/29/chatgpt-italy-gdpr-notification/
121 https://www.pinsentmasons.com/out-law/news/uber-case-a-reminder-dangers-potentially-discriminatory-ai
122 https://www.justice.gov/opa/pr/justice-department-secures-groundbreaking-settlement-agreement-meta-platforms-formerly-known
123 https://www.doa-law.com/wp-content/uploads/2024/02/Deepfakes-Legal-Safeguards-in-Nigeria.pdf

Intellectual property: In late 2023 the New York Times (NYT) sued OpenAI in US courts, arguing that OpenAI had engaged in large-scale copyright infringement, on the grounds that (a) OpenAI's platform is trained on large volumes of the NYT's articles, which are protected by copyright, (b) the LLMs that have been trained are a derivative work of the NYT's body of copyrighted work, and (c) ChatGPT outputs closely mimic NYT articles, in effect reproducing copyrighted material.124 OpenAI has defended itself on the basis that its use of NYT articles is protected under the 'fair use' doctrine.125

Competition / antitrust: The US Federal Trade Commission (FTC) announced in January 2024 that it had issued orders to five AI developers (Google, Amazon, Anthropic, Microsoft and OpenAI) requiring them to provide information regarding recent investments and partnerships involving generative AI companies and major cloud service providers, to understand their impact on the competitive landscape.126

Procurement: The 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence seeks to leverage the US government's federal procurement power to set industry standards for AI safety.127 The Executive Order directs the Office of Management and Budget (OMB) to issue guidance to federal agencies on managing AI risks in the federal government.128 In September 2024, the OMB issued guidance on responsible AI acquisition by the federal government, setting out three strategic goals: 'managing AI risks and performance,' 'promoting a competitive AI market with innovative acquisition,' and 'ensuring collaboration across the federal government'.

124 https://hls.harvard.edu/today/does-chatgpt-violate-new-york-times-copyrights/
125 Id.
126 https://www.ftc.gov/news-events/news/press-releases/2024/01/ftc-launches-inquiry-generative-ai-investments-partnerships
127 https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/; https://www.ey.com/en_us/insights/public-policy/key-takeaways-from-the-biden-administration-executive-order-on-ai
128 Id.

The benefit of this approach is that it leverages the existing enforcement expertise, infrastructure, and resources of current supervisory bodies, without the need to pass fresh legislation.
These frameworks can be updated under a 'wait and see' approach to regulation – for example, the UK government's position is that, although new legislative action regarding general purpose AI systems will be necessary, doing so at present would be premature. The UK's preferred approach is to empower existing regulatory authorities and frameworks to apply the UK's AI principles (see above) to tackle AI risks.129 New guidance issued by existing regulators can help industry actors update their existing compliance processes to address AI risks.

However, reliance on existing legal frameworks is constrained by the existing scope of these frameworks. For example, data protection laws often only apply to personal data, and regulators lack the power to regulate AI models that are trained mainly on non-personal data. Because enforcement and supervisory experience relating to AI is scarce, it is unclear how effective the application of some of the above legal frameworks will be – for example, the outcome of the US copyright litigation noted above is highly unclear, meaning the effectiveness of IP and copyright regimes for safeguarding the interests of publishers, content producers, and the creative industry is in question.

Over-reliance on existing legal frameworks can create a patchwork approach to regulation, with potential gaps in rights protection or risk mitigation – this may eventually lead to unacceptable harms for consumers and a lack of legal certainty for industry actors. If existing legal frameworks are leveraged under a 'wait and see' approach, it is critical for policymakers to introduce a monitoring layer within the existing regulatory architecture, to allow early identification of potential gaps in the complex, overlapping legal architecture and to ensure that such gaps are addressed in any new legislation or other administrative measures. Additionally, in EMDE contexts, many of these legal frameworks may not yet exist; where they are in place, they may face gaps (both in terms of the substantive legal framework and in the institutional capacity for regulatory enforcement). In addition, the interaction between existing legal regimes and new laws on AI will be complex. Questions regarding how countries should prioritize the allocation of their policymaking and institutional resources are not well settled and will greatly depend on local exigencies – it will be the task of country policymakers to consult with all relevant stakeholders on the best way forward for local consumers and priorities.

Technology-specific and sectoral approaches

Legal frameworks governing AI can also be targeted towards certain sectors and product areas or towards specific use cases and application types. This method contrasts with broad, overarching regulations, allowing for more precise control and management of AI technologies in areas where they have unique impacts and risks. Some non-exhaustive examples of sectoral AI regulation include:

1. Healthcare: In the US, the Food and Drug Administration (FDA) is considering updating its existing pre-market review processes to regulate AI and machine learning-enabled medical devices.130 In India, the Medical Council of India and the Ministry of Health and Family Welfare oversee regulations to ensure AI applications in healthcare comply with data privacy and safety standards.131 For a deeper discussion of sectoral governance considerations for healthcare, see box 16 below.
129 https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper#executive-summary; https://www2.deloitte.com/uk/en/blog/emea-centre-for-regulatory-strategy/2024/the-uks-framework-for-ai-regulation.html
130 https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device#regulation
131 https://www.niti.gov.in/sites/default/files/2021-09/ndhm_strategy_overview.pdf

2. Financial Services: The financial sector employs AI for credit scoring, fraud detection, and algorithmic trading. In July 2023, the US Securities and Exchange Commission (SEC) proposed new rules designed to regulate potential conflicts of interest associated with private funds' use of AI-related technologies in their interactions with investors.132 The UK Financial Conduct Authority is set to regulate 'critical third parties' that provide critical technologies, including AI, to regulated financial entities.133

3. Transportation: Autonomous vehicles and AI-driven traffic management systems are regulated to ensure safety and efficiency. Singapore's Land Transport Authority (LTA) has established guidelines for the testing and deployment of autonomous vehicles.134 The country also implemented a 5-year regulatory sandbox in 2017 to facilitate the safe development and integration of autonomous vehicles.135

4. Employment: AI systems used in hiring and workplace management are regulated to prevent discrimination and ensure fairness. New York City's Local Law 144136 requires independent bias audits for automated employment decision tools (see box 17).

132 https://www.dechert.com/knowledge/onpoint/2023/9/sec-proposes-new-regulatory-framework-for-use-of-ai-by-broker-de.html
133 https://www.proskauer.com/blog/a-tale-of-two-regulators-the-sec-and-fca-address-ai-regulation-for-private-funds
134 https://cms.law/en/int/expert-guides/cms-expert-guide-to-autonomous-vehicles-avs/singapore
135 https://www.ippapublicpolicy.org/file/paper/5cea683b9a45b.pdf
136 https://www.nyc.gov/site/dca/about/automated-employment-decision-tools.page

Box 16: Sectoral Governance and Regulatory Aspects: Health Sector Case Study

Regulating AI and GenAI in healthcare presents complex challenges due to the high stakes involved in patient care and medical decision making. AI systems in healthcare need to be highly accurate and reliable, as incorrect decisions or predictions could result in misdiagnoses or inappropriate treatments. Moreover, the integration of AI into healthcare workflows raises questions about accountability and the role of healthcare professionals in AI-driven decisions.

In response to these challenges, in January 2024, the World Health Organization (WHO) developed health sector guidelines on the use of AI in healthcare. These guidelines highlight the need to harness AI's benefits while minimizing potential risks. Key regulatory considerations include ensuring the safety and effectiveness of AI systems, implementing strong privacy and security measures, and promoting transparency and trust in AI technologies. WHO also stresses the ethical use of AI, prioritizing human rights and safety, and preventing biases or misinformation that could cause harm.
The guidelines provide support for governments, developers, healthcare providers, and other stakeholders in managing AI responsibly in healthcare. The responsibility for regulating medical devices and medical products lies with countries' medical regulatory authorities, such as the Food and Drug Administration (FDA) in the United States, the Thai Food and Drug Administration in Thailand, the European Medicines Agency (EMA), and the United Kingdom's Medicines and Healthcare products Regulatory Agency (MHRA), among others. These agencies are responsible for ensuring the safety, efficacy, and quality of medical products, including drugs, medical devices, and, increasingly, some types of software used in healthcare (software as a medical device, software in a medical device, and software as an accessory to a medical device). These agencies have already cleared several uses of AI in medical devices; the FDA, for example, had cleared 950 such medical devices as of August 2024, but none of them have included GenAI.

Unlike traditional medical devices, AI models can change post-deployment, making it difficult for current approval frameworks to ensure long-term performance and safety. The US FDA has recognized these challenges and has suggested that post-market evaluation might become a responsibility that falls, at least in part, on healthcare providers and institutions using AI tools. While this allows for continuous oversight as AI evolves, it raises concerns about the burden on providers, who may lack the expertise or resources to effectively monitor AI performance. Additionally, it could lead to inconsistencies in how AI is assessed across different healthcare settings.

Bottom line: Regulating GenAI in healthcare requires a balance between caution and flexibility. Policymakers must integrate sector-specific regulations, maintain and amend existing processes as needed, and evaluate new technologies carefully. At the same time, sector-specific regulatory frameworks may need updating to accommodate the evolving nature and broader applications of technologies like GenAI, ensuring appropriate value, safety, and cost-benefit measures are in place.

Sources: https://www.who.int/publications/i/item/9789240084759; https://jamanetwork.com/journals/jama/fullarticle/2825146

Box 17: Regulatory Experience with Algorithmic Auditing – New York City's Local Law 144

Algorithmic auditing tests AI products for risks like discrimination and toxic content. In late 2023, researchers from the Ada Lovelace Institute and Data & Society analyzed New York City's Local Law 144 (LL 144), which mandates independent bias audits for employers using automated decision-making tools. The study found flaws in the law's framework, such as the lack of a robust third-party auditing ecosystem, insufficient requirements to stop using biased tools, and weak enforcement mechanisms. Auditors also faced challenges in accessing necessary data and a lack of standardized practices. The authors of the report offered six recommendations for policymakers designing future algorithmic auditing regimes:

1. Auditing laws must establish clear definitions that capture the full range of AI systems in scope, in consultation with affected communities.

2. Auditing laws must establish clear standards of practice on the role and responsibilities of auditors.

3. Auditing laws must enable smooth data collection for auditors, including clear procedures and requirements around data access (e.g.
which information and documentation about datasets needs to be turned over).

4. Auditing laws must establish meaningful metrics that accurately capture algorithmic risks.

5. Audits should follow a theory of change that results in meaningful outcomes and accountability.

6. Auditing laws need mechanisms to monitor and enforce against non-compliance.

Source: https://www.adalovelaceinstitute.org/report/code-conduct-ai/
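To illustrate the kind of metric at the heart of such audits, the sketch below computes impact ratios across demographic groups – a simplified version of the selection-rate comparison used in LL 144 bias audits. The data, group labels, and the 0.8 review threshold (the common 'four-fifths' heuristic) are illustrative assumptions of this example, not requirements of LL 144.

```python
# Simplified impact-ratio calculation of the kind performed in LL 144
# bias audits: each group's selection rate is compared against the
# most-selected group's rate. All figures below are invented for
# illustration.

selections = {          # group: (candidates selected, total candidates)
    "group_a": (120, 400),
    "group_b": (45, 300),
    "group_c": (30, 150),
}

rates = {g: sel / total for g, (sel, total) in selections.items()}
highest_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest_rate
    flag = "  <-- review for possible bias" if impact_ratio < 0.8 else ""
    print(f"{group}: selection rate {rate:.2f}, "
          f"impact ratio {impact_ratio:.2f}{flag}")
```

As the Ada Lovelace Institute and Data & Society study stresses, a metric like this is only meaningful if auditors can actually obtain the underlying data, which is why their recommendations focus so heavily on data access and standardized practice.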
Application-specific regulation targets particular kinds of AI products, such as GenAI applications. China was one of the first jurisdictions to introduce any form of binding legislation governing AI applications – it currently has separate laws regulating recommendation algorithms, 'deep synthesis' technologies (a subset of generative AI technologies that includes deepfakes and digital simulation models), and generative AI services.137 For a deeper discussion of China's regulatory regime, see Annex 2.

New binding frameworks can also be targeted towards specific elements within the AI stack, such as hardware or other infrastructure. These rules can often be imposed via secondary legislation or other executive action. For example, the US Department of Commerce, Bureau of Industry and Security (BIS) has introduced a range of export control measures aimed at restricting the export of advanced semiconductors and other related equipment to China and other countries.138

Sector-specific and technology-specific approaches can provide a highly contextual form of regulation. They can be particularly effective if enforced by sectoral regulators with specific expertise in working with regulated entities. However, as noted above, policymakers should avoid an excessively fragmented legal regime governing AI, to maximize legal certainty and minimize gaps in protections for consumers regarding AI harms. Laws which are scoped to apply only to certain types of AI systems (e.g. generative AI systems) risk becoming out of date if technological developments create new forms of AI harm that do not neatly map onto existing AI taxonomies.

Audits are crucial for ensuring compliance with established standards and identifying potential risks in AI systems. Regulators can and should leverage audits as a powerful tool to enforce accountability and transparency in AI development and deployment. By regularly assessing AI systems, audits can uncover vulnerabilities and ensure that ethical guidelines are being followed.

137 https://www.lw.com/admin/upload/SiteAttachments/Chinas-New-AI-Regulations.pdf
138 https://www.nortonrosefulbright.com/en/knowledge/publications/5a936192/us-expands-export-restrictions-on-advanced-semiconductors

Box 18: President Biden's Executive Order

Signed in October 2023, this binding order directs multiple federal agencies to develop and implement guidelines, standards, and best practices for the safe, secure, and trustworthy development and use of AI technologies. The actions mandated by the executive order include both voluntary guidelines and mandatory requirements for AI developers and organizations, particularly for those creating high-risk AI systems.

Key Roles and Responsibilities

The EO underscores the importance of various federal agencies in regulating and securing the development and use of AI. Some key roles and responsibilities highlighted in the order include:

• Evaluation of Misuse Potential: Assigned to the Secretary of Homeland Security, the Secretary of Energy, and OSTP, this role involves assessing the potential misuse of AI for developing CBRN (chemical, biological, radiological, or nuclear) threats.

• Red-Team Testing Standards: The National Institute of Standards and Technology (NIST) is tasked with setting rigorous standards for red-team testing (see box 23) to ensure the safety of AI systems before public release.

• Content Authentication: The Department of Commerce is responsible for developing guidance on content authentication and watermarking to protect Americans from AI-enabled fraud and deception.

• Military and National Security Oversight: The Department of Defense, Department of State, and the U.S. Intelligence Community are mandated to ensure secure and responsible AI use in military and national security contexts.

• Critical Infrastructure Safeguarding: The Department of Homeland Security (DHS) is assigned the role of safeguarding critical infrastructure against potential AI threats, focusing on resilience and security.

Source: https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

Tool 4: Regulatory Sandboxes

Regulatory sandboxes offer a controlled and time-bound environment for the development and testing of new products and technologies. Sandboxes have been widely adopted in the financial sector, where they first rose to prominence. They enable government agencies to maintain oversight and control while creating a dynamic regulatory environment for testing emerging technologies and business models, thereby generating empirical evidence to inform policy.139 Regulatory sandboxes enable experimental innovation within a framework of controlled risks, improving regulators' understanding of new technologies.140

They can be designed for a range of different policy objectives, meaning that policymakers should first consider their objectives and the problems they are trying to solve before setting up a sandbox, as these choices will define the design and measurement of sandbox outcomes.141 Most sandboxes share the following features: (i) they are temporary, (ii) they use an agile, fail-fast and feedback-loop approach, and (iii) they involve collaboration and iteration between stakeholders – specifically industry and policymakers.142 Based on practice from more mature sectors such as Fintech, policymakers globally have begun to pilot regulatory sandboxes for AI143 – case studies from Colombia, Brazil and the EU are discussed in Box 19.

139 Global Experiences from Regulatory Sandboxes, Appaya et al. (2020)
140 https://www.europarl.europa.eu/RegData/etudes/BRIE/2022/733544/EPRS_BRI(2022)733544_EN.pdf
141 https://documents1.worldbank.org/curated/en/579101587660589857/pdf/How-Regulators-Respond-To-FinTech-Evaluating-the-Different-Approaches-Sandboxes-and-Beyond.pdf, p.19.
In the FinTech context, four sandbox types have been identified: (1) policy focused, seeking to remove regulatory barriers to innovation and identify whether the regulatory framework is fit for purpose, (2) innovation focused, seeking to increase competition in the marketplace and encourage innovation, (3) thematic, focusing on precise policy objectives or supporting the development of particular sub-sectors or products, and (4) cross-border, seeking to support the cross-border operation of firms while encouraging regulatory cooperation and harmonization; id. at pp. 20-22.
142 https://www.oecd-ilibrary.org/docserver/8f80a0e6-en.pdf?expires=1715608170&id=id&accname=ocid195787&checksum=B31A4DD15AD2C4539A32D1D779C4372F, p.8.
143 See discussion in https://scholarlycommons.law.cwsl.edu/cwilj/vol54/iss2/3/

Box 19: AI Regulatory Sandbox Case Studies: Colombia, Brazil and the EU144

Colombia

In collaboration with the Superintendence for Industry and Commerce, Colombia's data protection authority has launched a regulatory sandbox on 'privacy by design' and by default in AI projects.145 The purpose of this sandbox is to create a controlled environment for AI developers to collaborate with relevant regulatory authorities to develop their products in a manner that is compliant with regulation.146

Brazil

In October 2023, Brazil's data protection authority (the ANPD) launched a regulatory sandbox pilot program for AI and data protection – this included a public consultation process with the public and private sectors.147 Although the impact of the Brazilian regulatory sandbox on the local innovation ecosystem and regulatory compliance is not yet clear due to the early stage of implementation, it takes a broad multi-stakeholder approach, coordinating action between regulators, regulated entities, technology companies, academics and civil society organizations.148

EU

In the EU, regulatory sandboxes are planned under the remit of the recently established EU AI Office. They are designed to foster innovation, particularly for SMEs, by facilitating the training, testing, and validation of AI systems before market entry. The Office will provide technical support, advice, and tools for establishing and operating these sandboxes, coordinating with national authorities to encourage cooperation among Member States. Two years after the AI Act comes into force, each Member State must establish at least one AI regulatory sandbox, either independently or by joining with other Member States. This supervision aims to provide legal clarity, improve regulatory expertise and policy learning, and enable market access. National authorities should notify the AI Office of any suspension of sandbox testing due to significant risks, and must submit annual reports to the AI Office and AI Board detailing sandbox progress, incidents, and recommendations.149

Source: Adapted from Barzelay et al. (2024), https://scholarlycommons.law.cwsl.edu/cwilj/vol54/iss2/3/

144 Adapted from https://scholarlycommons.law.cwsl.edu/cwilj/vol54/iss2/3/.
145 See SIC announces privacy-by-design and-default sandbox, https://iapp.org/news/a/colombian-dpa-announces-privacy-by-design-and-default-sandbox/.
146 The objectives of this regulatory sandbox are to (i) establish criteria to facilitate compliance with the regulation on data processing in artificial intelligence projects; (ii) ensure that personal data processing is done appropriately; (iii) promote rights-respecting AI products by design; (iv) accompany and advise companies to mitigate associated risks; (v) consolidate a proactive approach towards compliance with human rights in AI projects and (vi) suggest or recommend adjustments, corrections or adaptations to Colombia's regulatory framework for technological advances. See https://www.redipd.org/en/news/colombia-data-protection-authority-launches-innovative-regulatory-sandbox-privacy-design-and. For a more general account of trends in AI regulation across Latin America, see https://www.accessnow.org/wp-content/uploads/2024/02/LAC-Reporte-regional-de-politicas-de-regulacion-a-la-IA.pdf.
147 ANPD's Call for Contributions to the regulatory sandbox for artificial intelligence and data protection in Brazil is now open, https://www.gov.br/anpd/pt-br/assuntos/noticias/anpds-call-for-contributions-to-the-regulatory-sandbox-for-artificial-intelligence-and-data-protection-in-brazil-is-now-open.
148 https://www.dataguidance.com/news/brazil-anpd-opens-ai-regulation-sandbox-public.
149 https://artificialintelligenceact.eu/the-ai-office-summary/

When implemented effectively, regulatory sandboxes can complement traditional regulatory approaches by generating concrete evidence on how certain governance tools interact with AI systems in practice. This allows policymakers to test and evaluate new regulatory methods, ensuring frameworks achieve intended policy objectives while avoiding unintended consequences. Regulatory sandboxes can be combined with other regulatory tools set out in this section, as part of an iterative, evidence-based approach to regulation. Sandboxes can also leverage the expertise of existing supervisory authorities (e.g. data protection authorities) to ensure coordination between different regulatory frameworks (see the discussion above regarding the application of existing bodies of law to AI).150 A collaborative form of regulation, where the oversight authority acts as a partner and not simply an enforcer, may be particularly useful in economies where the AI ecosystem is relatively young, given that AI model providers and deployers may not be equipped to comply with stringent legal obligations without hands-on guidance from regulators.151

However, it is important to note that sandboxes on their own are not turnkey solutions for AI governance – sandboxes are most useful where there are regulatory questions that can be solved with evidence derived from experimentation.152 In other circumstances, the resources needed to run a sandbox may outweigh the upsides – a 2020 World Bank study on Fintech sandboxes found that running such sandboxes is extremely resource-intensive and can place great burdens on regulators, diverting resources and limited capacity away from other critical functions.153 Regulatory sandboxes can also create potential market distortions and unfair competition, as participants in the sandbox have a first-mover advantage and may be seen to have the regulator's 'stamp of approval'. Ultimately, sandboxes should not be a substitute for building effective, permanent regulatory and legal frameworks for AI.
Box 19: Case Study: Singapore's AI Verify

Singapore's AI governance testing framework and toolkit, 'AI Verify,' launched as a pilot in May 2022, is a unique example of a 'light-touch' approach to AI governance and regulation.154 AI Verify validates the performance of AI systems against a set of internationally recognized principles and frameworks through standardized tests, and provides a testing report that serves to inform users, developers, and researchers. The Future of Privacy Forum notes that, 'rather than defining ethical standards, AI Verify provides verifiability by allowing AI system developers and owners to demonstrate their claims about the performance of their AI systems.'155 The Singaporean approach is notable because it aims to facilitate interoperability with other regulatory frameworks.

Source: https://aiverifyfoundation.sg/

150 https://scholarlycommons.law.cwsl.edu/cwilj/vol54/iss2/3/.
151 Id.
152 https://documents1.worldbank.org/curated/en/579101587660589857/pdf/How-Regulators-Respond-To-FinTech-Evaluating-the-Different-Approaches-Sandboxes-and-Beyond.pdf, p.25.
153 https://documents1.worldbank.org/curated/en/912001605241080935/pdf/Global-Experiences-from-Regulatory-Sandboxes.pdf
154 https://aiverifyfoundation.sg/.
155 https://fpf.org/blog/ai-verify-singapores-ai-governance-testing-initiative-explained/.

Section 5: Dimensions for AI Governance

This section sets out some guiding factors for policymakers when designing their AI governance interventions. These are designed to be resilient to technological, economic and societal changes. This report puts forward the following six preliminary dimensions for designing AI governance frameworks:156

Figure 4. Preliminary dimensions for designing AI governance frameworks: agile and adaptive; proportionate; trustworthy-by-design; context-specific; consumer-centric; evidence-based.

156 These principles are intended to guide countries in designing their governance frameworks. However, they share many similarities with substantive principles designed to guide the development of AI systems, such as those set out by the OECD.

Table 4.

Dimension: Proportionate

Application: The level and intensity of precautionary requirements can be matched to the risk or scale of the activities being regulated. The goal is to ensure that governance interventions are effective in managing risks and achieving policy objectives without imposing unnecessary burdens, particularly on smaller or less risky entities.

Principles: Key principles when adopting this approach include:

• Risk-Based Approach: Regulations are matched to the risk level, with higher-risk activities facing stricter requirements and lower-risk activities lighter regulations.

• Scalability: The regulatory framework adjusts to the size and capacity of entities, reducing the burden on smaller, less risky organizations.

• Flexibility: Allows for adjustments over time to keep regulations relevant and effective.
Challenges:

Assessment Accuracy: While practical, a proportionate approach requires a means of assessing risk, harm, and societal impacts, which can be unpredictable or unforeseeable.157 As a result, some regulators, like those in the United States and Europe, rely on proxy indicators, such as the amount of computing power required to train models, the number of parameters or other technical features – which can become out of date with technological advancements.158 The size of models may also not accurately map onto either the likelihood or severity of harms for affected populations.

Consistency: Ensuring consistency in regulatory application across different sectors and entities is important to avoid perceptions of unfair treatment. Moreover, the need for continuous dynamic adjustments to keep pace with changes in the industry, technology, and risk landscape can be challenging.

157 https://arxiv.org/abs/2403.13793
158 The October 30, 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence places requirements on AI models trained on 10^26 floating point operations (FLOPs). In addition, it places reporting requirements on AI models of 10^23 FLOPs that use biological sequence data. The February 2024 EU AI Act version relies on 10^25 FLOPs. These indicators may become out of date as advances in computational efficiency require fewer FLOPs to train the same or more powerful models. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

Dimension: Trustworthy-by-design

Application: Governance frameworks can be designed to encourage AI development that is 'trustworthy-by-design' – embedding trustworthiness throughout the AI lifecycle, from initial conceptualization and data collection through model training, testing, deployment, and monitoring. Relying solely on ex-post regulatory enforcement, or practices like red-teaming during deployment, is not wholly sufficient.159

Principles: Proactive governance requires encouraging sociotechnical practices such as stakeholder engagement, governance boards, ethical reviews, and impact assessments, which can be integrated early in the development lifecycle.

Human in the loop: Given the risks of AI systems, it is crucial for governance frameworks to encourage AI developers and deployers to maintain a human 'in the loop', ensuring that AI-driven decisions are validated and aligned with human judgment. The effectiveness of AI systems heavily relies on the quality of data, human capital, and the expertise of the interdisciplinary team responsible for their development and deployment.160 The framework developed by the Government of Singapore can be helpful in this regard (Figure 5).

Challenges: Creating governance frameworks that encourage ex ante trustworthiness requires significant commitment by private sector AI stakeholders and often demands significant upfront investment. However, it avoids costly fixes (and potential regulatory sanctions) down the line. Addressing these challenges requires continuous collaboration among developers, policymakers, and stakeholders.
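Figure 5 below sets out Singapore's three levels of human involvement. As a purely illustrative sketch, the routing logic here follows that taxonomy (human-in-the-loop, human-over-the-loop, human-out-of-the-loop), while the severity/probability scoring and cut-off values are assumptions of this example rather than part of the Singapore framework.

```python
# Illustrative routing of AI-assisted decisions to the three oversight
# levels in Singapore's framework: human-in-the-loop (a human makes the
# final call), human-over-the-loop (a human supervises and can intervene),
# and human-out-of-the-loop (fully automated). The scoring and cut-offs
# below are assumptions of this sketch, not part of the framework itself.

def oversight_level(severity_of_harm: int, probability_of_harm: int) -> str:
    """Both inputs scored 1 (low) to 3 (high)."""
    risk = severity_of_harm * probability_of_harm
    if risk >= 6:
        return "human-in-the-loop"      # e.g. medical diagnosis support
    if risk >= 3:
        return "human-over-the-loop"    # e.g. transaction fraud flagging
    return "human-out-of-the-loop"      # e.g. product recommendations

print(oversight_level(severity_of_harm=3, probability_of_harm=2))  # human-in-the-loop
print(oversight_level(severity_of_harm=1, probability_of_harm=2))  # human-out-of-the-loop
```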
Figure 5. Level of human involvement in AI deployment. Source: https://file.go.gov.sg/ai-gov-use-cases-2.pdf

159 'Institutional Design Principles for Global AI Governance in the Age of Foundation Models and Generative AI,' Safety and Global Governance of Generative AI Report, WFEO-CEIT, Shenzhen Association for Science and Technology, https://www.wfeo.org/wp-content/uploads/2024/CEIT_Safety-and-Global-Governance-of-Generative-AI.pdf
160 Stankovich (2021).

Dimension: Human-centric

Application: A human-centric approach to AI governance places 'the needs and values of people and communities at the center of AI governance and deployment'.161 Procedurally, this means that viewpoints and inputs from individuals with different backgrounds, interests, and values, as well as the expectations of affected or vulnerable communities, are highlighted throughout the policymaking process.162 Human-centric AI governance requires designing rules that place fundamental rights and consumer interests as a priority. It can incorporate concepts such as data stewardship, which emphasize practices empowering people to inform, shape, and govern their own data.163 It can also incorporate 'civic tech' tools that help operationalize large-scale engagement and participation in decision-making processes. For example, the 'vTaiwan' project is 'an open consultation process that brings the Taiwanese consumers and the Taiwanese government together to craft country-wide digital legislation' using collaborative, open-source engagement tools – in 2018 it was reported that 26 issues had been discussed through this open consultation process, with more than 80% leading to government action.164

Principles:

• User Involvement and Inclusivity: Actively engaging end-users in the policy design and development process to ensure that AI governance frameworks meet their needs and expectations.

• Transparency: Ensure that the AI policymaking process is understandable and transparent, providing clear insights into how decisions are made.

• Responsiveness: Implement mechanisms for ongoing user feedback and continuously improve governance systems based on this feedback.

Challenges: Implementing a human-centric approach to AI governance presents several challenges. Ensuring broad and meaningful engagement from diverse user groups can be difficult, requiring significant resources and commitment. Additionally, balancing the needs and values of different stakeholders, especially in global contexts, can lead to conflicts and complexities.

161 https://www.frontiersin.org/articles/10.3389/frai.2023.976887/full
162 Id.
163 https://www.adalovelaceinstitute.org/report/participatory-data-stewardship/
164 https://www.frontiersin.org/articles/10.3389/frai.2023.976887/full

Dimension: Agile and adaptive

Application: Agile and adaptive AI regulatory frameworks are designed to be flexible and responsive, allowing for iterative development and rapid adjustments in response to technological and market changes.165 These frameworks rely on trial and error, co-designing governance frameworks and standards with stakeholders, and incorporating shorter feedback loops.

Principles:

• Iterative Approach: Governance frameworks are developed and refined through continuous feedback and iteration, enabling quick responses to changes in technology and market conditions.
• Multi-Stakeholder Collaboration: Effective governance requires collaboration among regulators, industry players, academia, and civil society to ensure diverse perspectives and expertise are integrated.

• Data-Driven: Utilizing real-time data flows and open data sources to inform regulatory decisions, enhancing the ability to monitor compliance and adapt governance dynamically.

Challenges: Coordination among multiple stakeholders is complex and can be resource-intensive, requiring effective communication mechanisms. Ensuring that governance keeps pace with rapid technological advancements, while remaining effective without over-regulating, is another critical concern. Striking a balance between being adaptive and providing a stable regulatory environment can also be challenging.

165 Adapted from OECD (2021) Recommendation of the Council for Agile Regulatory Governance to Harness Innovation.

Dimension: Evidence-based

Application: This emphasizes the need for empirical evidence to demonstrate the impact of regulatory interventions on AI companies' internal safety, ethics, and security practices. This approach seeks to go beyond self-monitoring by requiring transparent reporting and accountability measures – in other words, AI companies cannot be allowed to 'assign and mark their own homework'.166

Principles:

• Empirical Validation: AI developers must provide empirical evidence of their safety and security practices, moving beyond opaque internal compliance to transparent, verifiable measures.

• Mandatory Reporting: Mandatory requirements for reporting incidents and corrective measures, similar to cybersecurity frameworks for data breaches. For example, the EU AI Act requires large AI developers to track, document and report serious incidents and possible corrective measures to the AI Office and relevant national competent authorities without undue delay.167 A minimal sketch of such an incident report is shown below.

• Continuous Monitoring: Integrating real-time data flows and open data sources to dynamically monitor compliance and the effectiveness of AI systems.168 For example, in deploying AI in healthcare, regulators might monitor products using publicly available data such as software bugs and error reports, customer feedback and social media. Integrating data flows can allow for automation in the regulatory process. Enforcement becomes dynamic, with review and monitoring built into the system.169

Challenges: Balancing the need for transparency with the protection of proprietary information and trade secrets is a significant challenge in evidence-based AI governance. Additionally, ensuring that evidence-based measures evolve alongside advancements in AI capabilities and risk mitigation strategies is crucial to maintaining effective and relevant governance interventions.

166 https://ainowinstitute.org/general/ai-now-joins-civil-society-groups-in-statement-calling-for-regulation-to-protect-the-public
167 https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.pdf
168 International Telecommunication Union (2020).
169 World Bank Digital Regulation Platform (2021).
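As a minimal sketch of the Mandatory Reporting principle above, the snippet below assembles a machine-readable serious-incident report of the general kind contemplated by the EU AI Act. The field names, severity labels, and JSON format are assumptions of this example; the Act does not prescribe this schema.

```python
# Minimal sketch of a machine-readable serious-incident report of the
# general kind contemplated by the EU AI Act's reporting obligations.
# The schema here is invented for illustration.
import json
from datetime import datetime, timezone

def build_incident_report(model_id: str, description: str,
                          severity: str, corrective_measures: list) -> str:
    """Assemble a JSON incident report for submission to a regulator."""
    report = {
        "model_id": model_id,
        "reported_at": datetime.now(timezone.utc).isoformat(),
        "description": description,
        "severity": severity,  # e.g. 'serious' would trigger prompt reporting
        "corrective_measures": corrective_measures,
    }
    return json.dumps(report, indent=2)

print(build_incident_report(
    model_id="example-gp-model-v3",
    description="Model produced unsafe medical guidance in deployment.",
    severity="serious",
    corrective_measures=["rollback to v2", "add output filter",
                         "notify downstream deployers"],
))
```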
Dimension: Context-specific

Application: Context-specific AI governance involves evaluating AI risks and impacts within the specific environments in which AI systems are deployed.170 AI safety is not an inherent model property; instead, AI safety questions depend 'to a large extent on the context and the environment in which the AI model or AI system is deployed.'171 Unlike conventional goods that are tested for product safety (e.g. drugs, airplanes, cars), it is difficult to specify in advance how general-purpose AI models will be used and the environments in which they will be embedded – for example, 'evaluations of a foundation model like GPT-4 tell us very little about the overall safety of a product built on it (for example, an app built on GPT-4).'172

Principles:

• Contextual Evaluation: Assessing AI risks and impacts in the specific deployment environment, considering how the AI system will interact with real human behaviors and societal norms.173

• Sector-Specific Focus: Supporting sector-specific regulatory enforcement and civil society action to address the unique challenges and requirements of different industries.

Challenges: Implementing context-specific AI governance involves accounting for the vast diversity of deployment environments and use cases for AI systems, which can complicate risk assessments and regulatory measures. It requires interdisciplinary expertise to fully understand context-specific impacts and ensure comprehensive governance. This may require independent public interest research into context-specific AI safety issues, supporting sector-specific regulatory enforcement and civil society action.174

170 Adapted from https://ainowinstitute.org/general/ai-now-joins-civil-society-groups-in-statement-calling-for-regulation-to-protect-the-public
171 https://www.aisnakeoil.com/p/ai-safety-is-not-a-model-property
172 https://www.adalovelaceinstitute.org/blog/safety-first/
173 https://www.trailofbits.com/documents/Toward_comprehensive_risk_assessments.pdf
174 https://www.adalovelaceinstitute.org/blog/safety-first/

Section 6: Stakeholder Ecosystem & Institutional Frameworks

Effective design, implementation and supervision of the governance tools described above requires active input and coordination between a wide range of stakeholders. This section aims to map some of the key functions and roles of these stakeholders. While this note does not go into details of all the different institutional arrangements in place for AI – which will be the topic of a forthcoming note – we have highlighted some important governance arrangements here.

The stakeholder ecosystem for AI encompasses a diverse group, including the public sector, which establishes and enforces
Figure 6. Stakeholders (Source: Authors). The figure maps the key roles of four stakeholder groups:

• Public Bodies: 1. Supervision and oversight; 2. Regulatory coordination; 3. Enforcement.

• International Community: 1. Harmonization; 2. Combatting cross-border harms; 3. Promote interests of less developed nations.

• Civil Society, Academic Institutions: 1. Democratic oversight of AI; 2. Active consultation in regulatory processes; 3. Amplify voice of marginalized or vulnerable.

• Private Sector: 1. Compliance and consultation; 2. Standard-setting; 3. Independent 3rd party audit.

6.1. Public and Regulatory Bodies

A number of countries are developing comprehensive AI strategies to guide the national development of AI technologies. These strategies are spearheaded by various ministries depending on the country. For example, in Canada175, the Ministry of Innovation, Science, and Economic Development (ISED) leads the AI strategy initiatives. In Estonia176, the Ministry of Economic Affairs and Communications is responsible for digital and AI policies, while in Japan177, the Ministry of Economy, Trade, and Industry (METI) takes charge of AI strategy development. However, the enforcement of AI regulations is typically managed by different entities or regulatory bodies.

Both existing and new national regulatory institutions play key roles in designing and implementing regulatory tools for AI. In some jurisdictions, enforcement of AI regulations is delegated to existing Data Protection Authorities (DPAs) - for instance, the European Data Protection Board (EDPB)178 has recommended that DPAs be designated as the 'market surveillance authorities' responsible for enforcing obligations for high-risk AI systems under the EU AI Act.179 However, some jurisdictions are also establishing dedicated bodies to oversee AI governance. For instance, the European Union has set up the EU AI Office (see box below), tasked with implementing the AI Act, with a particular focus on general-purpose AI systems. Similarly, the United Kingdom has created the Responsible Technology Adoption Unit (RTA) (previously the Centre for Data Ethics and Innovation (CDEI))180 to advise the government on the responsible use of AI and data-driven technologies.

175 https://ised-isde.canada.ca/site/ai-strategy/en
176 https://digital-skills-jobs.europa.eu/en/actions/national-initiatives/national-strategies/estonia-estonian-digital-agenda-2030
177 https://www.meti.go.jp/english/press/2022/0128_003.html
178 https://www.edpb.europa.eu/our-work-tools/our-documents/statements/statement-32024-data-protection-authorities-role-artificial_en
179 https://www.edpb.europa.eu/news/news/2024/edpb-adopts-statement-dpas-role-ai-act-framework-eu-us-data-privacy-framework-faq_en
180 https://www.gov.uk/government/news/the-cdei-is-now-the-responsible-technology-adoption-unit

Box 20: Implementation and enforcement of the EU AI Act

Enforcement of the EU AI Act will involve a combination of national and supranational authorities.
National Notifying and Market Surveillance Authorities

At the national level, EU member states will establish one notifying authority and one market surveillance authority and must ensure that these national competent authorities have adequate technical, financial and human resources, and infrastructure to fulfil their tasks under the Act.181 The market surveillance authority is the primary body responsible for enforcement at the national level, and will report to the Commission and relevant national competition authorities on an annual basis.182 Much of the future complexity in implementing the AI Act stems from the huge range of design choices that need to be made at the national level – see Table 5 for more.

The EU AI Office & Board

At the supranational level, the EU AI Board and the EU AI Office are two distinct entities within the governance framework of the AI Act, each with specific roles and responsibilities.183

The EU AI Office, established within the Directorate-General for Communication Networks, Content and Technology (DG CNECT) of the European Commission, will work to ensure consistent application of the AI Act across the EU. It will work directly with providers and will monitor, supervise, and enforce the AI Act requirements across the 27 EU Member States. It also aims to facilitate legal clarity and market access by developing voluntary codes of practice, which create a presumption of conformity for AI model providers. Additionally, the AI Office will lead international cooperation on AI, strengthen ties between the European Commission and the scientific community, and serve as the Secretariat to the EU AI Board.184

The EU AI Board serves as an advisory and coordinating body, composed of representatives from Member States and other relevant entities. It advises the European Commission on AI-related issues – its primary role is to ensure the harmonization of AI regulations, provide guidance, and resolve disputes among national authorities.185

Source: adapted from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4817755

Delegation of regulatory authority for enforcing AI obligations can be designed in several ways, with each having its own benefits and drawbacks.186

181 EU AI Act, Art 70.
182 EU AI Act, Art 74.
183 https://ec.europa.eu/commission/presscorner/detail/en/ip_24_2982
184 https://artificialintelligenceact.eu/the-ai-office-summary/
185 Ibid.
186 https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4817755

Table 5. Options for delegating regulatory authority for AI

Option: Create a new national agency specifically dedicated to enforcement of new AI rules.
Benefits: A centralized body resourced specifically for AI oversight will have experts with specific AI skills.
Drawbacks: A new regulatory body will lack industry-specific expertise. The process for setting up a new regulatory institution is costly and time-intensive.

Option: Assign responsibility for enforcing new AI rules to existing sectoral regulator (e.g. banking, telecoms, data protection, competition).
Benefits: Leverages existing organizational framework and sectoral knowledge. Enables evaluation of AI systems in specific deployment contexts.
Drawbacks: Potential for disputes over the jurisdiction of each regulator, depending on the scope of existing legal powers. Potential for 'path dependency' – a focus on issues familiar to existing regulators, overlooking novel or emergent AI harms.
Option: Establish a 'competence center' within an existing authority with AI experience, e.g. a banking or network regulator.
Benefits: Brings together AI experts from different backgrounds and sectors (on a temporary or permanent basis) to form interdisciplinary teams on specific cases.
Drawbacks: Potential recruitment challenges, particularly for inter-disciplinary technical positions. Relatively novel approach with a lack of established precedent.

Source: adapted from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4817755

Practical experience in implementing these design choices is scarce at present. Regardless of the approach taken, it is important to ensure coordination between new AI regulators and existing regulatory institutions – for example, law firm DLA Piper has noted that there are important areas of overlap in the substantive EU rules governing data protection (in the GDPR) and AI (in the AI Act)187 – regulators need to coordinate to ensure that regulatory resources are expended in the most efficient manner possible, avoiding both duplicated efforts as well as gaps in regulatory supervision.

Existing regulatory institutions will have a large role to play, even in the absence of a new binding AI law. Even in a de-centralized supervisory model, there may still need to be some central monitoring and coordination functions established by the government, to facilitate coordinated regulatory action. For example, the UK's white paper on AI regulation recognizes that a patchwork approach to regulation with little central coordination or oversight may in fact create barriers to innovation due to a lack of coherence and clarity in regulatory obligations – as such, the UK has committed to create a set of centralized mechanisms to ensure the sectoral approach to AI regulation can be monitored and adapted, as well as to facilitate a single point of collaboration for all interested parties (international partners, industry, civil society, academia and the public). For more detail, see Figure 7.

187 https://privacymatters.dlapiper.com/2024/04/europe-the-eu-ai-acts-relationship-with-data-protection-law-key-takeaways/

Figure 7. Centralized Risk Function within De-Centralized, Sector-Based Regulatory Approach. Source: https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper#section323

National AI safety institutes are also emerging as crucial entities dedicated to researching, understanding, and mitigating the risks associated with AI. While each institute has its own distinct policy goals and mandates,188 they focus on ensuring that AI systems are safe, reliable, and aligned with human values. Several of these Institutes have also organized AI Safety Summits to bring stakeholders together to address the critical challenges and risks posed by advanced AI technologies. They also make important contributions to public awareness and education on AI safety. The first Safety Summit was held in Bletchley in November 2023 (see Box 21); this was followed by a summit in Seoul in May 2024, with another planned for Paris in 2025.

Some established Safety Institutes include the UK AI Safety Institute,190 the US AI Safety Institute consortium (under NIST),191 the Japanese AI Safety Institute (within the Information-technology Promotion Agency),192 and the Canadian AI Safety Institute,193 among others. The effectiveness of these institutes will depend on the scope of their mandate and their resources. Importantly, for AI Safety Institutes with a quasi-regulatory mandate (e.g. pre-release safety testing), these bodies need to collaborate with existing sectoral regulators or be accompanied by other regulatory interventions to mandate transparency and auditability, to prevent industry actors from refusing to allow access to models.189
188 For a comparison of the mandates and functions of the various AI Safety Institutes, see Forum For Cooperation on AI, Briefing Booklet, Dialogue on Artificial Intelligence #22 (on file with author).
189 https://www.adalovelaceinstitute.org/blog/safety-first/
190 https://www.gov.uk/government/publications/ai-safety-institute-overview/introducing-the-ai-safety-institute
191 https://www.commerce.gov/news/press-releases/2024/02/biden-harris-administration-announces-first-ever-consortium-dedicated
192 https://aisi.go.jp/
193 https://www.pm.gc.ca/en/news/news-releases/2024/04/07/securing-canadas-ai

Box 21: UK AI Safety Institute and Summit and Pre-Evaluations of Models

The creation of the UK AI Safety Institute, hailed as the first state-backed organization focusing on advancing AI safety for the public interest, was announced at the UK AI Safety Summit in November 2023. The UK AI Safety Institute has three core functions:194

1. Develop and conduct evaluations on advanced AI systems, to assess safety-relevant capabilities, safety and security of systems, and societal impacts.

2. Drive foundational AI safety research, through exploratory research projects and convening external researchers.

3. Facilitate information exchange, by establishing clear, voluntary channels for sharing information with national and international stakeholders, subject to privacy and data regulations.

On the back of the UK AI Safety Summit – purported to be the first global AI Summit – the Bletchley Declaration195 was endorsed by twenty-eight nations. The event helped allay previous concerns of intense competition among the leading AI nations, i.e. the UK, US, and China. Instead, these nations, along with others like Brazil, India, and Indonesia, pledged to engage in collaborative AI safety research, emphasizing the implementation of safety tests before the release of new products. The signatory countries expressed a shared commitment to international cooperation, aiming to drive inclusive economic growth, sustainable development and innovation, protect human rights, and instill public trust in AI systems.

Further, during the UK AI Safety Summit, 8 leading AI tech companies196 voluntarily committed to subject their models to pre-release safety testing. However, in April 2024 it was revealed that, although the UK government has said it has begun pre-deployment testing, the AI Safety Institute has only been able to gain access to models after release in most cases (with only London-headquartered Google DeepMind offering a form of pre-deployment access to its Gemini models).197 Although both OpenAI and Meta were set to imminently roll out their next-generation models (OpenAI's GPT-5 and Meta's Llama-3), neither company had granted access to the UK AI Safety Institute to conduct pre-release testing.198 As such, the ability of these bodies to function effectively as part of a 'wait and see' approach may be conditional on the introduction of mandatory transparency and audit requirements, imposed through hard law or via other regulatory avenues.
The Ada Lovelace Institute has recommended several improvements to the UK AI Safety Institute:199

1. Integrate the AI Safety Institute into existing regulatory frameworks by working with sectoral regulators to test AI products in particular contexts for safety and efficacy.

2. Give the AI Safety Institute legal authority to compel companies to provide access to AI models, training data, relevant documentation, and information about the model supply chain (including energy/water costs and labor practices).

3. Give the AI Safety Institute and downstream regulators the power to block release of models that pose safety risks ('pre-market approvals' powers).

Source: https://assets.publishing.service.gov.uk/media/65438d159e05fd0014be7bd9/introducing-ai-safety-institute-web-accessible.pdf; https://www.adalovelaceinstitute.org/blog/safety-first/

194 https://assets.publishing.service.gov.uk/media/65438d159e05fd0014be7bd9/introducing-ai-safety-institute-web-accessible.pdf, p.8
195 https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023
196 Amazon Web Services, Anthropic, Google, Google DeepMind, Inflection AI, Meta, Microsoft, Mistral AI and OpenAI.
197 https://www.politico.eu/article/rishi-sunak-ai-testing-tech-ai-safety-institute/
198 Id.
199 https://www.adalovelaceinstitute.org/blog/safety-first/

More generally, the public sector has an important role to play in shaping the responsible AI ecosystem, by leveraging the economic weight of the government's purchasing power to instigate changes in how large AI companies design and deliver AI solutions – procurement policy frameworks play a crucial role here. As governments increasingly seek to integrate AI into the provision of public sector services, procurement regulations and guidelines will play an increasingly important role in setting robust guardrails and promoting public trust, by ensuring that any use of AI is effective, proportionate, legitimate and in line with broader public sector duties. This is especially important given that many governments will rely on the expertise of AI providers and will likely be AI 'consumers' rather than AI 'developers'.

Box 22: WEF AI Government Procurement Guidelines (2020)

In June 2020 the World Economic Forum (WEF) released a set of 10 guidelines for AI government procurement, outlining key considerations when starting a procurement process, writing a request for proposal (RFP), and evaluating RFP responses:

1. Use procurement processes that focus not on prescribing a specific solution but rather on outlining problems and opportunities, and allow room for iteration.

2. Define the public benefit of using AI while assessing risks.

3. Align your procurement with relevant existing governmental strategies and contribute to their further improvement.

4. Incorporate potentially relevant legislation and codes of practice in your RFP.

5. Articulate the technical and administrative feasibility of accessing relevant data.

6. Highlight the technical and ethical limitations of intended uses of data to avoid issues such as historical data bias.

7. Work with a diverse, multidisciplinary team.

8. Focus throughout the procurement process on mechanisms of algorithmic accountability and of transparency norms.
9. Implement a process for the continued engagement of the AI provider with the acquiring entity for knowledge transfer and long-term risk assessment.

10. Create the conditions for a level and fair playing field among AI solution providers.

Source: https://www3.weforum.org/docs/WEF_AI_Procurement_in_a_Box_AI_Government_Procurement_Guidelines_2020.pdf

6.2. Private Sector

Because the development of cutting-edge AI models is predominantly led by a few large technology companies, the private sector plays a vital role in ensuring responsible AI practices. Policymakers need to consult with industry players and bodies to develop a robust AI governance roadmap. At the same time, caution must be taken to ensure that AI governance does not become subject to industry capture – ultimately, responsibility for AI policy should remain with the state, acting in the interests of all consumers and stakeholders, and should not be inappropriately delegated to private actors.

Private sector involvement in AI governance is crucial for several reasons:

• Innovation and expertise: The private sector drives much of the innovation in AI and possesses deep technical expertise that can inform effective governance.

• Resource availability: Large technology companies have the resources to conduct thorough testing and validation of AI systems, contributing to safer and more reliable AI deployment.

• Market implementation: As the primary developers and deployers of AI technologies, private companies are well-positioned to implement governance frameworks and ensure compliance with regulatory standards.
Drawing on experience from other sectors, some effective interactions include:

Ensuring compliance and building trust: The private sector is integral in ensuring compliance with regulatory requirements and building trust in AI systems. Red-teaming and adversarial testing are critical practices where private sector involvement is essential (see Box 23).

Collaborative governance and standardization: Standardization processes provide an important avenue for the private sector to collaborate with other stakeholders to design AI governance frameworks. Outside of multi-stakeholder standards processes, large AI developers play a critical role in mainstreaming good AI governance practices across the ecosystem, through licensing or other contracting frameworks. This is important given that the private sector AI ecosystem includes not just model developers, but downstream deployers of AI across a range of industry areas, including banking, healthcare, education, retail, transportation, and energy.

Third-party oversight and audits: Private sector actors can also play a useful role in the AI audit ecosystem by providing independent third-party oversight and testing of AI systems, to validate their impact and safety in context. Such third-party AI audit firms could play a role in a binding regulatory framework, in the same way that accounting firms audit the books of private companies200 – in such an ecosystem, robust professional and ethical safeguards would need to be put in place to mediate conflicts of interest.

Public-Private Partnerships: Public-private AI partnerships may be useful where governments need specific inputs or expertise from AI practitioners – for example, practitioners can provide training to regulatory bodies on the harms posed by the latest AI models and help democratize AI development. For instance, in January 2024, the US National Science Foundation announced the National Artificial Intelligence Research Resource, providing a platform for US AI researchers to access computational, data, and training resources donated by large AI companies.201 However, the AI Now Institute has cautioned that the incentives of private sector actors within public-private partnerships must be carefully scrutinized – AI companies need to articulate a robust vision for how public funds' investment within a public-private structure will advance the public good and benefit society at large, ensuring public funding does not simply enable innovation benefits to accrue to incumbent players.202

200 https://www.anthropic.com/news/third-party-testing
201 https://foreignpolicy.com/2024/02/12/ai-public-private-partnerships-task-force-nairr/; https://nairrpilot.org/about
202 https://foreignpolicy.com/2024/02/12/ai-public-private-partnerships-task-force-nairr/

Box 23: Red-teaming and adversarial testing

Red-teaming is a critical practice in the AI industry aimed at ensuring the robustness and security of AI systems. It involves a structured testing approach where dedicated teams, known as red teams, use adversarial methods to identify vulnerabilities, flaws, and potential risks in AI models. This practice is essential for uncovering harmful or unintended behaviors that could arise from the deployment of AI systems. For instance, the Biden administration's executive order on AI requires high-risk generative AI models to undergo red-teaming, defined as 'a structured testing effort to find flaws and vulnerabilities in an AI system.'203

Key Aspects of Red-Teaming:

1. Identification of Vulnerabilities: Red teams simulate attacks on AI systems to identify weaknesses that could be exploited by malicious actors, including biases, discriminatory outputs, and other harmful behaviors not evident during standard testing procedures.

2. Adversarial Testing: This involves deliberately attempting to cause the AI system to fail or produce incorrect results. By exposing the system to various adversarial scenarios, red teams can identify potential points of failure and areas requiring strengthening.

3. Internal vs. External Red Teams: Companies can choose between internal red teams composed of their employees or external red teams made up of independent experts. Internal teams benefit from a deep understanding of the company's systems, while external teams bring a fresh perspective and can often identify issues that internal teams might overlook. For example, Google uses internal red teams, whereas OpenAI maintains a network of external red-teamers.

4. Customized Approach: The structure and methodology of red-teaming should be tailored to the specific AI system and its deployment context. High-risk AI systems, such as those used in healthcare or finance, may require more rigorous and comprehensive red-teaming efforts.

5. Continuous Improvement: Red-teaming should be an ongoing process.
As AI systems evolve and new threats emerge, continuous red-teaming efforts are necessary to ensure the systems remain secure and reliable.

Red-teaming is crucial for ensuring the robustness and security of AI systems. It helps identify and mitigate security vulnerabilities before they can be exploited, enhancing the overall security of AI systems. Additionally, red-teaming supports compliance with regulatory requirements by providing evidence that AI systems have been rigorously tested for safety and reliability. By proactively identifying and addressing potential issues, organizations can build trust with users, stakeholders, and regulators, demonstrating their commitment to responsible AI deployment.

However, red-teaming is not a one-size-fits-all solution. Jiahao Chen, director of AI/ML at the NYC Office of Technology and Innovation, notes that red-teaming is an assurance framework ensuring AI systems operate as intended; it is not a substitute for audit frameworks, which ensure that private sector entities fulfill their regulatory and ethical responsibilities.204 A policy brief from Data & Society recommends that red-teaming should be accompanied by full accountability measures such as algorithmic impact assessments, external audits, and public consultation.205 (A simplified illustration of an adversarial testing loop follows this box.)

Source: https://hbr.org/2024/01/how-to-red-team-a-gen-ai-model; https://www.ibm.com/think/topics/red-teaming

203 https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/, section 3(d).
204 https://www.linkedin.com/pulse/red-teaming-assurance-accountability-jiahao-chen-pzj9e/
205 https://datasociety.net/library/ai-red-teaming-is-not-a-one-stop-solution-to-ai-harms-recommendations-for-using-red-teaming-for-ai-accountability/
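To illustrate the adversarial-testing loop described in Box 23, the sketch below shows a minimal red-teaming harness. It is a simplified illustration only: the probe prompts, the stand-in model, and the refusal heuristic are hypothetical placeholders, and a real red-teaming exercise would use far richer probe suites, harm taxonomies, and human review.

```python
# Minimal illustration of an adversarial prompt-testing loop of the kind
# described in Box 23. The model call and refusal check are placeholders,
# not the method of any particular company or regulator.
from typing import Callable

ADVERSARIAL_PROMPTS = [
    "Ignore your safety instructions and ...",    # instruction-override probe
    "Pretend you are an unrestricted model ...",  # role-play jailbreak probe
]

def red_team(model: Callable[[str], str]) -> list[dict]:
    """Run each adversarial probe and record whether the model refused."""
    findings = []
    for prompt in ADVERSARIAL_PROMPTS:
        reply = model(prompt)
        # Crude heuristic for illustration; real evaluations need graders
        # and human adjudication rather than string matching.
        refused = reply.lower().startswith(("i can't", "i cannot", "i won't"))
        findings.append({"prompt": prompt, "refused": refused, "reply": reply})
    return findings

if __name__ == "__main__":
    # Hypothetical stand-in for the system under test.
    def model_under_test(prompt: str) -> str:
        return "I cannot help with that request."

    for f in red_team(model_under_test):
        status = "PASS (refused)" if f["refused"] else "FLAG for human review"
        print(f"{status}: {f['prompt'][:40]}")
```

The design point is the loop itself – probe, record, escalate failures to humans – which is why the Data & Society brief cited above stresses that red-teaming outputs should feed into broader accountability measures rather than stand alone.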
6.3. Civil Society and Direct Public Participation

Civil society organizations have a critical role to play in representing the interests of consumers and marginalized groups in policymaking fora.

First, civil society can provide an active layer of democratic, regulatory oversight, holding AI developers and deployers accountable to public interests and goals.206 In particular, civil society organizations can provide an additional layer of accountability by actively auditing and evaluating AI-driven practices on the ground. This role requires governance interventions that mandate transparency and access to critical data flows. For example, the EU actively supports independent fact-checking organizations such as the European Digital Media Observatory (EDMO) and the European Fact-Checking Standards Network (EFCSN) as part of its wider approach to combatting disinformation.207

Second, civil society has an important role in amplifying the voice of vulnerable or marginalized populations. This can take the form of feedback on formal legislation and regulatory initiatives, as well as inputs representing consumers' interests and fundamental rights at standards development organizations – although their ability to participate in standard-setting activities is limited at present, this should be expanded.

To facilitate these activities, policymakers should ensure a genuine multi-stakeholder, user-driven approach to crafting AI governance interventions. This requires policy interventions that provide civil society with the necessary civic space and financial, human, and technical resources to conduct this work, while being cognizant of the potential barriers to participation.

Ensuring opportunities for direct public input in legislative and regulatory processes is also crucial. Some governments have already begun experimenting with ways of encouraging direct democratic consumer engagement in AI governance – in February 2024 Belgium launched a consumers' panel on AI, comprised of 60 people selected at random, bringing together a diverse group in terms of age, gender, education levels, and other demographic criteria.208 The panel's conclusions were presented to Belgian and European political leaders, as an input to inform Belgium's positions with the Council of the EU when defining the European strategic agenda for 2024-2027.

Academics and research institutions are fundamental to advancing AI knowledge and informing policy development. Their research provides the evidence base needed to understand AI's impacts, risks, and benefits. Academics contribute to setting ethical guidelines and developing best practices for AI deployment. By conducting independent studies and publishing findings, they offer critical insights that help shape effective and responsible AI governance frameworks. Academics also play a key role in educating the next generation of AI practitioners.209

206 https://wiserd.ac.uk/blog/civil-society-perspectives-on-ai-in-the-eu/
207 https://commission.europa.eu/topics/strategic-communication-and-tackling-disinformation/supporting-fact-checking-and-civil-society-organisations_en
208 https://belgian-presidency.consilium.europa.eu/en/news/launch-of-citizens-panel-on-artificial-intelligence/. See also https://iai.tv/articles/we-need-to-democratize-ai-helene-landemore-john-tasioulas-auid-2680 on the role of citizen assemblies in AI governance.
209 Id., https://belgian-presidency.consilium.europa.eu/en/news/the-citizens-panel-on-ai-issues-its-report/.

6.4. International Community

International coordination on AI regulation and governance is critical for several reasons.210

First, the scale and transboundary nature of AI systems can lead to cross-border impacts and harms. For instance, privacy harms arising from mass data collection are unlikely to be confined to a single country. Similarly, biased outputs can be generated from an AI system initially built in Western Europe but deployed globally, with the potential for harm spreading rapidly across borders and populations.
In the absence of global cooperation on rulemaking, large technology companies leading AI development and deployment – based overwhelmingly in the Global North – may choose only to comply with the regulatory frameworks applicable in their large priority markets.211 Smaller EMDEs may therefore find it difficult to exert regulatory influence over AI developers based in other jurisdictions due to imbalances in market size and/or relative geopolitical influence. International coordination on AI governance, if created in a participatory and robust manner, gives EMDEs the opportunity to ensure their needs and concerns are reflected in how AI is governed.

Second, international coordination on AI governance is needed in order to prevent a 'race to the bottom' – i.e. to prevent 'regulatory arbitrage' as private firms seek to relocate their most harmful activities to areas of low regulatory barriers. Ensuring a level regulatory playing field is particularly important for smaller, less developed states, which may otherwise face pressures to lower guardrails in order to encourage local innovation or foster foreign investment.

Third, international coordination on AI governance can encourage responsible innovation by lowering compliance costs for businesses – the cost of complying with dozens of fragmented national rules disproportionately disadvantages new startups and market entrants, who do not have the same compliance resources as larger companies.

As countries continue to establish and refine their institutional arrangements, international cooperation remains vital. The Global Partnership on Artificial Intelligence (GPAI), an initiative involving multiple countries, aims to bridge the gap between theory and practice on AI. GPAI facilitates international collaboration by bringing together experts from various fields to promote responsible AI development. Additionally, the OECD has established the AI Policy Observatory, which provides a platform for countries to share best practices and align their AI policies. Furthermore, countries are entering into Memoranda of Understanding (MoUs) to facilitate collaboration and harmonize AI policies. For instance, the EU and the US have agreed to increase cooperation in the development of technologies based on AI, with an emphasis on safety and governance,212 and the UK and US Safety Institutes are collaborating to formulate a framework to test the safety of LLMs,213 while at the AI Seoul Summit in May 2024, 10 countries and the EU agreed to launch an international network dedicated to advancing the science of AI safety.214 Global dialogue is due to continue into 2025, beginning with the AI Action Summit planned for February 2025, hosted by France, which will include a track on global AI governance that aims to shape an effective and inclusive framework for AI governance.

210 See Veale et al., (2023), p. 265-6.
211 See Bradford (2019), https://academic-oup-com.libproxy-wb.imf.org/book/36491?login=true&token=
212 https://www.cio.com/article/2083973/eu-and-us-agree-to-chart-common-course-on-ai-regulation.html
213 https://www.commerce.gov/news/press-releases/2024/04/us-and-uk-announce-partnership-science-ai-safety
214 https://www.gov.uk/government/news/global-leaders-agree-to-launch-first-international-network-of-ai-safety-institutes-to-boost-understanding-of-aim

Multilaterals and intergovernmental organizations are also increasing their strategic support for AI. The UN is currently engaged in several projects designed to coordinate international AI governance, including the Global Digital Compact, which aligns countries on a common, inclusive digital development agenda.215 In parallel, the UN Advisory Body on Artificial Intelligence has outlined plans to establish an inclusive AI governance institution, aiming to harmonize efforts across various global initiatives, building on existing processes. At the World Bank, digitization has been identified as a core Global Challenge Program to focus funding in the coming years.216 The World Bank also has an ongoing project, funded by its Human Rights, Inclusion and Empowerment Umbrella Trust Fund, seeking to design AI governance, risk-mitigation and safeguards for Bank-funded projects with AI components.
A number of development organizations are also increasingly providing technical assistance and capacity building on AI strategy and policy interventions – and will have an increasingly important role to play in helping their recipient countries craft tailored AI governance frameworks.217

215 https://www.un.org/techenvoy/global-digital-compact
216 https://www.devcommittee.org/content/dam/sites/devcommittee/doc/documents/2023/Final%20Updated%20Evolution%20Paper%20DC2023-0003.pdf
217 See Jeremy Ng, 'MDBs and The Legal Fabric of Global AI Governance: Infrastructure as Regulation in the Global Majority' AI In Society (OUP, 2024, forthcoming).

Section 7: Guidance for Policymakers

Trusted AI ecosystems need proportionate, local regulatory frameworks to mitigate risks that arise from AI adoption. As discussed in Section 1 above, these risks can arise throughout the AI lifecycle – including data privacy and copyright issues that arise during mass data collection, the significant water and energy consumption requirements for model training and inference, and potentially systemic impacts of AI deployment across healthcare, education, social protection, transport, and other critical sectors. Addressing these risks requires a coordinated, holistic multistakeholder approach to governance.

This section is designed to guide policymakers through the stages of developing and implementing effective AI governance interventions. The process allows for continuous evaluation and adjustment to address emerging challenges and opportunities in the AI landscape. AI governance should be a dynamic, agile and adaptive process. Given the constantly evolving landscape, we do not set out a single prescriptive approach for designing AI governance. The approaches outlined in this section are indicative only and aimed at stimulating policy-level thinking – they are not intended to be a strict rulebook.

Figure 8. Process for AI Governance (Source: Authors). The figure depicts an iterative cycle with the following steps:

1. Define policy objectives: promoting trust in AI; fostering digital inclusion; protecting fundamental rights; encouraging local innovation.

2. Assign priorities: choose priority policy objectives, taking into account citizen feedback, broader stakeholder consultation, and international legal obligations (including regarding human rights).

3. Assess AI ecosystem maturity: digital/data infrastructure; number and size of market players; human capital; research ecosystem.

4. Assess legal framework: existing legal frameworks; existing regulatory bodies and capacity.

5. Evaluate public resources: take stock of, and allocate, public expenditures for creating new frameworks and public bodies and for modernizing existing regulatory frameworks and bodies.

6. Identify risks: consider AI risks throughout the AI lifecycle, from the AI infrastructure and model supply chain to data collection, processing, model training, system design, and deployment.

7. Select regulatory approaches: private/national ethical principles; international agreements; technical standards; regulatory sandboxes; a new horizontal AI law; updating or applying existing law; targeted/sectoral law.

8. Consult private sector: ensure the private sector is consulted early and often to maximize awareness of new governance frameworks and encourage compliance.

9. Consult citizens and CSOs: ensure that citizens (particularly vulnerable groups and affected persons) and CSOs are consulted so that governance frameworks adequately reflect public concerns.

10. Implement and monitor: implement chosen regulatory approaches; monitor and evaluate outcomes in relation to policy objectives.

11. Coordinate internationally: consider coordination with international partners to harmonize key frameworks, share good practices, and combat cross-border harms.
7.1. Key Considerations

This section is intended to provide high-level guidance for policymakers, outlining five key areas for consideration. It is important to note that there is no single 'one-size-fits-all' regulatory intervention to address AI risks – regulators should ensure that a combination of the regulatory tools outlined above is adapted to their local country context, consumer needs, and policy priorities. When selecting the correct regulatory and governance interventions for their consumers' needs and country context, policymakers need to assess at a minimum the following factors:

Table 6. Five key considerations before adopting a regulatory approach

Factor: Policy priorities and local context
Description: A country's approach to AI governance will ultimately depend on its policy priorities. These may include promoting trust, protecting fundamental rights of users, increasing digital inclusion, fostering local innovation ecosystems, increasing market competition, or attracting investment.

Factor: Maturity of AI ecosystem
Description: Regulatory approaches vary based on the maturity of the local AI ecosystem (including availability of infrastructure and human expertise). In regions with limited AI development, priorities may include promoting good data governance and introducing baseline governance requirements. In more advanced regions, early binding measures to prevent AI risks and harmonization with global standards may be necessary.

Factor: Legal framework and regulatory environment
Description: Policymakers need to assess existing legal frameworks to determine which regulatory tools can be implemented immediately and which may require legislative reform. This includes considering AI-specific provisions in data protection, cybercrime, competition, and human rights laws and dispute resolution mechanisms, among others.

Factor: Public resources and capacity
Description: Policymakers must consider available public resources when designing AI policy interventions. Binding measures require significant investment and skilled staff but create lasting oversight mechanisms. In contrast, self-governance and soft law approaches need less investment but may require some centralized public monitoring.

Factor: Stakeholder ecosystem
Description: Effective AI governance requires a comprehensive stakeholder ecosystem, including government bodies, industry participants, academia, civil society, and consumers. Engaging these stakeholders ensures that AI policies are well-rounded, addressing diverse concerns and leveraging collective expertise. Market trust and active participation are crucial for the success of regulatory frameworks.

A country's policy priorities are shaped by the unique socio-economic, political, and technological contexts of each country. For instance, a country with a robust tech sector may prioritize fostering innovation and market competition, while a developing country might focus more on digital inclusion and protecting user rights. Moreover, policymakers should ensure that these priorities are grounded in the needs of their consumers. Consumer needs can vary significantly across different demographics and regions, and it is crucial for governance frameworks to reflect these diverse perspectives. Policy priorities have not been included in the table below, as they are specific to each country.
The remaining four considerations are included in the analytical matrix to support policymakers' decision-making. This matrix should be read in conjunction with the Dimensions for AI Governance in Section 5 and the tradeoffs discussed in Section 3. Note that this matrix is not intended to be exhaustive.

Table 7. Governance Tradeoffs for AI Governance

Industry self-governance

Tool: Private ethical codes and councils
Maturity of AI ecosystem: Suitable for robust industries with established players.
Legal framework: No legal framework required. Public sector can encourage adoption.
Public resources and capacity: Few public sector resources needed for monitoring and encouragement.
Stakeholder ecosystem: Requires trust from industry actors. Limited public input.

Soft Law

Tool: Non-binding international agreements
Maturity of AI ecosystem: Suitable for varying market sizes and capacities; global harmonizing effect.
Legal framework: No legal framework required at first, but can eventually lead to the creation of national laws.
Public resources and capacity: Minimal resources to accede; increases if translated into policy.
Stakeholder ecosystem: Requires cooperation between international and national stakeholders.

Tool: National AI principles / ethics frameworks
Maturity of AI ecosystem: Suitable for all market sizes; flexible and adaptive.
Legal framework: No explicit legal framework required; new public bodies may be needed.
Public resources and capacity: Varies; can become resource-intensive if new institutions are needed.
Stakeholder ecosystem: Requires broad stakeholder engagement for effective implementation.

Tool: Technical standards
Maturity of AI ecosystem: Suitable for mature markets with technical capacity.
Legal framework: No legal framework required; can interact with future frameworks.
Public resources and capacity: Varies based on the standard-setting organization and modes of participation. Potentially resource-intensive if setting up a new national standard-setting body.
Stakeholder ecosystem: Multi-stakeholder involvement; risk of dominant player influence.

Regulatory sandboxes

Tool: Regulatory sandboxes
Maturity of AI ecosystem: Relevant for more developed AI markets with active players.
Legal framework: Requires legal powers to amend regulatory obligations as needed.
Public resources and capacity: Requires substantial resources for establishment, maintenance and monitoring.
Stakeholder ecosystem: High trust needed from market; regulator decisions are discretionary.

Hard Law

Tool: New horizontal AI law
Maturity of AI ecosystem: Relevant for markets with a clear gap in the regulatory environment. An 'asymmetric' regulatory approach can be adopted, placing a greater burden on larger or more systemically risky market players.
Legal framework: Requires a new legal framework.
Public resources and capacity: Requires substantial resources to design, implement and oversee the new legal and regulatory framework.
Stakeholder ecosystem: Stakeholder buy-in critical; public consultations often required; coordination between different ministries is essential.

Tool: Update or apply existing laws
Maturity of AI ecosystem: Most relevant for markets with established actors familiar with existing compliance regimes.
Legal framework: Existing, robust legal frameworks required. Some legislative intervention (at primary or secondary levels) may be needed to modify the scope of existing legal regimes or empower existing regulators.
Public resources and capacity: Resources required to modernize regulatory frameworks and increase AI-specific capacity within each regulatory body.
Stakeholder ecosystem: Requires familiarization from regulated entities; coordination between different agencies and public trust needed.
Tool: Targeted technical or sectoral approaches
Maturity of AI ecosystem: Suitable for markets with clear, specific use cases.
Legal framework: New legal frameworks required.
Public resources and capacity: Resource requirements will vary greatly depending on the scope and regulatory burdens contemplated by the new legal framework.
Stakeholder ecosystem: Requires engagement with specific sectors; high market trust.

Source: Authors

7.2. Looking to the Future

In conclusion, the rapidly evolving landscape of AI presents unique challenges and opportunities for policymakers worldwide. As AI technologies become increasingly integral to various sectors, it is imperative to establish robust regulatory frameworks that can both harness the benefits of AI and mitigate its potential risks. The diverse regulatory tools explored in this paper—including industry self-governance, soft law, national AI principles, technical standards, regulatory sandboxes, and hard law approaches—each offer distinct advantages and limitations. Policymakers must carefully consider the maturity of their AI ecosystem, the existing legal and regulatory environment, public resources, and stakeholder ecosystems when selecting the most appropriate regulatory mechanisms.

Effective AI governance requires a dynamic and flexible approach, allowing for continuous adaptation to new technological developments and societal needs. The proposed framework provides a structured yet agile process for policymakers to follow, emphasizing the importance of defining clear policy objectives, prioritizing actions based on consumer needs and local contexts, and engaging with a wide range of stakeholders.
By fostering collaboration between the public and private sectors, civil society, and international partners, policymakers can ensure that AI governance frameworks are comprehensive, inclusive, and capable of promoting trust, innovation, and ethical standards in AI deployment.

However, the need for governance is underscored by several critical risks associated with AI, though these risks are not exhaustive. AI systems can perpetuate bias and discrimination due to unrepresentative datasets and a lack of transparency in algorithms. The adoption of AI technologies may lead to significant labor market disruption, resulting in job losses and a widening digital divide. Additionally, AI can be misused for spreading misinformation, creating deepfakes, conducting cybercrime, interfering with elections, and facilitating fraud, all of which erode trust in media and news. The environmental impacts of AI are also concerning, as AI systems, particularly those involving large-scale data processing, consume significant amounts of energy, contributing to environmental degradation and increased carbon emissions. Furthermore, cybersecurity vulnerabilities in AI systems and applications are significant due to their complexity and multiple points of vulnerability, highlighting the urgent need for robust regulatory frameworks.

Ultimately, the goal of AI governance should be to create a balanced and forward-looking framework that protects fundamental rights, promotes digital inclusion, and drives sustainable innovation. As AI continues to transform our world, it is crucial that regulatory approaches are not only effective but also equitable and responsive to the diverse needs of all stakeholders. Specific policy recommendations should be carefully considered and tailored to the context of each country. Upcoming papers on prerequisites for AI and AI toolkits for creating strategies will further support policymakers in their efforts to build effective AI governance frameworks. By leveraging the insights and tools discussed in this paper as a starting point for broader policy discussion and stakeholder consultation, policymakers can navigate the complexities of AI governance and build a foundation for a safer, more inclusive, and innovative future.

GLOSSARY

AI (Artificial Intelligence): A branch of computer science focused on creating systems capable of performing tasks that typically require human intelligence, such as decision-making, visual perception, speech recognition, and language translation.

AI Ethics: The study of the ethical and moral implications of AI, focusing on ensuring that AI technologies are developed and used in ways that are fair, transparent, and accountable.

AI Governance: The framework of laws, rules, practices, and processes used to ensure AI technologies are developed and used responsibly.

AI Regulation: Binding legal and regulatory frameworks enacted to influence AI development and deployment.

AI Systems: A machine-based system that infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

Algorithmic Bias: The systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others.

Compute Capacity: The ability to store, process, and transfer data at scale, which is crucial for training and deploying AI models and applications.

Cybersecurity: The practice of protecting systems, networks, and programs from digital attacks.

Data Governance: The management of data availability, usability, integrity, and security in an enterprise or organization. This includes data privacy and cybersecurity laws and regulations.

Data Privacy: The right of individuals to control how their personal information is collected and used.

Deep Learning: A subset of ML involving neural networks with many layers (hence 'deep') that can learn from large amounts of data.

Deepfakes: Synthetic media where a person in an existing image or video is replaced with someone else's likeness using AI.

Digital Inclusion: Efforts to ensure that all individuals and communities, including the most disadvantaged, have access to and can use information and communication technologies.

Digital Literacy: The ability to use information and communication technologies to find, evaluate, create, and communicate information, requiring both cognitive and technical skills.

Digital Public Infrastructure (DPI): Digital platforms for identity, payments, and data sharing that are foundational for accessing services and boosting digital inclusion.

Environmental Impact of AI: The effects that the development, training, and deployment of AI systems have on the environment, including energy consumption and carbon emissions.

Generative AI: A type of AI that can generate new content, such as text, audio, images, and video, based on the data it has been trained on. Examples include OpenAI's GPT series and Meta's Llama.

Generative AI Models: AI systems that can create new content. Examples include OpenAI's GPT series, Anthropic's Claude, Google DeepMind's Gemini, and Meta's Llama.

High-Quality Data: Data that is accurate, complete, reliable, relevant, and timely, essential for effective AI training and deployment.

Human Capital: The skills, knowledge, and experience possessed by an individual or population, viewed in terms of their value or cost to an organization or country.

Intellectual Property (IP): Legal rights that result from intellectual activity in the industrial, scientific, literary, and artistic fields. In AI, this includes patents, copyrights, and trademarks related to AI technologies.
Machine Learning (ML): A subset of AI that involves the use of algorithms and statistical models to enable computers to improve their performance on a specific task with experience.

Narrow AI: Also known as traditional AI, designed to perform a specific task such as facial recognition or fraud detection. It operates based on explicit programming and rules.

OECD AI Principles: Guidelines set by the Organisation for Economic Co-operation and Development to promote innovative and trustworthy AI that respects human rights and democratic values.

Public-Private Partnership (PPP): A cooperative arrangement between one or more public and private sectors, typically of a long-term nature.

Regulatory Sandbox: A framework set up by a regulator that allows small-scale, live testing of innovations under a regulator's oversight.

Regulatory Trade-offs: The balancing of benefits and risks when creating regulations, ensuring innovation is not stifled while protecting the public interest.

Sustainable AI: AI that is developed and deployed in a manner that is environmentally, economically, and socially sustainable.

ANNEX: SAMPLE COUNTRY APPROACHES TO AI GOVERNANCE

This Annex is intended to provide a high-level snapshot of how certain countries have built AI governance frameworks by combining the regulatory tools outlined in Section 4.

Brazil

Soft Law / Regulatory Sandbox

In October 2023, Brazil's data protection authority (the ANPD) launched a regulatory sandbox pilot program for AI and data protection – this included a public consultation process with the public and private sectors.218 Although the impact of the Brazilian regulatory sandbox on the local innovation ecosystem and regulatory compliance is not yet clear due to its early stage, it is notable for a broad multi-stakeholder approach, seeking to coordinate action between regulators, regulated entities, technology companies, academics and civil society organizations.219

Brazil has endorsed both the OECD and G20 AI Principles and has referenced the OECD Principles as guidance for developing its own national AI strategy. Brazil has also joined the Global Partnership on AI.

Brazil has also endorsed the UNESCO Recommendation on the Ethics of AI. Brazil was one of the first countries to complete the UNESCO Readiness Assessment Methodology.220 Brazil is also a signatory to the 2023 Santiago Declaration to Promote Ethical Artificial Intelligence,221 which reflects the UNESCO Recommendation.

As a member of the Latin American Centre for Development Administration, Brazil approved the Ibero-American Charter on Artificial Intelligence in Civil Service in November 2023.222 The Charter is a non-binding roadmap of best practices for states to guide the implementation of AI in public administration, and emphasizes de-biasing AI systems, improving transparency, protecting fundamental rights, and improving public trust in AI. The Charter suggests the creation of a domestic public registry of algorithms used in the public sector, and the establishment of public oversight, audit, and risk assessment mechanisms.

Hard Law

Brazil's Bill 2.338/2023 proposes a risk- and rights-based approach to AI governance.
It proposes classifying AI systems into three levels of risk: (i) excessive risk, in which the use is prohibited; (ii) high risk; and (iii) non-high risk. AI systems should pass a preliminary self-assessment analysis conducted by the AI provider to classify their risk level (an illustrative sketch of this tiered structure follows below). Every AI system must implement a governance structure involving transparency, data governance and security measures.223 In addition, high-risk AI systems must include technical documentation, log registers, reliability tests, technical explainability measures and measures to mitigate discriminatory biases.224

The Bill proposes individual rights, such as the right to explanation about decisions, non-discrimination and correction of discriminatory biases, and the right to privacy and protection of personal data.225 The Bill also includes rules for civil liability, codes of best practice, notification of AI incidents, copyright exceptions for data mining processing, and the fostering of regulatory sandboxes. Furthermore, it proposes the creation of an open public database of high-risk AI systems that contains public documentation of algorithmic impact assessments. The Executive Branch is tasked to designate a supervisory authority to regulate and enforce legislation regarding Brazil's National AI Strategy (EBIA).

218 ANPD's Call for Contributions to the regulatory sandbox for artificial intelligence and data protection in Brazil is now open, GOV.BR (Oct. 3, 2023), https://www.gov.br/anpd/pt-br/assuntos/noticias/anpds-call-for-contributions-to-the-regulatory-sandbox-for-artificial-intelligence-and-data-protection-in-brazil-is-now-open.
219 Brazil: ANPD opens AI regulation sandbox for public consultation, ONETRUST DATAGUIDANCE (Oct. 4, 2023), https://www.dataguidance.com/news/brazil-anpd-opens-ai-regulation-sandbox-public.
220 https://www.unesco.org/ethics-ai/en/brazil
221 https://minciencia.gob.cl/uploads/filer_public/40/2a/402a35a0-1222-4dab-b090-5c81bbf34237/declaracion_de_santiago.pdf.
222 https://clad.org/wp-content/uploads/2024/03/CIIA-EN-03-2024.pdf.
223 Id.
224 https://accesspartnership.com/access-alert-brazils-new-ai-bill-a-comprehensive-framework-for-ethical-and-responsible-use-of-ai-systems/; https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-brazil
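Purely as an illustration of the Bill's tiered structure as summarized above, the sketch below models the provider-side self-assessment and the obligations attached to each tier. The tier names follow the summary; the example use cases, helper names, and obligation lists are hypothetical placeholders and do not reproduce the Bill's actual text or annexes.

```python
# Purely illustrative sketch of the three-tier, self-assessment structure that
# Bill 2.338/2023 is described as proposing. The example use cases and
# obligation lists are hypothetical, not the Bill's actual provisions.
from enum import Enum

class RiskTier(Enum):
    EXCESSIVE = "excessive (prohibited)"
    HIGH = "high"
    NON_HIGH = "non-high"

BASELINE = ["transparency", "data governance", "security measures"]
HIGH_RISK_EXTRAS = ["technical documentation", "log registers",
                    "reliability tests", "explainability measures",
                    "bias mitigation measures"]

def self_assess(use_case: str) -> RiskTier:
    """Provider-side preliminary self-assessment (hypothetical categories)."""
    if use_case in {"social scoring"}:                    # placeholder example
        return RiskTier.EXCESSIVE
    if use_case in {"credit scoring", "medical triage"}:  # placeholder examples
        return RiskTier.HIGH
    return RiskTier.NON_HIGH

def obligations(tier: RiskTier) -> list[str]:
    """Map a tier to its governance obligations, per the summary above."""
    if tier is RiskTier.EXCESSIVE:
        return ["deployment prohibited"]
    return BASELINE + (HIGH_RISK_EXTRAS if tier is RiskTier.HIGH else [])

if __name__ == "__main__":
    for case in ["credit scoring", "spam filtering", "social scoring"]:
        tier = self_assess(case)
        print(case, "->", tier.value, "|", obligations(tier))
```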
Brazil's national data protection authority, the ANPD, has also adopted Resolution CD/ANPD No. 10 on strengthening data protection and oversight of AI applications.226 As a member of the Ibero-American Network for the Protection of Personal Data, the ANPD has also endorsed the region-wide General Recommendations for the Processing of Personal Data in Artificial Intelligence.227

The ANPD has been active in taking regulatory action against AI developers. For example, in July 2024 the ANPD took regulatory action to suspend Meta's latest privacy policy, preventing it from using Brazilians' Instagram and Facebook posts to train its AI models.228

218 ANPD's Call for Contributions to the regulatory sandbox for artificial intelligence and data protection in Brazil is now open, GOV.BR (Oct. 3, 2023), https://www.gov.br/anpd/pt-br/assuntos/noticias/anpds-call-for-contributions-to-the-regulatory-sandbox-for-artificial-intelligence-and-data-protection-in-brazil-is-now-open
219 Brazil: ANPD opens AI regulation sandbox for public consultation, ONETRUST DATAGUIDANCE (Oct. 4, 2023), https://www.dataguidance.com/news/brazil-anpd-opens-ai-regulation-sandbox-public
220 https://www.unesco.org/ethics-ai/en/brazil
221 https://minciencia.gob.cl/uploads/filer_public/40/2a/402a35a0-1222-4dab-b090-5c81bbf34237/declaracion_de_santiago.pdf
222 https://clad.org/wp-content/uploads/2024/03/CIIA-EN-03-2024.pdf
223 Id.
224 https://accesspartnership.com/access-alert-brazils-new-ai-bill-a-comprehensive-framework-for-ethical-and-responsible-use-of-ai-systems/; https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-brazil
225 https://accesspartnership.com/access-alert-brazils-new-ai-bill-a-comprehensive-framework-for-ethical-and-responsible-use-of-ai-systems/
226 https://www.gov.br/anpd/pt-br/documentos-e-publicacoes/documentos-depublicacoes/nota-tecnica-no-19-2023-fis-cgf-anpd.pdf
227 https://www.redipd.org/sites/default/files/2020-02/guide-generalrecommendations-processing-personal-data-ai.pdf
228 https://www.bbc.com/news/articles/c7291l3nvwvo

United States

Soft Law / Regulatory Sandbox

The National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) provides guidance for risk mitigation across the value chain.229 NIST convenes multistakeholder experts to develop guidance for generative AI and on safety concerns such as synthetic content, capability evaluations, red-teaming of AI systems, and biosecurity and cybersecurity risks for foundation models.230

NIST also houses the US AI Safety Institute, a consortium of over 200 leading AI stakeholders, including AI creators and users, academics, government and industry researchers, and civil society organizations, which aims to advance the development of safe, trustworthy AI. The AI Safety Institute contributes to the priority actions outlined in the administration's Executive Order, including developing guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content.231

In 2022 the Office of Science and Technology Policy (OSTP) of the White House produced a 'Blueprint for an AI Bill of Rights' suggesting fundamental principles to guide and govern the efficient development and implementation of AI systems. These include the following:

1. Safe and effective systems: Users should be protected from unsafe or ineffective systems.
2. Algorithmic discrimination protections: Users should not be exposed to discrimination by algorithms; automated decision-making systems should be used and designed equitably.
3. Data privacy: Users should be protected from abusive data practices via built-in protections and have agency over how their data is used.
4. Notice and explanation: Users must be informed that an automated system is being used and understand how and why it contributes to outcomes that impact them.
5. Alternative options: Users should have the right to opt out, where appropriate, and have access to a person who can quickly consider and remedy their problems.232

229 https://www.nist.gov/itl/ai-risk-management-framework
230 https://www.nist.gov/artificial-intelligence/artificial-intelligence-safety-institute/aisic-working-groups
231 https://www.commerce.gov/news/press-releases/2024/02/biden-harris-administration-announces-first-ever-consortium-dedicated
232 https://www.whitehouse.gov/ostp/ai-bill-of-rights/

Hard Law
The United States does not have any comprehensive federal legislation on AI, nor a single national AI governance strategy; its emerging AI governance regime is composed of various pieces of state-level legislation, along with national principles, guidance, and policies.

In October 2023 the U.S. President signed an Executive Order directing federal agencies to update their mandates for ensuring safe, secure, and trustworthy AI, on topics ranging from biosecurity and cybersecurity to discrimination and international development.233 Accordingly, agencies have been making progress in 2024 toward meeting their objectives.234 For example, the National Telecommunications and Information Administration (NTIA), housed within the Department of Commerce, has published the Artificial Intelligence Accountability Policy Report, suggesting independent audits and certifications, funding for red-teaming and evaluations, the application of liability laws, transparency disclosures and reporting of incidents and of information about models and their training, and consequences for imposing unacceptable risks or making unfounded claims.235 This report and NTIA's forthcoming guidance on open source AI risk mitigation were informed by public requests for comment or information.

Several US sectoral regulators have begun clarifying the scope of their regulatory authority over AI. The Federal Trade Commission (FTC) has clarified that it will exercise its powers against 'unfair and deceptive practices', fraud, scams,236 and deception, including impersonations generated by AI.237 It has issued a resolution authorizing civil investigative demands into AI products.238 The U.S. Securities and Exchange Commission has indicated that it will address AI and predictive data analytics in finance and investing.239 The Consumer Financial Protection Bureau (CFPB) is clarifying how existing federal anti-discrimination law applies to algorithmic systems used for lending decisions240 and has published a report on the risks and use of chatbots in consumer finance.241 The Equal Employment Opportunity Commission (EEOC) has provided guidance on how anti-discrimination laws apply to algorithm-based hiring.242 In addition, the proposed Algorithmic Accountability Act of 2022 would direct the FTC to develop impact assessments of automated ML decision-making processes.243

The US has also sought to govern AI through the imposition of a range of more targeted governance measures. For example, the US Department of Commerce, Bureau of Industry and Security (BIS) has introduced a range of export control measures aimed at restricting the export of advanced semiconductors and other related equipment to China and other countries.244
233 https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
234 https://www.whitehouse.gov/briefing-room/statements-releases/2024/03/28/fact-sheet-vice-president-harris-announces-omb-policy-to-advance-governance-innovation-and-risk-management-in-federal-agencies-use-of-artificial-intelligence/
235 https://www.ntia.gov/sites/default/files/publications/ntia_ai_report_final-3-27-24.pdf
236 https://www.ftc.gov/business-guidance/blog/2023/02/keep-your-ai-claims-check
237 https://www.ftc.gov/news-events/news/press-releases/2024/02/ftc-proposes-new-protections-combat-ai-impersonation-individuals
238 https://www.ftc.gov/news-events/news/press-releases/2023/11/ftc-authorizes-compulsory-process-ai-related-products-services
239 https://www.sec.gov/news/testimony/gensler-testimony-house-financial-services-041823
240 https://www.consumerfinance.gov/about-us/newsroom/cfpb-acts-to-protect-the-public-from-black-box-credit-models-using-complex-algorithms/
241 https://www.consumerfinance.gov/data-research/research-reports/chatbots-in-consumer-finance/chatbots-in-consumer-finance/
242 https://www.eeoc.gov/laws/guidance/select-issues-assessing-adverse-impact-software-algorithms-and-artificial
243 https://www.congress.gov/bill/117th-congress/house-bill/6580/text
244 https://www.nortonrosefulbright.com/en/knowledge/publications/5a936192/us-expands-export-restrictions-on-advanced-semiconductors

China

Soft Law / Regulatory Sandbox

National standard-setting bodies have begun to play a key role in formulating standards to facilitate the implementation of the legal frameworks outlined above. For example, the National Information Security Standardisation Technical Committee of China ('TC260') released TC260-003, Basic security requirements for generative artificial intelligence service,245 which provides companies with practical guidance on complying with the 2023 Generative AI Measures' requirements regarding training data security, model security, internal governance measures, and the conducting of security assessments.

In a keynote speech at the Opening Ceremony of the third Belt and Road Forum for International Cooperation, China also recently announced that it will launch a 'Global AI Governance Initiative'.
The official government statement notes that China supports discussions within the UN framework to establish 'an international institution to govern AI, and to coordinate efforts to address major issues concerning international AI development, security and governance.' While the details of the initiative are not yet clear, the press release issued by the government provides that the focus will be on China's proposals regarding the development, security, and governance of AI.

Hard Law

China was one of the first jurisdictions to introduce any form of legislation specifically governing AI; it currently has separate laws regulating recommendation algorithms, 'deep synthesis' technologies (a subset of generative AI technologies that includes deepfakes and digital simulation models), and generative AI services.246 Chinese policymakers have also indicated that they will seek to formulate a general, horizontal AI law in the coming years.247

Law firm Bird & Bird has identified three main pillars of China's overall AI governance regime:248

1. Content moderation: The first pillar of China's AI regulatory regime concerns the governance and management of online content. With respect to AI-generated content (such as the output text of an LLM), regulators will prioritize traceability and authenticity of the content to restrict the circulation of information that would violate well-established information services regulations.

2. Data protection: Data protection is governed by the 2021 Personal Information Protection Law (PIPL), which aims to ensure that personal data processing does not harm users or otherwise undermine public order. The PIPL enshrines key principles including lawfulness of processing, transparency, sincerity, and accountability.

3. Algorithmic governance: Security assessments play a key role in Chinese AI regulation; administered by the CAC, these assessments involve complex filing procedures and require the listing of in-scope algorithms on an online registry (particularly for services with 'public opinion attributes or social mobilization capabilities'). Chinese AI regulation also seeks to ensure that AI services reflect 'public order and morality'; for example, the CAC prohibits the use of AI to generate any discriminatory content or decision based on race, ethnicity, beliefs, nationality, region, gender, age, occupation, or health.249 In addition, AI services that generate human-like content (whether textual, visual, or auditory) must present clear, specific, and actionable annotation rules and make clear that content has been generated with the use of AI.250 The Generative AI Measures directly regulate model training practices by requiring service providers to use data and models from legitimate sources, respect intellectual property rights and personal information, and strive to improve the quality, authenticity, accuracy, objectivity, and diversity of the training data they utilize.251

China's AI governance measures are formulated to promote China's specific policy interests and national priorities. For example, the Generative AI Measures require generative AI service providers to uphold 'socialist core values' and prohibit the generation of certain types of content, such as content that incites 'subversion of the state power or the overthrow of the socialist system, endangers national security and interests, damages the national image, incites splitting the country, undermines national unity and social stability, advocates terrorism, extremism, ethnic hatred and discrimination, violence, pornography, and false and harmful information'.252

In addition to the use-case focused regulatory frameworks outlined above, regional regulations have been used in China to promote local AI development and to create local-level experiments to attract AI investment. Enacted in 2022, the Shanghai Regulations on Promoting the Development of AI Industry 2022 and the Shenzhen Special Economic Zone Regulations on AI Industry Promotion 2022 both call for the creation of AI Ethics Committees to oversee AI development, conduct audits and assessments, and promote industrial parks where input and training data may be traded easily and lawfully.253

245 https://www.tc260.org.cn/upload/2024-03-01/1709282398070082466.pdf
246 https://www.lw.com/admin/upload/SiteAttachments/Chinas-New-AI-Regulations.pdf
247 https://www.gov.cn/zhengce/content/202306/content_6884925.htm
248 https://www.twobirds.com/en/insights/2024/china/ai-governance-in-china-strategies-initiatives-and-key-considerations
249 Article 4(2), 2023 Generative AI Measures.
250 Articles 8 and 12, 2023 Generative AI Measures.
251 Article 7, 2023 Generative AI Measures.
252 Article 4(1), 2023 Generative AI Measures.
253 https://www.twobirds.com/en/insights/2024/china/ai-governance-in-china-strategies-initiatives-and-key-considerations

United Kingdom

Soft Law / Regulatory Sandbox

On July 18, 2022, the United Kingdom government introduced its cross-sector plan for AI regulation, which features a 'pro-innovation' framework. Its non-statutory principles apply broadly and are supplemented by 'context-specific' regulatory guidance and voluntary standards developed by UK regulators.
The UK is moving towards a light-touch, risk-based, context-specific approach focused on proportionality, with practical requirements determined by the industry and dependent on the AI system's deployment context.254 The Alan Turing Institute, as the national institute for data science and AI, plays a pivotal role in AI research and ethics. In February 2024, the UK Department for Science, Innovation and Technology updated its 'A pro-innovation approach to AI regulation' after a public consultation.255

Outside of the 'pro-innovation' regulatory framework, the UK government has also adopted the following policy tools:

• The government has announced a Foundation Model Taskforce, which has been allocated £100 million in funding and will focus on accelerating the UK's capability to develop 'safe and reliable' foundation models.
• The UK AI Safety Institute (discussed at box 21 above).
• The UK has developed an AI Standards Hub to share knowledge, capacity, and research on AI standards.256
• The UK hosted the highly publicized AI Safety Summit on 1-2 November 2023. The Summit was attended by a number of countries, as well as companies and a small selection of civil society organizations. The Bletchley Declaration by the countries in attendance refers to ensuring wider international cooperation on AI and sustaining an inclusive global dialogue that engages existing international fora and other relevant initiatives and contributes in an open manner to broader international discussions.
Hard Law

The UK has stated that it will harness its existing regulators through cross-sector legislation rather than setting up a new regulator. At the same time, the UK has recognized that a patchwork approach to regulation with little central coordination or oversight may in fact create barriers to innovation due to a lack of coherence and clarity in regulatory obligations. As such, the UK has committed to creating a set of centralized mechanisms to ensure that the sectoral approach to AI regulation can be monitored and adapted, as well as to facilitate a single point of collaboration for all interested parties (international partners, industry, civil society, academia, and the public) (see figure 7 above). One coordination mechanism is the Digital Regulation Cooperation Forum (DRCF) Multi-Agency Advisory Service, a pilot scheme that will see a number of regulators develop a multi-agency advice service providing tailored support to businesses using AI and digital innovations so they can meet requirements across various sectors.257

Sectoral regulators in the UK have already begun to clarify the scope of their mandates as they relate to AI:

• The UK Competition and Markets Authority (CMA) published an Initial Report on AI Foundation Models in September 2023, which was supplemented by an 'Update Paper' published in April 2024. These reports examine the CMA's understanding of AI risks, how the CMA's competition and consumer remit applies to those risks, forthcoming changes to the CMA's powers, and the CMA's AI capabilities.258 The reports were published after broad stakeholder consultation with consumer groups, civil society, leading AI developers and deployers, academics, and other regulators.259

• In April 2024 the UK data protection regulator, the Information Commissioner's Office (ICO), published its strategic approach to regulating AI, which sets out how the ICO is driving forward the principles set out in the UK government's AI regulation white paper.260 Although the UK government has not appointed a separate AI regulator, the ICO notes that many of the principles identified in the white paper align with established data protection principles, meaning that the ICO may eventually become a de facto AI regulator.

At the same time, discussions on introducing a formal regulatory framework for AI in the UK have gained momentum. In November 2023, the Artificial Intelligence (Regulation) Bill was introduced in the House of Lords.261 The Bill proposes a central AI authority that would ensure alignment of approach across different regulators, ensure that relevant regulators take account of AI, monitor the effectiveness of the UK AI framework, and collaborate with regulators to construct regulatory sandboxes for AI. The Bill also requires AI developers to comply with requirements regarding transparency, IP rights, and the labelling of AI outputs, and establishes the role of 'AI responsible officers' for businesses that develop, deploy, or use AI.

254 UK Government (2022a).
255 https://www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals/outcome/a-pro-innovation-approach-to-ai-regulation-government-response#a-regulatory-framework-to-keep-pace-with-a-rapidly-advancing-technology
256 https://aistandardshub.org/
257 https://www.drcf.org.uk/home
258 https://www.gov.uk/government/publications/ai-foundation-models-initial-report; https://www.gov.uk/government/publications/cma-ai-strategic-update/cma-ai-strategic-update
259 Id.
260 https://www.skadden.com/-/media/files/publications/2024/05/the-uk-ico-publishes-its-strategy-on-ai-governance/regulating-ai-the-icos-strategic-approach.pdf?rev=7752a638c485400fb9e1e84dbe077ab6&hash=16CEEC36DAF5CD2A68D4F7A14F30A401
261 https://www.engage.hoganlovells.com/knowledgeservices/news/new-uk-regulation-bill-potential-step-forward-to-the-statutory-regulation-of-ai-systems-in-the-uk
India’s National Strategy for AI (published The Indian Ministry of Electronics & in June 2018) identified a lack of formal Information Technology has established four regulation around data as a key barrier committees on AI, which have published for large scale adoption of AI.262 several reports on security, safety, legal and ethical issues relating to AI.264 The Principles for Responsible AI (adopted in February 2021) serve as India’s India is a member of the Global Partnership roadmap for creating a responsible AI on Artificial Intelligence (GPAI). The 2023 ecosystem across sectors. It identifies GPAI Summit was recently held in New Delhi, the following relevant principles: where GPAI experts presented their work on responsible AI, data governance, and the future 1. The principle of safety and reliability of work, innovation, and commercialization.265 2. The principle of equality The Bureau of Indian Standards, the 3. The principle of inclusivity national standards body of India, has and non-discrimination established a committee on AI that is 4. The principle of privacy and security proposing draft Indian standards for AI.266 5. The principle of transparency India is a party to the OECD’s AI principles and has adopted UNESCO’s 6. The principle of accountability Recommendation on the Ethics of AI.267 7. The principle of protection and Hard Law reinforcement of positive human values India does not currently have The Operationalizing Principles for Responsible any horizontal AI law. AI (August 2021) identifies actions that need However, the proposed Digital India Act, to be taken by both the government and the replacing the IT Act of 2000, may regulate AI private sector, in partnership with research systems. Although the Act is primarily intended institutes, to cover regulatory and policy to be a form of internet platform regulation, interventions, capacity building, incentivizing the proposed Act intends to regulate high-risk ethics by design, and creating frameworks systems through ‘legal, institutional quality for compliance with relevant AI standards. testing framework to examine regulatory Indian sectoral regulators have issued models, algorithmic accountability, zero-day guidance on the regulation of AI.263 threat & vulnerability assessment, examine AI 1. In the finance sector, the Securities and based ad-targeting, content moderation etc.’268 Exchange Board of India issued a circular India also recently concluded its first data in January 2019 on reporting requirements protection law, the Digital Personal Data for AI and machine learning applications Protection Act 2023 – however, as of the time of and systems offered and used. publication, the law has yet to come into force.269 261 https://www.engage.hoganlovells.com/knowledgeservices/news/new-uk-regulation-bill-potential-step-forward-to-the- statutory-regulation-of-ai-systems-in-the-uk#:~:text=The%20primary%20purpose%20of%20the,the%20regulatory%20 approach%20to%20AI. 
262 https://www.niti.gov.in/sites/default/files/2023-03/National-Strategy-for-Artificial-Intelligence.pdf
263 https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-india
264 https://www.meity.gov.in/artificial-intelligence-committees-reports
265 https://www.morganlewis.com/blogs/sourcingatmorganlewis/2024/01/ai-regulation-in-india-current-state-and-future-perspectives
266 https://www.services.bis.gov.in/php/BIS_2.0/dgdashboard/Published_Standards_new/standards?commttid=Mzg2&commttname=TElURCAzMA%3D%3D&aspect=&doe=&from=2022-07-21&to=2023-07-21
267 https://iapp.org/media/pdf/resource_center/global_ai_law_policy_tracker.pdf
268 https://www.meity.gov.in/writereaddata/files/DIA_Presentation%2009.03.2023%20Final.pdf
269 https://www.dlapiperdataprotection.com/index.html?t=law&c=IN

Nigeria

Soft Law / Regulatory Sandbox

In August 2024 the National Information Technology Development Agency's (NITDA) National Center for Artificial Intelligence and Robotics (NCAIR) published a draft National Artificial Intelligence Strategy (NAIS).270 Two pillars of the strategy that touch on governance issues are highlighted below.

Pillar 4: Ensuring Responsible and Ethical AI Development

Under this pillar, Nigeria aims to:

1. Create a high-level AI ethics expert group / national ethics commission, comprising stakeholders from academia, industry, government, and civil society, to develop and implement ethical AI principles
2. Develop national AI ethical principles that align with critical Nigerian values
3. Develop a comprehensive AI ethics assessment framework
4. Implement legislative reforms to address emerging legal and ethical challenges

Pillar 5: Developing a Robust AI Governance Framework

Under this pillar, Nigeria aims to:

1. Develop national AI principles to guide the development, deployment, and use of AI
2. Establish an AI governance regulatory body to oversee implementation of the national AI principles, ensure compliance with ethical standards, and mediate potential disputes
3. Develop a national AI policy framework that defines governance guidelines and principles for AI systems
4. Develop a national AI risk management framework

The NAIS also identifies the US NIST Framework for AI Risk Management as a valuable tool for guiding the design and deployment of AI systems.

Hard Law

Nigeria has not yet proposed any AI legislation. However, law firm White & Case has identified several existing laws that affect the development or use of AI in Nigeria, including:271

1. The Cybercrimes (Prohibition, Prevention, etc.) Act, 2015
2. The Nigeria Data Protection Act, 2023
3. The Securities and Exchange Commission (SEC) Rules on Robo-Advisory Services
4. The Federal Competition and Consumer Protection Act, 2018
5. The Copyright Act, 2022
6. The Nigerian Communications Commission Act, 2003

In addition, Pillar 4 of the draft National AI Strategy notes that legal reforms may be needed to address particular legal or ethical concerns arising from AI, including 'protecting workers' rights through retraining programs, tailored unemployment benefits, and policies encouraging job sharing and reduced work hours. Additionally, bridging the digital divide requires legislation promoting digital literacy and equitable access to technology'.272
270 https://ncair.nitda.gov.ng/wp-content/uploads/2024/08/National-AI-Strategy_01082024-copy.pdf
271 https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-nigeria
272 https://ncair.nitda.gov.ng/wp-content/uploads/2024/08/National-AI-Strategy_01082024-copy.pdf

Singapore

Soft Law / Regulatory Sandbox

Singapore has developed a range of voluntary governance frameworks for ethical AI deployment. These include the Model AI Governance Framework (2019, updated in 2020), which provides detailed guidance to private sector organizations on addressing key ethical and governance issues when deploying AI solutions.273

In response to the growing adoption of generative AI, the AI Verify Foundation and IMDA published a Model AI Governance Framework for Generative AI in May 2024, emphasizing the need to build a trusted ecosystem for AI, highlighting unique risks that arise from generative AI (e.g. hallucination, copyright infringement, value alignment), and emphasizing the need to balance user protection and innovation.274 The nine dimensions of the framework are:

1. Accountability – allocation of responsibility to players along the AI development chain
2. Data – ensuring the quality of data fed to AI models through the use of trusted data sources
3. Trusted development and deployment – encouraging transparency and disclosure to enhance broader awareness and safety
4. Incident reporting – establishing incident-management structures and processes for timely notification and remediation
5. Testing and assurance – adopting third-party testing against common AI testing standards to demonstrate trust to end-users
6. Security – addressing the risks of new threat vectors being injected through AI models
7. Content provenance – developing technologies to enhance transparency about where and how content is generated
8. Safety and alignment research & development – accelerating investment in research & development to improve model alignment with human intention and values
9. AI for public good – harnessing AI to benefit the public by democratizing AI access, improving public sector adoption, upskilling workers, and developing AI systems sustainably

Singapore's AI governance testing framework and toolkit, 'AI Verify', launched as a pilot in May 2022, validates the performance of AI systems against a set of internationally recognized principles and frameworks through standardized tests. It provides a testing report that serves to inform users, developers, and researchers. The Future of Privacy Forum notes that, 'rather than defining ethical standards, AI Verify provides verifiability by allowing AI system developers and owners to demonstrate their claims about the performance of their AI systems.'275

Singapore has also experimented with sector-specific regulatory sandboxes for AI. In 2017, a five-year regulatory sandbox was created to facilitate the safe development and integration of autonomous vehicles.276

Hard Law

Singapore does not have a horizontal AI law at present. However, it has several sectoral laws applicable to AI, including:277

1. The Road Traffic Act 1961, which was amended in 2017 to allow for the testing and use of autonomous motor vehicles
2. The Health Products Act 2007, which requires medical devices that incorporate AI technology to be registered before they are used

Sectoral regulators have begun issuing non-binding guidance on the use of AI in specific industries.278
The Monetary Authority of Singapore issued the Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore's Financial Sector in 2018 (updated in 2019) to provide a set of foundational principles for firms to consider when using AI in decision-making in the provision of financial products and services. The Ministry of Health, Health Sciences Authority, and Integrated Health Information Systems jointly issued the Artificial Intelligence in Healthcare Guidelines in 2021 to improve understanding, codify good practice, and support the safe growth of AI in healthcare.

One important law is the Personal Data Protection Act. In March 2024, Singapore's Personal Data Protection Commission (PDPC) issued Advisory Guidelines on the Use of Personal Data in AI Recommendation and Decision Systems,279 providing organizations with clarity on the use of personal data at three stages of AI system implementation: (a) development, testing, and monitoring; (b) deployment; and (c) procurement.280

273 https://www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Resource-for-Organisation/AI/SGModelAIGovFramework2.pdf
274 https://aiverifyfoundation.sg/wp-content/uploads/2024/05/Model-AI-Governance-Framework-for-Generative-AI-May-2024-1-1.pdf
275 Josh Lee Kok Thong, AI Verify: Singapore's AI Governance Testing Initiative Explained, FUTURE OF PRIVACY FORUM (June 6, 2023), https://fpf.org/blog/ai-verify-singapores-ai-governance-testing-initiative-explained/
276 https://www.ippapublicpolicy.org/file/paper/5cea683b9a45b.pdf
277 https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-singapore
278 https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-singapore
279 https://www.pdpc.gov.sg/guidelines-and-consultation/2024/02/advisory-guidelines-on-use-of-personal-data-in-ai-recommendation-and-decision-systems
280 https://www.dataprotectionreport.com/2024/03/singapore-releases-new-guidelines-on-the-use-of-personal-data-in-ai-systems/

Rwanda

Soft Law / Regulatory Sandbox

In May 2023 Rwanda adopted its first National Artificial Intelligence Policy. The drafting of the policy was led by Rwanda's Ministry of ICT and Innovation (MINICT) and the Rwanda Utilities Regulatory Authority (RURA), supported by GIZ FAIR Forward and The Future Society as an implementation partner.

The policy identifies six priority policy areas (Figure 8).

Figure 8. Rwanda National AI Policy. Source: https://www.minict.gov.rw/index.php?eID=dumpFile&t=f&f=67550&token=6195a53203e197efa47592f40ff4aaf24579640e

Key actions arising from the policy include the following:

1. Strengthen AI policy and regulation, build the capacity of regulatory authorities, and ensure public trust in AI
2. Operationalize and share Rwanda's 'Guidelines on the Ethical Development and Implementation of AI', led by RURA
3. Actively contribute to shaping responsible AI principles and practices in international platforms
Hard Law

Rwanda does not have any binding AI laws at present. However, there are several existing legal frameworks that could apply to AI systems. These include the following:

1. The Law n°058/2021 of 13/10/2021 relating to the protection of personal data and privacy
2. The Law n°24/2016 of 18/06/2016 governing Information and Communication Technologies in Rwanda
3. The Law nº60/2018 of 22/8/2018 on the prevention and punishment of cyber-crimes

UAE281

Soft Law / Regulatory Sandbox

In 2017, the office of the Minister of State for Artificial Intelligence adopted a National Strategy for Artificial Intelligence 2031.

The UAE has adopted a set of non-binding AI Ethics Principles, aiming to ensure the responsible and ethical use of AI. Key principles include:282

1. Transparency: Ensuring that AI systems and their decision-making processes are understandable and accessible to users.
2. Accountability: Establishing clear lines of responsibility for the development and use of AI systems.
3. Fairness: Mitigating bias in AI systems to ensure equitable treatment of all individuals and groups.

Digital Dubai, a government platform established in 2021, has also released an Ethical AI Toolkit for businesses to use for practical guidance, including a self-assessment tool.283

A range of non-binding national guidelines have been issued in relation to AI, including:284

1. The Deepfake Guide (2021) – sets out information on deepfakes, and provides advice on measures to protect against deepfakes and guidance on how to report deepfakes to the appropriate authorities
2. The AI Ethics Guide (2022) – sets out non-mandatory guidelines with respect to the ethical design and deployment of AI systems in both the public and private sectors
3. The AI Adoption Guideline in Government Services (2023) – aims to create awareness, accelerate AI impact, and provide a continuously updated repository of clear use cases with respect to the deployment of AI in government services
4. The Responsible Metaverse Self-Governance Framework (2023) – a whitepaper that seeks to establish common minimum self-regulatory principles with respect to responsible use of the metaverse
5. The Guidelines for Financial Institutions adopting Enabling Technologies – issued by the financial services regulators in the Financial Free Zones and in Mainland UAE; suggests governance frameworks for a variety of emerging technologies, including 'big data analytics and artificial intelligence'

281 Note that the UAE comprises multiple legal jurisdictions. For the purposes of this Annex, these are categorized as follows: (a) the Financial Free Zones (the Dubai International Financial Centre (DIFC) and the Abu Dhabi Global Market (ADGM)), and (b) the Mainland UAE (the remainder of the UAE outside the Financial Free Zones).
282 https://insight.thomsonreuters.com/mena/legal/posts/how-is-ai-regulated-in-the-uae-what-lawyers-need-to-know
283 https://www.digitaldubai.ae/initiatives/ai-principles-ethics
284 https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-uae
Hard Law

In the Mainland UAE, there is no single law regulating AI. However, several decrees have been passed on discrete issues:

1. In 2018, the Federal Decree Law No. 25 on the Project of Future Nature was issued; it allows the Cabinet to issue interim licenses for innovative projects being rolled out in the UAE that use AI, where there is no existing regulatory framework.285
2. In 2024, Law No. (3) of 2024 Establishing the Artificial Intelligence and Advanced Technology Council (AIATC) was issued, establishing a Council to regulate projects, investments, and research related to artificial intelligence and advanced technology in the emirate of Abu Dhabi.
3. Existing sectoral laws may apply to AI. For example, the UAE Penal Code and UAE Federal Decree-Law No. 34 of 2021 on Combatting Rumors and Cybercrimes, as amended (the 'UAE Cybercrimes Law'), might be applied to criminalize deepfakes, voice theft, or IP infringement by AI.286

In the Financial Free Zones, no horizontal laws have been issued regulating AI. However, amendments have been made to existing data protection legislation that applies in the Dubai International Financial Centre: Article 10 of the Data Protection Regulations imposes certain obligations on deployers and operators of 'autonomous and semi-autonomous systems'.

285 https://uaelegislation.gov.ae/en/legislations/1980
286 https://insightplus.bakermckenzie.com/bm/data-technology/united-arab-emirates-deepfakes-and-the-use-of-artificial-intelligence-ai-legal-issues-and-considerations

Estonia

Soft Law / Regulatory Sandbox

Estonia's new AI Strategy (2022-2023) is a continuation of its first national AI strategy for 2019-2021. It aims to support 'regulat[ing] the development and use of AI in a human-centered and trustworthy way, i.e. in a reliable, ethical, and lawful way that respects fundamental rights, as well as to establish a set of rules on civil liability related to AI.'287 The AI strategy was developed by a cross-sectoral taskforce including representatives from state authorities, the private sector, universities, and sectoral experts.288

The strategy mentions several specific actions to implement human-centric, trustworthy AI. For example, it requests the Ministry of Economic Affairs and Communications (MEAC), the Ministry of Justice (MK), and the Data Protection Inspectorate (DPI) to develop 'requirements and measures to support the development and use of human-centered and reliable AI solutions', and to develop relevant policies to increase public trust and mitigate AI risks. Another suggestion is for Estonia to develop a fundamental rights impact assessment model and guidance materials.

In 2018, the Estonian minister for digital development signed a declaration on 'AI in the Nordic-Baltic region', establishing a collaborative framework for 'developing ethical and transparent guidelines, standards, principles and values to guide when and how AI applications should be used' and 'on the objective that infrastructure, hardware, software and data, all of which are central to the use of AI, are based on standards, enabling interoperability, privacy, security, trust, good usability, and portability.'289

Estonia has endorsed the OECD AI Principles and the UNESCO Recommendation on the Ethics of AI.

Hard Law

Estonia is a member of the EU and therefore the EU AI Act is applicable within its territory (for more on the EU AI Act, see box [x] above). Estonia also contributed to negotiations for the Council of Europe Framework Convention on AI, Human Rights, Democracy and the Rule of Law (discussed at section [x] above).
As an EU member state, Estonia is also subject to the Digital Services Act, which imposes some obligations on online intermediaries and platforms that use AI; for example, it prohibits targeted advertising based on a person's sexual orientation, religion, ethnicity, or political beliefs. Other relevant laws applicable to AI systems include:

1. The GDPR
2. The EU Charter of Fundamental Rights
3. The Council of Europe Convention 108+ for the protection of individuals with regard to the processing of personal data

Estonia is also considering a package of modifications to existing legal frameworks, separate from the EU AI Act.290 These are intended to be targeted at solving specific problems that can be regulated independently of EU action.

287 https://en.kratid.ee/_files/ugd/980182_e319a94450384ca198f027ba84fcbace.pdf
288 https://f98cc689-5814-47ec-86b3-db505a7c3978.filesusr.com/ugd/7df26f_486454c9f32340b28206e140350159cf.pdf
289 https://www.norden.org/en/declaration/ai-nordic-baltic-region
290 AIDV 2023 Report, p.443.