Several years ago, I compiled the first reference book on AI policies.1 I aimed to provide a ready reference for the emerging field of artificial intelligence, similar to the books I had published on privacy law.2 At that time, I noted the rapid explosion of AI ethics frameworks, but it was still early days for AI governance. The OECD had just finalized the first AI principles endorsed by national governments. The Universal Guidelines for AI, published the year before, were gaining influence among AI policymakers. But work on the EU AI Act had not yet started. At the Council of Europe, the first steps were being taken toward a global AI treaty.
This year, I returned to the project with an updated AI Policy Sourcebook3. As I wrote in the introduction to the new volume, a lot has happened in five years. Governments have raced to develop national AI strategies, amend current laws, and enact new laws. Remarkable progress has been made by international organizations. The early, now venerable, OECD AI Principles were recently updated to account for recent developments in AI technology. The EU AI Act was adopted in 2024, and implementation has begun. Forty countries have endorsed the Council of Europe Framework Convention on AI. At the United Nations, there is consensus on the need to ensure safe, secure, and trustworthy AI. But challenges remain in many domains: sustainability, autonomous weapons, algorithmic transparency, labor impacts, copyright, and more.
In the Sourcebook, I provide brief commentary on the various AI governance frameworks. Here, I share a few insights, having both followed the development of AI governance frameworks over the last several years and helped to develop and draft several of them.
First, the level of engagement among countries worldwide in AI policy is striking and encouraging. A topic that was esoteric less than a decade ago is now front and center for many governments. However, the public remains largely unaware of these initiatives, particularly in the United States. Reporting on AI tends to swing between the hype of overstated innovation and the doom of exaggerated concern. Thoughtful reporting on AI governance, focusing on progress and setbacks, would engage the public, strengthen democratic institutions, and promote the development of well-informed public policy.
Second, the development of AI policy should be viewed as an evolutionary process. From the OECD AI Principles to the EU AI Act, we can see the rapid development and sophistication in the understanding of AI policy. More issues are considered and in greater detail. New initiatives build on earlier initiatives. It is essential for those who joined the conversation after the release of ChatGPT in late 2022 to recognize the long history of efforts by governments to regulate AI. Even this collection provides only a recent timeline of a topic that goes back to the debate over autonomous weapons more than 40 years ago.
Third, AI policymakers must be careful to steer a steady course, veering neither too quickly in one direction nor the other. The introduction of generative AI in 2022 posed new challenges, but many of the common elements for effective governance were already well known: the need for impact assessments, the establishment of supervisory authorities, and the allocation of rights and responsibilities for those who use AI systems and those who design, develop, and deploy AI systems. Coordinating new AI regulations with preexisting rules for automated decision-making is not a simple task. However, it is a mistake to exclude from a modern definition of AI systems the rule-based expert systems (symbolic AI) that have defined the field from the start. A good definition of AI should remain technology-neutral.
Fourth, we are now entering a new phase of AI governance. If the period 2019-2024 could be described as Establishing Norms for AI Governance, the period 2025-2029 is about the Implementation and Enforcement of AI Governance Norms. This is when the hard work begins. Governments must spend less time articulating AI principles and more time advancing the principles they have already endorsed. There is a real risk in this moment of moving sideways or even backward. With AI governance, we do not have the luxury of time. AI is evolving rapidly. Regulation must do so as well.
Fifth, it is not too soon to ask questions about regulatory convergence and divergence, progress and setbacks, which governance strategies are working and which are in need of repair. Public policy benefits from these comparative assessments. CAIDP’s comprehensive report, the AI and Democratic Values Index, provides the basis for this work.4 With the AI and Democratic Values Index, we provide a narrative survey of AI policies and a methodology to assess AI policies and practices against democratic values. This methodology provides an opportunity to compare national AI policies at a moment in time and to analyze trends over time.
Finally, we need to underscore the urgency of AI governance. Companies and countries are rushing quickly into a future they do not fully understand. The leading AI experts caution that advanced AI systems are not reliable or trustworthy. They have urged us to pause, at least to implement the safeguards and guardrails necessary for sustainable progress. Others warn that many of the AI systems already deployed lack meaningful transparency and accountability. Bias is embedded and replicated at scale.
It is not too soon to think critically about the consequences of the AI age and how we are to maintain human control over this rapidly evolving technology. Human reason remains central to this task. With the publication of this reference book on early efforts to govern AI, we hope to advance that work. Sapere aude!5
1. Marc Rotenberg, ed., The AI Policy Sourcebook (2019).
2. Marc Rotenberg, ed., The Privacy Law Sourcebook (1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2016, 2018, 2020).
3. Marc Rotenberg and Eleni Kyriakides, eds., The AI Policy Sourcebook (2025).
4. April Yoder, Marc Rotenberg, and Merve Hickok, eds., AI and Democratic Values Index (CAIDP 2025).
5. Immanuel Kant, What is Enlightenment? (1784). Marc Rotenberg, Artificial Intelligence and Galileo’s Telescope, Review of the Age of AI by Henry Kissinger, Eric Schmidt, and Douglas Huttenlocher, Issues in Science and Technology (December 2021) (discussing Kant’s views on the role of human reason as applied to AI).
Marc Rotenberg is founder and executive director of the Center for AI and Digital Policy, a global network of AI policy experts and human rights advocates.