AI Governance, Safety, and Risk Assessment for Policy Makers

Lead instructors:

  • Pam Dixon, Founder & Executive Director of the World Privacy Forum and co-chair of the United Nations Statistics Data Governance Committee
  • Stephanie Ifayemi, Head of Policy for the Partnership on AI and former Head of Digital Standards Policy in the UK Government’s DCMS, where she led the government’s work on AI and quantum technology standards

The course begins with an overview and timeline of key AI policy and technical milestones, core definitions for AI policy work, and a discussion of the global, regional, national, and other major AI frameworks already in place. It covers the background in AI policy and technology that shapes today's policy approaches to AI, including key developments from 2017 to the present: the OECD AI Principles, the UNESCO Recommendation on the Ethics of AI, a comparison of national AI strategies across key countries, and an analysis of emerging national AI laws in all major regions of the world. This section also covers major international and regional standards relevant to AI governance and policy.

The next section of the course covers how to frame AI risk and safety in policy making and elsewhere, and how AI risk assessment and mitigation differ from privacy risk assessment and mitigation. It includes a side-by-side comparison of data privacy legislation and AI legislation, showing where they do and do not map to each other, as well as a side-by-side comparison of privacy risk assessment and AI risk assessment. Key AI risk assessment frameworks are discussed, including the NIST AI Risk Management Framework and Playbook, the Asilomar AI Principles, The Future of AI Governance, and African Legal Impact (ALI) for Emerging Technologies, along with regional AI risk assessment frameworks and frameworks created by industry. This section also discusses general AI governance models, including the layered governance model for AI.

The final portion of the course covers best practices and high-quality toolkits for identifying, assessing, categorizing, and mitigating AI risks and implementation challenges. This practical section discusses the techniques and tools available for this work, drawing on current risk and safety models as well as well-developed AI governance toolkits. These include policy toolkits such as the OECD.AI Observatory Toolkit and the AI Policy Labs AI Governance Toolkit, and more technical toolkits such as the OpenAI Safety Gym and the VerifAI toolkit; a brief sketch of what one of these technical toolkits looks like in use follows below.
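To give a sense of what such a technical toolkit involves, here is a minimal, illustrative sketch of the kind of constrained reinforcement learning loop that OpenAI's Safety Gym benchmark supports. It assumes the safety-gym package and its gym dependency are installed; the environment name is one of Safety Gym's standard tasks, and the random policy is used purely for illustration.

```python
# Minimal sketch: running a random policy in an OpenAI Safety Gym task.
# Safety Gym reports task reward and safety "cost" (constraint violations)
# separately, which is the core idea technical teams mean when they talk
# about measuring AI safety empirically.
import gym
import safety_gym  # noqa: F401 -- importing registers the Safexp-* environments

env = gym.make('Safexp-PointGoal1-v0')  # standard point-robot goal task
obs = env.reset()
total_reward, total_cost = 0.0, 0.0

for _ in range(1000):
    action = env.action_space.sample()          # random policy (illustration only)
    obs, reward, done, info = env.step(action)  # classic 4-tuple gym API
    total_reward += reward
    total_cost += info.get('cost', 0.0)         # safety-constraint violations
    if done:
        obs = env.reset()

print(f"reward over 1000 steps: {total_reward:.2f}, safety cost: {total_cost:.2f}")
```

The design point worth noting is that reward and cost are tracked as separate signals, so a system's performance and its safety record can be assessed, and traded off, explicitly rather than being folded into a single score.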

The course concludes with an in-depth AI policy case study that walks students through the development of the multiple layers of an AI policy. The case study shows how AI policy development works in practice and provides practical ideas, workflows, examples, tools, and best practices for policy development. Several AI governance and risk toolkits will be modeled from start to finish in this section.

  • This course is intended for people working in ICT or AI policy, law, governance, or technology, and for those who will become involved in AI policy. Because AI is an increasingly integral part of core private and public sector work and strategies, the course is designed to be broadly applicable.
  • Upon completing the course, participants will have a greater understanding of existing AI laws, frameworks, and standards; will be able to analyze how rapidly evolving AI policy norms (in practical application and in legislation) fit into existing structures and policies; and will understand how, in practical terms, they can shift policy thinking to adapt to the requirements of AI frameworks. The course will help participants make decisions about AI policies and practices using new frameworks applicable to AI risk framing and mitigation.
  • No background is required for this course.