Carnegie Mellon Convenes Experts in Evaluating Generative AI
Carnegie Mellon University’s K&L Gates Initiative in Ethics and Computational Technologies sponsored and co-organized an expert convening in Washington, D.C., on evaluating generative AI. Organized in partnership with the Center for Democracy and Technology and Georgetown Law, the conference took place at Georgetown University Law Center on April 15.
The event brought together machine learning experts and engineers with policymakers, lawyers, and members of civil rights and advocacy groups for a day of talks and panels on the challenges and opportunities of evaluating generative AI: characterizing the state of the technology, identifying the needs of policymakers and civil society, and mapping the gaps between the two.
Zachary Lipton, the Raj Reddy Associate Professor of Machine Learning and chief technology officer and chief scientist of Abridge, delivered the keynote address. He said the complex and dynamic evaluation landscape presents fundamental challenges for regulation.
“Any binding rules or guidance will either skew conservative, including only what’s necessary but being insufficient for usefulness and safety, or skew wide, presenting a set of binding requirements that are inapplicable to most actual deployments. We need to create real, practical guidance with teeth that’s sufficiently robust to help people develop useful technology,” Lipton said.
Exploring Crucial Areas from Executive Order on AI
The conference focused on four themes raised in President Joe Biden’s executive order on AI: training-data attribution, privacy, data provenance and watermarks, and trust and safety. It also included updates on AI regulation and evaluation from the U.S. Copyright Office, the Federal Trade Commission, the U.S. AI Safety Institute and the U.K. AI Safety Institute.
“With the rise of modern generative AI, we are at an inflection point,” said Hoda Heidari, K&L Gates Career Development Assistant Professor in Ethics and Computational Technologies at Carnegie Mellon and an organizer of the event. “Recent technological advances in this space have significantly lowered barriers to apply AI to real-world tasks and problems. While these models appear to have impressive capabilities, they also alter and expand the landscape of risks and harms.”
“On the technical side, we face a dire need for effective, holistic and reliable approaches to evaluating the capabilities and risks of GenAI for the concrete problems that practitioners and policymakers are grappling with,” she continued. “By putting technological experts, policymakers and civil society in direct conversation with one another, this event both highlighted and bridged the gap between existing approaches to evaluation and the needs and desires of policymakers, practitioners and the public more broadly.”
Policy briefs summarizing the outputs of the event are forthcoming.