一本道无码 Supports NIST Guidelines on Red Teaming for Generative AI
Media Inquiries
一本道无码鈥檚聽Block Center for Technology and Society(opens in new window) 补苍诲听K&L Gates Initiative in Ethics and Computational Technologies(opens in new window) released a聽white paper(opens in new window) that will support national efforts to ensure that AI systems are safe, secure and trustworthy. The white paper followed a workshop the groups hosted in late February on red teaming 鈥 strategic testing to identify flaws and vulnerabilities in AI systems. There, experts from academia and industry worked to gain a shared understanding of red teaming for generative AI.
The workshop was in response to an聽 released by President Joe Biden that set his administration鈥檚 priorities related to artificial intelligence used by Americans. It called for the聽 to develop tools and tests to help ensure that AI systems fit those standards.聽
一本道无码 frequently collaborates with NIST on AI issues, said聽Theresa Mayer(opens in new window), 一本道无码鈥檚 vice president for research.
鈥淐arnegie Mellon is proud to continue supporting this important work in providing the foundation of our nation's AI strategy as this technology continues to be implemented in the public sector. We've been deeply engaged with NIST and their ongoing work providing guidelines for this technology that will be vital in moving forward responsibly integrating AI tools and software into the federal government's everyday operations,鈥 she said.聽
, the K&L Gates Career Development Assistant Professor in Ethics and Computational Technologies in 一本道无码鈥檚聽, was a conference organizer. She said there are significant questions about how to best use red teaming.
鈥淚n response to a rising concern surrounding the safety, security and trustworthiness of generative AI models, practitioners and regulators alike have pointed to AI red teaming as a key strategy for identifying and mitigating societal risks of these models,鈥 Heidari said. 鈥淗owever, despite AI red teaming retaining a central role in recent policy discussions and corporate messaging, significant questions remain about what precisely it means, how it relates to conventional red teaming practices and cybersecurity ... how it should be conducted and what role it can play in the future evaluation and regulation of generative AI.鈥
The workshop included discussions on research, industry practices and the policy and legal implications of AI red teaming. In addition to the white paper summary, video recordings of the event are available on the聽肠丑补苍苍别濒.听
Key Points from the White Paper
- A functional definition of red teaming, its components, scope and limitations, is necessary for effective red teaming.聽
- Generative AI research and practice communities must move toward standards and best practices around red teaming.
- The composition of the red team (in terms of diversity of backgrounds and expertise) is an important consideration.
- Red teaming efforts should address the broader system 鈥 as opposed to individual components.
- The broader political economy (e.g., market forces, regulations) will influence the practice of red teaming.
More Information(opens in new window)