
AI Is Here to Stay. How Do We Govern It?
Before Pandora's box was an idiom, it was, well, a box – actually, a jar, in the original Greek tale. Told never to open it, Pandora couldn't resist, unleashing a torrent of evil and misery upon the world. But what gets lost in the phrase's modern usage is the box's final ingredient: hope.
What computer scientists, technologists, policymakers and ethicists must do now, in an age when artificial intelligence is inextricably intertwined with our everyday lives and growing more powerful by the month, is flip the idiom on its head: proceed in such a way that the hope AI represents can escape the box and be brought to bear against the problems of our society while its risks remain squashed inside, marginalized and mitigated. And the hope has never been greater.
"We sometimes joke about really smart people, 'They're going to cure cancer someday,'" said Martial Hebert, the dean of Carnegie Mellon's School of Computer Science. "Well, these people working on AI are going to cure cancer. All the main human diseases, I totally believe, are going to be cured by giant computer clusters, doing large-scale machine learning and related techniques. And we're just scratching the surface now."
Carnegie Mellon Professor Rayid Ghani is one of dozens of expert faculty who make the university a leader in AI innovation.
But who makes the rules? Who has jurisdiction? Do we need a new Cabinet department, or can we bake AI oversight into existing government agencies?
The European Union took the lead by passing the AI Act, the world's first comprehensive law governing AI. U.S. states are introducing bills regulating aspects of AI, such as deepfakes, privacy, generative AI and elections. In October, the White House issued a broad AI executive order that promoted innovation and addressed security, privacy, equity, workplace disruption and deepfakes. The bipartisan Future of AI Innovation Act, introduced in the U.S. Senate in April, aims to promote American leadership in AI development via partnerships between government, business, civil society and academia, and the Secure Artificial Intelligence Act, introduced in the Senate in May, seeks to improve the sharing of information regarding security risks and vulnerability reporting.
Senate Majority Leader Chuck Schumer has made a point of educating lawmakers about the benefits and risks of AI. In March, he announced $10 million in funding for the National Institute of Standards and Technology (NIST) to support the recently established U.S. AI Safety Institute.
Very little about governing AI is clear, but one thing is.
"Don't regulate the technology," said Ramayya Krishnan, the dean of Carnegie Mellon's Heinz College, "because the technology will evolve."
What are we regulating?
Artificial intelligence has existed in some form for decades, but advances in computing power have accelerated both its capabilities and the conversations about its risks. The recent improvement of generative AI tools like ChatGPT, DALL-E and Synthesia, which can create text, images and video with startling realism, increased the calls for governance. Silicon Valley and lawmakers generally agree on the pillars of this governance: AI models should be safe, fair, private, explainable, transparent and accurate.
We have mechanisms in place to investigate and punish those who commit crimes. Whether you rob a bank with a ski mask or AI-generated polymorphic malware, the FBI still has jurisdiction. But can the current institutions keep up with this rapidly changing technology, or do we need a central clearinghouse?
Both. There are more than 400 federal agencies. If, say, the European Union or Red Cross is attempting to liaise with the U.S. government on AI, it can't do so across hundreds of different bureaucratic institutions.
Heinz College Dean Ramayya Krishnan is an international leader in AI innovation.
"You do need to boost capability in oversight," Krishnan said. "The Equal Employment Opportunity Commission has to have AI oversight capability. The Consumer Financial Protection Bureau has to have AI oversight capability. In other words, each of these executive agencies that have regulatory authority in specific sectors needs to have the capability to govern use cases where AI is used. In addition, we need crosswalks between frameworks in use in the U.S., such as the NIST AI Risk Management Framework, and frameworks developed and deployed in other jurisdictions to enable international collaboration and cooperation."
Real-world risk and impact
NIST plays an important role in any conversation regarding AI and its risks. Within NIST, Reva Schwartz, the institute's principal investigator for AI bias, bridges the gap between algorithms and humanity. She contributed to the creation of the NIST AI Risk Management Framework and studies the field from a socio-technical perspective, asking questions about AI's safety and impact on people.
The responsible development and deployment of AI goes beyond making sure the model works. Ideally, the system will be accurate, and not prone to hallucination as some generative AI models are. It will be fair, especially when its outputs lead to a significant impact on people's lives, as in the cases of criminal sentencing and loan approval. It will be explainable and transparent: How did it reach its conclusion? It will not reveal identifiable personal information that may have been included in the information used to train it.
"Currently, people tend to look at the AI lifecycle as just the quantitative aspects of the system," Schwartz said. "But there's so much more to AI than the data, model and algorithms. To build AI responsibly, teams can start by considering real-world risks and impacts at the problem formulation stage."
By building safe and responsible AI practices into a model's creation rather than slapping them on after the fact, organizations can gain a leg up in the market.
"In practice, promoting these principles often not only doesn't hurt innovation and economic growth; it actually helps us develop better quality products," said Hoda Heidari, the K&L Gates Career Development Assistant Professor in Ethics and Computational Technologies at Carnegie Mellon and co-lead of the university's Responsible AI initiative, housed in the Block Center for Technology and Society.
The aviation analogy
Hebert and Schwartz both compared the governance of AI to that of air travel, and the analogy works on two levels. Passengers don't need to know how the airplane works to fly; they only have to trust that the pilots do, and that they will operate the plane properly and within the rules. Safe air travel is also made possible by the agreement between airlines and manufacturers to share details of any incident, no matter how small, that the entire community can learn from.
While decades of air travel records exist, we're too early in the development of AI (or generative AI, at least) for that kind of catalog.
Martial Hebert believes AI at scale will help us solve monumental societal problems.
"Something that I've been negatively surprised with is how haphazard some of the practices around AI are," Heidari said. "This is not a fully mature technology. We don't currently have well-established standards and best practices. And as a result, when you don't have that frame of reference, it is very easy for it to be misused and abused, even with no bad intention."
Heidari recently contributed to a report on the safety of advanced AI from the AI Seoul Summit, a joint effort between South Korea and the United Kingdom. Krishnan, Heidari and Professor Rayid Ghani met with legislators from both parties to educate them on the opportunities and risks of generative AI.
"We're in the center of the ongoing policy debate in Washington and elsewhere: How should we regulate this new tool that will likely impact every industry?" said the executive director of the Block Center. "We're also in the middle of trying to understand: What does it mean for the future of work?"
Decades of work, from Turing to Minsky to Simon and Newell to Hinton, have led us to this point. Lawmakers the world over, from the EU to Capitol Hill to state houses, must ensure we harness the hope of AI for the greater good and slam the lid shut on the rest.