November 21, 2023

Researchers Propose Framework for AI Use in Health Care

Caitlin Kizielewicz

Health care organizations are looking to artificial intelligence (AI) tools to improve patient care, but their translation into clinical settings has been inconsistent, in part because evaluating AI in health care remains challenging. A multi-institutional team of researchers has proposed a framework for using AI that includes practical guidance for applying values and that considers not just a tool's properties but the systems surrounding its use. The article was published in the November issue of the journal Patterns.

“Regulatory guidelines and institutional approaches have focused narrowly on the performance of AI tools, neglecting knowledge, practices and procedures necessary to integrate the model within the larger social systems of medical practice,” said Alex John London, the K&L Gates Professor of Ethics and Computational Technologies in the Department of Philosophy at Carnegie Mellon and coauthor of the study. “Tools are not neutral—they reflect our values—so how they work reflects the people, processes and environments in which they are put to work.”

London is also director of Carnegie Mellon’s Center for Ethics and Policy and chief ethicist at Carnegie Mellon’s Block Center for Technology and Society.

London and his coauthors advocate for a conceptual shift in which AI tools are viewed as parts of a larger “intervention ensemble,” a set of knowledge, practices and procedures that are necessary to deliver care to patients. In previous work with other colleagues, London has applied this concept to pharmaceuticals and to autonomous vehicles. The approach treats AI tools as “sociotechnical systems,” and the authors’ proposed framework seeks to advance the responsible integration of AI systems into health care.

Previous work in this area has been largely descriptive, explaining how AI systems interact with human systems. The framework proposed by London and his colleagues is proactive, offering guidance to designers, funders and users on how to integrate AI systems into workflows where they have the greatest potential to help patients. Their approach can also inform regulation and institutional decision-making, as well as the responsible and ethical appraisal, evaluation and use of AI tools. To illustrate the framework, the authors apply it to AI systems developed for diagnosing more-than-mild diabetic retinopathy.

“Only a small majority of models evaluated through clinical trials have shown a net benefit,” said Melissa McCradden, a bioethicist at The Hospital for Sick Children, an assistant professor of clinical public health at the Dalla Lana School of Public Health and a coauthor of the study. “We hope our proposed framework lends precision to evaluation and interests regulatory bodies exploring the kinds of evidence needed to support the oversight of AI systems.”

London and McCradden were joined by Shalmali Joshi at Columbia University and James A. Anderson at The Hospital for Sick Children on the study, titled “A normative framework for artificial intelligence as a sociotechnical system in healthcare.”