
Ethics & Artificial Intelligence

Recent advances in computer science and robotics are rapidly producing computational systems able to perform tasks once reserved solely for humans. Across many areas of life, from driverless cars and automated factories to medical diagnostics and personal care robots to military drones and cyber defense systems, the deployment of computational decision makers raises a complex array of ethical issues.

Work on these issues in the Philosophy Department is distinctive for its interdisciplinary character, its deep connections with the technical disciplines driving these advances, and its focus on concrete ethical problems that are pressing now or will be within the next five to ten years. It is also distinctive for its attention to the broader social impacts of integrating artificial decision makers into various social systems, including how human social relations may change in light of the new forms of interaction such systems enable. Moreover, we work proactively to shape the development of these technologies toward more ethical processes, products, and outcomes.

Danks has explored the role of trust in systems with autonomous capabilities, including the ways this ethical value shapes human understanding of those systems and our incorporation of them into functional, effective teams. He has examined these issues across a range of domains, including security & defense, cyber-systems, healthcare, and autonomous vehicles. This work raises and addresses basic questions about responsibility, liability, and the permissible division of labor between humans and computational systems.

London and Danks have explored challenges in the development and regulation of autonomous systems, and the respects in which oversight mechanisms from other areas might usefully be emulated in this context. Their work draws heavily on a model of technological development articulated by London and colleagues in the context of drug development. They have also explored algorithmic bias and the ways in which normative standards for ethically appropriate processes and outcomes are essential to evaluating the potential for different kinds of bias in algorithms.
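To illustrate the kind of issue at stake, consider a minimal Python sketch (the groups, predictions, and labels below are invented for illustration and are not drawn from this research): two widely discussed fairness criteria, demographic parity and equal opportunity, can deliver opposite verdicts on the very same predictions.

```python
# Hypothetical toy data: one binary prediction and one true label per person.
group_a = {"preds": [1, 1, 1, 0], "labels": [1, 1, 0, 0]}
group_b = {"preds": [1, 0, 0, 0], "labels": [1, 0, 0, 0]}

def positive_rate(preds):
    """Fraction of a group receiving the favorable prediction."""
    return sum(preds) / len(preds)

def true_positive_rate(preds, labels):
    """Fraction of truly qualified members correctly predicted positive."""
    hits = [p for p, y in zip(preds, labels) if y == 1]
    return sum(hits) / len(hits)

# Demographic parity compares raw selection rates between groups...
dp_gap = positive_rate(group_a["preds"]) - positive_rate(group_b["preds"])

# ...while equal opportunity compares accuracy among the qualified.
eo_gap = (true_positive_rate(group_a["preds"], group_a["labels"])
          - true_positive_rate(group_b["preds"], group_b["labels"]))

print(f"demographic parity gap: {dp_gap:+.2f}")  # +0.50 -> 'biased' by this standard
print(f"equal opportunity gap:  {eo_gap:+.2f}")  # +0.00 -> 'fair' by this standard
```

Here the selection-rate gap suggests bias while the equal-opportunity gap suggests none; deciding which verdict should govern is a normative question, not a purely statistical one, which is precisely why explicit normative standards are indispensable to bias evaluation.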

As ambitions grow for computational systems capable of making a wider range of ethical decisions, it becomes increasingly important to understand the structure of ethical decision making and the prospects for implementing such structures in various kinds of formal systems. Here, foundational issues about methodology in theoretical and applied ethics intersect with formal methods from game and decision theory, causal modeling, and machine learning.
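As a concrete, if highly simplified, illustration of what "implementing such structures" can mean, the following Python sketch encodes one possible ethical structure, a hard deontic constraint applied lexicographically before expected-utility maximization, as a toy decision procedure. The actions, probabilities, and utilities are invented for this example.

```python
# Hypothetical actions for a toy autonomous agent. "permissible" encodes a
# hard deontic constraint; "outcomes" lists (probability, utility) pairs.
# All numbers are invented for illustration.
ACTIONS = {
    "swerve_left":  {"permissible": True,  "outcomes": [(0.9, 10), (0.1, -50)]},
    "swerve_right": {"permissible": False, "outcomes": [(1.0, 20)]},
    "brake":        {"permissible": True,  "outcomes": [(1.0, 5)]},
}

def expected_utility(outcomes):
    """Probability-weighted sum of utilities over an action's outcomes."""
    return sum(p * u for p, u in outcomes)

def choose(actions):
    """Lexicographic rule: filter by permissibility, then maximize expected utility."""
    permissible = {name: a for name, a in actions.items() if a["permissible"]}
    if not permissible:
        raise ValueError("no permissible action available")
    return max(permissible,
               key=lambda name: expected_utility(permissible[name]["outcomes"]))

# A pure expected-utility maximizer would pick "swerve_right" (EU = 20.0);
# the constrained rule picks "brake" (EU = 5.0, beating swerve_left's 4.0).
print(choose(ACTIONS))  # -> "brake"
```

Even in this toy setting, the constrained agent chooses differently from an unconstrained expected-utility maximizer, one formal way in which competing methodological views in ethics become competing system designs.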

Finally, the work of several faculty, such as Wenner and London, is relevant to understanding the conditions under which these new technologies might be used in ways that enhance human capabilities or exacerbate domination or exploitation.