Presentation Type
Poster
Keywords
Artificial Intelligence, Bioethics, Medical ethics, Humean Ethics, AI reasoning, Informed Consent
Department
Philosophy
Major
Philosophy
Abstract
As generative AI becomes more integrated into healthcare, it seems inevitable that AI will eventually be used on hospital ethics committees. Before implementation, however, its role needs careful consideration. Although AI promises to reduce costs, increase efficiency, and lighten human workloads, it is limited in important ways, especially where human emotion and connection are crucial, as they are on clinical ethics boards.
In this paper, I highlight several problems that prevent AI from being useful on hospital ethics boards, including opaque reasoning (the “black box” problem), liability, transparency, privacy, and consent. While implementation frameworks have been proposed, such as guidance from the Coalition for Health AI (CHAI) and the CARE-AI assessment tool, these do not address the underlying question of whether an AI can truly be ethical.
Drawing on a Humean perspective on morality, I argue that the crux of the problem is that ethics boards aim not only to reach correct conclusions but to do so in a way that is emotionally grounded and essentially contestable. Thus, even if an AI system lands on the “objectively optimal” solution (if such a thing exists), it arrives at that conclusion through statistical pattern matching rather than through deliberation of the kind by which humans reason morally. As a result, the system cannot participate meaningfully in a board's deliberations. AI is structurally incapable of the kind of moral reasoning that ethics boards demand, and so I argue that we should not place AI on clinical ethics boards until it is capable of robust moral reasoning, not merely statistical inference over historical training data.
Faculty Mentor
Derek Estes
Location
Waves Cafeteria
Start Date
10-4-2026 1:00 PM
End Date
10-4-2026 2:00 PM
Included in
Applied Ethics Commons, Artificial Intelligence and Robotics Commons, Bioethics and Medical Ethics Commons, Medical Humanities Commons
Statistical Inference Is Not Moral Reasoning: The Case Against AI on Hospital Ethics Boards
Comments
If a poster is required, I will work with Dr. Estes to create and submit it based on the abstract.