Ethical AI: A Framework for Justice

AI in Healthcare

New developments in machine learning and Artificial Intelligence (AI) have put healthcare on the cusp of another great technological leap. But great opportunities come with great risks. Poorly designed and implemented AI has contributed to failures at Facebook, biased hiring algorithms, criminal sentencing algorithms that perpetuate discrimination, and facial recognition that may fail to recognize you or may recognize you all too well.
AI will soon be pervasive in healthcare. How do we embrace this inevitable shift while navigating the accompanying uncertainty? As stakeholders, we can agree that we need guidelines for safety, but what about fairness, accountability, and protection of vulnerable patients from systemic bias? There is an urgent need for industry standards and for contextual guidance that accounts for the distinctive problems facing the development of AI for healthcare applications.
AI turns long and laborious decision-making processes and predictions over to machines. When these processes and predictions produce results that influence life and death, as they do in healthcare, we need to be sure they are results we can trust.
Three factors create a context in which AI tools can produce ethically problematic outputs that can be difficult to detect and audit:
- AI tools are black boxes. For reasons of complexity and intellectual property, they can be difficult or impossible to scrutinize when they produce unexpected outcomes.
- AI tools have been shown to reproduce systemic biases contained in the data used to train them.
- AI tools are afforded deference by human users; AI judgments are generally considered “more objective” than those made by people.
The Ethical AI Project
Recognizing the need to leverage AI’s benefits and mitigate risks, the Center for Practical Bioethics, in collaboration with Cerner Corporation and other leading healthcare institutions in the Kansas City region, is developing EthicalAI strategies tailored to the unique needs of healthcare.
The work began at an August 2019 workshop in Kansas City. Fifty-four professionals from across healthcare and technology, including engineering, medicine, social work, research, data science, user experience, nursing and related fields, gathered to examine concerns regarding ethics in AI and to propose interventions. In October 2020, we conducted our second workshop, held virtually, at the annual American Congress of Rehabilitation Medicine conference.
Current EthicalAI Project collaborators include BioNexus KC, Children’s Mercy Hospitals & Clinics, the University of Missouri-Kansas City and Cerner Corporation. We have also worked with several start-ups, universities, and healthcare providers in the Kansas City area. With these partners, we will curate and create content that will initially serve as a foundational library of resources for healthcare IT companies and other stakeholders.
The EthicalAI Project, with support from the Sunderland Foundation, will integrate ethical principles, such as equity and justice, into the design, development, dissemination, and implementation of AI tools so that those principles are reflected in the results.