Technology (e.g. robotics, assistive technology, mHealth)
Clinical Practice (assessment, diagnosis, treatment, knowledge translation/EBP, implementation science, program development)
New developments in machine learning and AI have put healthcare on the cusp of another great technological leap, this time in automation and assisted decision making. However, great opportunities come with great risks. AI has seen high-profile failures: Facebook's algorithms, Theranos, biased hiring algorithms, sentencing algorithms that perpetuate discrimination, and facial recognition that may fail to recognize you or recognize you all too well. Over the past few years, such examples have given individuals pause in adopting intelligence, forcing companies to examine their products and processes in a whole new way.
Yet there is promise in intelligence: adding convenience, improving accessibility, and potentially removing barriers. AI will soon be pervasive in healthcare. How do we embrace this inevitable shift while navigating the accompanying uncertainty? As stakeholders, we can agree that we need guidelines for safety, but what about other concerns such as fairness and accountability? Recognizing the need to leverage AI's benefits and mitigate its risks, the Center for Practical Bioethics (CPB), in collaboration with Cerner Corporation and other leading healthcare institutions, is developing ethical AI strategies tailored to the unique needs of healthcare. In 2019, a workshop was held to address the ethical challenges of AI in healthcare. Fifty-four professionals from across the US, spanning engineering, medicine, social work, research, data science, user experience, nursing and other related fields, gathered to examine concerns regarding ethics in AI and to propose solutions. Questions related to keywords including fairness, inclusion, accountability, transparency, data privacy & security, and reliability & safety were addressed and leveraged to establish a healthcare intelligence framework for discussion. The current proposal extends the previous workshop to other healthcare communities and regions. The workshop has three main objectives:
1) For individuals to share what each keyword looks like from their professional and personal perspectives.
2) For each group to discuss and record what each keyword should look like across the design/development, implementation/dissemination, and user experience pillars of healthcare intelligence.
3) To achieve consensus on recommendations for incorporating ethics across the pillars of healthcare intelligence.
Each attendee will be required to submit their occupation and their level of knowledge of ethics and technology prior to the course so they may be grouped across areas of expertise and levels of knowledge. The course will begin with an introduction to why ethics is important as we develop intelligence in healthcare and to the expectations for the workshop. A crash course on ethics, including the history and implications of ethics in healthcare intelligence, will be followed by three sessions introducing attendees to ethical considerations within the three pillars of AI design. Each session will be followed by a group discussion on how ethics should be applied within the areas of the keywords above. Facilitators will organize themes from each table. The course will conclude with an open discussion of common themes and an examination of how recommendations can be formatted into guidelines. Throughout the course, attendees will be directed to evaluate their own experiences as healthcare providers and users, regardless of their prior knowledge of ethics and/or intelligence.