A risk governance framework for healthcare decision support systems based on socio-technical analysis

Abstract

We are developing an Artificial Intelligence (AI) risk governance framework based on human factors and AI governance principles to make automated healthcare decision support safer and more accountable. Healthcare systems today face such an overload of reporting that manual processing and comprehensive decision-making have become impossible. Emerging advances in AI, and especially Natural Language Processing, seem to offer an attractive answer to human limitations in processing high volumes of reports. However, automation carries known risks, including the organisational change involved in deploying AI itself, as well as emotional and ethical factors, which are rarely taken into consideration in AI-based decision-making. To explore this, we will first construct a Decision Support System (DSS) tool based on a knowledge graph extracted from real-world healthcare reports. The tool will then be deployed in a controlled manner in a hospital, and its operation will be analysed using an established socio-technical methodology developed by the Centre for Innovative Human Systems in Trinity College Dublin over 25 years of research. Our contribution is the integration of computer science with organisational psychology, using human factors methods to identify the impact of AI-based healthcare DSS, their associated risks, and the ethical and legal challenges they raise. We hypothesise that collaborating with organisational psychologists to consider the global system of human decision-making and AI-based DSS will help minimise the risks of AI-based decision-making in a way that ensures fairness, accountability, and transparency. This study will be carried out with our partner hospital, St. James's Hospital in Dublin.