Organizational Governance of Emerging Technologies: AI Adoption in Healthcare
Private and public sector structures and norms refine how emerging technology
is used in practice. In healthcare, despite a proliferation of AI adoption, the
organizational governance surrounding its use and integration is often poorly
understood. In this research, the Health AI Partnership (HAIP) aims to better
define the requirements for adequate organizational governance of AI systems in
healthcare settings and to support health system leaders in making more
informed decisions about AI adoption. To work towards this understanding, we
first identify how standards for AI adoption in healthcare may be designed to
be used easily and efficiently. Then, we map out the precise
decision points involved in the practical institutional adoption of AI
technology within specific health systems. Practically, we achieve this through
a multi-organizational collaboration with leaders from major health systems
across the United States and key informants from related fields. Working with
the consultancy IDEO.org, we conducted usability-testing sessions with
healthcare and AI ethics professionals. Usability analysis informed a prototype
structured around mock key decision points that align with how organizational
leaders approach technology adoption. Concurrently, we conducted
semi-structured interviews with 89 professionals in healthcare and other
relevant fields. Using a modified grounded theory approach, we identified eight
key decision points and comprehensive procedures throughout the AI
adoption lifecycle. This is one of the most detailed qualitative analyses to
date of the current governance structures and processes involved in AI adoption
by health systems in the United States. We hope these findings can inform
future efforts to build capabilities to promote the safe, effective, and
responsible adoption of emerging technologies in healthcare.
Development and preliminary testing of Health Equity Across the AI Lifecycle (HEAAL): A framework for healthcare delivery organizations to mitigate the risk of AI solutions worsening health inequities.
The use of data-driven technologies such as Artificial Intelligence (AI) and Machine Learning (ML) is growing in healthcare. However, the proliferation of healthcare AI tools has outpaced regulatory frameworks, accountability measures, and governance standards needed to ensure safe, effective, and equitable use. To address these gaps and tackle a common challenge faced by healthcare delivery organizations, a case-based workshop was organized and a framework was developed to evaluate the potential impact of implementing an AI solution on health equity. The Health Equity Across the AI Lifecycle (HEAAL) framework was co-designed with extensive engagement of clinical, operational, technical, and regulatory leaders across healthcare delivery organizations and ecosystem partners in the US. It assesses five equity assessment domains (accountability, fairness, fitness for purpose, reliability and validity, and transparency) across eight key decision points in the AI adoption lifecycle. It is a process-oriented framework containing a total of 37 step-by-step procedures for evaluating an existing AI solution and 34 procedures for evaluating a new AI solution. For each procedure, it identifies the relevant key stakeholders and the data sources used to conduct the procedure. HEAAL guides how healthcare delivery organizations may mitigate the potential risk of AI solutions worsening health inequities. It also indicates the resources and support required to assess the potential impact of AI solutions on health inequities.