
    Radiology AI Deployment and Assessment Rubric (RADAR) to bring value-based AI into radiological practice

    Objective: To provide a comprehensive framework for value assessment of artificial intelligence (AI) in radiology.
    Methods: This paper presents the RADAR framework, adapted from Fryback and Thornbury's imaging efficacy framework to facilitate the valuation of radiology AI from conception to local implementation. Local efficacy is newly introduced to underscore the importance of appraising an AI technology within its local environment. The RADAR framework is further illustrated through a range of study designs that help assess value.
    Results: RADAR presents a seven-level hierarchy, providing radiologists, researchers, and policymakers with a structured approach to the comprehensive assessment of value in radiology AI. RADAR is designed to be dynamic and to meet the different valuation needs throughout the AI's lifecycle. The initial phases, technical and diagnostic efficacy (RADAR-1 and RADAR-2), are assessed before clinical deployment via in silico clinical trials and cross-sectional studies. Subsequent stages, spanning from diagnostic thinking to patient outcome efficacy (RADAR-3 to RADAR-5), require clinical integration and are explored via randomized controlled trials and cohort studies. Cost-effectiveness efficacy (RADAR-6) takes a societal perspective on financial feasibility, addressed via health-economic evaluations. The final level, RADAR-7, determines how prior valuations translate locally, evaluated through budget impact analysis, multi-criteria decision analyses, and prospective monitoring.
    Conclusion: RADAR offers a comprehensive framework for valuing radiology AI. Its layered, hierarchical structure, combined with a focus on local relevance, aligns RADAR with the principles of value-based radiology.
    Critical relevance statement: The RADAR framework advances artificial intelligence in radiology by delineating a much-needed approach to comprehensive valuation.
    Key points:
    • Radiology artificial intelligence lacks a comprehensive approach to value assessment.
    • The RADAR framework provides a dynamic, hierarchical method for thorough valuation of radiology AI.
    • RADAR advances clinical radiology by bridging the artificial intelligence implementation gap.
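
    The seven levels and their associated study designs can be read as a small data structure. The sketch below is illustrative only and is not taken from the paper; Python is an arbitrary choice, the identifiers are made up, and the RADAR-4 label, which the abstract does not state (only RADAR-3 and RADAR-5 are named for that span), is marked as a placeholder.

    from dataclasses import dataclass


    @dataclass(frozen=True)
    class RadarLevel:
        """One level of the RADAR hierarchy, as described in the abstract above."""
        level: int
        efficacy: str
        setting: str                    # evaluation setting per the abstract
        study_designs: tuple[str, ...]  # study designs named for this stage (Python 3.9+)


    RADAR_HIERARCHY = (
        RadarLevel(1, "technical efficacy", "pre-deployment",
                   ("in silico clinical trials", "cross-sectional studies")),
        RadarLevel(2, "diagnostic efficacy", "pre-deployment",
                   ("in silico clinical trials", "cross-sectional studies")),
        RadarLevel(3, "diagnostic thinking efficacy", "clinical integration",
                   ("randomized controlled trials", "cohort studies")),
        # The abstract names only RADAR-3 and RADAR-5 within this span, so the
        # RADAR-4 label is left as a placeholder rather than invented here.
        RadarLevel(4, "unnamed in the abstract (between RADAR-3 and RADAR-5)",
                   "clinical integration",
                   ("randomized controlled trials", "cohort studies")),
        RadarLevel(5, "patient outcome efficacy", "clinical integration",
                   ("randomized controlled trials", "cohort studies")),
        RadarLevel(6, "cost-effectiveness efficacy", "societal perspective",
                   ("health-economic evaluations",)),
        RadarLevel(7, "local efficacy", "local implementation",
                   ("budget impact analysis", "multi-criteria decision analysis",
                    "prospective monitoring")),
    )

    if __name__ == "__main__":
        # Print the level-to-study-design mapping given in the Results paragraph.
        for lvl in RADAR_HIERARCHY:
            print(f"RADAR-{lvl.level}: {lvl.efficacy} ({lvl.setting}) -> "
                  f"{', '.join(lvl.study_designs)}")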

    Broadening the HTA of medical AI: A review of the literature to inform a tailored approach

    Objectives: As current health technology assessment (HTA) frameworks do not provide specific guidance on the assessment of medical artificial intelligence (AI), this study aimed to propose a conceptual framework for a broad HTA of medical AI.
    Methods: A systematic literature review and a targeted search of policy documents were conducted to distill the relevant medical AI assessment elements. Three exemplary cases were selected to illustrate various elements: (1) an application supporting radiologists in stroke care, (2) a natural language processing application for clinical data abstraction, and (3) an ICU-discharge decision-making application.
    Results: A total of 31 policy documents and 9 academic publications were selected, from which a list of 29 issues was distilled. The issues were grouped into four focus areas: (1) Technology & Performance, (2) Human & Organizational, (3) Legal & Ethical, and (4) Transparency & Usability. Each assessment element was discussed extensively in the text, and the elements clinical effectiveness, clinical workflow, workforce, interoperability, fairness, and explainability were further highlighted through the exemplary cases.
    Conclusion: The current methodology of HTA requires extension to make it suitable for a broad evaluation of medical AI technologies. The 29-item assessment list that we propose needs a tailored approach for distinct types of medical AI, since the conceptualisation of the issues differs across applications.
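
    As a rough illustration only (not from the paper), the review's structure as stated in the abstract can be held as flat data: the four focus areas and the six highlighted assessment elements. The element-to-area grouping and the full 29-item list are not given in the abstract, so they are deliberately omitted; the identifiers are made up.

    # Illustrative sketch: focus areas and highlighted elements named in the abstract.
    FOCUS_AREAS = (
        "Technology & Performance",
        "Human & Organizational",
        "Legal & Ethical",
        "Transparency & Usability",
    )

    HIGHLIGHTED_ELEMENTS = (
        "clinical effectiveness",
        "clinical workflow",
        "workforce",
        "interoperability",
        "fairness",
        "explainability",
    )

    if __name__ == "__main__":
        print(f"{len(FOCUS_AREAS)} focus areas; "
              f"{len(HIGHLIGHTED_ELEMENTS)} of 29 assessment elements highlighted:")
        for element in HIGHLIGHTED_ELEMENTS:
            print(f"  - {element}")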
