Reporting guideline for the early stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI
A growing number of artificial intelligence (AI)-based clinical decision support systems are showing promising performance in preclinical, in silico evaluation, but few have yet demonstrated real benefit to patient care. Early-stage clinical evaluation is important to assess an AI system’s actual clinical performance at small scale, ensure its safety, evaluate the human factors surrounding its use, and pave the way to further large-scale trials. However, the reporting of these early studies remains inadequate. The present statement provides a multistakeholder, consensus-based reporting guideline for the Developmental and Exploratory Clinical Investigations of DEcision support systems driven by Artificial Intelligence (DECIDE-AI). We conducted a two-round, modified Delphi process to collect and analyse expert opinion on the reporting of early clinical evaluation of AI systems. Experts were recruited from 20 predefined stakeholder categories. The final composition and wording of the guideline were determined at a virtual consensus meeting. The checklist and the Explanation & Elaboration (E&E) sections were refined based on feedback from a qualitative evaluation process. In total, 123 experts participated in the first Delphi round, 138 in the second, 16 in the consensus meeting, and 16 in the qualitative evaluation. The DECIDE-AI reporting guideline comprises 17 AI-specific reporting items (made up of 28 subitems) and 10 generic reporting items, with an E&E paragraph provided for each. Through consultation and consensus with a range of stakeholders, we have developed a guideline comprising the key items that should be reported in early-stage clinical studies of AI-based decision support systems in healthcare. By providing an actionable checklist of minimal reporting items, the DECIDE-AI guideline will facilitate the appraisal of these studies and the replicability of their findings.
Automated Test Selection in Decision-Support Systems: a Case Study in Oncology
Decision-support systems in medicine should be equipped with a facility that provides patient-tailored information about which test is best performed in which phase of the patient's management. A decision-support system with a good test-selection facility may result in ordering fewer tests, decreasing financial costs, improving a patient's quality of life, and improving medical care in general. In close cooperation with two experts in oncology, we designed such a facility for a decision-support system for the staging of cancer of the oesophagus. The facility selects tests based upon a patient's health status and closely matches current routines. We feel that by extending our decision-support system with this facility, it provides further support for a patient's management and will be more attractive for use in daily medical practice. In this paper, we describe the test-selection facility that we designed for our decision-support system in oncology and present some initial results.
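The core idea of a patient-tailored test-selection facility can be sketched as a utility calculation: each candidate test is scored by its diagnostic value minus a weighted penalty for patient burden, and tests contraindicated by the patient's current health status are excluded. The test names, scores, weights, and the `fit_for_endoscopy` flag below are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of patient-tailored test selection: score each
# admissible test by diagnostic value minus a cost/burden penalty,
# conditioned on the patient's health status. All numbers are invented.

def select_test(candidate_tests, health_status, cost_weight=0.5):
    """Return the admissible test with the highest net utility, or None."""
    best_test, best_utility = None, float("-inf")
    for test in candidate_tests:
        if not test["admissible"](health_status):
            continue  # skip tests contraindicated for this patient
        utility = test["diagnostic_value"] - cost_weight * test["burden"]
        if utility > best_utility:
            best_test, best_utility = test, utility
    return best_test

tests = [
    {"name": "CT scan", "diagnostic_value": 0.8, "burden": 0.4,
     "admissible": lambda status: True},
    {"name": "endoscopic ultrasound", "diagnostic_value": 0.9, "burden": 0.5,
     "admissible": lambda status: status.get("fit_for_endoscopy", True)},
]

# A patient unfit for endoscopy is steered toward the CT scan instead.
chosen = select_test(tests, {"fit_for_endoscopy": False})
```

A real facility would derive the values and admissibility conditions from expert knowledge and the staging phase, but the selection step reduces to a comparison of this kind.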
Enhancing surgical internship experiences: The potential of a supporting digital curriculum
Background: Centralization of care jeopardizes interns' learning experiences and necessitates educational changes. Here we present the development and evaluation of a structured digital curriculum, offered in addition to the clinical internship, to address these challenges. Methods: The structured digital curriculum was implemented in the VUmc/Amsterdam UMC surgical internship program in the Netherlands. The curriculum used a modular format built around a skill or clinical condition. Each module included background information, digital elements such as e-learnings and interactive vlogs, and self-assessments. From April 1st to June 30th, 2022, we conducted a mixed-methods evaluation comparing interns' experiences between the conventional and digital curriculum through surveys and interviews. Results: Thirty-nine interns (28.1 %) completed the survey, 17 (24.2 %) from the traditional curriculum and 22 (31.9 %) from the structured blended curriculum. Results from the interviews triangulated and complemented the survey results. Interns appreciated both curricula (course marks 7.4 ± 2.0 vs. 8.1 ± 1.1, P = 0.207). The intervention cohort specifically appreciated the structured and comprehensive presentation of available study materials, which resulted in a sense of empowerment. Conclusions: Integrating a structured digital curriculum to support clinical internships provides interns with comprehensive, readily accessible knowledge, refines their understanding of clinical topics, and results in feelings of empowerment. The combination of clinical and digital education ensures adequate exposure to subjects vital for future doctors, even if clinical exposure is limited. Thus, a structured digital curriculum prepares the intern and helps the internship program to adequately navigate future medical challenges. Key message: Centralization of care jeopardizes interns' learning experiences and necessitates educational changes. A structured digital curriculum can empower interns in this scenario by providing readily accessible knowledge which refines their understanding of clinical topics.
A quality improvement study on how a simulation model can help decision making on organization of ICU wards
Background: Intensive Care Unit (ICU) capacity management is essential to provide high-quality healthcare for critically ill patients. Yet consensus on the most favorable ICU design is lacking, especially on whether ICUs should deliver dedicated or non-dedicated care. The decision for a dedicated or non-dedicated ICU design involves a trade-off between the degree of specialization for individual patient care and the efficient use of resources for society. We aim to share insights from a model simulating capacity effects for different ICU designs. Upon request, this simulation model is available for other ICUs. Methods: A discrete event simulation model was developed and used to study the hypothetical performance of a large university hospital ICU, in terms of occupancy, rejection, and rescheduling rates, for a dedicated and a non-dedicated ICU design in four different scenarios. These scenarios simulate the base case of the local ICU, varying bed capacity levels, the potential effects of a reduced length of stay for a dedicated design, and an unexpected increased inflow of unplanned patients. Results: The simulation model provided insights into the effects of the capacity choices to be made. The non-dedicated ICU design outperformed the dedicated ICU design in terms of efficient use of scarce resources. Conclusions: The choice to use dedicated ICUs affects not only clinical outcomes but also rejection, rescheduling, and occupancy rates. Our analysis of a large university hospital demonstrates how such a model can support decision making on ICU design, in conjunction with other operational characteristics such as staffing and quality management.
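The pooling effect behind this result can be illustrated with a minimal discrete event simulation. The sketch below is not the authors' model: it assumes two invented specialty groups ("A" and "B"), exponential inter-arrival times and lengths of stay, and arbitrary bed counts, and it only counts admissions and rejections. Splitting a fixed number of beds into dedicated pools typically raises the rejection rate compared with one shared (non-dedicated) pool at the same total capacity.

```python
# Minimal discrete-event sketch (illustrative, not the paper's model):
# compare a dedicated design (beds split into fixed specialty pools)
# with a non-dedicated design (one shared pool) at equal total capacity.
import heapq
import random

def simulate(total_beds, dedicated, horizon=10_000, seed=1):
    """Count admitted and rejected patients over the simulated horizon."""
    rng = random.Random(seed)
    pools = ({"A": total_beds // 2, "B": total_beds - total_beds // 2}
             if dedicated else {"shared": total_beds})
    free = dict(pools)            # free beds per pool
    discharges = []               # min-heap of (discharge_time, pool)
    t, admitted, rejected = 0.0, 0, 0
    while t < horizon:
        t += rng.expovariate(1.0)                 # next patient arrival
        while discharges and discharges[0][0] <= t:
            _, pool = heapq.heappop(discharges)   # release finished beds
            free[pool] += 1
        pool = rng.choice(["A", "B"]) if dedicated else "shared"
        if free[pool] > 0:
            free[pool] -= 1
            admitted += 1
            # mean length of stay: 3 time units (assumed)
            heapq.heappush(discharges, (t + rng.expovariate(1 / 3.0), pool))
        else:
            rejected += 1                         # no bed available
    return admitted, rejected

_, rej_shared = simulate(6, dedicated=False)
_, rej_split = simulate(6, dedicated=True)
```

With these assumed rates the shared pool rejects noticeably fewer patients, which is the classic queueing argument for the non-dedicated design; the paper's model additionally tracks occupancy and rescheduling and distinguishes planned from unplanned inflow.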
Eliciting Test-selection Strategies for a Decision-Support System in Oncology
Decision-support systems often include a strategy for selecting tests in their domain of application. Such a strategy serves to support the reasoning processes in the domain. Generally, a test-selection strategy is offered in which tests are selected sequentially. Upon building a system for the domain of oesophageal cancer, however, we felt that a sequential strategy would be an oversimplification of daily practice. To design a test-selection strategy for our system, we therefore decided to acquire knowledge about the actual strategy used by the experts in the domain and, more specifically, about the arguments underlying their strategy. For this purpose, we used an elicitation method composed of an unstructured interview, to gain general insight into the test-selection strategy used, and a subsequent structured interview simulating daily practice, in which full details were acquired. We used the method with two experts in our application domain and found that it closely fitted their daily practice and resulted in a large amount of detailed knowledge.
