Stand-Alone Artificial Intelligence for Breast Cancer Detection in Mammography: Comparison With 101 Radiologists.
BACKGROUND: Artificial intelligence (AI) systems performing at radiologist-like levels in the evaluation of digital mammography (DM) would improve the accuracy and efficiency of breast cancer screening. We aimed to compare the stand-alone performance of an AI system with that of radiologists in detecting breast cancer in DM. METHODS: Nine multi-reader, multi-case study datasets previously used for different research purposes in seven countries were collected. Each dataset consisted of DM exams acquired with systems from four different vendors, multiple radiologists' assessments per exam, and ground truth verified by histopathological analysis or follow-up, yielding a total of 2652 exams (653 malignant) and interpretations by 101 radiologists (28 296 independent interpretations). The AI system analyzed these exams and assigned each a level of suspicion that cancer was present, on a scale from 1 to 10. The detection performance of the radiologists and the AI system was compared under a noninferiority null hypothesis at a margin of 0.05. RESULTS: The performance of the AI system was statistically noninferior to that of the average of the 101 radiologists. The AI system had an area under the ROC curve (AUC) of 0.840 (95% confidence interval [CI] = 0.820 to 0.860), and the average of the radiologists was 0.814 (95% CI = 0.787 to 0.841) (difference 95% CI = -0.003 to 0.055). The AI system had a higher AUC than 61.4% of the radiologists. CONCLUSIONS: The evaluated AI system achieved a cancer detection accuracy comparable to that of an average breast radiologist in this retrospective setting. Although promising, the performance and impact of such a system in a screening setting need further investigation.
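The comparison described in this abstract can be sketched in a few lines: the AUC is the rank (Mann-Whitney) statistic over suspicion scores, and the noninferiority decision reduces to checking the lower confidence bound of the AUC difference against the -0.05 margin. The scores, labels, and helper names below are illustrative assumptions, not study data.

```python
# Minimal sketch of the stand-alone comparison logic, assuming 1-10
# suspicion scores and binary malignancy ground truth. Not study code.

def auc(scores, labels):
    """Rank-based AUC: P(score_pos > score_neg), ties counted as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def noninferior(diff_ci_lower, margin=0.05):
    """AI is noninferior to the readers if the lower CI bound of
    (AUC_ai - AUC_readers) exceeds -margin."""
    return diff_ci_lower > -margin

# Toy example: suspicion scores for eight exams, 1 = malignant.
scores = [9, 7, 8, 3, 2, 5, 1, 4]
labels = [1, 1, 1, 0, 0, 0, 0, 1]
print(auc(scores, labels))       # → 0.9375

# The study reported a difference CI of (-0.003, 0.055) at margin 0.05:
print(noninferior(-0.003))       # → True
```

The noninferiority check mirrors the abstract's conclusion: because -0.003 lies above the -0.05 margin, the AI system's AUC is declared statistically noninferior to the readers' average.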
Advancing Regulatory Science With Computational Modeling for Medical Devices at the FDA's Office of Science and Engineering Laboratories
Protecting and promoting public health is the mission of the U.S. Food and Drug Administration (FDA). FDA's Center for Devices and Radiological Health (CDRH), which regulates medical devices marketed in the U.S., envisions itself as the world's leader in medical device innovation and regulatory science: the development of new methods, standards, and approaches to assess the safety, efficacy, quality, and performance of medical devices. Traditionally, bench testing, animal studies, and clinical trials have been the main sources of evidence for getting medical devices on the market in the U.S. In recent years, however, computational modeling has become an increasingly powerful tool for evaluating medical devices, complementing bench, animal, and clinical methods. Moreover, computational modeling methods are increasingly being used within software platforms, serving as clinical decision support tools, and are being embedded in medical devices. Because of its reach and potential, computational modeling has been identified as a priority by CDRH, and indeed by FDA's leadership. Therefore, the Office of Science and Engineering Laboratories (OSEL), the research arm of CDRH, has committed significant resources to transforming computational modeling from a valuable scientific tool into a valuable regulatory tool, and to developing mechanisms for relying more on digital evidence in place of other evidence. This article introduces the role of computational modeling for medical devices, describes OSEL's ongoing research, and reviews how evidence from computational modeling (i.e., digital evidence) has been used in regulatory submissions by industry to CDRH in recent years. It concludes by discussing the potential future role of computational modeling and digital evidence in medical devices.
Understanding and Mitigating Search Errors in 3D Volumetric Images
In the field of oncology, three-dimensional volumetric medical images provide radiologists with a detailed visual representation of anatomical structures, facilitating the early detection and characterization of malignant lesions, but at the cost of an increased search space. Recent work (Lago et al., 2021) establishes that human observers rely heavily on peripheral visual processing away from the point of fixation when searching for signals in 3D volumetric images. The searcher's over-reliance on peripheral vision interacts strongly with how much of the volume they explore and with how much they report having explored. Specifically, observers under-explore, as measured by the percentage of the volume covered by the Useful Field of View (UFOV), and overestimate the percentage of the volume they explored in self-report measures. Consequently, they miss small signals during the search. This thesis aims to elucidate the psychological factors mediating human under-exploration of 3D volumetric image data. The second thrust of this thesis is to investigate three solutions that mitigate the detrimental impact of under-exploration in 3D images. The first method is a 2D synthetic view (2D-S) of the 3D data that observers can use as additional information when performing the 3D search. I establish, through behavioral measurements and a computational model simulating foveated vision, how the 2D-S guides eye movements to suspicious regions in the 3D volume. In turn, this guidance allows observers to find small signals that would otherwise be missed without the 2D-S adjunct. The second method involves a different type of search aid, a convolutional neural network that acts as a computer-aided detection system to assist human observers during the 3D search. Like the 2D-S, it guides eye movements to suspicious regions in a 3D volumetric image that observers would otherwise not have looked at.
The last method is inspired by the power of group decision-making: it investigates how combining multiple independent judgments from a group of searchers can lead to greater exploration of the search space and a higher chance of detecting the small signal. Together, the body of work herein provides empirical results from laboratory studies to further our understanding of how humans interact with 3D imaging modalities, with the goal of improving healthcare services related to early cancer screening.
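The UFOV-based coverage measure that the abstract describes can be illustrated with a minimal sketch: given a set of fixation points and an assumed spherical UFOV radius, count the fraction of voxels that fall within the UFOV of at least one fixation. The grid size, scan path, and radius below are invented for illustration and are not drawn from the thesis.

```python
# Illustrative sketch (not thesis code): percentage of a 3D volume covered
# by the Useful Field of View (UFOV) along a scan path, assuming the UFOV
# is a sphere of fixed radius around each fixation point.

def ufov_coverage(shape, fixations, radius):
    """Fraction of voxels within `radius` of at least one fixation."""
    nx, ny, nz = shape
    r2 = radius * radius
    covered = 0
    for x in range(nx):
        for y in range(ny):
            for z in range(nz):
                if any((x - fx) ** 2 + (y - fy) ** 2 + (z - fz) ** 2 <= r2
                       for fx, fy, fz in fixations):
                    covered += 1
    return covered / (nx * ny * nz)

# A sparse scan path through a 20x20x20 volume: a handful of fixations
# stacked along one column covers only a small fraction of the volume,
# mirroring the under-exploration the thesis documents.
path = [(5, 5, k) for k in range(0, 20, 4)]
print(round(ufov_coverage((20, 20, 20), path, radius=4), 3))
```

Comparing this objective coverage number against an observer's self-reported exploration is one simple way to quantify the overestimation effect described above.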