
    Stand-Alone Artificial Intelligence for Breast Cancer Detection in Mammography: Comparison With 101 Radiologists.

    BACKGROUND: Artificial intelligence (AI) systems performing at radiologist-like levels in the evaluation of digital mammography (DM) would improve breast cancer screening accuracy and efficiency. We aimed to compare the stand-alone performance of an AI system with that of radiologists in detecting breast cancer in DM.

    METHODS: Nine multi-reader, multi-case study datasets previously used for different research purposes in seven countries were collected. Each dataset consisted of DM exams acquired with systems from four different vendors, multiple radiologists' assessments per exam, and ground truth verified by histopathological analysis or follow-up, yielding a total of 2652 exams (653 malignant) and interpretations by 101 radiologists (28 296 independent interpretations). An AI system analyzed these exams, yielding a level of suspicion that cancer is present on a scale of 1 to 10. The detection performance of the radiologists and the AI system was compared using a noninferiority null hypothesis at a margin of 0.05.

    RESULTS: The performance of the AI system was statistically noninferior to that of the average of the 101 radiologists. The AI system had an area under the ROC curve (AUC) of 0.840 (95% confidence interval [CI] = 0.820 to 0.860), and the average AUC of the radiologists was 0.814 (95% CI = 0.787 to 0.841) (difference 95% CI = -0.003 to 0.055). The AI system had a higher AUC than 61.4% of the radiologists.

    CONCLUSIONS: The evaluated AI system achieved a cancer detection accuracy comparable to that of an average breast radiologist in this retrospective setting. Although promising, the performance and impact of such a system in a screening setting need further investigation.
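    The noninferiority analysis described above can be sketched in a few lines: compute each reader's AUC from case-level suspicion scores, bootstrap a confidence interval for the AI-minus-radiologist AUC difference, and declare noninferiority when the lower 95% bound exceeds the negative margin. This is a minimal illustration, not the paper's actual statistical procedure; the function names, the pure-Python Mann-Whitney AUC, and the paired case-level bootstrap are assumptions for the sketch.

    ```python
    import random

    def auc(scores_pos, scores_neg):
        # ROC AUC equals the Mann-Whitney U statistic divided by
        # (n_pos * n_neg): the probability that a randomly chosen
        # malignant case scores higher than a randomly chosen normal one.
        wins = 0.0
        for p in scores_pos:
            for n in scores_neg:
                if p > n:
                    wins += 1.0
                elif p == n:
                    wins += 0.5  # ties count as half a win
        return wins / (len(scores_pos) * len(scores_neg))

    def noninferiority(ai_pos, ai_neg, rad_pos, rad_neg,
                       margin=0.05, n_boot=2000, seed=0):
        # Bootstrap the AUC difference (AI minus radiologists) by
        # resampling cases; AI and radiologist scores are paired per
        # case, so the same resampled indices are used for both readers.
        rng = random.Random(seed)
        diffs = []
        for _ in range(n_boot):
            idx_p = [rng.randrange(len(ai_pos)) for _ in ai_pos]
            idx_n = [rng.randrange(len(ai_neg)) for _ in ai_neg]
            d = (auc([ai_pos[i] for i in idx_p], [ai_neg[i] for i in idx_n])
                 - auc([rad_pos[i] for i in idx_p], [rad_neg[i] for i in idx_n]))
            diffs.append(d)
        diffs.sort()
        lower = diffs[int(0.025 * n_boot)]  # lower bound of the 95% CI
        # Noninferior if the whole CI lies above -margin.
        return lower > -margin, lower
    ```

    In the study, the margin was 0.05 and the observed difference CI was -0.003 to 0.055; since -0.003 > -0.05, the AI system was declared noninferior to the radiologist average.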

    Advancing Regulatory Science With Computational Modeling for Medical Devices at the FDA's Office of Science and Engineering Laboratories

    Protecting and promoting public health is the mission of the U.S. Food and Drug Administration (FDA). FDA's Center for Devices and Radiological Health (CDRH), which regulates medical devices marketed in the U.S., envisions itself as the world's leader in medical device innovation and regulatory science, the development of new methods, standards, and approaches to assess the safety, efficacy, quality, and performance of medical devices. Traditionally, bench testing, animal studies, and clinical trials have been the main sources of evidence for getting medical devices on the market in the U.S. In recent years, however, computational modeling has become an increasingly powerful tool for evaluating medical devices, complementing bench, animal, and clinical methods. Moreover, computational modeling methods are increasingly being used within software platforms, serving as clinical decision support tools, and are being embedded in medical devices. Because of its reach and huge potential, computational modeling has been identified as a priority by CDRH, and indeed by FDA's leadership. Therefore, the Office of Science and Engineering Laboratories (OSEL), the research arm of CDRH, has committed significant resources to transforming computational modeling from a valuable scientific tool to a valuable regulatory tool, and to developing mechanisms to rely more on digital evidence in place of other evidence. This article introduces the role of computational modeling for medical devices, describes OSEL's ongoing research, and overviews how evidence from computational modeling (i.e., digital evidence) has been used in regulatory submissions by industry to CDRH in recent years. It concludes by discussing the potential future role for computational modeling and digital evidence in medical devices.