Microscope 2.0: An Augmented Reality Microscope with Real-time Artificial Intelligence Integration
The brightfield microscope is instrumental in the visual examination of both
biological and physical samples at sub-millimeter scales. One key clinical
application has been in cancer histopathology, where microscopic assessment
of tissue samples is used for the diagnosis and staging of cancer and thus
guides clinical therapy. However, the interpretation of these samples is
inherently subjective, resulting in significant diagnostic variability.
Moreover, in many regions of the world, access to pathologists is severely
limited due to a lack of trained personnel. In this regard, Artificial
Intelligence (AI)-based tools promise to improve both access to and the quality
of healthcare. However, despite significant advances in AI research, integration
of these tools into real-world cancer diagnosis workflows remains challenging
because of the costs of image digitization and difficulties in deploying AI
solutions. Here we propose a cost-effective solution to the integration of AI:
the Augmented Reality Microscope (ARM). The ARM overlays AI-based information
onto the current view of the sample through the optical pathway in real-time,
enabling seamless integration of AI into the regular microscopy workflow. We
demonstrate the utility of ARM in the detection of lymph node metastases in
breast cancer and the identification of prostate cancer with a latency that
supports real-time workflows. We anticipate that ARM will remove barriers
towards the use of AI in microscopic analysis and thus improve the accuracy and
efficiency of cancer diagnosis. This approach is applicable to other microscopy
tasks and AI algorithms in the life sciences and beyond.
Deep learning-based survival prediction for multiple cancer types using histopathology images
Prognostic information at diagnosis has important implications for cancer
treatment and monitoring. Although cancer staging, histopathological
assessment, molecular features, and clinical variables can provide useful
prognostic insights, improving risk stratification remains an active research
area. We developed a deep learning system (DLS) to predict disease-specific
survival across 10 cancer types from The Cancer Genome Atlas (TCGA). We used a
weakly-supervised approach without pixel-level annotations, and tested three
different survival loss functions. The DLS was developed using 9,086 slides
from 3,664 cases and evaluated using 3,009 slides from 1,216 cases. In
multivariable Cox regression analysis of the combined cohort including all 10
cancers, the DLS was significantly associated with disease-specific survival
(hazard ratio of 1.58, 95% CI 1.28-1.70, p<0.0001) after adjusting for cancer
type, stage, age, and sex. In a per-cancer adjusted subanalysis, the DLS
remained a significant predictor of survival in 5 of 10 cancer types. Compared
to a baseline model including stage, age, and sex, the c-index of the model
demonstrated an absolute 3.7% improvement (95% CI 1.0-6.5) in the combined
cohort. Additionally, our models stratified patients within individual cancer
stages, particularly stage II (p=0.025) and stage III (p<0.001). By developing
and evaluating prognostic models across multiple cancer types, this work
represents one of the most comprehensive studies exploring the direct
prediction of clinical outcomes using deep learning and histopathology images.
Our analysis demonstrates the potential for this approach to provide prognostic
information in multiple cancer types, and even within specific pathologic
stages. However, given the relatively small number of clinical events, we
observed wide confidence intervals, suggesting that future work will benefit
from larger datasets.
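The c-index improvement reported above can be made concrete with a minimal from-scratch sketch of the concordance index on toy data (hypothetical values for illustration only; the study's own implementation may differ, e.g. in how ties and censoring are handled):

```python
from itertools import combinations

def concordance_index(times, events, risk_scores):
    """Fraction of comparable patient pairs whose predicted risk
    ordering agrees with their observed survival ordering."""
    concordant, comparable = 0.0, 0
    for i, j in combinations(range(len(times)), 2):
        # Order the pair so that i is the earlier observed time
        if times[j] < times[i]:
            i, j = j, i
        # A pair is comparable only if the earlier time is an observed event
        if times[i] == times[j] or not events[i]:
            continue
        comparable += 1
        if risk_scores[i] > risk_scores[j]:
            concordant += 1      # higher predicted risk died earlier
        elif risk_scores[i] == risk_scores[j]:
            concordant += 0.5    # tied predictions count as half
    return concordant / comparable

# Toy cohort: predicted risks perfectly ordered with survival times
times  = [5, 10, 15, 20]
events = [1, 1, 0, 1]            # 1 = death observed, 0 = censored
risks  = [0.9, 0.7, 0.5, 0.2]
print(concordance_index(times, events, risks))  # 1.0
```

A c-index of 0.5 corresponds to random predictions and 1.0 to perfect risk ordering, which is why a 3.7% absolute gain over a stage/age/sex baseline is meaningful.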
Deep Learning for Distinguishing Normal versus Abnormal Chest Radiographs and Generalization to Unseen Diseases
Chest radiography (CXR) is the most widely used thoracic clinical imaging
modality and is crucial for guiding the management of cardiothoracic
conditions. The detection of specific CXR findings has been the main focus of
several artificial intelligence (AI) systems. However, the wide range of
possible CXR abnormalities makes it impractical to build specific systems to
detect every possible condition. In this work, we developed and evaluated an AI
system to classify CXRs as normal or abnormal. For development, we used a
de-identified dataset of 248,445 patients from a multi-city hospital network in
India. To assess generalizability, we evaluated our system using 6
international datasets from India, China, and the United States. Of these
datasets, 4 focused on diseases that the AI was not trained to detect: 2
datasets with tuberculosis and 2 datasets with coronavirus disease 2019. Our
results suggest that the AI system generalizes to new patient populations and
abnormalities. In a simulated workflow where the AI system prioritized abnormal
cases, the turnaround time for abnormal cases was reduced by 7-28%. These results
represent an important step towards evaluating whether AI can be safely used to
flag cases in a general setting where previously unseen abnormalities exist.
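The mechanism behind the turnaround-time reduction can be illustrated with a small queue simulation (hypothetical worklist and unit read times, not the study's actual workflow model):

```python
def mean_turnaround(queue, read_time=1.0):
    """Mean completion time of 'abnormal' cases when reports are
    read one at a time, in queue order (fixed read time per case)."""
    finish_times = [(i + 1) * read_time for i in range(len(queue))]
    abnormal = [t for case, t in zip(queue, finish_times) if case == "abnormal"]
    return sum(abnormal) / len(abnormal)

# A worklist in arrival order: abnormal cases scattered through the queue
fifo = ["normal", "abnormal", "normal", "normal", "abnormal", "normal"]
# AI triage: cases flagged abnormal are moved to the front (stable sort)
triaged = sorted(fifo, key=lambda c: c != "abnormal")

before = mean_turnaround(fifo)     # (2 + 5) / 2 = 3.5
after  = mean_turnaround(triaged)  # (1 + 2) / 2 = 1.5
print(f"turnaround reduced by {(before - after) / before:.0%}")
```

In this toy queue the reduction is large because the worklist is short; the 7-28% range reported above reflects realistic worklist sizes and prevalence.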
Artificial Intelligence for Diagnosis and Gleason Grading of Prostate Cancer in Biopsies-Current Status and Next Steps
Diagnosis and Gleason grading of prostate cancer in biopsies are critical for the clinical management of men with prostate cancer. Despite this, the high grading variability among pathologists leads to the potential for under- and overtreatment. Artificial intelligence (AI) systems have shown promise in assisting pathologists to perform Gleason grading, which could help address this problem. In this mini-review, we highlight studies reporting on the development of AI systems for cancer detection and Gleason grading, and discuss the progress needed for widespread clinical implementation, as well as anticipated future developments.
Patient summary: This mini-review summarizes the evidence relating to the validation of artificial intelligence (AI)-assisted cancer detection and Gleason grading of prostate cancer in biopsies, and highlights the remaining steps required prior to its widespread clinical implementation. We found that, although there is strong evidence to show that AI is able to perform Gleason grading on par with experienced uropathologists, more work is needed to ensure the accuracy of results from AI systems in diverse settings across different patient populations, digitization platforms, and pathology laboratories.
Artificial intelligence for diagnosis and Gleason grading of prostate cancer: The PANDA challenge
Through a community-driven competition, the PANDA challenge provides a curated diverse dataset and a catalog of models for prostate cancer pathology, and represents a blueprint for evaluating AI algorithms in digital pathology.
Artificial intelligence (AI) has shown promise for diagnosing prostate cancer in biopsies. However, results have been limited to individual studies, lacking validation in multinational settings. Competitions have been shown to be accelerators for medical imaging innovations, but their impact is hindered by lack of reproducibility and independent validation. With this in mind, we organized the PANDA challenge, the largest histopathology competition to date, joined by 1,290 developers, to catalyze development of reproducible AI algorithms for Gleason grading using 10,616 digitized prostate biopsies. We validated that a diverse set of submitted algorithms reached pathologist-level performance on independent cross-continental cohorts, fully blinded to the algorithm developers. On United States and European external validation sets, the algorithms achieved agreements of 0.862 (quadratically weighted kappa, 95% confidence interval (CI) 0.840-0.884) and 0.868 (95% CI 0.835-0.900) with expert uropathologists. Successful generalization across different patient populations, laboratories and reference standards, achieved by a variety of algorithmic approaches, warrants evaluating AI-based Gleason grading in prospective clinical trials.
Funding: KWF Kankerbestrijding; Netherlands Organization for Scientific Research (NWO); Swedish Research Council European Commission; Swedish Cancer Society; Swedish eScience Research Center; Ake Wiberg Foundation; Prostatacancerforbundet; Academy of Finland; Cancer Foundation Finland; Google Incorporated; MICCAI board challenge working group; Verily Life Sciences; EIT Health; Karolinska Institutet; MICCAI 2020 satellite event team; ERAPerMe
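The quadratically weighted kappa used as the agreement metric above can be computed with a short from-scratch sketch (toy labels for illustration; the challenge's own evaluation code may differ in implementation details):

```python
def quadratic_weighted_kappa(a, b, n_classes):
    """Quadratically weighted Cohen's kappa between two raters'
    integer labels in {0, ..., n_classes - 1}."""
    n = len(a)
    # Observed confusion matrix and per-rater marginal histograms
    obs = [[0] * n_classes for _ in range(n_classes)]
    for x, y in zip(a, b):
        obs[x][y] += 1
    hist_a = [sum(row) for row in obs]
    hist_b = [sum(col) for col in zip(*obs)]
    # Quadratic disagreement weights: (i - j)^2 / (n_classes - 1)^2
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = (i - j) ** 2 / (n_classes - 1) ** 2
            num += w * obs[i][j]                    # observed disagreement
            den += w * hist_a[i] * hist_b[j] / n    # chance disagreement
    return 1.0 - num / den

# Perfect agreement between raters gives kappa = 1
print(quadratic_weighted_kappa([0, 1, 2, 3], [0, 1, 2, 3], 4))  # 1.0
```

The quadratic weights penalize large grade discrepancies (e.g. Gleason grade group 1 vs 5) much more heavily than adjacent-grade disagreements, which suits ordinal grading scales.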
Artificial Intelligence in a Structurally Unjust Society
Increasing concerns have been raised regarding artificial intelligence (AI) bias, and in response, efforts have been made to pursue AI fairness. In this paper, we argue that the idea of structural injustice serves as a helpful framework for clarifying the ethical concerns surrounding AI bias, including the nature of its moral problem and the responsibility for addressing it, and for reconceptualizing the approach to pursuing AI fairness. Using AI in health care as a case study, we argue that AI bias is a form of structural injustice that exists when AI systems interact with other social factors to exacerbate existing social inequalities, making some groups of people more vulnerable to undeserved burdens while conferring unearned benefits on others. The goal of AI fairness, understood this way, is to pursue a more just social structure through the development and use of AI systems when appropriate. We further argue that all participating agents in the unjust social structure associated with AI bias bear a shared responsibility to join collective action with the goal of reforming the social structure, and we provide a list of practical recommendations for agents in various social positions to contribute to this collective action.