1,743 research outputs found
Weakly Supervised Universal Fracture Detection in Pelvic X-rays
Hip and pelvic fractures are serious injuries with life-threatening
complications. However, diagnostic errors of fractures in pelvic X-rays (PXRs)
are very common, driving the demand for computer-aided diagnosis (CAD)
solutions. A major challenge lies in the fact that fractures are localized
patterns that require localized analyses. Unfortunately, the PXRs residing in
hospital picture archiving and communication systems (PACS) do not typically
specify regions of interest (ROIs). In this paper, we propose a two-stage hip and pelvic
fracture detection method that executes localized fracture classification using
weakly supervised ROI mining. The first stage uses a large capacity
fully-convolutional network, i.e., deep with high levels of abstraction, in a
multiple instance learning setting to automatically mine probable true positive
and definite hard negative ROIs from the whole PXR in the training data. The
second stage trains a smaller capacity model, i.e., shallower and more
generalizable, with the mined ROIs to perform localized analyses to classify
fractures. During inference, our method detects hip and pelvic fractures in one
pass by chaining the probability outputs of the two stages together. We
evaluate our method on 4,410 PXRs, reporting an area under the ROC curve value
of 0.975, the highest among state-of-the-art fracture detection methods.
Moreover, we show that our two-stage approach can perform comparably to human
physicians (even outperforming emergency physicians and surgeons), in a
preliminary reader study of 23 readers.
Comment: MICCAI 2019 (early accept)
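At inference, the abstract states only that the two stages' probability outputs are "chained"; a minimal sketch of one plausible chaining rule (multiplying the stage-1 ROI-mining probability by the stage-2 localized classification probability, then max-pooling over ROIs as is standard in multiple instance learning) is shown below. The product rule and max aggregation are assumptions for illustration, not the paper's confirmed formulation.

```python
import numpy as np

def chain_stage_probs(p_stage1, p_stage2):
    """Chain per-ROI probabilities from the two stages.

    p_stage1: stage-1 (MIL ROI-mining network) probability per candidate ROI
    p_stage2: stage-2 (localized classifier) probability per candidate ROI
    The element-wise product used here is an assumption; the abstract only
    says the two stages' probability outputs are chained together.
    """
    return np.asarray(p_stage1, dtype=float) * np.asarray(p_stage2, dtype=float)

def image_level_score(chained_roi_scores):
    # Max-pool ROI scores into one fracture probability per X-ray,
    # a common multiple-instance aggregation choice (assumed, not stated).
    return float(np.max(chained_roi_scores))

scores = chain_stage_probs([0.9, 0.2], [0.8, 0.1])
print(image_level_score(scores))  # ~0.72: the strongest chained ROI dominates
```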
Artificial intelligence in fracture detection: a systematic review and meta-analysis
Background: Patients with fractures are a common emergency presentation and may be misdiagnosed at radiologic imaging. An increasing number of studies apply artificial intelligence (AI) techniques to fracture detection as an adjunct to clinician diagnosis.
Purpose: To perform a systematic review and meta-analysis comparing the diagnostic performance in fracture detection between AI and clinicians in peer-reviewed publications and the gray literature (ie, articles published on preprint repositories).
Materials and Methods: A search of multiple electronic databases between January 2018 and July 2020 (updated June 2021) was performed that included any primary research studies that developed and/or validated AI for the purposes of fracture detection at any imaging modality and excluded studies that evaluated image segmentation algorithms. Meta-analysis with a hierarchical model to calculate pooled sensitivity and specificity was used. Risk of bias was assessed by using a modified Prediction Model Study Risk of Bias Assessment Tool, or PROBAST, checklist.
Results: Included for analysis were 42 studies, with 115 contingency tables extracted from 32 studies (55,061 images). Thirty-seven studies identified fractures on radiographs and five studies identified fractures on CT images. For internal validation test sets, the pooled sensitivity was 92% (95% CI: 88, 93) for AI and 91% (95% CI: 85, 95) for clinicians, and the pooled specificity was 91% (95% CI: 88, 93) for AI and 92% (95% CI: 89, 92) for clinicians. For external validation test sets, the pooled sensitivity was 91% (95% CI: 84, 95) for AI and 94% (95% CI: 90, 96) for clinicians, and the pooled specificity was 91% (95% CI: 81, 95) for AI and 94% (95% CI: 91, 95) for clinicians. There were no statistically significant differences between clinician and AI performance. There were 22 of 42 (52%) studies that were judged to have high risk of bias. Meta-regression identified multiple sources of heterogeneity in the data, including risk of bias and fracture type.
Conclusion: Artificial intelligence (AI) and clinicians had comparable reported diagnostic performance in fracture detection, suggesting that AI technology holds promise as a diagnostic adjunct in future clinical practice.
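The review pools sensitivity and specificity from per-study 2x2 contingency tables using a hierarchical model. As a much simpler illustration of what those quantities measure, the sketch below naively aggregates tables; it is not the bivariate hierarchical model the review actually fits, which also produces the reported 95% CIs.

```python
def pooled_sens_spec(tables):
    """Naively pool sensitivity/specificity across 2x2 contingency tables.

    tables: list of (TP, FP, FN, TN) tuples, one per study.
    Simple count aggregation is used here for illustration only; the
    meta-analysis itself uses a hierarchical model, which handles
    between-study heterogeneity that this naive pooling ignores.
    """
    tp = sum(t[0] for t in tables)
    fp = sum(t[1] for t in tables)
    fn = sum(t[2] for t in tables)
    tn = sum(t[3] for t in tables)
    sensitivity = tp / (tp + fn)  # fraction of true fractures detected
    specificity = tn / (tn + fp)  # fraction of non-fractures correctly cleared
    return sensitivity, specificity

# Hypothetical counts from two studies, for demonstration only.
sens, spec = pooled_sens_spec([(90, 8, 10, 92), (45, 6, 5, 44)])
print(round(sens, 3), round(spec, 3))  # 0.9 0.907
```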
Structured Landmark Detection via Topology-Adapting Deep Graph Learning
Image landmark detection aims to automatically identify the locations of
predefined fiducial points. Despite recent success in this field,
higher-ordered structural modeling to capture implicit or explicit
relationships among anatomical landmarks has not been adequately exploited. In
this work, we present a new topology-adapting deep graph learning approach for
accurate anatomical facial and medical (e.g., hand, pelvis) landmark detection.
The proposed method constructs graph signals leveraging both local image
features and global shape features. The adaptive graph topology naturally
explores and lands on task-specific structures which are learned end-to-end
with two Graph Convolutional Networks (GCNs). Extensive experiments are
conducted on three public facial image datasets (WFLW, 300W, and COFW-68) as
well as three real-world X-ray medical datasets (Cephalometric (public), Hand
and Pelvis). Quantitative results, compared with previous state-of-the-art
approaches across all studied datasets, indicate superior performance in
both robustness and accuracy. Qualitative visualizations of the learned graph
topologies demonstrate physically plausible connectivity lying behind the
landmarks.
Comment: Accepted to ECCV-20. Camera-ready with supplementary material
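The abstract's "graph signals" propagated by Graph Convolutional Networks follow, at their core, a normalized neighborhood-aggregation rule. The sketch below shows one standard GCN propagation step (the Kipf-Welling rule) over a tiny landmark graph; the paper's actual two-GCN architecture, adaptive topology, and local/global feature construction are not reproduced here.

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution step: normalized adjacency x features x weights.

    This is the standard symmetric-normalized GCN propagation rule, used
    here only to illustrate how landmark graph signals are aggregated;
    the adjacency below is fixed, whereas the paper learns the topology.
    """
    a_hat = adj + np.eye(adj.shape[0])           # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(deg ** -0.5)
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt     # symmetric normalization
    return np.maximum(a_norm @ feats @ weight, 0.0)  # linear map + ReLU

# Tiny example: 3 landmarks connected in a chain, 2-dim signals per node.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
feats = np.random.default_rng(0).normal(size=(3, 2))
weight = np.eye(2)  # identity weights keep the example deterministic in shape
out = gcn_layer(adj, feats, weight)
print(out.shape)  # (3, 2): one smoothed feature vector per landmark
```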