Structured Landmark Detection via Topology-Adapting Deep Graph Learning
Image landmark detection aims to automatically identify the locations of
predefined fiducial points. Despite recent success in this field,
higher-order structural modeling to capture implicit or explicit
relationships among anatomical landmarks has not been adequately exploited. In
this work, we present a new topology-adapting deep graph learning approach for
accurate anatomical facial and medical (e.g., hand, pelvis) landmark detection.
The proposed method constructs graph signals leveraging both local image
features and global shape features. The adaptive graph topology naturally
explores and lands on task-specific structures which are learned end-to-end
with two Graph Convolutional Networks (GCNs). Extensive experiments are
conducted on three public facial image datasets (WFLW, 300W, and COFW-68) as
well as three real-world X-ray medical datasets (Cephalometric (public), Hand
and Pelvis). Quantitative comparisons with previous state-of-the-art
approaches across all studied datasets indicate superior performance in
both robustness and accuracy. Qualitative visualizations of the learned graph
topologies demonstrate physically plausible connectivity lying behind the
landmarks.
Comment: Accepted to ECCV-20. Camera-ready with supplementary material.
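As an illustration of the abstract's core idea, the following is a minimal, hypothetical sketch (not the authors' implementation) of a single GCN layer operating on graph signals that concatenate local image features with a global shape descriptor. The array shapes, the chain adjacency, and the function name `gcn_layer` are all invented for this toy example:

```python
import numpy as np

def gcn_layer(X, A, W):
    """One graph-convolution step: symmetrically normalized adjacency
    times node features times a weight matrix, followed by ReLU."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization
    return np.maximum(0, A_norm @ X @ W)

# Toy graph: 4 landmarks. Each node's signal concatenates a local
# image-feature vector (3-d) with a shared global shape descriptor (2-d).
local_feats = np.random.rand(4, 3)
global_shape = np.tile(np.random.rand(1, 2), (4, 1))
X = np.concatenate([local_feats, global_shape], axis=1)  # shape (4, 5)

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)  # chain connectivity
W = np.random.rand(5, 8)                   # learnable weights (random here)

H = gcn_layer(X, A, W)
print(H.shape)  # (4, 8)
```

In the paper the adjacency itself is learned end-to-end; here it is fixed purely to keep the sketch self-contained.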
Deep learning for cephalometric landmark detection: systematic review and meta-analysis
Objectives: Deep learning (DL) has been increasingly employed for automated landmark detection, e.g., for cephalometric purposes. We performed a systematic review and meta-analysis to assess the accuracy and underlying evidence for DL for cephalometric landmark detection on 2-D and 3-D radiographs.
Methods: Diagnostic accuracy studies published in 2015-2020 in Medline/Embase/IEEE/arXiv and employing DL for cephalometric landmark detection were identified and extracted by two independent reviewers. Random-effects meta-analysis, subgroup, and meta-regression were performed, and study quality was assessed using QUADAS-2. The review was registered (PROSPERO no. 227498).
Data: From 321 identified records, 19 studies (published 2017-2020), all employing convolutional neural networks, mainly on 2-D lateral radiographs (n=15), using data from publicly available datasets (n=12) and testing the detection of a mean of 30 (SD: 25; range: 7-93) landmarks, were included. The reference test was established by two experts (n=11), one expert (n=4), three experts (n=3), and a set of annotators (n=1). Risk of bias was high, and applicability concerns were detected for most studies, mainly regarding the data selection and reference test conduct. Landmark prediction error centered around the 2-mm error threshold (mean deviation: -0.581 mm; 95% CI: -1.264 to 0.102 mm). The proportion of landmarks detected within this 2-mm threshold was 0.799 (95% CI: 0.770 to 0.824).
Conclusions: DL shows relatively high accuracy for detecting landmarks on cephalometric imagery. The overall body of evidence is consistent but suffers from high risk of bias. Demonstrating robustness and generalizability of DL for landmark detection is needed.
Clinical significance: Existing DL models show consistent and largely high accuracy for automated detection of cephalometric landmarks. The majority of studies so far focused on 2-D imagery; data on 3-D imagery are sparse, but promising. Future studies should focus on demonstrating the generalizability, robustness, and clinical usefulness of DL for this objective.
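The 2-mm threshold reported above is the standard success criterion in cephalometric benchmarking. Here is a small sketch of how such a proportion-within-threshold (often called the success detection rate) can be computed; the pixel spacing, coordinates, and function name are purely illustrative:

```python
import numpy as np

def success_detection_rate(pred, gt, threshold_mm=2.0, pixels_per_mm=10.0):
    """Fraction of landmarks whose predicted position lies within a
    radial threshold (here 2 mm) of the ground-truth annotation."""
    dists_mm = np.linalg.norm(pred - gt, axis=1) / pixels_per_mm
    return float((dists_mm <= threshold_mm).mean())

# Three toy landmarks in pixel coordinates, with assumed 10 px/mm spacing.
gt = np.array([[100.0, 120.0], [300.0, 80.0], [250.0, 400.0]])
pred = gt + np.array([[5.0, 5.0],    # ~0.71 mm error -> within threshold
                      [30.0, 0.0],   #  3.0 mm error  -> outside threshold
                      [0.0, 12.0]])  #  1.2 mm error  -> within threshold

sdr = success_detection_rate(pred, gt)
print(sdr)  # 2 of 3 landmarks within 2 mm
```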
CEPHA29: Automatic Cephalometric Landmark Detection Challenge 2023
Quantitative cephalometric analysis is the most widely used clinical and
research tool in modern orthodontics. Accurate localization of cephalometric
landmarks enables the quantification and classification of anatomical
abnormalities; however, manually marking these landmarks is a very tedious
job. Endeavours have constantly been made to develop automated cephalometric
landmark detection systems, but they remain inadequate for orthodontic
applications. The fundamental reason is that the number of publicly available
datasets, as well as the images they provide for training, is insufficient for
an AI model to perform well. To facilitate
the development of robust AI solutions for morphometric analysis, we organise
the CEPHA29 Automatic Cephalometric Landmark Detection Challenge in conjunction
with IEEE International Symposium on Biomedical Imaging (ISBI 2023). In this
context, we provide the largest known publicly available dataset, consisting of
1000 cephalometric X-ray images. We hope that our challenge will not only
drive forward research and innovation in automatic cephalometric landmark
identification but will also signal the beginning of a new era in the
discipline.
EchoGLAD: Hierarchical Graph Neural Networks for Left Ventricle Landmark Detection on Echocardiograms
The functional assessment of the left ventricle chamber of the heart requires
detecting four landmark locations and measuring the internal dimension of the
left ventricle and the approximate mass of the surrounding muscle. The key
challenge of automating this task with machine learning is the sparsity of
clinical labels, i.e., only a few landmark pixels in a high-dimensional image
are annotated, leading many prior works to heavily rely on isotropic label
smoothing. However, such a label smoothing strategy ignores the anatomical
information of the image and induces some bias. To address this challenge, we
introduce an echocardiogram-based, hierarchical graph neural network (GNN) for
left ventricle landmark detection (EchoGLAD). Our main contributions are: 1) a
hierarchical graph representation learning framework for multi-resolution
landmark detection via GNNs; 2) induced hierarchical supervision at different
levels of granularity using a multi-level loss. We evaluate our model on a
public and a private dataset under the in-distribution (ID) and
out-of-distribution (OOD) settings. For the ID setting, we achieve the
state-of-the-art mean absolute errors (MAEs) of 1.46 mm and 1.86 mm on the two
datasets. Our model also shows better OOD generalization than prior works with
a testing MAE of 4.3 mm.
Comment: To be published in MICCAI 2023.
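The multi-level loss named in contribution 2) can be sketched as a weighted sum of per-resolution losses, so that coarse grids provide auxiliary supervision for the finest one. The MSE choice, the weights, and the toy heatmap pyramid below are assumptions for illustration, not the paper's exact loss:

```python
import numpy as np

def level_loss(pred, target):
    """Per-level mean-squared error between predicted and target
    landmark heatmaps."""
    return float(np.mean((pred - target) ** 2))

def multi_level_loss(preds_by_level, targets_by_level, weights):
    """Weighted sum of losses across grid resolutions: each level of
    the hierarchy contributes its own supervised term."""
    return sum(w * level_loss(p, t)
               for w, p, t in zip(weights, preds_by_level, targets_by_level))

# Three grid resolutions of a toy heatmap pyramid (coarse -> fine).
rng = np.random.default_rng(0)
targets = [rng.random((4, 4)), rng.random((8, 8)), rng.random((16, 16))]
preds = [t + 0.1 for t in targets]  # uniform 0.1 error at every level

loss = multi_level_loss(preds, targets, weights=[0.25, 0.5, 1.0])
print(round(loss, 4))
```

Weighting the finest level most heavily reflects that the coarse levels exist to guide, not replace, pixel-accurate localization.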
Cross-Task Representation Learning for Anatomical Landmark Detection
Recently, there has been an increasing demand for automatically detecting
anatomical landmarks, which provide rich structural information to facilitate
subsequent medical image analysis. Current methods related to this task often
leverage the power of deep neural networks, while a major challenge in
fine-tuning such models in medical applications arises from an insufficient number of
labeled samples. To address this, we propose to regularize the knowledge
transfer across source and target tasks through cross-task representation
learning. The proposed method is demonstrated for extracting facial anatomical
landmarks which facilitate the diagnosis of fetal alcohol syndrome. The source
and target tasks in this work are face recognition and landmark detection,
respectively. The main idea of the proposed method is to retain the feature
representations of the source model on the target task data, and to leverage
them as an additional source of supervisory signals for regularizing the target
model learning, thereby improving its performance under limited training
samples. Concretely, we present two approaches for the proposed representation
learning by constraining either final or intermediate model features on the
target model. Experimental results on a clinical face image dataset demonstrate
that the proposed approach works well with few labeled samples and outperforms
the other compared approaches.
Comment: MICCAI-MLMI 2020.
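The feature-retention idea (penalizing the target model for drifting away from the frozen source model's representations) can be sketched as an auxiliary L2 term added to the task loss. The function name, the λ value, and the toy feature matrices are hypothetical, and a real implementation would compare learned network activations rather than constants:

```python
import numpy as np

def regularized_loss(task_loss, feat_target, feat_source, lam=0.1):
    """Target-task loss plus a retention penalty that keeps the target
    model's features close to the frozen source-model features, acting
    as an extra supervisory signal when labeled data is scarce."""
    retention = np.mean((feat_target - feat_source) ** 2)
    return float(task_loss + lam * retention)

# Toy features for a batch of 4 samples, 16-d each.
feat_source = np.full((4, 16), 0.5)  # frozen source (face-recognition) model
feat_target = np.full((4, 16), 0.7)  # current target (landmark) model

loss = regularized_loss(task_loss=1.25,
                        feat_target=feat_target,
                        feat_source=feat_source,
                        lam=0.1)
print(loss)
```

The same penalty can be applied to either final or intermediate features, matching the two variants the abstract describes.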