Explainable cardiac pathology classification on cine MRI with motion characterization by semi-supervised learning of apparent flow
We propose a method to classify cardiac pathology based on a novel approach
to extract image derived features to characterize the shape and motion of the
heart. An original semi-supervised learning procedure, which makes efficient
use of a large amount of non-segmented images and a small amount of images
segmented manually by experts, is developed to generate pixel-wise apparent
flow between two time points of a 2D+t cine MRI image sequence. Combining the
apparent flow maps and cardiac segmentation masks, we obtain a local apparent
flow corresponding to the 2D motion of myocardium and ventricular cavities.
This leads to the generation of time series of the radius and thickness of
myocardial segments to represent cardiac motion. These time series of motion
features are reliable and explainable characteristics of pathological cardiac
motion. Furthermore, they are combined with shape-related features to classify
cardiac pathologies. Using only nine feature values as input, we propose an
explainable, simple and flexible model for pathology classification. On the
ACDC training and testing sets, the model achieves classification accuracies
of 95% and 94%, respectively; its performance is hence comparable to that of
the state of the art. Comparison with various other models is performed to
outline some advantages of our model.
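The abstract does not enumerate the nine features. As a hedged illustration of how motion features might be derived from the radius and thickness time series it describes, here is a minimal sketch; the feature definitions and toy values below are assumptions, not the paper's actual features.

```python
# Hypothetical sketch: deriving simple motion features from per-frame
# myocardial thickness and cavity radius time series. The exact nine
# features used by the paper are not given in the abstract.

def fractional_change(series):
    """Peak-to-trough fractional change of a time series."""
    lo, hi = min(series), max(series)
    return (hi - lo) / hi

# Toy time series over one cardiac cycle (arbitrary units, assumed values).
thickness = [8.0, 9.5, 11.0, 10.0, 8.5]    # myocardial segment thickness
radius    = [30.0, 24.0, 20.0, 23.0, 29.0]  # ventricular cavity radius

features = {
    "thickening": fractional_change(thickness),
    "radial_shortening": fractional_change(radius),
}
print(features)
```

Features of this kind are directly explainable: each value corresponds to a named, clinically interpretable quantity rather than a learned latent dimension.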
Patient-specific in silico 3D coronary model in cardiac catheterisation laboratories
Coronary artery disease, one of the leading causes of death worldwide, is caused by the buildup of atherosclerotic plaque in the coronary arteries, which restricts the blood supply to the heart. X-ray coronary angiography is the most common procedure for diagnosing coronary artery disease; it uses contrast material and X-rays to visualise vascular lesions. With this type of procedure, blood flow in the coronary arteries is viewed in real time, making it possible to detect stenoses precisely and to guide percutaneous coronary interventions and stent insertions. Angiograms of coronary arteries are used to plan the necessary revascularisation procedures based on the assessment of occlusions and the affected segments. However, their interpretation in cardiac catheterisation laboratories presently relies on sequentially evaluating multiple 2D image projections, which limits measuring lesion severity, identifying the true shape of vessels, and analysing quantitative data. In silico modelling, which involves computational simulations of patient-specific data, can revolutionise interventional cardiology by providing valuable insights and optimising treatment methods. This paper explores the challenges and future directions associated with applying patient-specific in silico models in catheterisation laboratories. We discuss the implications of the lack of patient-specific in silico models and how their absence hinders the ability to accurately predict and assess the behaviour of individual patients during interventional procedures. We then introduce the different components of a typical patient-specific in silico model and explore potential future directions to bridge this gap and promote the development and use of patient-specific in silico models in catheterisation laboratories.
DeepMesh: Mesh-based Cardiac Motion Tracking using Deep Learning
3D motion estimation from cine cardiac magnetic resonance (CMR) images is
important for the assessment of cardiac function and the diagnosis of
cardiovascular diseases. Current state-of-the-art methods focus on estimating
dense pixel-/voxel-wise motion fields in image space, which ignores the fact
that motion estimation is only relevant and useful within the anatomical
objects of interest, e.g., the heart. In this work, we model the heart as a 3D
mesh consisting of epi- and endocardial surfaces. We propose a novel learning
framework, DeepMesh, which propagates a template heart mesh to a subject space
and estimates the 3D motion of the heart mesh from CMR images for individual
subjects. In DeepMesh, the heart mesh of the end-diastolic frame of an
individual subject is first reconstructed from the template mesh. Mesh-based 3D
motion fields with respect to the end-diastolic frame are then estimated from
2D short- and long-axis CMR images. By developing a differentiable
mesh-to-image rasterizer, DeepMesh is able to leverage 2D shape information
from multiple anatomical views for 3D mesh reconstruction and mesh motion
estimation. The proposed method estimates vertex-wise displacement and thus
maintains vertex correspondences between time frames, which is important for
the quantitative assessment of cardiac function across different subjects and
populations. We evaluate DeepMesh on CMR images acquired from the UK Biobank.
We focus on 3D motion estimation of the left ventricle in this work.
Experimental results show that the proposed method quantitatively and
qualitatively outperforms other image-based and mesh-based cardiac motion
tracking methods.
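A minimal sketch of the vertex-wise displacement representation described above (an assumption about the mechanism, not the DeepMesh code): because motion is expressed as a per-vertex displacement field added to the end-diastolic mesh, vertex index i always maps to the same anatomical location, so correspondences across time frames hold by construction.

```python
# Sketch (assumed mechanism): mesh motion as per-vertex 3D displacements
# applied to an end-diastolic (ED) template-derived mesh.

def deform(vertices, displacements):
    """Apply per-vertex 3D displacements; vertex i stays vertex i."""
    return [(x + dx, y + dy, z + dz)
            for (x, y, z), (dx, dy, dz) in zip(vertices, displacements)]

ed_mesh = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]   # toy ED vertices
motion  = [(0.1, 0.0, 0.0), (0.0, -0.2, 0.0)]  # toy displacement field
frame_t = deform(ed_mesh, motion)
print(frame_t)  # same vertex ordering as ed_mesh
```

This is why vertex-wise displacement is preferable to dense image-space flow for longitudinal and cross-subject comparison: quantities such as regional strain can be computed from tracked vertices without re-establishing correspondences.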
Contactless Electrocardiogram Monitoring with Millimeter Wave Radar
The electrocardiogram (ECG) has always been an important biomedical test to
diagnose cardiovascular diseases. Current approaches for ECG monitoring are
based on body attached electrodes leading to uncomfortable user experience.
Therefore, contactless ECG monitoring has drawn tremendous attention but
remains unsolved. In fact, cardiac electrical and mechanical activities are
coupled in a well-coordinated pattern. In this paper, we achieve contactless
ECG monitoring by bridging cardiac mechanical and electrical activity.
Specifically, we develop a millimeter-wave radar system to contactlessly
measure cardiac mechanical activity and reconstruct the ECG without any
physical contact. To measure the cardiac mechanical activity comprehensively, we
propose a series of signal processing algorithms to extract 4D cardiac motions
from radio frequency (RF) signals. Furthermore, we design a deep neural network
to solve the cardiac related domain transformation problem and achieve
end-to-end reconstruction mapping from RF input to the ECG output. The
experimental results show that our contactless ECG measurements achieve timing
accuracy for cardiac electrical events with a median error below 14 ms, and
morphology accuracy with a median Pearson correlation of 90% and a median
root-mean-square error of 0.081 mV compared to the ground-truth ECG. These
results indicate that the system enables contactless, continuous and accurate
ECG monitoring.
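As a hedged illustration of how radar signals encode cardiac mechanical motion, the standard phase-to-displacement relation for a millimeter-wave radar can be sketched as follows. This is textbook radar interferometry, not the paper's full 4D motion pipeline, and the carrier wavelength below is an assumed value.

```python
import math

# Standard relation: a round-trip path change of d metres shifts the
# received phase by dphi = 4*pi*d / lambda, so d = lambda * dphi / (4*pi).
# Sub-wavelength chest-wall motion is therefore resolvable from phase.

def phase_to_displacement(dphi_rad, wavelength_m):
    """Displacement (m) from unwrapped phase change (rad)."""
    return wavelength_m * dphi_rad / (4 * math.pi)

wavelength = 3.9e-3  # assumed ~77 GHz radar -> ~3.9 mm wavelength
dphi = 0.5           # radians of unwrapped phase change (toy value)
print(phase_to_displacement(dphi, wavelength))
```

The deep network described above then has to solve the harder domain-transformation step, mapping such mechanical motion signals to the electrical ECG waveform.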
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Label Efficient Deep Learning in Medical Imaging
Recent state-of-the-art deep learning frameworks require large, fully annotated training datasets that are, depending on the objective, time-consuming to generate.
While in most fields, these labelling tasks can be parallelized massively or even outsourced, this is not the case for medical images.
Usually, only a highly trained expert is able to generate these datasets.
However, since additional manual annotation, especially for the purpose of segmentation or tracking, is typically not part of a radiologist's workflow, large and fully annotated datasets are scarce.
In this context, a variety of frameworks are proposed in this work to solve the problems that arise due to the lack of annotated training data across different medical imaging tasks and modalities.
The first contribution as part of this thesis was to investigate weakly supervised learning on PET/CT data for the task of lesion segmentation.
Using only class labels (tumor vs. no tumor), a classifier was first trained and subsequently used to generate Class Activation Maps highlighting regions with lesions.
Based on these region proposals, final tumor segmentation could be performed with high accuracy in clinically relevant metrics.
This drastically simplifies the process of training data generation, as only class labels have to be assigned to each slice of a scan instead of a full pixel-wise segmentation.
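The Class Activation Map step described above can be sketched as follows. This is the generic CAM computation (weighting the last convolutional feature maps by the classifier weights of the target class); the shapes and weights are toy values, not the thesis's network.

```python
# Generic Class Activation Map: CAM(x, y) = sum_k w_k * f_k(x, y),
# where f_k are the last-layer feature maps and w_k the classifier
# weights for the class of interest (here, "tumor" -- an assumption).

def class_activation_map(feature_maps, class_weights):
    """Weighted sum of 2D feature maps, one weight per channel."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for fmap, wk in zip(feature_maps, class_weights):
        for y in range(h):
            for x in range(w):
                cam[y][x] += wk * fmap[y][x]
    return cam

fmaps = [[[0.0, 1.0], [0.0, 2.0]],   # channel 0 (toy activations)
         [[1.0, 0.0], [3.0, 0.0]]]   # channel 1
cam = class_activation_map(fmaps, [0.5, 1.0])
print(cam)  # high values act as lesion region proposals
```

Thresholding such a map yields the region proposals from which the final segmentation is refined, which is why slice-level class labels suffice as supervision.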
To further reduce the time required to prepare training data, two self-supervised methods were investigated for the task of anatomical tissue segmentation and landmark detection.
To this end, as a second contribution, a state-of-the-art tracking framework based on contrastive random walks was transferred, adapted and extended to the medical imaging domain.
As contrastive learning often lacks real-time capability, a self-supervised template matching network was developed to address the task of real-time anatomical tissue tracking, yielding the third contribution of this work.
Both methods have in common that the object or region of interest is defined only at inference time, reducing the number of required labels to as few as one and allowing adaptation to different tasks without re-training or access to the original training data.
Despite the limited amount of labelled data, good results could be achieved for both tracking of organs across subjects as well as tissue tracking within time-series.
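As an illustration of template-matching-based tracking, here is a plain normalized cross-correlation search in 1D. This is an assumed mechanism for exposition: the thesis's network learns the matching function, whereas this sketch uses the classical hand-crafted similarity.

```python
import math

# Classical template matching: slide the template over the signal and
# return the offset with maximal normalized cross-correlation (NCC).

def ncc(a, b):
    """Normalized cross-correlation of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a)
                    * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def track(template, frame):
    """Offset in `frame` that best matches `template`."""
    n = len(template)
    scores = [ncc(template, frame[i:i + n])
              for i in range(len(frame) - n + 1)]
    return max(range(len(scores)), key=scores.__getitem__)

template = [1.0, 3.0, 1.0]                     # region picked at inference
frame = [0.0, 0.0, 1.0, 3.0, 1.0, 0.0]         # next time frame
print(track(template, frame))  # -> 2
```

Note how the template is supplied only at inference time: nothing task-specific is baked into the matcher, which is the property the thesis exploits to avoid re-training per task.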
State-of-the-art self-supervised learning in medical imaging is usually performed on 2D slices due to the lack of training data and limited computational resources.
To exploit the three-dimensional structure of this type of data, self-supervised contrastive learning was performed on entire volumes using over 40,000 whole-body MRI scans, forming the fourth contribution.
Due to this pre-training, a large number of downstream tasks could be successfully addressed using only limited labelled data.
Furthermore, the learned representations allow the entire dataset to be visualized in a two-dimensional view.
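The contrastive pre-training objective can be sketched with a generic InfoNCE-style loss: embeddings of two views of the same volume are pulled together while other volumes' embeddings are pushed apart. The loss family is an assumption (the abstract does not name it), and the embeddings below are toy 2D vectors rather than volume encodings.

```python
import math

# Generic InfoNCE-style contrastive loss over cosine similarities.
# Low loss when anchor and positive are close relative to negatives.

def info_nce(anchor, positive, negatives, temperature=0.1):
    def dot(u, v): return sum(a * b for a, b in zip(u, v))
    def sim(u, v):
        return dot(u, v) / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)))

    logits = [sim(anchor, positive) / temperature] + \
             [sim(anchor, n) / temperature for n in negatives]
    m = max(logits)  # subtract max for numerical stability
    denom = sum(math.exp(l - m) for l in logits)
    # Cross-entropy with the positive pair as the "correct class".
    return -(logits[0] - m - math.log(denom))

anchor   = [1.0, 0.0]
positive = [0.9, 0.1]                 # augmented view of the same volume
negs     = [[0.0, 1.0], [-1.0, 0.2]]  # other volumes in the batch
print(info_nce(anchor, positive, negs))
```

After pre-training with such an objective, the encoder's representations can be reused for downstream tasks with limited labels, as described above.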
To encourage research in the field of automated lesion segmentation in PET/CT image data, the autoPET challenge was organized, which represents the fifth contribution.