Informative sample generation using class-aware generative adversarial networks for classification of chest X-rays
Training robust deep learning (DL) systems for disease detection from medical
images is challenging due to the limited number of images covering different
disease types and severities. The problem is especially acute where there is
severe class imbalance. We propose an active learning (AL) framework that uses
a Bayesian neural network to select the most informative samples for training
our model. The informative samples are then used within a novel class-aware
generative adversarial network (CAGAN) to generate realistic chest X-ray images
for data augmentation by transferring characteristics from one class label to
another. Experiments show our proposed AL framework achieves state-of-the-art
performance using only a fraction of the full dataset, thus saving significant
time and effort over conventional methods.
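The abstract does not specify the acquisition function, but a common Bayesian choice consistent with the description is to rank unlabelled samples by the predictive entropy of Monte Carlo dropout forward passes. A minimal NumPy sketch; the pool size, number of passes `T`, and toy logits are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def predictive_entropy(mc_probs):
    """Entropy of the mean predictive distribution over T stochastic
    forward passes. mc_probs has shape (T, N, C)."""
    mean_p = mc_probs.mean(axis=0)                        # (N, C)
    return -(mean_p * np.log(mean_p + 1e-12)).sum(axis=1)

def select_informative(mc_probs, k):
    """Indices of the k pool samples the model is least certain about."""
    scores = predictive_entropy(mc_probs)
    return np.argsort(scores)[-k:][::-1]

# Toy pool: 6 unlabelled images, 3 classes, 10 MC-dropout passes.
T, N, C = 10, 6, 3
logits = rng.normal(size=(T, N, C))
logits[:, 0, 0] += 5.0          # sample 0: model is very confident
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

picked = select_informative(probs, k=2)   # excludes the confident sample
```

The confidently classified sample has low predictive entropy, so it is never selected; annotation effort is spent only where the Bayesian model is uncertain.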
Can a single image processing algorithm work equally well across all phases of DCE-MRI?
Image segmentation and registration are challenging when applied to dynamic
contrast-enhanced MRI (DCE-MRI) sequences. The contrast agent causes rapid
intensity changes in the region of interest and elsewhere, which can lead to
false positive predictions in segmentation tasks and confound the similarity
metric used for image registration. While it is widely assumed that contrast
changes increase the difficulty of these tasks, to our knowledge no work has
quantified these effects. In this paper we examine the effect of training with
different ratios of contrast-enhanced (CE) data on two popular tasks:
segmentation, with nnU-Net and Mask R-CNN, and registration, with VoxelMorph
and VTN. We experimented further by strategically using the available datasets
through pretraining and fine-tuning with different splits of the data. We found
that, to create a generalisable model, pretraining with CE data and fine-tuning
with non-CE data gave the best results. This finding could be extended to other
deep-learning-based image processing tasks with DCE-MRI and provide significant
improvements to model performance.
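As a concrete illustration of the protocol, the pretrain/fine-tune strategy amounts to drawing training pools with a controlled fraction of CE frames. A toy sketch in plain Python; the identifiers and pool sizes are invented stand-ins for DCE-MRI frames, not the paper's datasets:

```python
import random

def make_split(ce_images, nce_images, ce_ratio, n_total, seed=0):
    """Draw a training set with a given fraction of contrast-enhanced (CE)
    frames; the remainder comes from non-CE frames."""
    rng = random.Random(seed)
    n_ce = round(ce_ratio * n_total)
    return rng.sample(ce_images, n_ce) + rng.sample(nce_images, n_total - n_ce)

# Hypothetical image identifiers standing in for DCE-MRI frames.
ce  = [f"ce_{i}"  for i in range(100)]
nce = [f"nce_{i}" for i in range(100)]

pretrain_set = make_split(ce, nce, ce_ratio=1.0, n_total=50)  # pretrain: CE only
finetune_set = make_split(ce, nce, ce_ratio=0.0, n_total=50)  # fine-tune: non-CE
```

Sweeping `ce_ratio` over intermediate values reproduces the kind of ratio experiment the abstract describes.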
Generalized Zero Shot Learning For Medical Image Classification
In many real-world medical image classification settings we do not have
access to samples of all possible disease classes, yet a robust system is
expected to perform well in recognizing novel test data. We propose a
generalized zero-shot learning (GZSL) method that uses self-supervised learning
(SSL) for: 1) selecting anchor vectors of different disease classes; and 2)
training a feature generator. Our approach does not require class attribute
vectors, which are available for natural images but not for medical images. SSL
ensures that the anchor vectors are representative of each class, and it is
also used to generate synthetic features of unseen classes. Using a simpler
architecture, our method matches a state-of-the-art SSL-based GZSL method on
natural images and outperforms all methods on medical images. Our method is
adaptable enough to accommodate class attribute vectors when they are
available, as they are for natural images.
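The paper's feature generator is not reproduced here, but the anchor-vector idea, one representative embedding per class against which test features are matched, can be sketched in a few lines of NumPy. The toy embeddings and cluster centres below are assumptions standing in for SSL features:

```python
import numpy as np

rng = np.random.default_rng(1)

def class_anchors(features, labels):
    """One anchor vector per class: the mean of its (SSL) feature embeddings."""
    classes = np.unique(labels)
    return classes, np.stack([features[labels == c].mean(axis=0)
                              for c in classes])

def nearest_anchor(x, classes, anchors):
    """Assign a test feature to the class whose anchor is closest."""
    d = np.linalg.norm(anchors - x, axis=1)
    return classes[np.argmin(d)]

# Toy embeddings: two seen classes clustered around distinct centres.
feats = np.concatenate([rng.normal(0, 0.1, (20, 8)),
                        rng.normal(3, 0.1, (20, 8))])
labels = np.array([0] * 20 + [1] * 20)

classes, anchors = class_anchors(feats, labels)
pred = nearest_anchor(rng.normal(3, 0.1, 8), classes, anchors)  # near class 1
```

In the GZSL setting, anchors for unseen classes would come from synthetic features produced by the generator rather than from labelled samples.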
Groupwise Non-Rigid Registration with Deep Learning: An Affordable Solution Applied to 2D Cardiac Cine MRI Reconstruction
Groupwise (GW) image registration is customarily used for subsequent processing in medical imaging. However, it is computationally expensive due to the repeated calculation of transformations and gradients. In this paper, we propose a deep learning (DL) architecture that achieves GW elastic registration of a 2D dynamic sequence on an affordable average GPU. Our solution, referred to as dGW, is a simplified version of the well-known U-net. In our GW solution, the image to which the other images are registered, referred to in the paper as the template image, is obtained iteratively together with the registered images. Design and evaluation were carried out using 2D cine cardiac MR slices from two databases consisting of 89 and 41 subjects, respectively. The first database was used for training and validation with a 66.6–33.3% split. The second was used for validation (50%) and testing (50%). Additional network hyperparameters, which are, in essence, those that control the degree of transformation smoothness, are obtained by means of a forward selection procedure. Our results show a 9-fold runtime reduction with respect to an optimization-based implementation; in addition, using the well-known structural similarity (SSIM) index, we found significant differences between dGW and an alternative DL solution based on VoxelMorph.
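The iterative template scheme the abstract describes, alternating between registering every image to the current template and re-estimating the template from the registered images, can be illustrated with a 1D toy in which "registration" is just an integer circular shift. This is a deliberate simplification, not the dGW network:

```python
import numpy as np

def best_shift(signal, template):
    """Integer circular shift that best aligns `signal` to `template`
    (argmax of circular cross-correlation)."""
    corr = [np.dot(np.roll(signal, s), template) for s in range(len(signal))]
    return int(np.argmax(corr))

def groupwise_register(signals, n_iter=5):
    """Alternate between (1) registering each signal to the template and
    (2) recomputing the template as the mean of the registered signals."""
    template = signals.mean(axis=0)
    for _ in range(n_iter):
        shifts = [best_shift(s, template) for s in signals]
        registered = np.stack([np.roll(s, sh)
                               for s, sh in zip(signals, shifts)])
        template = registered.mean(axis=0)
    return registered, template

# Toy "sequence": the same 1D profile under different circular shifts.
base = np.exp(-0.5 * ((np.arange(64) - 20) / 3.0) ** 2)
signals = np.stack([np.roll(base, k) for k in (0, 5, 11, -7)])

registered, template = groupwise_register(signals)
```

After a few iterations all signals coincide and the template sharpens to the common profile; in dGW the integer shift search is replaced by a learned elastic transformation.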
The Probabilistic Active Shape Model: From Model Construction to Flexible Medical Image Segmentation
Automatic processing of three-dimensional image data acquired with computed tomography or magnetic resonance imaging plays an increasingly important role in medicine. For example, the automatic
segmentation of anatomical structures in tomographic images makes it possible to generate three-dimensional visualizations of a patient’s anatomy and thereby supports surgeons during the planning of various kinds of
surgeries.
Because organs in medical images often exhibit low contrast to adjacent structures, and because the image quality may be hampered by noise or other image acquisition artifacts, the development of segmentation algorithms that are both robust and accurate is very challenging. To increase robustness, the use of model-based algorithms is essential, such as algorithms that incorporate prior knowledge about an organ’s shape into the segmentation process. Recent research has shown that Statistical Shape Models are especially appropriate for robust medical image segmentation. In these models, the typical shape of an organ is learned from a set of training examples. However, Statistical Shape Models have two major disadvantages: the construction of the models is relatively difficult, and the models are often used too restrictively, such that the resulting segmentation does not delineate the organ exactly.
This thesis addresses both problems. The first part introduces new methods for establishing correspondence between training shapes, a necessary prerequisite for shape model learning. The developed methods include consistent parameterization algorithms for organs with spherical and genus-1 topology, as well as a non-rigid mesh registration algorithm for shapes with arbitrary topology. The second part presents a new shape-model-based segmentation algorithm that allows for an accurate delineation of organs. In contrast to existing approaches, not only linear but also nonlinear shape models can be integrated into the algorithm; the latter allow for a more specific description of an organ’s shape variation.
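A linear Statistical Shape Model of the kind discussed above is essentially a PCA of corresponding landmark vectors: segmentation stays robust because candidate shapes are projected onto the learned subspace of plausible variation. A minimal NumPy sketch with an invented toy dataset (circles of varying radius, so there is exactly one true mode of variation):

```python
import numpy as np

rng = np.random.default_rng(2)

def build_shape_model(shapes, n_modes):
    """Linear Statistical Shape Model: mean shape plus principal modes of
    variation learned from corresponding training shapes (one per row)."""
    mean = shapes.mean(axis=0)
    _, s, vt = np.linalg.svd(shapes - mean, full_matrices=False)
    return mean, vt[:n_modes]            # modes are rows of vt

def project(shape, mean, modes):
    """Constrain an arbitrary shape to the model subspace."""
    b = modes @ (shape - mean)           # mode coefficients
    return mean + modes.T @ b

# Toy training set: circles of varying radius, sampled at 16 landmarks
# with known correspondence (x-coordinates, then y-coordinates).
t = np.linspace(0, 2 * np.pi, 16, endpoint=False)
shapes = np.stack([np.concatenate([r * np.cos(t), r * np.sin(t)])
                   for r in rng.uniform(0.8, 1.2, 30)])

mean, modes = build_shape_model(shapes, n_modes=1)
noisy = shapes[0] + rng.normal(0, 0.05, 32)   # corrupted candidate shape
fitted = project(noisy, mean, modes)          # back to a plausible circle
```

Projection discards the off-model part of the noise, which is exactly the restrictiveness the thesis relaxes by admitting nonlinear shape models.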
The proposed segmentation algorithm is evaluated in three applications to medical image data: liver and vertebra segmentation in contrast-enhanced computed tomography scans, and prostate segmentation in magnetic resonance images.