SkinNet: A Deep Learning Framework for Skin Lesion Segmentation
There has been a steady increase in the incidence of skin cancer worldwide,
with a high rate of mortality. Early detection and segmentation of skin lesions
are crucial for timely diagnosis and treatment, necessary to improve the
survival rate of patients. However, skin lesion segmentation is a challenging
task due to the low contrast of lesions and their high similarity, in terms of
appearance, to healthy tissue. This underlines the need for an accurate and
automatic approach for skin lesion segmentation. To tackle this issue, we
propose a convolutional neural network (CNN) called SkinNet. The proposed CNN
is a modified version of U-Net. We compared the performance of our approach
with other state-of-the-art techniques, using the ISBI 2017 challenge dataset.
Our approach outperformed the others in terms of the Dice coefficient, Jaccard
index and sensitivity, evaluated on the held-out challenge test data set,
across 5-fold cross-validation experiments. SkinNet achieved average values
of 85.10%, 76.67% and 93.0% for the DC, JI and SE, respectively. Comment: 2 pages, submitted to NSS/MIC 201
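The three reported metrics are standard overlap measures; a minimal sketch of how they are computed for binary segmentation masks (illustrative NumPy, not the paper's code):

```python
import numpy as np

def dice(pred, gt):
    """Dice coefficient (DC) between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def jaccard(pred, gt):
    """Jaccard index (JI): intersection over union."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

def sensitivity(pred, gt):
    """Sensitivity (SE): true-positive rate over the lesion pixels."""
    tp = np.logical_and(pred, gt).sum()
    return tp / gt.sum()

pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
gt   = np.array([[1, 1, 0], [0, 1, 1]], dtype=bool)
print(dice(pred, gt))         # 2*3/(3+4) = 6/7
print(jaccard(pred, gt))      # 3/4
print(sensitivity(pred, gt))  # 3/4
```

Note that DC and JI are monotonically related, which is why methods are typically ranked consistently by both.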
A Probabilistic Framework for Statistical Shape Models and Atlas Construction: Application to Neuroimaging
Accurate and reliable registration of shapes and multi-dimensional point sets describing the morphology/physiology of anatomical structures is a prerequisite for constructing statistical shape models (SSMs) and atlases. Such statistical descriptions of variability across populations (regarding shape or other morphological/physiological quantities) are based on homologous correspondences across the multiple samples that comprise the training data. The notion of exact correspondence can be ambiguous when these data contain noise and outliers, missing data, or significant and abnormal variations due to pathology. Yet these phenomena are common in medical image-derived data, due, for example, to inconsistencies in image quality and acquisition protocols, the presence of motion artefacts, differences in pre-processing steps, and inherent variability across patient populations and demographics. This thesis therefore focuses on formulating a unified probabilistic framework for the registration of shapes and so-called generalised point sets, which is robust to the anomalies and variations described.
Statistical analysis of shapes across large cohorts demands automatic generation of training sets (image segmentations delineating the structure of interest), as manual and semi-supervised approaches can be prohibitively time consuming. However, automated segmentation and landmarking of images often result in shapes with high levels of outliers and missing data. Consequently, a robust method for registration and correspondence estimation is required. A probabilistic group-wise registration framework for point-based representations of shapes, based on Student's t-mixture model (TMM), and a multi-resolution extension to the same (mrTMM) are formulated to this end. The frameworks exploit the inherent robustness of Student's t-distributions to outliers, which is lacking in existing Gaussian mixture model (GMM)-based approaches. The registration accuracy of the proposed approaches was quantitatively evaluated and shown to outperform the state-of-the-art, using synthetic and clinical data. A corresponding improvement in the quality of SSMs generated subsequently was also shown, particularly for data sets containing high levels of noise. In general, the proposed approach requires fewer user-specified parameters than existing methods, whilst affording much improved robustness to outliers.
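The robustness the TMM exploits can be illustrated with the EM weighting rule of the Student's t-distribution: each point receives a latent precision weight u = (nu + d) / (nu + m), where m is its squared Mahalanobis distance to the current mean, so outliers are downweighted in the mean update, whereas a Gaussian (nu approaching infinity) weights all points equally. The sketch below uses hypothetical 1-D data and is not the thesis code:

```python
import numpy as np

def t_weights(x, mu, sigma2, nu, d=1):
    # EM latent weights for a Student's t-distribution (1-D case):
    # small weight for points far from the current mean.
    m = (x - mu) ** 2 / sigma2          # squared Mahalanobis distance
    return (nu + d) / (nu + m)

x = np.array([0.1, -0.2, 0.0, 0.3, 10.0])   # last point is an outlier
mu, sigma2, nu = 0.0, 0.25, 3.0
u = t_weights(x, mu, sigma2, nu)

mu_robust = np.sum(u * x) / np.sum(u)   # weighted mean update (M-step)
mu_gauss = x.mean()                     # Gaussian MLE, dragged by the outlier
print(u[-1])                            # the outlier's weight is tiny
print(mu_robust, mu_gauss)              # robust mean stays near zero
```

The same downweighting mechanism carries over to the multivariate group-wise setting used for shape registration.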
Registration of generalised point sets, which combine disparate features such as spatial positions, directional/axial data, and scalar-valued quantities, was studied next. A hybrid mixture model (HMM), combining different types of probability distributions, was formulated to facilitate the joint registration and clustering of multi-dimensional point sets of this nature. Two variants of the HMM were developed for modelling: (1) axial data; and (2) directional data. The former, based on a combination of Student's t, Watson and Gaussian distributions, was used to register hybrid point sets comprising magnetic resonance diffusion tensor image (DTI)-derived quantities, such as voxel spatial positions (defining a region/structure of interest), associated fibre orientations, and scalar measures reflecting tissue anisotropy. The latter, meanwhile, formulated using a combination of Student's t and von Mises-Fisher distributions, was used for the registration of shapes represented as hybrid point sets comprising spatial positions and associated surface normal vectors. The Watson variant of the HMM facilitates statistical analysis and group-wise comparisons of DTI data across patient populations, presented as an exemplar application of the proposed approach. The Fisher variant of the HMM, on the other hand, was used to register hybrid representations of shapes, providing substantial improvements over point-based registration approaches in terms of anatomical validity in the estimated correspondences.
A Conditional Flow Variational Autoencoder for Controllable Synthesis of Virtual Populations of Anatomy
The generation of virtual populations (VPs) of anatomy is essential for
conducting in silico trials of medical devices. Typically, the generated VP
should capture sufficient variability while remaining plausible and should
reflect the specific characteristics and demographics of the patients observed
in real populations. In several applications, it is desirable to synthesise
virtual populations in a controlled manner, where relevant covariates
are used to conditionally synthesise virtual populations that fit a specific
target population/characteristics. We propose to equip a conditional
variational autoencoder (cVAE) with normalising flows to boost the flexibility
and complexity of the approximate posterior learnt, leading to enhanced
flexibility for controllable synthesis of VPs of anatomical structures. We
demonstrate the performance of our conditional flow VAE using a data set of
cardiac left ventricles acquired from 2360 patients, with associated
demographic information and clinical measurements (used as
covariates/conditional information). The results obtained indicate the
superiority of the proposed method for conditional synthesis of virtual
populations of cardiac left ventricles relative to a cVAE. Conditional
synthesis performance was evaluated in terms of generalisation and specificity
errors and in terms of the ability to preserve clinically relevant biomarkers
in synthesised VPs, that is, the left ventricular blood pool and myocardial
volume, relative to the real observed population. Comment: Accepted at MICCAI 202
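The normalising-flow component can be illustrated with a single planar-flow step in the standard parameterisation f(z) = z + u tanh(w·z + b), whose log-determinant correction keeps the transformed posterior a valid density. This NumPy sketch is illustrative only and not the paper's implementation:

```python
import numpy as np

def planar_flow(z, u, w, b):
    # One planar normalising-flow step applied row-wise to samples z.
    a = np.tanh(z @ w + b)                 # (N,) activations
    f = z + np.outer(a, u)                 # transformed samples
    psi = np.outer(1 - a ** 2, w)          # (N, D) gradient term
    log_det = np.log(np.abs(1 + psi @ u))  # per-sample log|det Jacobian|
    return f, log_det

rng = np.random.default_rng(0)
z = rng.standard_normal((5, 2))            # samples from the base posterior
u, w, b = np.array([0.5, -0.3]), np.array([1.0, 0.2]), 0.1
f, log_det = planar_flow(z, u, w, b)
# Stacking several such steps yields the richer approximate posterior
# that a plain cVAE's diagonal Gaussian cannot express.
```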
Partially Conditioned Generative Adversarial Networks
Generative models are undoubtedly a hot topic in Artificial Intelligence,
among which the most common type is Generative Adversarial Networks (GANs).
These architectures let one synthesise artificial datasets by implicitly
modelling the underlying probability distribution of a real-world training
dataset. With the introduction of Conditional GANs and their variants, these
methods were extended to generating samples conditioned on ancillary
information available for each sample within the dataset. From a practical
standpoint, however, one might desire to generate data conditioned on partial
information. That is, only a subset of the ancillary conditioning variables
might be of interest when synthesising data. In this work, we argue that
standard Conditional GANs are not suitable for such a task and propose a new
Adversarial Network architecture and training strategy to deal with the ensuing
problems. Experiments illustrating the value of the proposed approach in digit
and face image synthesis under partial conditioning information are presented,
showing that the proposed method can effectively outperform the standard
approach under these circumstances. Comment: 10 pages, 9 figures
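One common way to encode partial conditioning, sketched below with hypothetical names, is to zero out the unselected covariates and append a binary observation mask, so the network can distinguish "conditioned on 0" from "not conditioned at all". This illustrates the problem setting, not the paper's architecture:

```python
import numpy as np

def partial_condition(c, observed_mask):
    # Build a conditioning input from a full covariate vector c and a
    # 0/1 mask of which covariates are actually being conditioned on.
    c = np.asarray(c, dtype=float)
    m = np.asarray(observed_mask, dtype=float)
    return np.concatenate([c * m, m])

c = np.array([3.0, 0.0, 7.5])   # full covariate vector
mask = np.array([1, 0, 0])      # condition only on the first covariate
cond = partial_condition(c, mask)
print(cond)  # -> [3. 0. 0. 1. 0. 0.]
```

Without the appended mask, a generator given `[3.0, 0.0, 0.0]` could not tell whether the last two covariates were observed as zero or simply unspecified, which is one source of the problems the paper argues standard Conditional GANs face.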
A Generative Shape Compositional Framework to Synthesise Populations of Virtual Chimaeras
Generating virtual populations of anatomy that capture sufficient variability while remaining plausible is essential for conducting in-silico trials of medical devices. However, not all anatomical shapes of interest are always available for each individual in a population. Hence, missing/partially-overlapping anatomical information is often available across individuals in a population. We introduce a generative shape model for complex anatomical structures, learnable from unpaired datasets. The proposed generative model can synthesise complete multi-part shape assemblies, coined virtual chimaeras, as opposed to natural human chimaeras. We applied this framework to build virtual chimaeras from databases of whole-heart shape assemblies that each contribute samples for heart substructures. Specifically, we propose a generative shape compositional framework which comprises two components - a part-aware generative shape model which captures the variability in shape observed for each structure of interest in the training population; and a spatial composition network which assembles/composes the structures synthesised by the former into multi-part shape assemblies (viz. virtual chimaeras). We also propose a novel self-supervised learning scheme that enables the spatial composition network to be trained with partially overlapping data and weak labels. We trained and validated our approach using shapes of cardiac structures derived from cardiac magnetic resonance images available in the UK Biobank. Our approach significantly outperforms a PCA-based shape model (trained with complete data) in terms of generalisability and specificity. This demonstrates the superiority of the proposed approach, as the synthesised cardiac virtual populations are more plausible and capture a greater degree of variability in shape than those generated by the PCA-based shape model.
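The PCA-based baseline and the generalisability/specificity metrics mentioned above can be sketched as follows (random stand-in data; not the paper's pipeline). Generalisation measures how well the model rebuilds unseen shapes; specificity measures how close random model samples lie to real training shapes:

```python
import numpy as np

rng = np.random.default_rng(1)
train = rng.standard_normal((50, 20))     # 50 training shapes, 10 2-D points each
held_out = rng.standard_normal((5, 20))   # unseen shapes

mean = train.mean(axis=0)
_, sv, Vt = np.linalg.svd(train - mean, full_matrices=False)
k = 5                                      # number of retained shape modes
modes = Vt[:k]
sd = sv[:k] / np.sqrt(len(train) - 1)      # per-mode standard deviations

def reconstruct(shape):
    # Project a shape onto the retained modes and rebuild it.
    b = (shape - mean) @ modes.T
    return mean + b @ modes

# Generalisation error: reconstruction error on unseen shapes (lower = better).
gen = np.mean([np.linalg.norm(s - reconstruct(s)) for s in held_out])

# Specificity error: distance from random model samples to the nearest
# real training shape (lower = more plausible samples).
samples = mean + (rng.standard_normal((100, k)) * sd) @ modes
spec = np.mean([np.min(np.linalg.norm(train - smp, axis=1)) for smp in samples])
print(gen, spec)
```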
CAR-Net: Unsupervised Co-Attention Guided Registration Network for Joint Registration and Structure Learning
Image registration is a fundamental building block for various applications in medical image analysis. To better explore the correlation between the fixed and moving images and improve registration performance, we propose a novel deep learning network, the Co-Attention guided Registration Network (CAR-Net). CAR-Net employs a co-attention block to learn a new representation of the inputs, which drives the registration of the fixed and moving images. Experiments on UK Biobank cardiac cine-magnetic resonance image data demonstrate that CAR-Net obtains higher registration accuracy and smoother deformation fields than state-of-the-art unsupervised registration methods, while achieving comparable or better registration performance than corresponding weakly-supervised variants. In addition, our approach can provide critical structural information about the input fixed and moving images simultaneously, in a completely unsupervised manner.
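The core of a co-attention block is cross-attention between fixed- and moving-image features: locations in the moving image attend to similar locations in the fixed image, yielding a correlation-aware representation that drives registration. The following NumPy sketch is a simplified illustration, not CAR-Net itself:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feats, k_feats, v_feats):
    # Scaled dot-product attention of query features over key/value features.
    d = q_feats.shape[-1]
    scores = q_feats @ k_feats.T / np.sqrt(d)   # pairwise similarity
    attn = softmax(scores, axis=-1)             # each row sums to 1
    return attn @ v_feats

rng = np.random.default_rng(2)
fixed = rng.standard_normal((16, 8))    # 16 spatial locations, 8 channels
moving = rng.standard_normal((16, 8))
attended = cross_attention(moving, fixed, fixed)  # moving attends to fixed
# `attended` mixes fixed-image features according to their similarity to
# each moving-image location - the correlation signal a registration
# decoder can then turn into a deformation field.
```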
A Generative Shape Compositional Framework: Towards Representative Populations of Virtual Heart Chimaeras
Generating virtual populations of anatomy that capture sufficient variability
while remaining plausible is essential for conducting in-silico trials of
medical devices. However, not all anatomical shapes of interest are always
available for each individual in a population. Hence,
missing/partially-overlapping anatomical information is often available across
individuals in a population. We introduce a generative shape model for complex
anatomical structures, learnable from unpaired datasets. The proposed
generative model can synthesise complete multi-part shape assemblies, coined
virtual chimaeras, as opposed to natural human chimaeras. We
applied this framework to build virtual chimaeras from databases of whole-heart
shape assemblies that each contribute samples for heart substructures.
Specifically, we propose a generative shape compositional framework which
comprises two components - a part-aware generative shape model which captures
the variability in shape observed for each structure of interest in the
training population; and a spatial composition network which assembles/composes
the structures synthesised by the former into multi-part shape assemblies (viz.
virtual chimaeras). We also propose a novel self-supervised learning scheme
that enables the spatial composition network to be trained with partially
overlapping data and weak labels. We trained and validated our approach using
shapes of cardiac structures derived from cardiac magnetic resonance images
available in the UK Biobank. Our approach significantly outperforms a PCA-based
shape model (trained with complete data) in terms of generalisability and
specificity. This demonstrates the superiority of the proposed approach as the
synthesised cardiac virtual populations are more plausible and capture a
greater degree of variability in shape than those generated by the PCA-based
shape model. Comment: 15 pages, 4 figures