Conditional Random Fields and Supervised Learning in Automated Skin Lesion Diagnosis
Many subproblems in automated skin lesion diagnosis (ASLD) can
be unified under a single generalization of assigning a label, from a predefined
set, to each pixel in an image. We first formalize this generalization
and then present two probabilistic models capable of solving it. The first
model is based on independent pixel labeling using maximum a-posteriori
(MAP) estimation. The second model is based on conditional random
fields (CRFs), where dependencies between pixels are defined using a
graph structure. Furthermore, we demonstrate how supervised learning
and an appropriate training set can be used to automatically determine
all model parameters. We evaluate both models' ability to segment a
challenging dataset consisting of 116 images and compare our results to
5 previously published methods.
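The first model described above labels each pixel independently by MAP estimation. Purely as an illustration of that idea (the paper's actual per-pixel features and class parameters are not given here, so the Gaussian likelihoods below are an assumption), a minimal sketch:

```python
import numpy as np

def map_label_pixels(features, means, variances, priors):
    """Independently assign each pixel the MAP label.

    features : (H, W) array of a scalar per-pixel feature (e.g. intensity)
    means, variances, priors : per-class parameters, each of length K
    Returns an (H, W) array of label indices in [0, K).
    """
    feats = features[..., None]  # (H, W, 1), broadcasts against K classes
    # Gaussian log-likelihood per class plus log-prior (Bayes' rule,
    # dropping the constant evidence term).
    log_post = (-0.5 * (feats - means) ** 2 / variances
                - 0.5 * np.log(2 * np.pi * variances)
                + np.log(priors))
    return np.argmax(log_post, axis=-1)  # per-pixel MAP estimate
```

A CRF model would add pairwise terms over a neighborhood graph on top of these unary posteriors, so that labels of adjacent pixels are encouraged to agree.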
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Going Deep in Medical Image Analysis: Concepts, Methods, Challenges and Future Directions
Medical Image Analysis is currently experiencing a paradigm shift due to Deep
Learning. This technology has recently attracted so much interest from the
Medical Imaging community that it led to a specialized conference, 'Medical
Imaging with Deep Learning', in 2018. This article surveys the recent
developments in this direction, and provides a critical review of the related
major aspects. We organize the reviewed literature according to the underlying
Pattern Recognition tasks, and further sub-categorize it following a taxonomy
based on human anatomy. This article does not assume prior knowledge of Deep
Learning and makes a significant contribution in explaining the core Deep
Learning concepts to the non-experts in the Medical community. Unique to this
study is the Computer Vision/Machine Learning perspective taken on the advances
of Deep Learning in Medical Imaging. This enables us to single out `lack of
appropriately annotated large-scale datasets' as the core challenge (among
other challenges) in this research direction. We draw on insights from the
sister research fields of Computer Vision, Pattern Recognition and Machine
Learning, where techniques for dealing with such challenges have already
matured, to provide promising directions for the Medical Imaging
community to fully harness Deep Learning in the future.
Generative Adversarial Networks based Skin Lesion Segmentation
Skin cancer is a serious condition that requires accurate identification and
treatment. One way to assist clinicians in this task is by using computer-aided
diagnosis (CAD) tools that can automatically segment skin lesions from
dermoscopic images. To this end, a new adversarial learning-based framework
called EGAN has been developed. This framework uses an unsupervised generative
network to generate accurate lesion masks. It consists of a generator module
with a top-down squeeze excitation-based compound scaled path and an asymmetric
lateral connection-based bottom-up path, and a discriminator module that
distinguishes between original and synthetic masks. Additionally, a
morphology-based smoothing loss is implemented to encourage the network to
create smooth semantic boundaries of lesions. The framework is evaluated on the
International Skin Imaging Collaboration (ISIC) Lesion Dataset 2018 and
outperforms the current state-of-the-art skin lesion segmentation approaches
with a Dice coefficient, Jaccard similarity, and Accuracy of 90.1%, 83.6%, and
94.5%, respectively. This represents a 2% increase in Dice coefficient, a 1%
increase in Jaccard index, and a 1% increase in Accuracy over the previous
state of the art.
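The three metrics reported above are standard overlap and agreement measures for binary segmentation masks. A self-contained sketch of how they are typically computed (not taken from the EGAN code):

```python
import numpy as np

def segmentation_scores(pred, target):
    """Dice coefficient, Jaccard index, and pixel accuracy for binary masks.

    pred, target : same-shape arrays of 0/1 (or boolean) values.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2.0 * inter / (pred.sum() + target.sum())  # 2|A∩B| / (|A|+|B|)
    jaccard = inter / union                            # |A∩B| / |A∪B|
    accuracy = (pred == target).mean()                 # fraction of pixels correct
    return dice, jaccard, accuracy
```

Note that Dice and Jaccard are monotonically related (Dice = 2J / (1 + J)), which is why papers usually report gains on both together.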
Stacked fully convolutional networks with multi-channel learning: application to medical image segmentation
The automated segmentation of regions of interest (ROIs) in medical imaging is a fundamental requirement for deriving high-level semantics for image analysis in clinical decision support systems. Traditional segmentation approaches, such as region-based methods, depend heavily upon hand-crafted features and a priori knowledge of the user. As such, these methods are difficult to adopt within a clinical environment. Recently, methods based on fully convolutional networks (FCN) have achieved great success in the segmentation of general images. FCNs leverage a large labeled dataset to hierarchically learn the features that best correspond to the shallow appearance as well as the deep semantics of the images. However, when applied to medical images, FCNs usually produce coarse ROI detection and poor boundary definitions, primarily due to the limited amount of labeled training data and limited constraints on label agreement among neighboring similar pixels. In this paper, we propose a new stacked FCN architecture with multi-channel learning (SFCN-ML). We embed the FCN in a stacked architecture to learn the foreground ROI features and background non-ROI features separately, and then integrate these different channels to produce the final segmentation result. In contrast to traditional FCN methods, our SFCN-ML architecture enables the visual attributes and semantics derived from both the foreground and background channels to be iteratively learned and inferred. We conducted extensive experiments on three public datasets with a variety of visual challenges. Our results show that our SFCN-ML is more effective and robust than a routine FCN and its variants, and other state-of-the-art methods.
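The abstract does not spell out how the foreground and background channels are integrated. Purely as a hypothetical illustration of the general idea of fusing two separately learned probability maps (this is not the SFCN-ML integration step), one simple scheme treats the channels as independent evidence and renormalizes:

```python
import numpy as np

def fuse_channels(p_fg, p_bg, eps=1e-8):
    """Fuse a foreground-channel and a background-channel probability map.

    p_fg : per-pixel probability of belonging to the ROI (foreground channel)
    p_bg : per-pixel probability of belonging to the background channel
    A pixel is labeled foreground when the foreground channel is confident
    AND the background channel is not, so the two channels cross-check
    each other at ambiguous pixels.
    """
    fg_evidence = p_fg * (1.0 - p_bg)
    bg_evidence = p_bg * (1.0 - p_fg)
    p = fg_evidence / (fg_evidence + bg_evidence + eps)
    return (p > 0.5).astype(np.uint8)
```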
Exploring variability in medical imaging
Although recent successes of deep learning and novel machine learning techniques have improved the
performance of classification and (anomaly) detection in computer vision problems, applying these
methods in medical imaging pipelines remains a very challenging task. One of the main reasons for this
is the amount of variability that is encountered and encapsulated in human anatomy and subsequently
reflected in medical images. This fundamental factor impacts most stages in modern medical imaging
processing pipelines.
Variability of human anatomy makes it virtually impossible to build large datasets for each disease
with labels and annotation for fully supervised machine learning. An efficient way to cope with this is
to try and learn only from normal samples. Such data is much easier to collect. A case study of such
an automatic anomaly detection system based on normative learning is presented in this work. We
present a framework for detecting fetal cardiac anomalies during ultrasound screening using generative
models, which are trained only utilising normal/healthy subjects.
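The normative-learning principle above, fit a model to normal/healthy data only and flag inputs the model cannot explain, can be sketched without the paper's generative networks. As a deliberately simplified stand-in (PCA as the "normative" model, with reconstruction error as the anomaly score), assuming flattened feature vectors rather than ultrasound frames:

```python
import numpy as np

def fit_normative_model(normal_data, n_components=2):
    """Fit a linear normative model (PCA) on healthy samples only.

    normal_data : (N, D) array of feature vectors from normal subjects.
    Returns the data mean and the top principal directions.
    """
    mean = normal_data.mean(axis=0)
    centered = normal_data - mean
    # Principal directions via SVD of the centered normal data.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def anomaly_score(x, mean, components):
    """Reconstruction error: large when x is unlike the normal training data."""
    code = (x - mean) @ components.T          # project onto the normal subspace
    recon = mean + code @ components          # reconstruct from that subspace
    return float(np.linalg.norm(x - recon))
```

A generative network plays the same role as the PCA subspace here: it can reconstruct (or assign high likelihood to) anatomy it saw during training, so abnormal anatomy yields a high score.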
However, despite the significant improvement in automatic abnormality detection systems, clinical
routine continues to rely exclusively on overburdened medical experts to diagnose
and localise abnormalities. Integrating human expert knowledge into the medical imaging processing
pipeline entails uncertainty which is mainly correlated with inter-observer variability. From the
perspective of building an automated medical imaging system, it is still an open issue to what extent
this kind of variability and the resulting uncertainty are introduced during the training of a model
and how they affect the final performance of the task. Consequently, it is very important to explore the
effect of inter-observer variability both on the reliable estimation of a model's uncertainty and
on the model's performance in a specific machine learning task. A thorough investigation of this issue
is presented in this work by leveraging automated estimates of machine learning model uncertainty,
inter-observer variability and segmentation task performance in lung CT scan images.
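The "automated estimates for machine learning model uncertainty" mentioned above are not specified in this abstract; one widely used estimate (assumed here for illustration, not necessarily the one used in the work) is the per-pixel predictive entropy of the segmentation model's class probabilities:

```python
import numpy as np

def predictive_entropy(probs, eps=1e-12):
    """Per-pixel predictive entropy of softmax outputs.

    probs : (H, W, K) array of class probabilities per pixel.
    Returns an (H, W) entropy map; higher values indicate more model
    uncertainty, typically concentrated near ambiguous boundaries where
    human annotators also tend to disagree.
    """
    p = np.clip(probs, eps, 1.0)  # avoid log(0)
    return -(p * np.log(p)).sum(axis=-1)
```

Comparing such an entropy map against a map of annotator disagreement is one concrete way to study how inter-observer variability relates to model uncertainty.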
Finally, an overview of existing anomaly detection methods in medical imaging
is presented. This state-of-the-art survey includes both conventional pattern
recognition methods and deep learning based methods, and is one of the first
literature surveys attempted in this specific research area.