A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Automatic Segmentation of Dermoscopic Images by Iterative Classification
Accurate detection of the borders of skin lesions is a vital first step for computer-aided diagnostic systems. This paper presents a novel automatic approach to the segmentation of skin lesions that is particularly suitable for the analysis of dermoscopic images. Assumptions about the image acquisition, in particular the approximate location and color, are used to derive an automatic rule to select small seed regions likely to correspond to samples of skin and of the lesion of interest. The seed regions are used as initial training samples, and the lesion segmentation problem is treated as a binary classification problem. An iterative hybrid classification strategy, based on a weighted combination of the estimated posteriors of a linear and a quadratic classifier, is used to update both the automatically selected training samples and the segmentation, increasing reliability and final accuracy, especially for challenging images where the contrast between the background skin and the lesion is low.
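The weighted posterior combination at the core of this approach can be sketched with scikit-learn's linear and quadratic discriminant classifiers. This is an illustrative reconstruction, not the authors' implementation: the feature extraction, the seed-selection rule, and all parameter values (`alpha`, `confidence`, `n_iter`) are assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)

def iterative_hybrid_segmentation(features, seed_labels, alpha=0.5,
                                  n_iter=5, confidence=0.9):
    """Iteratively grow a lesion segmentation from automatic seed regions.

    features    : (n_pixels, n_features) colour/position features per pixel.
    seed_labels : (n_pixels,) 0 = skin seed, 1 = lesion seed, -1 = unlabeled.
    alpha       : weight given to the linear classifier's posterior.
    """
    labels = seed_labels.copy()
    for _ in range(n_iter):
        train = labels >= 0
        lda = LinearDiscriminantAnalysis().fit(features[train], labels[train])
        qda = QuadraticDiscriminantAnalysis().fit(features[train], labels[train])
        # weighted combination of the two posterior estimates
        posterior = (alpha * lda.predict_proba(features)
                     + (1 - alpha) * qda.predict_proba(features))
        lesion_prob = posterior[:, 1]
        # promote confidently classified pixels to training samples
        labels = np.where(lesion_prob > confidence, 1,
                          np.where(lesion_prob < 1 - confidence, 0, labels))
    return (lesion_prob > 0.5).astype(int)
```

The iterative retraining is what lets the classifier adapt to low-contrast lesions: each pass adds confidently classified pixels to the training set, so the decision boundary is refined with image-specific evidence.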
Deep learning cardiac motion analysis for human survival prediction
Motion analysis is used in computer vision to understand the behaviour of
moving objects in sequences of images. Optimising the interpretation of dynamic
biological systems requires accurate and precise motion tracking as well as
efficient representations of high-dimensional motion trajectories so that these
can be used for prediction tasks. Here we use image sequences of the heart,
acquired using cardiac magnetic resonance imaging, to create time-resolved
three-dimensional segmentations using a fully convolutional network trained on
anatomical shape priors. This dense motion model formed the input to a
supervised denoising autoencoder (4Dsurvival), a hybrid network in which an
autoencoder learns a task-specific latent code trained on observed outcome
data, yielding a representation optimised for survival prediction. To handle right-censored
survival outcomes, our network used a Cox partial likelihood loss function. In
a study of 302 patients the predictive accuracy (quantified by Harrell's
C-index) was significantly higher (p < 0.0001) for our model, C = 0.73 (95% CI:
0.68-0.78), than for the human benchmark of C = 0.59 (95% CI: 0.53-0.65). This
work demonstrates how a complex computer vision task using high-dimensional
medical image data can efficiently predict human survival.
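The Cox partial likelihood loss mentioned above has a standard closed form that can be sketched in a few lines of NumPy. This is a generic formulation, not the 4Dsurvival code; the normalization by the number of observed events is an assumption.

```python
import numpy as np

def cox_partial_likelihood_loss(risk, time, event):
    """Negative Cox partial log-likelihood for right-censored outcomes.

    risk  : (n,) predicted log-risk scores (e.g. the network's output).
    time  : (n,) follow-up times.
    event : (n,) 1 if the event was observed, 0 if right-censored.
    """
    order = np.argsort(-time)                 # sort by descending follow-up time
    risk, event = risk[order], event[order]
    # log of the risk-set sum: all subjects still at risk at each event time
    log_risk_set = np.logaddexp.accumulate(risk)
    # only uncensored subjects contribute terms to the partial likelihood
    return -np.sum((risk - log_risk_set) * event) / max(event.sum(), 1)
```

Because censored subjects still appear in the risk sets of earlier events, the loss uses all patients' follow-up information without requiring every outcome to be observed.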
Region Adjacency Graph Approach for Acral Melanocytic Lesion Segmentation
Malignant melanoma is among the fastest-increasing malignancies in many countries. Due to its propensity to metastasize and the lack of effective therapies for most patients with advanced disease, early detection of melanoma is a clinical imperative. In non-Caucasian populations, melanomas are frequently located in acral volar areas, and their dermoscopic appearance differs from that of non-acral ones. Although lesion segmentation is a natural preliminary step towards further analysis, so far virtually no acral skin lesion segmentation method has been proposed. Our goal was to develop an effective segmentation algorithm dedicated to acral lesions.
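While the paper's full pipeline is not reproduced here, the region adjacency graph named in the title can be sketched generically: given a label image of small regions (e.g. superpixels), regions that share a border become neighbours in a graph, which can then be merged or cut to form the lesion mask. The function below is an illustrative sketch with 4-connectivity assumed.

```python
import numpy as np

def region_adjacency_graph(labels):
    """Build the region adjacency graph of a label image.

    labels : (H, W) integer array assigning each pixel a region id.
    Returns a set of (i, j) edges with i < j for regions that share a
    border under 4-connectivity.
    """
    edges = set()
    # compare every pixel with its right and bottom neighbour
    for a, b in ((labels[:, :-1], labels[:, 1:]),
                 (labels[:-1, :], labels[1:, :])):
        boundary = a != b
        for i, j in zip(a[boundary].ravel(), b[boundary].ravel()):
            edges.add((int(min(i, j)), int(max(i, j))))
    return edges
```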
The Liver Tumor Segmentation Benchmark (LiTS)
In this work, we report the set-up and results of the Liver Tumor
Segmentation Benchmark (LiTS), organized in conjunction with the IEEE
International Symposium on Biomedical Imaging (ISBI) 2016 and the International
Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI)
2017. Twenty-four valid state-of-the-art liver and liver tumor segmentation
algorithms were applied to a set of 131 computed tomography (CT) volumes with
different types of tumor contrast levels (hyper-/hypo-intense), tissue
abnormalities (e.g., after metastasectomy), and a varying number and size of
lesions. The submitted algorithms were tested on 70 undisclosed volumes. The
dataset was created in collaboration with seven hospitals and research
institutions and manually reviewed by three independent radiologists. We found
that no single algorithm performed best for both the liver and the tumors. The
best liver segmentation algorithm achieved a Dice score of 0.96 (MICCAI),
whereas the best tumor segmentation algorithms achieved 0.67 (ISBI) and 0.70
(MICCAI). The LiTS image data and manual annotations continue to be publicly
available through an online evaluation system as an ongoing benchmarking resource.
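The Dice score used to rank the submissions is defined as twice the overlap between the predicted and reference masks divided by the sum of their sizes. A minimal implementation:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # two empty masks are conventionally treated as a perfect match
    return 2.0 * intersection / denom if denom else 1.0
```

The convention for two empty masks varies between benchmarks; returning 1.0 here is one common choice, not necessarily the one LiTS uses.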
Computer-Aided Assessment of Tuberculosis with Radiological Imaging: From rule-based methods to Deep Learning
Tuberculosis (TB) is an infectious disease caused by Mycobacterium tuberculosis (Mtb.)
that produces pulmonary damage. Its airborne nature facilitates the fast spread
of the disease, which, according to the World Health Organization (WHO), in 2021
caused 1.2 million deaths and 9.9 million new cases.
Traditionally, TB has been considered a binary disease (latent/active) due to the limited
specificity of traditional diagnostic tests. Such a simple model complicates the
longitudinal assessment of pulmonary involvement needed for the development of novel drugs
and for controlling the spread of the disease.
Fortunately, X-Ray Computed Tomography (CT) images can capture specific
manifestations of TB that are undetectable using regular diagnostic tests. In
conventional workflows, expert radiologists inspect the CT images. However, this
procedure cannot scale to the thousands of volumetric images, from the different
TB animal models and from humans, required for a suitable (pre-)clinical trial.
To achieve suitable results, automating the different image-analysis processes
is a must for quantifying TB. It is also advisable to measure the uncertainty
associated with this process and to model causal relationships between the
specific mechanisms that characterize each animal model and its level of damage.
Thus, in this thesis, we introduce a set of novel methods based on
state-of-the-art Artificial Intelligence (AI) and Computer Vision (CV).
Initially, we present an algorithm for Pathological Lung Segmentation (PLS)
employing an unsupervised rule-based model, a step traditionally considered
necessary before biomarker extraction. This procedure achieves robust
segmentation in an Mtb. infection model (Dice Similarity Coefficient, DSC,
94% ± 4%; Hausdorff Distance, HD, 8.64 mm ± 7.36 mm) of damaged lungs with
lesions attached to the parenchyma and affected by respiratory-motion artefacts.
Next, a Gaussian Mixture Model fitted with an Expectation-Maximization (EM)
algorithm is employed to automatically quantify the Mtb. burden using biomarkers
extracted from the segmented CT images. This approach achieves a strong
correlation (R² ≈ 0.8) between our automatic method and manual extraction.
Consequently, Chapter 3 introduces a model to automate the identification of TB lesions
and the characterization of disease progression. To this aim, the method employs the
Statistical Region Merging algorithm to detect lesions subsequently characterized by texture
features that feed a Random Forest (RF) estimator. The proposed procedure
enables the selection of a simple but powerful model able to classify abnormal tissue.
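The texture-plus-Random-Forest step can be sketched with scikit-learn. The feature set, hyperparameters, and function name below are illustrative assumptions, not the thesis code:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_lesion_classifier(texture_features, labels, seed=0):
    """Fit a Random Forest that flags abnormal tissue from texture features.

    texture_features : (n_regions, n_features) descriptors computed on the
                       candidate regions (e.g. first-order statistics).
    labels           : (n_regions,) 1 for lesion, 0 for healthy tissue.
    """
    forest = RandomForestClassifier(n_estimators=200, random_state=seed)
    return forest.fit(texture_features, labels)
```

A forest over hand-crafted texture features keeps the model small and interpretable, which matches the chapter's goal of a simple but powerful classifier.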
The latest works base their methodology on Deep Learning (DL). Chapter 4
extends the classification of TB lesions. Namely, we introduce a computational
model to infer the TB manifestations present in each lung lobe of CT scans by
employing the associated radiologist reports as ground truth, instead of the
classical manually delineated segmentation masks. The model adapts a
three-dimensional architecture, V-Net, to a multitask classification context in
which the loss function is weighted by homoscedastic uncertainty. Besides, the
method employs Self-Normalizing Neural Networks (SNNs) for regularization. Our
results are promising, with a Root Mean Square Error of 1.14 in the number of
nodules and F1-scores above 0.85 for the most prevalent TB lesions (i.e.,
conglomerations, cavitations, consolidations, tree-in-bud) when considering the whole lung.
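One common formulation of the homoscedastic uncertainty weighting mentioned above (Kendall et al., 2018) scales each task loss by exp(-s_i) and adds s_i as a penalty, where s_i = log(sigma_i^2) is learned jointly with the network. A minimal numeric sketch, in which the loss values and learned parameters are stand-ins:

```python
import numpy as np

def uncertainty_weighted_loss(task_losses, log_vars):
    """Combine per-task losses using learned homoscedastic uncertainty.

    task_losses : per-task loss values L_i.
    log_vars    : s_i = log(sigma_i**2), learned jointly with the network.
    Each task contributes exp(-s_i) * L_i + s_i, so tasks with high
    estimated uncertainty are automatically down-weighted, while the
    additive s_i term keeps the uncertainties from growing without bound.
    """
    task_losses = np.asarray(task_losses, dtype=float)
    log_vars = np.asarray(log_vars, dtype=float)
    return float(np.sum(np.exp(-log_vars) * task_losses + log_vars))
```

In practice the s_i are trainable scalars optimised alongside the network weights, which removes the need to hand-tune per-task loss weights.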
In Chapter 5, we present a DL model capable of extracting disentangled
information from images of different animal models, as well as information
about the mechanisms that generate the CT volumes. The method provides the
segmentation mask of axial slices from three animal models of different species
using a single trained architecture. It also infers the level of TB damage and
generates counterfactual images. This methodology thus offers an alternative
that promotes generalization and more explainable AI models.
To sum up, the thesis presents a collection of valuable tools to automate the
quantification of pathological lungs and, moreover, extends the methodology to
provide more explainable results, which are vital for drug-development
purposes. Chapter 6 elaborates on these
conclusions.
Deep learning for an improved diagnostic pathway of prostate cancer in a small multi-parametric magnetic resonance data regime
Prostate Cancer (PCa) is the second most commonly diagnosed cancer among men, with an estimated incidence of 1.3 million new cases worldwide in 2018. The current diagnostic pathway for PCa relies on prostate-specific antigen (PSA) levels in serum. Nevertheless, PSA testing comes at the cost of under-detection of malignant lesions and substantial over-diagnosis of indolent ones, leading to unnecessary invasive testing, such as biopsies, and to unnecessary treatment of indolent PCa lesions.
Magnetic Resonance Imaging (MRI) is a non-invasive technique that has emerged as a valuable tool for PCa detection, staging, early screening, treatment planning, and intervention. However, the analysis of MRI relies on expertise, can be time-consuming, and requires specialized training; in its absence, it suffers from inter- and intra-reader variability and sub-optimal interpretations.
Deep Learning (DL) techniques can recognize complex patterns in imaging data and automate certain assessments or tasks with a lesser degree of subjectivity, providing a tool that can help clinicians in their daily work. Despite this, the success of DL has traditionally relied on the availability of large amounts of labelled data, which are rarely available in the medical field and are costly and hard to obtain due to, among other factors, privacy regulations on patients' data and the specialized training required for annotation.
This work investigates DL algorithms specially tailored to a limited data regime, with the final objective of improving the current prostate cancer diagnostic pathway through better-performing DL algorithms for PCa MRI applications.
In particular, this thesis starts by exploring Generative Adversarial Networks (GANs) to generate synthetic samples and studies their effect on tasks such as prostate capsule segmentation and PCa lesion significance classification (triage). Next, we explore the use of Auto-encoders (AEs) to exploit the data imbalance that is usually present in medical imaging datasets. Specifically, we propose a framework based on AEs to detect the presence of prostate lesions (tumours) by learning solely from control (healthy) data, in an outlier-detection fashion. This thesis also explores more recent DL paradigms that have shown promising results on natural images: generative and contrastive self-supervised learning (SSL). In both cases, we propose specific prostate MRI image manipulations for a PCa lesion classification downstream task and show the improvements offered by these techniques when compared with other initialization methods such as ImageNet pre-training. Finally, we explore data fusion techniques to leverage different data sources in the form of MRI sequences (orthogonal views) that are acquired by default during patient examinations but commonly ignored in DL systems. We show improvements in PCa lesion significance classification when compared to a single-input system (axial view).
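The AE-based outlier-detection idea (learn only from healthy data, then flag inputs the model reconstructs poorly) can be illustrated with a linear autoencoder, equivalently PCA, standing in for the deep model. All names and dimensions below are assumptions for illustration:

```python
import numpy as np

def fit_healthy_model(healthy, n_latent=8):
    """Learn a 'healthy' subspace with a linear autoencoder (PCA).

    healthy : (n_samples, n_features) flattened patches from control scans.
    Returns the data mean and the top principal components, which act as
    tied encoder/decoder weights.
    """
    mean = healthy.mean(axis=0)
    _, _, vt = np.linalg.svd(healthy - mean, full_matrices=False)
    return mean, vt[:n_latent]

def anomaly_score(x, mean, components):
    """Reconstruction error; large for inputs unlike the healthy data."""
    codes = (x - mean) @ components.T          # encode
    recon = codes @ components + mean          # decode
    return np.sum((x - recon) ** 2, axis=-1)
```

Because the model never sees lesions during training, it cannot reconstruct them well, so a simple threshold on the reconstruction error separates suspicious scans from controls without any lesion labels.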