A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.

Comment: Revised survey includes an expanded discussion section and a reworked introductory section on common deep architectures. Added missed papers from before Feb 1st, 2017.
Deep Learning based Novel Anomaly Detection Methods for Diabetic Retinopathy Screening
Programa Oficial de Doutoramento en Computación. 5009V01

Computer-Aided Screening (CAS) systems are gaining popularity in disease diagnosis. Modern CAS systems exploit data-driven machine learning algorithms, including supervised and unsupervised methods.
In medical imaging, annotating pathological samples is much harder and more time-consuming than annotating healthy samples, so there is always an abundance of healthy samples and a scarcity of annotated and labelled pathological samples. Unsupervised anomaly detection algorithms can therefore be used to develop CAS systems from the largely available healthy samples, especially when a disease/no-disease decision is what matters for screening.
This thesis proposes unsupervised machine learning methodologies for anomaly detection in retinal fundus images. A novel patch-based image reconstructor architecture for DR detection is presented that addresses the shortcomings of standard autoencoder-based reconstructors. Furthermore, a full-size-image anomaly map generation methodology is presented, in which potential DR lesions can be visualized at the pixel level. Afterwards, a novel methodology is proposed to extend the patch-based architecture to a fully convolutional architecture for one-shot full-size image reconstruction. Finally, a novel methodology for supervised DR classification is proposed that utilizes the anomaly maps.
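The reconstruction-based detection idea behind this line of work can be sketched in a few lines: fit a reconstructor on healthy patches only, then score new patches by their reconstruction error. This is a hedged toy illustration, not the thesis's method: a linear PCA reconstructor on synthetic flattened patches stands in for the autoencoder-style architectures, and all sizes and values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "healthy" 8x8 patches (flattened to 64-dim vectors): smooth
# low-rank structure plus noise. All numbers are invented for the sketch.
n, d = 500, 64
basis = rng.normal(size=(4, d))                  # latent generative directions
healthy = rng.normal(size=(n, 4)) @ basis + 0.1 * rng.normal(size=(n, d))

# Fit the reconstructor on healthy patches only. PCA (top-4 principal
# components) stands in for an autoencoder trained on normal data.
mu = healthy.mean(axis=0)
_, _, vt = np.linalg.svd(healthy - mu, full_matrices=False)
components = vt[:4]

def reconstruction_error(patch):
    """Reconstruct from the 'normal' subspace; anomalies reconstruct badly."""
    centered = patch - mu
    recon = centered @ components.T @ components
    return float(np.linalg.norm(centered - recon))

normal_patch = rng.normal(size=4) @ basis + 0.1 * rng.normal(size=d)
lesion_patch = normal_patch.copy()
lesion_patch[20:28] += 5.0                       # bright local anomaly

err_normal = reconstruction_error(normal_patch)
err_lesion = reconstruction_error(lesion_patch)
```

Because the reconstructor has only ever seen healthy structure, the lesion's intensity bump falls mostly outside the learned subspace and yields a much larger error, which is the signal a patch-level anomaly map thresholds.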
Domain Generalization for Medical Image Analysis: A Survey
Medical Image Analysis (MedIA) has become an essential tool in medicine and
healthcare, aiding in disease diagnosis, prognosis, and treatment planning, and
recent successes in deep learning (DL) have made significant contributions to
its advances. However, DL models for MedIA remain challenging to deploy in
real-world situations, failing to generalize under the distributional gap between training and testing samples, known as the distribution-shift problem.
Researchers have dedicated their efforts to developing various DL methods to
adapt and perform robustly on unknown and out-of-distribution data
distributions. This paper comprehensively reviews domain generalization studies
specifically tailored for MedIA. We provide a holistic view of how domain
generalization techniques interact within the broader MedIA system, going
beyond methodologies to consider the operational implications on the entire
MedIA workflow. Specifically, we categorize domain generalization methods into
data-level, feature-level, model-level, and analysis-level methods. We show how these methods can be used at various stages of the DL-equipped MedIA workflow, from data acquisition to model prediction and analysis. Furthermore, we include benchmark datasets and applications used to evaluate these approaches, analyze the strengths and weaknesses of various methods, and unveil future research opportunities.
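Of the four categories, data-level methods are the simplest to illustrate: perturb image intensities during training so a model never overfits one scanner's intensity profile. The sketch below assumes gamma/contrast/brightness jitter as one representative data-level technique (the survey covers many others); the ranges are arbitrary choices for the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def intensity_augment(image, rng):
    """Randomise gamma, contrast and brightness on an image in [0, 1].
    The anatomy (spatial structure) is untouched; only the intensity
    distribution shifts, mimicking a scanner/site change."""
    gamma = rng.uniform(0.7, 1.5)
    scale = rng.uniform(0.8, 1.2)
    shift = rng.uniform(-0.05, 0.05)
    out = scale * np.power(np.clip(image, 0.0, 1.0), gamma) + shift
    return np.clip(out, 0.0, 1.0)

# One synthetic "scan" and four augmented training views of it.
image = rng.uniform(size=(32, 32))
views = [intensity_augment(image, rng) for _ in range(4)]
```

Each view keeps the pixel ordering of the original almost perfectly (the transform is monotone), so labels remain valid while the apparent "domain" changes per view.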
Exploring variability in medical imaging
Although recent successes of deep learning and novel machine learning techniques have improved the performance of classification and (anomaly) detection in computer vision problems, applying these methods in medical imaging pipelines remains a very challenging task. One of the main reasons for this is the amount of variability that is encountered in and encapsulated by human anatomy and subsequently reflected in medical images. This fundamental factor impacts most stages of modern medical image processing pipelines.
The variability of human anatomy makes it virtually impossible to build large datasets for each disease with labels and annotations for fully supervised machine learning. An efficient way to cope with this is to learn only from normal samples, since such data are much easier to collect. A case study of such an automatic anomaly detection system based on normative learning is presented in this work: a framework for detecting fetal cardiac anomalies during ultrasound screening using generative models trained only on normal/healthy subjects.
However, despite significant improvements in automatic abnormality detection systems, clinical routine continues to rely exclusively on overburdened medical experts to diagnose and localise abnormalities. Integrating human expert knowledge into the medical image processing pipeline entails uncertainty, which is mainly correlated with inter-observer variability. From the perspective of building an automated medical imaging system, it is still an open issue to what extent this kind of variability, and the resulting uncertainty, is introduced during the training of a model and how it affects the final performance on the task. Consequently, it is very important to explore the effect of inter-observer variability both on the reliable estimation of a model's uncertainty and on the model's performance in a specific machine learning task. A thorough investigation of this issue is presented in this work by leveraging automated estimates of machine learning model uncertainty, inter-observer variability, and segmentation task performance on lung CT scans.
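The two quantities compared in that investigation can be sketched on toy masks: inter-observer variability as the mean pairwise Dice overlap between annotators, and model uncertainty as the per-pixel entropy of an averaged probability map (as MC-dropout-style sampling would produce). The annotator masks and probability map below are synthetic stand-ins, not the work's lung CT data.

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Three simulated annotators of the same lesion, each drawing the
# boundary one pixel tighter or looser (inter-observer variability).
annotators = []
for r in (-1, 0, 1):
    mask = np.zeros((64, 64), dtype=bool)
    mask[20 - r:44 + r, 20 - r:44 + r] = True
    annotators.append(mask)

# Inter-observer variability summarised as mean pairwise Dice.
pairs = [(0, 1), (0, 2), (1, 2)]
mean_pairwise_dice = float(np.mean(
    [dice(annotators[i], annotators[j]) for i, j in pairs]))

# Model uncertainty: per-pixel binary entropy of an averaged probability
# map (here the averaged annotator masks stand in for model samples).
p = np.clip(np.mean([m.astype(float) for m in annotators], axis=0),
            1e-6, 1.0 - 1e-6)
entropy = -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))

boundary_entropy = entropy[19, 19]   # pixel the raters disagree on
interior_entropy = entropy[32, 32]   # pixel all raters agree on
```

Uncertainty concentrates exactly where the raters disagree (the border) and vanishes in the agreed interior, which is why the two quantities are naturally studied together.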
Finally, an overview of existing anomaly detection methods in medical imaging is presented. This state-of-the-art survey includes both conventional pattern recognition methods and deep learning based methods, and is one of the first literature surveys attempted in this specific research area.
Robust Framework to Combine Diverse Classifiers Assigning Distributed Confidence to Individual Classifiers at Class Level
We present a classification framework that combines multiple heterogeneous classifiers in the presence of class-label noise. An extension of m-Mediods-based modeling is presented that generates models of the various classes while identifying and filtering noisy training data. This noise-free data is then used to learn models for other classifiers such as GMM and SVM. A weight-learning method is then introduced to learn per-class weights for the different classifiers to construct an ensemble. For this purpose, we apply a genetic algorithm to search for an optimal weight vector on which the classifier ensemble is expected to give the best accuracy. The proposed approach is evaluated on a variety of real-life datasets and compared with standard ensemble techniques such as AdaBoost, Bagging, and Random Subspace Methods. Experimental results show the superiority of the proposed ensemble method over its competitors, especially in the presence of class-label noise and imbalanced classes.
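The class-level weighting idea can be sketched as follows. This is a hedged toy version: random search (seeded with the uniform ensemble) stands in for the paper's genetic algorithm, and the three "classifiers" are synthetic probability tables rather than m-Mediods/GMM/SVM models.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 2-class problem: per-class probability outputs of three synthetic
# classifiers of different quality.
n = 200
y = rng.integers(0, 2, size=n)

def noisy_probs(acc):
    """A classifier that is confidently right with probability `acc`."""
    p = np.empty((n, 2))
    correct = rng.random(n) < acc
    guess = np.where(correct, y, 1 - y)
    p[np.arange(n), guess] = 0.9
    p[np.arange(n), 1 - guess] = 0.1
    return p

classifiers = [noisy_probs(0.9), noisy_probs(0.7), noisy_probs(0.6)]

def ensemble_accuracy(w):
    """Combine with class-level weights w[k, c]: classifier k's vote for
    class c is scaled by its learned reliability on that class."""
    combined = sum(w[k] * classifiers[k] for k in range(len(classifiers)))
    return float(np.mean(combined.argmax(axis=1) == y))

# Search for good class-level weights, starting from the uniform ensemble.
best_w = np.ones((3, 2))
uniform_acc = best_acc = ensemble_accuracy(best_w)
for _ in range(300):
    w = rng.random((3, 2))
    acc = ensemble_accuracy(w)
    if acc > best_acc:
        best_w, best_acc = w, acc
```

The search learns to down-weight the weaker classifiers, so the optimised ensemble matches or beats the uniform one; a genetic algorithm explores the same weight space more efficiently than this naive random sampling.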
Pattern identification of biomedical images with time series: contrasting THz pulse imaging with DCE-MRIs
Objective
We provide a survey of recent advances in biomedical image analysis and classification from emergent imaging modalities such as terahertz (THz) pulse imaging (TPI) and dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), and identify their underlying commonalities.
Methods
Both time and frequency domain signal pre-processing techniques are considered: noise removal, spectral analysis, principal component analysis (PCA) and wavelet transforms. Feature extraction and classification methods based on feature vectors using the above processing techniques are reviewed. A tensorial signal processing de-noising framework suitable for spatiotemporal association between features in MRI is also discussed.
Validation
Examples where the proposed methodologies have been successful in classifying TPIs and DCE-MRIs are discussed.
Results
Identifying commonalities in the structure of such heterogeneous datasets potentially leads to a unified multi-channel signal processing framework for biomedical image analysis.
Conclusion
The proposed complex-valued classification methodology enables fusion of entire datasets from a sequence of spatial images taken at different time stamps; this is of interest from the viewpoint of inferring disease proliferation. The approach is also of interest for other emergent multi-channel biomedical imaging modalities and of relevance across the biomedical signal processing community.
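The time-series feature-extraction step common to both modalities can be sketched with PCA alone: project each per-pixel curve (a THz pulse or a DCE-MRI uptake curve) onto a few principal components and classify in that low-dimensional space. The two curve shapes below are invented, and the nearest-centroid rule is an illustrative stand-in for the feature-vector classifiers the survey reviews.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic per-pixel time series: 300 pixels x 50 time points, two
# tissue classes with different (hypothetical) temporal signatures.
t = np.linspace(0, 1, 50)
slow = 1 - np.exp(-2 * t)        # invented "healthy" uptake curve
fast = 1 - np.exp(-8 * t)        # invented "lesion" uptake curve
labels = rng.integers(0, 2, size=300)
curves = (np.where(labels[:, None] == 1, fast, slow)
          + 0.05 * rng.normal(size=(300, 50)))

# PCA: each 50-sample curve becomes a 2-dimensional feature vector.
mu = curves.mean(axis=0)
_, _, vt = np.linalg.svd(curves - mu, full_matrices=False)
features = (curves - mu) @ vt[:2].T

# A nearest-centroid rule in PCA space separates the two signatures.
c0 = features[labels == 0].mean(axis=0)
c1 = features[labels == 1].mean(axis=0)
pred = (np.linalg.norm(features - c1, axis=1)
        < np.linalg.norm(features - c0, axis=1)).astype(int)
accuracy = float(np.mean(pred == labels))
```

The same pipeline shape (temporal signal per pixel, dimensionality reduction, classification in feature space) is what makes a unified multi-channel framework plausible across TPI and DCE-MRI.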
Using machine learning and Repeated Elastic Net Technique for identification of biomarkers of early Alzheimer's disease
Alzheimer's disease is a neurodegenerative brain disease that damages neurons in the part of the brain involved in cognitive function, and early diagnosis is crucial for treatment that could slow down the progression of the disease. In the preclinical stage, the accumulation of a protein fragment called amyloid-beta outside the neurons can be associated with the early onset of Alzheimer's disease. The aim of this study was to identify biomarkers (features) for early detection of Alzheimer's disease using data from patients known to have an accumulation of amyloid-beta in their brains. 44 features from different sources were used and divided into 5 blocks of similar measurements. A baseline analysis was done with all the features combined, consisting of 172 patient assessments with complete measurements, of which 49 showed presence of amyloid-beta. The same patient assessments were used as the test data for the block-wise analysis. This study includes exploratory analysis of the data using correlation, principal component analysis (PCA) and partial least squares regression (PLSR). The performance of five different classifiers was compared when trying to separate the two classes. The Repeated Elastic Net Technique (RENT) was used for feature selection, in combination with repeated stratified k-fold validation to obtain robust results. Using the features selected by the RENT analysis, the best-performing classifier for each individual block was identified through repeated stratified k-fold validation. A final prediction of the class was computed from the predictions of the blocks using a performance-based weighted average. The final score based on this weighted average did not exceed the score of the baseline study. The block consisting of factors related to environment and heritage provided the highest predictive performance.
In the baseline analysis with RENT, the factors related to heritage came out as important for the classification task, along with features related to cognitive tests. Among the features containing information from MR images of the brain, white matter hyperintensity and lesions measured in the occipital lobe can be considered important for both the baseline analysis and the block-wise analysis.
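The core RENT idea, repeated model fits on resampled data with features kept only if they are selected consistently, can be sketched in a few lines. This is a hedged simplification: a plain lasso solved by ISTA stands in for RENT's elastic-net models, bootstrap resamples stand in for stratified folds, and the data and penalty are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy data: 120 samples, 10 features; only features 0 and 1 drive the
# binary outcome. Everything here is invented for the sketch.
n, d = 120, 10
X = rng.normal(size=(n, d))
y = (X[:, 0] - X[:, 1] + 0.5 * rng.normal(size=n) > 0).astype(float)
y -= y.mean()                                   # centre the target

def lasso_ista(X, y, alpha=15.0, steps=300):
    """Plain ISTA for the lasso: a minimal stand-in for the elastic-net
    models RENT refits on every resample."""
    w = np.zeros(X.shape[1])
    lr = 1.0 / (np.linalg.norm(X, 2) ** 2)      # 1/L with L = ||X||_2^2
    for _ in range(steps):
        w = w - lr * (X.T @ (X @ w - y))        # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - lr * alpha, 0.0)  # prox
    return w

# RENT idea, simplified: refit on many resamples and keep the features
# whose coefficients are non-zero in a large fraction of the fits.
reps, counts = 30, np.zeros(d)
for _ in range(reps):
    idx = rng.choice(n, size=n, replace=True)   # bootstrap resample
    counts += np.abs(lasso_ista(X[idx], y[idx])) > 1e-8

selection_frequency = counts / reps
selected = np.where(selection_frequency >= 0.8)[0]
```

Features that survive the penalty on almost every resample are the stable biomarkers; features that appear only occasionally are treated as noise, which is what makes the selection robust.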
Deep learning in medical imaging and radiation therapy
Peer Reviewed
https://deepblue.lib.umich.edu/bitstream/2027.42/146980/1/mp13264_am.pdf
https://deepblue.lib.umich.edu/bitstream/2027.42/146980/2/mp13264.pd