Deep learning analysis of the myocardium in coronary CT angiography for identification of patients with functionally significant coronary artery stenosis
In patients with coronary artery stenoses of intermediate severity, the
functional significance needs to be determined. Fractional flow reserve (FFR)
measurement, performed during invasive coronary angiography (ICA), is most
often used in clinical practice. To reduce the number of ICA procedures, we
present a method for automatic identification of patients with functionally
significant coronary artery stenoses, employing deep learning analysis of the
left ventricle (LV) myocardium in rest coronary CT angiography (CCTA). The
study includes consecutively acquired CCTA scans of 166 patients with FFR
measurements. To identify patients with a functionally significant coronary
artery stenosis, analysis is performed in several stages. First, the LV
myocardium is segmented using a multiscale convolutional neural network (CNN).
To characterize the segmented LV myocardium, it is subsequently encoded using
an unsupervised convolutional autoencoder (CAE). Thereafter, patients are
classified according to the presence of functionally significant stenosis using
an SVM classifier based on the extracted and clustered encodings. Quantitative
evaluation of LV myocardium segmentation in 20 images resulted in an average
Dice coefficient of 0.91 and an average mean absolute distance between the
segmented and reference LV boundaries of 0.7 mm. Classification of patients was
evaluated in the remaining 126 CCTA scans in 50 10-fold cross-validation
experiments and resulted in an area under the receiver operating characteristic
curve of 0.74 ± 0.02. At sensitivity levels of 0.60, 0.70, and 0.80, the
corresponding specificity was 0.77, 0.71 and 0.59, respectively. The results
demonstrate that automatic analysis of the LV myocardium in a single CCTA scan
acquired at rest, without assessment of the anatomy of the coronary arteries,
can be used to identify patients with functionally significant coronary artery
stenosis.
Comment: This paper was submitted in April 2017 and accepted in November 2017
for publication in Medical Image Analysis. Please cite as: Zreik et al.,
Medical Image Analysis, 2018, vol. 44, pp. 72-8
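The segmentation evaluation above reports an average Dice coefficient of 0.91. As a minimal, generic sketch of that metric (not the authors' implementation), the Dice overlap between a segmented mask and a reference mask can be computed by representing each mask as a set of voxel coordinates:

```python
def dice_coefficient(seg, ref):
    """Dice overlap between two binary masks, each given as a set of
    voxel coordinates labelled as LV myocardium.

    Generic illustration of the metric reported in the abstract;
    this is not the authors' code.
    """
    denom = len(seg) + len(ref)
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * len(seg & ref) / denom

# toy example: 4-voxel segmentation vs. 6-voxel reference, 4 voxels shared
seg = {(1, 1), (1, 2), (2, 1), (2, 2)}
ref = seg | {(1, 3), (2, 3)}
print(dice_coefficient(seg, ref))  # 2*4 / (4 + 6) = 0.8
```

The same set representation extends directly to 3D voxel grids by using coordinate triples.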
A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community
In recent years, deep learning (DL), a re-branding of neural networks (NNs),
has risen to prominence in numerous areas, such as computer vision (CV), speech
recognition, natural language processing, etc. Whereas remote sensing (RS)
possesses a number of unique challenges, primarily related to sensors and
applications, inevitably RS draws from many of the same theories as CV; e.g.,
statistics, fusion, and machine learning, to name a few. This means that the RS
community should be aware of, if not at the leading edge of, advancements
like DL. Herein, we provide the most comprehensive survey of state-of-the-art
RS DL research. We also review recent new developments in the DL field that can
be used in DL for RS. Namely, we focus on theories, tools and challenges for
the RS community. Specifically, we focus on unsolved challenges and
opportunities as they relate to (i) inadequate data sets, (ii)
human-understandable solutions for modelling physical phenomena, (iii) Big
Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and
learning algorithms for spectral, spatial and temporal data, (vi) transfer
learning, (vii) an improved theoretical understanding of DL systems, (viii)
high barriers to entry, and (ix) training and optimizing DL systems.
Comment: 64 pages, 411 references. To appear in Journal of Applied Remote
Sensing
Cancer diagnosis using deep learning: A bibliographic review
In this paper, we first describe the basics of the field of cancer diagnosis: the steps of cancer diagnosis, followed by the typical classification methods used by doctors, giving readers a historical perspective on cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although they are not considered very accurate. To keep the review accessible to a broad audience, the basic evaluation criteria are also discussed. The criteria include the receiver operating characteristic (ROC) curve, area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Because the previously used methods are considered inefficient, better and smarter methods for cancer diagnosis are needed, and artificial intelligence is gaining attention as a way to define better diagnostic tools. In particular, deep neural networks can be successfully used for intelligent image analysis. The basic framework of how such machine learning works on medical imaging is provided in this study, i.e., pre-processing, image segmentation, and post-processing. The second part of this manuscript describes different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DANs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAEs), convolutional autoencoders (CAEs), recurrent neural networks (RNNs), long short-term memory (LSTM), multi-scale convolutional neural networks (M-CNNs), and multi-instance learning convolutional neural networks (MIL-CNNs). For each technique, we provide Python code, to allow interested readers to experiment with the cited algorithms on their own diagnostic problems.
The third part of this manuscript compiles successfully applied deep learning models for different types of cancer. Considering the length of the manuscript, we restrict ourselves to breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to provide researchers who opt to implement deep learning and artificial neural networks for cancer diagnosis with a from-scratch overview of the state-of-the-art achievements.
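To make the listed evaluation criteria concrete, the sketch below (an illustration, not code from the review) derives several of them from the four confusion-matrix counts of a binary classifier:

```python
def binary_metrics(tp, fp, tn, fn):
    """Common evaluation criteria for a binary cancer classifier,
    computed from true/false positive/negative counts.
    Illustrative sketch only."""
    sensitivity = tp / (tp + fn)                # a.k.a. recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    dice = 2 * tp / (2 * tp + fp + fn)          # equals F1 in the binary case
    jaccard = tp / (tp + fp + fn)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "accuracy": accuracy,
            "f1": f1, "dice": dice, "jaccard": jaccard}

# hypothetical screening result: 8 TP, 2 FP, 85 TN, 5 FN
m = binary_metrics(tp=8, fp=2, tn=85, fn=5)
print(round(m["sensitivity"], 3), round(m["specificity"], 3))  # 0.615 0.977
```

Note that the Dice coefficient and the F1 score coincide for binary decisions, which is why segmentation and classification papers often report what is numerically the same quantity under different names.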
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes expanded discussion section and reworked
introductory section on common deep architectures. Added missed papers from
before Feb 1st 201
Foundation and methodologies in computer-aided diagnosis systems for breast cancer detection
Breast cancer is the most prevalent cancer affecting women all over the world. Early detection
and treatment of breast cancer could reduce the mortality rate. Issues such as technical factors
related to imaging quality, as well as human error, increase misdiagnosis of breast cancer
by radiologists. Computer-aided detection (CAD) systems have been developed to overcome these
restrictions and have been studied in many imaging modalities for breast cancer detection in recent
years. CAD systems improve radiologists' performance in finding and discriminating between
normal and abnormal tissues. These systems serve only as a second reader; the final
decision is still made by the radiologist. In this study, recent CAD systems for
breast cancer detection on different modalities, such as mammography, ultrasound, MRI, and biopsy
histopathological images, are introduced. The foundation of a CAD system generally consists of four
stages: pre-processing, segmentation, feature extraction, and classification. The approaches
applied to design the different stages of a CAD system are summarised. Advantages and disadvantages
of different segmentation, feature extraction, and classification techniques are listed.
In addition, the impact of imbalanced datasets on classification outcomes and appropriate methods to
address this issue are discussed. Finally, performance evaluation metrics for the various stages of
breast cancer detection CAD systems are reviewed.
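One widely used remedy for the imbalanced-dataset issue raised above is inverse-frequency class weighting in the classification stage, so that the minority (abnormal) class contributes as much to the training loss as the majority (normal) class. The sketch below illustrates the idea; it is a generic technique, not code from the survey:

```python
def balanced_class_weights(labels):
    """Inverse-frequency class weights: weight_c = N / (K * n_c), where
    N is the number of samples, K the number of classes, and n_c the
    count of class c. Minority classes receive larger weights.
    Illustrative sketch for handling imbalanced training sets."""
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    n, k = len(labels), len(counts)
    return {c: n / (k * m) for c, m in counts.items()}

# hypothetical screening set: 90 normal (0) vs. 10 abnormal (1) samples
weights = balanced_class_weights([0] * 90 + [1] * 10)
print(weights)  # abnormal samples weighted 9x more than normal ones
```

These weights can then scale the per-sample loss during classifier training; resampling (over- or under-sampling) is the other common family of remedies.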
Data-Driven Shape Analysis and Processing
Data-driven methods play an increasingly important role in discovering
geometric, structural, and semantic relationships between 3D shapes in
collections, and applying this analysis to support intelligent modeling,
editing, and visualization of geometric data. In contrast to traditional
approaches, a key feature of data-driven approaches is that they aggregate
information from a collection of shapes to improve the analysis and processing
of individual shapes. In addition, they are able to learn models that reason
about properties and relationships of shapes without relying on hard-coded
rules or explicitly programmed instructions. We provide an overview of the main
concepts and components of these techniques, and discuss their application to
shape classification, segmentation, matching, reconstruction, modeling and
exploration, as well as scene analysis and synthesis, through reviewing the
literature and relating the existing works with both qualitative and numerical
comparisons. We conclude our report with ideas that can inspire future research
in data-driven shape analysis and processing.
Comment: 10 pages, 19 figures
Multi-Modal Learning For Adaptive Scene Understanding
Modern robotic systems typically possess sensors of different modalities. Segmenting the scenes observed by the robot into a discrete set of classes is a central requirement for autonomy. Equally, when a robot navigates through an unknown environment, it is often necessary to adjust the parameters of the scene segmentation model to maintain the same level of accuracy in changing situations. This thesis explores efficient means of adaptive semantic scene segmentation in an online setting with the use of multiple sensor modalities.

First, we devise a novel conditional random field (CRF) inference method for scene segmentation that incorporates global constraints, enforcing particular sets of nodes to be assigned the same class label. To do this efficiently, the CRF is formulated as a relaxed quadratic program whose maximum a posteriori (MAP) solution is found using a gradient-based optimization approach. These global constraints are useful, since they can encode "a priori" information about the final labeling. The new formulation also reduces the dimensionality of the original image-labeling problem. The proposed model is employed in an urban street scene understanding task: camera data is used for the CRF-based semantic segmentation, while global constraints are derived from 3D laser point clouds.

Second, an approach to learn CRF parameters without the need for manually labeled training data is proposed. The model parameters are estimated by optimizing a novel loss function using self-supervised reference labels, obtained from camera and laser information with a minimal amount of human supervision.

Third, an approach is proposed that performs parameter optimization while increasing the model's robustness to non-stationary data distributions over long trajectories. We adopt stochastic gradient descent with a learning rate that can appropriately grow or diminish to gain adaptability to changes in the data distribution.
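The third contribution adapts the step size of stochastic gradient descent to non-stationary data. A minimal sketch of that general idea (hypothetical update rules, not the thesis's algorithm) grows the learning rate when the observed loss rises, which may signal a distribution change, and shrinks it when the loss falls:

```python
def adaptive_rate_sgd(w, loss_grad, stream, lr=0.1, grow=1.5, shrink=0.9):
    """Online SGD whose learning rate grows after a loss increase
    (possible shift in the data distribution) and shrinks otherwise.
    Illustrative sketch only; loss_grad(w, sample) must return the
    per-sample loss and the gradient with respect to w."""
    prev_loss = None
    for sample in stream:
        loss, grad = loss_grad(w, sample)
        if prev_loss is not None:
            lr *= grow if loss > prev_loss else shrink
        w = [wi - lr * gi for wi, gi in zip(w, grad)]  # gradient step
        prev_loss = loss
    return w

# toy stationary problem: track the minimizer of (w - 2)^2
def quad(w, sample):
    return (w[0] - 2.0) ** 2, [2.0 * (w[0] - 2.0)]

w = adaptive_rate_sgd([0.0], quad, range(50))  # w[0] approaches 2.0
```

On this stationary toy problem the loss only falls, so the rate simply decays; the grow branch would fire when a shifted data stream pushes the loss back up, letting the model re-adapt.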