Histopathological image analysis: a review
Over the past decade, dramatic increases in computational power and improvements in image analysis algorithms have allowed the development of powerful computer-assisted analytical approaches to radiological data. With the recent advent of whole slide digital scanners, tissue histopathology slides can now be digitized and stored in digital image form. Consequently, digitized tissue histopathology has now become amenable to the application of computerized image analysis and machine learning techniques. Analogous to the role of computer-assisted diagnosis (CAD) algorithms in medical imaging to complement the opinion of a radiologist, CAD algorithms have begun to be developed for disease detection, diagnosis, and prognosis prediction to complement the opinion of the pathologist. In this paper, we review the recent state-of-the-art CAD technology for digitized histopathology. This paper also briefly describes the development and application of novel image analysis technology for a few specific histopathology-related problems being pursued in the United States and Europe.
A Collaborative Computer Aided Diagnosis (C-CAD) System with Eye-Tracking, Sparse Attentional Model, and Deep Learning
There are at least two categories of errors in radiology screening that can
lead to suboptimal diagnostic decisions and interventions: (i) human fallibility
and (ii) the complexity of visual search. Computer-aided diagnostic (CAD) tools are
developed to help radiologists to compensate for some of these errors. However,
despite their significant improvements over conventional screening strategies,
most CAD systems do not go beyond their use as second-opinion tools, as they
produce a high number of false positives, which human interpreters need to
correct. In parallel with efforts in computerized analysis of radiology scans,
several researchers have examined behaviors of radiologists while screening
medical images to better understand how and why they miss tumors, how they
interact with the information in an image, and how they search for unknown
pathology in the images. Eye-tracking tools have been instrumental in exploring
answers to these fundamental questions. In this paper, we aim to develop a
paradigm shift CAD system, called collaborative CAD (C-CAD), that unifies both
of the above mentioned research lines: CAD and eye-tracking. We design an
eye-tracking interface providing radiologists with a real radiology reading
room experience. Then, we propose a novel algorithm that unifies eye-tracking
data and a CAD system. Specifically, we present a new graph based clustering
and sparsification algorithm to transform eye-tracking data (gaze) into a
signal model to interpret gaze patterns quantitatively and qualitatively. The
proposed C-CAD collaborates with radiologists via eye-tracking technology and
helps them to improve diagnostic decisions. The C-CAD learns radiologists'
search efficiency by processing their gaze patterns. To do this, the C-CAD uses
a deep learning algorithm in a newly designed multi-task learning platform to
segment and diagnose cancers simultaneously.
Comment: Submitted to Medical Image Analysis Journal (MedIA)
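The graph-based transformation of gaze data can be illustrated with a toy sketch. This is not the paper's algorithm; the function name, the fixed pixel `radius`, and the connected-components rule are illustrative assumptions, standing in for the clustering-and-sparsification step that turns raw fixations into a compact signal.

```python
import numpy as np

def cluster_gaze(points, radius=30.0):
    """Group 2-D gaze fixations into clusters by linking any two points
    closer than `radius` pixels and taking connected components of the
    resulting graph. A toy stand-in for graph-based gaze clustering."""
    n = len(points)
    # pairwise distances -> adjacency graph over fixations
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    adj = d < radius
    labels = -np.ones(n, dtype=int)
    cur = 0
    for i in range(n):
        if labels[i] >= 0:
            continue
        # flood-fill one connected component
        stack, labels[i] = [i], cur
        while stack:
            j = stack.pop()
            for k in np.nonzero(adj[j])[0]:
                if labels[k] < 0:
                    labels[k] = cur
                    stack.append(k)
        cur += 1
    return labels

gaze = np.array([[0, 0], [5, 4], [200, 210], [204, 215]], dtype=float)
print(cluster_gaze(gaze))  # two fixation clusters: [0 0 1 1]
```

Clusters like these can then be summarized (dwell time, revisit counts) to quantify a reader's search pattern, which is the kind of signal the C-CAD framework consumes.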
Computer-Aided Detection and diagnosis for prostate cancer based on mono and multi-parametric MRI: A review
Prostate cancer is the second most diagnosed cancer in men all over the world. In the last decades, new imaging techniques based on Magnetic Resonance Imaging (MRI) have been developed, improving diagnosis. In practice, diagnosis can be affected by multiple factors such as observer variability and the visibility and complexity of the lesions. In this regard, computer-aided detection and computer-aided diagnosis systems have been designed to help radiologists in their clinical practice. Research on computer-aided systems specifically focused on prostate cancer is a young technology and has been part of a dynamic field of research for the last ten years. This survey aims to provide a comprehensive review of the state of the art over this period, focusing on the different stages composing the work-flow of a computer-aided system. We also provide a comparison between studies and a discussion about the potential avenues for future research. In addition, this paper presents a new public online dataset which is made available to the research community with the aim of providing a common evaluation framework to overcome some of the current limitations identified in this survey.
Machine Learning on Neutron and X-Ray Scattering
Neutron and X-ray scattering represent two state-of-the-art materials
characterization techniques that measure materials' structural and dynamical
properties with high precision. These techniques play critical roles in
understanding a wide variety of materials systems, from catalysis to polymers,
nanomaterials to macromolecules, and energy materials to quantum materials. In
recent years, neutron and X-ray scattering have received a significant boost
due to the development and increased application of machine learning to
materials problems. This article reviews the recent progress in applying
machine learning techniques to augment various neutron and X-ray scattering
techniques. We highlight the integration of machine learning methods into the
typical workflow of scattering experiments. We focus on scattering problems
that were challenging for traditional methods but are addressable using machine
learning, such as leveraging the knowledge of simple materials to model more
complicated systems, learning with limited data or incomplete labels,
identifying meaningful spectra and materials' representations for learning
tasks, mitigating spectral noise, and many others. We present an outlook on a
few emerging roles machine learning may play in broad types of scattering and
spectroscopic problems in the foreseeable future.
Comment: 56 pages, 12 figures. Feedback most welcome.
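As a minimal illustration of the spectral-noise problem mentioned above, the sketch below applies a classical moving-average baseline to a synthetic noisy peak; a learned denoiser would typically be benchmarked against exactly this kind of baseline. The synthetic peak, noise level, and window width are illustrative assumptions, not data from any experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
q = np.linspace(0, 1, 200)
clean = np.exp(-((q - 0.5) / 0.05) ** 2)    # idealized Bragg-like peak
noisy = clean + rng.normal(0, 0.1, q.size)  # stand-in for counting noise

def moving_average(y, w=9):
    """Classical smoothing baseline for spectral denoising."""
    kernel = np.ones(w) / w
    return np.convolve(y, kernel, mode="same")

denoised = moving_average(noisy)

def mse(a, b):
    return float(np.mean((a - b) ** 2))

# smoothing trades a little peak broadening for a large noise reduction
print(mse(noisy, clean), mse(denoised, clean))
```

The trade-off visible here (noise suppression versus peak distortion) is precisely where learned denoisers aim to improve on fixed linear filters.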
Machine learning in acoustics: theory and applications
Acoustic data provide scientific and engineering insights in fields ranging
from biology and communications to ocean and Earth science. We survey the
recent advances and transformative potential of machine learning (ML),
including deep learning, in the field of acoustics. ML is a broad family of
techniques, which are often based in statistics, for automatically detecting
and utilizing patterns in data. Relative to conventional acoustics and signal
processing, ML is data-driven. Given sufficient training data, ML can discover
complex relationships between features and desired labels or actions, or
between features themselves. With large volumes of training data, ML can
discover models describing complex acoustic phenomena such as human speech and
reverberation. ML in acoustics is rapidly developing with compelling results
and significant future promise. We first introduce ML, then highlight ML
developments in four acoustics research areas: source localization in speech
processing, source localization in ocean acoustics, bioacoustics, and
environmental sounds in everyday scenes.
Comment: Published with free access in Journal of the Acoustical Society of
America, 27 Nov. 2019
Predictive Modeling of Biomedical Signals Using Controlled Spatial Transformation
Developing diagnostic and monitoring tools is an important paradigm in smart
health; processing Electrocardiogram (ECG) signals to monitor a patient's heart
activity is a key example, due to the high mortality rate of heart-related
disease. However, current heart monitoring devices suffer from two important
drawbacks: i) failure in capturing inter-patient variability, and ii)
incapability of identifying heart abnormalities ahead of time to take effective
preventive and therapeutic interventions.
This paper proposes a novel predictive signal processing method to solve
these issues. We propose a two-step classification framework for ECG signals,
where a global classifier recognizes severe abnormalities by comparing the
signal against a universal reference model. The seemingly normal signals are
then passed through a personalized classifier, to recognize mild but
informative signal morphology distortions. The key idea is to develop a novel
deviation analysis based on a controlled nonlinear transformation to capture
significant deviations of the signal toward any of the predefined abnormality
classes. Here, we embrace the proven but overlooked fact that certain features
of ECG signals reflect underlying cardiac abnormalities before the occurrences
of cardiac disease. The proposed method achieves a classification accuracy of
96.6% and provides a unique feature of predictive analysis by providing
warnings before critical heart conditions. In particular, the chance of
observing a severe problem (a red alarm) is raised by about 5% to 10% after
observing a yellow alarm of the same type. Although we used this methodology to
provide early precaution messages to elderly and high-risk heart-patients, the
proposed method is general and applicable to similar bio-medical signal
processing applications.
Comment: 13 pages, 7 figures, 7 tables
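The two-step framework can be sketched as follows. This is an illustrative toy, not the paper's trained classifiers: the relative-distance rule, the thresholds, and the function names are all assumptions, but the control flow matches the described pipeline (a global check against a universal reference flags severe abnormality, then a personalized check catches mild morphology distortions).

```python
import numpy as np

def two_step_classify(beat, universal_ref, personal_ref,
                      severe_thr=0.5, mild_thr=0.15):
    """Toy two-step ECG screening: global classifier first,
    personalized classifier second. Thresholds are illustrative."""
    # step 1: severe abnormality vs. a universal reference model
    if np.linalg.norm(beat - universal_ref) / np.linalg.norm(universal_ref) > severe_thr:
        return "red alarm"
    # step 2: mild but informative deviation from this patient's baseline
    if np.linalg.norm(beat - personal_ref) / np.linalg.norm(personal_ref) > mild_thr:
        return "yellow alarm"
    return "normal"

t = np.linspace(0, 1, 100)
universal = np.sin(2 * np.pi * t)   # population-level beat template
personal = universal * 1.05         # this patient's usual morphology

print(two_step_classify(personal, universal, personal))        # normal
print(two_step_classify(personal * 1.3, universal, personal))  # yellow alarm
print(two_step_classify(personal * 2.5, universal, personal))  # red alarm
```

The yellow/red distinction mirrors the abstract's predictive feature: mild deviations raise an early warning before a signal crosses the severe threshold.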
Statistical mechanics of complex neural systems and high dimensional data
Recent experimental advances in neuroscience have opened new vistas into the
immense complexity of neuronal networks. This proliferation of data challenges
us on two parallel fronts. First, how can we form adequate theoretical
frameworks for understanding how dynamical network processes cooperate across
widely disparate spatiotemporal scales to solve important computational
problems? And second, how can we extract meaningful models of neuronal systems
from high dimensional datasets? To aid in these challenges, we give a
pedagogical review of a collection of ideas and theoretical methods arising at
the intersection of statistical physics, computer science and neurobiology. We
introduce the interrelated replica and cavity methods, which originated in
statistical physics as powerful ways to quantitatively analyze large highly
heterogeneous systems of many interacting degrees of freedom. We also introduce
the closely related notion of message passing in graphical models, which
originated in computer science as a distributed algorithm capable of solving
large inference and optimization problems involving many coupled variables. We
then show how both the statistical physics and computer science perspectives
can be applied in a wide diversity of contexts to problems arising in
theoretical neuroscience and data analysis. Along the way we discuss spin
glasses, learning theory, illusions of structure in noise, random matrices,
dimensionality reduction, and compressed sensing, all within the unified
formalism of the replica method. Moreover, we review recent conceptual
connections between message passing in graphical models, and neural computation
and learning. Overall, these ideas illustrate how statistical physics and
computer science might provide a lens through which we can uncover emergent
computational functions buried deep within the dynamical complexities of
neuronal networks.
Comment: 72 pages, 8 figures, iopart.cls, to appear in JSTAT
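Message passing in graphical models, one of the reviewed methods, can be seen in a minimal instance: sum-product messages on a three-variable binary chain, checked against the exact marginal. The potentials below are illustrative numbers, not taken from the paper.

```python
import numpy as np

# Sum-product message passing on a binary chain x1 - x2 - x3.
phi1 = np.array([0.9, 0.1])                  # unary evidence on x1
psi12 = np.array([[1.0, 0.5], [0.5, 1.0]])   # pairwise compatibilities
psi23 = np.array([[1.0, 0.2], [0.2, 1.0]])

# messages flowing into x2 from both ends of the chain
m1_to_2 = psi12.T @ phi1          # summarizes everything upstream of x2
m3_to_2 = psi23 @ np.ones(2)      # summarizes everything downstream of x2
belief2 = m1_to_2 * m3_to_2
belief2 /= belief2.sum()

# brute-force marginal of x2 as a correctness check
joint = phi1[:, None, None] * psi12[:, :, None] * psi23[None, :, :]
marg2 = joint.sum(axis=(0, 2))
marg2 /= marg2.sum()

print(np.allclose(belief2, marg2))  # local messages recover the exact marginal
```

On trees this equivalence is exact; on loopy graphs the same local updates become the approximate (but often effective) algorithms whose connection to neural computation the review discusses.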
Deep Learning for Embedding and Integrating Multimodal Biomedical Data
Biomedical data is being generated at extremely high throughput and dimensionality by technologies in areas ranging from single-cell genomics, proteomics, and transcriptomics (cytometry, single-cell RNA and ATAC sequencing) to neuroscience and cognition (fMRI and PET) to pharmaceuticals (drug perturbations and interactions). These new and emerging technologies and the datasets they create give an unprecedented view into the workings of their respective biological entities. However, there is a large gap between the information contained in these datasets and the insights that current machine learning methods can extract from them. This is especially the case when multiple technologies can measure the same underlying biological entity or system. By separately analyzing the same system from different views gathered by different data modalities, patterns are left unobserved if they only emerge from the multi-dimensional joint representation of all of the modalities together. Through an interdisciplinary approach that emphasizes active collaboration with data domain experts, my research has developed models for data integration, extracting important insights through the joint analysis of varied data sources. In this thesis, I discuss models that address this task of multi-modal data integration, especially generative adversarial networks (GANs) and autoencoders (AEs). My research has been focused on using both of these models in a generative way for concrete problems in cutting-edge scientific applications rather than the exclusive focus on the generation of high-resolution natural images. The research in this thesis is united around ideas of building models that can extract new knowledge from scientific data inaccessible to currently existing methods.
Deep Learning in Single-Cell Analysis
Single-cell technologies are revolutionizing the entire field of biology. The
large volumes of data generated by single-cell technologies are
high-dimensional, sparse, heterogeneous, and have complicated dependency
structures, making analyses using conventional machine learning approaches
challenging and impractical. In tackling these challenges, deep learning often
demonstrates superior performance compared to traditional machine learning
methods. In this work, we give a comprehensive survey on deep learning in
single-cell analysis. We first introduce background on single-cell technologies
and their development, as well as fundamental concepts of deep learning
including the most popular deep architectures. We present an overview of the
single-cell analytic pipeline pursued in research applications while noting
divergences due to data sources or specific applications. We then review seven
popular tasks spanning different stages of the single-cell analysis
pipeline, including multimodal integration, imputation, clustering, spatial
domain identification, cell-type deconvolution, cell segmentation, and
cell-type annotation. Under each task, we describe the most recent developments
in classical and deep learning methods and discuss their advantages and
disadvantages. Deep learning tools and benchmark datasets are also summarized
for each task. Finally, we discuss the future directions and the most recent
challenges. This survey will serve as a reference for biologists and computer
scientists, encouraging collaborations.
Comment: 77 pages, 11 figures, 15 tables, deep learning, single-cell analysis
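The imputation task from the list above can be made concrete with a classical baseline: filling dropout zeros in a cells-by-genes matrix from the nearest neighboring cells. This is the kind of simple method that deep autoencoder approaches are benchmarked against; the function name, neighbor count, and toy matrix are illustrative assumptions.

```python
import numpy as np

def knn_impute(X, k=2):
    """Fill zero (dropout) entries of a cells x genes matrix with the
    mean of the k nearest cells' values at those genes -- a classical
    baseline for single-cell imputation."""
    X = np.asarray(X, dtype=float)
    out = X.copy()
    # pairwise distances between cells (rows)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # a cell is not its own neighbor
    for i in range(X.shape[0]):
        nbrs = np.argsort(d[i])[:k]
        zeros = X[i] == 0
        out[i, zeros] = X[nbrs][:, zeros].mean(axis=0)
    return out

X = np.array([[5, 0, 3],    # dropout in gene 1
              [5, 2, 3],
              [5, 2, 4],
              [0, 9, 9]])   # dropout in gene 0
print(knn_impute(X, k=2))
```

Deep methods improve on this baseline mainly by learning nonlinear structure across many genes at once, rather than relying on raw Euclidean neighbors in the sparse count space.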