Intestinal Parasites Classification Using Deep Belief Networks
Currently, billions of people worldwide are infected by intestinal
parasites. Diseases caused by such infections constitute a public
health problem in most tropical countries, leading to physical and mental
disorders, and even death in children and immunodeficient individuals.
Although subject to high error rates, human visual inspection still accounts
for the vast majority of clinical diagnoses. In recent years, some works
addressed intelligent computer-aided intestinal parasites classification, but
they usually suffer from misclassification due to similarities between
parasites and fecal impurities. In this paper, we introduce Deep Belief
Networks to the context of automatic intestinal parasites classification.
Experiments conducted over three datasets composed of eggs, larvae, and
protozoa provided promising results, even in the presence of unbalanced
classes and fecal impurities.
A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community
In recent years, deep learning (DL), a re-branding of neural networks (NNs),
has risen to the top in numerous areas, namely computer vision (CV), speech
recognition, natural language processing, etc. Whereas remote sensing (RS)
possesses a number of unique challenges, primarily related to sensors and
applications, inevitably RS draws from many of the same theories as CV; e.g.,
statistics, fusion, and machine learning, to name a few. This means that the RS
community should be aware of, if not at the leading edge of, advancements
like DL. Herein, we provide the most comprehensive survey of state-of-the-art
RS DL research. We also review recent new developments in the DL field that can
be used in DL for RS. Namely, we focus on theories, tools and challenges for
the RS community. Specifically, we focus on unsolved challenges and
opportunities as it relates to (i) inadequate data sets, (ii)
human-understandable solutions for modelling physical phenomena, (iii) Big
Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and
learning algorithms for spectral, spatial and temporal data, (vi) transfer
learning, (vii) an improved theoretical understanding of DL systems, (viii)
high barriers to entry, and (ix) training and optimizing DL models.
Comment: 64 pages, 411 references. To appear in Journal of Applied Remote
Sensing.
Automatic speech feature extraction using a convolutional restricted boltzmann machine
A dissertation submitted to the Faculty of Science, University of
the Witwatersrand, in fulfillment of the requirements for the degree
of Master of Science
2017
Restricted Boltzmann Machines (RBMs) are a statistical learning concept that can
be interpreted as Artificial Neural Networks. They are capable of learning, in an
unsupervised fashion, a set of features with which to describe a data set. Connected
in series, RBMs form a model called a Deep Belief Network (DBN), learning abstract
feature combinations from lower layers. Convolutional RBMs (CRBMs) are a variation
on the RBM architecture in which the learned features are kernels that are convolved
across spatial portions of the input data to generate feature maps identifying whether
a feature is detected in a portion of the input data. Features extracted from speech
audio data by a trained CRBM have recently been shown to compete with the state of the
art on a number of speaker identification tasks. This project implements a similar CRBM
architecture in order to verify previous work, as well as to gain insight into Digital
Signal Processing (DSP), Generative Graphical Models, unsupervised pre-training of
Artificial Neural Networks, and Machine Learning classification tasks. The CRBM
architecture is trained on the TIMIT speech corpus and the learned features are verified
by using them to train a linear classifier on tasks such as speaker genetic sex
classification and speaker identification. The implementation is quantitatively shown
to successfully learn and extract a useful feature representation for the given
classification tasks.
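The forward (feature-extraction) step of a CRBM described above can be sketched in a few lines of NumPy: each learned kernel is slid across the input and passed through a sigmoid, yielding a per-position detection probability map. This is an illustrative sketch only; the shapes, the toy input, and the function names are assumptions, not the dissertation's implementation, and RBM training (contrastive divergence) is omitted.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def crbm_feature_maps(visible, kernels, biases):
    """Hidden-unit activation probabilities of a convolutional RBM.

    Each kernel is cross-correlated ("valid" mode, as is conventional for
    neural networks) across the 2-D input; sigmoid(response + bias) gives
    the probability that the feature is detected at each position.
    """
    H, W = visible.shape
    maps = []
    for k, b in zip(kernels, biases):
        kh, kw = k.shape
        out = np.empty((H - kh + 1, W - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(visible[i:i + kh, j:j + kw] * k)
        maps.append(sigmoid(out + b))
    return np.stack(maps)

rng = np.random.default_rng(0)
spectrogram = rng.standard_normal((12, 20))      # toy stand-in for speech features
kernels = rng.standard_normal((4, 3, 3)) * 0.1   # 4 hypothetical learned kernels
biases = np.zeros(4)
fmaps = crbm_feature_maps(spectrogram, kernels, biases)
print(fmaps.shape)  # (4, 10, 18): one map per kernel, "valid" spatial extent
```

In a full pipeline these maps (or a pooled summary of them) would serve as the feature representation fed to the linear classifier.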
Steering in computational science: mesoscale modelling and simulation
This paper outlines the benefits of computational steering for high
performance computing applications. Lattice-Boltzmann mesoscale fluid
simulations of binary and ternary amphiphilic fluids in two and three
dimensions are used to illustrate the substantial improvements which
computational steering offers in terms of resource efficiency and time to
discover new physics. We discuss details of our current steering
implementations and describe their future outlook with the advent of
computational grids.
Comment: 40 pages, 11 figures. Accepted for publication in Contemporary
Physics.
Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2: Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and in the
optimization of deep neural networks. Part 1 and Part 2 of this work can be
used either as stand-alone separate texts, or indeed as a conjoint
comprehensive review of the exciting field of low-rank tensor networks and
tensor decompositions.
Comment: 232 pages.
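As an illustration of the tensor train (TT) format that the monograph emphasizes, the following is a minimal TT-SVD sketch in NumPy: a tensor is factored into a "train" of 3-way cores by sequential truncated SVDs, and contracting the cores recovers the tensor. The function names, the fixed-rank truncation, and the toy example are assumptions for illustration, not the monograph's algorithms.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose an N-way tensor into TT cores via sequential truncated SVDs."""
    dims = tensor.shape
    cores, r = [], 1
    mat = tensor.reshape(r * dims[0], -1)
    for n in range(len(dims) - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r_new = min(max_rank, len(S))
        # Left factor becomes the n-th core of shape (r_prev, dim_n, r_new).
        cores.append(U[:, :r_new].reshape(r, dims[n], r_new))
        # Carry the truncated remainder forward, folding in the next mode.
        mat = (S[:r_new, None] * Vt[:r_new]).reshape(r_new * dims[n + 1], -1)
        r = r_new
    cores.append(mat.reshape(r, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the train of 3-way cores back into a full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])

# A tensor of TT-rank 2: the sum of two rank-1 (outer-product) terms,
# so truncation at max_rank=2 loses nothing and reconstruction is exact.
rng = np.random.default_rng(1)
a, b, c = rng.standard_normal((3, 6))
d, e, f = rng.standard_normal((3, 6))
T = np.einsum('i,j,k->ijk', a, b, c) + np.einsum('i,j,k->ijk', d, e, f)
cores = tt_svd(T, max_rank=2)
err = np.linalg.norm(tt_reconstruct(cores) - T) / np.linalg.norm(T)
```

The storage gain is the point: the 6x6x6 tensor (216 entries) is held in three small cores (12 + 24 + 12 = 48 entries), and the gap widens rapidly with tensor order, which is the sense in which TT representations alleviate the curse of dimensionality.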