Multi-Sensor Fuzzy Data Fusion Using Sensors with Different Characteristics
This paper proposes a new approach to multi-sensor data fusion. It suggests
that aggregation of data from multiple sensors can be done more efficiently
when we consider information about the sensors' different characteristics. As
in most research on sensor characteristics, especially in control systems, our
focus is on sensor accuracy and frequency response. A rule-based fuzzy system
is presented for fusing raw data obtained from sensors with complementary
characteristics in accuracy and bandwidth. Furthermore, a fuzzy predictor
system is suggested, aiming at the extreme accuracy commonly needed in highly
sensitive applications. Advantages of our proposed sensor fusion
system are shown by simulation of a control system utilizing the fusion system
for output estimation. Comment: CSI Journal in Computer Science and
Engineering, published 2019 (first submission 2010).
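The complementary-fusion idea can be illustrated with a minimal sketch. The membership function, rule pair, and all parameters below are assumptions for illustration, not the paper's actual rule base: one rule trusts the accurate sensor when the signal changes slowly, the other trusts the wide-bandwidth sensor when it changes fast.

```python
import numpy as np

# Illustrative sketch only (not the paper's rule base): fuse a
# slow-but-accurate sensor with a fast-but-noisy one using a single
# fuzzy rule pair realized as a smooth membership function on the
# estimated rate of change of the signal.

def membership_fast(rate, lo=0.1, hi=1.0):
    """Degree in [0, 1] to which the signal is 'changing fast'."""
    return np.clip((np.abs(rate) - lo) / (hi - lo), 0.0, 1.0)

def fuse(slow_accurate, fast_noisy, dt=0.01):
    """Pointwise fuzzy-weighted average of two sensor streams."""
    rate = np.gradient(fast_noisy, dt)   # crude estimate of the dynamics
    w = membership_fast(rate)            # firing strength of the 'fast' rule
    return w * fast_noisy + (1.0 - w) * slow_accurate

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 101)
truth = np.sin(2 * np.pi * t)
fused = fuse(truth + 0.001, truth + 0.05 * rng.standard_normal(t.size))
```

The fused stream leans on the accurate sensor in quiet regions and on the wide-bandwidth sensor during transients.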
HMS-Net: Hierarchical Multi-scale Sparsity-invariant Network for Sparse Depth Completion
Dense depth cues are important and have wide applications in various computer
vision tasks. In autonomous driving, LIDAR sensors are adopted to acquire depth
measurements around the vehicle to perceive the surrounding environments.
However, depth maps obtained by LIDAR are generally sparse because of hardware
limitations. The task of depth completion, which aims at generating a dense
depth map from an input sparse depth map, has therefore attracted increasing
attention. To effectively utilize multi-scale features, we propose three novel
sparsity-invariant operations, based on which we build a sparsity-invariant
multi-scale encoder-decoder network (HMS-Net) that handles sparse inputs and
sparse feature maps. Additional RGB features can be incorporated to further
improve the depth completion performance. Our extensive experiments and
component analysis on two public benchmarks, KITTI depth completion benchmark
and NYU-depth-v2 dataset, demonstrate the effectiveness of the proposed
approach. As of Aug. 12th, 2018, on KITTI depth completion leaderboard, our
proposed model without RGB guidance ranks first among all peer-reviewed methods
without using RGB information, and our model with RGB guidance ranks second
among all RGB-guided methods. Comment: IEEE Trans. on Image Processing.
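A minimal sketch of the sparsity-invariant convolution that such networks build on (single channel, stride 1, illustrative only, not the authors' implementation): convolve only over observed pixels and renormalize by the valid-pixel weight under the kernel, so the response does not depend on the local sparsity pattern.

```python
import numpy as np

def sparse_conv(depth, mask, kernel, eps=1e-8):
    """Sparsity-invariant convolution: average only over observed pixels."""
    H, W = depth.shape
    k = kernel.shape[0]
    p = k // 2
    d = np.pad(depth * mask, p)          # zero out unobserved pixels
    m = np.pad(mask.astype(float), p)
    out = np.zeros((H, W))
    valid = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            dw = d[i:i + k, j:j + k]
            mw = m[i:i + k, j:j + k]
            norm = (kernel * mw).sum()   # total weight of observed pixels
            out[i, j] = (kernel * dw).sum() / (norm + eps)
            valid[i, j] = float(mw.max() > 0)  # mask propagates by max-pool
    return out, valid

rng = np.random.default_rng(0)
depth = np.full((6, 6), 3.0)                     # constant ground-truth depth
mask = (rng.random((6, 6)) < 0.3).astype(float)  # ~30% of pixels observed
out, valid = sparse_conv(depth, mask, np.ones((3, 3)))
```

On a constant depth map the output equals the input wherever any observation falls under the kernel, regardless of how sparse the mask is.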
Machine learning based hyperspectral image analysis: A survey
Hyperspectral sensors enable the study of the chemical properties of scene
materials remotely for the purpose of identification, detection, and chemical
composition analysis of objects in the environment. Hence, hyperspectral images
captured from earth observing satellites and aircraft have been increasingly
important in agriculture, environmental monitoring, urban planning, mining, and
defense. Machine learning algorithms, owing to their outstanding predictive
power, have become a key tool for modern hyperspectral image analysis.
Therefore, a solid understanding of machine learning techniques has become
essential for remote sensing researchers and practitioners. This paper reviews
and compares recent machine learning-based hyperspectral image analysis
methods published in the literature. We organize the methods by image analysis
task and by type
of machine learning algorithm, and present a two-way mapping between the image
analysis tasks and the types of machine learning algorithms that can be applied
to them. The paper is comprehensive in coverage of both hyperspectral image
analysis tasks and machine learning algorithms. The image analysis tasks
considered are land cover classification, target detection, unmixing, and
physical parameter estimation. The machine learning algorithms covered are
Gaussian models, linear regression, logistic regression, support vector
machines, Gaussian mixture models, latent linear models, sparse linear models,
ensemble learning, directed graphical models, undirected graphical models,
clustering, Gaussian processes, Dirichlet processes, and deep learning. We
also discuss the open challenges in the field of hyperspectral image analysis
and explore possible future directions.
Bayesian Extensions of Kernel Least Mean Squares
The kernel least mean squares (KLMS) algorithm is a computationally efficient
nonlinear adaptive filtering method that "kernelizes" the celebrated (linear)
least mean squares algorithm. We demonstrate that the least mean squares
algorithm is closely related to Kalman filtering, and thus the KLMS can be
interpreted as an approximate Bayesian filtering method. This allows us to
systematically develop extensions of the KLMS by modifying the underlying
state-space and observation models. The resulting extensions introduce many
desirable properties such as "forgetting", and the ability to learn from
discrete data, while retaining the computational simplicity and time complexity
of the original algorithm. Comment: 7 pages, 4 figures.
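The plain KLMS recursion that the abstract builds on can be sketched as follows (textbook form with a Gaussian kernel; the paper's Bayesian extensions are not reproduced here, and the step size and kernel width are illustrative):

```python
import numpy as np

# Basic KLMS: every new sample becomes a kernel center whose coefficient
# is eta times the instantaneous prediction error. Prediction cost grows
# with the number of centers, but each coefficient update is O(1).

def gauss(x, c, width):
    return np.exp(-((x - c) ** 2) / (2.0 * width ** 2))

def klms(xs, ys, eta=0.2, width=0.5):
    centers, coeffs, preds = [], [], []
    for x, y in zip(xs, ys):
        yhat = sum(a * gauss(x, c, width) for a, c in zip(coeffs, centers))
        centers.append(x)
        coeffs.append(eta * (y - yhat))   # O(1) coefficient update
        preds.append(yhat)
    return np.array(preds)

rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 4.0, 200)
ys = np.sin(xs)
preds = klms(xs, ys)
errs = np.abs(preds - ys)
```

As samples arrive, the online prediction error shrinks, which is the behavior the Bayesian-filtering interpretation explains.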
Radiological images and machine learning: trends, perspectives, and prospects
The application of machine learning to radiological images is an increasingly
active research area that is expected to grow in the next five to ten years.
Recent advances in machine learning have the potential to recognize and
classify complex patterns from different radiological imaging modalities such
as x-rays, computed tomography, magnetic resonance imaging and positron
emission tomography imaging. In many applications, machine learning based
systems have shown comparable performance to human decision-making. The
applications of machine learning are the key ingredients of future clinical
decision making and monitoring systems. This review covers the fundamental
concepts behind various machine learning techniques and their applications in
several radiological imaging areas, such as medical image segmentation, brain
function studies and neurological disease diagnosis, as well as computer-aided
systems, image registration, and content-based image retrieval systems.
We also briefly discuss current challenges and future directions regarding the
application of machine learning in radiological imaging. By giving insight
into how to take advantage of machine learning powered applications, we expect
that clinicians will be able to prevent and diagnose diseases more accurately
and efficiently. Comment: 13 figures.
Monotonic Calibrated Interpolated Look-Up Tables
Real-world machine learning applications may require functions that are
fast-to-evaluate and interpretable. In particular, guaranteed monotonicity of
the learned function can be critical to user trust. We propose meeting these
goals for low-dimensional machine learning problems by learning flexible,
monotonic functions using calibrated interpolated look-up tables. We extend the
structural risk minimization framework of lattice regression to train monotonic
look-up tables by solving a convex problem with appropriate linear inequality
constraints. In addition, we propose jointly learning interpretable
calibrations of each feature to normalize continuous features and handle
categorical or missing data, at the cost of making the objective non-convex. We
address large-scale learning through parallelization and mini-batching, and
propose random sampling of additive regularizer terms. Case studies with
real-world problems with five to sixteen features and thousands to millions of
training samples demonstrate the proposed monotonic functions can achieve
state-of-the-art accuracy on practical problems while providing greater
transparency to users. Comment: To appear (with minor revisions), Journal of
Machine Learning Research, 201
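The constrained-fit idea can be illustrated with a toy 1-D sketch. Everything below is an assumed illustration: the paper solves the constrained convex problem directly and adds per-feature calibrators, whereas this sketch fits look-up-table values by gradient descent and enforces the monotonicity inequalities v[j+1] >= v[j] with an isotonic projection after each step.

```python
import numpy as np

def isotonic_projection(v):
    """Nearest (L2) nondecreasing sequence, by pool-adjacent-violators."""
    vals, wts = [], []
    for vi in v.astype(float):
        vals.append(vi)
        wts.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:
            w = wts[-1] + wts[-2]
            merged = (vals[-1] * wts[-1] + vals[-2] * wts[-2]) / w
            vals[-2:] = [merged]
            wts[-2:] = [w]
    return np.concatenate([np.full(w, val) for val, w in zip(vals, wts)])

def fit_monotone_lut(x, y, n_keys=8, lr=2.0, steps=300):
    keys = np.linspace(x.min(), x.max(), n_keys)
    # hat-function basis: column j is the interpolation weight of key j
    B = np.stack([np.interp(x, keys, np.eye(n_keys)[j])
                  for j in range(n_keys)], axis=1)
    v = np.zeros(n_keys)
    for _ in range(steps):
        grad = B.T @ (B @ v - y) / len(x)      # squared-error gradient
        v = isotonic_projection(v - lr * grad)  # project onto monotone set
    return keys, v

x = np.linspace(0.0, 1.0, 200)
y = np.sqrt(x)                        # a monotone target
keys, v = fit_monotone_lut(x, y)
pred = np.interp(x, keys, v)          # fast-to-evaluate interpolated LUT
```

Evaluation is a single interpolation, which is what makes the learned function both fast and easy to inspect.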
Linked Component Analysis from Matrices to High Order Tensors: Applications to Biomedical Data
With the increasing availability of various sensor technologies, we now have
access to large amounts of multi-block (also called multi-set,
multi-relational, or multi-view) data that need to be jointly analyzed to
explore their latent connections. Various component analysis methods have
played an increasingly important role for the analysis of such coupled data. In
this paper, we first provide a brief review of existing matrix-based (two-way)
component analysis methods for the joint analysis of such data with a focus on
biomedical applications. Then, we discuss their important extensions and
generalization to multi-block multiway (tensor) data. We show how constrained
multi-block tensor decomposition methods are able to extract similar or
statistically dependent common features that are shared by all blocks, by
incorporating the multiway nature of data. Special emphasis is given to the
flexible common and individual feature analysis of multi-block data with the
aim to simultaneously extract common and individual latent components with
desired properties and types of diversity. Illustrative examples are given to
demonstrate their effectiveness for biomedical data analysis. Comment: 20
pages, 11 figures, Proceedings of the IEEE, 201
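The matrix-based starting point that the review generalizes can be shown in a few lines with toy data (sizes and block labels are illustrative assumptions): two data blocks driven by shared latent components, whose common subspace a joint truncated SVD of the concatenated blocks recovers.

```python
import numpy as np

# Smallest matrix-based instance of the coupled-analysis idea: two
# blocks share common latent components; stacking them side by side and
# taking a truncated SVD recovers the shared subspace. The paper extends
# this to constrained multi-block tensor decompositions with both common
# and individual components.

rng = np.random.default_rng(0)
common = rng.standard_normal((50, 2))          # shared latent components
X1 = common @ rng.standard_normal((2, 30))     # block 1 (e.g. one modality)
X2 = common @ rng.standard_normal((2, 40))     # block 2 (another modality)

U, s, _ = np.linalg.svd(np.hstack([X1, X2]), full_matrices=False)
shared = U[:, :2]                              # estimated common subspace

# residual of projecting the true components onto the recovered subspace
resid = common - shared @ (shared.T @ common)
```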
Sparse Deep Nonnegative Matrix Factorization
Nonnegative matrix factorization is a powerful technique to realize dimension
reduction and pattern recognition through single-layer data representation
learning. Deep learning, however, with its carefully designed hierarchical
structure, is able to combine hidden features to form more representative
features for pattern recognition. In this paper, we propose sparse deep
nonnegative matrix factorization models to analyze complex data for more
accurate classification and better feature interpretation. Such models are
designed to learn localized features or generate more discriminative
representations for samples in distinct classes by imposing an L1-norm penalty
on the columns of certain factors. By extending the one-layer model into a
multi-layer one with sparsity, we provide a hierarchical way to analyze big
data and extract hidden features intuitively thanks to nonnegativity. We adopt
Nesterov's accelerated gradient algorithm to speed up the computation, with a
convergence rate of O(1/k^2) after k iterations. We also analyze the
computational complexity of our framework to demonstrate its efficiency. To
better handle linearly inseparable data, we further incorporate popular
nonlinear functions into this framework and explore their performance. We
apply our models to two
benchmarking image datasets, demonstrating our models can achieve competitive
or better classification performance and produce intuitive interpretations
compared with the typical NMF and competing multi-layer models. Comment: 13
pages, 8 figures.
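The single-layer building block can be sketched with the standard multiplicative updates for L1-penalized NMF. This is an assumed illustration, not the paper's method: the paper stacks such factorizations into a deep model and optimizes with Nesterov's accelerated gradient, neither of which is reproduced here.

```python
import numpy as np

# Sparse NMF, one layer: minimize ||X - WH||^2 + lam * ||H||_1 with
# nonnegative W, H via multiplicative updates. The lam term in the
# denominator of the H update is what shrinks H toward sparsity.

def sparse_nmf(X, r, lam=0.01, iters=300, seed=0):
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    eps = 1e-9
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + lam + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
        s = W.sum(axis=0, keepdims=True) + eps   # resolve scaling ambiguity
        W /= s
        H *= s.T
    return W, H

rng = np.random.default_rng(1)
X = rng.random((20, 3)) @ rng.random((3, 30))   # exactly rank-3, nonnegative
W, H = sparse_nmf(X, r=3)
rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

A deep variant would factor H again at each level; the nonnegativity is what keeps the stacked features interpretable.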
Generic Image Classification Approaches Excel on Face Recognition
The main finding of this work is that the standard image classification
pipeline, which consists of dictionary learning, feature encoding, spatial
pyramid pooling and linear classification, outperforms all state-of-the-art
face recognition methods on the tested benchmark datasets (we have tested on
AR, Extended Yale B, the challenging FERET, and LFW-a datasets). This
surprising and prominent result suggests that those advances in generic image
classification can be directly applied to improve face recognition systems. In
other words, face recognition may not need to be viewed as a separate object
classification problem.
While a large body of recent residual-based face recognition methods focuses
on developing complex dictionary learning algorithms, in this work we show that
a dictionary of randomly extracted patches (even from non-face images) can
achieve very promising results using the image classification pipeline. That
is, the choice of dictionary learning method may not be important. Instead, we
find that learning multiple dictionaries using different low-level image
features often improves the final classification accuracy. Our proposed face
recognition approach offers the best reported results on the widely-used face
recognition benchmark datasets. In particular, on the challenging FERET and
LFW-a datasets, we improve the best reported accuracies in the literature by
about 20% and 30%, respectively. Comment: 10 pages.
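The pipeline's feature path (a dictionary of randomly extracted patches, soft-threshold encoding, spatial pyramid pooling) can be sketched as follows. Patch sizes, the threshold, and the pooling depth are illustrative assumptions, and the linear classifier and benchmark datasets are omitted:

```python
import numpy as np

# Random-patch dictionary + soft-threshold encoding + 2-level spatial
# pyramid max-pooling, the feature path of the generic classification
# pipeline the abstract refers to (sizes are illustrative).

def extract_patches(img, size, n, rng):
    """Dictionary of n contrast-normalized patches sampled at random."""
    H, W = img.shape
    ps = []
    for _ in range(n):
        i = rng.integers(0, H - size + 1)
        j = rng.integers(0, W - size + 1)
        p = img[i:i + size, j:j + size].ravel().astype(float)
        ps.append((p - p.mean()) / (p.std() + 1e-8))
    return np.stack(ps)

def encode(img, D, size, alpha=0.25):
    """Soft-threshold code of every patch against the dictionary."""
    H, W = img.shape
    gh, gw = H - size + 1, W - size + 1
    codes = np.zeros((gh, gw, D.shape[0]))
    for i in range(gh):
        for j in range(gw):
            p = img[i:i + size, j:j + size].ravel().astype(float)
            p = (p - p.mean()) / (p.std() + 1e-8)
            codes[i, j] = np.maximum(D @ p - alpha, 0)
    return codes

def pyramid_pool(codes):
    """Max-pool over the whole grid (1x1) and over a 2x2 partition."""
    gh, gw, k = codes.shape
    feats = [codes.reshape(-1, k).max(axis=0)]
    for bi in range(2):
        for bj in range(2):
            blk = codes[bi * gh // 2:(bi + 1) * gh // 2,
                        bj * gw // 2:(bj + 1) * gw // 2]
            feats.append(blk.reshape(-1, k).max(axis=0))
    return np.concatenate(feats)

rng = np.random.default_rng(0)
img = rng.random((32, 32))
D = extract_patches(img, size=6, n=64, rng=rng)  # random-patch dictionary
feat = pyramid_pool(encode(img, D, size=6))      # 5 * 64 = 320 dims
```

The resulting fixed-length feature vector would normally feed a linear classifier.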
Learning Power Spectrum Maps from Quantized Power Measurements
Power spectral density (PSD) maps providing the distribution of RF power
across space and frequency are constructed using power measurements collected
by a network of low-cost sensors. By introducing linear compression and
quantization to a small number of bits, sensor measurements can be communicated
to the fusion center with minimal bandwidth requirements. Strengths of data-
and model-driven approaches are combined to develop estimators capable of
incorporating multiple forms of spectral and propagation prior information
while fitting the rapid variations of shadow fading across space. To this end,
novel nonparametric and semiparametric formulations are investigated. It is
shown that PSD maps can be obtained using support vector machine-type solvers.
In addition to batch approaches, an online algorithm attuned to real-time
operation is developed. Numerical tests assess the performance of the novel
algorithms. Comment: Submitted Jun. 201
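The sensor-side compression-and-quantization step can be sketched as follows; the matrix sizes, bit depth, and quantizer range are illustrative assumptions, and the PSD map estimation itself (the SVM-type solvers) is omitted:

```python
import numpy as np

# One sensor's report: linearly compress local power samples, then
# quantize to a few bits before sending to the fusion center, keeping
# bandwidth requirements minimal.

def quantize(z, bits, lo=-2.0, hi=2.0):
    """Uniform mid-rise quantizer with saturation on [lo, hi]."""
    levels = 2 ** bits
    step = (hi - lo) / levels
    idx = np.clip(np.floor((z - lo) / step), 0, levels - 1)
    return lo + (idx + 0.5) * step        # reconstruction levels

rng = np.random.default_rng(0)
powers = rng.random(64)                   # local power samples at one sensor
A = rng.standard_normal((8, 64)) / 8.0    # linear compression: 64 -> 8
z = A @ powers                            # compressed measurements
zq = quantize(z, bits=3)                  # what the fusion center receives
```

The fusion center then fits the PSD map from many such low-rate reports.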