Soft-in soft-output detection in the presence of parametric uncertainty via the Bayesian EM algorithm
We investigate the application of the Bayesian expectation-maximization (BEM) technique to the design of soft-in soft-out (SISO) detection algorithms for wireless communication systems operating over channels affected by parametric uncertainty. First, the BEM algorithm is described in detail and its relationship with the well-known expectation-maximization (EM) technique is explained. Then, some of its applications are illustrated. In particular, the problems of SISO detection of spread spectrum, single-carrier and multicarrier space-time block coded signals are analyzed. Numerical results show that BEM-based detectors perform close to maximum likelihood (ML) receivers endowed with perfect channel state information as long as channel variations are not too fast.
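As background for the BEM/EM relationship the abstract mentions, here is a minimal sketch of the classical EM algorithm on a generic two-component 1-D Gaussian mixture. This is a hedged illustration of plain EM only, not the paper's BEM channel-estimation setup; all constants and the unit-variance assumption are illustrative.

```python
# Minimal EM for a 2-component, unit-variance 1-D Gaussian mixture.
# Illustrates the E-step / M-step structure that BEM builds on;
# not the paper's detection problem.
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """Estimate means and mixing weights of a 2-component mixture."""
    mu = np.array([x.min(), x.max()], dtype=float)  # crude initialization
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component per sample
        log_lik = -0.5 * (x[:, None] - mu[None, :]) ** 2 + np.log(pi)
        r = np.exp(log_lik - log_lik.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted means and mixing proportions
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        pi = nk / len(x)
    return mu, pi

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-3, 1, 300), rng.normal(3, 1, 700)])
mu, pi = em_gmm_1d(x)
```

In the BEM variant described in the paper, the M-step maximization is replaced by a Bayesian update that accounts for the prior on the uncertain channel parameters.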
Cluster, Classify, Regress: A General Method For Learning Discontinuous Functions
This paper presents a method for solving the supervised learning problem in
which the output is highly nonlinear and discontinuous. It is proposed to solve
this problem in three stages: (i) cluster the pairs of input-output data
points, resulting in a label for each point; (ii) classify the data, where the
corresponding label is the output; and finally (iii) perform one separate
regression for each class, where the training data corresponds to the subset of
the original input-output pairs which have that label according to the
classifier. It has not yet been proposed to combine these three fundamental
building blocks of machine learning in this simple and powerful fashion. This
can be viewed as a form of deep learning, where any of the intermediate layers
can itself be deep. The utility and robustness of the methodology are
illustrated on some toy problems, including one example problem arising from
simulation of plasma fusion in a tokamak.
Comment: 12 files, 6 figures
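The three stages above can be sketched directly with off-the-shelf components. This is a hedged toy reconstruction, not the paper's implementation: the discontinuous target function, scikit-learn estimators, and all hyperparameters are assumptions for illustration.

```python
# Sketch of the cluster -> classify -> regress pipeline on a toy
# discontinuous 1-D function (assumed example, not the tokamak problem).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 1))
y = np.where(X[:, 0] < 0, 2.0 * X[:, 0] - 3.0, -X[:, 0] + 5.0)  # jump at x = 0

# (i) Cluster the joint input-output pairs; each point gets a label.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    np.column_stack([X[:, 0], y]))

# (ii) Classify: learn to predict the cluster label from the input alone.
clf = KNeighborsClassifier(n_neighbors=5).fit(X, labels)

# (iii) Regress separately within each class on that class's pairs.
regs = {k: LinearRegression().fit(X[labels == k], y[labels == k])
        for k in np.unique(labels)}

def predict(X_new):
    """Route each input through its predicted class's regressor."""
    k = clf.predict(X_new)
    out = np.empty(len(X_new))
    for c in np.unique(k):
        out[k == c] = regs[c].predict(X_new[k == c])
    return out
```

Because each piece of the target is smooth, the per-class regressors avoid the averaging-across-the-jump error that a single global regressor would incur.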
Robust detection, isolation and accommodation for sensor failures
The objective is to extend the recent advances in robust control system design of multivariable systems to sensor failure detection, isolation, and accommodation (DIA), and to estimator design. This effort provides analysis tools to quantify the trade-off between performance robustness and DIA sensitivity, which are to be used to achieve higher levels of performance robustness for given levels of DIA sensitivity. An innovations-based DIA scheme is used. Estimators, which depend upon a model of the process and the process inputs and outputs, are used to generate these innovations. Thresholds used to determine failure detection are computed based on bounds on modeling errors, noise properties, and the class of failures. The applicability of the newly developed tools is demonstrated on a multivariable aircraft turbojet engine example. A new concept called the threshold selector was developed; it represents a significant and innovative tool for the analysis and synthesis of DIA algorithms. The estimators were made robust by the introduction of an internal model and by frequency shaping. The internal model provides asymptotically unbiased filter estimates. The incorporation of frequency shaping of the linear quadratic Gaussian cost functional modifies the estimator design to make it suitable for sensor failure DIA. The results are compared with previous studies, which used thresholds that were selected empirically. Comparison of these two techniques on a nonlinear dynamic engine simulation shows improved performance of the new method compared to the previous technique.
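The innovations-based detection idea can be illustrated with a deliberately simple scalar example: an estimator generates innovations (measurement minus prediction), and a failure is declared when the normalized innovation exceeds a threshold. This is a hedged generic sketch; the scalar Kalman filter, the bias-failure model, and all noise constants and the threshold are assumptions, not values from the paper's turbojet study.

```python
# Generic innovations-based sensor-failure detection with a scalar
# Kalman filter; a sensor bias failure is injected at step 120.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x_true = np.cumsum(rng.normal(0, 0.1, n))   # slowly drifting state
z = x_true + rng.normal(0, 0.5, n)          # sensor measurements
z[120:] += 5.0                              # bias failure at k = 120

q, r = 0.1 ** 2, 0.5 ** 2                   # process / measurement noise vars
x_hat, p = 0.0, 1.0
threshold = 5.0                             # in innovation-std units (assumed)
alarm_at = None
for k in range(n):
    p += q                                  # time update of error variance
    innov = z[k] - x_hat                    # innovation
    s = p + r                               # innovation variance
    if alarm_at is None and abs(innov) / np.sqrt(s) > threshold:
        alarm_at = k                        # failure detected
    g = p / s                               # Kalman gain
    x_hat += g * innov                      # measurement update
    p *= (1 - g)
```

The paper's contribution is precisely in how such thresholds are chosen: computed from bounds on modeling errors, noise properties, and the failure class, rather than fixed empirically as here.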
Model-Based Deep Learning
Signal processing, communications, and control have traditionally relied on
classical statistical modeling techniques. Such model-based methods utilize
mathematical formulations that represent the underlying physics, prior
information and additional domain knowledge. Simple classical models are useful
but sensitive to inaccuracies and may lead to poor performance when real
systems display complex or dynamic behavior. On the other hand, purely
data-driven approaches that are model-agnostic are becoming increasingly
popular as datasets become abundant and the power of modern deep learning
pipelines increases. Deep neural networks (DNNs) use generic architectures
which learn to operate from data, and demonstrate excellent performance,
especially for supervised problems. However, DNNs typically require massive
amounts of data and immense computational resources, limiting their
applicability for some signal processing scenarios. We are interested in hybrid
techniques that combine principled mathematical models with data-driven systems
to benefit from the advantages of both approaches. Such model-based deep
learning methods exploit both partial domain knowledge, via mathematical
structures designed for specific problems, as well as learning from limited
data. In this article we survey the leading approaches for studying and
designing model-based deep learning systems. We divide hybrid
model-based/data-driven systems into categories based on their inference
mechanism. We provide a comprehensive review of the leading approaches for
combining model-based algorithms with deep learning in a systematic manner,
along with concrete guidelines and detailed signal processing oriented examples
from recent literature. Our aim is to facilitate the design and study of future
systems at the intersection of signal processing and machine learning that
incorporate the advantages of both domains.
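One widely cited hybrid design in this literature is algorithm unrolling (deep unfolding), where each iteration of a model-based algorithm becomes a network layer with learnable parameters. The sketch below shows the inference mechanism only, using unrolled ISTA for sparse recovery with hand-set per-layer parameters standing in for learned ones; the problem sizes and constants are assumptions for illustration.

```python
# Unrolled ISTA: each "layer" is one iteration of the iterative
# shrinkage-thresholding algorithm for min ||Ax - y||^2 + lam*||x||_1.
# In a trained unrolled network, the per-layer step sizes and
# thresholds would be learned from data; here they are hand-set.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def unrolled_ista(A, y, steps, thresholds):
    """Run len(steps) unrolled ISTA layers."""
    x = np.zeros(A.shape[1])
    for mu, t in zip(steps, thresholds):
        x = soft_threshold(x - mu * A.T @ (A @ x - y), t)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 50)) / np.sqrt(30)
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [1.5, -2.0, 1.0]     # 3-sparse ground truth
y = A @ x_true

L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of gradient
layers = 200
x_hat = unrolled_ista(A, y, [1.0 / L] * layers, [0.01 / L] * layers)
```

The model-based structure (the known operator A and the shrinkage step) is fixed by the physics of the problem, which is why such networks typically need far less training data than generic DNNs.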
Sparse Modeling for Image and Vision Processing
In recent years, a large amount of multi-disciplinary research has been
conducted on sparse models and their applications. In statistics and machine
learning, the sparsity principle is used to perform model selection---that is,
automatically selecting a simple model among a large collection of them. In
signal processing, sparse coding consists of representing data with linear
combinations of a few dictionary elements. Subsequently, the corresponding
tools have been widely adopted by several scientific communities such as
neuroscience, bioinformatics, or computer vision. The goal of this monograph is
to offer a self-contained view of sparse modeling for visual recognition and
image processing. More specifically, we focus on applications where the
dictionary is learned and adapted to data, yielding a compact representation
that has been successful in various contexts.
Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics
and Vision
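The core operation the monograph studies, learning a dictionary adapted to data and coding each sample with a few of its atoms, can be sketched with scikit-learn's `DictionaryLearning` as a stand-in for the surveyed methods. The toy data, the choice of OMP for the sparse-coding step, and all sizes are assumptions for illustration.

```python
# Dictionary learning + sparse coding sketch: learn 5 atoms from toy
# data, then represent each sample with at most 3 atoms via OMP.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
# Toy "signals": each sample is a sparse combination of 5 hidden atoms.
atoms = rng.standard_normal((5, 20))
codes = rng.standard_normal((200, 5)) * (rng.random((200, 5)) < 0.3)
X = codes @ atoms

dl = DictionaryLearning(n_components=5, transform_algorithm="omp",
                        transform_n_nonzero_coefs=3, max_iter=50,
                        random_state=0)
sparse_codes = dl.fit_transform(X)        # (200, 5), <= 3 nonzeros per row
X_hat = sparse_codes @ dl.components_     # reconstruction from few atoms
```

This mirrors the monograph's emphasis: the dictionary is learned from the data rather than fixed in advance, yielding a compact representation of each sample.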