Regularized linear system identification using atomic, nuclear and kernel-based norms: the role of the stability constraint
Inspired by ideas taken from the machine learning literature, new
regularization techniques have been recently introduced in linear system
identification. In particular, all the adopted estimators solve a regularized
least squares problem, differing in the nature of the penalty term assigned to
the impulse response. Popular choices include atomic and nuclear norms (applied
to Hankel matrices) as well as norms induced by the so-called stable spline
kernels. In this paper, a comparative study of estimators based on these
different types of regularizers is reported. Our findings reveal that stable
spline kernels outperform approaches based on atomic and nuclear norms since
they suitably embed information on impulse response stability and smoothness.
This point is illustrated using the Bayesian interpretation of regularization.
We also design a new class of regularizers defined by "integral" versions of
stable spline/TC kernels. Under quite realistic experimental conditions, the
new estimators outperform classical prediction error methods even when the
latter are equipped with an oracle for model order selection.
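As a concrete illustration of the kernel-based approach, the following minimal sketch (our own construction, not code from the paper) computes a regularized FIR estimate with a TC/stable-spline-type kernel K[s,t] = c·λ^max(s,t); the function names and the hyperparameter values (λ, γ) are illustrative assumptions, and in practice they would be tuned, e.g., by marginal likelihood maximization.

```python
import numpy as np

def tc_kernel(n, lam=0.8, c=1.0):
    """TC (tuned/correlated) stable spline kernel: K[s, t] = c * lam**max(s, t)."""
    idx = np.arange(1, n + 1)
    return c * lam ** np.maximum.outer(idx, idx)

def regularized_fir(u, y, n=50, lam=0.8, gamma=0.1):
    """Kernel-regularized FIR estimate:
    g_hat = argmin_g ||y - Phi g||^2 + gamma * g' K^{-1} g,
    available in closed form as g_hat = K Phi' (Phi K Phi' + gamma I)^{-1} y.
    """
    N = len(y)
    # Regressor matrix of delayed inputs (zero initial conditions assumed)
    Phi = np.zeros((N, n))
    for k in range(n):
        Phi[k:, k] = u[:N - k]
    K = tc_kernel(n, lam)
    return K @ Phi.T @ np.linalg.solve(Phi @ K @ Phi.T + gamma * np.eye(N), y)
```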
Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches
Imaging spectrometers measure electromagnetic energy scattered in their
instantaneous field of view in hundreds or thousands of spectral channels with
higher spectral resolution than multispectral cameras. Imaging spectrometers
are therefore often referred to as hyperspectral cameras (HSCs). Higher
spectral resolution enables material identification via spectroscopic analysis,
which facilitates countless applications that require identifying materials in
scenarios unsuitable for classical spectroscopic analysis. Due to the low
spatial resolution of HSCs, microscopic material mixing, and multiple scattering,
spectra measured by HSCs are mixtures of spectra of materials in a scene. Thus,
accurate estimation requires unmixing. Pixels are assumed to be mixtures of a
few materials, called endmembers. Unmixing involves estimating all or some of:
the number of endmembers, their spectral signatures, and their abundances at
each pixel. Unmixing is a challenging, ill-posed inverse problem because of
model inaccuracies, observation noise, environmental conditions, endmember
variability, and data set size. Researchers have devised and investigated many
models searching for robust, stable, tractable, and accurate unmixing
algorithms. This paper presents an overview of unmixing methods from the time
of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models
are first discussed. Signal-subspace, geometrical, statistical, sparsity-based,
and spatial-contextual unmixing algorithms are described. Mathematical problems
and potential solutions are described. Algorithm characteristics are
illustrated experimentally.
Comment: This work has been accepted for publication in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.
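As a pointer to how the linear mixing model is inverted per pixel, here is a minimal sketch (our example, not the paper's) of approximately fully constrained unmixing: nonnegativity is handled by NNLS, and the abundance sum-to-one constraint is appended as a heavily weighted extra equation, a standard trick. E (endmember signatures), y (pixel spectrum), and delta are hypothetical names.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_fcls(E, y, delta=1e3):
    """Approximate fully constrained unmixing of one pixel y ~ E @ a,
    with abundances a >= 0 and sum(a) ~= 1 (enforced as a weighted equation)."""
    p = E.shape[1]
    E_aug = np.vstack([E, delta * np.ones((1, p))])  # extra row: delta * sum(a)
    y_aug = np.append(y, delta)                      # ...should equal delta
    a, _ = nnls(E_aug, y_aug)
    return a
```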
Bayesian and regularization approaches to multivariable linear system identification: the role of rank penalties
Recent developments in linear system identification have proposed the use of
non-parametric methods, relying on regularization strategies, to handle the
so-called bias/variance trade-off. This paper introduces an impulse response
estimator which relies on an $\ell_2$-type regularization including a
rank penalty derived using the log-det heuristic as a smooth approximation to
the rank function. This makes it possible to account for different properties
of the estimated impulse response (e.g., smoothness and stability) while also
penalizing high-complexity models, and to account for and enforce coupling
between different input-output channels in MIMO systems. According to
the Bayesian paradigm, the parameters defining the relative weight of the two
regularization terms as well as the structure of the rank penalty are estimated
optimizing the marginal likelihood. Once these hyperparameters have been
estimated, the impulse response estimate is available in closed form.
Experiments show that the proposed method is superior to the estimator relying
on the "classic" $\ell_2$-regularization alone, as well as to those based on
atomic and nuclear norms.
Comment: to appear in IEEE Conference on Decision and Control, 201
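To make the rank-penalty idea concrete: the log-det heuristic replaces the nonsmooth, nonconvex rank function by the smooth surrogate log det(H Hᵀ + δI). A minimal numerical sketch, our own and not the authors' code:

```python
import numpy as np

def logdet_rank_surrogate(H, delta=1e-3):
    """Smooth surrogate for rank(H): log det(H H' + delta * I).

    For small delta this behaves like sum_i log(sigma_i(H)^2 + delta),
    so it saturates for large singular values and thus penalizes the
    *number* of active directions rather than their magnitudes;
    iteratively linearizing it yields reweighted nuclear-norm schemes.
    """
    m = H.shape[0]
    sign, logdet = np.linalg.slogdet(H @ H.T + delta * np.eye(m))
    return logdet
```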
Regularization and Bayesian Learning in Dynamical Systems: Past, Present and Future
Regularization and Bayesian methods for system identification have regained
popularity in recent years and have proved to be competitive w.r.t.
classical parametric approaches. In this paper we shall make an attempt to
illustrate how the use of regularization in system identification has evolved
over the years, starting from the early contributions both in the Automatic
Control as well as Econometrics and Statistics literature. In particular we
shall discuss some fundamental issues, such as compound estimation problems and
exchangeability, which play an important role in regularization and Bayesian
approaches, as also illustrated in early publications in Statistics. The
historical and foundational issues will be given more emphasis (and space), at
the expense of the more recent developments which are only briefly discussed.
The main reason for such a choice is that, while the recent literature is
readily available, and surveys have already been published on the subject, in
the author's opinion a clear link with past work had not been completely
clarified.
Comment: Plenary Presentation at the IFAC SYSID 2015. Submitted to Annual Reviews in Control.
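For readers unfamiliar with the compound estimation idea mentioned above, the canonical example (our choice of illustration, not taken from the paper) is Stein-type shrinkage: when three or more means are estimated jointly, shrinking them toward a common point dominates estimating each in isolation, which is the phenomenon underlying regularized and Bayesian identification.

```python
import numpy as np

def james_stein(y, sigma2=1.0):
    """Positive-part James-Stein estimate of mu from y ~ N(mu, sigma2 * I).

    For dim(y) >= 3 this jointly-shrunk estimate has uniformly lower
    risk than the naive per-coordinate estimate mu_hat = y.
    """
    p = len(y)
    shrink = max(0.0, 1.0 - (p - 2) * sigma2 / float(y @ y))
    return shrink * y
```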
Maximum Entropy Vector Kernels for MIMO system identification
Recent contributions have framed linear system identification as a
nonparametric regularized inverse problem. Relying on $\ell_2$-type
regularization, which accounts for the stability and smoothness of the impulse
response to be estimated, these approaches have been shown to be competitive
w.r.t. classical parametric methods. In this paper, adopting Maximum Entropy
arguments, we derive a new penalty induced by a vector-valued
kernel; to do so we exploit the structure of the Hankel matrix, thus
controlling at the same time complexity, measured by the McMillan degree,
stability and smoothness of the identified models. As a special case we recover
the nuclear norm penalty on the squared block Hankel matrix. In contrast with
previous literature on reweighted nuclear norm penalties, our kernel is
described by a small number of hyper-parameters, which are iteratively updated
through marginal likelihood maximization; constraining the structure of the
kernel acts as a (hyper)regularizer which helps control the effective
degrees of freedom of our estimator. To optimize the marginal likelihood we
adapt a Scaled Gradient Projection (SGP) algorithm, which proves to be
significantly cheaper computationally than other first- and second-order
off-the-shelf optimization methods. The paper also contains an extensive
comparison with many state-of-the-art methods on several Monte-Carlo studies,
which confirms the effectiveness of our procedure.
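To illustrate the complexity measure involved: the rank of the Hankel matrix built from an impulse response equals the McMillan degree of the underlying system, so its nuclear norm serves as a convex complexity proxy. A minimal SISO sketch (ours; the paper works with block Hankel matrices in the MIMO case):

```python
import numpy as np
from scipy.linalg import hankel

def hankel_nuclear_norm(g, rows):
    """Nuclear norm (sum of singular values) of the Hankel matrix of g.

    rank(H) equals the McMillan degree of the system generating g,
    so ||H||_* is a convex surrogate for model complexity.
    """
    H = hankel(g[:rows], g[rows - 1:])
    return np.linalg.norm(H, ord='nuc')
```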
Optimization with Sparsity-Inducing Penalties
Sparse estimation methods are aimed at using or obtaining parsimonious
representations of data or models. They were first dedicated to linear variable
selection but numerous extensions have now emerged such as structured sparsity
or kernel selection. It turns out that many of the related estimation problems
can be cast as convex optimization problems by regularizing the empirical risk
with appropriate non-smooth norms. The goal of this paper is to present from a
general perspective optimization tools and techniques dedicated to such
sparsity-inducing penalties. We cover proximal methods, block-coordinate
descent, reweighted $\ell_2$-penalized techniques, working-set and homotopy
methods, as well as non-convex formulations and extensions, and provide an
extensive set of experiments to compare various algorithms from a computational
point of view.
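As a concrete instance of the proximal methods the paper covers, here is a minimal sketch (ours, not the authors') of ISTA for the lasso; the key ingredient is that the proximal operator of the $\ell_1$ norm is coordinate-wise soft-thresholding.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, iters=200):
    """Proximal gradient (ISTA) for min_x 0.5 * ||A x - y||^2 + lam * ||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - step * (A.T @ (A @ x - y)), step * lam)
    return x
```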
Machine Learning and Integrative Analysis of Biomedical Big Data.
Recent developments in high-throughput technologies have accelerated the accumulation of massive amounts of omics data from multiple sources: genome, epigenome, transcriptome, proteome, metabolome, etc. Traditionally, data from each source (e.g., genome) is analyzed in isolation using statistical and machine learning (ML) methods. Integrative analysis of multi-omics and clinical data is key to new biomedical discoveries and advancements in precision medicine. However, data integration poses new computational challenges and exacerbates those associated with single-omics studies. Specialized computational approaches are required to effectively and efficiently perform integrative analysis of biomedical data acquired from diverse modalities. In this review, we discuss state-of-the-art ML-based approaches for tackling five specific computational challenges associated with integrative analysis: the curse of dimensionality, data heterogeneity, missing data, class imbalance, and scalability issues.
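As one concrete example among the five challenges listed, class imbalance is commonly handled by reweighting the training loss; a minimal scikit-learn sketch with synthetic stand-in data (all names, sizes, and values here are our illustrative assumptions, not from the review):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic stand-in for a high-dimensional omics matrix:
# 200 samples, 1000 features, only 10 positive cases (imbalanced).
X = rng.normal(size=(200, 1000))
y = np.zeros(200, dtype=int)
y[:10] = 1

# class_weight='balanced' reweights each class inversely to its frequency
# (a simple baseline for class imbalance), while the ridge penalty
# (strength 1/C) is one guard against the curse of dimensionality.
clf = LogisticRegression(class_weight='balanced', C=0.1, max_iter=1000)
clf.fit(X, y)
```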
Conic Optimization Theory: Convexification Techniques and Numerical Algorithms
Optimization is at the core of control theory and appears in several areas of
this field, such as optimal control, distributed control, system
identification, robust control, state estimation, model predictive control and
dynamic programming. The recent advances in various topics of modern
optimization have also been revamping the area of machine learning. Motivated
by the crucial role of optimization theory in the design, analysis, control and
operation of real-world systems, this tutorial paper offers a detailed overview
of some major advances in this area, namely conic optimization and its emerging
applications. First, we discuss the importance of conic optimization in
different areas. Then, we explain seminal results on the design of hierarchies
of convex relaxations for a wide range of nonconvex problems. Finally, we study
different numerical algorithms for large-scale conic optimization problems.
Comment: 18 pages
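To give a flavor of the convex relaxations the paper surveys, here is a minimal sketch (our example) of the classic semidefinite relaxation of MAX-CUT, written with the cvxpy modeling package: the nonconvex constraint X = xxᵀ with x ∈ {−1, +1}ⁿ is relaxed to X ⪰ 0 with unit diagonal.

```python
import numpy as np
import cvxpy as cp

# Adjacency matrix of a small 4-node graph (illustrative data).
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W  # graph Laplacian

# Relax X = x x' (x in {-1,+1}^n) to: X PSD with unit diagonal.
X = cp.Variable((4, 4), PSD=True)
prob = cp.Problem(cp.Maximize(0.25 * cp.trace(L @ X)),
                  [cp.diag(X) == 1])
prob.solve()
print("SDP upper bound on the max cut value:", prob.value)
```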