Zero attracting recursive least squares algorithms
The l1-norm sparsity constraint is a widely used technique for constructing sparse models. In this contribution, two zero-attracting recursive least squares algorithms, referred to as ZA-RLS-I and ZA-RLS-II, are derived by employing an l1-norm constraint on the parameter vector to promote model sparsity. In order to achieve a closed-form solution, the l1-norm of the parameter vector is approximated by an adaptively weighted l2-norm, in which the weighting factors are set to the inverses of the magnitudes of the associated parameter estimates, which are readily available in the adaptive learning environment. ZA-RLS-II is computationally more efficient than ZA-RLS-I, as it exploits known results from linear algebra as well as the sparsity of the system. The proposed algorithms are proven to converge, and adaptive sparse channel estimation is used to demonstrate the effectiveness of the proposed approach.
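The adaptively reweighted l2 approximation of the l1 penalty can be sketched in a few lines. The following toy implementation is our own illustrative reading of the idea, not the paper's algorithm: all constants, the batch-style recursions, and the 3-tap channel are assumptions made for the example.

```python
# Hedged sketch of a zero-attracting RLS update in the spirit of ZA-RLS:
# the l1 penalty ||theta||_1 is replaced by an adaptively weighted l2 term
# theta^T D theta with D = diag(1 / (|theta_i| + eps)), where theta is the
# previous estimate. Constants and dimensions are illustrative only.
import random

def solve(A, b):
    """Solve the square system A x = b by Gaussian elimination with pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def za_rls(samples, dim, lam=0.99, gamma=0.05, eps=1e-4):
    """Exponentially weighted LS with an adaptive zero-attracting diagonal."""
    R = [[0.0] * dim for _ in range(dim)]
    r = [0.0] * dim
    theta = [0.0] * dim
    for x, y in samples:
        for i in range(dim):
            r[i] = lam * r[i] + y * x[i]
            for j in range(dim):
                R[i][j] = lam * R[i][j] + x[i] * x[j]
        # Adaptive weights: coefficients near zero are penalized strongly.
        A = [[R[i][j] + (gamma / (abs(theta[i]) + eps) if i == j else 0.0)
              for j in range(dim)] for i in range(dim)]
        theta = solve(A, r)
    return theta

# Identify a sparse 3-tap channel h = [1.0, 0.0, -0.5] from noisy samples.
random.seed(0)
h = [1.0, 0.0, -0.5]
data = []
for _ in range(300):
    x = [random.gauss(0, 1) for _ in range(3)]
    y = sum(hi * xi for hi, xi in zip(h, x)) + random.gauss(0, 0.01)
    data.append((x, y))
est = za_rls(data, 3)
```

The middle tap is pinned near zero by its large adaptive weight, while the two active taps are estimated with only a small shrinkage bias.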
A unified framework for solving a general class of conditional and robust set-membership estimation problems
In this paper we present a unified framework for solving a general class of
problems arising in the context of set-membership estimation/identification
theory. More precisely, the paper aims at providing an original approach for
the computation of optimal conditional and robust projection estimates in a
nonlinear estimation setting where the operator relating the data and the
parameter to be estimated is assumed to be a generic multivariate polynomial
function and the uncertainties affecting the data are assumed to belong to
semialgebraic sets. By noticing that the computation of both the conditional
and the robust projection optimal estimators requires the solution to min-max
optimization problems that share the same structure, we propose a unified
two-stage approach based on semidefinite-relaxation techniques for solving such
estimation problems. The key idea of the proposed procedure is to recognize
that the optimal functional of the inner optimization problems can be
approximated to any desired precision by a multivariate polynomial function by
suitably exploiting recently proposed results in the field of parametric
optimization. Two simulation examples are reported to show the effectiveness of
the proposed approach. Comment: Accepted for publication in the IEEE Transactions on Automatic Control (2014).
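Schematically, and in notation of our own choosing (not the paper's), both estimators above are instances of a min-max problem over the feasible parameter set:

```latex
\hat{\theta} \;=\; \arg\min_{\theta \in \mathcal{M}}\; \max_{\tilde{\theta} \in \Sigma}\; \bigl\|\theta - \tilde{\theta}\bigr\|,
\qquad
\Sigma \;=\; \bigl\{\, \tilde{\theta} \;:\; y = f(\tilde{\theta}, \delta),\; \delta \in \Delta \,\bigr\},
```

where $f$ is the polynomial operator relating data and parameters, $\Delta$ is the semialgebraic uncertainty set, and $\mathcal{M}$ encodes the class of admissible (conditional) estimates. The value of the inner maximum, viewed as a function of $\theta$, is the functional that is approximated by a multivariate polynomial before the outer minimization is handled by semidefinite relaxation.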
Sparse Volterra and Polynomial Regression Models: Recoverability and Estimation
Volterra and polynomial regression models play a major role in nonlinear
system identification and inference tasks. Exciting applications ranging from
neuroscience to genome-wide association analysis build on these models with the
additional requirement of parsimony. This requirement has high interpretative
value, but unfortunately cannot be met by least-squares based or kernel
regression methods. To this end, compressed sampling (CS) approaches, already
successful in linear regression settings, can offer a viable alternative. The
viability of CS for sparse Volterra and polynomial models is the core theme of
this work. A common sparse regression task is initially posed for the two
models. Building on (weighted) Lasso-based schemes, an adaptive RLS-type
algorithm is developed for sparse polynomial regressions. The identifiability
of polynomial models is critically challenged by dimensionality. However,
following the CS principle, when these models are sparse, they could be
recovered by far fewer measurements. To quantify the sufficient number of
measurements for a given level of sparsity, restricted isometry properties
(RIP) are investigated in commonly met polynomial regression settings,
generalizing known results for their linear counterparts. The merits of the
novel (weighted) adaptive CS algorithms to sparse polynomial modeling are
verified through synthetic as well as real data tests for genotype-phenotype
analysis. Comment: 20 pages, to appear in IEEE Trans. on Signal Processing.
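To make the weighted-Lasso idea for sparse polynomial regression concrete, here is a minimal batch sketch of our own (cyclic coordinate descent on degree-2 polynomial features); it is not the adaptive RLS-type recursion of the paper, and all constants, weights, and the synthetic model are assumptions for illustration.

```python
# Illustrative weighted Lasso for a sparse degree-2 polynomial regression,
# solved by cyclic coordinate descent with soft-thresholding.
import random

def soft_threshold(z, t):
    """Soft-thresholding operator used by coordinate-descent Lasso."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def poly_features(u1, u2):
    # Degree-2 polynomial expansion of a 2-dimensional input.
    return [1.0, u1, u2, u1 * u1, u1 * u2, u2 * u2]

def weighted_lasso(X, y, w, lam, sweeps=200):
    """Minimize ||y - X beta||^2 + lam * sum_j w_j |beta_j| coordinate-wise."""
    n, d = len(X), len(X[0])
    beta = [0.0] * d
    col_sq = [sum(X[i][j] ** 2 for i in range(n)) for j in range(d)]
    for _ in range(sweeps):
        for j in range(d):
            # Correlation of column j with the partial residual.
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * beta[k]
                      for k in range(d) if k != j)) for i in range(n))
            beta[j] = soft_threshold(rho, lam * w[j]) / col_sq[j]
    return beta

# Sparse ground truth: y = 2*u1 - 1.5*u2^2 + noise; only 2 of 6 terms active.
random.seed(0)
X, y = [], []
for _ in range(200):
    u1, u2 = random.gauss(0, 1), random.gauss(0, 1)
    X.append(poly_features(u1, u2))
    y.append(2.0 * u1 - 1.5 * u2 * u2 + random.gauss(0, 0.1))
weights = [0.0, 1.0, 1.0, 1.0, 1.0, 1.0]  # leave the intercept unpenalized
beta = weighted_lasso(X, y, weights, lam=8.0)
```

The per-coefficient weights `w` are what make the scheme "weighted": an adaptive variant would set them from the magnitudes of earlier estimates, so that coefficients believed to be zero are shrunk harder.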
Conic Optimization Theory: Convexification Techniques and Numerical Algorithms
Optimization is at the core of control theory and appears in several areas of
this field, such as optimal control, distributed control, system
identification, robust control, state estimation, model predictive control and
dynamic programming. The recent advances in various topics of modern
optimization have also been revamping the area of machine learning. Motivated
by the crucial role of optimization theory in the design, analysis, control and
operation of real-world systems, this tutorial paper offers a detailed overview
of some major advances in this area, namely conic optimization and its emerging
applications. First, we discuss the importance of conic optimization in
different areas. Then, we explain seminal results on the design of hierarchies
of convex relaxations for a wide range of nonconvex problems. Finally, we study
different numerical algorithms for large-scale conic optimization problems. Comment: 18 pages.
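As one standard instance of the convexification machinery discussed here (our example, not one taken from the paper), a nonconvex quadratically constrained quadratic program admits the classical Shor semidefinite relaxation:

```latex
\min_{x}\; x^{\top} A_0 x \;\;\text{s.t.}\;\; x^{\top} A_i x \le b_i
\quad\Longrightarrow\quad
\min_{X \succeq 0}\; \operatorname{tr}(A_0 X) \;\;\text{s.t.}\;\; \operatorname{tr}(A_i X) \le b_i,
```

obtained by writing $X = x x^{\top}$ and dropping the nonconvex constraint $\operatorname{rank}(X) = 1$. The relaxed problem is a convex SDP whose optimal value lower-bounds the original, and tightening it systematically is what the hierarchies of relaxations mentioned above accomplish.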
Distributed Compressive CSIT Estimation and Feedback for FDD Multi-user Massive MIMO Systems
To fully utilize the spatial multiplexing gains or array gains of massive
MIMO, the channel state information must be obtained at the transmitter side
(CSIT). However, conventional CSIT estimation approaches are not suitable for
FDD massive MIMO systems because of the overwhelming training and feedback
overhead. In this paper, we consider multi-user massive MIMO systems and deploy
the compressive sensing (CS) technique to reduce the training as well as the
feedback overhead in the CSIT estimation. Multi-user massive MIMO systems exhibit a hidden joint sparsity structure in the user channel matrices due to the shared local scatterers in the physical propagation environment. As such,
instead of naively applying the conventional CS to the CSIT estimation, we
propose a distributed compressive CSIT estimation scheme so that the compressed
measurements are observed at the users locally, while the CSIT recovery is
performed at the base station jointly. A joint orthogonal matching pursuit
recovery algorithm is proposed to perform the CSIT recovery, with the
capability of exploiting the hidden joint sparsity in the user channel
matrices. We analyze the obtained CSIT quality in terms of the normalized mean
absolute error, and through the closed-form expressions, we obtain simple
insights into how the joint channel sparsity can be exploited to improve the
CSIT recovery performance. Comment: 16 double-column pages, accepted for publication in IEEE Transactions on Signal Processing.
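The key step of the joint recovery, selecting dictionary columns by aggregating correlations across all users' residuals, can be sketched with a simultaneous-OMP-style loop. The following is a hedged stand-in for the joint orthogonal matching pursuit idea, not the paper's exact algorithm; dimensions, the shared support, and the per-user gains are illustrative assumptions.

```python
# Simplified joint-sparse recovery: users share one sparse support, so the
# column-selection score is summed over every user's residual, and a per-user
# least-squares fit is done on the common support.
import random

def solve(A, b):
    """Solve the square system A x = b by Gaussian elimination with pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def joint_omp(Phi, ys, sparsity):
    m, n, K = len(Phi), len(Phi[0]), len(ys)
    support, coeffs = [], None
    residuals = [list(y) for y in ys]
    for _ in range(sparsity):
        # Pick the column whose correlation, summed over users, is largest.
        scores = [0.0 if j in support else
                  sum(abs(sum(Phi[i][j] * residuals[k][i] for i in range(m)))
                      for k in range(K)) for j in range(n)]
        support.append(max(range(n), key=lambda j: scores[j]))
        # Per-user least squares on the current joint support.
        G = [[sum(Phi[i][a] * Phi[i][b] for i in range(m)) for b in support]
             for a in support]
        coeffs = []
        for k in range(K):
            rhs = [sum(Phi[i][a] * ys[k][i] for i in range(m)) for a in support]
            coeffs.append(solve(G, rhs))
            for i in range(m):
                residuals[k][i] = ys[k][i] - sum(
                    Phi[i][a] * c for a, c in zip(support, coeffs[k]))
    return support, coeffs

# Shared 2-sparse support {5, 17}; three users with distinct channel gains.
random.seed(1)
m, n = 24, 32
Phi = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]
for j in range(n):  # normalize the columns of the measurement matrix
    nrm = sum(Phi[i][j] ** 2 for i in range(m)) ** 0.5
    for i in range(m):
        Phi[i][j] /= nrm
gains = [(1.0, -0.8), (0.9, 1.1), (-1.2, 0.85)]
ys = [[g1 * Phi[i][5] + g2 * Phi[i][17] for i in range(m)]
      for g1, g2 in gains]
support, coeffs = joint_omp(Phi, ys, 2)
```

Pooling correlations across users is what exploits the shared scatterer structure: a column that weakly matches one user's residual may still score highly once all users vote on it.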