Support vector machines framework for linear signal processing
This paper presents a support vector machines (SVM) framework for linear signal processing (LSP) problems. The approach relies on three basic steps for model building: (1) identifying a suitable basis of the Hilbert signal space in the model, (2) using a robust cost function, and (3) minimizing a constrained, regularized functional by means of the method of Lagrange multipliers. Recently, autoregressive moving average (ARMA) system identification and non-parametric spectral analysis have been formulated under this framework. The generalized, yet simple, formulation of SVM LSP problems is particularized here for three different issues: parametric spectral estimation, stability of infinite impulse response filters using the gamma structure, and complex ARMA models for communication applications. The good performance shown in these different domains suggests that other signal processing problems can be stated within this SVM framework.
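The robust-cost step above can be illustrated with a toy sketch: fitting AR coefficients with an epsilon-insensitive (SVM-style) loss via scikit-learn's LinearSVR. This is a minimal illustration of the idea, not the authors' exact constrained Lagrangian formulation; the model order and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVR

# Simulate a stable AR(2) process: y[t] = 1.2*y[t-1] - 0.5*y[t-2] + noise
rng = np.random.default_rng(0)
n = 2000
y = np.zeros(n)
for t in range(2, n):
    y[t] = 1.2 * y[t - 1] - 0.5 * y[t - 2] + 0.1 * rng.standard_normal()

# Regression form: predict y[t] from its past samples (here the "basis" of
# the signal space is simply the delayed signal itself)
X = np.column_stack([y[1:-1], y[:-2]])
target = y[2:]

# epsilon-insensitive loss plays the role of the robust cost; C sets the
# strength of regularization (both values are illustrative choices)
svr = LinearSVR(epsilon=0.01, C=10.0, fit_intercept=False, max_iter=10000)
svr.fit(X, target)
print(svr.coef_)  # estimated AR coefficients (true values: 1.2, -0.5)
```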
An Iterative Receiver for OFDM With Sparsity-Based Parametric Channel Estimation
In this work we design a receiver that iteratively passes soft information
between the channel estimation and data decoding stages. The receiver
incorporates sparsity-based parametric channel estimation. State-of-the-art
sparsity-based iterative receivers simplify the channel estimation problem by
restricting the multipath delays to a grid. Our receiver does not impose such a
restriction. As a result, it does not suffer from the leakage effect, which
destroys sparsity. Communication at near-capacity rates at high SNR requires a
large modulation order. Due to the close proximity of modulation symbols in
such systems, the grid-based approximation is of insufficient accuracy. We show
numerically that a state-of-the-art iterative receiver with grid-based sparse
channel estimation exhibits a bit-error-rate floor in the high SNR regime. In
contrast, our receiver performs very close to the perfect channel state
information bound for all SNR values. We also demonstrate both theoretically
and numerically that parametric channel estimation works well in dense
channels, i.e., when the number of multipath components is large and each
individual component cannot be resolved.
Comment: Major revision, accepted for IEEE Transactions on Signal Processing.
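The leakage effect the abstract refers to can be seen in a few lines: a multipath delay that falls between sampling-grid points spreads its energy over many taps of the sampled channel, so the discrete representation is no longer sparse. This is a generic band-limited-sampling sketch, not the paper's receiver.

```python
import numpy as np

# Sampled impulse response of a single multipath component at delay tau
# (in sample units), under band-limited (sinc) interpolation
N = 64
n = np.arange(N)

def sampled_channel(tau):
    return np.sinc(n - tau)

on_grid = sampled_channel(10.0)    # delay exactly on the sampling grid
off_grid = sampled_channel(10.5)   # delay between two grid points

# Count taps carrying significant energy (> 1% of the peak magnitude)
def significant_taps(h):
    return int(np.sum(np.abs(h) > 0.01 * np.abs(h).max()))

print(significant_taps(on_grid))   # a single tap: the channel is sparse
print(significant_taps(off_grid))  # many taps: leakage destroys sparsity
```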
Linear system identification using stable spline kernels and PLQ penalties
The classical approach to linear system identification is given by parametric
Prediction Error Methods (PEM). In this context, model complexity is often
unknown so that a model order selection step is needed to suitably trade-off
bias and variance. Recently, a different approach to linear system
identification has been introduced, where model order determination is avoided
by using a regularized least squares framework. In particular, the penalty term
on the impulse response is defined by so-called stable spline kernels. They
embed information on regularity and BIBO stability, and depend on a small
number of parameters which can be estimated from data. In this paper, we
provide new nonsmooth formulations of the stable spline estimator. In
particular, we consider linear system identification problems in a very broad
context, where regularization functionals and data misfits can come from a rich
set of piecewise linear quadratic functions. Moreover, our analysis includes
polyhedral inequality constraints on the unknown impulse response. For any
formulation in this class, we show that interior point methods can be used to
solve the system identification problem, with complexity O(n^3) + O(mn^2) in each
iteration, where n and m are the number of impulse response coefficients and
measurements, respectively. The usefulness of the framework is illustrated via
a numerical experiment where output measurements are contaminated by outliers.
Comment: 8 pages, 2 figures.
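The regularized least-squares baseline the abstract builds on can be sketched with the first-order stable spline (TC) kernel, K[i,j] = lambda^max(i,j), which encodes exponential decay of the impulse response. This shows only the quadratic (Gaussian-noise) special case, not the paper's PLQ penalties or interior-point solver; all sizes and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# True impulse response of a stable system (exponentially decaying)
n = 50
g_true = 0.8 ** np.arange(n) * np.sin(0.5 * np.arange(n))

# Input/output data: y = convolution of input u with g, plus noise
m = 200
u = rng.standard_normal(m)
Phi = np.zeros((m, n))          # regression matrix of lagged inputs
for i in range(m):
    for j in range(min(i + 1, n)):
        Phi[i, j] = u[i - j]
y = Phi @ g_true + 0.1 * rng.standard_normal(m)

# First-order stable spline (TC) kernel: K[i, j] = lam ** max(i, j)
lam = 0.8
idx = np.arange(n)
K = lam ** np.maximum.outer(idx, idx)

# Regularized LS estimate in kernel form:
#   g_hat = K Phi^T (Phi K Phi^T + gamma I)^{-1} y
gamma = 0.01
g_hat = K @ Phi.T @ np.linalg.solve(Phi @ K @ Phi.T + gamma * np.eye(m), y)

rel_err = np.linalg.norm(g_hat - g_true) / np.linalg.norm(g_true)
print(rel_err)  # relative error of the estimated impulse response
```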
Regularization and Bayesian Learning in Dynamical Systems: Past, Present and Future
Regularization and Bayesian methods for system identification have been
repopularized in recent years and have proved competitive with respect to
classical parametric approaches. In this paper we shall make an attempt to
illustrate how the use of regularization in system identification has evolved
over the years, starting from the early contributions both in the Automatic
Control as well as Econometrics and Statistics literature. In particular we
shall discuss some fundamental issues such as compound estimation problems and
exchangeability, which play an important role in regularization and Bayesian
approaches, as also illustrated in early publications in Statistics. The
historical and foundational issues will be given more emphasis (and space), at
the expense of the more recent developments, which are only briefly discussed.
The main reason for such a choice is that, while the recent literature is
readily available, and surveys have already been published on the subject, in
the author's opinion a clear link with past work had not been completely
clarified.
Comment: Plenary presentation at the IFAC SYSID 2015. Submitted to Annual Reviews in Control.
Outlier robust system identification: a Bayesian kernel-based approach
In this paper, we propose an outlier-robust regularized kernel-based method
for linear system identification. The unknown impulse response is modeled as a
zero-mean Gaussian process whose covariance (kernel) is given by the recently
proposed stable spline kernel, which encodes information on regularity and
exponential stability. To build robustness to outliers, we model the
measurement noise as realizations of independent Laplacian random variables.
The identification problem is cast in a Bayesian framework, and solved by a new
Markov Chain Monte Carlo (MCMC) scheme. In particular, exploiting the
representation of the Laplacian random variables as scale mixtures of
Gaussians, we design a Gibbs sampler which quickly converges to the target
distribution. Numerical simulations show a substantial improvement in the
accuracy of the estimates over state-of-the-art kernel-based methods.
Comment: 5 figures.
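The scale-mixture representation exploited by the Gibbs sampler can be verified empirically: a Laplacian variable is a Gaussian whose variance is itself exponentially distributed. The sketch below only checks the representation (matching variance and heavy tails); it is not the paper's MCMC scheme.

```python
import numpy as np

rng = np.random.default_rng(2)
b = 1.0          # Laplace scale parameter; Laplace(0, b) has variance 2*b^2
N = 200_000

# Direct Laplace draws
laplace = rng.laplace(0.0, b, size=N)

# Scale-mixture draws: variance v ~ Exponential with mean 2*b^2, then N(0, v)
v = rng.exponential(2.0 * b**2, size=N)
mixture = rng.standard_normal(N) * np.sqrt(v)

# Both samples share the Laplace variance (2*b^2 = 2) and kurtosis (= 6),
# i.e. tails heavier than a plain Gaussian (kurtosis 3)
print(laplace.var(), mixture.var())
```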
Measuring Half-Lives Using A Non-Parametric Bootstrap Approach
In this paper we extend the Murray and Papell (2002) study by using a non-parametric
bootstrap approach, which allows for non-normality, and by focusing on quarterly real
exchange rates in twenty OECD countries in the post-1973 floating period. We run
Augmented Dickey-Fuller (ADF) regressions, and estimate the half-lives (and confidence
intervals) from the corresponding impulse response functions. Further, we use an
approximately median-unbiased estimator of the autoregressive parameters, and report
the implied point estimates and confidence intervals. We find that accounting for non-normality
results in even higher estimates of the degree of persistence of PPP deviations,
but, as in Murray and Papell (2002), the confidence intervals are so wide that no strong
conclusions are warranted on the existence of a PPP puzzle.
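For an AR(1) simplification of the PPP deviation process, the half-life reported in such studies is the horizon at which the impulse response decays to one half, h = ln(0.5)/ln(alpha). This sketch shows only that closed form; the paper itself works with ADF regressions and bootstrap confidence intervals, and the coefficient value below is illustrative.

```python
import numpy as np

# Half-life of a shock in an AR(1) process q_t = alpha * q_{t-1} + eps_t:
# the impulse response is alpha**h, so alpha**h = 0.5 gives
# h = ln(0.5) / ln(alpha)
def half_life(alpha):
    return np.log(0.5) / np.log(alpha)

# A quarterly coefficient of 0.95 implies roughly 13.5 quarters (about
# 3.4 years), in the range usually discussed for the PPP puzzle
print(half_life(0.95))
```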
Modeling model uncertainty
Recently there has been much interest in studying monetary policy under model uncertainty. We develop methods to analyze different sources of uncertainty in one coherent structure useful for policy decisions. We show how to estimate the size of the uncertainty based on time series data, and incorporate this uncertainty in policy optimization. We propose two different approaches to modeling model uncertainty. The first is model error modeling, which imposes additional structure on the errors of an estimated model, and builds a statistical description of the uncertainty around a model. The second is set membership identification, which uses a deterministic approach to find a set of models consistent with data and prior assumptions. The center of this set becomes a benchmark model, and the radius measures model uncertainty. Using both approaches, we compute the robust monetary policy under different model uncertainty specifications in a small model of the US economy. JEL Classification: E52, C32, D81. Keywords: estimation, model uncertainty, monetary policy.
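The set membership idea (center of the feasible model set as the benchmark, radius as the uncertainty measure) can be sketched for a one-parameter model with bounded noise, where each observation confines the parameter to an interval and the feasible set is their intersection. This is a deliberately scalar toy under assumed bounds, not the paper's policy model.

```python
import numpy as np

rng = np.random.default_rng(4)
theta_true, eps = 0.7, 0.2           # true parameter and noise bound |e_t| <= eps

u = rng.uniform(0.5, 2.0, 100)       # positive inputs keep the intervals simple
e = rng.uniform(-eps, eps, 100)
y = theta_true * u + e

# Each sample t implies theta in [(y_t - eps)/u_t, (y_t + eps)/u_t];
# the feasible set is the intersection of all these intervals
lo = np.max((y - eps) / u)
hi = np.min((y + eps) / u)
center, radius = (lo + hi) / 2, (hi - lo) / 2
print(center, radius)   # benchmark model and its uncertainty measure
```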
Estimation of Sparse MIMO Channels with Common Support
We consider the problem of estimating sparse communication channels in the
MIMO context. In small to medium bandwidth communications, as in the current
standards for OFDM and CDMA communication systems (with bandwidth up to 20
MHz), such channels are individually sparse and at the same time share a common
support set. Since the underlying physical channels are inherently
continuous-time, we propose a parametric sparse estimation technique based on
finite rate of innovation (FRI) principles. Parametric estimation is especially
relevant to MIMO communications as it allows for a robust estimation and
concise description of the channels. The core of the algorithm is a
generalization of conventional spectral estimation methods to multiple input
signals with common support. We show the application of our technique for
channel estimation in OFDM (uniformly/contiguous DFT pilots) and CDMA downlink
(Walsh-Hadamard coded schemes). In the presence of additive white Gaussian
noise, theoretical lower bounds on the estimation of sparse common support (SCS) channel parameters in
Rayleigh fading conditions are derived. Finally, an analytical spatial channel
model is derived, and simulations on this model in the OFDM setting show the
symbol error rate (SER) is reduced by a factor of 2 (at 0 dB SNR) to 5 (at high SNR)
compared to standard non-parametric methods, e.g. lowpass interpolation.
Comment: 12 pages, 7 figures. Submitted to IEEE Transactions on Communications.