2,762 research outputs found
Dynamics and sparsity in latent threshold factor models: A study in multivariate EEG signal processing
We discuss Bayesian analysis of multivariate time series with dynamic factor
models that exploit time-adaptive sparsity in model parametrizations via the
latent threshold approach. One central focus is on the transfer responses of
multiple interrelated series to underlying, dynamic latent factor processes.
Structured priors on model hyper-parameters are key to the efficacy of dynamic
latent thresholding, and MCMC-based computation enables model fitting and
analysis. A detailed case study of electroencephalographic (EEG) data from
experimental psychiatry highlights the use of latent threshold extensions of
time-varying vector autoregressive and factor models. This study explores a
class of dynamic transfer response factor models, extending prior Bayesian
modeling of multiple EEG series and highlighting the practical utility of the
latent thresholding concept in multivariate, non-stationary time series
analysis.
Comment: 27 pages, 13 figures, link to external web site for supplementary animated figure
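The latent thresholding idea above can be sketched in a few lines: a dynamic coefficient is zeroed out whenever its latent value falls below a threshold, giving time-adaptive sparsity. A minimal illustrative sketch (all names and parameter values here are hypothetical, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def latent_threshold(beta, d):
    """Zero out coefficients whose magnitude falls below the threshold d."""
    return np.where(np.abs(beta) >= d, beta, 0.0)

# Simulate a latent AR(1) coefficient path and apply the threshold.
T = 200
beta = np.empty(T)
beta[0] = 0.0
for t in range(1, T):
    beta[t] = 0.98 * beta[t - 1] + 0.1 * rng.standard_normal()

b = latent_threshold(beta, d=0.15)
# The thresholded path is sparse: exact zeros wherever |beta_t| < d.
sparsity = np.mean(b == 0.0)
```

In a full Bayesian treatment the threshold d would itself carry a structured prior and be sampled by MCMC along with the latent paths; the sketch shows only the deterministic thresholding map.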
Regularization and Bayesian Learning in Dynamical Systems: Past, Present and Future
Regularization and Bayesian methods for system identification have regained
popularity in recent years and have proved competitive with classical
parametric approaches. In this paper we shall attempt to
illustrate how the use of regularization in system identification has evolved
over the years, starting from the early contributions both in the Automatic
Control as well as Econometrics and Statistics literature. In particular we
shall discuss some fundamental issues, such as compound estimation problems and
exchangeability, which play an important role in regularization and Bayesian
approaches, as also illustrated in early publications in Statistics. The
historical and foundational issues will be given more emphasis (and space), at
the expense of the more recent developments which are only briefly discussed.
The main reason for such a choice is that, while the recent literature is
readily available and surveys on the subject have already been published, in
the author's opinion a clear link with past work had not been completely
clarified.
Comment: Plenary Presentation at the IFAC SYSID 2015. Submitted to Annual Reviews in Control
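As a concrete instance of the compound-estimation and regularization setup the survey traces through the Control, Econometrics and Statistics literature, here is a minimal ridge-regularized FIR identification sketch (illustrative only; the system, data sizes, and regularization weight are assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

n, m = 50, 200                        # FIR length, number of output samples
g_true = 0.8 ** np.arange(n)          # a stable, smooth impulse response
u = rng.standard_normal(m + n)        # input signal
# Regressor matrix for the convolution y_t = sum_k g_k * u_{t-k} + noise
Phi = np.column_stack([u[n - k : n - k + m] for k in range(n)])
y = Phi @ g_true + 0.1 * rng.standard_normal(m)

# Ridge (Tikhonov) estimate vs. plain least squares
lam = 1.0
g_ridge = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n), Phi.T @ y)
g_ls = np.linalg.lstsq(Phi, y, rcond=None)[0]
err_ridge = np.linalg.norm(g_ridge - g_true)
err_ls = np.linalg.norm(g_ls - g_true)
```

The ridge penalty is the simplest expression of the Bayesian view the survey discusses: it corresponds to an i.i.d. Gaussian prior on the impulse response coefficients.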
Maximum Entropy Vector Kernels for MIMO system identification
Recent contributions have framed linear system identification as a
nonparametric regularized inverse problem. Relying on ℓ2-type
regularization which accounts for the stability and smoothness of the impulse
response to be estimated, these approaches have been shown to be competitive
w.r.t. classical parametric methods. In this paper, adopting Maximum Entropy
arguments, we derive a new penalty arising from a vector-valued kernel; to do
so we exploit the structure of the Hankel matrix, thus simultaneously
controlling the complexity (measured by the McMillan degree), stability, and
smoothness of the identified models. As a special case we recover
the nuclear norm penalty on the squared block Hankel matrix. In contrast with
previous literature on reweighted nuclear norm penalties, our kernel is
described by a small number of hyper-parameters, which are iteratively updated
through marginal likelihood maximization; constraining the structure of the
kernel acts as a (hyper)regularizer which helps control the effective
degrees of freedom of our estimator. To optimize the marginal likelihood we
adapt a Scaled Gradient Projection (SGP) algorithm which is proved to be
significantly computationally cheaper than other first and second order
off-the-shelf optimization methods. The paper also contains an extensive
comparison with many state-of-the-art methods on several Monte-Carlo studies,
which confirms the effectiveness of our procedure.
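The central object in this abstract, the block Hankel matrix of impulse response coefficients, whose rank equals the McMillan degree of the underlying system, and its nuclear norm, can be illustrated directly (a sketch of the standard construction, not the paper's exact estimator):

```python
import numpy as np

def block_hankel(g, rows):
    """Hankel matrix H[i, j] = g[i + j] built from impulse response g."""
    cols = len(g) - rows + 1
    return np.array([[g[i + j] for j in range(cols)] for i in range(rows)])

def nuclear_norm(H):
    """Sum of singular values: the convex surrogate for matrix rank."""
    return np.linalg.svd(H, compute_uv=False).sum()

# A first-order system g_k = a^k has McMillan degree 1, so its Hankel
# matrix has rank 1.
a = 0.5
g = a ** np.arange(1, 21)
H = block_hankel(g, rows=10)
rank = np.linalg.matrix_rank(H)
```

Penalizing the nuclear norm of H therefore biases the estimate toward low McMillan degree, which is the mechanism the kernel in the paper encodes.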
Recent Progress in Image Deblurring
This paper comprehensively reviews the recent development of image
deblurring, including non-blind/blind, spatially invariant/variant deblurring
techniques. Indeed, these techniques share the same objective of inferring a
latent sharp image from one or several corresponding blurry images, while the
blind deblurring techniques are also required to derive an accurate blur
kernel. Considering the critical role of image restoration in modern imaging
systems to provide high-quality images under complex environments such as
motion, undesirable lighting conditions, and imperfect system components, image
deblurring has attracted growing attention in recent years. From the viewpoint
of how they handle ill-posedness, a crucial issue in deblurring tasks,
existing methods can be grouped into five categories: Bayesian inference
framework, variational methods, sparse representation-based methods,
homography-based modeling, and region-based methods. Despite this progress,
image deblurring, especially the blind case, remains limited by complex
application conditions that make the blur kernel spatially variant and hard to
obtain. We provide a holistic
understanding and deep insight into image deblurring in this review. An
analysis of the empirical evidence for representative methods, practical
issues, as well as a discussion of promising future directions are also
presented.
Comment: 53 pages, 17 figures
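The ill-posedness mentioned above is easy to demonstrate in the simplest non-blind setting: Tikhonov-regularized inverse filtering in the frequency domain, shown here in 1-D (an illustrative sketch only; the signal, blur kernel, and parameter values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

N = 64
x = np.zeros(N); x[20:40] = 1.0            # latent sharp 1-D "image"
k = np.zeros(N); k[:5] = 1.0 / 5.0         # known box blur kernel
# Blurry, noisy observation via circular convolution
y = np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)).real
y += 0.01 * rng.standard_normal(N)

K = np.fft.fft(k)
lam = 1e-2
# Tikhonov-regularized deconvolution: X = conj(K) * Y / (|K|^2 + lam).
# Without lam, division by near-zero |K| amplifies noise unboundedly,
# which is exactly the ill-posedness the review is about.
X_hat = np.conj(K) * np.fft.fft(y) / (np.abs(K) ** 2 + lam)
x_hat = np.fft.ifft(X_hat).real
err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
```

Blind deblurring additionally has to estimate K from y, which is why it is substantially harder than this non-blind sketch.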
Dynamic Decomposition of Spatiotemporal Neural Signals
Neural signals are characterized by rich temporal and spatiotemporal dynamics
that reflect the organization of cortical networks. Theoretical research has
shown how neural networks can operate at different dynamic ranges that
correspond to specific types of information processing. Here we present a data
analysis framework that uses a linearized model of these dynamic states in
order to decompose the measured neural signal into a series of components that
capture both rhythmic and non-rhythmic neural activity. The method is based on
stochastic differential equations and Gaussian process regression. Through
computer simulations and analysis of magnetoencephalographic data, we
demonstrate the efficacy of the method in identifying meaningful modulations of
oscillatory signals corrupted by structured temporal and spatiotemporal noise.
These results suggest that the method is particularly suitable for the analysis
and interpretation of complex temporal and spatiotemporal neural signals.
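One building block of such a decomposition, a linearized oscillatory dynamic state, can be sketched as a damped stochastic harmonic oscillator simulated with Euler-Maruyama (parameter values are illustrative assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(3)

f0, zeta, sigma = 10.0, 0.1, 1.0      # peak frequency (Hz), damping, noise scale
omega = 2.0 * np.pi * f0
dt, T = 1e-3, 2.0
n = int(T / dt)

# Linear SDE in state-space form, state = [position, velocity]:
# dx = A x dt + [0, sigma] dW
A = np.array([[0.0, 1.0],
              [-omega**2, -2.0 * zeta * omega]])
x = np.zeros((n, 2))
for t in range(1, n):
    drift = A @ x[t - 1]
    noise = np.array([0.0, sigma * np.sqrt(dt) * rng.standard_normal()])
    x[t] = x[t - 1] + drift * dt + noise

signal = x[:, 0]                      # rhythmic component concentrated near f0
```

Because the SDE is linear, the position process is a Gaussian process with an oscillatory covariance, which is what allows the GP-regression decomposition described in the abstract.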
Kernel-based system identification from noisy and incomplete input-output data
In this contribution, we propose a kernel-based method for the identification
of linear systems from noisy and incomplete input-output datasets. We model the
impulse response of the system as a Gaussian process whose covariance matrix is
given by the recently introduced stable spline kernel. We adopt an empirical
Bayes approach to estimate the posterior distribution of the impulse response
given the data. The noiseless and missing data samples, together with the
kernel hyperparameters, are estimated by maximizing the joint marginal likelihood
of the input and output measurements. To compute the marginal-likelihood
maximizer, we build a solution scheme based on the Expectation-Maximization
method. Simulations on a benchmark dataset show the effectiveness of the
method.
Comment: 16 pages, submitted to IEEE Conference on Decision and Control 201
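The Gaussian-process model described above can be sketched with the first-order stable spline (TC) kernel, K[i, j] = c * alpha^max(i, j), as the prior covariance of the impulse response; here the hyperparameters are fixed by hand rather than estimated via the paper's marginal-likelihood/EM scheme, and all data are complete and noiseless on the input side (an illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(4)

n, m = 30, 100                         # impulse response length, data length
g_true = 0.7 ** np.arange(n)
u = rng.standard_normal(m + n)
Phi = np.column_stack([u[n - k : n - k + m] for k in range(n)])
y = Phi @ g_true + 0.1 * rng.standard_normal(m)

c, alpha, s2 = 1.0, 0.8, 0.01          # kernel scale, decay, noise variance
idx = np.arange(n)
K = c * alpha ** np.maximum.outer(idx, idx)   # stable spline (TC) kernel
# Posterior mean of g given y under the GP prior
g_hat = K @ Phi.T @ np.linalg.solve(Phi @ K @ Phi.T + s2 * np.eye(m), y)
err = np.linalg.norm(g_hat - g_true) / np.linalg.norm(g_true)
```

In the paper, c, alpha, the noise variance, and the missing samples would all be estimated jointly by Expectation-Maximization on the marginal likelihood; the sketch shows only the final GP regression step with those quantities fixed.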