An Extended Kalman Filter for Data-enabled Predictive Control
The literature dealing with data-driven analysis and control problems has
grown significantly in recent years. Most of the recent literature deals
with linear time-invariant systems in which the uncertainty (if any) is assumed
to be deterministic and bounded; relatively little attention has been devoted
to stochastic linear time-invariant systems. As a first step in this direction,
we propose to equip the recently introduced Data-enabled Predictive Control
algorithm with a data-based Extended Kalman Filter, making use of additional
available input-output data to reduce the effect of noise without
increasing the computational load of the optimization procedure.
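The abstract does not spell out the filter equations, but the recursion it builds on is the standard extended Kalman filter predict/update cycle. Below is a minimal, generic sketch in Python/NumPy; the maps f and h, their Jacobians, and the noise covariances Q and R are placeholders, not the data-based construction proposed in the paper.

```python
import numpy as np

def ekf_step(x, P, u, y, f, h, F_jac, H_jac, Q, R):
    """One generic EKF predict/update cycle (illustrative placeholder).

    x, P          : current state estimate and covariance
    u, y          : current input and measured output
    f, h          : state-transition and output maps (placeholders)
    F_jac, H_jac  : their Jacobians evaluated at the estimate
    Q, R          : process and measurement noise covariances
    """
    # Predict
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q

    # Update
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (y - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```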
Multiscale Information Decomposition: Exact Computation for Multivariate Gaussian Processes
Exploiting the theory of state space models, we derive the exact expressions
of the information transfer, as well as redundant and synergistic transfer, for
coupled Gaussian processes observed at multiple temporal scales. All of the
terms, constituting the frameworks known as interaction information
decomposition and partial information decomposition, can thus be analytically
obtained for different time scales from the parameters of the VAR model that
fits the processes. We report the application of the proposed methodology
first to benchmark Gaussian systems, showing that this class of systems may
generate patterns of information decomposition characterized by mainly
redundant or synergistic information transfer persisting across multiple time
scales, or even by the alternating prevalence of redundant and synergistic
source interaction depending on the time scale. We then apply our method to an
important topic in neuroscience, namely the detection of causal interactions in
human epilepsy networks, for which we show the relevance of partial information
decomposition to the detection of multiscale information transfer spreading
from the seizure onset zone.
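For context, exact computation is possible in the Gaussian case because every entropy of jointly Gaussian variables reduces to a log-determinant of a covariance sub-block. The sketch below illustrates that idea in Python/NumPy for generic index sets over a joint covariance matrix; it only demonstrates the Gaussian entropy formulas, not the paper's multiscale state-space construction, and the function names and index-set convention are assumptions.

```python
import numpy as np

def gaussian_entropy(cov, idx):
    """Differential entropy (nats) of the Gaussian block cov[idx, idx]."""
    sub = cov[np.ix_(idx, idx)]
    d = len(idx)
    return 0.5 * (d * np.log(2 * np.pi * np.e) + np.linalg.slogdet(sub)[1])

def mutual_info(cov, a, b):
    """I(A;B) for jointly Gaussian variables; a, b are lists of indices."""
    return (gaussian_entropy(cov, a) + gaussian_entropy(cov, b)
            - gaussian_entropy(cov, a + b))

def cond_mutual_info(cov, a, b, c):
    """I(A;B|C) = H(A,C) + H(B,C) - H(A,B,C) - H(C)."""
    return (gaussian_entropy(cov, a + c) + gaussian_entropy(cov, b + c)
            - gaussian_entropy(cov, a + b + c) - gaussian_entropy(cov, c))

def interaction_info(cov, target, s1, s2):
    """Interaction information I(target; s1; s2) = I(T;S1) - I(T;S1|S2).

    Positive values indicate net redundancy and negative values net synergy
    (the sign convention varies across the literature).
    """
    return mutual_info(cov, target, s1) - cond_mutual_info(cov, target, s1, s2)
```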
Stochastic Optimization with Variance Reduction for Infinite Datasets with Finite-Sum Structure
Stochastic optimization algorithms with variance reduction have proven
successful for minimizing large finite sums of functions. Unfortunately, these
techniques are unable to deal with stochastic perturbations of input data,
induced for example by data augmentation. In such cases, the objective is no
longer a finite sum, and the main candidate for optimization is the stochastic
gradient descent method (SGD). In this paper, we introduce a variance reduction
approach for these settings when the objective is composite and strongly
convex. The convergence rate outperforms SGD with a typically much smaller
constant factor, which depends on the variance of gradient estimates only due
to perturbations on a single example.

Comment: Advances in Neural Information Processing Systems (NIPS), Dec 2017,
Long Beach, CA, United States
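To make the variance-reduction idea concrete, here is a minimal SVRG-style loop for a plain finite sum, i.e., the standard setting the abstract contrasts with SGD. It is not the composite, strongly convex, perturbed-data variant proposed in the paper; grad_i, the step size, and the epoch schedule are placeholders.

```python
import numpy as np

def svrg(grad_i, n, x0, step, epochs, inner_iters, rng=None):
    """Minimal SVRG-style loop for a smooth finite sum (1/n) * sum_i f_i(x).

    grad_i(i, x) returns the gradient of the i-th term; all names and
    constants here are generic placeholders used only to illustrate the
    variance-reduction idea.
    """
    rng = rng or np.random.default_rng()
    x_ref = x0.copy()
    for _ in range(epochs):
        # Full gradient at the reference point (the anchor of the epoch)
        full_grad = np.mean([grad_i(i, x_ref) for i in range(n)], axis=0)
        x = x_ref.copy()
        for _ in range(inner_iters):
            i = rng.integers(n)
            # Variance-reduced stochastic gradient estimate
            g = grad_i(i, x) - grad_i(i, x_ref) + full_grad
            x = x - step * g
        x_ref = x
    return x_ref
```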