
    Optimal Hankel norm identification of dynamical systems

    The problem of optimal approximate system identification is addressed with a newly defined measure of misfit between observed time series and linear time-invariant models. The behavioral framework is used as a suitable axiomatic setting for a non-parametric introduction of system complexity, and allows for a notion of misfit of dynamical systems that is independent of system representations. The misfit function introduced here is characterized in terms of the induced norm of a Hankel operator associated with the data and a co-inner kernel representation of a model. Sets of Pareto-optimal models are defined as feasible trade-offs between complexity and misfit of models, and it is shown how all Pareto-optimal models are characterized as exact models of compressed data sets obtained from Hankel-norm approximations of data matrices. This leads to new conceptual algorithms for optimal approximate identification of time series.
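
As a rough numerical companion to the idea above (not the paper's algorithm): the sketch below builds a Hankel matrix from an observed scalar time series and truncates its SVD at a chosen rank, with the rank standing in for model complexity and the first discarded singular value for misfit. Note that a true Hankel-norm (AAK-type) approximant must itself be a Hankel matrix, which plain SVD truncation does not enforce; all names and sizes below are illustrative.

```python
import numpy as np
from scipy.linalg import hankel, svd

def hankel_rank_reduction(w, rank):
    """Compress a scalar time series via a low-rank approximation of its
    Hankel matrix. `rank` plays the role of model complexity; the first
    discarded singular value serves as a crude misfit proxy."""
    n = len(w) // 2
    H = hankel(w[:n], w[n - 1:])                 # Hankel matrix of the data
    U, s, Vt = svd(H, full_matrices=False)
    H_r = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]
    misfit_proxy = s[rank] if rank < len(s) else 0.0
    return H_r, misfit_proxy

# toy usage: a noisy decaying oscillation
t = np.arange(200)
w = np.cos(0.2 * t) * 0.95 ** t + 0.01 * np.random.randn(200)
H_r, misfit = hankel_rank_reduction(w, rank=2)
print(f"first discarded singular value (misfit proxy): {misfit:.3e}")
```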

    Learning Linear Dynamical Systems via Spectral Filtering

    We present an efficient and practical algorithm for the online prediction of discrete-time linear dynamical systems with a symmetric transition matrix. We circumvent the non-convex optimization problem using improper learning: carefully overparameterize the class of LDSs by a polylogarithmic factor, in exchange for convexity of the loss functions. From this arises a polynomial-time algorithm with a near-optimal regret guarantee, with an analogous sample complexity bound for agnostic learning. Our algorithm is based on a novel filtering technique, which may be of independent interest: we convolve the time series with the eigenvectors of a certain Hankel matrix. (Published as a conference paper at NIPS 2017.)
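
A sketch of the filtering step described above, under the assumption (taken from the published paper, so treat the formula as quoted rather than derived here) that the Hankel matrix in question has entries Z[i, j] = 2 / ((i + j)^3 - (i + j)) for 1-indexed i, j. The convex regression that consumes these features, and the paper's eigenvalue scaling of the filters, are omitted.

```python
import numpy as np

def spectral_filters(T, k):
    """Top-k eigenpairs of the T x T Hankel matrix with entries
    Z[i, j] = 2 / ((i + j)**3 - (i + j)), 1-indexed."""
    idx = np.arange(1, T + 1)
    s = idx[:, None] + idx[None, :]
    Z = 2.0 / (s ** 3 - s)
    eigvals, eigvecs = np.linalg.eigh(Z)              # ascending order
    return eigvals[::-1][:k], eigvecs[:, ::-1][:, :k]

def filter_features(x, phis):
    """Feature at time t for filter j: inner product of filter j with the
    last T inputs (zero-padded at the start). These features stand in for
    the hidden LDS state in the convex regression, which is omitted here."""
    return np.stack([np.convolve(x, phi[::-1], mode="full")[:len(x)]
                     for phi in phis.T], axis=1)

sigma, phis = spectral_filters(T=64, k=8)
x = np.random.randn(200)
print(filter_features(x, phis).shape)    # (200, 8)
```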

    A new bound of the ℒ2[0, T]-induced norm and applications to model reduction

    We present a simple bound on the finite-horizon ℒ2[0, T]-induced norm of a linear time-invariant (LTI), not necessarily stable, system which can be efficiently computed by calculating the ℋ∞ norm of a shifted version of the original operator. As an application, we show how to use this bound to perform model reduction of unstable systems over a finite horizon. The technique is illustrated with a non-trivial physical example relevant to the appearance of time-irreversible phenomena in statistical physics.
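
A sketch of the shift-and-compute step only, under two assumptions not spelled out in the abstract: that the operator is given by state-space data (A, B, C, D), and that "shifted" means A → A − αI with α larger than the largest real part of the eigenvalues of A, so the shifted system is stable and its ℋ∞ norm finite. The norm is approximated here by frequency gridding rather than the usual Hamiltonian bisection, and the factor relating it to the ℒ2[0, T]-induced norm is left to the paper.

```python
import numpy as np

def hinf_norm_grid(A, B, C, D, wmax=1e3, n=4000):
    """Approximate the H-infinity norm of (A, B, C, D) by gridding the
    imaginary axis and taking the peak largest singular value. A crude
    stand-in for a Hamiltonian-based bisection."""
    ws = np.logspace(-3, np.log10(wmax), n)
    I = np.eye(A.shape[0])
    peak = 0.0
    for w in ws:
        G = C @ np.linalg.solve(1j * w * I - A, B) + D
        peak = max(peak, np.linalg.svd(G, compute_uv=False)[0])
    return peak

def shifted_hinf(A, B, C, D, alpha):
    """H-infinity norm of the alpha-shifted system (A - alpha*I, B, C, D).
    For alpha > max Re(eig(A)) the shifted system is stable, so the norm
    is finite even when the original system is not."""
    return hinf_norm_grid(A - alpha * np.eye(A.shape[0]), B, C, D)

# toy unstable system: eigenvalues 0.5 and 0.2 lie in the right half-plane
A = np.array([[0.5, 1.0], [0.0, 0.2]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.zeros((1, 1))
print(shifted_hinf(A, B, C, D, alpha=1.0))
```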

    Regularized linear system identification using atomic, nuclear and kernel-based norms: the role of the stability constraint

    Inspired by ideas taken from the machine learning literature, new regularization techniques have recently been introduced in linear system identification. In particular, all the adopted estimators solve a regularized least squares problem, differing in the nature of the penalty term assigned to the impulse response. Popular choices include atomic and nuclear norms (applied to Hankel matrices) as well as norms induced by the so-called stable spline kernels. In this paper, a comparative study of estimators based on these different types of regularizers is reported. Our findings reveal that stable spline kernels outperform approaches based on atomic and nuclear norms, since they suitably embed information on impulse response stability and smoothness. This point is illustrated using the Bayesian interpretation of regularization. We also design a new class of regularizers defined by "integral" versions of stable spline/TC kernels. Under quite realistic experimental conditions, the new estimators outperform classical prediction error methods even when the latter are equipped with an oracle for model order selection.
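
For concreteness, a minimal sketch of such a regularized least-squares estimator using the first-order stable spline (TC) kernel K(i, j) = β^max(i, j), 0 < β < 1. The hyperparameters β and the noise weight are fixed by hand here, whereas the approaches compared in the paper tune them by marginal likelihood maximization; the function names are illustrative.

```python
import numpy as np
from scipy.linalg import toeplitz

def tc_kernel(n, beta):
    """First-order stable spline (TC) kernel: K[i, j] = beta**max(i, j),
    which encodes exponential decay (stability) of the impulse response."""
    idx = np.arange(1, n + 1)
    return beta ** np.maximum.outer(idx, idx)

def kernel_regularized_fir(u, y, n, beta=0.9, sigma2=0.1):
    """Regularized LS estimate of an n-tap impulse response g from input u
    and output y:  g_hat = K Phi' (Phi K Phi' + sigma2 I)^{-1} y."""
    Phi = toeplitz(u, np.zeros(n))        # regression matrix of past inputs
    K = tc_kernel(n, beta)
    S = Phi @ K @ Phi.T + sigma2 * np.eye(len(y))
    return K @ Phi.T @ np.linalg.solve(S, y)

# toy usage: identify a first-order system from noisy data
rng = np.random.default_rng(0)
u = rng.standard_normal(300)
g_true = 0.8 ** np.arange(50)
y = np.convolve(u, g_true)[:300] + 0.1 * rng.standard_normal(300)
g_hat = kernel_regularized_fir(u, y, n=50)
print(np.linalg.norm(g_hat - g_true) / np.linalg.norm(g_true))
```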

    Maximum Entropy Vector Kernels for MIMO system identification

    Recent contributions have framed linear system identification as a nonparametric regularized inverse problem. Relying on ℓ2-type regularization, which accounts for the stability and smoothness of the impulse response to be estimated, these approaches have been shown to be competitive with classical parametric methods. In this paper, adopting Maximum Entropy arguments, we derive a new ℓ2 penalty from a vector-valued kernel; to do so we exploit the structure of the Hankel matrix, thus controlling at the same time the complexity (measured by the McMillan degree), stability and smoothness of the identified models. As a special case we recover the nuclear norm penalty on the squared block Hankel matrix. In contrast with previous literature on reweighted nuclear norm penalties, our kernel is described by a small number of hyper-parameters, which are iteratively updated through marginal likelihood maximization; constraining the structure of the kernel acts as a (hyper)regularizer which helps control the effective degrees of freedom of our estimator. To optimize the marginal likelihood we adapt a Scaled Gradient Projection (SGP) algorithm, which proves significantly cheaper computationally than other first- and second-order off-the-shelf optimization methods. The paper also contains an extensive comparison with many state-of-the-art methods on several Monte Carlo studies, which confirms the effectiveness of our procedure.
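
As a reference point for the nuclear-norm baseline the abstract contrasts with (not the paper's vector-kernel estimator), a minimal sketch of Hankel nuclear-norm regularized identification in the scalar case: the nuclear norm of the Hankel matrix built from the impulse-response estimate serves as a convex surrogate for low McMillan degree. The regularization weight and sizes are illustrative.

```python
import numpy as np
import cvxpy as cp
from scipy.linalg import toeplitz

def hankel_nuclear_fir(u, y, n, lam=1.0):
    """Least-squares fit of an n-tap impulse response g, penalized by the
    nuclear norm of the Hankel matrix built from g -- a convex surrogate
    for low McMillan degree (scalar case; lam fixed by hand)."""
    Phi = toeplitz(u, np.zeros(n))                        # FIR regression matrix
    g = cp.Variable(n)
    m = n // 2
    H = cp.vstack([g[i:i + m + 1] for i in range(n - m)])  # H[i, j] = g[i + j]
    prob = cp.Problem(cp.Minimize(cp.sum_squares(y - Phi @ g)
                                  + lam * cp.normNuc(H)))
    prob.solve()
    return g.value

rng = np.random.default_rng(1)
u = rng.standard_normal(120)
g_true = 0.7 ** np.arange(40)        # impulse response of a first-order system
y = np.convolve(u, g_true)[:120] + 0.05 * rng.standard_normal(120)
print(hankel_nuclear_fir(u, y, n=40, lam=5.0)[:5])
```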