Regularized System Identification
This open access book provides a comprehensive treatment of recent developments in kernel-based identification that are of interest to anyone engaged in learning dynamic systems from data. The reader is led step by step into an understanding of a novel paradigm that leverages the power of machine learning without losing sight of the system-theoretic principles of black-box identification. The authors' reformulation of the identification problem in the light of regularization theory not only offers new insight into classical questions, but paves the way to new and powerful algorithms for a variety of linear and nonlinear problems. Regression methods such as regularization networks and support vector machines are the basis of techniques that extend the function-estimation problem to the estimation of dynamic models. Many examples, including real-world applications, illustrate the comparative advantages of the new nonparametric approach with respect to classic parametric prediction error methods. The challenges it addresses lie at the intersection of several disciplines, so Regularized System Identification will be of interest to a variety of researchers and practitioners in the areas of control systems, machine learning, statistics, and data science.
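The kernel-based paradigm the book describes can be sketched in a few lines. Below is a minimal, hypothetical example of regularized impulse response estimation with a first-order stable-spline (TC) kernel; the kernel parameter `alpha`, noise variance `sigma2`, FIR length, and data-generating system are all illustrative assumptions, not values from the book.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 50, 200                         # FIR length and number of data points (assumed)

g_true = 0.8 ** np.arange(n)           # a stable "true" impulse response (illustrative)
u = rng.standard_normal(N)
Phi = np.array([[u[t - k] if t - k >= 0 else 0.0 for k in range(n)]
                for t in range(N)])    # regressors built from past inputs
y = Phi @ g_true + 0.1 * rng.standard_normal(N)

# TC / first-order stable-spline kernel: K[i, j] = alpha ** max(i, j)
alpha, sigma2 = 0.8, 0.01              # hyperparameters (assumed; normally tuned)
i = np.arange(n)
K = alpha ** np.maximum.outer(i, i)

# Regularized (Bayesian) estimate: g_hat = K Phi' (Phi K Phi' + sigma2 I)^{-1} y
g_hat = K @ Phi.T @ np.linalg.solve(Phi @ K @ Phi.T + sigma2 * np.eye(N), y)
rel_err = np.linalg.norm(g_hat - g_true) / np.linalg.norm(g_true)
print(rel_err)
```

The exponential decay encoded in the kernel is what makes the hypothesis space contain only stable impulse responses, in contrast to plain ridge regression.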
Identification of stable models via nonparametric prediction error methods
A new Bayesian approach to linear system identification has been proposed in
a series of recent papers. The main idea is to frame linear system
identification as predictor estimation in an infinite dimensional space, with
the aid of regularization/Bayesian techniques. This approach guarantees the
identification of stable predictors based on the prediction error minimization.
Unfortunately, the stability of the predictors does not guarantee the stability of
the impulse response of the system. In this paper we propose and compare
various techniques to address this issue. Simulation results comparing these
techniques are provided.
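The gap between predictor stability and system stability can be seen with a toy example (all numbers illustrative): a one-step-ahead predictor with finitely many, hence summable, coefficients implies a system whose poles are the roots of 1 - F(z), and those roots can lie outside the unit circle.

```python
import numpy as np

# One-step-ahead predictor (illustrative coefficients):
#   y_hat(t) = 1.2 * y(t-1) + 0.5 * u(t-1)
# Finitely many coefficients, so the predictor impulse responses are summable (stable).
f1 = 1.2

# Writing y(t) = y_hat(t) + e(t), the u -> y transfer function is
#   G(z) = G_u(z) / (1 - F(z)),  with F(z) = f1 * z^{-1},
# so the system poles are the roots of z - f1.
poles = np.roots([1.0, -f1])
print(poles)                         # [1.2]
print(np.all(np.abs(poles) < 1))     # False: the implied system is unstable
```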
Kernel-based Impulse Response Identification with Side-Information on Steady-State Gain
In this paper, we consider the problem of system identification when
side-information is available on the steady-state (or DC) gain of the system.
We formulate a general nonparametric identification method as an
infinite-dimensional constrained convex program over the reproducing kernel
Hilbert space (RKHS) of stable impulse responses. The objective function of
this optimization problem is the empirical loss regularized with the RKHS norm,
and the constraint enforces the steady-state gain side-information. The proposed
formulation addresses both the
discrete-time and continuous-time cases. We show that this program has a unique
solution obtained by solving an equivalent finite-dimensional convex
optimization. This solution admits a closed form when the empirical loss and
regularization functions are quadratic and the side-information is exact. We
perform extensive numerical comparisons to verify the efficiency of the
proposed identification methodology.
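A minimal sketch of the quadratic, exact-side-information case, where the constrained program reduces to a linear KKT system; the kernel, regularization weight `gamma`, and data-generating system are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 30, 150
g_true = 0.7 ** np.arange(n)
G_dc = g_true.sum()                    # exact steady-state (DC) gain, assumed known

u = rng.standard_normal(N)
Phi = np.array([[u[t - k] if t - k >= 0 else 0.0 for k in range(n)]
                for t in range(N)])
y = Phi @ g_true + 0.05 * rng.standard_normal(N)

i = np.arange(n)
K = 0.7 ** np.maximum.outer(i, i)      # stable (TC-type) kernel, illustrative choice
gamma = 0.1                            # regularization weight (assumed)
H = Phi.T @ Phi + gamma * np.linalg.inv(K)

# KKT system for: min ||y - Phi g||^2 + gamma * g' K^{-1} g  s.t.  sum(g) = G_dc
A = np.block([[H, np.ones((n, 1))],
              [np.ones((1, n)), np.zeros((1, 1))]])
b = np.concatenate([Phi.T @ y, [G_dc]])
g_hat = np.linalg.solve(A, b)[:n]
print(abs(g_hat.sum() - G_dc))         # ~0: the gain constraint holds exactly
```

For a finite impulse response, the DC gain is the sum of the coefficients, which is why the side-information enters as a single linear equality constraint.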
Absolute integrability of Mercer kernels is only sufficient for RKHS stability
Reproducing kernel Hilbert spaces (RKHSs) are special Hilbert spaces in
one-to-one correspondence with positive definite maps called kernels. They are
widely employed in machine learning to reconstruct unknown functions from
sparse and noisy data. In the last two decades, a subclass known as stable
RKHSs has also been introduced in the setting of linear system identification.
Stable RKHSs contain only absolutely integrable impulse responses over the
positive real line. Hence, they can be adopted as hypothesis spaces to estimate
linear, time-invariant and BIBO stable dynamic systems from input-output data.
Necessary and sufficient conditions for RKHS stability are available in the
literature and it is known that kernel absolute integrability implies
stability. In a recent work, working in discrete time, we proved that this
latter condition is only sufficient. The purpose of this note is to prove that,
in continuous time, the same result also holds for Mercer kernels.
The Harmonic Analysis of Kernel Functions
Kernel-based methods have been recently introduced for linear system
identification as an alternative to parametric prediction error methods.
Adopting the Bayesian perspective, the impulse response is modeled as a
non-stationary Gaussian process with zero mean and with a certain kernel (i.e.
covariance) function. Choosing the kernel is one of the most challenging and
important issues. In the present paper we introduce the harmonic analysis of
this non-stationary process, and argue that this is an important tool which
helps in designing such kernels. Furthermore, this analysis also suggests an
effective way to approximate the kernel, which reduces the computational burden
of the identification procedure.
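One way such an approximation can work, sketched under the assumption of a stable-spline-type kernel matrix (the kernel and rank below are illustrative): truncate the eigen-expansion of the kernel to its leading terms and check the resulting error.

```python
import numpy as np

n = 100
i = np.arange(n)
K = 0.9 ** np.maximum.outer(i, i)      # stable-spline-type kernel matrix (assumed)

# Expand K = sum_k lam_k v_k v_k' and keep only the r leading terms
lam, V = np.linalg.eigh(K)
order = np.argsort(lam)[::-1]
lam, V = lam[order], V[:, order]

r = 10                                 # rank of the approximation (illustrative)
K_r = (V[:, :r] * lam[:r]) @ V[:, :r].T
rel_err = np.linalg.norm(K - K_r) / np.linalg.norm(K)
print(rel_err)
```

Because the eigenvalues of such kernels decay quickly, a low-rank expansion captures the kernel accurately, and estimators built on `K_r` involve much smaller linear systems.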
Kernel-Based Identification with Frequency Domain Side-Information
In this paper, we discuss the problem of system identification when frequency
domain side information is available on the system. Initially, we consider the
case where the prior knowledge is given as a bound on the
$\mathcal{H}_{\infty}$-norm of the system. This framework provides the
opportunity of considering various forms of side information such as the
dissipativity of the system as well as other forms of frequency domain prior
knowledge. We propose a nonparametric identification method for estimating the
impulse response of the system under the given side information. The estimation
problem is formulated as an optimization in a reproducing kernel Hilbert space
(RKHS) endowed with a stable kernel. The corresponding objective function
consists of a term for minimizing the fitting error, and a regularization term
defined based on the norm of the impulse response in the employed RKHS. To
guarantee the desired frequency domain features defined based on the prior
knowledge, suitable constraints are imposed on the estimation problem. The
resulting optimization has an infinite-dimensional feasible set with an
infinite number of constraints. We show that this problem is a well-defined
convex program with a unique solution. We propose a heuristic that tightly
approximates this unique solution. The proposed approach is equivalent to
solving a finite-dimensional convex quadratically constrained quadratic
program. The efficiency of the discussed method is verified by several
numerical examples.
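A minimal sketch of the gridding idea behind such finite-dimensional approximations (the impulse response, grid size, and bound `gamma` are all illustrative): the infinite family of frequency-domain constraints is evaluated on finitely many frequencies.

```python
import numpy as np

n = 40
g = 0.5 * 0.8 ** np.arange(n)          # an FIR impulse response (illustrative)

# Frequency response G(e^{jw}) = sum_k g_k e^{-jwk}, evaluated on a finite grid
# of frequencies that approximates the infinite constraint family |G| <= gamma.
w = np.linspace(0.0, np.pi, 512)
G = np.exp(-1j * np.outer(w, np.arange(n))) @ g

gamma = 3.0                            # assumed H-infinity bound
print(np.max(np.abs(G)) <= gamma)      # True for this response and bound
```

In the paper's formulation each grid frequency contributes one quadratic constraint on the impulse response coefficients, which is how the estimation problem becomes a finite-dimensional QCQP.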