
    N2SID: Nuclear Norm Subspace Identification

    The identification of multivariable state-space models in innovation form is solved in a subspace identification framework using convex nuclear norm optimization. The convex optimization approach makes it possible to impose constraints on the unknown matrices in the data equation characterizing subspace identification methods, such as the lower block-triangular Toeplitz structure of the weighting matrix constructed from the Markov parameters of the unknown observer. The classical use of instrumental variables to remove the influence of the innovation term on the data equation in subspace identification is avoided. Avoiding the instrumental-variable projection step has the potential to improve the accuracy of the estimated model predictions, especially for short data sequences. This is illustrated using a data set from the DaISy library. An efficient implementation in the framework of the Alternating Direction Method of Multipliers (ADMM) is presented and used in the validation study.
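
    As a rough illustration of the optimization machinery the abstract refers to, the sketch below implements generic ADMM with singular value thresholding for a nuclear-norm-regularized least-squares problem min_X 0.5*||AX - B||_F^2 + lam*||X||_*. It is not the N2SID data equation itself: the matrices A and B, the penalty weight, and all other parameter values are placeholders chosen only to show the X-update / prox / dual-update structure.

```python
import numpy as np

def svt(M, tau):
    # Singular value thresholding: proximal operator of tau * nuclear norm.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def admm_nuclear_ls(A, B, lam=1.0, rho=1.0, iters=300):
    # ADMM for min_X 0.5*||A X - B||_F^2 + lam*||X||_*
    # using the splitting X = Z; U is the scaled dual variable.
    n, m = A.shape[1], B.shape[1]
    Z = np.zeros((n, m))
    U = np.zeros((n, m))
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))  # factor once, reuse
    AtB = A.T @ B
    for _ in range(iters):
        rhs = AtB + rho * (Z - U)
        X = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # X-update (ridge solve)
        Z = svt(X + U, lam / rho)                          # Z-update (prox step)
        U = U + X - Z                                      # scaled dual update
    return Z

# Toy usage: recover a low-rank X from noisy linear measurements.
rng = np.random.default_rng(0)
X_true = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))  # rank 2
A = rng.standard_normal((40, 20))
B = A @ X_true + 0.01 * rng.standard_normal((40, 15))
X_hat = admm_nuclear_ls(A, B, lam=1.0)
print(np.linalg.svd(X_hat, compute_uv=False)[:5].round(2))  # few dominant singular values
```

    Caching the Cholesky factor of A^T A + rho*I makes each X-update a pair of triangular solves, which is what keeps per-iteration cost low in ADMM schemes of this kind.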

    Maximum Entropy Vector Kernels for MIMO system identification

    Recent contributions have framed linear system identification as a nonparametric regularized inverse problem. Relying on ℓ2-type regularization, which accounts for the stability and smoothness of the impulse response to be estimated, these approaches have been shown to be competitive with classical parametric methods. In this paper, adopting Maximum Entropy arguments, we derive a new ℓ2 penalty from a vector-valued kernel; to do so we exploit the structure of the Hankel matrix, thus simultaneously controlling the complexity (measured by the McMillan degree), stability, and smoothness of the identified models. As a special case we recover the nuclear norm penalty on the squared block Hankel matrix. In contrast with previous literature on reweighted nuclear norm penalties, our kernel is described by a small number of hyper-parameters, which are iteratively updated through marginal likelihood maximization; constraining the structure of the kernel acts as a (hyper)regularizer that helps control the effective degrees of freedom of our estimator. To optimize the marginal likelihood we adapt a Scaled Gradient Projection (SGP) algorithm, which proves significantly cheaper computationally than other first- and second-order off-the-shelf optimization methods. The paper also contains an extensive comparison with many state-of-the-art methods on several Monte Carlo studies, which confirms the effectiveness of our procedure.
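
    To make the ℓ2 kernel-regularization idea concrete, here is a minimal single-output sketch: a first-order "stable" kernel of TC type regularizes an FIR estimate, and its decay hyper-parameter is picked by maximizing the marginal likelihood over a grid. The paper's vector-valued maximum-entropy kernel, Hankel-matrix structure, and SGP updates are not reproduced; all function names, parameter values, and the grid search are illustrative assumptions.

```python
import numpy as np

def tc_kernel(n, alpha, c=1.0):
    # First-order "stable" kernel: K[i, j] = c * alpha**max(i, j),
    # encoding exponential decay (stability) and smoothness of g.
    idx = np.arange(1, n + 1)
    return c * alpha ** np.maximum.outer(idx, idx)

def neg_log_marglik(y, Phi, K, sigma2):
    # Negative log marginal likelihood of y under g ~ N(0, K), noise ~ N(0, sigma2 I).
    S = Phi @ K @ Phi.T + sigma2 * np.eye(len(y))
    _, logdet = np.linalg.slogdet(S)
    return 0.5 * (logdet + y @ np.linalg.solve(S, y))

def estimate_fir(u, y, n=50, sigma2=0.1):
    # Toeplitz regressor matrix: y[t] ~ sum_k g[k] * u[t - k].
    N = len(y)
    Phi = np.zeros((N, n))
    for k in range(n):
        Phi[k:, k] = u[:N - k]
    # Hyper-parameter tuned by marginal likelihood on a grid
    # (the paper uses a Scaled Gradient Projection method instead).
    alphas = np.linspace(0.5, 0.99, 25)
    alpha = min(alphas, key=lambda a: neg_log_marglik(y, Phi, tc_kernel(n, a), sigma2))
    K = tc_kernel(n, alpha)
    S = Phi @ K @ Phi.T + sigma2 * np.eye(N)
    return K @ Phi.T @ np.linalg.solve(S, y)  # posterior-mean impulse response

# Toy usage: identify a decaying impulse response from input-output data.
rng = np.random.default_rng(1)
g_true = 0.8 ** np.arange(50) * np.sin(0.5 * np.arange(50))
u = rng.standard_normal(400)
y = np.convolve(u, g_true)[:400] + 0.05 * rng.standard_normal(400)
g_hat = estimate_fir(u, y)
print(round(float(np.linalg.norm(g_hat - g_true) / np.linalg.norm(g_true)), 3))
```

    The kernel here has a single hyper-parameter, echoing the abstract's point that a small, structured hyper-parameter set acts as a (hyper)regularizer on the effective degrees of freedom of the estimator.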