System identification with missing data via nuclear norm regularization
The application of nuclear norm regularization to system identification was recently shown to be a useful method for identifying low-order linear models. In this paper, we consider nuclear norm regularization for the identification of LTI systems with missing data under a total squared error constraint. The missing data problem is of ongoing interest because the need to analyze incomplete data sets arises frequently in diverse fields such as chemistry, psychometrics and satellite imaging. By casting system identification as a convex optimization problem, nuclear norm regularization can be applied to identify the system in one step, i.e., without imputation of the missing data. Our exploratory work makes use of experimental data sets taken from an open system identification database, DaISy, to compare the proposed method, named NucID, to the standard techniques N4SID, prediction error minimization and expectation conditional maximization via linear regression. NucID is found to consistently identify systems with missing data within the imposed error tolerance, a task at which the standard methods sometimes fail, and to be particularly effective when the data is missing with patterns, e.g., on multi-rate systems, where it clearly outperforms existing procedures.
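The nuclear norm enters such convex formulations through its proximal operator, singular value thresholding. The following is a minimal numpy sketch of that operator (an illustration of the general technique, not the NucID implementation; all data is invented):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of the nuclear
    norm. Each singular value of M is shrunk toward zero by tau, which
    promotes low-rank solutions."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Shrinking past all singular values yields the zero matrix.
Z = svt(np.eye(3), 2.0)

# A rank-1 matrix plus small noise is pushed back to rank 1.
rng = np.random.default_rng(0)
M = np.outer(rng.standard_normal(20), rng.standard_normal(15))
M_noisy = M + 0.01 * rng.standard_normal((20, 15))
L = svt(M_noisy, 0.5)
rank = int(np.sum(np.linalg.svd(L, compute_uv=False) > 1e-8))
```

Because the noise singular values sit far below the threshold while the signal singular value sits far above it, the thresholding recovers a rank-1 estimate.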
Reweighted nuclear norm regularization: A SPARSEVA approach
The aim of this paper is to develop a method to estimate high-order FIR and
ARX models using least squares with reweighted nuclear norm regularization.
Typically, the choice of the tuning parameter in the reweighting scheme is
computationally expensive, hence we propose the use of the SPARSEVA (SPARSe
Estimation based on a VAlidation criterion) framework to overcome this problem.
Furthermore, we suggest the use of the prediction error criterion (PEC) to
select the tuning parameter in the SPARSEVA algorithm. Numerical examples
demonstrate the effectiveness of this method, which has close ties with the
traditional technique of cross-validation but requires far less computation.
Comment: This paper is accepted and will be published in The Proceedings of
the 17th IFAC Symposium on System Identification (SYSID 2015), Beijing,
China, 201
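The least-squares backbone of such high-order FIR estimation can be sketched as follows (a toy setup with invented signals; the reweighted nuclear norm penalty and the SPARSEVA tuning layer are not reproduced here):

```python
import numpy as np

# Invented toy data: a decaying true impulse response, white input.
rng = np.random.default_rng(1)
n, N = 10, 200
g_true = 0.8 ** np.arange(n)
u = rng.standard_normal(N)

# Toeplitz regression matrix: row t holds u[t], u[t-1], ..., u[t-n+1].
Phi = np.column_stack(
    [np.concatenate([np.zeros(k), u[: N - k]]) for k in range(n)]
)
y = Phi @ g_true + 0.05 * rng.standard_normal(N)

# Plain least-squares FIR estimate; the paper above would add a
# (reweighted) nuclear norm penalty on top of this fit.
g_ls, *_ = np.linalg.lstsq(Phi, y, rcond=None)
```

With a white input and a long data record, the plain least-squares estimate is already close to the true response; the regularization matters most for short or ill-conditioned data.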
Maximum Entropy Vector Kernels for MIMO system identification
Recent contributions have framed linear system identification as a
nonparametric regularized inverse problem. Relying on ℓ2-type
regularization which accounts for the stability and smoothness of the impulse
response to be estimated, these approaches have been shown to be competitive
w.r.t. classical parametric methods. In this paper, adopting Maximum Entropy
arguments, we derive a new penalty based on a vector-valued
kernel; to do so we exploit the structure of the Hankel matrix, thus
controlling at the same time complexity, measured by the McMillan degree,
stability and smoothness of the identified models. As a special case we recover
the nuclear norm penalty on the squared block Hankel matrix. In contrast with
previous literature on reweighted nuclear norm penalties, our kernel is
described by a small number of hyper-parameters, which are iteratively updated
through marginal likelihood maximization; constraining the structure of the
kernel acts as a (hyper)regularizer which helps controlling the effective
degrees of freedom of our estimator. To optimize the marginal likelihood we
adapt a Scaled Gradient Projection (SGP) algorithm which is proved to be
significantly computationally cheaper than other first and second order
off-the-shelf optimization methods. The paper also contains an extensive
comparison with many state-of-the-art methods on several Monte Carlo studies,
which confirms the effectiveness of our procedure.
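As a rough illustration of kernel-based regularization of this flavor, the sketch below uses the scalar first-order stable-spline ("TC") kernel rather than the paper's vector-valued kernel, with invented signals and hyper-parameters fixed by hand instead of marginal likelihood maximization:

```python
import numpy as np

# Invented toy data, as in a standard FIR identification setup.
rng = np.random.default_rng(2)
n, N = 20, 300
g_true = 0.7 ** np.arange(n)
u = rng.standard_normal(N)
Phi = np.column_stack(
    [np.concatenate([np.zeros(k), u[: N - k]]) for k in range(n)]
)
y = Phi @ g_true + 0.1 * rng.standard_normal(N)

# First-order stable-spline ("TC") kernel: K[i, j] = lam**max(i, j)
# encodes exponentially decaying (stable), smooth impulse responses.
lam, gamma = 0.8, 1e-2
idx = np.arange(n)
K = lam ** np.maximum.outer(idx, idx)

# Regularized estimate g_hat = (Phi'Phi + gamma*K^{-1})^{-1} Phi'y,
# computed in the equivalent dual form that avoids inverting K.
g_hat = K @ Phi.T @ np.linalg.solve(Phi @ K @ Phi.T + gamma * np.eye(N), y)
```

The kernel plays the role of a prior covariance: its decay rate `lam` and the regularization weight `gamma` are the hyper-parameters that approaches like the one above tune by maximizing the marginal likelihood.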
Robust Subspace System Identification via Weighted Nuclear Norm Optimization
Subspace identification is a classical and very well studied problem in
system identification. The problem was recently posed as a convex optimization
problem via the nuclear norm relaxation. Inspired by robust PCA, we extend this
framework to handle outliers. The proposed framework takes the form of a convex
optimization problem with an objective that trades off fit, rank and sparsity.
As in robust PCA, it can be problematic to find a suitable regularization
parameter. We show how the space in which a suitable parameter should be sought
can be limited to a bounded open set of the two dimensional parameter space. In
practice, this is very useful since it restricts the parameter space that is
needed to be surveyed.Comment: Submitted to the IFAC World Congress 201
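The fit/rank/sparsity trade-off can be sketched with a toy alternating proximal scheme in numpy (an illustration of the robust-PCA-style objective, not the paper's algorithm or its parameter-selection result; all data and parameter values are invented):

```python
import numpy as np

def robust_pca_sketch(M, lam=0.2, tau=0.5, iters=100):
    """Alternating proximal sketch for the robust-PCA-style objective
        min_{L,S} ||L||_* + lam*||S||_1 + ||M - L - S||_F^2 / (2*tau),
    trading off rank (nuclear norm), sparsity, and data fit."""
    S = np.zeros_like(M)
    for _ in range(iters):
        # L-step: singular value thresholding of M - S (prox of tau*||.||_*).
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
        # S-step: entrywise soft-thresholding of M - L (prox of lam*tau*||.||_1).
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam * tau, 0.0)
    return L, S

rng = np.random.default_rng(4)
clean = np.outer(rng.standard_normal(30), rng.standard_normal(20))  # rank 1
M = clean.copy()
M[0, 0] += 12.0                      # a few gross outliers
M[5, 7] -= 10.0
M[20, 3] += 10.0
L, S = robust_pca_sketch(M)
```

At convergence the gross outliers migrate into the sparse component S, leaving L close to the underlying rank-1 matrix; how to choose `lam` and `tau` is exactly the regularization-parameter question the paper addresses.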
Matrix Completion on Graphs
The problem of finding the missing values of a matrix given a few of its
entries, called matrix completion, has gathered a lot of attention in
recent years. Although the problem under the standard low-rank assumption is
NP-hard, Candès and Recht showed that it can be exactly relaxed if the number
of observed entries is sufficiently large. In this work, we introduce a novel
matrix completion model that makes use of proximity information about rows and
columns by assuming they form communities. This assumption makes sense in
several real-world problems like in recommender systems, where there are
communities of people sharing preferences, while products form clusters that
receive similar ratings. Our main goal is thus to find a low-rank solution that
is structured by the proximities of rows and columns encoded by graphs. We
borrow ideas from manifold learning to constrain our solution to be smooth on
these graphs, in order to implicitly force row and column proximities. Our
matrix recovery model is formulated as a convex non-smooth optimization
problem, for which a well-posed iterative scheme is provided. We study and
evaluate the proposed matrix completion on synthetic and real data, showing
that the proposed structured low-rank recovery model outperforms the standard
matrix completion model in many situations.
Comment: Version of NIPS 2014 workshop "Out of the Box: Robustness in High
Dimension"
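A toy numpy sketch of the graph-smoothness idea (Laplacian penalties on rows and columns added to a data-fit term; the graphs, signals and parameters are all invented, and the model's low-rank term is omitted for brevity):

```python
import numpy as np

# Invented toy problem: a blocky matrix whose rows and columns fall into
# communities, observed at roughly half of its entries.
rng = np.random.default_rng(3)
m, n = 12, 8
M = np.outer(np.repeat([1.0, 2.0, 3.0], 4), np.repeat([1.0, -1.0], 4))
mask = rng.random((m, n)) < 0.5              # observed-entry indicator

def path_laplacian(k):
    """Laplacian of a path graph on k nodes (a stand-in for the row and
    column community graphs the model assumes)."""
    A = np.diag(np.ones(k - 1), 1) + np.diag(np.ones(k - 1), -1)
    return np.diag(A.sum(axis=1)) - A

Lr, Lc = path_laplacian(m), path_laplacian(n)

# Gradient descent on  0.5*||mask*(X - M)||_F^2
#   + 0.5*alpha*( tr(X' Lr X) + tr(X Lc X') ),
# i.e. data fit on observed entries plus smoothness on both graphs.
alpha, step = 0.02, 0.1
X = np.zeros((m, n))
for _ in range(500):
    grad = mask * (X - M) + alpha * (Lr @ X + X @ Lc)
    X -= step * grad

obs_err = np.abs(X - M)[mask].mean()
```

The Laplacian quadratic forms penalize differences between graph-adjacent rows and columns, which is what "smooth on these graphs" means in the abstract; the full model combines this with the low-rank (nuclear norm) objective.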