Segmentation of ARX systems through SDP-relaxation techniques
Segmentation of ARX models can be formulated as a combinatorial minimization problem in terms of the ℓ0-norm of the parameter variations and the ℓ2-loss of the prediction error. A typical approach to computing an approximate solution to such a problem is based on ℓ1-relaxation. Unfortunately, the level of accuracy of the ℓ1-relaxation in approximating the optimal solution of the original combinatorial problem is not easy to assess. In this poster, an alternative approach is proposed which provides an attractive solution to the ℓ0-norm minimization problem associated with segmentation of ARX models.
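As a concrete reference point for the combinatorial problem, here is a minimal sketch in Python: for small signals, the ℓ0-penalized segmentation can be solved exactly by optimal-partitioning dynamic programming, simplified here to changes in a constant mean rather than full ARX parameter vectors. The names `l0_segment`, `seg_cost`, and `beta` are illustrative assumptions, not from the poster, which proposes an SDP-relaxation instead.

```python
def seg_cost(y, i, j):
    # squared-error cost of fitting a single constant mean to y[i:j]
    seg = y[i:j]
    m = sum(seg) / len(seg)
    return sum((v - m) ** 2 for v in seg)

def l0_segment(y, beta):
    # exact DP for: minimize sum of segment losses + beta * (number of changes)
    # F[j] = optimal cost of segmenting y[0:j]; O(n^3) -- fine for small n
    n = len(y)
    F = [0.0] * (n + 1)
    prev = [0] * (n + 1)
    for j in range(1, n + 1):
        best, arg = float("inf"), 0
        for i in range(j):
            c = F[i] + seg_cost(y, i, j) + (beta if i > 0 else 0.0)
            if c < best:
                best, arg = c, i
        F[j], prev[j] = best, arg
    # backtrack the change points
    cps, j = [], n
    while j > 0:
        i = prev[j]
        if i > 0:
            cps.append(i)
        j = i
    return sorted(cps)

y = [0.0] * 20 + [5.0] * 20   # one jump in the mean at t = 20
print(l0_segment(y, beta=1.0))  # → [20]
```

The ℓ1-relaxation replaces the per-change penalty with a penalty on the size of the parameter variations, which is convex but, as the abstract notes, hard to certify against this exact combinatorial optimum.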
An ADMM Algorithm for a Class of Total Variation Regularized Estimation Problems
We present an alternating augmented Lagrangian method for convex optimization
problems where the cost function is the sum of two terms, one that is separable
in the variable blocks, and a second that is separable in the difference
between consecutive variable blocks. Examples of such problems include Fused
Lasso estimation, total variation denoising, and multi-period portfolio
optimization with transaction costs. In each iteration of our method, the first
step involves separately optimizing over each variable block, which can be
carried out in parallel. The second step is not separable in the variables, but
can be carried out very efficiently. We apply the algorithm to segmentation of
data based on changes in mean (ℓ1 mean filtering) or changes in variance (ℓ1
variance filtering). In a numerical example, we show that our implementation is
around 10,000 times faster than the generic optimization solver SDPT3.
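The ℓ1 mean filtering problem above can be sketched in a few lines of Python. This uses a generic ADMM splitting z = Dx with a direct tridiagonal solve in the x-step, not the paper's block-separable scheme; `tv_denoise`, `thomas`, and the parameter values are illustrative assumptions.

```python
def thomas(sub, diag, sup, rhs):
    # Thomas algorithm: solve a tridiagonal linear system in O(n)
    n = len(rhs)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m if i < n - 1 else 0.0
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def tv_denoise(y, lam, rho=1.0, iters=300):
    # ADMM for: minimize 0.5*||x - y||^2 + lam * sum_t |x[t+1] - x[t]|
    n = len(y)
    z, u = [0.0] * (n - 1), [0.0] * (n - 1)
    # I + rho * D^T D is tridiagonal (D = first-difference operator)
    diag = [1.0 + 2.0 * rho] * n
    diag[0] = diag[-1] = 1.0 + rho
    sub, sup = [-rho] * n, [-rho] * n
    x = list(y)
    for _ in range(iters):
        # x-update: (I + rho D^T D) x = y + rho D^T (z - u)
        rhs = list(y)
        for i in range(n - 1):
            v = rho * (z[i] - u[i])
            rhs[i] -= v
            rhs[i + 1] += v
        x = thomas(sub, diag, sup, rhs)
        # z-update: soft-threshold Dx + u; u-update: dual ascent
        for i in range(n - 1):
            t = x[i + 1] - x[i] + u[i]
            s = max(0.0, abs(t) - lam / rho)
            z[i] = s if t >= 0 else -s
            u[i] = t - z[i]
    return x

y = [0.0] * 10 + [10.0] * 10
x = tv_denoise(y, lam=1.0)
# the jump at t = 10 is kept but shrunk: the two levels approach 0.1 and 9.9
```

The x-update costs O(n) via the tridiagonal solve, and the z- and u-updates are elementwise, which is the structural point the abstract exploits: each ADMM step is cheap even when a generic SDP solver is not.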
Sparse Iterative Learning Control with Application to a Wafer Stage: Achieving Performance, Resource Efficiency, and Task Flexibility
Trial-varying disturbances are a key concern in Iterative Learning Control
(ILC) and may lead to inefficient and expensive implementations and severe
performance deterioration. The aim of this paper is to develop a general
framework for optimization-based ILC that allows for enforcing additional
structure, including sparsity. The proposed method enforces sparsity in a
generalized setting through convex relaxations using norms. The
proposed ILC framework is applied to the optimization of sampling sequences for
resource efficient implementation, trial-varying disturbance attenuation, and
basis function selection. The framework has great potential in control
applications such as mechatronics, as is confirmed through an application to a
wafer stage.
Comment: 12 pages, 14 figures
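The sparsity-via-convex-relaxation idea can be illustrated with an ℓ1-regularized learning update: a proximal-gradient (ISTA) iteration whose soft-thresholding step drives unneeded input samples toward exactly zero. This is a toy single-axis sketch under assumed names (`sparse_ilc`, impulse response `g`), not the paper's ILC framework.

```python
def soft(v, t):
    # elementwise soft-thresholding: the prox operator of t * ||.||_1
    out = []
    for x in v:
        s = max(0.0, abs(x) - t)
        out.append(s if x >= 0 else -s)
    return out

def sparse_ilc(g, r, lam, alpha, trials=300):
    # y = conv(g, u) truncated to len(r); each "trial" takes one
    # proximal-gradient step on 0.5*||r - y||^2 + lam*||u||_1
    n = len(r)
    u = [0.0] * n
    for _ in range(trials):
        y = [sum(g[j] * u[i - j] for j in range(len(g)) if i - j >= 0)
             for i in range(n)]
        e = [r[i] - y[i] for i in range(n)]
        # gradient of 0.5*||e||^2 w.r.t. u is -G^T e; gradient step, then prox
        step = [u[i] + alpha * sum(g[j] * e[i + j]
                                   for j in range(len(g)) if i + j < n)
                for i in range(n)]
        u = soft(step, alpha * lam)
    return u

g = [1.0, 0.5]                        # assumed impulse response
r = [0, 0, 1, 1, 1, 0, 0, 0, 0, 0]    # assumed reference trajectory
u = sparse_ilc(g, r, lam=0.1, alpha=0.3)
# small lam: good tracking with a sparse input; for lam large enough
# (e.g. lam=2.0 here) the update leaves u identically zero
```

Raising `lam` trades tracking performance for resource efficiency (fewer nonzero actuation samples), which mirrors the performance/resource trade-off the abstract describes.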
Dynamic network identification from non-stationary vector autoregressive time series
Author's accepted manuscript (postprint). © 2018 IEEE.