4 research outputs found

    On a class of optimization-based robust estimators

    We consider in this paper the problem of estimating a parameter matrix from observations which are affected by two types of noise components: (i) a sparse noise sequence which, whenever nonzero, can have arbitrarily large amplitude, and (ii) a dense and bounded noise sequence of "moderate" magnitude. This is termed a robust regression problem. To tackle it, a quite general optimization-based framework is proposed and analyzed. When only the sparse noise is present, a sufficient bound is derived on the number of nonzero elements in the sparse noise sequence that the estimator can accommodate while still returning the true parameter matrix. Whereas almost all restricted isometry-based bounds from the literature are not verifiable, our bound can be easily computed by solving a convex optimization problem. Moreover, empirical evidence suggests that it is generally tight. If, in addition to the sparse noise sequence, the training data are affected by bounded dense noise, we derive an upper bound on the estimation error. Comment: To appear in IEEE Transactions on Automatic Control
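    A minimal sketch of one convex instance of such an optimization-based estimator, assuming a sum-of-norms (group least-absolute-deviations) fit computed with cvxpy; the data model, dimensions, and the choice of this particular loss are illustrative assumptions, not necessarily the paper's exact formulation.

    # Sketch: estimate A from y_t = A x_t + f_t + e_t, where f_t is sparse
    # (arbitrarily large when nonzero) and e_t is dense but bounded.
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    n, m, N = 3, 2, 100                      # output dim, input dim, samples
    A_true = rng.standard_normal((n, m))
    X = rng.standard_normal((m, N))
    E = 0.01 * rng.standard_normal((n, N))   # dense, bounded noise
    F = np.zeros((n, N))                     # sparse gross errors
    idx = rng.choice(N, size=10, replace=False)
    F[:, idx] = 50.0 * rng.standard_normal((n, 10))
    Y = A_true @ X + F + E

    A = cp.Variable((n, m))
    # Sum of column-wise residual norms: a nonsmooth convex surrogate for
    # counting the samples hit by gross errors.
    cost = cp.sum(cp.norm(Y - A @ X, 2, axis=0))
    cp.Problem(cp.Minimize(cost)).solve()
    print("estimation error:", np.linalg.norm(A.value - A_true))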

    On the exact minimization of saturated loss functions for robust regression and subspace estimation

    This paper deals with robust regression and subspace estimation, and more precisely with the problem of minimizing a saturated loss function. In particular, we focus on computational complexity issues and show that an exact algorithm with polynomial time complexity in the number of data points can be devised for robust regression and subspace estimation. This result is obtained by adopting a classification point of view and relating the problems to the search for a linear model that can approximate the maximal number of points with a given error. Approximate variants of the algorithms based on random sampling are also discussed, and experiments show that they offer an accuracy gain over traditional RANSAC for similar algorithmic simplicity. Comment: Pattern Recognition Letters, Elsevier, 201
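    To illustrate the random-sampling variants, here is a minimal sketch (with assumed names and a saturated absolute loss) that fits a linear model exactly on random subsamples and keeps the candidate with the smallest saturated loss over all points; the paper's exact polynomial-time algorithm is not reproduced here.

    import numpy as np

    def saturated_loss(w, X, y, eps):
        # Residuals are clipped at eps, so each gross outlier contributes
        # at most eps to the total loss.
        return np.minimum(np.abs(y - X @ w), eps).sum()

    def random_sampling_fit(X, y, eps, n_iter=200, seed=None):
        # RANSAC-like search: fit exactly on d random points, score the
        # candidate by its saturated loss on all points, keep the best.
        rng = np.random.default_rng(seed)
        N, d = X.shape
        best_w, best_loss = None, np.inf
        for _ in range(n_iter):
            idx = rng.choice(N, size=d, replace=False)
            try:
                w = np.linalg.solve(X[idx], y[idx])  # exact fit on d points
            except np.linalg.LinAlgError:
                continue                             # singular subsample
            loss = saturated_loss(w, X, y, eps)
            if loss < best_loss:
                best_w, best_loss = w, loss
        return best_w, best_loss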

    Analysis of A Nonsmooth Optimization Approach to Robust Estimation

    In this paper, we consider the problem of identifying a linear map from measurements which are subject to intermittent and arbitrarily large errors. This is a fundamental problem in many estimation-related applications such as fault detection, state estimation in lossy networks, hybrid system identification, and robust estimation. The problem is hard because it exhibits some intrinsic combinatorial features. Therefore, obtaining an effective solution necessitates relaxations that are both solvable at a reasonable cost and effective in the sense that they can return the true parameter vector. The current paper discusses a nonsmooth convex optimization approach and provides a new analysis of its behavior. In particular, it is shown that under appropriate conditions on the data, an exact estimate can be recovered from data corrupted by a large (even infinite) number of gross errors. Comment: 17 pages, 9 figures
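    A minimal sketch, assuming the nonsmooth convex relaxation takes the common least-absolute-deviations form min_theta ||y - X theta||_1 (cvxpy, data, and dimensions are illustrative assumptions). Under the kind of conditions analyzed in the paper, the minimizer coincides with the true parameter vector despite the gross errors.

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(1)
    N, d = 200, 4
    X = rng.standard_normal((N, d))
    theta_true = rng.standard_normal(d)
    y = X @ theta_true
    hit = rng.choice(N, size=40, replace=False)
    y[hit] += 100.0 * rng.standard_normal(40)   # intermittent gross errors

    theta = cp.Variable(d)
    # Nonsmooth convex relaxation of the combinatorial error-rejection problem.
    cp.Problem(cp.Minimize(cp.norm1(y - X @ theta))).solve()
    print("recovery error:", np.linalg.norm(theta.value - theta_true))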

    Subspace clustering through parametric representation and sparse optimization

    We consider the problem of recovering a finite number of linear subspaces from a collection of unlabeled data points that lie in the union of the subspaces, where it is not known which data point originates from which subspace. To address this challenge, we show that the clustering problem is amenable to a sparse optimization formulation. Considering a candidate subspace and the distances of the data points to that subspace, the proposed method rests on maximizing the number of zero distances, which can be relaxed into a convex optimization problem. The efficiency of the relaxation can be significantly increased by solving a sequence of reweighted convex optimization problems.
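    A minimal illustrative sketch of the reweighted relaxation for a single candidate subspace, assuming a hyperplane parameterized by its normal vector b with a linear normalization b_ref^T b = 1 to exclude b = 0 (the paper's exact parametrization may differ; cvxpy is assumed). Maximizing the number of zero distances |b^T x_i| is relaxed into a weighted l1 minimization, sharpened over a few reweighting iterations.

    import numpy as np
    import cvxpy as cp

    def reweighted_hyperplane(X, b_ref, n_iter=5, delta=1e-3):
        # X: (d, N) data matrix; b_ref: reference direction fixing the scale.
        d, N = X.shape
        w = np.ones(N)
        b = cp.Variable(d)
        for _ in range(n_iter):
            dists = cp.abs(X.T @ b)   # |b^T x_i|: proportional to distance
            prob = cp.Problem(cp.Minimize(cp.sum(cp.multiply(w, dists))),
                              [b_ref @ b == 1])
            prob.solve()
            # Reweight: points already near the subspace get large weights,
            # which pushes their distances toward exactly zero.
            w = 1.0 / (np.abs(X.T @ b.value) + delta)
        return b.value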