862 research outputs found

    Analysis of A Nonsmooth Optimization Approach to Robust Estimation

    Full text link
    In this paper, we consider the problem of identifying a linear map from measurements that are subject to intermittent and arbitrarily large errors. This is a fundamental problem in many estimation-related applications such as fault detection, state estimation in lossy networks, hybrid system identification, and robust estimation. The problem is hard because it exhibits some intrinsic combinatorial features. Therefore, obtaining an effective solution necessitates relaxations that are both solvable at a reasonable cost and effective in the sense that they can return the true parameter vector. The current paper discusses a nonsmooth convex optimization approach and provides a new analysis of its behavior. In particular, it is shown that under appropriate conditions on the data, an exact estimate can be recovered from data corrupted by a large (even infinite) number of gross errors. Comment: 17 pages, 9 figures.
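
    A minimal sketch of one nonsmooth convex relaxation in this spirit, minimizing the l1 norm of the residuals so that a sparse set of gross errors is absorbed without biasing the estimate; the data generation and variable names are illustrative, and this is not claimed to be the exact estimator analyzed in the paper.

        # Sketch: estimate a linear map from measurements corrupted by sparse,
        # arbitrarily large errors via l1 (least-absolute-deviations) regression.
        import numpy as np
        import cvxpy as cp

        rng = np.random.default_rng(0)
        n, d = 200, 5
        X = rng.standard_normal((n, d))          # regressors
        theta_true = rng.standard_normal(d)      # true parameter vector
        y = X @ theta_true + 0.01 * rng.standard_normal(n)

        # corrupt a fraction of the measurements with gross errors
        bad = rng.choice(n, size=n // 5, replace=False)
        y[bad] += 50.0 * rng.standard_normal(bad.size)

        theta = cp.Variable(d)
        cp.Problem(cp.Minimize(cp.norm(y - X @ theta, 1))).solve()
        print("estimation error:", np.linalg.norm(theta.value - theta_true))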

    Risk Bounds for Learning Multiple Components with Permutation-Invariant Losses

    Get PDF
    This paper proposes a simple approach to derive efficient error bounds for learning multiple components with sparsity-inducing regularization. We show that for such regularization schemes, known decompositions of the Rademacher complexity over the components can be used more efficiently to obtain tighter bounds with little additional effort. We give examples of application to switching regression and center-based clustering/vector quantization. Then, the complete workflow is illustrated on the problem of subspace clustering, for which decomposition results were not previously available. For all these problems, the proposed approach yields risk bounds with mild dependencies on the number of components and completely removes this dependence for nonconvex regularization schemes that could not be handled by previous methods.
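
    For concreteness, two standard permutation-invariant losses over C components, matching the switching-regression and clustering examples mentioned above (notation is ours, not necessarily the paper's):

        % Switching regression: the loss only sees the best-fitting component.
        \[
          \ell\big((f_1,\dots,f_C),(x,y)\big) \;=\; \min_{1 \le j \le C} \big(y - f_j(x)\big)^2
        \]
        % Center-based clustering / vector quantization: distance to the nearest center.
        \[
          \ell\big((c_1,\dots,c_C),x\big) \;=\; \min_{1 \le j \le C} \lVert x - c_j \rVert^2
        \]
        % Both are unchanged under any relabeling (permutation) of the C components.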

    Fitting Jump Models

    Get PDF
    We describe a new framework for fitting jump models to a sequence of data. The key idea is to alternate between minimizing a loss function to fit multiple model parameters, and minimizing a discrete loss function to determine which set of model parameters is active at each data point. The framework is quite general and encompasses popular classes of models, such as hidden Markov models and piecewise affine models. The choice of the loss functions to minimize determines the shape of the resulting jump model. Comment: Accepted for publication in Automatica.
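
    A toy sketch of this alternation for the simplest case of piecewise-constant means with a fixed per-switch penalty: the parameter fit is a per-mode average and the mode sequence is re-estimated by dynamic programming. Names and the stopping rule are illustrative, and the paper's framework covers far more general losses and models.

        # Alternating scheme: (1) fit per-mode parameters given the mode sequence,
        # (2) re-assign the mode sequence by dynamic programming with a switch cost.
        import numpy as np

        def fit_jump_means(y, K=2, lam=1.0, n_iter=10, seed=0):
            rng = np.random.default_rng(seed)
            T = len(y)
            s = rng.integers(K, size=T)                    # initial mode sequence
            theta = np.zeros(K)
            for _ in range(n_iter):
                for k in range(K):                         # step 1: fit parameters
                    theta[k] = y[s == k].mean() if np.any(s == k) else y.mean()
                loss = (y[:, None] - theta[None, :]) ** 2  # T x K fitting losses
                V = np.zeros((T, K))                       # step 2: DP over mode paths
                ptr = np.zeros((T, K), dtype=int)
                V[0] = loss[0]
                for t in range(1, T):
                    trans = V[t - 1][:, None] + lam * (1 - np.eye(K))
                    ptr[t] = trans.argmin(axis=0)
                    V[t] = loss[t] + trans.min(axis=0)
                s[-1] = V[-1].argmin()
                for t in range(T - 2, -1, -1):             # backtrack the best path
                    s[t] = ptr[t + 1, s[t + 1]]
            return theta, s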

    Algorithmic and Statistical Perspectives on Large-Scale Data Analysis

    Full text link
    In recent years, ideas from statistics and scientific computing have begun to interact in increasingly sophisticated and fruitful ways with ideas from computer science and the theory of algorithms, aiding the development of improved worst-case algorithms that are useful for large-scale scientific and Internet data analysis problems. In this chapter, I describe two recent examples that drew on ideas from both areas and that may serve as a model for exploiting complementary algorithmic and statistical perspectives in order to solve applied large-scale data analysis problems: one has to do with selecting good columns or features from a (DNA Single Nucleotide Polymorphism) data matrix, and the other with selecting good clusters or communities from a data graph representing a social or information network. Comment: 33 pages. To appear in Uwe Naumann and Olaf Schenk, editors, "Combinatorial Scientific Computing," Chapman and Hall/CRC Press, 201
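
    One standard algorithmic primitive in this line of work is column selection based on statistical leverage scores; the sketch below illustrates that idea and is not claimed to be the chapter's exact procedure.

        # Sketch: sample c columns of a data matrix with probabilities proportional
        # to their rank-k statistical leverage scores.
        import numpy as np

        def leverage_score_columns(A, k, c, seed=0):
            rng = np.random.default_rng(seed)
            _, _, Vt = np.linalg.svd(A, full_matrices=False)
            scores = np.sum(Vt[:k] ** 2, axis=0)     # leverage score of each column
            probs = scores / scores.sum()
            return rng.choice(A.shape[1], size=c, replace=False, p=probs), probs

        A = np.random.default_rng(1).standard_normal((100, 50))
        cols, probs = leverage_score_columns(A, k=5, c=10)
        print(sorted(cols.tolist()))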

    Hybrid System Identification of Manual Tracking Submovements in Parkinson's Disease

    Get PDF
    Seemingly smooth motions in manual tracking (e.g., following a moving target with a joystick) are actually sequences of submovements: short, open-loop motions that have been previously learned. In Parkinson's disease, a neurodegenerative movement disorder, characterizations of motor performance can yield insight into underlying neurological mechanisms and therefore into potential treatment strategies. We focus on characterizing submovements through hybrid system identification, in which the dynamics of each submovement, the mode sequence and timing, and the switching mechanisms are all unknown. We describe an initialization that provides a mode sequence and an estimate of the dynamics of submovements, then apply hybrid optimization techniques based on embedding to solve a constrained nonlinear program. We also use the existing geometric approach for hybrid system identification to analyze our model and explain the deficits and advantages of each. These methods are applied to data gathered from subjects with Parkinson's disease (on and off L-dopa medication) and from age-matched control subjects, and the results are compared across groups, demonstrating robust differences. Lastly, we develop a scheme to estimate the switching mechanism of the modeled hybrid system by using the principle of the maximum margin separating hyperplane, which is a convex optimization problem over the affine parameters describing the switching surface, and provide a means of characterizing when too many or too few parameters are hypothesized to lie in the switching surface.
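
    A minimal sketch of the maximum-margin idea for the switching surface: fit an affine separating hyperplane between states labeled as switching vs. non-switching, which is a convex problem. The feature and label construction below is illustrative, not the paper's exact one.

        # Sketch: estimate an affine switching surface as a maximum-margin hyperplane.
        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        # synthetic states observed just before a mode switch (+1) vs. no switch (-1)
        X_switch = rng.standard_normal((40, 2)) + np.array([2.0, 0.0])
        X_stay = rng.standard_normal((40, 2)) - np.array([2.0, 0.0])
        X = np.vstack([X_switch, X_stay])
        y = np.hstack([np.ones(40), -np.ones(40)])

        svm = SVC(kernel="linear", C=1e6)   # large C approximates the hard-margin problem
        svm.fit(X, y)
        w, b = svm.coef_.ravel(), svm.intercept_[0]
        print("estimated switching surface: %+.2f*x1 %+.2f*x2 %+.2f = 0" % (w[0], w[1], b))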

    Piecewise smooth system identification in reproducing kernel Hilbert space

    Get PDF
    The paper extends the recent approach of Ohlsson and Ljung for piecewise affine system identification to the nonlinear case while taking a clustering point of view. In this approach, the problem is cast as the minimization of a convex cost function implementing a trade-off between the fit to the data and a sparsity prior on the number of pieces. Here, we consider the nonlinear case of piecewise smooth system identification without prior knowledge of the type of nonlinearities involved. This is tackled by simultaneously learning a collection of local models from a reproducing kernel Hilbert space via the minimization of a convex functional, for which we prove a representer theorem that provides the explicit form of the solution. An example of application to piecewise smooth system identification shows that both the mode and the nonlinear local models can be accurately estimated.
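
    A toy sketch of the clustering viewpoint under stated assumptions: one local kernel model per data point, a squared-error fit, and a sum-of-norms penalty on consecutive local models so that only a few distinct pieces survive. The Gaussian kernel, weights, and penalty form are illustrative and not claimed to match the paper's exact convex functional or its representer-theorem solution.

        # Sketch: piecewise smooth identification with per-sample local kernel models
        # and a sum-of-norms (RKHS-norm) penalty on consecutive model differences.
        import numpy as np
        import cvxpy as cp

        rng = np.random.default_rng(0)
        n = 40
        x = np.sort(rng.uniform(-1, 1, n))
        # toy data with two smooth regimes along the sample index
        y = np.where(np.arange(n) < n // 2, np.sin(3 * x), x ** 2)
        y = y + 0.05 * rng.standard_normal(n)

        K = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.1)     # Gaussian kernel matrix
        L = np.linalg.cholesky(K + 1e-8 * np.eye(n))          # K = L L^T for RKHS norms

        A = cp.Variable((n, n))                       # row i = coefficients of local model f_i
        pred = cp.sum(cp.multiply(K, A), axis=1)      # f_i(x_i) = sum_j A[i, j] k(x_j, x_i)
        fit = cp.sum_squares(y - pred)
        # ||f_{i+1} - f_i||_H = ||L^T (a_{i+1} - a_i)||_2, summed over consecutive models
        reg = cp.sum(cp.norm(L.T @ (A[1:] - A[:-1]).T, 2, axis=0))
        cp.Problem(cp.Minimize(fit + 5.0 * reg)).solve()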