28 research outputs found

    Accuracy guarantees for L1-recovery

    We discuss two new methods for the recovery of sparse signals from noisy observations based on ℓ1-minimization. They are closely related to well-known techniques such as the Lasso and the Dantzig Selector. However, these estimators come with efficiently verifiable guarantees of performance. By optimizing these bounds with respect to the method parameters, we are able to construct estimators that possess better statistical properties than the commonly used ones. We also show how these techniques provide efficiently computable accuracy bounds for the Lasso and the Dantzig Selector. We link our performance estimates to well-known results in Compressive Sensing and justify the proposed approach with an oracle inequality relating the properties of the recovery algorithms to the best estimation performance achievable when the signal support is known. We demonstrate how the estimates can be computed using the Non-Euclidean Basis Pursuit algorithm.
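
    The verifiable bounds above are specific to the paper, but the underlying ℓ1-minimization can be illustrated generically. Below is a minimal sketch of iterative soft-thresholding (ISTA) for the Lasso problem min_x 0.5*||Ax - y||^2 + lam*||x||_1; this is a standard baseline method, not the paper's Non-Euclidean Basis Pursuit algorithm, and the problem sizes and regularization level are invented for the demo.

        import numpy as np

        def ista_lasso(A, y, lam, n_iter=500):
            """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1.

            A generic baseline solver, not the paper's method.
            """
            # Step size 1/L, with L the Lipschitz constant of the smooth part's gradient.
            L = np.linalg.norm(A, 2) ** 2
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                grad = A.T @ (A @ x - y)                        # gradient of 0.5*||Ax - y||^2
                z = x - grad / L                                # gradient step
                x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
            return x

        # Toy demo: recover a 5-sparse vector from noisy random measurements.
        rng = np.random.default_rng(0)
        n, m = 100, 400
        A = rng.standard_normal((n, m)) / np.sqrt(n)
        x_true = np.zeros(m)
        x_true[rng.choice(m, 5, replace=False)] = 1.0
        y = A @ x_true + 0.01 * rng.standard_normal(n)
        x_hat = ista_lasso(A, y, lam=0.02)
        print("recovered support:", np.nonzero(np.abs(x_hat) > 0.1)[0])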

    Who are Taxable?—Basic Problems in Definition Under the Illinois Retailers’ Occupation Tax Act

    Identification of nonlinear systems is a challenge, due to the richness of both model structures and estimation approaches. As a case study, in this paper we test a number of methods on a data set collected from an electrical circuit at the Free University of Brussels. These methods are based on black-box and grey-box model structures, or on a mixture of them, all implemented in a forthcoming MATLAB toolbox. The results of this case study illustrate the importance of using custom (user-defined) regressors in a black-box model. Based on physical knowledge or on insights gained through experience, such custom regressors make it possible to build efficient models with a relatively simple model structure.
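
    As a hedged illustration of the custom-regressor idea (not the actual toolbox syntax), one can augment standard input/output lags with user-defined nonlinear terms and fit the coefficients by ordinary least squares; the regressor choices and the simulated data below are invented for the example.

        import numpy as np

        def build_regressors(u, y, k):
            """Regressor vector at time k: standard lags plus custom terms."""
            return np.array([
                y[k - 1], y[k - 2], u[k - 1],   # standard ARX-style lags
                u[k - 1] ** 3,                  # custom: cubic input term (physical insight)
                y[k - 1] * u[k - 1],            # custom: bilinear cross term
            ])

        def fit(u, y, order=2):
            ks = range(order, len(y))
            Phi = np.vstack([build_regressors(u, y, k) for k in ks])
            theta, *_ = np.linalg.lstsq(Phi, y[order:], rcond=None)
            return theta

        # Simulated input/output data stand in for the measured circuit data.
        rng = np.random.default_rng(1)
        u = rng.standard_normal(500)
        y = np.zeros(500)
        for k in range(2, 500):
            y[k] = 0.5 * y[k - 1] - 0.2 * y[k - 2] + u[k - 1] + 0.1 * u[k - 1] ** 3
        print("estimated coefficients:", np.round(fit(u, y), 3))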

    “Traditional” Resource Uses and Activities: Articulating Values and Examining Conflicts in Alaska

    The paper describes additions to the MATLAB System Identification Toolbox that also handle the estimation of nonlinear models. Both structured grey-box models and general, flexible black-box models are covered. The idea is that the look and feel of the syntax and the graphical user interface should be as close as possible to the linear case.
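
    The toolbox itself is MATLAB-based; as a language-neutral sketch, grey-box estimation amounts to fitting the parameters of a physically structured simulation model to measured data. The first-order model and the parameter names below are assumptions made for illustration.

        import numpy as np
        from scipy.optimize import least_squares

        def simulate(theta, u):
            """Grey-box model: y[k] = a*y[k-1] + b*u[k-1], with (a, b) the
            physical parameters to be estimated."""
            a, b = theta
            y = np.zeros(len(u))
            for k in range(1, len(u)):
                y[k] = a * y[k - 1] + b * u[k - 1]
            return y

        def residuals(theta, u, y_meas):
            return simulate(theta, u) - y_meas

        # Synthetic "measured" data from a known system, plus noise.
        rng = np.random.default_rng(2)
        u = rng.standard_normal(200)
        y_meas = simulate([0.8, 0.5], u) + 0.02 * rng.standard_normal(200)

        result = least_squares(residuals, x0=[0.5, 0.1], args=(u, y_meas))
        print("estimated (a, b):", np.round(result.x, 3))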

    Linear programming problems for L1-optimal frontier estimation

    We propose new optimal estimators for the Lipschitz frontier of a set of points. They are defined as kernel estimators that are sufficiently regular, cover all the points, and whose associated support has the smallest surface. The estimators are written as linear combinations of kernel functions applied to the points of the sample. The coefficients of the linear combination are then computed by solving a related linear programming problem. The L1 error between the estimated and the true frontier function, with a known Lipschitz constant, is shown to converge almost surely to zero, and the rate of convergence is proved to be optimal.
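
    The abstract does not spell out the program, but a generic formulation consistent with it reads as follows, where f(x) = sum_j alpha_j K(x, x_j) is the kernel expansion; the nonnegativity of the coefficients is an assumption of this sketch, not stated in the abstract.

        \min_{\alpha \ge 0} \ \sum_{j=1}^{n} \alpha_j \int K(s, x_j)\, ds
        \qquad \text{subject to} \qquad
        \sum_{j=1}^{n} \alpha_j K(x_i, x_j) \ \ge \ y_i, \quad i = 1, \dots, n.

    Both the objective (the surface under the estimate) and the covering constraints are linear in alpha, so the coefficients can be computed by any LP solver; a concrete sketch is given after the companion entry below.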

    Large-scale stochastic optimization

    In this thesis we study iterative algorithms for solving convex optimization problems with or without functional constraints, variational inequalities with monotone operators, and saddle point problems. We consider these problems when the dimension of the search space is large and when the values of the functions of interest and their sub/supergradients are not known exactly but are only accessible through a stochastic oracle. The algorithms we study are stochastic adaptations of two algorithms: the first is a variant of the mirror descent method of Nemirovski and Yudin, and the second a variant of the dual extrapolation algorithm of Nesterov. For both, we provide bounds on the expected value and on the moderate deviations of the approximation error under different regularity hypotheses for all the unconstrained problems considered, and we propose adaptive versions of the algorithms that remove the need to know certain problem parameters that are unavailable in practice. Finally, we show how problems with functional constraints can be solved by means of an auxiliary algorithm inspired by Newton's method, together with the results obtained for the saddle point problems.
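
    As a hedged sketch of the first family of methods (stochastic mirror descent with the entropic mirror map on the probability simplex), consider minimizing a linear objective when only noisy gradient estimates are available; the toy objective, noise level, and step size below are invented for illustration, and this is not the thesis's exact algorithm.

        import numpy as np

        def stochastic_mirror_descent(grad_oracle, n, n_steps=2000, step=0.05):
            """Entropic (exponentiated-gradient) mirror descent on the simplex.

            grad_oracle(x) returns an unbiased noisy estimate of a subgradient;
            the averaged iterate is returned, as in the Nemirovski-Yudin analysis.
            """
            x = np.full(n, 1.0 / n)          # uniform starting point
            avg = np.zeros(n)
            for t in range(1, n_steps + 1):
                g = grad_oracle(x)
                x = x * np.exp(-step * g)    # multiplicative (entropic) update
                x /= x.sum()                 # Bregman projection onto the simplex
                avg += (x - avg) / t         # running average of the iterates
            return avg

        # Toy problem: minimize <c, x> over the simplex, with a noisy oracle.
        rng = np.random.default_rng(4)
        c = np.array([0.3, 0.1, 0.7, 0.5])
        oracle = lambda x: c + 0.2 * rng.standard_normal(c.size)  # stochastic oracle
        x_star = stochastic_mirror_descent(oracle, n=c.size)
        print("approximate minimizer:", np.round(x_star, 3))  # mass concentrates on argmin c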

    Linear Programming Problems for Frontier Estimation

    We propose new estimates for the frontier of a set of points. They are defined as kernel estimates covering all the points and whose associated support has the smallest surface. The estimates are written as linear combinations of kernel functions applied to the points of the sample. The coefficients of the linear combination are then computed by solving a linear programming problem. In the general case, the solution of the optimization problem is sparse, that is, only a few coefficients are nonzero. The corresponding points play the role of support vectors in statistical learning theory. The L1 error between the estimated and the true frontiers is shown to converge almost surely to zero, and the rate of convergence is provided. The behaviour of the estimates in finite-sample situations is illustrated by simulations.
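
    A minimal sketch of the linear program, under assumed choices (Gaussian kernel, grid approximation of the surface integral, nonnegative coefficients) that the abstract does not fix:

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(3)
        n = 50
        x = rng.uniform(0, 1, n)
        y = (1 + 0.5 * np.sin(2 * np.pi * x)) * rng.uniform(0, 1, n)  # points under a frontier

        def K(s, t, h=0.1):
            return np.exp(-((s - t) ** 2) / (2 * h ** 2))   # assumed Gaussian kernel

        # Objective: surface under f, approximated on a grid, so c_j ~ integral of K(s, x_j).
        grid = np.linspace(0, 1, 200)
        c = K(grid[:, None], x[None, :]).mean(axis=0)

        # Covering constraints: sum_j alpha_j K(x_i, x_j) >= y_i for every sample point.
        A_ub = -K(x[:, None], x[None, :])
        b_ub = -y

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
        alpha = res.x
        print("nonzero coefficients:", np.count_nonzero(alpha > 1e-8), "of", n)

    The count of nonzero coefficients printed at the end illustrates the sparsity of the LP solution: the sample points with nonzero alpha are the "support vectors" mentioned in the abstract.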