54 research outputs found

    Regularization of inverse problems with adaptive discrepancy terms: application to multispectral data

    In this paper, a general framework for the inversion of a linear operator in the case where one seeks several components from several observations is presented. The estimation is done by minimizing a functional that balances discrepancy terms against regularization terms. The regularization terms are adapted norms that enforce the desired properties of each component. The main focus of this paper is the definition of the discrepancy terms. Classically, these are quadratic. We present novel discrepancy terms that adapt to the observations. They rely on adaptive projections that emphasize important information in the observations. Iterative algorithms to minimize the functionals with adaptive discrepancy terms are derived, and their convergence and stability are studied. The methods obtained are compared on the problem of reconstructing astrophysical maps from multifrequency observations of the Cosmic Microwave Background. We show the added flexibility provided by the adaptive discrepancy terms.
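
    As a minimal sketch of the variational setup described above, in notation of our own choosing (observations y_j, mixing operators A_{jc}, components x_c, adapted norms \|\cdot\|_{*,c}), the classical quadratic-discrepancy functional the paper generalizes would read

        \min_{x_1,\dots,x_C} \; \sum_j \Big\| y_j - \sum_c A_{jc}\, x_c \Big\|_2^2 \; + \; \sum_c \lambda_c \, \| x_c \|_{*,c}

    The paper's contribution replaces each quadratic term \|\cdot\|_2^2 with an adaptive, projection-based discrepancy term that emphasizes important information in y_j.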

    Recovery and convergence rate of the Frank-Wolfe Algorithm for the m-EXACT-SPARSE Problem

    We study the properties of the Frank-Wolfe algorithm to solve the m-EXACT-SPARSE reconstruction problem, where a signal y must be expressed as a sparse linear combination of a predefined set of atoms, called a dictionary. We prove that when the signal is sparse enough with respect to the coherence of the dictionary, the iterative process implemented by the Frank-Wolfe algorithm only recruits atoms from the support of the signal, that is, the smallest set of atoms from the dictionary that allows for a perfect reconstruction of y. We also prove that under this same condition, there exists an iteration beyond which the algorithm converges exponentially.
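
    As a rough illustration of the iterative process the abstract describes, here is a generic Frank-Wolfe iteration over the ℓ1-ball for a quadratic loss; this is a sketch under our own assumptions, not the paper's exact algorithm, step-size rule, or constraint set:

        import numpy as np

        def frank_wolfe_sparse(D, y, beta, n_iter=100):
            """Frank-Wolfe for min ||y - D x||^2 over the l1-ball of radius beta.
            D: (n, p) dictionary with unit-norm columns; y: (n,) signal."""
            p = D.shape[1]
            x = np.zeros(p)
            for t in range(n_iter):
                grad = D.T @ (D @ x - y)          # gradient of the quadratic loss
                j = np.argmax(np.abs(grad))       # atom most correlated with the residual
                s = np.zeros(p)
                s[j] = -beta * np.sign(grad[j])   # linear minimizer: a vertex of the l1-ball
                gamma = 2.0 / (t + 2.0)           # standard Frank-Wolfe step size
                x = (1 - gamma) * x + gamma * s   # convex update; support grows one atom at a time
            return x

    The recovery result quoted above says that, under the coherence condition, the selected index j always lies in the support of the signal, so the iterates never leave that support.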

    Generalized Subspace Pursuit and an application to sparse Poisson denoising

    We present a generalization of Subspace Pursuit, which seeks the k-sparse vector that minimizes a generic cost function. We introduce the Restricted Diagonal Property, which, much like the RIP in the classical setting, makes it possible to control the convergence of Generalized Subspace Pursuit (GSP). To tackle the problem of Poisson denoising, we propose to use GSP together with the Moreau-Yosida approximation of the Poisson likelihood. Experiments were conducted on synthetic, exactly sparse images and on natural images corrupted by Poisson noise. We study the influence of the different parameters and show that our approach performs better than Subspace Pursuit or ℓ1-relaxed methods and compares favorably to state-of-the-art methods.
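
    To make the structure concrete, here is a rough skeleton of a Subspace-Pursuit-style loop for a generic cost function; the inner restricted solver and parameter choices are our own simplifications, and the Moreau-Yosida smoothed Poisson likelihood of the paper is not reproduced:

        import numpy as np
        from scipy.optimize import minimize

        def restricted_argmin(f, support, p):
            """Minimize a generic cost f over vectors supported on `support`."""
            def f_sub(z):
                x = np.zeros(p)
                x[support] = z
                return f(x)
            z = minimize(f_sub, np.zeros(len(support)), method="L-BFGS-B").x
            x = np.zeros(p)
            x[support] = z
            return x

        def generalized_subspace_pursuit(f, grad_f, p, k, n_iter=20):
            """Subspace-Pursuit-style loop for a generic smooth cost f.
            grad_f: callable returning the gradient at x; p: dimension; k: sparsity."""
            x = np.zeros(p)
            support = np.argsort(-np.abs(grad_f(x)))[:k]        # initial support from the gradient
            for _ in range(n_iter):
                extra = np.argsort(-np.abs(grad_f(x)))[:k]      # k largest gradient entries
                merged = np.union1d(support, extra)             # merge with the current support
                x_m = restricted_argmin(f, merged, p)           # fit on the merged support
                support = merged[np.argsort(-np.abs(x_m[merged]))[:k]]  # prune to k entries
                x = restricted_argmin(f, support, p)            # re-fit on the pruned support
            return x, support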

    A greedy approach to sparse Poisson denoising

    In this paper we propose a greedy method combined with the Moreau-Yosida regularization of the Poisson likelihood in order to restore images corrupted by Poisson noise. The regularization provides us with a data-fidelity term with nice properties, which we minimize under sparsity constraints. To do so, we use a greedy method based on a generalization of the well-known CoSaMP algorithm. We introduce a new convergence analysis of the algorithm which extends its use beyond the usual scope of convex functions. We provide numerical experiments which show the soundness of the method compared to the convex ℓ1-norm relaxation of the problem.
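
    For reference, here is the classical CoSaMP template the method generalizes, in its usual quadratic form; the paper replaces the least-squares fit below with the Moreau-Yosida smoothed Poisson likelihood, so this sketch is only the starting point, in our own notation:

        import numpy as np

        def cosamp(D, y, k, n_iter=30):
            """Classical CoSaMP for min ||y - D x||^2 subject to ||x||_0 <= k."""
            n, p = D.shape
            x = np.zeros(p)
            r = y.copy()
            for _ in range(n_iter):
                proxy = D.T @ r                                    # correlate residual with atoms
                omega = np.argsort(-np.abs(proxy))[:2 * k]         # 2k candidate atoms
                T = np.union1d(omega, np.flatnonzero(x))           # merge with the current support
                b = np.zeros(p)
                b[T] = np.linalg.lstsq(D[:, T], y, rcond=None)[0]  # least squares on merged support
                keep = T[np.argsort(-np.abs(b[T]))[:k]]            # prune to the k largest entries
                x = np.zeros(p)
                x[keep] = b[keep]
                r = y - D @ x                                      # update the residual
            return x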

    Stochastic Low-Rank Kernel Learning for Regression

    We present a novel approach to learn a kernel-based regression function. It is based on the use of conical combinations of data-based parameterized kernels and on a new stochastic convex optimization procedure for which we establish convergence guarantees. The overall learning procedure has the nice properties that a) the learned conical combination is automatically designed to perform the regression task at hand and b) the updates required by the optimization procedure are quite inexpensive. In order to shed light on the appositeness of our learning strategy, we present empirical results from experiments conducted on various benchmark datasets. (Presented at the International Conference on Machine Learning (ICML'11), Bellevue, Washington, United States, 2011.)
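
    A toy illustration of regression with a conical (nonnegative) combination of parameterized kernels: the weights mu below are fixed by hand, whereas the paper learns them with its stochastic low-rank procedure; all names, kernel parameters, and values here are illustrative assumptions:

        import numpy as np

        def gaussian_kernel(X1, X2, gamma):
            """Gaussian kernel matrix between two sets of points."""
            d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        rng = np.random.default_rng(0)
        X = rng.uniform(-1, 1, (50, 1))
        y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(50)

        gammas = [0.5, 2.0, 8.0]               # kernel parameters (assumed)
        mu = np.array([0.2, 0.5, 0.3])         # conical weights; the paper learns these
        K = sum(m * gaussian_kernel(X, X, g) for m, g in zip(mu, gammas))
        alpha = np.linalg.solve(K + 1e-2 * np.eye(len(X)), y)   # ridge-regularized fit

        x_test = np.array([[0.3]])
        k_test = sum(m * gaussian_kernel(x_test, X, g) for m, g in zip(mu, gammas))
        print(k_test @ alpha)                  # prediction at x_test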

    Optimal Computational Trade-Off of Inexact Proximal Methods (short version)

    In this paper, we investigate the trade-off between convergence rate and computational cost when minimizing a composite functional with proximal-gradient methods, which are popular optimization tools in machine learning. We consider the case when the proximity operator is approximated via an iterative procedure, which yields algorithms with two nested loops. We show that the strategy minimizing the computational cost to reach a desired accuracy in finite time is to keep the number of inner iterations constant, which differs from the strategy indicated by a convergence rate analysis.
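
    A sketch of the two-nested-loop setting, assuming a composite objective 0.5||Ax - b||^2 + lam*||Wx||_1 whose proximity operator has no closed form and is approximated by projected gradient ascent on its dual; following the paper's conclusion, the number of inner iterations is held constant across outer iterations (the setup and step sizes are our assumptions, not the paper's experiments):

        import numpy as np

        def approx_prox_analysis_l1(z, t, lam, W, n_inner, eta):
            """Approximate prox of g(x) = lam * ||W x||_1 via n_inner projected
            gradient ascent steps on its dual (no closed form in general)."""
            v = np.zeros(W.shape[0])
            for _ in range(n_inner):
                v = np.clip(v + eta * (W @ (z - t * lam * (W.T @ v))), -1.0, 1.0)
            return z - t * lam * (W.T @ v)

        def inexact_proximal_gradient(A, b, W, lam, n_outer=200, n_inner=5):
            """Forward-backward on 0.5||Ax - b||^2 + lam*||Wx||_1 with an inexact prox.
            Per the paper's finding, n_inner is kept constant."""
            step = 1.0 / np.linalg.norm(A, 2) ** 2              # 1/L for the smooth part
            eta = 1.0 / (step * lam * np.linalg.norm(W, 2) ** 2 + 1e-12)
            x = np.zeros(A.shape[1])
            for _ in range(n_outer):
                z = x - step * (A.T @ (A @ x - b))              # forward (gradient) step
                x = approx_prox_analysis_l1(z, step, lam, W, n_inner, eta)  # inexact backward step
            return x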

    Empirical Bernstein Inequalities for U-Statistics

    We present original empirical Bernstein inequalities for U-statistics with bounded symmetric kernels q. They are expressed with respect to empirical estimates of either the variance of q or the conditional variance that appears in the Bernstein-type inequality for U-statistics derived by Arcones. Our result subsumes other existing empirical Bernstein inequalities, as it reduces to them when U-statistics of order 1 are considered. In addition, it is based on a rather direct argument using two applications of the same (non-empirical) Bernstein inequality for U-statistics. We discuss potential applications of our new inequalities, especially in the realm of learning ranking/scoring functions. In the process, we exhibit an efficient procedure to compute the variance estimates for the special case of bipartite ranking that rests on a sorting argument. We also argue that our results may provide test set bounds and particularly interesting empirical racing algorithms for the problem of online learning of scoring functions.
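
    In the bipartite ranking case mentioned above, the underlying U-statistic is the AUC of a scoring function, and a sorting argument computes it in O(n log n); here is a small sketch under our own conventions (ties ignored), showing the statistic itself rather than the paper's variance estimates:

        import numpy as np

        def auc_u_statistic(scores_pos, scores_neg):
            """AUC as a two-sample U-statistic with kernel q(x+, x-) = 1[s(x+) > s(x-)],
            computed via a single sort (the Mann-Whitney rank-sum formula)."""
            all_scores = np.concatenate([scores_pos, scores_neg])
            labels = np.concatenate([np.ones(len(scores_pos)), np.zeros(len(scores_neg))])
            order = np.argsort(all_scores)                 # sort once: O(n log n)
            ranks = np.empty(len(all_scores))
            ranks[order] = np.arange(1, len(all_scores) + 1)
            r_pos = ranks[labels == 1].sum()               # rank-sum of the positives
            n_pos, n_neg = len(scores_pos), len(scores_neg)
            return (r_pos - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)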