
    Structured Sparsity: Discrete and Convex approaches

    Compressive sensing (CS) exploits sparsity to recover sparse or compressible signals from dimensionality-reducing, non-adaptive sensing mechanisms. Sparsity is also used to enhance interpretability in machine learning and statistics applications: while the ambient dimension is vast in modern data analysis problems, the relevant information therein typically resides in a much lower-dimensional space. However, many solutions proposed nowadays do not leverage the true underlying structure. Recent results in CS extend the simple sparsity idea to more sophisticated structured sparsity models, which describe the interdependency between the nonzero components of a signal, increasing the interpretability of the results and leading to better recovery performance. In order to better understand the impact of structured sparsity, in this chapter we analyze the connections between the discrete models and their convex relaxations, highlighting their relative advantages. We start with the general group sparse model and then elaborate on two important special cases: the dispersive and the hierarchical models. For each, we present the models in their discrete nature, discuss how to solve the ensuing discrete problems, and then describe convex relaxations. We also consider more general structures as defined by set functions and present their convex proxies. Further, we discuss efficient optimization solutions for structured sparsity problems and illustrate structured sparsity in action via three applications. Comment: 30 pages, 18 figures
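The convex relaxations this abstract refers to are typically built from group norms, whose key computational primitive is block-wise soft-thresholding. As a minimal sketch (not the chapter's own code, and assuming non-overlapping groups), the proximal operator of a weighted sum of group l2 norms shrinks each group of coordinates toward zero and drops small groups entirely, which is what produces group-sparse solutions:

```python
import numpy as np

def prox_group_l2(x, groups, lam):
    """Proximal operator of lam * sum_g ||x_g||_2 (group soft-thresholding).

    `groups` is a list of index arrays partitioning the coordinates of x;
    the names here are illustrative, not from the chapter. Each group is
    either shrunk as a whole or zeroed out, yielding group sparsity.
    """
    out = np.zeros_like(x, dtype=float)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > lam:
            # Shrink the entire group by a common factor.
            out[g] = (1.0 - lam / norm) * x[g]
        # Otherwise the whole group is set to zero.
    return out

x = np.array([3.0, 4.0, 0.1, -0.1])
groups = [np.array([0, 1]), np.array([2, 3])]
print(prox_group_l2(x, groups, lam=1.0))  # first group shrunk, second zeroed
```

Iterating this prox step inside a proximal-gradient loop is the standard way to solve the resulting convex group-sparse recovery problems.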

    Adaptive Regularized Submodular Maximization

    In this paper, we study the problem of maximizing the difference between an adaptive submodular (revenue) function and a non-negative modular (cost) function. The input of our problem is a set of n items, where each item has a particular state drawn from some known prior distribution. The revenue function g is defined over items and states, and the cost function c is defined over items, i.e., each item has a fixed cost. The state of each item is unknown initially, and one must select an item in order to observe its realized state. A policy π specifies which item to pick next based on the observations made so far. Denote by g_{avg}(π) the expected revenue of π and by c_{avg}(π) the expected cost of π. Our objective is to identify the best policy π^o ∈ arg max_π g_{avg}(π) - c_{avg}(π) under a k-cardinality constraint. Since our objective function can take on both negative and positive values, existing results on submodular maximization may not be applicable. To overcome this challenge, we develop a series of effective solutions with performance guarantees. Let π^o denote the optimal policy. For the case when g is adaptive monotone and adaptive submodular, we develop an effective policy π^l such that g_{avg}(π^l) - c_{avg}(π^l) ≥ (1 - 1/e - ε) g_{avg}(π^o) - c_{avg}(π^o), using only O(nε^{-2} log ε^{-1}) value oracle queries. For the case when g is adaptive submodular, we present a randomized policy π^r such that g_{avg}(π^r) - c_{avg}(π^r) ≥ (1/e) g_{avg}(π^o) - c_{avg}(π^o).
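To make the objective structure concrete, here is a deliberately simplified, non-adaptive greedy sketch for max g(S) - c(S) subject to |S| ≤ k. It is not the paper's policy and carries none of its guarantees; the callables `marginal_gain` and `cost` are assumed interfaces introduced only for illustration:

```python
def greedy_revenue_minus_cost(items, marginal_gain, cost, k):
    """Illustrative non-adaptive greedy for max g(S) - c(S), |S| <= k.

    `marginal_gain(i, S)` returns the revenue gain of adding item i to
    the current selection S, and `cost(i)` its fixed cost (both are
    hypothetical interfaces, not from the paper). At each step we pick
    the item with the largest positive net gain and stop early when no
    item's marginal revenue exceeds its cost.
    """
    S = []
    remaining = set(items)
    for _ in range(k):
        best, best_val = None, 0.0
        for i in remaining:
            val = marginal_gain(i, S) - cost(i)
            if val > best_val:
                best, best_val = i, val
        if best is None:
            break  # no item has positive net gain
        S.append(best)
        remaining.remove(best)
    return S

# Toy coverage example: revenue = number of newly covered elements.
covers = {'a': {1, 2, 3}, 'b': {3, 4}, 'c': {5}}
costs = {'a': 1.0, 'b': 1.5, 'c': 2.0}
gain = lambda i, S: len(covers[i] - set().union(*(covers[j] for j in S), set()))
picked = greedy_revenue_minus_cost(covers, gain, costs.get, k=2)
print(picked)  # only 'a' has positive net gain, so selection stops at ['a']
```

The early stop is what handles the fact that the objective can go negative: unlike plain monotone submodular maximization, adding more items can strictly decrease g(S) - c(S).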