
    Nonparametric estimation by convex programming

    The problem we concentrate on is as follows: given (1) a convex compact set $X$ in $\mathbb{R}^n$, an affine mapping $x \mapsto A(x)$, a parametric family $\{p_{\mu}(\cdot)\}$ of probability densities, and (2) $N$ i.i.d. observations of the random variable $\omega$, distributed with the density $p_{A(x)}(\cdot)$ for some (unknown) $x \in X$, estimate the value $g^Tx$ of a given linear form at $x$. For several families $\{p_{\mu}(\cdot)\}$, with no additional assumptions on $X$ and $A$, we develop computationally efficient estimation routines which are minimax optimal within an absolute constant factor. We then apply these routines to recovering $x$ itself in the Euclidean norm. Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org) at http://dx.doi.org/10.1214/08-AOS654
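
    The abstract does not fix a particular family, so as a hedged illustration only, the sketch below assumes a Gaussian family $p_{\mu} = \mathcal{N}(\mu, \sigma^2 I)$, a linear map $A(x) = Ax$, and a box-shaped $X$, and recovers $g^Tx$ by plugging a constrained least-squares estimate of $x$ into the linear form. It is not the paper's minimax-optimal routine; all names and data are synthetic.

```python
# A minimal sketch, assuming a Gaussian family p_mu = N(mu, sigma^2 I),
# a linear map A(x) = A x, and a box X = [-1, 1]^n (all hypothetical choices).
# It estimates x by constrained least squares and reads off g^T x;
# this is NOT the paper's minimax-optimal routine.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, N = 5, 8, 200                    # dim of x, dim of omega, sample size
A = rng.standard_normal((m, n))        # the affine map x -> A x (offset omitted)
g = rng.standard_normal(n)             # linear form g^T x to estimate
x_true = rng.uniform(-1.0, 1.0, n)     # unknown point in X
omega = A @ x_true + 0.5 * rng.standard_normal((N, m))   # N i.i.d. observations

x = cp.Variable(n)
# For Gaussian densities, maximum likelihood over X reduces to least squares
# against the empirical mean of the observations.
problem = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - omega.mean(axis=0))),
                     [x >= -1, x <= 1])
problem.solve()

print("estimated g^T x:", float(g @ x.value))
print("true value     :", float(g @ x_true))
```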

    Max-sum diversity via convex programming

    Diversity maximization is an important concept in information retrieval, computational geometry and operations research. Usually, it is a variant of the following problem: given a ground set, constraints, and a function $f(\cdot)$ that measures the diversity of a subset, the task is to select a feasible subset $S$ such that $f(S)$ is maximized. The \emph{sum-dispersion} function $f(S) = \sum_{x,y \in S} d(x,y)$, the sum of the pairwise distances in $S$, is a prominent diversification measure in this context; the corresponding diversity maximization problem is called \emph{max-sum} or \emph{sum-sum diversification}. Many recent results deal with the design of constant-factor approximation algorithms for diversification problems involving the sum-dispersion function under a matroid constraint. In this paper, we present a PTAS for the max-sum diversification problem under a matroid constraint for distances $d(\cdot,\cdot)$ of \emph{negative type}. Distances of negative type include, for example, metric distances stemming from the $\ell_2$ and $\ell_1$ norms, as well as the cosine, spherical, and Jaccard distances, which are popular similarity metrics in web and image search.
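
    As a hedged illustration of the sum-dispersion objective, the sketch below evaluates $f(S)$ and runs a simple greedy heuristic under a cardinality constraint; it is not the PTAS from the paper and does not handle general matroid constraints. All names and data are hypothetical.

```python
# A minimal sketch of the sum-dispersion objective f(S) = sum_{x,y in S} d(x,y)
# and a simple greedy baseline under a cardinality constraint (k items).
# This is NOT the paper's PTAS and ignores general matroid constraints.
import numpy as np

def sum_dispersion(D: np.ndarray, S: list[int]) -> float:
    """Sum of pairwise distances over the chosen index set S."""
    idx = np.array(S)
    return D[np.ix_(idx, idx)].sum() / 2.0       # each pair counted once

def greedy_max_sum(D: np.ndarray, k: int) -> list[int]:
    """Repeatedly add the point with the largest total distance to the set."""
    n = D.shape[0]
    S = [int(D.sum(axis=1).argmax())]            # start from the most spread point
    while len(S) < k:
        gains = [D[i, S].sum() if i not in S else -np.inf for i in range(n)]
        S.append(int(np.argmax(gains)))
    return S

rng = np.random.default_rng(1)
points = rng.standard_normal((50, 3))
D = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)  # l2 distances
S = greedy_max_sum(D, k=5)
print("selected indices:", S, "diversity:", sum_dispersion(D, S))
```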

    Optimal control and convex programming

    A computational scheme for optimal control problems based on convex programming methods.

    Robust Camera Location Estimation by Convex Programming

    3D structure recovery from a collection of 2D images requires the estimation of the camera locations and orientations, i.e. the camera motion. For large, irregular collections of images, existing methods for the location estimation part, which can be formulated as the inverse problem of estimating $n$ locations $\mathbf{t}_1, \mathbf{t}_2, \ldots, \mathbf{t}_n$ in $\mathbb{R}^3$ from noisy measurements of a subset of the pairwise directions $\frac{\mathbf{t}_i - \mathbf{t}_j}{\|\mathbf{t}_i - \mathbf{t}_j\|}$, are sensitive to outliers in direction measurements. In this paper, we first provide a complete characterization of well-posed instances of the location estimation problem by presenting its relation to the existing theory of parallel rigidity. For robust estimation of camera locations, we introduce a two-step approach, comprised of a pairwise direction estimation method robust to outliers in point correspondences between image pairs, and a convex program to maintain robustness to outlier directions. In the presence of partially corrupted measurements, we empirically demonstrate that our convex formulation can even recover the locations exactly. Lastly, we demonstrate the utility of our formulations through experiments on Internet photo collections. Comment: 10 pages, 6 figures, 3 tables
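
    As a hedged illustration of recovering locations from pairwise directions with a convex program, the sketch below uses a least-unsquared-deviations style formulation; it is not necessarily the exact program proposed in the paper, and the data are synthetic.

```python
# A minimal sketch of a convex program for recovering locations t_1, ..., t_n
# in R^3 from noisy pairwise direction measurements; a least-unsquared-
# deviations style formulation, not necessarily the paper's exact program.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n = 6
T_true = rng.standard_normal((n, 3))
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]

def noisy_direction(i, j):
    d = T_true[i] - T_true[j] + 0.05 * rng.standard_normal(3)
    return d / np.linalg.norm(d)

dirs = {e: noisy_direction(*e) for e in edges}

T = cp.Variable((n, 3))                               # unknown locations
s = {e: cp.Variable() for e in edges}                 # unknown baseline lengths
residuals = [cp.norm(T[i, :] - T[j, :] - s[(i, j)] * dirs[(i, j)])
             for (i, j) in edges]
constraints = [cp.sum(T, axis=0) == 0]                # remove global translation
constraints += [s[e] >= 1 for e in edges]             # rule out collapse to zero
cp.Problem(cp.Minimize(sum(residuals)), constraints).solve()

# Locations are recovered up to global translation and scale.
print("estimated locations:\n", T.value)
```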

    Highly Robust Error Correction by Convex Programming

    This paper discusses a stylized communications problem where one wishes to transmit a real-valued signal x ∈ ℝ^n (a block of n pieces of information) to a remote receiver. We ask whether it is possible to transmit this information reliably when a fraction of the transmitted codeword is corrupted by arbitrary gross errors, and when in addition, all the entries of the codeword are contaminated by smaller errors (e.g., quantization errors). We show that if one encodes the information as Ax where A ∈ ℝ^(m x n) (m ≥ n) is a suitable coding matrix, there are two decoding schemes that allow the recovery of the block of n pieces of information x with nearly the same accuracy as if no gross errors occurred upon transmission (or equivalently as if one had an oracle supplying perfect information about the sites and amplitudes of the gross errors). Moreover, both decoding strategies are very concrete and only involve solving simple convex optimization programs, either a linear program or a second-order cone program. We complement our study with numerical simulations showing that the encoder/decoder pair performs remarkably well.
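
    As a hedged illustration of decoding by convex programming, the sketch below recovers x from a grossly corrupted codeword y = Ax by minimizing the ℓ1 norm of the residual y - Ax'; the paper's decoders additionally account for the small quantization-type errors, so this is only a simplified variant with synthetic data.

```python
# A minimal sketch of l1-style decoding: encode x as y = A x, corrupt a few
# entries of y with gross errors, and recover x by minimizing ||y - A x'||_1.
# This is the linear-programming-type decoder in spirit; the paper's schemes
# also account for the additional small (e.g. quantization) errors.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
n, m = 20, 80
A = rng.standard_normal((m, n))          # coding matrix, m >= n
x_true = rng.standard_normal(n)
y = A @ x_true                           # transmitted codeword
corrupt = rng.choice(m, size=m // 10, replace=False)   # ~10% gross errors
y[corrupt] += 10 * rng.standard_normal(corrupt.size)

x = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm(y - A @ x, 1))).solve()
print("recovery error:", np.linalg.norm(x.value - x_true))
```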

    Extended Formulations in Mixed-integer Convex Programming

    We present a unifying framework for generating extended formulations for the polyhedral outer approximations used in algorithms for mixed-integer convex programming (MICP). Extended formulations lead to fewer iterations of outer approximation algorithms and generally faster solution times. First, we observe that all MICP instances from the MINLPLIB2 benchmark library are conic representable with standard symmetric and nonsymmetric cones. Conic reformulations are shown to be effective extended formulations themselves because they encode separability structure. For mixed-integer conic-representable problems, we provide the first outer approximation algorithm with finite-time convergence guarantees, opening a path for the use of conic solvers for continuous relaxations. We then connect the popular modeling framework of disciplined convex programming (DCP) to the existence of extended formulations independent of conic representability. We present evidence that our approach can yield significant gains in practice, with the solution of a number of open instances from the MINLPLIB2 benchmark library. Comment: To be presented at IPCO 201
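
    As a hedged illustration of the structural idea behind extended formulations, the sketch below rewrites a separable convex constraint sum_i exp(x_i) <= t with one auxiliary variable per term, so that each univariate piece can be approximated or cut separately. It is a continuous toy model only, not the paper's mixed-integer outer approximation algorithm.

```python
# A minimal sketch of an extended formulation for a separable convex
# constraint: replace sum_i exp(x_i) <= t by exp(x_i) <= t_i and
# sum_i t_i <= t, exposing the separability structure the abstract mentions.
# Continuous toy example only, not the paper's MICP algorithm.
import cvxpy as cp

n = 4
x = cp.Variable(n)
t = cp.Variable()

# Compact (non-extended) formulation.
compact = cp.Problem(cp.Minimize(t),
                     [cp.sum(cp.exp(x)) <= t, x >= 1])

# Extended formulation with one auxiliary variable per separable term.
t_i = cp.Variable(n)
extended = cp.Problem(cp.Minimize(t),
                      [cp.exp(x) <= t_i, cp.sum(t_i) <= t, x >= 1])

compact.solve()
compact_value = compact.value
extended.solve()
print("compact optimum:", compact_value, " extended optimum:", extended.value)
```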
