
    Random projections for linear programming

    Random projections are random linear maps, sampled from appropriate distributions, that approximately preserve certain geometrical invariants, so that the approximation improves as the dimension of the space grows. The well-known Johnson-Lindenstrauss lemma states that there are random matrices with surprisingly few rows that approximately preserve pairwise Euclidean distances among a set of points. This is commonly used to speed up algorithms based on Euclidean distances. We prove that these matrices also preserve other quantities, such as the distance to a cone. We exploit this result to devise a probabilistic algorithm to solve linear programs approximately. We show that this algorithm can approximately solve very large randomly generated LP instances. We also showcase its application to an error correction coding problem. Comment: 26 pages, 1 figure
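    As a quick illustration of the Johnson-Lindenstrauss phenomenon this abstract builds on, here is a minimal NumPy sketch: a scaled Gaussian matrix with roughly $\varepsilon^{-2}\log n$ rows approximately preserves all pairwise squared distances. The values of n, d, eps and the constant 24 are illustrative assumptions, not taken from the paper.

    ```python
    # Minimal sketch of a Johnson-Lindenstrauss random projection.
    # n, d, eps and the constant 24 are illustrative assumptions only.
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(0)
    n, d, eps = 50, 2_000, 0.5
    k = int(np.ceil(24 * np.log(n) / eps**2))    # target dimension ~ eps^-2 log n

    X = rng.standard_normal((n, d))              # n points in R^d
    P = rng.standard_normal((k, d)) / np.sqrt(k) # scaled Gaussian random map
    Y = X @ P.T                                  # projected points in R^k

    # Ratio of projected to original squared distance for every pair of points;
    # with high probability every ratio lies in [1 - eps, 1 + eps].
    ratios = [np.sum((Y[i] - Y[j])**2) / np.sum((X[i] - X[j])**2)
              for i, j in combinations(range(n), 2)]
    ```

    Note that k depends on n and eps but not on the ambient dimension d, which is what makes such projections useful as a preprocessing step.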

    Optimality of the Johnson-Lindenstrauss Lemma

    For any integers $d, n \geq 2$ and $1/(\min\{n,d\})^{0.4999} < \varepsilon < 1$, we show the existence of a set of $n$ vectors $X \subset \mathbb{R}^d$ such that any embedding $f : X \to \mathbb{R}^m$ satisfying $\forall x, y \in X,\ (1-\varepsilon)\|x-y\|_2^2 \le \|f(x)-f(y)\|_2^2 \le (1+\varepsilon)\|x-y\|_2^2$ must have $m = \Omega(\varepsilon^{-2} \lg n)$. This lower bound matches the upper bound given by the Johnson-Lindenstrauss lemma [JL84]. Furthermore, our lower bound holds for nearly the full range of $\varepsilon$ of interest, since there is always an isometric embedding into dimension $\min\{d, n\}$ (either the identity map, or projection onto $\mathrm{span}(X)$). Previously such a lower bound was only known to hold against linear maps $f$, and not for such a wide range of parameters $\varepsilon, n, d$ [LN16]. The best previously known lower bound for general $f$ was $m = \Omega(\varepsilon^{-2} \lg n / \lg(1/\varepsilon))$ [Wel74, Lev83, Alo03], which is suboptimal for any $\varepsilon = o(1)$. Comment: v2: simplified proof, also added reference to Lev83
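    Since the lower bound matches the JL upper bound, the embedding dimension is pinned at $m = \Theta(\varepsilon^{-2} \lg n)$. A tiny arithmetic sketch of how this scales; the constant c = 8 is a hypothetical placeholder, since the theorems fix only the order of growth:

    ```python
    # Illustrative arithmetic for m = Theta(eps^-2 * lg n); the constant c = 8
    # is a placeholder assumption, not a value from either paper.
    import math

    def jl_dim(n: int, eps: float, c: float = 8.0) -> int:
        """Target dimension sufficient (up to the constant c) for a JL embedding."""
        return math.ceil(c * math.log(n) / eps**2)
    ```

    Halving eps roughly quadruples the required dimension, while squaring n only doubles it, so the dependence on the distortion parameter dominates in practice.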

    Impossibility of dimension reduction in the nuclear norm

    Let $\mathsf{S}_1$ (the Schatten--von Neumann trace class) denote the Banach space of all compact linear operators $T : \ell_2 \to \ell_2$ whose nuclear norm $\|T\|_{\mathsf{S}_1} = \sum_{j=1}^\infty \sigma_j(T)$ is finite, where $\{\sigma_j(T)\}_{j=1}^\infty$ are the singular values of $T$. We prove that for arbitrarily large $n \in \mathbb{N}$ there exists a subset $\mathcal{C} \subseteq \mathsf{S}_1$ with $|\mathcal{C}| = n$ that cannot be embedded with bi-Lipschitz distortion $O(1)$ into any $n^{o(1)}$-dimensional linear subspace of $\mathsf{S}_1$. $\mathcal{C}$ is not even an $O(1)$-Lipschitz quotient of any subset of any $n^{o(1)}$-dimensional linear subspace of $\mathsf{S}_1$. Thus, $\mathsf{S}_1$ does not admit a dimension reduction result à la Johnson and Lindenstrauss (1984), which complements the work of Harrow, Montanaro and Short (2011) on the limitations of quantum dimension reduction under the assumption that the embedding into low dimensions is a quantum channel. Such a statement was previously known with $\mathsf{S}_1$ replaced by the Banach space $\ell_1$ of absolutely summable sequences, via the work of Brinkman and Charikar (2003). In fact, the above set $\mathcal{C}$ can be taken to be the same set as the one that Brinkman and Charikar considered, viewed as a collection of diagonal matrices in $\mathsf{S}_1$. The challenge is to demonstrate that $\mathcal{C}$ cannot be faithfully realized in an arbitrary low-dimensional subspace of $\mathsf{S}_1$, while Brinkman and Charikar obtained such an assertion only for subspaces of $\mathsf{S}_1$ that consist of diagonal operators (i.e., subspaces of $\ell_1$). We establish this by proving that the Markov 2-convexity constant of any finite-dimensional linear subspace $X$ of $\mathsf{S}_1$ is at most a universal constant multiple of $\sqrt{\log \dim(X)}$.
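    For concreteness, in the finite-dimensional case the nuclear norm above is just the sum of a matrix's singular values, and for diagonal matrices it reduces to the $\ell_1$ norm of the diagonal, which is how the Brinkman-Charikar point set sits inside $\mathsf{S}_1$. A minimal NumPy sketch (the matrices are arbitrary illustrative examples):

    ```python
    # Sketch: nuclear norm ||T||_{S1} = sum of singular values, finite-dimensional case.
    # The matrices below are arbitrary illustrative examples.
    import numpy as np

    def nuclear_norm(T: np.ndarray) -> float:
        return float(np.linalg.svd(T, compute_uv=False).sum())

    # For a diagonal operator the singular values are the absolute diagonal
    # entries, so the nuclear norm reduces to the l1 norm of the diagonal --
    # the natural embedding of l1 into S_1.
    D = np.diag([1.0, -2.0, 3.0])
    assert np.isclose(nuclear_norm(D), np.abs(np.diag(D)).sum())  # both equal 6.0
    ```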