
    Extended formulations from communication protocols in output-efficient time

    Deterministic protocols are well-known tools to obtain extended formulations, with many applications to polytopes arising in combinatorial optimization. Although constructive, these tools are not output-efficient, since the time needed to produce the extended formulation also depends on the number of rows of the slack matrix (hence, on the exact description in the original space). We give general sufficient conditions under which these tools can be implemented so as to be output-efficient, with applications to, e.g., Yannakakis' extended formulation for the stable set polytope of perfect graphs, for which, to the best of our knowledge, an efficient construction was previously not known. For specific classes of polytopes, we also give a direct, efficient construction of extended formulations arising from protocols. Finally, we deal with extended formulations coming from unambiguous non-deterministic protocols.
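
    For context, the construction this abstract refers to runs through Yannakakis' factorization theorem. The following LaTeX sketch (standard material; the notation $S$, $T$, $U$, $y$ is chosen here for illustration and is not taken from the paper) records the direction used above: a deterministic protocol yields a nonnegative factorization of the slack matrix, which in turn yields an extended formulation.

    % Sketch, following Yannakakis (1991); notation local to this note.
    % Let $P = \{x : Ax \le b\} = \mathrm{conv}(V)$, with slack matrix
    \[ S_{i,v} \;=\; b_i - A_i v \;\ge\; 0 . \]
    % Any nonnegative factorization $S = TU$ (with $T, U \ge 0$ and inner
    % dimension $r$) gives a size-$r$ extended formulation:
    \[ P \;=\; \{\, x \;:\; \exists\, y \ge 0,\ Ax + Ty = b \,\} . \]
    % A deterministic protocol computing the entries of $S$ with $c$ bits of
    % communication partitions $S$ into at most $2^c$ rank-one nonnegative
    % blocks, hence $r \le 2^c$. The output-efficiency question addressed
    % above is how to build $T$ without enumerating all rows of $S$.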

    Exponential Lower Bounds for Polytopes in Combinatorial Optimization

    We solve a 20-year-old problem posed by Yannakakis and prove that there exists no polynomial-size linear program (LP) whose associated polytope projects to the traveling salesman polytope, even if the LP is not required to be symmetric. Moreover, we prove that this also holds for the cut polytope and the stable set polytope. These results were discovered through a new connection that we make between one-way quantum communication protocols and semidefinite programming reformulations of LPs. Comment: 19 pages, 4 figures. This version of the paper will appear in the Journal of the ACM. The earlier conference version in STOC'12 had the title "Linear vs. Semidefinite Extended Formulations: Exponential Separation and Strong Lower Bounds".
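
    The quantitative bridge behind such unconditional lower bounds is Yannakakis' theorem, here used in the lower-bound direction; a one-line LaTeX statement of the standard identity (not specific to this paper) follows.

    % Extension complexity equals the nonnegative rank of the slack matrix:
    \[ \operatorname{xc}(P) \;=\; \operatorname{rank}_+\!\big(S(P)\big), \]
    % where xc(P) is the least number of inequalities in any polytope that
    % projects linearly onto P. An exponential lower bound on rank_+(S(P))
    % therefore rules out every polynomial-size LP at once, symmetric or not.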

    Small Extended Formulation for Knapsack Cover Inequalities from Monotone Circuits

    Initially developed for the min-knapsack problem, the knapsack cover inequalities are used in the current best relaxations for numerous combinatorial optimization problems of covering type. In spite of their widespread use, these inequalities yield linear programming (LP) relaxations of exponential size, over which it is not known how to optimize exactly in polynomial time. In this paper we address this issue and obtain LP relaxations of quasi-polynomial size that are at least as strong as that given by the knapsack cover inequalities. For the min-knapsack cover problem, our main result can be stated formally as follows: for any $\varepsilon > 0$, there is a $(1/\varepsilon)^{O(1)} n^{O(\log n)}$-size LP relaxation with an integrality gap of at most $2+\varepsilon$, where $n$ is the number of items. Prior to this work, there was no known relaxation of subexponential size with a constant upper bound on the integrality gap. Our construction is inspired by a connection between extended formulations and monotone circuit complexity via Karchmer-Wigderson games. In particular, our LP is based on $O(\log^2 n)$-depth monotone circuits with fan-in $2$ for evaluating weighted threshold functions with $n$ inputs, as constructed by Beimel and Weinreb. We believe that a further understanding of this connection may lead to more positive results complementing the numerous lower bounds recently proved for extended formulations. Comment: 21 pages.
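
    For reference, the knapsack cover inequalities themselves are as follows (standard; the notation $A$, $D_A$ is mine, not the paper's), stated in LaTeX for the min-knapsack instance $\min c^\top x$ subject to $\sum_i w_i x_i \ge D$, $x \in \{0,1\}^n$.

    % Knapsack cover (KC) inequalities: for every item set $A \subseteq [n]$
    % with positive residual demand $D_A := D - \sum_{i \in A} w_i > 0$,
    \[ \sum_{i \notin A} \min\{w_i,\, D_A\}\, x_i \;\ge\; D_A . \]
    % There are exponentially many choices of $A$, which is why a
    % quasi-polynomial-size relaxation at least as strong is noteworthy.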

    Adaptation and learning over networks for nonlinear system modeling

    In this chapter, we analyze nonlinear filtering problems in distributed environments, e.g., sensor networks or peer-to-peer protocols. In these scenarios, the agents in the environment receive measurements in a streaming fashion, and they are required to estimate a common (nonlinear) model by alternating local computations and communications with their neighbors. We focus on the important distinction between single-task problems, where the underlying model is common to all agents, and multitask problems, where each agent might converge to a different model due to, e.g., spatial dependencies or other factors. Currently, most of the literature on distributed learning in the nonlinear case has focused on the single-task case, which may be a strong limitation in real-world scenarios. After introducing the problem and reviewing the existing approaches, we describe a simple kernel-based algorithm tailored for the multitask case. We evaluate the proposal on a simulated benchmark task, and we conclude by detailing currently open problems and lines of research. Comment: To be published as a chapter in "Adaptive Learning Methods for Nonlinear System Modeling", Elsevier Publishing, Eds. D. Comminiello and J.C. Principe (2018).
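
    To make the setting concrete, here is a minimal Python sketch of one member of this family of methods: adapt-then-combine diffusion with a kernel model approximated by random Fourier features, so that each agent keeps a finite-dimensional weight vector. It is an illustration under assumptions of our own (ring network, Gaussian kernel, synthetic single-task stream), not the chapter's algorithm.

    # Minimal sketch: diffusion kernel LMS over a network (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)

    # Network: N agents on a ring with a doubly stochastic combination matrix.
    N = 5
    A = np.eye(N) * 0.5
    for k in range(N):
        A[k, (k + 1) % N] = 0.25
        A[k, (k - 1) % N] = 0.25

    # Random Fourier features approximating a Gaussian kernel
    # exp(-||x - x'||^2 / (2 * sigma^2)); spectral density is N(0, sigma^-2).
    d, D, sigma = 1, 100, 0.5
    Omega = rng.normal(scale=1.0 / sigma, size=(D, d))
    phase = rng.uniform(0.0, 2.0 * np.pi, size=D)

    def phi(x):
        return np.sqrt(2.0 / D) * np.cos(Omega @ x + phase)

    # Streaming adapt-then-combine (ATC) diffusion.
    W = np.zeros((N, D))   # one weight vector per agent
    mu = 0.1               # step size
    for t in range(2000):
        # Adapt: local kernel-LMS step on each agent's own measurement.
        psi = W.copy()
        for k in range(N):
            x = rng.uniform(-1.0, 1.0, size=d)
            y = np.sin(3.0 * x).sum() + 0.05 * rng.normal()  # noisy target
            z = phi(x)
            psi[k] += mu * (y - W[k] @ z) * z
        # Combine: average intermediate estimates with the neighbors'.
        W = A @ psi

    # Quick check: prediction error of agent 0 on a test grid.
    xs = np.linspace(-1.0, 1.0, 200).reshape(-1, d)
    preds = np.array([W[0] @ phi(x) for x in xs])
    print("agent-0 RMSE:", np.sqrt(np.mean((preds - np.sin(3.0 * xs[:, 0])) ** 2)))

    In a multitask variant, the combination matrix A would no longer be fixed: a common heuristic is to shrink the weight between two agents when their current estimates W[k] differ substantially, so that agents only cooperate with neighbors solving a similar task.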