
    Average case polyhedral complexity of the maximum stable set problem

    We study the minimum number of constraints needed to formulate random instances of the maximum stable set problem via linear programs (LPs), in two distinct models. In the uniform model, the constraints of the LP are not allowed to depend on the input graph, which should be encoded solely in the objective function. There we prove a $2^{\Omega(n/\log n)}$ lower bound with probability at least $1 - 2^{-2^n}$ for every LP that is exact for a randomly selected set of instances; each graph on at most $n$ vertices being selected independently with probability $p \geq 2^{-\binom{n/4}{2}+n}$. In the non-uniform model, the constraints of the LP may depend on the input graph, but we allow weights on the vertices. The input graph is sampled according to the $G(n, p)$ model. There we obtain upper and lower bounds holding with high probability for various ranges of $p$. We obtain a super-polynomial lower bound all the way from $p = \Omega(\log^{6+\varepsilon} n / n)$ to $p = o(1/\log n)$. Our upper bound is close to this as there is only an essentially quadratic gap in the exponent, which currently also exists in the worst-case model. Finally, we state a conjecture that would close this gap, both in the average-case and worst-case models.
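    For orientation, the quantity being bounded is the number of linear inequalities in an LP whose optimum matches the maximum stable set value. As a point of reference (an illustration added here, not a formulation from the paper), the classical edge relaxation of maximum stable set on a graph $G = (V, E)$ with vertex weights $w$ reads:

        \begin{align*}
        \max\ & \sum_{v \in V} w_v\, x_v \\
        \text{s.t.}\ & x_u + x_v \le 1 && \text{for all } uv \in E, \\
        & 0 \le x_v \le 1 && \text{for all } v \in V.
        \end{align*}

    This relaxation has only $O(|E| + |V|)$ constraints but is exact only for special graph classes (e.g. bipartite graphs); the lower bounds above say that any LP exact on the random instances must be exponentially larger.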

    The matching polytope does not admit fully-polynomial size relaxation schemes

    The groundbreaking work of Rothvoß [arXiv:1311.2369] established that every linear program expressing the matching polytope has an exponential number of inequalities (formally, the matching polytope has exponential extension complexity). We generalize this result by deriving strong bounds on the polyhedral inapproximability of the matching polytope: for fixed $0 < \varepsilon < 1$, every polyhedral $(1 + \varepsilon/n)$-approximation requires an exponential number of inequalities, where $n$ is the number of vertices. This is sharp given the well-known $\rho$-approximation of size $O\big(\binom{n}{\rho/(\rho-1)}\big)$ provided by the odd-sets of size up to $\rho/(\rho-1)$. Thus matching is the first problem in $P$ whose natural linear encoding does not admit a fully polynomial-size relaxation scheme (the polyhedral equivalent of an FPTAS), which provides a sharp separation from the polynomial-size relaxation scheme obtained, e.g., via the constant-sized odd-sets mentioned above. Our approach reuses ideas from Rothvoß [arXiv:1311.2369]; however, the main lower bounding technique is different. While the original proof is based on the hyperplane separation bound (also called the rectangle corruption bound), we employ the information-theoretic notion of common information as introduced in Braun and Pokutta [http://eccc.hpi-web.de/report/2013/056/], which allows us to analyze perturbations of slack matrices. It turns out that the high extension complexity of the matching polytope stems from the same source of hardness as for the correlation polytope: a direct sum structure. Comment: 21 pages, 3 figures.
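    For context, the odd-set inequalities referred to above come from Edmonds' classical description of the matching polytope, a standard fact included here for reference:

        \[
        \sum_{e \in \delta(v)} x_e \le 1 \quad (v \in V), \qquad
        \sum_{e \in E[S]} x_e \le \frac{|S| - 1}{2} \quad (S \subseteq V,\ |S| \text{ odd}), \qquad
        x \ge 0.
        \]

    Keeping only the odd-set inequalities for $|S| \le \rho/(\rho-1)$ yields the $\rho$-approximation of size $O\big(\binom{n}{\rho/(\rho-1)}\big)$ mentioned in the abstract.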

    Computing the vertices of tropical polyhedra using directed hypergraphs

    We establish a characterization of the vertices of a tropical polyhedron defined as the intersection of finitely many half-spaces. We show that a point is a vertex if, and only if, a directed hypergraph, constructed from the subdifferentials of the active constraints at this point, admits a unique strongly connected component that is maximal with respect to the reachability relation (all the other strongly connected components have access to it). This property can be checked in almost linear time. This allows us to develop a tropical analogue of the classical double description method, which computes a minimal internal representation (in terms of vertices) of a polyhedron defined externally (by half-spaces or hyperplanes). We provide theoretical worst-case complexity bounds and report extensive experimental tests performed using the library TPLib, showing that this method outperforms the other existing approaches. Comment: 29 pages (A4), 10 figures, 1 table; v2: improved algorithm in section 5 (using directed hypergraphs), detailed appendix; v3: major revision of the article (adding tropical hyperplanes, alternative method by arrangements, etc.); v4: minor revision.
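    The vertex test reduces to a reachability condition on strongly connected components. The sketch below illustrates that condition on an ordinary digraph rather than the paper's directed hypergraphs; this simplification and all names are assumptions made for illustration. It condenses the graph into its SCC DAG and checks that exactly one component is a sink, which for a DAG is equivalent to one component being reachable from all others.

        from collections import defaultdict

        def strongly_connected_components(n, edges):
            """Kosaraju's algorithm: label each node 0..n-1 with an SCC id."""
            graph, rgraph = defaultdict(list), defaultdict(list)
            for u, v in edges:
                graph[u].append(v)
                rgraph[v].append(u)
            # First pass: record nodes in order of DFS completion.
            order, seen = [], [False] * n
            for s in range(n):
                if seen[s]:
                    continue
                seen[s] = True
                stack = [(s, iter(graph[s]))]
                while stack:
                    v, it = stack[-1]
                    for w in it:
                        if not seen[w]:
                            seen[w] = True
                            stack.append((w, iter(graph[w])))
                            break
                    else:
                        order.append(v)
                        stack.pop()
            # Second pass: sweep the reverse graph in reverse completion order.
            comp, c = [-1] * n, 0
            for s in reversed(order):
                if comp[s] != -1:
                    continue
                comp[s], todo = c, [s]
                while todo:
                    v = todo.pop()
                    for w in rgraph[v]:
                        if comp[w] == -1:
                            comp[w] = c
                            todo.append(w)
                c += 1
            return comp, c

        def has_unique_maximal_scc(n, edges):
            """True iff the condensation has exactly one sink SCC, i.e. a unique
            SCC that every other SCC can reach (digraph stand-in for the paper's
            hypergraph criterion)."""
            comp, c = strongly_connected_components(n, edges)
            is_sink = [True] * c
            for u, v in edges:
                if comp[u] != comp[v]:
                    is_sink[comp[u]] = False
            return sum(is_sink) == 1

        # Example: SCCs {0,1} and {2}, with 0 -> 2, so {2} is the unique sink.
        print(has_unique_maximal_scc(3, [(0, 1), (1, 0), (0, 2)]))  # True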

    Sensitivity Analysis for Mirror-Stratifiable Convex Functions

    This paper provides a set of sensitivity analysis and activity identification results for a class of convex functions with a strong geometric structure, which we coin "mirror-stratifiable". These functions are such that there is a bijection between a primal and a dual stratification of the space into partitioning sets, called strata. This pairing is crucial to track the strata that are identifiable by solutions of parametrized optimization problems or by iterates of optimization algorithms. This class of functions encompasses all regularizers routinely used in signal and image processing, machine learning, and statistics. We show that this "mirror-stratifiable" structure enjoys a nice sensitivity theory, allowing us to study the stability of solutions of optimization problems under small perturbations, as well as activity identification for first-order proximal splitting-type algorithms. Existing results in the literature typically assume that, under a non-degeneracy condition, the active set associated to a minimizer is stable under small perturbations and is identified in finite time by optimization schemes. In contrast, our results do not require any non-degeneracy assumption: as a consequence, the optimal active set is no longer necessarily stable, but we are able to track precisely the set of identifiable strata. We show that these results have crucial implications when solving challenging ill-posed inverse problems via regularization, a typical scenario where the non-degeneracy condition is not fulfilled. Our theoretical results, illustrated by numerical simulations, allow us to characterize the instability behaviour of the regularized solutions by locating the set of all low-dimensional strata that can potentially be identified by these solutions.
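    A concrete instance of the activity identification discussed above: for the $\ell_1$ regularizer (a mirror-stratifiable function whose strata correspond to supports), the iterates of a proximal splitting method such as forward-backward (ISTA) settle on a fixed support after finitely many iterations. The sketch below is an assumed illustration of that phenomenon, not code from the paper; all problem sizes and names are made up.

        import numpy as np

        def ista_lasso(A, b, lam, step, iters=500):
            """Forward-backward (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
            Also returns the support of each iterate so identification can be observed."""
            x = np.zeros(A.shape[1])
            supports = []
            for _ in range(iters):
                grad = A.T @ (A @ x - b)                                  # gradient of the smooth term
                z = x - step * grad                                       # forward (gradient) step
                x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # prox of lam*||.||_1
                supports.append(tuple(np.nonzero(x)[0]))
            return x, supports

        rng = np.random.default_rng(0)
        A = rng.standard_normal((40, 100))
        x_true = np.zeros(100)
        x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]
        b = A @ x_true + 0.01 * rng.standard_normal(40)
        step = 1.0 / np.linalg.norm(A, 2) ** 2    # step < 1/L with L = ||A||_2^2
        x_hat, supports = ista_lasso(A, b, lam=0.1, step=step)
        # After some burn-in, the support (the identified stratum) stops changing:
        print(supports[-1] == supports[-100])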

    The Value-of-Information in Matching with Queues

    We consider the problem of optimal matching with queues in dynamic systems and investigate the value-of-information. In such systems, the operators match tasks and resources stored in queues, with the objective of maximizing the system utility of the matching reward profile, minus the average matching cost. This problem appears in many practical systems, and the main challenges are the no-underflow constraints and the lack of matching-reward information and system dynamics statistics. We develop two online matching algorithms, Learning-aided Reward optimAl Matching (LRAM) and Dual-LRAM (DRAM), to effectively resolve both challenges. Both algorithms are equipped with a learning module for estimating the matching-reward information, while DRAM incorporates an additional module for learning the system dynamics. We show that both algorithms achieve an $O(\epsilon + \delta_r)$ close-to-optimal utility performance for any $\epsilon > 0$, while DRAM achieves a faster convergence speed and a better delay compared to LRAM, i.e., $O(\delta_z/\epsilon + \log(1/\epsilon)^2)$ delay and $O(\delta_z/\epsilon)$ convergence under DRAM compared to $O(1/\epsilon)$ delay and convergence under LRAM ($\delta_r$ and $\delta_z$ are the maximum estimation errors for the reward and the system dynamics, respectively). Our results reveal that information about different system components can play very different roles in algorithm performance, and they provide a systematic way of designing joint learning-control algorithms for dynamic systems.
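    The abstract does not specify LRAM or DRAM, so the following is not the paper's algorithm; it is a generic max-weight-style sketch of the ingredients named above (queued matching, no-underflow constraints, learned reward estimates), with all sizes, names, and the scoring rule assumed for illustration.

        import numpy as np

        rng = np.random.default_rng(1)
        T, n_types = 10_000, 3           # time slots; task/resource types (made up)
        V = 50.0                         # utility-vs-backlog tradeoff parameter
        true_reward = rng.uniform(1.0, 3.0, size=(n_types, n_types))
        cost = 0.5                       # per-match cost

        q_task = np.zeros(n_types)       # task queues
        q_res = np.zeros(n_types)        # resource queues
        reward_sum = np.zeros((n_types, n_types))
        match_cnt = np.ones((n_types, n_types))   # avoid division by zero

        for t in range(T):
            # Random arrivals of one task and one resource per slot.
            q_task[rng.integers(n_types)] += 1
            q_res[rng.integers(n_types)] += 1
            # Learning module: empirical-mean reward estimates.
            r_hat = reward_sum / match_cnt
            # Max-weight-style score: scaled estimated net reward plus backlogs.
            score = V * (r_hat - cost) + q_task[:, None] + q_res[None, :]
            i, j = np.unravel_index(np.argmax(score), score.shape)
            # No-underflow: match only if both queues are non-empty (and score > 0).
            if score[i, j] > 0 and q_task[i] >= 1 and q_res[j] >= 1:
                q_task[i] -= 1
                q_res[j] -= 1
                obs = true_reward[i, j] + 0.1 * rng.standard_normal()
                reward_sum[i, j] += obs
                match_cnt[i, j] += 1

        print("final total backlog:", q_task.sum() + q_res.sum())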