
    Superpolynomial smoothed complexity of 3-FLIP in Local Max-Cut

    We construct a graph with $n$ vertices where the smoothed runtime of the 3-FLIP algorithm for the 3-Opt Local Max-Cut problem can be as large as $2^{\Omega(\sqrt{n})}$. This provides the first example where a local search algorithm for the Max-Cut problem can fail to be efficient in the framework of smoothed analysis. We also give a new construction of graphs where the runtime of the FLIP algorithm for the Local Max-Cut problem is $2^{\Omega(n)}$ for any pivot rule. This graph is much smaller and has a simpler structure than previous constructions. (Comment: 18 pages, 3 figures)
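
    To make the local search dynamics concrete, here is a minimal sketch of the classic FLIP heuristic for weighted Max-Cut with a first-improvement pivot rule; the paper's lower bounds hold for any pivot rule, and the graph and weights below are purely illustrative assumptions, not the paper's construction.

    ```python
    # Minimal sketch of FLIP local search for weighted Max-Cut.
    # graph: dict mapping vertex -> list of (neighbor, weight) pairs.
    # Pivot rule here is "first improving vertex", chosen only for illustration.

    def flip_local_search(graph):
        side = {v: 0 for v in graph}          # start with all vertices on side 0
        improved = True
        while improved:
            improved = False
            for v, nbrs in graph.items():
                # Gain of flipping v: weight of same-side neighbors minus
                # weight of neighbors already across the cut.
                gain = sum(w if side[u] == side[v] else -w for u, w in nbrs)
                if gain > 0:
                    side[v] ^= 1              # flip v; cut weight strictly increases
                    improved = True
        return side                            # local optimum: no single flip helps

    # Tiny example: a weighted triangle.
    g = {1: [(2, 3.0), (3, 1.0)], 2: [(1, 3.0), (3, 2.0)], 3: [(1, 1.0), (2, 2.0)]}
    print(flip_local_search(g))                # cuts the two heaviest edges
    ```

    Each flip strictly increases the cut weight, so the procedure always terminates at a local optimum; the point of the paper is that for the 3-FLIP variant the number of flips can remain superpolynomial even after the weights are randomly perturbed.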

    Node-Max-Cut and the Complexity of Equilibrium in Linear Weighted Congestion Games

    In this work, we seek a more refined understanding of the complexity of local optimum computation for Max-Cut and of pure Nash equilibrium (PNE) computation for congestion games with weighted players and linear latency functions. We show that computing a PNE of linear weighted congestion games is PLS-complete either for very restricted strategy spaces, namely when player strategies are paths on a series-parallel network with a single origin and destination, or for very restricted latency functions, namely when the latency on each resource is equal to the congestion. Our results reveal a remarkable gap between the complexity of PNE computation in congestion games with weighted and with unweighted players: in the unweighted case, a PNE can be easily computed by either a simple greedy algorithm (for series-parallel networks) or any better-response dynamics (when the latency is equal to the congestion). For the latter of the results above, we first need to show that computing a local optimum of a natural restriction of Max-Cut, which we call Node-Max-Cut, is PLS-complete. In Node-Max-Cut, the input graph is vertex-weighted and the weight of each edge is equal to the product of the weights of its endpoints. Due to the very restricted nature of Node-Max-Cut, the reduction requires a careful combination of new gadgets with ideas and techniques from previous work. We also show how to efficiently compute a $(1+\varepsilon)$-approximate equilibrium for Node-Max-Cut if the number of distinct vertex weights is constant.
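
    As a concrete illustration of the Node-Max-Cut structure, where every edge weight is the product of its endpoints' vertex weights, the hypothetical snippet below checks whether a cut is a local optimum; the instance and names are ours, not the paper's.

    ```python
    # In Node-Max-Cut each vertex v carries a weight w[v], and the edge
    # {u, v} implicitly has weight w[u] * w[v].  A cut is a local optimum
    # when no vertex can increase its incident cut weight by switching sides.

    def switch_gain(adj, w, side, v):
        # Gain for v of switching: same-side neighbor weight minus cut
        # neighbor weight, each edge weighted by w[v] * w[u].
        return sum(w[v] * w[u] * (1 if side[u] == side[v] else -1)
                   for u in adj[v])

    def is_local_optimum(adj, w, side):
        return all(switch_gain(adj, w, side, v) <= 0 for v in adj)

    # Toy instance: path a - b - c with vertex weights 2, 3, 1,
    # so edge weights are ab = 6 and bc = 3.
    adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
    w = {"a": 2, "b": 3, "c": 1}
    side = {"a": 0, "b": 1, "c": 0}      # both edges are cut
    print(is_local_optimum(adj, w, side))  # True: cut weight 9 is maximal here
    ```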

    Minimum Stable Cut and Treewidth

    A stable or locally optimal cut of a graph is a cut whose weight cannot be increased by changing the side of a single vertex. Equivalently, a cut is stable if every vertex has the (weighted) majority of its neighbors on the other side. Finding a stable cut is a prototypical PLS-complete problem that has been studied in the context of local search and of algorithmic game theory. In this paper we study Min Stable Cut, the problem of finding a stable cut of minimum weight, which is closely related to the Price of Anarchy of the Max-Cut game. Since this problem is NP-hard, we study its complexity on graphs of low treewidth, low degree, or both. We begin by showing that the problem remains weakly NP-hard on severely restricted trees, so bounding treewidth alone cannot make it tractable. We match this hardness with a pseudo-polynomial DP algorithm solving the problem in time $(\Delta \cdot W)^{O(tw)} n^{O(1)}$, where $tw$ is the treewidth, $\Delta$ the maximum degree, and $W$ the maximum weight. On the other hand, bounding $\Delta$ is also not enough, as the problem is NP-hard for unweighted graphs of bounded degree. We therefore parameterize Min Stable Cut by both $tw$ and $\Delta$ and obtain an FPT algorithm running in time $2^{O(\Delta \cdot tw)}(n+\log W)^{O(1)}$. Our main result for the weighted problem is a reduction showing that both aforementioned algorithms are essentially optimal, even if we replace treewidth by pathwidth: if there exists an algorithm running in time $(nW)^{o(pw)}$ or $2^{o(\Delta \cdot pw)}(n+\log W)^{O(1)}$, then the ETH is false. Complementing this, we show that we can nevertheless obtain an FPT approximation scheme parameterized by treewidth if we consider almost-stable solutions, that is, solutions where no single vertex can unilaterally increase the weight of its incident cut edges by more than a factor of $(1+\varepsilon)$. Motivated by these mostly negative results, we consider Unweighted Min Stable Cut. Here our results already imply a much faster exact algorithm running in time $\Delta^{O(tw)} n^{O(1)}$. We show that this is also probably essentially optimal: an algorithm running in time $n^{o(pw)}$ would contradict the ETH.
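
    For intuition, a brute-force sketch of Min Stable Cut on a toy instance follows. The stability test mirrors the definition above (no single flip can increase a vertex's incident cut weight); the exponential enumeration is purely illustrative and bears no relation to the paper's DP or FPT algorithms.

    ```python
    from itertools import product

    # Brute-force Min Stable Cut on a tiny weighted graph: enumerate all
    # cuts, keep those where every vertex has at least half of its weighted
    # degree across the cut (so flipping cannot gain), return the lightest.

    def min_stable_cut(vertices, edges):
        best = None                              # (cut_weight, assignment)
        for bits in product([0, 1], repeat=len(vertices)):
            side = dict(zip(vertices, bits))
            cut = sum(w for (u, v), w in edges.items() if side[u] != side[v])
            stable = all(
                sum(w for (u, v), w in edges.items()
                    if x in (u, v) and side[u] != side[v])
                >= sum(w for (u, v), w in edges.items()
                       if x in (u, v) and side[u] == side[v])
                for x in vertices)
            if stable and (best is None or cut < best[0]):
                best = (cut, side)
            return best

    # 4-cycle with one heavy edge: the maximum cut has weight 8, but the
    # minimum *stable* cut only needs weight 6 (sides {a, d} vs {b, c}).
    edges = {("a", "b"): 5, ("b", "c"): 1, ("c", "d"): 1, ("d", "a"): 1}
    print(min_stable_cut(list("abcd"), edges))
    ```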

    Combinatorial Optimization

    Combinatorial Optimization is an active research area that developed from the rich interaction among many mathematical fields, including combinatorics, graph theory, geometry, optimization, probability, and theoretical computer science. It combines algorithmic and complexity analysis with a mature mathematical foundation, and it yields both basic research and applications in manifold areas such as communications, economics, traffic, network design, VLSI, scheduling, production, and computational biology, to name just a few. Through strong inner ties to other mathematical fields it has been contributing to and benefiting from areas such as discrete and convex geometry, convex and nonlinear optimization, algebraic and topological methods, geometry of numbers, matroids and combinatorics, and mathematical programming. Moreover, with respect to applications and algorithmic complexity, Combinatorial Optimization is an essential link between mathematics, computer science, and modern applications in data science, economics, and industry.

    Streaming beyond sketching for Maximum Directed Cut

    We give an $\widetilde{O}(\sqrt{n})$-space single-pass $0.483$-approximation streaming algorithm for estimating the maximum directed cut size (Max-DICUT) in a directed graph on $n$ vertices. This improves over an $O(\log n)$-space $4/9 < 0.45$ approximation algorithm due to Chou, Golovnev, and Velusamy (FOCS 2020), which was known to be optimal for $o(\sqrt{n})$-space algorithms. Max-DICUT is a special case of a constraint satisfaction problem (CSP). In this broader context, our work gives the first CSP for which algorithms with $\widetilde{O}(\sqrt{n})$ space can provably outperform $o(\sqrt{n})$-space algorithms on general instances. Previously, this was shown only in the restricted case of bounded-degree graphs, in an earlier work of the authors (SODA 2023). Prior to that work, the only algorithms for any CSP were based on generalizations of the $O(\log n)$-space algorithm for Max-DICUT, and were in particular so-called "sketching" algorithms. In this work, we demonstrate that more sophisticated streaming algorithms can outperform these algorithms even on general instances. Our algorithm constructs a "snapshot" of the graph and then applies a result of Feige and Jozeph (Algorithmica, 2015) to approximately estimate the Max-DICUT value from this snapshot. Constructing this snapshot is easy for bounded-degree graphs, and the main contribution of our work is to construct this snapshot in the general setting. This involves some delicate sampling methods as well as a host of "continuity" results on the Max-DICUT behaviour in graphs. (Comment: 57 pages, 2 figures)
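
    To pin down the quantity being estimated, the sketch below computes the Max-DICUT value exactly by enumeration on a toy digraph; this is a definition-level illustration only and has nothing to do with the paper's snapshot-based streaming algorithm.

    ```python
    from itertools import product

    # Exact Max-DICUT of a small digraph: a directed edge (u, v) is cut
    # when u is assigned 1 and v is assigned 0.  Exponential enumeration,
    # purely to make the objective concrete.

    def max_dicut(n, edges):
        return max(
            sum(1 for u, v in edges if bits[u] == 1 and bits[v] == 0)
            for bits in product([0, 1], repeat=n))

    edges = [(0, 1), (1, 2), (2, 0), (0, 2)]
    print(max_dicut(3, edges))  # 2, e.g. via the assignment 0 -> 1, 1 -> 0, 2 -> 0
    ```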

    On streaming approximation algorithms for constraint satisfaction problems

    In this thesis, we explore streaming algorithms for approximating constraint satisfaction problems (CSPs). The setup is roughly the following: a computer with limited memory sees a long "stream" of local constraints on a set of variables and tries to estimate how many of the constraints may be simultaneously satisfied. The past ten years have seen a number of works in this area, and this thesis includes both expository material and novel contributions. Throughout, we emphasize connections to the broader theories of CSPs, approximability, and streaming models, and highlight interesting open problems. The first part of the thesis is expository: we present aspects of previous works that completely characterize the approximability of specific CSPs like Max-Cut and Max-DICUT with $\sqrt{n}$-space streaming algorithms (on $n$-variable instances), while characterizing the approximability of all CSPs in $\sqrt{n}$ space in the special case of "composable" (i.e., sketching) algorithms, and of a particular subclass of CSPs with linear-space streaming algorithms. In the second part of the thesis, we present two of our own joint works. We begin with a work with Madhu Sudan and Santhoshini Velusamy in which we prove linear-space streaming approximation-resistance for all ordering CSPs (OCSPs), which are "CSP-like" problems maximizing over sets of permutations. Next, we present joint work with Joanna Boyland, Michael Hwang, Tarun Prasad, and Santhoshini Velusamy in which we investigate the $\sqrt{n}$-space streaming approximability of symmetric Boolean CSPs with negations. We give explicit $\sqrt{n}$-space sketching approximability ratios for several families of CSPs, including Max-$k$AND; develop simpler optimal sketching approximation algorithms for threshold predicates; and show that previous lower bounds fail to characterize the $\sqrt{n}$-space streaming approximability of Max-$3$AND. (Comment: Harvard College senior thesis; 119 pages plus references; abstract shortened for arXiv; formatted with Dissertate template (feel free to copy!); exposits papers arXiv:2105.01782 (APPROX 2021) and arXiv:2112.06319 (APPROX 2022).)
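
    As a minimal illustration of the streaming model described above (not taken from the thesis), the following one-pass counter gives the folklore 1/2-approximate estimate for Max-Cut: since the maximum cut of any $m$-edge graph lies between $m/2$ and $m$, outputting $m/2$ is within a factor 2 of the true value while storing only a counter.

    ```python
    # Folklore O(log n)-space streaming baseline for Max-Cut: one pass,
    # one counter.  The estimate m/2 always lies within a factor 2 of the
    # true Max-Cut value, since m/2 <= MaxCut <= m for any m-edge graph.

    def trivial_maxcut_estimate(edge_stream):
        m = 0
        for _ in edge_stream:   # single pass; edges are read and discarded
            m += 1
        return m / 2

    print(trivial_maxcut_estimate(iter([(0, 1), (1, 2), (2, 0)])))  # 1.5; true value is 2
    ```

    Beating this kind of trivial, counting-based guarantee is precisely what the $\sqrt{n}$-space algorithms discussed in the thesis are about.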

    A simple and sharper proof of the hypergraph Moore bound

    The hypergraph Moore bound is an elegant statement that characterizes the extremal trade-off between the girth, i.e., the number of hyperedges in the smallest cycle or, more generally, even cover (a subhypergraph with all degrees even), and the size, i.e., the total number of hyperedges in a hypergraph. For graphs (i.e., $2$-uniform hypergraphs), a bound tight up to the leading constant was proven in a classical work of Alon, Hoory and Linial [AHL02]. For hypergraphs of uniformity $k>2$, an appropriate generalization was conjectured by Feige [Fei08]. The conjecture was settled up to an additional $\log^{4k+1} n$ factor in the size in a recent work of Guruswami, Kothari and Manohar [GKM21]. Their argument relies on a connection between the existence of short even covers and the spectrum of a certain randomly signed Kikuchi matrix. Their analysis, especially for the case of odd $k$, is significantly complicated. In this work, we present a substantially simpler and shorter proof of the hypergraph Moore bound. Our key idea is the use of a new reweighted Kikuchi matrix and an edge deletion step that allows us to drop several involved steps in [GKM21]'s analysis, such as the combinatorial bucketing of rows of the Kikuchi matrix and the use of the Schudy-Sviridenko polynomial concentration. Our simpler proof also obtains tighter parameters: in particular, the argument gives a new proof of the classical Moore bound of [AHL02] with no loss (the proof in [GKM21] loses a $\log^3 n$ factor) and loses only a single logarithmic factor for all $k$-uniform hypergraphs with $k>2$. As in [GKM21], our ideas naturally extend to yield a simpler proof of the full trade-off for strongly refuting smoothed instances of constraint satisfaction problems, with similarly improved parameters.
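
    For orientation, here is the rough shape of the trade-off in question, stated as we recall it from [AHL02], [Fei08], and [GKM21] rather than in its exact form; constants and logarithmic factors are indicative only.

    ```latex
    % Rough statements, hedged; constants and log factors are indicative only.
    \paragraph{Graph Moore bound [AHL02].}
    Any graph on $n$ vertices with average degree $d > 2$ has girth
    \[
      g \le 2\log_{d-1} n + O(1),
    \]
    equivalently, roughly $m \ge n^{1+1/\ell}$ edges force a cycle of length $O(\ell)$.

    \paragraph{Hypergraph Moore bound (conjectured in [Fei08]).}
    Every $k$-uniform hypergraph on $n$ vertices with
    \[
      m \ge c_k \, n \left(\frac{n}{\ell}\right)^{k/2-1}
    \]
    hyperedges contains an even cover of size $O(\ell \log n)$; [GKM21] proved this
    with an extra $\log^{4k+1} n$ factor in $m$, and the present work loses only a
    single logarithmic factor.
    ```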

    Balancing and Sequencing of Mixed Model Assembly Lines

    Assembly lines are cost-efficient production systems that mass-produce identical products. Driven by customer demand, manufacturers use mixed-model assembly lines to produce customized products that are not identical. To stay efficient, management decisions for the line, such as the number of workers and the assignment of assembly tasks to stations, need to be optimized to increase throughput and decrease cost. In each station, the work to be done depends on the exact product configuration and is not consistent across all products. In this dissertation, a mixed-model line balancing integer program (IP) that considers parallel workers, zoning, task assignment, and ergonomic constraints, with the objective of minimizing the number of workers, is proposed. Upon observing the limitations of the IP, a constraint programming (CP) model based on the CPLEX CP Optimizer is developed to solve larger line balancing problems. Data from an automotive OEM are used to assess the performance of both the IP and CP models; using these data, we show that the CP model outperforms the IP model on larger problems. A sensitivity analysis assesses the impact of enforcing individual constraints on computational complexity, and the extent to which those constraints are violated once disabled. Results show that some of the constraints help reduce computation time; specifically, assignment constraints that fix or bound decision variables yield a smaller search space. Finally, since the mixed-model line balance is based on average task durations, we propose a mixed-model sequencing model that minimizes the number of overload situations caused by variability in task times by providing an optimal production sequence. We consider the skip policy for managing overload situations and allow interactions between stations via worker swimming. An IP formulation is proposed, and a GRASP heuristic is developed to solve the problem. Data from the literature are used to assess the performance of the developed heuristic and to show the benefit of swimming in reducing work overload situations.
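
    To ground the modeling idea, here is a minimal single-model line balancing sketch in the SALBP-1 style, written with the open-source PuLP modeler; it is not the dissertation's mixed-model formulation (no parallel workers, zoning, or ergonomic constraints), and the task data are illustrative assumptions.

    ```python
    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

    # Minimal SALBP-1-style sketch: assign tasks to stations so that each
    # station's load fits the cycle time and precedence is respected, while
    # minimizing the number of opened stations (a proxy for workers).
    tasks = {"a": 3, "b": 4, "c": 2, "d": 5}     # task -> duration (illustrative)
    prec = [("a", "c"), ("b", "d")]               # (i, j): i must precede j
    stations = range(3)
    cycle_time = 7

    prob = LpProblem("line_balance", LpMinimize)
    x = {(t, s): LpVariable(f"x_{t}_{s}", cat=LpBinary)
         for t in tasks for s in stations}        # x[t, s] = task t at station s
    y = {s: LpVariable(f"y_{s}", cat=LpBinary) for s in stations}  # station opened?

    prob += lpSum(y[s] for s in stations)         # objective: open few stations
    for t in tasks:                                # each task assigned exactly once
        prob += lpSum(x[t, s] for s in stations) == 1
    for s in stations:                             # load fits cycle time if opened
        prob += lpSum(tasks[t] * x[t, s] for t in tasks) <= cycle_time * y[s]
    for i, j in prec:                              # precedence via station indices
        prob += (lpSum(s * x[i, s] for s in stations)
                 <= lpSum(s * x[j, s] for s in stations))

    prob.solve()
    print([(t, s) for (t, s), v in x.items() if v.value() > 0.5])
    ```

    On these data, two stations suffice (e.g., {a, b} and {c, d}, each with load 7); the dissertation's point is that once mixed-model, zoning, and ergonomic constraints are added, such IPs stop scaling and CP or heuristic approaches become attractive.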