
    A Nearly Quadratic Bound for the Decision Tree Complexity of k-SUM

    We show that the k-SUM problem can be solved by a linear decision tree of depth O(n^2 log^2 n), improving the recent bound O(n^3 log^3 n) of Cardinal et al. Our bound depends linearly on k, and allows us to conclude that the number of linear queries required to decide the n-dimensional Knapsack or Subset Sum problems is only O(n^3 log n), improving the currently best known bounds by a factor of n. Our algorithm extends to the RAM model, showing that the k-SUM problem can be solved in expected polynomial time, for any fixed k, with the above bound on the number of linear queries. Our approach relies on a new point-location mechanism, exploiting "epsilon-cuttings" that are based on vertical decompositions in hyperplane arrangements in high dimensions. A major side result of the analysis in this paper is a sharper bound on the complexity of the vertical decomposition of such an arrangement (in terms of its dependence on the dimension). We hope that this study will reveal further structural properties of vertical decompositions in hyperplane arrangements.
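
    As a point of reference for the problem statement (not for the paper's technique), here is a minimal brute-force k-SUM check in Python. The function name and interface are my own; the exhaustive O(n^k)-time scan merely illustrates what is being decided, whereas the result above concerns the number of linear queries made by a decision tree.

```python
from itertools import combinations

def k_sum_exists(nums, k):
    """Brute-force k-SUM check: does some k-subset of nums sum to zero?

    Exhaustive O(n^k) baseline for illustration only; the paper's bound
    counts linear queries (sign tests of linear forms in the inputs)
    made by a decision tree, not this running time.
    """
    return any(sum(combo) == 0 for combo in combinations(nums, k))

# Example: a 3-SUM instance containing the zero-summing triple (-5, 2, 3).
print(k_sum_exists([-5, 1, 2, 3, 9], 3))  # True
```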

    Computational Geometry Column 42

    A compendium of thirty previously published open problems in computational geometry is presented. Comment: 7 pages; 72 references

    Improved Bounds for 3SUM, k-SUM, and Linear Degeneracy

    Given a set of n real numbers, the 3SUM problem is to decide whether there are three of them that sum to zero. Until a recent breakthrough by Grønlund and Pettie [FOCS'14], a simple Θ(n^2)-time deterministic algorithm for this problem was conjectured to be optimal. Over the years many algorithmic problems have been shown to be reducible from the 3SUM problem or its variants, including the more generalized forms of the problem, such as k-SUM and k-variate linear degeneracy testing (k-LDT). The conjectured hardness of these problems has become extremely popular for basing conditional lower bounds for numerous algorithmic problems in P. In this paper, we show that the randomized 4-linear decision tree complexity of 3SUM is O(n^{3/2}), and that the randomized (2k-2)-linear decision tree complexity of k-SUM and k-LDT is O(n^{k/2}), for any odd k ≥ 3. These bounds improve (albeit randomized) the corresponding O(n^{3/2} √(log n)) and O(n^{k/2} √(log n)) decision tree bounds obtained by Grønlund and Pettie. Our technique includes a specialized randomized variant of the fractional cascading data structure. Additionally, we give another deterministic algorithm for 3SUM that runs in O(n^2 log log n / log n) time. The latter bound matches a recent independent bound by Freund [Algorithmica 2017], but our algorithm is somewhat simpler, due to a better use of the word-RAM model.
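
    For context, the long-conjectured-optimal quadratic algorithm referenced above can be sketched as follows; this is the standard sort-and-two-pointers baseline, not the paper's randomized decision-tree or O(n^2 log log n / log n) word-RAM algorithms, and the function name is illustrative.

```python
def three_sum_exists(nums):
    """Classic quadratic 3SUM: sort, then scan with two pointers.

    For each fixed a[i], the inner two-pointer scan looks for a pair
    summing to -a[i] in O(n) time, giving O(n^2) overall.
    """
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1
            else:
                hi -= 1
    return False

print(three_sum_exists([-4, -1, 0, 2, 5]))  # True: (-4) + (-1) + 5 == 0
```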

    Low-Sensitivity Functions from Unambiguous Certificates

    We provide new query complexity separations against sensitivity for total Boolean functions: a power 3 separation between deterministic (and even randomized or quantum) query complexity and sensitivity, and a power 2.22 separation between certificate complexity and sensitivity. We get these separations by using a new connection between sensitivity and a seemingly unrelated measure called one-sided unambiguous certificate complexity (UC_min). We also show that UC_min is lower-bounded by fractional block sensitivity, which means we cannot use these techniques to get a super-quadratic separation between bs(f) and s(f). We also provide a quadratic separation between the tree-sensitivity and decision tree complexity of Boolean functions, disproving a conjecture of Gopalan, Servedio, Tal, and Wigderson (CCC 2016). Along the way, we give a power 1.22 separation between certificate complexity and one-sided unambiguous certificate complexity, improving the power 1.128 separation due to Göös (FOCS 2015). As a consequence, we obtain an improved Ω(log^{1.22} n) lower bound on the co-nondeterministic communication complexity of the Clique vs. Independent Set problem. Comment: 25 pages. This version expands the results and adds Pooya Hatami and Avishay Tal as authors
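
    For readers less familiar with the measures involved, the snippet below computes the sensitivity s(f) of a small Boolean function by brute force over its truth table; the representation and helper name are my own and are only meant to ground the definition, not to reproduce any construction from the paper.

```python
def sensitivity(f, n):
    """Max over inputs x of the number of bits whose flip changes f(x).

    f maps n-bit tuples to {0, 1}; the search is exhaustive over all 2^n
    inputs, so this is only sensible for tiny n (illustration only).
    """
    best = 0
    for x in range(2 ** n):
        bits = tuple((x >> i) & 1 for i in range(n))
        flips = sum(
            f(bits) != f(tuple(b ^ (1 if j == i else 0) for j, b in enumerate(bits)))
            for i in range(n)
        )
        best = max(best, flips)
    return best

# Example: OR on 3 bits has sensitivity 3 (flip any bit of the all-zeros input).
print(sensitivity(lambda bits: int(any(bits)), 3))  # 3
```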

    An Algorithmic Theory of Integer Programming

    We study the general integer programming problem where the number of variables n is a variable part of the input. We consider two natural parameters of the constraint matrix A: its numeric measure a and its sparsity measure d. We show that integer programming can be solved in time g(a,d)·poly(n,L), where g is some computable function of the parameters a and d, and L is the binary encoding length of the input. In particular, integer programming is fixed-parameter tractable parameterized by a and d, and is solvable in polynomial time for every fixed a and d. Our results also extend to nonlinear separable convex objective functions. Moreover, for linear objectives, we derive a strongly polynomial algorithm, that is, one with running time g(a,d)·poly(n), independent of the rest of the input data. We obtain these results by developing an algorithmic framework based on the idea of iterative augmentation: starting from an initial feasible solution, we show how to quickly find augmenting steps which rapidly converge to an optimum. A central notion in this framework is the Graver basis of the matrix A, which constitutes a set of fundamental augmenting steps. The iterative augmentation idea is then enhanced via the use of other techniques, such as new and improved bounds on the Graver basis, rapid solution of integer programs with bounded variables, proximity theorems and a new proximity-scaling algorithm, the notion of a reduced objective function, and others. As a consequence of our work, we advance the state of the art of solving block-structured integer programs. In particular, we develop near-linear time algorithms for n-fold, tree-fold, and 2-stage stochastic integer programs. We also discuss some of the many applications of these classes. Comment: Revision 2: strengthened dual treedepth lower bound; simplified proximity-scaling algorithm
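
    The iterative-augmentation framework can be caricatured as a simple loop: maintain a feasible solution and repeatedly apply the best improving step from a candidate set (the Graver basis of A in the paper). The toy sketch below takes an explicitly enumerated step set and a linear objective; the function, its signature, and the tiny example are assumptions for illustration and omit the step-length selection and implicit Graver-basis machinery that the actual algorithm relies on.

```python
import numpy as np

def augment(c, A, b, lower, upper, x0, steps, max_iters=1000):
    """Toy iterative augmentation for  min c.x  s.t.  A x = b,  lower <= x <= upper.

    `steps` is an explicit list of candidate augmenting directions g with A g = 0,
    standing in for the Graver basis of A.  Each iteration greedily applies the
    feasible step that most decreases the objective.
    """
    x = np.array(x0, dtype=int)
    for _ in range(max_iters):
        best, best_gain = None, 0
        for g in steps:
            y = x + np.array(g, dtype=int)
            if np.all(A @ y == b) and np.all(lower <= y) and np.all(y <= upper):
                gain = c @ x - c @ y
                if gain > best_gain:
                    best, best_gain = y, gain
        if best is None:
            return x  # no candidate step improves the objective
        x = best
    return x

# Tiny example:  min x1 + 2*x2  s.t.  x1 - x2 = 0,  0 <= x <= 3,  starting at (3, 3).
A = np.array([[1, -1]]); b = np.array([0])
print(augment(np.array([1, 2]), A, b, np.zeros(2), np.full(2, 3),
              [3, 3], steps=[(1, 1), (-1, -1)]))  # -> [0 0]
```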

    Delay Minimizing User Association in Cellular Networks via Hierarchically Well-Separated Trees

    We study downlink delay minimization within the context of cellular user association policies that map mobile users to base stations. We note that the delay-minimum user association problem fits within a broader class of network utility maximization and can be posed as a non-convex quadratic program. This non-convexity motivates a split quadratic objective function that captures the original problem's inherent tradeoff: association with the station that provides the highest signal-to-interference-plus-noise ratio (SINR) vs. the station that is least congested. We find the split-term formulation is amenable to linearization by embedding the base stations in a hierarchically well-separated tree (HST), which offers a linear approximation with constant distortion. We provide a numerical comparison of several problem formulations and find that, with appropriate optimization parameter selection, the quadratic reformulation produces association policies with sum delays that are close to those of the original network utility maximization. We also comment on the more difficult problem in which idle base stations (those without associated users) are deactivated. Comment: 6 pages, 5 figures. Submitted on 2013-10-03 to the 2015 IEEE International Conference on Communications (ICC). Accepted on 2015-01-09 to the 2015 IEEE International Conference on Communications (ICC)
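
    As a rough illustration of the tradeoff captured by the split objective (best SINR vs. least congestion), the toy greedy pass below scores each candidate station by a weighted difference of the two terms; the weights, data, and greedy rule are mine and stand in for the actual quadratic program and its HST-based linearization.

```python
def greedy_associate(sinr, alpha=1.0, beta=1.0):
    """Toy user association balancing SINR against congestion.

    sinr[u][s] is the SINR user u sees from station s.  Users are assigned
    in turn to the station maximizing  alpha * sinr - beta * current_load.
    """
    loads = [0] * len(sinr[0])
    assignment = []
    for row in sinr:
        s = max(range(len(row)), key=lambda j: alpha * row[j] - beta * loads[j])
        assignment.append(s)
        loads[s] += 1
    return assignment

# Three users, two stations: the third user avoids the already loaded station 0.
print(greedy_associate([[5.0, 1.0], [4.0, 1.5], [3.0, 2.9]]))  # [0, 0, 1]
```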