
    The lattice of arithmetic progressions

    In this paper we investigate properties of the lattice $L_n$ of subsets of $[n] = \{1,\ldots,n\}$ that are arithmetic progressions, under the inclusion order. For $n \geq 4$, this poset is not graded and thus not semimodular. We start by deriving properties of the number $p_{nk}$ of arithmetic progressions of length $k$ in $[n]$. Next, we look at the set of chains in $L_n' = L_n \setminus \{\emptyset, [n]\}$ and study the order complex $\Delta_n$ of $L_n'$. Third, we determine the set of coatoms in $L_n$ to give a general formula for the value of $\mu_n$ evaluated at an arbitrary interval of $L_n$. In each of these three sections, we give an independent proof of the fact that for $n \geq 2$, $\mu_n(L_n) = \mu(n-1)$, where $\mu_n$ is the Möbius function of $L_n$ and $\mu$ is the classical (number-theoretic) Möbius function. We conclude by computing the homology groups of $\Delta_n$, providing yet another explanation for the formula of the Möbius function of $L_n$. Comment: 8 pages, 1 figure, 2 tables.
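
    As a concrete illustration of the objects in this abstract (not code from the paper), the following brute-force sketch enumerates the arithmetic-progression subsets of $[n]$, counts them by length, and checks the identity $\mu_n(L_n) = \mu(n-1)$ for small $n$ directly from the defining recursion of the Möbius function; the convention that $\emptyset$, the singletons and $[n]$ all belong to $L_n$ is an assumption made for this sketch.

        from functools import lru_cache

        def arithmetic_progressions(n):
            # All subsets of [n] = {1, ..., n} that are arithmetic progressions,
            # here including the empty set, singletons and [n] itself.
            aps = {frozenset()}
            for a in range(1, n + 1):
                aps.add(frozenset([a]))
                for d in range(1, n):
                    prog = [a]
                    while prog[-1] + d <= n:
                        prog.append(prog[-1] + d)
                        aps.add(frozenset(prog))
            return aps

        def p(n, k):
            # Number of arithmetic progressions of length k contained in [n].
            return sum(1 for ap in arithmetic_progressions(n) if len(ap) == k)

        def mu_L(n):
            # Moebius value mu(emptyset, [n]) of L_n, computed from the recursion
            # mu(x, y) = -sum_{x <= z < y} mu(x, z) on the inclusion order.
            elems = arithmetic_progressions(n)
            top = frozenset(range(1, n + 1))

            @lru_cache(maxsize=None)
            def mu(y):
                if not y:
                    return 1
                return -sum(mu(z) for z in elems if z < y)

            return mu(top)

        print([p(4, k) for k in range(1, 5)])   # [4, 6, 2, 1]
        print([mu_L(n) for n in range(2, 8)])   # should match mu(n-1): [1, -1, -1, 0, -1, 1]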

    On point sets with many unit distances in few directions

    We study the problem of the maximum number of unit distances among n points in the plane, under the additional restriction that we count only those unit distances that occur in a fixed set of k directions, taking the maximum over all sets of n points and all sets of k directions. We prove that for fixed k and all sufficiently large n (depending on k), the extremal sets are essentially sections of lattices, bounded by edges parallel to the k directions and of equal length.
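
    The quantity being maximised can be made concrete with a small brute-force counter (an illustrative sketch, not from the paper): given n points and a fixed set of k unit direction vectors, it counts the point pairs whose difference equals one of those vectors up to sign.

        from itertools import combinations

        def unit_distances_in_directions(points, directions, tol=1e-9):
            # Count pairs of points whose difference is (up to sign) one of the
            # given unit direction vectors; brute-force O(n^2) over all pairs.
            count = 0
            for (px, py), (qx, qy) in combinations(points, 2):
                dx, dy = px - qx, py - qy
                for (ux, uy) in directions:
                    if (abs(dx - ux) < tol and abs(dy - uy) < tol) or \
                       (abs(dx + ux) < tol and abs(dy + uy) < tol):
                        count += 1
                        break
            return count

        # A 3x3 section of the integer lattice with the two axis-parallel unit
        # directions: 3 rows x 2 horizontal pairs + 3 columns x 2 vertical pairs = 12.
        points = [(x, y) for x in range(3) for y in range(3)]
        print(unit_distances_in_directions(points, [(1.0, 0.0), (0.0, 1.0)]))  # 12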

    On Near-Linear-Time Algorithms for Dense Subset Sum

    In the Subset Sum problem we are given a set of $n$ positive integers $X$ and a target $t$, and are asked whether some subset of $X$ sums to $t$. Natural parameters for this problem that have been studied in the literature are $n$ and $t$ as well as the maximum input number $\mathrm{mx}_X$ and the sum of all input numbers $\Sigma_X$. In this paper we study the dense case of Subset Sum, where all these parameters are polynomial in $n$. In this regime, standard pseudo-polynomial algorithms solve Subset Sum in polynomial time $n^{O(1)}$. Our main question is: When can dense Subset Sum be solved in near-linear time $\tilde{O}(n)$? We provide an essentially complete dichotomy by designing improved algorithms and proving conditional lower bounds, thereby determining essentially all settings of the parameters $n, t, \mathrm{mx}_X, \Sigma_X$ for which dense Subset Sum is in time $\tilde{O}(n)$. For notational convenience we assume without loss of generality that $t \ge \mathrm{mx}_X$ (as larger numbers can be ignored) and $t \le \Sigma_X/2$ (using symmetry). Then our dichotomy reads as follows:
    - By reviving and improving an additive-combinatorics-based approach by Galil and Margalit [SICOMP'91], we show that Subset Sum is in near-linear time $\tilde{O}(n)$ if $t \gg \mathrm{mx}_X \Sigma_X / n^2$.
    - We prove a matching conditional lower bound: If Subset Sum is in near-linear time for any setting with $t \ll \mathrm{mx}_X \Sigma_X / n^2$, then the Strong Exponential Time Hypothesis and the Strong k-Sum Hypothesis fail.
    We also generalize our algorithm from sets to multi-sets, albeit with non-matching upper and lower bounds.
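
    Written out as a single display (using the normalisation $\mathrm{mx}_X \le t \le \Sigma_X/2$ made in the abstract), the dichotomy is:

        \[
        t \;\gg\; \frac{\mathrm{mx}_X\,\Sigma_X}{n^{2}} \;\Longrightarrow\; \text{Subset Sum in time } \tilde{O}(n),
        \qquad
        t \;\ll\; \frac{\mathrm{mx}_X\,\Sigma_X}{n^{2}} \;\Longrightarrow\; \text{no } \tilde{O}(n) \text{ algorithm unless SETH or the Strong } k\text{-Sum Hypothesis fails}.
        \]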

    Sister Beiter and Kloosterman: a tale of cyclotomic coefficients and modular inverses

    For a fixed prime $p$, the maximum coefficient (in absolute value) $M(p)$ of the cyclotomic polynomial $\Phi_{pqr}(x)$, where $r$ and $q$ are free primes satisfying $r > q > p$, exists. Sister Beiter conjectured in 1968 that $M(p) \le (p+1)/2$. In 2009 Gallot and Moree showed that $M(p) \ge 2p(1-\epsilon)/3$ for every $p$ sufficiently large. In this article Kloosterman sums (`cloister man sums') and other tools from the distribution of modular inverses are applied to quantify the abundance of counter-examples to Sister Beiter's conjecture and to sharpen the above lower bound for $M(p)$. Comment: 2 figures; 15 pages.
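
    For readers who want to experiment, here is a small sketch (not part of the paper, and with far too small a search range to reproduce its results) that uses SymPy's cyclotomic_poly to compute the largest coefficient of $\Phi_{pqr}$ for a fixed $p$ over a few primes $r > q > p$ and compares it with Sister Beiter's conjectured bound $(p+1)/2$.

        from sympy import Poly, Symbol, cyclotomic_poly, primerange

        def height(m):
            # Largest coefficient, in absolute value, of the cyclotomic polynomial Phi_m.
            x = Symbol("x")
            return max(abs(c) for c in Poly(cyclotomic_poly(m, x), x).all_coeffs())

        p = 7
        best = 0
        for q in primerange(p + 1, 20):        # deliberately tiny ranges for q and r
            for r in primerange(q + 1, 32):
                best = max(best, height(p * q * r))
        print("max |coefficient| found:", best, " vs. Beiter bound (p+1)/2 =", (p + 1) // 2)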

    Top-k-Convolution and the Quest for Near-Linear Output-Sensitive Subset Sum

    In the classical Subset Sum problem we are given a set $X$ and a target $t$, and the task is to decide whether there exists a subset of $X$ which sums to $t$. A recent line of research has resulted in $\tilde{O}(t)$-time algorithms, which are (near-)optimal under popular complexity-theoretic assumptions. On the other hand, the standard dynamic programming algorithm runs in time $O(n \cdot |\mathcal{S}(X,t)|)$, where $\mathcal{S}(X,t)$ is the set of all subset sums of $X$ that are smaller than $t$. Furthermore, all known pseudopolynomial algorithms actually solve a stronger task, since they compute the whole set $\mathcal{S}(X,t)$. As the aforementioned two running times are incomparable, in this paper we ask whether one can achieve the best of both worlds: running time $\tilde{O}(|\mathcal{S}(X,t)|)$. In particular, we ask whether $\mathcal{S}(X,t)$ can be computed in near-linear time in the output size. Using a diverse toolkit containing techniques such as color coding, sparse recovery, and sumset estimates, we make considerable progress towards this question and design an algorithm running in time $\tilde{O}(|\mathcal{S}(X,t)|^{4/3})$. Central to our approach is the study of top-$k$-convolution, a natural problem of independent interest: given sparse polynomials with non-negative coefficients, compute the lowest $k$ non-zero monomials of their product. We design an algorithm running in time $\tilde{O}(k^{4/3})$, by a combination of sparse convolution and sumset estimates considered in Additive Combinatorics. Moreover, we provide evidence that going beyond some of the barriers we have faced requires either an algorithmic breakthrough or possibly new techniques from Additive Combinatorics on how to pass from information on restricted sumsets to information on unrestricted sumsets.
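
    The $O(n \cdot |\mathcal{S}(X,t)|)$ baseline mentioned above is the textbook dynamic program that grows the set of attainable subset sums one input number at a time; a minimal sketch of that baseline (illustrative only, not the paper's $\tilde{O}(|\mathcal{S}(X,t)|^{4/3})$ algorithm) follows.

        def subset_sums_below(X, t):
            # Compute S(X, t): all subset sums of X smaller than t (this sketch
            # includes the empty sum 0).  Each of the n numbers is merged into the
            # current set once, giving O(n * |S(X, t)|) set insertions -- the
            # baseline running time the paper wants to match output-sensitively.
            sums = {0}
            for x in X:
                sums |= {s + x for s in sums if s + x < t}
            return sums

        print(sorted(subset_sums_below([3, 5, 11], t=15)))  # [0, 3, 5, 8, 11, 14]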

    A characterization of class groups via sets of lengths

    Let $H$ be a Krull monoid with class group $G$ such that every class contains a prime divisor. Then every nonunit $a \in H$ can be written as a finite product of irreducible elements. If $a = u_1 \cdot \ldots \cdot u_k$, with irreducibles $u_1, \ldots, u_k \in H$, then $k$ is called the length of the factorization and the set $\mathsf{L}(a)$ of all possible $k$ is called the set of lengths of $a$. It is well-known that the system $\mathcal{L}(H) = \{\mathsf{L}(a) \mid a \in H\}$ depends only on the class group $G$. In the present paper we study the inverse question, asking whether or not the system $\mathcal{L}(H)$ is characteristic for the class group. Consider a further Krull monoid $H'$ with class group $G'$ such that every class contains a prime divisor, and suppose that $\mathcal{L}(H) = \mathcal{L}(H')$. We show that, if one of the groups $G$ and $G'$ is finite and has rank at most two, then $G$ and $G'$ are isomorphic (apart from two well-known pairings). Comment: The current version is close to the one to appear in J. Korean Math. Soc., yet it contains a detailed proof of Proposition 2.4. The content of Chapter 4 of the first version has been split off and is presented in "A characterization of Krull monoids for which sets of lengths are (almost) arithmetical progressions" by the same authors (see hal-01976941 and arXiv:1901.03506).
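
    As the abstract notes, $\mathcal{L}(H)$ depends only on the class group $G$; concretely it coincides with the system of sets of lengths of zero-sum sequences over $G$ (the block monoid). The following brute-force sketch (an illustration under that identification, not code from the paper) computes the set of lengths of one zero-sum sequence over $\mathbb{Z}/3\mathbb{Z}$.

        from itertools import combinations

        def is_atom(block, n):
            # A minimal zero-sum sequence over Z/nZ: it sums to 0 mod n and no
            # proper nonempty subsequence does.
            if sum(block) % n != 0:
                return False
            return all(sum(sub) % n != 0
                       for r in range(1, len(block))
                       for sub in combinations(block, r))

        def set_of_lengths(seq, n):
            # All k such that the zero-sum sequence seq over Z/nZ factors into k
            # minimal zero-sum subsequences (exponential brute force, tiny inputs only).
            seq = tuple(seq)
            if not seq:
                return {0}
            first, rest = seq[0], seq[1:]
            result = set()
            for r in range(len(rest) + 1):
                for idx in combinations(range(len(rest)), r):
                    block = (first,) + tuple(rest[i] for i in idx)
                    if is_atom(block, n):
                        remaining = tuple(rest[i] for i in range(len(rest)) if i not in idx)
                        result |= {k + 1 for k in set_of_lengths(remaining, n)}
            return result

        # Over G = Z/3Z the sequence 1,1,1,2,2,2 factors as (1,1,1)(2,2,2) or as
        # (1,2)(1,2)(1,2), so its set of lengths is {2, 3}.
        print(set_of_lengths((1, 1, 1, 2, 2, 2), 3))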

    SETH-Based Lower Bounds for Subset Sum and Bicriteria Path

    Subset-Sum and k-SAT are two of the most extensively studied problems in computer science, and conjectures about their hardness are among the cornerstones of fine-grained complexity. One of the most intriguing open problems in this area is to base the hardness of one of these problems on the other. Our main result is a tight reduction from k-SAT to Subset-Sum on dense instances, proving that Bellman's 1962 pseudo-polynomial $O^{*}(T)$-time algorithm for Subset-Sum on $n$ numbers and target $T$ cannot be improved to time $T^{1-\varepsilon} \cdot 2^{o(n)}$ for any $\varepsilon > 0$, unless the Strong Exponential Time Hypothesis (SETH) fails. This is one of the strongest known connections between any two of the core problems of fine-grained complexity. As a corollary, we prove a "Direct-OR" theorem for Subset-Sum under SETH, offering a new tool for proving conditional lower bounds: It is now possible to assume that deciding whether one out of $N$ given instances of Subset-Sum is a YES instance requires time $(NT)^{1-o(1)}$. As an application of this corollary, we prove a tight SETH-based lower bound for the classical Bicriteria s,t-Path problem, which is extensively studied in Operations Research. We separate its complexity from that of Subset-Sum: On graphs with $m$ edges and edge lengths bounded by $L$, we show that the $O(Lm)$ pseudo-polynomial time algorithm by Joksch from 1966 cannot be improved to $\tilde{O}(L+m)$, in contrast to a recent improvement for Subset Sum (Bringmann, SODA 2017).
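
    For context, the length-indexed dynamic program behind Joksch's $O(Lm)$ bound can be sketched as follows (a toy implementation under assumed conventions -- directed edges, integer lengths at least 1, nonnegative costs -- not code from the paper): a table indexed by length budget and vertex is filled in $O(Lm)$ time.

        def bicriteria_path(n, edges, s, t, L):
            # Minimum total cost of an s-t walk of total length at most L, where
            # edges are directed tuples (u, v, length, cost); returns None if no
            # such walk exists.  Runs in O(L * m) time.
            INF = float("inf")
            # best[l][v] = minimum cost of an s-v walk of total length exactly l
            best = [[INF] * n for _ in range(L + 1)]
            best[0][s] = 0
            for l in range(1, L + 1):
                for (u, v, length, cost) in edges:
                    if length <= l and best[l - length][u] + cost < best[l][v]:
                        best[l][v] = best[l - length][u] + cost
            answer = min(best[l][t] for l in range(L + 1))
            return None if answer == INF else answer

        # Two s-t routes: the longer one (length 4) is cheaper than the direct edge.
        edges = [(0, 1, 2, 1), (1, 2, 2, 1), (0, 2, 1, 5)]
        print(bicriteria_path(3, edges, s=0, t=2, L=4))  # 2 (via the middle vertex)
        print(bicriteria_path(3, edges, s=0, t=2, L=1))  # 5 (only the direct edge fits)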