
    A constant-time algorithm for middle levels Gray codes

    For any integer n ≥ 1, a middle levels Gray code is a cyclic listing of all n-element and (n+1)-element subsets of {1, 2, ..., 2n+1} such that any two consecutive subsets differ in adding or removing a single element. The question whether such a Gray code exists for any n ≥ 1 has been the subject of intensive research during the last 30 years, and has been answered affirmatively only recently [T. Mütze. Proof of the middle levels conjecture. Proc. London Math. Soc., 112(4):677–713, 2016]. In a follow-up paper [T. Mütze and J. Nummenpalo. An efficient algorithm for computing a middle levels Gray code. To appear in ACM Transactions on Algorithms, 2018] this existence proof was turned into an algorithm that computes each new set in the Gray code in time O(n) on average. In this work we present an algorithm for computing a middle levels Gray code in optimal time and space: each new set is generated in time O(1) on average, and the required space is O(n).
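    As a minimal sketch of the object being constructed (not the paper's constant-time algorithm), the code below brute-forces a middle levels Gray code for tiny n by searching for a Hamiltonian cycle through the two middle levels of the Boolean lattice; the function name middle_levels_cycle and the depth-first search are illustrative assumptions only.

```python
from itertools import combinations

def middle_levels_cycle(n):
    """Brute-force search for a middle levels Gray code: a cyclic listing of
    the n-element and (n+1)-element subsets of {1, ..., 2n+1} in which
    consecutive sets differ by adding or removing one element.
    Exponential time -- only illustrates the definition for tiny n; the
    paper's algorithm generates each set in O(1) amortized time instead."""
    ground = range(1, 2 * n + 2)
    verts = [frozenset(c) for k in (n, n + 1) for c in combinations(ground, k)]
    start = verts[0]

    def extend(path, seen):
        if len(path) == len(verts):
            # Cyclic: the last set must also differ from the first by one element.
            return path if len(path[-1] ^ start) == 1 else None
        for v in verts:
            if v not in seen and len(path[-1] ^ v) == 1:
                found = extend(path + [v], seen | {v})
                if found:
                    return found
        return None

    return extend([start], {start})

if __name__ == "__main__":
    for s in middle_levels_cycle(1):
        print(sorted(s))   # [1], [1, 2], [2], [2, 3], [3], [1, 3]
```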

    An Efficient generic algorithm for the generation of unlabelled cycles

    In this report we combine two recent generation algorithms to obtain a new algorithm for the generation of unlabelled cycles. Sawada's algorithm lists all k-ary unlabelled cycles with fixed content, that is, the number of occurrences of each symbol is fixed and given a priori. The other algorithm, by the authors, generates all multisets of objects with given total size n from any admissible unlabelled class A. By admissible we mean that the class can be specified using atomic classes, disjoint unions, products, sequences, (multi)sets, etc. The resulting algorithm, which is the main contribution of this paper, generates all cycles of objects with given total size n from any admissible class A. Given the generic nature of the algorithm, it is suitable for inclusion in combinatorial libraries and for rapid prototyping. The new algorithm incurs constant amortized time per generated cycle, the constant depending only on the class A to which the objects in the cycle belong.
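    For a concrete picture of the objects being generated, the sketch below lists binary unlabelled cycles (necklaces) of a given length by brute force, using the lexicographically smallest rotation as the canonical representative; this exponential enumeration is not the paper's constant-amortized-time algorithm, and the function name binary_necklaces is an illustrative assumption.

```python
from itertools import product

def binary_necklaces(n):
    """Brute-force listing of binary unlabelled cycles (necklaces) of length n,
    i.e. equivalence classes of strings under rotation, each represented by
    its lexicographically smallest rotation.  Exponential time; the paper's
    generic algorithm achieves constant amortized time per cycle for any
    admissible class A."""
    seen = set()
    for word in product("01", repeat=n):
        # Canonical representative: smallest of all n rotations.
        canon = min("".join(word[i:] + word[:i]) for i in range(n))
        seen.add(canon)
    return sorted(seen)

if __name__ == "__main__":
    print(binary_necklaces(4))  # ['0000', '0001', '0011', '0101', '0111', '1111']
```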

    Efficient indexing of necklaces and irreducible polynomials over finite fields

    We study the problem of indexing irreducible polynomials over finite fields, and give the first efficient algorithm for this problem. Specifically, we show the existence of poly(n, log q)-size circuits that compute a bijection between {1, ..., |S|} and the set S of all irreducible, monic, univariate polynomials of degree n over a finite field F_q. This has applications in pseudorandomness, and answers an open question of Alon, Goldreich, Håstad and Peralta [AGHP]. Our approach uses a connection between irreducible polynomials and necklaces (equivalence classes of strings under cyclic rotation). Along the way, we give the first efficient algorithm for indexing necklaces of a given length over a given alphabet, which may be of independent interest.
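    For intuition on the polynomial–necklace connection, the sketch below evaluates the classical count (1/n) · Σ_{d|n} μ(d) q^{n/d}, which gives both the number of monic irreducible degree-n polynomials over F_q and the number of aperiodic necklaces (Lyndon words) of length n over q symbols. It is not the indexing circuit from the paper; the helper names are illustrative assumptions.

```python
def mobius(n):
    """Moebius function mu(n) by trial division (fine for small n)."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:   # squared prime factor => mu(n) = 0
                return 0
            result = -result
        d += 1
    return -result if n > 1 else result

def irreducible_count(n, q):
    """Number of monic irreducible degree-n polynomials over F_q:
    (1/n) * sum over d | n of mu(d) * q^(n/d).  The same formula counts
    aperiodic necklaces (Lyndon words) of length n over q symbols, which
    is the correspondence the indexing result builds on."""
    return sum(mobius(d) * q ** (n // d) for d in range(1, n + 1) if n % d == 0) // n

if __name__ == "__main__":
    # 3 irreducibles of degree 4 over F_2: x^4+x+1, x^4+x^3+1, x^4+x^3+x^2+x+1
    print(irreducible_count(4, 2))
```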

    All Maximal Independent Sets and Dynamic Dominance for Sparse Graphs

    We describe algorithms, based on Avis and Fukuda's reverse search paradigm, for listing all maximal independent sets in a sparse graph with polynomial time and delay per output. For bounded degree graphs, our algorithms take constant time per set generated; for minor-closed graph families, the time is O(n) per set, and for more general sparse graph families we achieve subquadratic time per set. We also describe new data structures for maintaining a dynamic vertex set S in a sparse or minor-closed graph family, and querying the number of vertices not dominated by S; for minor-closed graph families the time per update is constant, while it is sublinear for any sparse graph family. We can also maintain a dynamic vertex set in an arbitrary m-edge graph and test the independence of the maintained set in time O(sqrt m) per update. We use the domination data structures as part of our enumeration algorithms.
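    To make the enumeration target concrete, the sketch below lists all maximal independent sets of a small graph by exhaustive search; it is exponential and deliberately ignores the reverse-search machinery, serving only to show the output the paper's polynomial-delay algorithms produce. The function name and the example graph are illustrative assumptions.

```python
from itertools import combinations

def maximal_independent_sets(n, edges):
    """Brute-force enumeration of all maximal independent sets of a graph on
    vertices 0..n-1.  Exponential in n; the paper's reverse-search algorithms
    achieve polynomial delay (constant per set for bounded-degree graphs)."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def independent(s):
        return all(v not in adj[u] for u, v in combinations(s, 2))

    def maximal(s):
        # No vertex outside s can be added while keeping the set independent.
        return all(adj[w] & s for w in range(n) if w not in s)

    found = []
    for k in range(n + 1):
        for s in map(set, combinations(range(n), k)):
            if independent(s) and maximal(s):
                found.append(sorted(s))
    return found

if __name__ == "__main__":
    # 4-cycle 0-1-2-3-0: the maximal independent sets are {0, 2} and {1, 3}.
    print(maximal_independent_sets(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))
```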