
    A Quantitative Study of Pure Parallel Processes

    In this paper, we study the interleaving -- or pure merge -- operator that most often characterizes parallelism in concurrency theory. This operator is a principal cause of the so-called combinatorial explosion that makes the analysis of process behaviours, e.g. by model-checking, very hard, at least from the point of view of computational complexity. The originality of our approach is to study this combinatorial explosion phenomenon on average, relying on advanced analytic combinatorics techniques. We study various measures that contribute to a better understanding of process behaviours represented as plane rooted trees: the number of runs (corresponding to the width of the trees), the expected total size of the trees, as well as their overall shape. Two practical outcomes of our quantitative study are also presented: (1) a linear-time algorithm to compute the probability of a concurrent run prefix, and (2) an efficient algorithm for uniform random sampling of concurrent runs. These provide interesting responses to the combinatorial explosion problem.
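    To make the explosion concrete: for purely sequential components composed with the pure merge operator, the number of runs is a multinomial coefficient in the component lengths. The sketch below is illustrative only and ignores the tree-shaped processes the paper actually analyzes.

```python
from functools import reduce
from math import factorial

def interleavings(lengths):
    """Number of distinct interleavings (runs) of independent
    sequential processes with the given numbers of atomic actions:
    the multinomial coefficient (sum n_i)! / prod n_i!."""
    return factorial(sum(lengths)) // reduce(
        lambda acc, n: acc * factorial(n), lengths, 1)

# Three processes of 10 actions each already admit
# 5,550,996,791,340 runs -- the combinatorial explosion.
print(interleavings([10, 10, 10]))
```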

    Efficient intra- and inter-night linking of asteroid detections using kd-trees

    The Panoramic Survey Telescope And Rapid Response System (Pan-STARRS) under development at the University of Hawaii's Institute for Astronomy is creating the first fully automated end-to-end Moving Object Processing System (MOPS) in the world. It will be capable of identifying detections of moving objects in our solar system, linking those detections within and between nights, attributing those detections to known objects, calculating initial and differentially-corrected orbits for linked detections, precovering detections when they exist, and performing orbit identification. Here we describe new kd-tree and variable-tree algorithms that allow fast, efficient, scalable linking of intra- and inter-night detections. Using a pseudo-realistic simulation of the Pan-STARRS survey strategy incorporating weather, astrometric accuracy and false detections, we have achieved nearly 100% efficiency and accuracy for intra-night linking and nearly 100% efficiency for inter-night linking within a lunation. At realistic sky-plane densities for both real and false detections, the intra-night linking of detections into 'tracks' currently has an accuracy of 0.3%. Successful tests of the MOPS on real source detections from the Spacewatch asteroid survey indicate that the MOPS is capable of identifying asteroids in real data. Comment: Accepted to Icarus.
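    As a rough illustration of the kd-tree idea (this is not the MOPS code; the coordinates, motion cut, and flat-sky Euclidean distance are simplifying assumptions), intra-night linking candidates can be found with range queries against a tree built over one exposure's detections:

```python
import numpy as np
from scipy.spatial import cKDTree

def link_candidates(det_a, det_b, max_motion):
    """Pair detections from two intra-night exposures whose sky-plane
    separation is below a motion cut. det_a, det_b are (N, 2) arrays
    of flat-sky coordinates in degrees (Euclidean distance is used
    here, a simplification); returns (i, j) index pairs."""
    tree = cKDTree(det_b)
    return [(i, j)
            for i, p in enumerate(det_a)
            for j in tree.query_ball_point(p, r=max_motion)]

# Toy example: one slow mover plus an unrelated detection.
a = np.array([[10.000, -5.000], [40.0, 0.0]])
b = np.array([[10.004, -5.001], [80.0, 9.0]])
print(link_candidates(a, b, max_motion=0.01))  # -> [(0, 0)]
```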

    A constant-time algorithm for middle levels Gray codes

    For any integer $n\geq 1$ a middle levels Gray code is a cyclic listing of all $n$-element and $(n+1)$-element subsets of $\{1,2,\ldots,2n+1\}$ such that any two consecutive subsets differ in adding or removing a single element. The question whether such a Gray code exists for any $n\geq 1$ has been the subject of intensive research during the last 30 years, and has been answered affirmatively only recently [T. M\"utze. Proof of the middle levels conjecture. Proc. London Math. Soc., 112(4):677--713, 2016]. In a follow-up paper [T. M\"utze and J. Nummenpalo. An efficient algorithm for computing a middle levels Gray code. To appear in ACM Transactions on Algorithms, 2018] this existence proof was turned into an algorithm that computes each new set in the Gray code in time $\mathcal{O}(n)$ on average. In this work we present an algorithm for computing a middle levels Gray code in optimal time and space: each new set is generated in time $\mathcal{O}(1)$ on average, and the required space is $\mathcal{O}(n)$.
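    For intuition (not the paper's constant-time algorithm), a middle levels Gray code can be found for tiny $n$ by exhaustively searching for a Hamilton cycle in the middle levels graph; a minimal sketch:

```python
from itertools import combinations

def middle_levels_cycle(n):
    """Brute-force a middle levels Gray code for tiny n: search for a
    Hamilton cycle on the n- and (n+1)-element subsets of
    {1,...,2n+1}, where consecutive sets must differ in one element.
    Exponential time -- illustration only."""
    verts = [frozenset(c)
             for k in (n, n + 1)
             for c in combinations(range(1, 2 * n + 2), k)]
    start = verts[0]

    def extend(path, seen):
        if len(path) == len(verts):
            # a cycle: the last set must also differ from the start by one
            return path if len(path[-1] ^ start) == 1 else None
        for v in verts:
            if v not in seen and len(path[-1] ^ v) == 1:
                found = extend(path + [v], seen | {v})
                if found:
                    return found
        return None

    return extend([start], {start})

for s in middle_levels_cycle(2):  # the 20 subsets for n = 2
    print(sorted(s))
```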

    Structured Learning of Tree Potentials in CRF for Image Segmentation

    We propose a new approach to image segmentation, which exploits the advantages of both conditional random fields (CRFs) and decision trees. In the literature, the potential functions of CRFs are mostly defined as a linear combination of some pre-defined parametric models, and methods like structured support vector machines (SSVMs) are then applied to learn the linear coefficients. We instead formulate the unary and pairwise potentials as nonparametric forests (ensembles of decision trees), and learn the ensemble parameters and the trees in a unified optimization problem within the large-margin framework. In this fashion, we easily achieve nonlinear learning of potential functions on both unary and pairwise terms in CRFs. Moreover, we learn class-wise decision trees for each object that appears in the image. Due to the rich structure and flexibility of decision trees, our approach is powerful in modelling complex data likelihoods and label relationships. The resulting optimization problem is very challenging because it can have exponentially many variables and constraints. We show that this challenging optimization can be efficiently solved by combining modified column generation and cutting-plane techniques. Experimental results on both binary (Graz-02, Weizmann horse, Oxford flower) and multi-class (MSRC-21, PASCAL VOC 2012) segmentation datasets demonstrate the power of the learned nonlinear nonparametric potentials. Comment: 10 pages. Appearing in IEEE Transactions on Neural Networks and Learning Systems.
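    The core modelling idea can be sketched in a few lines. This is a hypothetical toy, not the paper's learning procedure: the random features, bootstrap ensemble, and fixed uniform weights below are placeholder assumptions, whereas the paper learns the weights and the trees jointly in a large-margin objective.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                # per-pixel feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # synthetic fg/bg labels

# A small forest of depth-limited trees, diversified by bootstrapping.
trees = []
for _ in range(5):
    idx = rng.integers(0, len(X), len(X))
    trees.append(DecisionTreeClassifier(max_depth=3).fit(X[idx], y[idx]))

w = np.full(len(trees), 1.0 / len(trees))    # fixed weights in this toy

def unary_potential(x):
    """CRF unary potential for labelling a pixel with features x as
    foreground: a weighted sum of decision-tree scores."""
    scores = np.array([t.predict_proba(x.reshape(1, -1))[0, 1]
                       for t in trees])
    return -float(w @ scores)

print(unary_potential(X[0]))
```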

    Efficient computation of middle levels Gray codes

    For any integer $n\geq 1$ a middle levels Gray code is a cyclic listing of all bitstrings of length $2n+1$ that have either $n$ or $n+1$ entries equal to 1, such that any two consecutive bitstrings in the list differ in exactly one bit. The question whether such a Gray code exists for every $n\geq 1$ has been the subject of intensive research during the last 30 years, and has been answered affirmatively only recently [T. M\"utze. Proof of the middle levels conjecture. Proc. London Math. Soc., 112(4):677--713, 2016]. In this work we provide the first efficient algorithm to compute a middle levels Gray code. For a given bitstring, our algorithm computes the next $\ell$ bitstrings in the Gray code in time $\mathcal{O}(n\ell(1+\frac{n}{\ell}))$, which is $\mathcal{O}(n)$ on average per bitstring provided that $\ell=\Omega(n)$.
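    For the record, the claimed amortised bound follows by dividing the stated cost by the $\ell$ bitstrings produced:

```latex
\frac{1}{\ell}\,\mathcal{O}\!\left(n\ell\Bigl(1+\frac{n}{\ell}\Bigr)\right)
  = \mathcal{O}\!\left(n+\frac{n^{2}}{\ell}\right)
  = \mathcal{O}(n) \quad \text{whenever } \ell=\Omega(n).
```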

    Polynomial tuning of multiparametric combinatorial samplers

    Boltzmann samplers and the recursive method are prominent algorithmic frameworks for the approximate-size and exact-size random generation of large combinatorial structures, such as maps, tilings, RNA sequences or various tree-like structures. In their multiparametric variants, these samplers allow one to control the profile of expected values corresponding to multiple combinatorial parameters. One can control, for instance, the number of leaves, the profile of node degrees in trees, or the number of certain subpatterns in strings. However, such flexible control requires an additional non-trivial tuning procedure. In this paper, we propose an efficient tuning algorithm, polynomial-time with respect to the number of tuned parameters, based on convex optimisation techniques. Finally, we illustrate the efficiency of our approach with several applications to rational, algebraic and P\'olya structures, including polyomino tilings with prescribed tile frequencies, planar trees with a given node degree distribution, and weighted partitions. Comment: Extended abstract, accepted to ANALCO2018. 20 pages, 6 figures, colours. Implementation and examples are available at [1] https://github.com/maciej-bendkowski/boltzmann-brain [2] https://github.com/maciej-bendkowski/multiparametric-combinatorial-sampler
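    In the single-parameter case the tuning problem reduces to one-dimensional root-finding (the paper's contribution is the convex-optimisation treatment of many parameters at once). A minimal sketch for binary trees, $B(z)=1+zB(z)^2$, assuming the closed-form expected Boltzmann size $\mathbb{E}_z[\mathrm{size}] = (1-2z-\sqrt{1-4z})\,/\,(\sqrt{1-4z}\,(1-\sqrt{1-4z}))$:

```python
from math import sqrt
from scipy.optimize import brentq

def expected_size(z):
    """Expected size of a binary tree (B(z) = 1 + z*B(z)^2) drawn
    from the Boltzmann distribution with parameter z in (0, 1/4)."""
    s = sqrt(1.0 - 4.0 * z)
    return (1.0 - 2.0 * z - s) / (s * (1.0 - s))

def tune(target_size):
    """Root-finding stand-in for the paper's convex-optimisation
    tuning: pick z so the expected size matches the target."""
    return brentq(lambda z: expected_size(z) - target_size,
                  1e-9, 0.25 - 1e-12)

print(tune(1000.0))  # a value of z very close to the singularity 1/4
```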

    Binary Decision Diagrams: from Tree Compaction to Sampling

    Any Boolean function corresponds to a complete full binary decision tree. This tree can in turn be represented in a maximally compact form as a directed acyclic graph in which common subtrees are factored and shared, keeping only one copy of each unique subtree. This yields the celebrated and widely used structure called a reduced ordered binary decision diagram (ROBDD). We propose to revisit the classical compaction process and give a new way of enumerating ROBDDs of a given size without considering fully expanded trees and the compaction step. Our method also provides an unranking procedure for the set of ROBDDs. As a by-product, we obtain a uniform and exhaustive random sampler for ROBDDs with a given number of variables and a given size.
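    The classical compaction step the paper bypasses is essentially hash-consing: build bottom-up with a unique table so that equal subgraphs are shared, and drop nodes whose two branches coincide. A minimal sketch (the node numbering and the MAJ3 example are illustrative choices):

```python
unique = {}   # (var, low, high) -> node id; ids 0 and 1 are the terminals

def mk(var, low, high):
    """Return the canonical node for (var, low, high)."""
    if low == high:                     # elimination rule: redundant test
        return low
    if (var, low, high) not in unique:  # sharing rule: one copy per subgraph
        unique[(var, low, high)] = len(unique) + 2
    return unique[(var, low, high)]

def build(f, nvars, var=0):
    """Shannon-expand f (a function from a bit tuple to 0/1) over
    variables var..nvars-1, compacting on the fly via mk."""
    if var == nvars:
        return f(())
    lo = build(lambda rest: f((0,) + rest), nvars, var + 1)
    hi = build(lambda rest: f((1,) + rest), nvars, var + 1)
    return mk(var, lo, hi)

# Majority of three bits: compacts to 4 shared internal nodes.
root = build(lambda bits: int(sum(bits) >= 2), nvars=3)
print(root, len(unique))
```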

    Uniform random sampling of planar graphs in linear time

    This article introduces new algorithms for the uniform random generation of labelled planar graphs. It relies on Boltzmann samplers, as recently developed by Duchon, Flajolet, Louchard, and Schaeffer. It combines the Boltzmann framework, a suitable use of rejection, a new combinatorial bijection found by Fusy, Poulalhon and Schaeffer, as well as a precise analytic description of the generating functions counting planar graphs, which was recently obtained by Gim\'enez and Noy. This gives rise to an extremely efficient algorithm for the random generation of planar graphs. There is a preprocessing step of some fixed small cost; then, the expected time complexity of generation is quadratic for exact-size uniform sampling and linear for approximate-size sampling. This greatly improves on the best previously known time complexity for exact-size uniform sampling of planar graphs with $n$ vertices, which was a little over $O(n^7)$. Comment: 55 pages.
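    The rejection idea is easy to see on a toy class (binary trees rather than the paper's planar graphs, whose generating functions are far heavier): draw from the free Boltzmann sampler with the parameter near its singularity, and keep only samples whose size lands in the target window.

```python
import random

Z = 0.25 * (1 - 1e-6)                      # just below the singularity 1/4
B = (1 - (1 - 4 * Z) ** 0.5) / (2 * Z)     # B(z) = 1 + z*B(z)^2
P_NODE = Z * B                             # P(internal node) = z*B(z)^2 / B(z)

def free_boltzmann_size():
    """Size of one tree drawn from the Boltzmann distribution,
    generated iteratively to avoid deep recursion."""
    size, pending = 0, 1
    while pending:
        pending -= 1
        if random.random() < P_NODE:       # internal node: two subtrees
            size += 1
            pending += 2
    return size

def approx_size_sample(n, eps=0.1):
    """Rejection loop: resample until the size is within (1 +/- eps)n."""
    while True:
        s = free_boltzmann_size()
        if (1 - eps) * n <= s <= (1 + eps) * n:
            return s

print(approx_size_sample(1000))
```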