
    Triangular dissections, aperiodic tilings and Jones algebras

    The Bratteli diagram associated with a given bicolored Dynkin-Coxeter graph of type $A_n$ determines planar fractal sets obtained by infinite dissections of a given triangle. All triangles appearing in the dissection process have angles that are multiples of $\pi/(n+1)$. There are usually several possible infinite dissections compatible with a given $n$, but a given one makes use of $n/2$ triangle types if $n$ is even. Jones algebras with index $[4\cos^2\frac{\pi}{n+1}]^{-1}$ (values of the discrete range) act naturally on vector spaces associated with those fractal sets. Triangles of a given type are always congruent at each step of the dissection process. In the particular case $n=4$, they are isometric and the whole structure leads, after proper inflation, to aperiodic Penrose tilings. The "tilings" associated with other values of the index are discussed and shown to be encoded by equivalence classes of infinite sequences (with appropriate constraints) using $n/2$ digits (if $n$ is even), generalizing the Fibonacci numbers.
    Comment: 14 pages. Revised version. 18 Postscript figures; a 500 kb uuencoded file called images.uu is available by mosaic or gopher from gopher://cpt.univ-mrs.fr/11/preprints/94/fundamental-interactions/94-P.302
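
    A minimal numerical sketch (not part of the paper) of the two quantities the abstract names: the allowed triangle angles, which are multiples of $\pi/(n+1)$, and the quantity $4\cos^2\frac{\pi}{n+1}$ entering the index of the associated Jones algebras. The function names and the sample values of $n$ are purely illustrative.

```python
import math

def jones_index_parameter(n: int) -> float:
    """The quantity 4*cos^2(pi/(n+1)) entering the index of the Jones
    algebras attached to the A_n diagram (see the abstract above)."""
    return 4 * math.cos(math.pi / (n + 1)) ** 2

def triangle_angles(n: int):
    """Positive multiples of pi/(n+1); per the abstract, every triangle in
    the dissection has its angles drawn from this set."""
    return [k * math.pi / (n + 1) for k in range(1, n + 1)]

if __name__ == "__main__":
    for n in (3, 4, 5, 6):
        print(f"n={n}: 4*cos^2(pi/{n + 1}) = {jones_index_parameter(n):.6f}")
    # n = 4 is the Penrose case: the angles are multiples of 36 degrees,
    # and 4*cos^2(pi/5) equals the square of the golden ratio (~2.618).
    print([round(math.degrees(a), 1) for a in triangle_angles(4)])
```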

    Stochastic Analysis of a Churn-Tolerant Structured Peer-to-Peer Scheme

    We present and analyze a simple and general scheme to build a churn (fault)-tolerant structured Peer-to-Peer (P2P) network. Our scheme shows how to "convert" a static network into a dynamic distributed hash table (DHT)-based P2P network such that all the good properties of the static network are guaranteed with high probability (w.h.p.). Applying our scheme to a cube-connected cycles network, for example, yields an $O(\log N)$-degree connected network in which every search succeeds in $O(\log N)$ hops w.h.p., using $O(\log N)$ messages, where $N$ is the expected stable network size. Our scheme has a constant storage overhead (the number of nodes responsible for servicing a data item) and an $O(\log N)$ overhead (messages and time) per insertion, and essentially no overhead for deletions. All these bounds are essentially optimal. While DHT schemes with similar guarantees are already known in the literature, this work is new in the following aspects: (1) it presents a rigorous mathematical analysis of the scheme under a general stochastic model of churn and shows the above guarantees; (2) the theoretical analysis is complemented by a simulation-based analysis that validates the asymptotic bounds even in moderately sized networks and also studies performance under changing stable network size; (3) the presented scheme seems especially suitable for efficiently maintaining dynamic structures under churn. In particular, we show that a spanning tree of low diameter can be efficiently maintained in constant time and a logarithmic number of messages per insertion or deletion w.h.p.
    Keywords: P2P Network, DHT Scheme, Churn, Dynamic Spanning Tree, Stochastic Analysis
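
    The paper's cube-connected-cycles construction and its churn analysis are not reproduced here. The following is only a generic sketch of the kind of $O(\log N)$-hop key lookup a DHT of this type relies on, using hypercube bit-fixing routing as a stand-in topology; the hash choice, the $M$-bit ID space, and all names are assumptions made for illustration.

```python
import hashlib

M = 8                      # ID length in bits; N = 2**M nodes (toy setting)

def key_to_id(key: str) -> int:
    """Map a data item to an M-bit identifier (generic DHT-style hashing)."""
    digest = hashlib.sha1(key.encode()).digest()
    return int.from_bytes(digest, "big") % (1 << M)

def greedy_bit_fix_route(src: int, dst: int):
    """Hypercube bit-fixing route: flip differing bits from the most
    significant down.  Path length = Hamming distance <= M = log2(N)."""
    path = [src]
    cur = src
    for bit in reversed(range(M)):
        if (cur ^ dst) & (1 << bit):
            cur ^= (1 << bit)
            path.append(cur)
    return path

if __name__ == "__main__":
    item = "some-data-item"
    responsible = key_to_id(item)
    route = greedy_bit_fix_route(src=0, dst=responsible)
    print(f"item '{item}' -> node {responsible:08b}, {len(route) - 1} hops")
```

    Since at most $M = \log_2 N$ bits can differ between two IDs, the route length is bounded by $\log_2 N$, mirroring the hop bound quoted in the abstract.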

    Ricci-flat cubic graphs with girth five

    We classify all connected, simple, 3-regular graphs with girth at least 5 that are Ricci-flat. We use the definition of Ricci curvature on graphs given in Lin-Lu-Yau, Tohoku Math., 2011, which is a variation of Ollivier, J. Funct. Anal., 2009. A graph is Ricci-flat if it has vanishing Ricci curvature on all edges. We show that the only Ricci-flat cubic graphs with girth at least 5 are the Petersen graph, the Triplex and the dodecahedral graph. This corrects the classification in Lin-Lu-Yau, Comm. Anal. Geom., 2014, which misses the Triplex.
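
    As a hedged illustration of the curvature notion being classified (not the paper's proof technique), the sketch below computes the $\alpha$-lazy Ollivier curvature $\kappa_\alpha(x,y) = 1 - W_1(\mu_x^\alpha, \mu_y^\alpha)$ on a few edges of the Petersen graph via a small transport linear program. The Lin-Lu-Yau curvature is the limit of $\kappa_\alpha/(1-\alpha)$ as $\alpha \to 1$, so the printed ratio at $\alpha$ close to 1 should come out approximately 0 if the classification is right. Assumes SciPy; the vertex labelling is one standard construction of the Petersen graph.

```python
from collections import deque
from itertools import product
from scipy.optimize import linprog

# Petersen graph: outer 5-cycle 0..4, inner pentagram 5..9, spokes i--i+5.
EDGES = ([(i, (i + 1) % 5) for i in range(5)]
         + [(i, i + 5) for i in range(5)]
         + [(5 + i, 5 + (i + 2) % 5) for i in range(5)])
ADJ = {v: set() for v in range(10)}
for a, b in EDGES:
    ADJ[a].add(b)
    ADJ[b].add(a)

def dist(u):
    """BFS distances from vertex u to every other vertex."""
    d, q = {u: 0}, deque([u])
    while q:
        x = q.popleft()
        for y in ADJ[x]:
            if y not in d:
                d[y] = d[x] + 1
                q.append(y)
    return d

def mu(x, alpha):
    """Lazy random-walk measure: mass alpha at x, (1-alpha)/deg on neighbours."""
    m = {x: alpha}
    for y in ADJ[x]:
        m[y] = (1 - alpha) / len(ADJ[x])
    return m

def w1(mu_x, mu_y):
    """Wasserstein-1 distance between two finitely supported measures (LP)."""
    src, dst = list(mu_x), list(mu_y)
    d_all = {u: dist(u) for u in src}
    cost = [d_all[u][v] for u, v in product(src, dst)]
    A_eq, b_eq = [], []
    for i, u in enumerate(src):      # row sums = mu_x
        A_eq.append([1.0 if k // len(dst) == i else 0.0 for k in range(len(cost))])
        b_eq.append(mu_x[u])
    for j, v in enumerate(dst):      # column sums = mu_y
        A_eq.append([1.0 if k % len(dst) == j else 0.0 for k in range(len(cost))])
        b_eq.append(mu_y[v])
    return linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, None)).fun

alpha = 0.99
for (x, y) in [(0, 1), (0, 5), (5, 7)]:
    kappa = 1 - w1(mu(x, alpha), mu(y, alpha))
    # Lin-Lu-Yau curvature = lim of kappa/(1-alpha) as alpha -> 1; per the
    # abstract it should be ~0 on every edge of the Petersen graph.
    print(f"edge ({x},{y}): kappa_alpha/(1-alpha) = {kappa / (1 - alpha):+.3f}")
```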

    Parallel Algorithms for Bayesian Networks Structure Learning with Applications in Systems Biology

    The expression levels of thousands to tens of thousands of genes in a living cell are controlled by internal and external cues which act in a combinatorial manner that can be modeled as a network. High-throughput technologies, such as DNA microarrays and next-generation sequencing, allow for the measurement of gene expression levels on a whole-genome scale. In recent years, a wealth of microarray data probing gene expression under various biological conditions has been accumulated in public repositories, which facilitates uncovering the underlying transcriptional networks (gene networks). Due to the high data dimensionality and inherent complexity of gene interactions, this task inevitably requires automated computational approaches. Various models have been proposed for learning gene networks, with Bayesian networks (BNs) showing promise for the task. However, BN structure learning is an NP-hard problem and both exact and heuristic methods are computationally intensive, with limited ability to produce large networks. To address these issues, we developed a set of parallel algorithms. First, we present a communication-efficient parallel algorithm for exact BN structure learning, which is work-optimal provided that 2^n > p log p, where n is the total number of variables and p is the number of processors. This algorithm has space complexity within a factor of 1.41 of the optimal. Our empirical results demonstrate near-perfect scaling on up to 2,048 processors. We further extend this work to the case of bounded node in-degree, where a limit d on the number of parents per variable is imposed. We characterize the algorithm's run-time behavior as a function of d, establishing the range [n/3 - log(mn), ceil(n/2)) of values for d where it affects performance. Consequently, two plateau regions are identified: for d < n/3 - log(mn), where the run-time complexity remains the same as for d=1, and for d >= ceil(n/2), where the run-time complexity remains the same as for d=n-1. Finally, we present a parallel heuristic approach for large-scale BN learning. This approach aims to combine the precision of exact learning with the scalability of heuristic methods. Our empirical results demonstrate good scaling on various high-performance platforms. The quality of the networks learned by both the exact and heuristic methods is evaluated using synthetically generated expression data. The biological relevance of the networks learned by the exact algorithm is assessed by applying it to the carotenoid biosynthesis pathway in Arabidopsis thaliana.
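
    The parallel, communication-efficient algorithm itself is not reproduced here. As a hedged point of reference, the sketch below shows the standard sequential dynamic program over variable subsets on which exact score-based BN structure learning is built, with a bounded in-degree d, synthetic binary data, and a simple BIC-style family score; the toy sizes, names, and scoring choice are all illustrative assumptions, not the thesis's implementation.

```python
from itertools import combinations
from math import log
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 200                            # variables, samples (toy sizes)
DATA = rng.integers(0, 2, size=(m, n))   # synthetic binary expression data

def family_score(v, parents):
    """BIC-style score of variable v given a parent set (binary data)."""
    conf = np.zeros(m, dtype=int)        # joint parent configuration per sample
    for p in parents:
        conf = conf * 2 + DATA[:, p]
    ll = 0.0
    for c in np.unique(conf):
        rows = DATA[conf == c, v]
        for val in (0, 1):
            k = int((rows == val).sum())
            if k:
                ll += k * log(k / len(rows))
    return ll - 0.5 * log(m) * (2 ** len(parents))

def best_parents(v, candidates, d):
    """Best parent set of v chosen inside `candidates`, with |parents| <= d."""
    best = (family_score(v, ()), ())
    for k in range(1, min(d, len(candidates)) + 1):
        for ps in combinations(candidates, k):
            s = family_score(v, ps)
            if s > best[0]:
                best = (s, ps)
    return best

def exact_bn(d=2):
    """DP over subsets: best network on `subset`, built by picking the last
    'sink' variable of an optimal ordering (classic exact-learning recursion)."""
    variables = tuple(range(n))
    best_net = {(): (0.0, {})}
    for size in range(1, n + 1):
        for subset in combinations(variables, size):
            best = None
            for sink in subset:
                rest = tuple(x for x in subset if x != sink)
                s_rest, net_rest = best_net[rest]
                s_fam, ps = best_parents(sink, rest, d)
                total = s_rest + s_fam
                if best is None or total > best[0]:
                    best = (total, {**net_rest, sink: ps})
            best_net[subset] = best
    return best_net[variables]

if __name__ == "__main__":
    score, parents = exact_bn(d=2)
    print(f"best score {score:.2f}")
    for v in range(n):
        print(f"  X{v} <- {parents[v]}")
```

    The subset recursion visits all 2^n variable subsets, which is the source of the 2^n term in the work-optimality condition quoted in the abstract.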

    Percolation on fitness landscapes: effects of correlation, phenotype, and incompatibilities

    We study how correlations in the random fitness assignment may affect the structure of fitness landscapes. We consider three classes of fitness models. The first is a continuous phenotype space in which individuals are characterized by a large number of continuously varying traits such as size, weight, color, or concentrations of gene products which directly affect fitness. The second is a simple model that explicitly describes genotype-to-phenotype and phenotype-to-fitness maps allowing for neutrality at both phenotype and fitness levels and resulting in a fitness landscape with tunable correlation length. The third is a class of models in which particular combinations of alleles or values of phenotypic characters are "incompatible" in the sense that the resulting genotypes or phenotypes have reduced (or zero) fitness. This class of models can be viewed as a generalization of the canonical Bateson-Dobzhansky-Muller model of speciation. We also demonstrate that the discrete NK model shares some signature properties of models with high correlations. Throughout the paper, our focus is on the percolation threshold, on the number, size and structure of connected clusters, and on the number of viable genotypes.
    Comment: 31 pages, 4 figures, 1 table
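
    None of the paper's three correlated model classes is reproduced here. As a hedged baseline only, the sketch below measures the quantities the abstract focuses on (number of viable genotypes and size of the largest connected cluster) on the simplest uncorrelated landscape: i.i.d. uniform fitness on the binary hypercube with a viability threshold, genotypes adjacent when they differ by a single mutation. The locus count and thresholds are illustrative.

```python
from collections import deque
import numpy as np

L = 14                      # number of binary loci; 2**L genotypes (toy size)
rng = np.random.default_rng(1)

def viable_cluster_stats(p_viable):
    """Assign i.i.d. uniform fitness to every genotype, call a genotype viable
    when its fitness exceeds 1 - p_viable, and return the number of viable
    genotypes plus the size of the largest connected cluster under
    single-point mutations (Hamming-distance-1 neighbours)."""
    fitness = rng.random(1 << L)
    viable = fitness > 1.0 - p_viable
    seen = np.zeros(1 << L, dtype=bool)
    largest = 0
    for start in np.flatnonzero(viable):
        if seen[start]:
            continue
        size, q = 0, deque([start])
        seen[start] = True
        while q:                         # BFS over one viable cluster
            g = q.popleft()
            size += 1
            for locus in range(L):
                nb = g ^ (1 << locus)    # flip one locus
                if viable[nb] and not seen[nb]:
                    seen[nb] = True
                    q.append(nb)
        largest = max(largest, size)
    return int(viable.sum()), largest

if __name__ == "__main__":
    for p in (0.02, 0.05, 0.1, 0.2, 0.4):
        nv, big = viable_cluster_stats(p)
        print(f"p={p:.2f}: {nv:6d} viable genotypes, largest cluster {big:6d}")
```

    Sweeping the viability probability p and watching the largest cluster jump from a vanishing to a dominant fraction of the viable genotypes is the elementary version of the percolation transition the abstract studies.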

    An Open Logic Approach to EPM

    EPM is a highly operative and didactically versatile tool, and new application areas are envisaged continuously. In turn, this new awareness has allowed us to enlarge our panorama for understanding neurocognitive system behavior, and to develop information conservation and regeneration systems in a numeric self-reflexive/reflective evolutive reference framework. Unfortunately, a logically closed model cannot cope with ontological uncertainty by itself; it needs a complementary logical aperture operational support extension. To achieve this goal, it is possible to use two coupled irreducible information management subsystems, based on the following ideal coupled irreducible asymptotic dichotomy: "Information Reliable Predictability" and "Information Reliable Unpredictability" subsystems. To behave realistically, the overall system must guarantee both Logical Closure and Logical Aperture, both fed by environmental "noise" (better: by what human beings call "noise"). So, a natural operating point can emerge as a new Trans-disciplinary Reality Level, out of the interaction of two complementary irreducible information management subsystems within their environment. In this way, it is possible to extend the traditional EPM approach in order to profit from both the classic EPM intrinsic Self-Reflexive Functional Logical Closure and the new numeric CICT Self-Reflective Functional Logical Aperture. EPM can be thought of as a reliable starting subsystem to initialize a process of continuous self-organizing and self-logic learning refinement.
    Fiorini, Rodolfo; Degiacomo, Piero