60 research outputs found

    Properties and algorithms of the hyper-star graph and its related graphs

    The hyper-star interconnection network was proposed in 2002 to overcome the drawbacks of the hypercube and its variations concerning the network cost, which is defined by the product of the degree and the diameter. Properties of the graph such as connectivity, symmetry, and embeddability have been studied by other researchers, and routing and broadcasting algorithms have also been designed. This thesis studies the hyper-star graph from both the topological and the algorithmic points of view. For the topological properties, we try to establish relationships between hyper-star graphs and other known graphs. We also give a formal equation for the surface area of the graph. Another topological property we are interested in is the Hamiltonicity of this graph. For the algorithms, we design an all-port broadcasting algorithm and a single-port neighbourhood broadcasting algorithm for the regular form of the hyper-star graphs; both algorithms are time-optimal. Furthermore, we prove that the folded hyper-star, a variation of the hyper-star, is maximally fault-tolerant.
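
    As an illustration of the degree-diameter cost mentioned above, the sketch below builds small instances of the regular hyper-star HS(2n, n) under the commonly cited definition (vertices are binary strings of length 2n with exactly n ones, two strings being adjacent when one is obtained from the other by exchanging the first symbol with a differing later symbol). This definition and the hypercube baseline are assumptions for illustration only, not taken from the thesis; the hypercube Q_m, with degree m and diameter m, has cost m^2.

        from itertools import combinations
        from collections import deque

        # Sketch: degree-diameter cost of small hyper-star graphs HS(2n, n),
        # assuming the adjacency rule described in the lead-in paragraph.

        def hyper_star_vertices(n):
            """All binary strings of length 2n with exactly n ones."""
            verts = []
            for ones in combinations(range(2 * n), n):
                s = ['0'] * (2 * n)
                for i in ones:
                    s[i] = '1'
                verts.append(''.join(s))
            return verts

        def neighbours(v):
            """Exchange the first symbol with each later, differing symbol."""
            out = []
            for i in range(1, len(v)):
                if v[i] != v[0]:
                    out.append(v[i] + v[1:i] + v[0] + v[i + 1:])
            return out

        def diameter(verts):
            """Largest eccentricity found by BFS from every vertex."""
            diam = 0
            for src in verts:
                dist = {src: 0}
                q = deque([src])
                while q:
                    u = q.popleft()
                    for w in neighbours(u):
                        if w not in dist:
                            dist[w] = dist[u] + 1
                            q.append(w)
                diam = max(diam, max(dist.values()))
            return diam

        if __name__ == "__main__":
            for n in range(2, 5):
                verts = hyper_star_vertices(n)
                deg = n  # HS(2n, n) is n-regular under this adjacency rule
                diam = diameter(verts)
                print(f"HS({2 * n},{n}): degree {deg}, diameter {diam}, "
                      f"cost {deg * diam}  (hypercube Q_{n} cost would be {n * n})")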

    Alternately-twisted cube as an interconnection network.

    by Wong Yiu Chung. Thesis (M.Phil.)--Chinese University of Hong Kong, 1991. Bibliography: leaves [100]-[101].
    Acknowledgement
    Abstract
    Chapter 1. Introduction --- p.1-1
    Chapter 2. Alternately-Twisted Cube: Definition & Graph-Theoretic Properties --- p.2-1
    Chapter 2.1. Construction --- p.2-1
    Chapter 2.2. Topological Properties --- p.2-12
    Chapter 2.2.1. Node Degree, Link Count & Diameter --- p.2-12
    Chapter 2.2.2. Node Symmetry --- p.2-13
    Chapter 2.2.3. Subcube Partitioning --- p.2-18
    Chapter 2.2.4. Distinct Paths --- p.2-23
    Chapter 2.2.5. Embedding other networks --- p.2-24
    Chapter 2.2.5.1. Rings --- p.2-25
    Chapter 2.2.5.2. Grids --- p.2-29
    Chapter 2.2.5.3. Binary Trees --- p.2-35
    Chapter 2.2.5.4. Hypercubes --- p.2-42
    Chapter 2.2.6. Summary of Comparison with the Hypercube --- p.2-44
    Chapter 3. Network Properties --- p.3-1
    Chapter 3.1. Routing Algorithms --- p.3-1
    Chapter 3.2. Message Transmission: Static Analysis --- p.3-5
    Chapter 3.3. Message Transmission: Dynamic Analysis --- p.3-13
    Chapter 3.4. Broadcasting --- p.3-17
    Chapter 4. Parallel Processing on the Alternately-Twisted Cube --- p.4-1
    Chapter 4.1. Ascend/Descend class algorithms --- p.4-1
    Chapter 4.2. Combining class algorithms --- p.4-7
    Chapter 4.3. Numerical algorithms --- p.4-8
    Chapter 5. Summary, Comparison & Conclusion --- p.5-1
    Chapter 5.1. Summary --- p.5-1
    Chapter 5.2. Comparison with other hypercube-like networks --- p.5-2
    Chapter 5.3. Conclusion --- p.5-7
    Chapter 5.4. Possible future research --- p.5-7
    Bibliography

    High Performance Software Reconfiguration in the Context of Distributed Systems and Interconnection Networks.

    We design algorithms that are useful for developing protocols and supporting tools for fault tolerance, dynamic load balancing, and distributed monitoring in loosely coupled multi-processor systems. Four efficient algorithms are developed to learn network topology and reconfigure distributed application programs in execution, using the available tools for replication and process migration. The first algorithm provides techniques for transparent software reconfiguration based on process migration in the context of quadtree embeddings in hypercubes. Our novel approach provides efficient reconfiguration for some classes of faults that may be identified easily. We provide a theoretical characterization using graph matching, quadratic assignment, and a variety of branch-and-bound techniques to recover from general faults at run-time and maintain load balance. The second algorithm provides distributed recognition of articulation points, biconnected components, and bridges. Since the removal of an articulation point disconnects the network, knowledge about it may be used for selective replication. We have obtained the most efficient distributed algorithms, with linear message complexity, for the recognition of these properties. The third algorithm is an optimal, linear-message-complexity distributed solution for recognizing graph planarity, one of the most celebrated problems in graph theory and algorithm design. Efficient shortest-path algorithms have recently been developed for planar graphs, whose efficient recognition itself was left open. Our algorithm also leads to efficient distributed algorithms for recognizing outer-planar graphs, with applications to Hamiltonian paths, shortest-path routing, and graph coloring. It is shown that efficient routing of information and distribution of the stack needed for planarity testing permit local computations, leading to an efficient distributed algorithm. The fourth algorithm provides software redundancy techniques to provide fault tolerance to program structures. We consider the problem of mapping replicated program structures to provide efficient communication between modules in multiple replicas. We have obtained an optimal mapping of 2-replicated binary trees into hypercubes. For replication numbers greater than two, we provide heuristic simulation results to support both 'N-version programming' and 'Recovery block' approaches for software replication.
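
    The articulation points mentioned above are the classical cut vertices of a graph. As a point of reference, here is a minimal sequential sketch of their recognition via depth-first search and low-link values; it only illustrates the property being recognised, not the distributed, linear-message-complexity algorithms described above, and the example graph is made up.

        import sys

        def articulation_points(adj):
            """adj: dict mapping each vertex to an iterable of neighbours."""
            sys.setrecursionlimit(10_000)
            disc, low, cut = {}, {}, set()
            timer = [0]

            def dfs(u, parent):
                disc[u] = low[u] = timer[0]
                timer[0] += 1
                children = 0
                for w in adj[u]:
                    if w not in disc:
                        children += 1
                        dfs(w, u)
                        low[u] = min(low[u], low[w])
                        # A non-root u is a cut vertex if some DFS child
                        # cannot reach any vertex discovered before u.
                        if parent is not None and low[w] >= disc[u]:
                            cut.add(u)
                    elif w != parent:
                        low[u] = min(low[u], disc[w])
                # The root is a cut vertex iff it has two or more DFS children.
                if parent is None and children >= 2:
                    cut.add(u)

            for v in adj:
                if v not in disc:
                    dfs(v, None)
            return cut

        # Example: vertex 2 separates the triangle {0,1,2} from the path 2-3-4.
        graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
        print(sorted(articulation_points(graph)))   # expected: [2, 3]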

    Small-world interconnection networks for large parallel computer systems

    The use of small-world graphs as interconnection networks of multicomputers is proposed and analysed in this work. Small-world interconnection networks are constructed by adding (or modifying) edges to an underlying local graph. Graphs with a rich local structure but a large diameter are shown to be the most suitable candidates for the underlying graph. Generation models based on random and deterministic wiring processes are proposed and analysed. For the random case, basic properties such as degree, diameter, average length and bisection width are analysed, and the results show that a fast transition from a large diameter to a small diameter occurs as the number of new edges is increased. Random traffic analysis on these networks is undertaken, and it is shown that although the average latency experiences a similar reduction, networks with a small number of shortcuts have a tendency to saturate as most of the traffic flows through a small number of links. An analysis of the congestion of the networks corroborates this result and provides a way of estimating the minimum number of shortcuts required to avoid saturation. To overcome these problems, deterministic wiring is proposed and analysed. A Linear Feedback Shift Register is used to introduce shortcuts in the LFSR graphs. A simple routing algorithm has been constructed for the LFSR graphs and extended with a greedy local optimisation technique. It has been shown that a small search depth gives good results and is less costly to implement than a full shortest-path algorithm. The Hilbert graph, on the other hand, provides some additional characteristics, such as support for incremental expansion, efficient layout in two-dimensional space (using two layers), and a small fixed degree of four. Small-world hypergraphs have also been studied. In particular, incomplete hypermeshes have been introduced and analysed, and it has been shown that they outperform the complete traditional implementations under a constant pinout argument. Since it has been shown that complete hypermeshes outperform the mesh, the torus, low-dimensional m-ary d-cubes (with and without bypass channels), and multi-stage interconnection networks (when realistic decision times are accounted for and with a constant pinout), it follows that incomplete hypermeshes outperform them as well.
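
    A minimal sketch of the random wiring idea described above, assuming a ring lattice as the underlying local graph: a handful of random shortcut edges is added and the diameter is recomputed, making the fast transition from a large to a small diameter visible. The graph sizes and parameters are illustrative only, not those analysed in the thesis.

        import random
        from collections import deque

        def ring_lattice(n, k):
            """Ring of n nodes, each joined to its k nearest neighbours on each side."""
            adj = {v: set() for v in range(n)}
            for v in range(n):
                for d in range(1, k + 1):
                    adj[v].add((v + d) % n)
                    adj[(v + d) % n].add(v)
            return adj

        def add_shortcuts(adj, count, rng):
            """Add `count` random long-range edges between distinct nodes."""
            nodes = list(adj)
            added = 0
            while added < count:
                u, w = rng.sample(nodes, 2)
                if w not in adj[u]:
                    adj[u].add(w)
                    adj[w].add(u)
                    added += 1

        def diameter(adj):
            """Largest BFS eccentricity (assumes the graph is connected)."""
            best = 0
            for src in adj:
                dist = {src: 0}
                q = deque([src])
                while q:
                    u = q.popleft()
                    for w in adj[u]:
                        if w not in dist:
                            dist[w] = dist[u] + 1
                            q.append(w)
                best = max(best, max(dist.values()))
            return best

        rng = random.Random(1)
        print("no shortcuts :", diameter(ring_lattice(200, 2)))
        for extra in (5, 20, 50):
            g = ring_lattice(200, 2)
            add_shortcuts(g, extra, rng)
            print(f"{extra:3d} shortcuts:", diameter(g))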

    Topological quantum error correcting codes beyond dimension 2

    Error correction is the set of techniques used in order to store, process and transmit information reliably in a noisy context. The classical theory of error correction is based on encoding classical information redundantly. A major endeavor of the theory is to find optimal trade-offs between redundancy, which we try to minimize, and noise tolerance, which we try to maximize. The quantum theory of error correction cannot directly imitate the redundant schemes of the classical theory because it has to cope with the no-cloning theorem: quantum information cannot be copied. Quantum error correction is nonetheless possible by spreading the information on more quantum memory elements than would be necessary. In quantum information theory, dilution of the information replaces redundancy since copying is forbidden by the laws of quantum mechanics. Besides this conceptual difference, quantum error correction inherits a lot from its classical counterpart. In this PhD thesis, we are concerned with a class of quantum error correcting codes whose classical counterpart was defined in 1961 by Gallager [Gal62]. At that time, quantum information was not even a research domain yet. This class is the family of low density parity check (LDPC) codes. Informally, a code is said to be LDPC if the constraints imposed to ensure redundancy in the classical setting or dilution in the quantum setting are local. More precisely, this PhD thesis focuses on a subset of the LDPC quantum error correcting codes: the homological quantum error correcting codes. These codes take their name from the mathematical field of homology, whose objects of study are sequences of linear maps such that the kernel of a map contains the image of its left neighbour. Originally introduced to study the topology of geometric shapes, homology theory now encompasses more algebraic branches as well, where the focus is more abstract and combinatorial. The same is true of homological codes: they were introduced in 1997 by Kitaev [Kit03] with a quantum code that has the shape of a torus. They now form a vast family of quantum LDPC codes, some more inspired by geometry than others. Homological quantum codes were designed from spherical, Euclidean and hyperbolic geometries, from 2-dimensional, 3-dimensional and 4-dimensional objects, from objects with increasing and unbounded dimension, and from hypergraph or homological products. After introducing some general quantum information concepts in the first chapter of this manuscript, we focus in the two following ones on families of quantum codes based on 4-dimensional hyperbolic objects. We highlight the interplay between their geometric side, given by manifolds, and their combinatorial side, given by abstract polytopes. We use both sides to analyze the corresponding quantum codes. In the fourth and last chapter we analyze a family of quantum codes based on spherical objects of arbitrary dimension. To have more flexibility in the design of quantum codes, we use combinatorial objects that realize this spherical geometry: hypercube complexes. This setting allows us to introduce a new link between classical and quantum error correction where classical codes are used to introduce homology in hypercube complexes.
    Quantum memory is made of materials exhibiting quantum effects such as superposition. It is this possibility of superposition that distinguishes the elementary unit of quantum memory, the qubit, from its classical analogue, the bit. Unlike a classical bit, a qubit can be in a state other than the state 0 or the state 1. A major difficulty in the physical realisation of quantum memory is the need to isolate the system from its environment. Indeed, the interaction of a quantum system with its environment leads to a phenomenon called decoherence, which manifests itself as errors on the state of the quantum system. In other words, because of decoherence, the qubits may not be in the state they are expected to be in. When these errors accumulate, the result of a quantum computation is very likely not to be the expected one. Quantum error correction is a set of techniques for protecting quantum information from these errors. It amounts to a trade-off between the number of qubits and their quality. More precisely, an error correcting code uses N noisy physical qubits to simulate a smaller number K of less noisy logical, i.e. virtual, qubits. The best-known family of codes is probably the one discovered by the physicist Alexei Kitaev: the toric code. This construction can be generalised to geometric shapes (manifolds) other than the torus. In 2014, Larry Guth and Alexander Lubotzky proposed a family of codes defined from 4-dimensional hyperbolic manifolds and showed that this family offers an interesting trade-off between the number K of logical qubits and the number of errors it can correct. In this thesis, we start from the Guth-Lubotzky construction and give a more explicit and more regular version of it. To define a regular tessellation of 4-dimensional hyperbolic space, we use the symmetry group with Schläfli symbol {4, 3, 3, 5}. We give its matrix representation corresponding to the hyperboloid model and to a hypercube centred at the origin whose faces are orthogonal to the four coordinate axes. This construction yields a family of quantum codes encoding a number of logical qubits proportional to the number of physical qubits and whose minimum distance grows at least like N^0.1. Although these parameters are also those of the Guth-Lubotzky construction, the regularity of this construction makes it possible to build explicit examples of reasonable size and to envisage decoding algorithms that exploit this regularity. In a second chapter we consider a family of 4D hyperbolic quantum codes with Schläfli symbol {5, 3, 3, 5}. After stating a way of taking quotients of the corresponding groups while preserving the local structure of the group, we construct the parity-check matrices corresponding to quantum codes with 144, 720, 9792, 18,000 and 90,000 physical qubits. We apply a belief propagation algorithm to the decoding of these codes and analyse the results numerically. In a third and last chapter, we define a new family of quantum codes from cubes of arbitrarily large dimension. By taking the quotient of an n-dimensional cube by a classical code with parameters [n, k, d] and identifying the physical qubits with the p-dimensional faces of the quotient polytope thus defined, we obtain a quantum code. The originality of this family of quantum codes lies in taking quotients by classical codes; in this respect it departs from topology and belongs rather to the family of homological codes.
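
    As a concrete point of reference for the homological codes discussed above, the sketch below builds the parity-check (stabiliser) matrices of Kitaev's toric code on an L x L torus, the 2-dimensional ancestor of the higher-dimensional constructions studied in the thesis. It is the standard textbook construction, included only for illustration; the edge-indexing conventions are our own.

        import numpy as np

        # Toric code on an L x L torus: qubits on edges, X checks on vertices,
        # Z checks on faces.  Every X check overlaps every Z check on an even
        # number of qubits (the CSS condition), and the code encodes 2 logical
        # qubits for any L.

        def toric_code_checks(L):
            n = 2 * L * L                 # L^2 horizontal + L^2 vertical edges

            def h(r, c):                  # horizontal edge leaving vertex (r, c)
                return (r % L) * L + (c % L)

            def v(r, c):                  # vertical edge leaving vertex (r, c)
                return L * L + (r % L) * L + (c % L)

            HX = np.zeros((L * L, n), dtype=np.uint8)   # vertex (star) operators
            HZ = np.zeros((L * L, n), dtype=np.uint8)   # plaquette operators
            for r in range(L):
                for c in range(L):
                    s = r * L + c
                    # Star at vertex (r, c): its four incident edges.
                    HX[s, [h(r, c), h(r, c - 1), v(r, c), v(r - 1, c)]] = 1
                    # Plaquette with top-left corner (r, c): its four boundary edges.
                    HZ[s, [h(r, c), h(r + 1, c), v(r, c), v(r, c + 1)]] = 1
            return HX, HZ

        HX, HZ = toric_code_checks(4)
        # CSS condition: every X check commutes with every Z check.
        assert not ((HX @ HZ.T) % 2).any()
        print("qubits:", HX.shape[1], "X checks:", HX.shape[0], "Z checks:", HZ.shape[0])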

    Designing peer-to-peer overlays: a small-world perspective

    The Small-World phenomenon, well known under the phrase "six degrees of separation", has long been under the spotlight of investigation. The fact that our social network is closely knit and that any two people are linked by a short chain of acquaintances was confirmed by the experimental psychologist Stanley Milgram in the sixties. However, it was only after the seminal work of Jon Kleinberg in 2000 that it was understood not only why such networks exist, but also why it is possible to navigate them efficiently. This proved to be a highly relevant discovery for peer-to-peer systems, since they share many fundamental similarities with social networks; in particular the fact that peer-to-peer routing relies solely on local decisions, without the possibility of invoking global knowledge. In this thesis we show how peer-to-peer system designs that are inspired by Small-World principles can address and solve many important problems, such as balancing the peer load, reducing high maintenance cost, or efficiently disseminating data in large-scale systems. We present three peer-to-peer approaches, namely Oscar, Gravity, and Fuzzynet, whose concepts stem from the design of navigable Small-World networks. Firstly, we introduce a novel theoretical model for building peer-to-peer systems which supports skewed node distributions and still preserves all desired properties of Kleinberg's Small-World networks. With such a model we set a reference base for the design of data-oriented peer-to-peer systems which are characterized by non-uniform distribution of keys as well as skewed query or access patterns. Based on this theoretical model we introduce Oscar, an overlay which uses a novel scalable network sampling technique for network construction, for which we provide a rigorous theoretical analysis. The simulations of our system validate the developed theory and evaluate Oscar's performance under typical conditions encountered in real-life large-scale networked systems, including participant heterogeneity, faults, as well as skewed and dynamic load-distributions. Furthermore, we show how, by utilizing Small-World properties, it is possible to reduce the maintenance cost of most structured overlays by discarding a core network connectivity element – the ring invariant. We argue that reliance on the ring structure is a serious impediment for real-life deployment and scalability of structured overlays. We propose an overlay called Fuzzynet, which does not rely on the ring invariant, yet has all the functionalities of structured overlays. Fuzzynet takes the idea of lazy overlay maintenance further by eliminating the need for any explicit connectivity and data maintenance operations, relying merely on the actions performed when new Fuzzynet peers join the network. We show that with a sufficient number of neighbors, even under high churn, data can be retrieved in Fuzzynet with high probability. Finally, we show how peer-to-peer systems based on the Small-World design and with the capability of supporting non-uniform key distributions can be successfully employed for large-scale data dissemination tasks. We introduce Gravity, a publish/subscribe system capable of building efficient dissemination structures, inducing only minimal dissemination relay overhead. This is achieved through Gravity's ability to permit non-uniform peer key distributions, which allows the subscribers to be clustered close to each other in the key space, where data dissemination is cheap.
An extensive experimental study confirms the effectiveness of our system under realistic subscription patterns and shows that Gravity surpasses existing approaches in efficiency by a large margin. With the peer-to-peer systems presented in this thesis we fill an important gap in the family of structured overlays, bringing to life practical systems which can play a crucial role in enabling data-oriented applications distributed over wide-area networks.
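
    The navigability that these designs rely on can be illustrated with a small sketch, assuming a simplified Kleinberg-style overlay: peers on a ring keep links to their immediate neighbours plus one long-range shortcut whose length is drawn from a roughly harmonic distribution, and routing greedily forwards a message to the neighbour closest to the target. Names and parameters are illustrative stand-ins, not the actual construction of Oscar, Gravity or Fuzzynet.

        import random

        def ring_distance(a, b, n):
            d = abs(a - b)
            return min(d, n - d)

        def build_links(n, rng):
            """Each node: two ring neighbours plus one harmonically distributed shortcut."""
            lengths = list(range(2, n // 2))
            weights = [1.0 / d for d in lengths]      # P(length) ~ 1/length
            links = {}
            for u in range(n):
                shortcut = (u + rng.choices(lengths, weights)[0]) % n
                links[u] = {(u - 1) % n, (u + 1) % n, shortcut}
            return links

        def greedy_route(links, src, dst, n):
            """Forward to the neighbour closest to dst; return the hop count."""
            hops, cur = 0, src
            while cur != dst:
                cur = min(links[cur], key=lambda v: ring_distance(v, dst, n))
                hops += 1
            return hops

        rng = random.Random(0)
        n = 1024
        links = build_links(n, rng)
        trials = [greedy_route(links, rng.randrange(n), rng.randrange(n), n)
                  for _ in range(200)]
        print("average hops:", sum(trials) / len(trials))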

    LIPIcs, Volume 258, SoCG 2023, Complete Volume

    LIPIcs, Volume 258, SoCG 2023, Complete Volume

    Physics of Complex Plasmas.

    Physics of complex plasmas is a wide and varied field. In the context of this PhD thesis I present the major results from my research on fundamental properties of the plasma sheath, the plasma-dust interaction, non-Hamiltonian dynamics, and non-equilibrium phase transitions, using complex plasmas as a model system. The first chapter provides a short overview of the development of the physics of complex plasmas, from fundamental plasma physics and the properties of dust in plasmas to the exceptional and unique features of complex plasmas. A summary of twenty years of research topics is also presented. This is followed by three chapters that present publications based on experiments I did during my PhD. These publications, in my opinion, reflect nicely the large diversity of complex plasma research.
    • The investigation of nonlinear vertical oscillations of a particle in a sheath of an rf discharge was a simultaneous test of (pre-)sheath models and parameters. The nonlinear oscillations were shown to derive from a (strong) nonlinearity of the local sheath potential. They could be described quantitatively by applying the theory of anharmonic oscillations, and the first two anharmonic terms in an expansion of the sheath potential were measured. On top of that, we provided a simple experimentally, theoretically and mathematically based method that allows for in situ measurement of these coefficients under other experimental conditions.
    • The vertical pairing of identical particles suspended in the plasma sheath demonstrated some of the unique features that complex plasmas have as an open (non-Hamiltonian) system. Particle interaction becomes non-reciprocal in the presence of streaming ions. The symmetry breaking allows for mode coupling of the in-plane and out-of-plane motion of particles.
    • Lane formation is a non-equilibrium phase transition. I summarize the main results of my papers on the dynamics of lane formation, i.e., the temporal evolution of lanes.
    This is followed by an outlook on my future research on non-equilibrium phase transitions, how they relate to our research on systems at the critical point, and how they allow us to test fundamental theories of the charging of particles and the shielding of the resulting surface potential. Finally, there is an appendix on the scaling index method, a versatile mathematical tool to quantify structural differences and peculiarities in data, which I used to define a suitable order parameter for lane formation.

    Understanding Optimisation Processes with Biologically-Inspired Visualisations

    Evolutionary algorithms (EAs) constitute a branch of artificial intelligence utilised to evolve solutions to optimisation problems that abound in industry and research. EAs often generate many solutions, and visualisation has been a primary strategy for displaying EA solutions, given that visualisation is a well-evaluated, multi-domain medium for comprehending extensive data. Visualising solutions is inherently challenging because of high-dimensional phenomena and the large number of solutions to display. Recently, scholars have produced methods to mitigate some of these known issues when illustrating solutions. However, one key consideration is that displaying only the final subset of solutions (rather than the whole population) discards most of the informativeness of the search, creating inadequate insight into the black-box EA. There is an unequivocal knowledge gap and a requirement for methods which can visualise the whole population of solutions from an optimiser and overcome the high-dimensionality and scaling issues, in order to make the EA search process interpretable. Furthermore, the evolutionary computing community has called for explainability in evolutionary computing, which could take the form of visualisations, to support EA comprehension much like the support explainable artificial intelligence has brought to artificial intelligence. In this thesis, we report novel visualisation methods that can be used to visualise large and high-dimensional optimiser populations with the aim of creating greater interpretability during a search. We consider the nascent intersection of visualisation and explainability in evolutionary computing. The potentially high informativeness of a visualisation method from an early chapter of this work forms an effective platform to develop an explainability visualisation method, namely the population dynamics plot, which attempts to inject explainability into the inner workings of the search process. We further support the visualisation of populations by using machine learning to construct models which can capture the characteristics of an EA search, and we develop intelligent visualisations which use artificial intelligence to potentially enhance and support visualisation for a more informative search process. The methods developed in this thesis are evaluated both quantitatively and qualitatively. We use multi-feature benchmark problems to show the methods' ability to reveal specific problem characteristics such as disconnected fronts, local optima and bias, as well as to create a better understanding of the problem landscape and the optimiser search when evaluating and comparing algorithm performance (we show the visualisation method to be more insightful than conventional metrics like hypervolume alone). One of the most insightful methods developed in this thesis can produce a visualisation requiring less than 1% of the time and memory necessary to produce a visualisation of the same objective-space solutions using existing methods. This allows for greater scalability and use in short-compile-time applications such as online visualisations.
Building on an existing visualisation method in this thesis, we then develop an explainability method, apply it to a real-world problem, and evaluate it, showing the method to be highly effective at explaining the search via solutions in the objective space, solution lineage and solution variation operators, so that the search of an optimiser can be compactly comprehended, evaluated and communicated; we note, however, that the explainability properties are evaluated only against the author's judgement and could be evaluated further in future work with a usability study. The work is then supported by the development of intelligent visualisation models that may allow one to predict solutions in optima (importantly, local optima) in unseen problems by using a machine learning model. The results are effective, with some models able to predict and visualise solution optima with a balanced F1 accuracy of 96%. The results of this thesis provide a suite of visualisations which aim to offer greater informativeness of the search, and greater scalability, than the existing literature. The work develops one of the first explainability methods aiming to create greater insight into the search space, solution lineage and reproductive operators. The work applies machine learning to potentially enhance EA understanding via visualisation; these models could also be used for a number of applications outside visualisation. Ultimately, the work provides novel methods for all EA stakeholders which aim to support the understanding, evaluation and communication of EA processes with visualisation.
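
    A minimal sketch of the idea that motivates the visualisations above: keep the whole population from every generation of an EA, not just the final front, so that the search trajectory itself can be plotted or analysed later. The objective, operators and parameters below are illustrative stand-ins, not the thesis's benchmark problems or its population dynamics plot.

        import random

        def sphere(x):
            """Toy objective: minimise the sum of squares."""
            return sum(v * v for v in x)

        def evolve(dim=5, pop_size=30, generations=40, seed=0):
            rng = random.Random(seed)
            pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
            history = []                      # one full-population snapshot per generation
            for _ in range(generations):
                history.append([(ind[:], sphere(ind)) for ind in pop])
                # Tournament selection + Gaussian mutation: a bare-bones generational EA.
                children = []
                for _ in range(pop_size):
                    a, b = rng.sample(pop, 2)
                    parent = a if sphere(a) < sphere(b) else b
                    children.append([v + rng.gauss(0, 0.3) for v in parent])
                pop = children
            return history

        history = evolve()
        for g in (0, 19, 39):
            best = min(f for _, f in history[g])
            print(f"generation {g:2d}: best fitness {best:.4f}, snapshot size {len(history[g])}")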