151 research outputs found

    Performance analysis of wormhole routing in multicomputer interconnection networks

    Perhaps the most critical component in determining the ultimate performance potential of a multicomputer is its interconnection network, the hardware fabric supporting communication among individual processors. The message latency and throughput of such a network are affected by many factors, of which topology, switching method, routing algorithm and traffic load are the most significant. In this context, the present study focuses on a performance analysis of k-ary n-cube networks employing wormhole switching, virtual channels and adaptive routing, a scenario of special interest to current research. This project aims to build upon earlier work in two main ways: constructing new analytical models for k-ary n-cubes, and comparing the performance merits of cubes of different dimensionality. To this end, some important topological properties of k-ary n-cubes are explored initially; in particular, expressions are derived to calculate the number of nodes at/within a given distance from a chosen centre. These results are important in their own right, but their primary significance here is to assist in the construction of new and more realistic analytical models of wormhole-routed k-ary n-cubes. An accurate analytical model for wormhole-routed k-ary n-cubes with adaptive routing and uniform traffic is then developed, incorporating the use of virtual channels and the effect of locality in the traffic pattern. New models are constructed for wormhole k-ary n-cubes, with the ability to simulate behaviour under adaptive routing and non-uniform communication workloads, such as hotspot traffic, matrix-transpose and digit-reversal permutation patterns. The models are equally applicable to unidirectional and bidirectional k-ary n-cubes and are significantly more realistic than any in use up to now. With this level of accuracy, the effect of each important network parameter on overall network performance can be investigated more comprehensively than before.
Finally, k-ary n-cubes of different dimensionality are compared using the new models. The comparison takes account of various traffic patterns and implementation costs, using both pin-out and bisection bandwidth as metrics. Networks with both normal and pipelined channels are considered. While previous similar studies have only taken account of network channel costs, our model incorporates router costs as well, thus generating more realistic results. In fact, the results of this work differ markedly from those yielded by earlier studies which assumed deterministic routing and uniform traffic, illustrating the importance of using accurate models to conduct such analyses.
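The at/within-distance counts mentioned above can be sanity-checked by brute force on small networks. A minimal sketch (an illustrative check by exhaustive enumeration, not the thesis's closed-form derivation), assuming bidirectional channels so the per-dimension torus distance is min(x, k-x):

```python
from itertools import product
from collections import Counter

def surface_counts(k, n):
    """Count nodes at each distance from a fixed centre in a bidirectional
    k-ary n-cube (exhaustive enumeration; fine for small k and n)."""
    counts = Counter()
    for node in product(range(k), repeat=n):
        # per-dimension wraparound (torus) distance from the origin
        d = sum(min(x, k - x) for x in node)
        counts[d] += 1
    return [counts[d] for d in range(max(counts) + 1)]

print(surface_counts(4, 2))  # → [1, 4, 6, 4, 1]; sums to the 16 nodes of a 4-ary 2-cube
```

Summing the histogram up to a given distance recovers the within-distance (volume) count.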

    Visual analytics for relationships in scientific data

    Domain scientists hope to address grand scientific challenges by exploring the abundance of data generated and made available through modern high-throughput techniques. Typical scientific investigations can make use of novel visualization tools that enable dynamic formulation and fine-tuning of hypotheses to aid the process of evaluating sensitivity of key parameters. These general tools should be applicable to many disciplines: allowing biologists to develop an intuitive understanding of the structure of coexpression networks and discover genes that reside in critical positions of biological pathways, intelligence analysts to decompose social networks, and climate scientists to model and extrapolate future climate conditions. By using a graph as a universal data representation of correlation, our novel visualization tool employs several techniques that, when used in an integrated manner, provide innovative analytical capabilities. Our tool integrates techniques such as graph layout, qualitative subgraph extraction through a novel 2D user interface, quantitative subgraph extraction using graph-theoretic algorithms or by querying an optimized B-tree, dynamic level-of-detail graph abstraction, and template-based fuzzy classification using neural networks. We demonstrate our system using real-world workflows from several large-scale studies. Parallel coordinates has proven to be a scalable visualization and navigation framework for multivariate data. However, when data with thousands of variables are at hand, we do not have a comprehensive solution to select the right set of variables and order them to uncover important or potentially insightful patterns. We present algorithms to rank axes based upon the importance of bivariate relationships among the variables and showcase the efficacy of the proposed system by demonstrating autonomous detection of patterns in a modern large-scale dataset of time-varying climate simulation.
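To illustrate the kind of bivariate axis ranking involved, one could score pairs by absolute Pearson correlation and order parallel-coordinate axes greedily. This is a hypothetical sketch, not the system's actual ranking algorithm, and it assumes at least two variables:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / sqrt(vx * vy) if vx and vy else 0.0

def order_axes(data):
    """Greedy ordering: start from the most strongly correlated pair,
    then repeatedly append the variable most correlated with the end axis."""
    names = list(data)
    best = max(((a, b) for a in names for b in names if a < b),
               key=lambda p: abs(pearson(data[p[0]], data[p[1]])))
    order = list(best)
    remaining = [v for v in names if v not in order]
    while remaining:
        nxt = max(remaining,
                  key=lambda v: abs(pearson(data[order[-1]], data[v])))
        order.append(nxt)
        remaining.remove(nxt)
    return order
```

With thousands of variables the pairwise scoring itself becomes the bottleneck, which is why ranking, rather than exhaustive inspection, matters.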

    Field D* pathfinding in weighted simplicial complexes

    The development of algorithms to efficiently determine an optimal path through a complex environment is a continuing area of research within Computer Science. When such environments can be represented as a graph, established graph search algorithms, such as Dijkstra’s shortest path and A*, can be used. However, many environments are constructed from a set of regions that do not conform to a discrete graph. The Weighted Region Problem was proposed to address the problem of finding the shortest path through a set of such regions, weighted with values representing the cost of traversing the region. Robust solutions to this problem are computationally expensive since finding shortest paths across a region requires expensive minimisation. Sampling approaches construct graphs by introducing extra points on region edges and connecting them with edges criss-crossing the region. Dijkstra or A* are then applied to compute shortest paths. The connectivity of these graphs is high and such techniques are thus not particularly well suited to environments where the weights and representation frequently change. The Field D* algorithm, by contrast, computes the shortest path across a grid of weighted square cells and has replanning capabilities that cater for environmental changes. However, representing an environment as a weighted grid (an image) is not space-efficient since high resolution is required to produce accurate paths through areas containing features sensitive to noise. In this work, we extend Field D* to weighted simplicial complexes, specifically triangulations in 2D and tetrahedral meshes in 3D.

    Adaptive Asynchronous Control and Consistency in Distributed Data Exploration Systems

    Advances in machine learning and streaming systems provide a backbone to transform vast arrays of raw data into valuable information. Leveraging distributed execution, analysis engines can process this information effectively within an iterative data exploration workflow to solve problems at unprecedented rates. However, with increased input dimensionality, a desire to simultaneously share and isolate information, as well as overlapping and dependent tasks, this process is becoming increasingly difficult to maintain. User interaction derails exploratory progress due to manual oversight of lower-level tasks such as tuning parameters, adjusting filters, and monitoring queries. We identify human-in-the-loop management of data generation and distributed analysis as an inhibiting problem precluding efficient online, iterative data exploration which causes delays in knowledge discovery and decision making. The flexible and scalable systems implementing the exploration workflow require semi-autonomous methods integrated as architectural support to reduce human involvement. We thus argue that an abstraction layer providing adaptive asynchronous control and consistency management over a series of individual tasks coordinated to achieve a global objective can significantly improve data exploration effectiveness and efficiency. This thesis introduces methodologies which autonomously coordinate distributed execution at a lower level in order to synchronize multiple efforts as part of a common goal. We demonstrate the impact on data exploration through serverless simulation ensemble management and multi-model machine learning by showing improved performance and reduced resource utilization, enabling a more productive semi-autonomous exploration workflow. We focus on the specific areas of molecular dynamics and personalized healthcare; however, the contributions are applicable to a wide variety of domains.

    Arbitrary views of high-dimensional space and data

    Computer-generated images of three-dimensional scenes and objects are the result of parallel or perspective projections of the objects onto a two-dimensional plane. The computational techniques may be extended to project n-dimensional hyperobjects onto (n-1) dimensions, for n > 3. Projection to one less dimension may be applied recursively to data of any high dimension until that data is two-dimensional, when it may be directed to a computer screen or to some other two-dimensional output device. Arbitrary specification of eye location, target location, field-of-view angles and other parameters provides flexibility, so that data may be viewed, and hence perceived, in previously unavailable ways. However, arbitrary views may also increase the computational requirements, and may complicate the user's task in preparing and interpreting a view. Data with a dimension greater than three are difficult to perceive geometrically, yet may be invaluable to the observer. This study designs and implements a data visualisation system which incorporates arbitrary views of high-dimensional objects using repeated hyperplanar projection.
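The recursive projection scheme can be sketched as follows. This is a toy version under a simplifying assumption the abstract does not make: the eye sits on the last coordinate axis at distance d, rather than at an arbitrary location:

```python
def project_point(p, d=3.0):
    """One perspective step: drop the last coordinate, scaling the rest
    by the depth of the point relative to an eye at distance d on the
    last axis (simplified; an arbitrary eye would need a change of basis)."""
    w = d - p[-1]
    if w <= 0:
        raise ValueError("point at or behind the eye")
    return [c * d / w for c in p[:-1]]

def to_2d(p, d=3.0):
    """Recursively project an n-D point down to 2-D, one dimension at a time."""
    while len(p) > 2:
        p = project_point(p, d)
    return p

print(to_2d([1.0, 1.0, 0.0], d=2.0))  # → [1.0, 1.0]
```

Each pass removes one dimension, so a 6-D point reaches the screen after four projections.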

    Machine Learning Post-Minkowskian Integrals

    We study a neural network framework for the numerical evaluation of Feynman loop integrals that are fundamental building blocks for perturbative computations of physical observables in gauge and gravity theories. We show that such a machine learning approach improves the convergence of the Monte Carlo algorithm for high-precision evaluation of multi-dimensional integrals compared to traditional algorithms. In particular, we use a neural network to improve the importance sampling. For a set of representative integrals appearing in the computation of the conservative dynamics for a compact binary system in General Relativity, we perform a quantitative comparison between the Monte Carlo integrators VEGAS and i-flow, an integrator based on neural network sampling. Comment: 26 pages + references, 3 figures, 3 tables.
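The effect being exploited can be seen in miniature with a hand-built proposal: when the sampling density matches the shape of the integrand, the reweighted estimator's variance collapses. A toy one-dimensional sketch (a stand-in illustration, not the paper's Feynman integrals or its neural proposal):

```python
import random
random.seed(0)

def mc_uniform(f, n):
    """Plain Monte Carlo estimate of the integral of f over [0, 1]."""
    return sum(f(random.random()) for _ in range(n)) / n

def mc_importance(f, sample_q, q, n):
    """Importance-sampled estimate: draw from q, reweight by f(x)/q(x)."""
    total = 0.0
    for _ in range(n):
        x = sample_q()
        total += f(x) / q(x)
    return total / n

f = lambda x: 3 * x * x                    # integral over [0, 1] is exactly 1
q = lambda x: 3 * x * x                    # proposal proportional to the integrand
sample = lambda: random.random() ** (1 / 3)  # inverse-CDF sampling from q

print(mc_importance(f, sample, q, 1000))   # → 1.0 (zero-variance weights)
```

A learned sampler such as i-flow's normalizing flow plays the role of q when no closed-form proposal is available.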

    Field D* Pathfinding in Weighted Simplicial Complexes

    The development of algorithms to efficiently determine an optimal path through a complex environment is a continuing area of research within Computer Science. When such environments can be represented as a graph, established graph search algorithms, such as Dijkstra’s shortest path and A*, can be used. However, many environments are constructed from a set of regions that do not conform to a discrete graph. The Weighted Region Problem was proposed to address the problem of finding the shortest path through a set of such regions, weighted with values representing the cost of traversing the region. Robust solutions to this problem are computationally expensive since finding shortest paths across a region requires expensive minimisation. Sampling approaches construct graphs by introducing extra points on region edges and connecting them with edges criss-crossing the region. Dijkstra or A* are then applied to compute shortest paths. The connectivity of these graphs is high and such techniques are thus not particularly well suited to environments where the weights and representation frequently change. The Field D* algorithm, by contrast, computes the shortest path across a grid of weighted square cells and has replanning capabilities that cater for environmental changes. However, representing an environment as a weighted grid (an image) is not space-efficient since high resolution is required to produce accurate paths through areas containing features sensitive to noise. In this work, we extend Field D* to weighted simplicial complexes, specifically triangulations in 2D and tetrahedral meshes in 3D. Such representations offer benefits in terms of space over a weighted grid, since fewer triangles can represent polygonal objects with greater accuracy than a large number of grid cells.
By exploiting these savings, we show that Triangulated Field D* can produce an equivalent path cost to grid-based Multi-resolution Field D*, using up to an order of magnitude fewer triangles than grid cells and visiting an order of magnitude fewer nodes. Finally, as a practical demonstration of the utility of our formulation, we show how Field D* can be used to approximate a distance field on the nodes of a simplicial complex, and how this distance field can be used to weight the simplicial complex to produce contour-following behaviour by shortest paths computed with Field D*.
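The ingredient that distinguishes Field D* from plain graph search is that a path may cross a cell edge anywhere along its length, with the cost-to-goal linearly interpolated between the edge's endpoint values. A minimal sketch of that interpolation idea (brute-force sampling of the crossing point rather than Field D*'s closed-form minimisation, and 2D points only):

```python
from math import dist  # Euclidean distance, Python 3.8+

def interpolated_cost(s, a, b, g_a, g_b, w, steps=1000):
    """Cheapest way to reach the goal from s through edge (a, b), where
    g_a and g_b are the endpoint costs-to-goal and w is the traversal
    weight of the region containing s. The crossing point is sampled."""
    best = float("inf")
    for i in range(steps + 1):
        t = i / steps
        x = (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
        cost = (1 - t) * g_a + t * g_b + w * dist(s, x)
        best = min(best, cost)
    return best
```

With equal endpoint costs the optimum crossing is the closest point on the edge; with unequal costs it shifts toward the cheaper endpoint, which is exactly the behaviour a grid of discrete nodes cannot express.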

    Pattern Recognition

    A wealth of advanced pattern recognition algorithms is emerging from the interdisciplinary area between technologies for effective visual features and the human-brain cognition process. Effective visual features are made possible through rapid developments in appropriate sensor equipment, novel filter designs, and viable information processing architectures, while the understanding of the human-brain cognition process broadens the way in which computers can perform pattern recognition tasks. The present book is intended to collect representative research from around the globe focusing on low-level vision, filter design, features and image descriptors, data mining and analysis, and biologically inspired algorithms. The 27 chapters covered in this book disclose recent advances and new ideas in promoting the techniques, technology and applications of pattern recognition.

    Approximation methods in geometry and topology: learning, coarsening, and sampling

    Data materialize in many different forms and formats. These can be continuous or discrete, from algebraic expressions to unstructured pointclouds and highly structured graphs and simplicial complexes. Their sheer volume and the plethora of different modalities used to manipulate and understand them highlight the need for expressive abstractions and approximations, enabling novel insights and efficiency. Geometry and topology provide powerful and intuitive frameworks for modelling structure, form, and connectivity. Acting as a multi-focal lens, they enable inspection and manipulation at different levels of detail, from global discriminant features to local intricate characteristics. However, these fundamentally algebraic theories do not scale well in the digital world. Adjusting topology and geometry to the computational setting is a non-trivial task, adhering to the “no free lunch” adage. The necessary discretizations can be inaccurate, the underlying combinatorial structures can grow unmanageably in size, and computing salient topological and geometric features can become computationally taxing. Approximations are a necessity when theory cannot accommodate efficient algorithms. This thesis explores different approaches to simplifying computations pertaining to geometry and topology via approximations. Our methods contribute to the approximation of topological features on discrete domains, and employ geometry and topology to efficiently guide discretizations and approximations. This line of work fits under the umbrella of Topological Data Analysis (TDA) and Discrete Geometry, which aim to bridge the continuous algebraic mindset with the discrete. We construct topological and geometric approximation methods operating on three different levels. We approximate topological features on discrete combinatorial spaces; we approximate the combinatorial spaces themselves; and we guide processes that allow us to discretize domains via sampling.
With our Dist2Cycle model we learn geometric manifestations of topological features, the “optimal” homology generating cycles. This is achieved by a novel simplicial complex neural network that exploits the kernel of Hodge Laplacian operators to localize concise homology generators. Compression of meshes and arbitrary simplicial complexes is made possible by our general spectral coarsening strategy. Functional and structural properties are preserved by optimizing for important eigenspaces of general differential operators, the Hodge Laplacians, at multiple dimensions. Finally, we offer a geometry-driven sampling strategy for data accumulation and stochastic integration. By employing the kd-tree geometric partitioning algorithm we construct a sample set with provable equidistribution guarantees. Our findings are contextualized within prior and recent work, and our methods are thoroughly discussed and evaluated on diverse settings. Ultimately, we are making a claim towards the usefulness of examining the ever-present topological and geometric properties of data, not only in terms of feature discovery, but also as informed generation, manipulation, and simplification tools.
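The homology generators that Dist2Cycle localizes count the independent one-dimensional holes of a complex, and their number can be recovered classically from boundary-matrix ranks: dim ker L1 = |edges| - rank(B1) - rank(B2). A small sketch of that invariant (a plain-Python rank computation, not the neural model; edges and triangles are assumed to be given with sorted vertex indices):

```python
def rank(mat, eps=1e-9):
    """Matrix rank via Gaussian elimination with partial pivoting."""
    m = [row[:] for row in mat]
    rows, cols = len(m), len(m[0]) if m else 0
    r = 0
    for c in range(cols):
        piv = max(range(r, rows), key=lambda i: abs(m[i][c]), default=None)
        if piv is None or abs(m[piv][c]) < eps:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(rows):
            if i != r and abs(m[i][c]) > eps:
                f = m[i][c] / m[r][c]
                m[i] = [x - f * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

def betti_1(vertices, edges, triangles):
    """First Betti number: dim ker of the 1-Hodge Laplacian, i.e. the
    number of independent homology-generating cycles."""
    idx = {e: j for j, e in enumerate(edges)}
    B1 = [[0] * len(edges) for _ in vertices]   # vertex-edge boundary
    for j, (u, v) in enumerate(edges):
        B1[u][j], B1[v][j] = -1, 1
    B2 = [[0] * len(triangles) for _ in edges]  # edge-triangle boundary
    for j, (u, v, w) in enumerate(triangles):
        B2[idx[(u, v)]][j] += 1
        B2[idx[(v, w)]][j] += 1
        B2[idx[(u, w)]][j] -= 1
    return len(edges) - rank(B1) - rank(B2)

# hollow triangle: one loop; filling it in kills the hole
print(betti_1(range(3), [(0, 1), (0, 2), (1, 2)], []))          # → 1
print(betti_1(range(3), [(0, 1), (0, 2), (1, 2)], [(0, 1, 2)]))  # → 0
```

The rank-based count says how many generators exist; localizing short representative cycles for them is the harder problem the neural model targets.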