
    Parallel Architectures for Planetary Exploration Requirements (PAPER)

    The Parallel Architectures for Planetary Exploration Requirements (PAPER) project is research oriented towards technology insertion issues for NASA's unmanned planetary probes. It was initiated to complement and augment long-term efforts for space exploration, with particular reference to the research needs of NASA Langley Research Center (NASA/LaRC) for planetary exploration missions of the mid and late 1990s. The requirements for space missions as given in the somewhat dated Advanced Information Processing Systems (AIPS) requirements document are contrasted with the new requirements from JPL/Caltech involving sensor data capture and scene analysis. It is shown that more stringent requirements have arisen as a result of technological advancements. Two possible architectures, the AIPS Proof of Concept (POC) configuration and the MAX fault-tolerant dataflow multiprocessor, were evaluated. The main observation was that the AIPS design is biased towards fault tolerance and may not be an ideal architecture for planetary and deep space probes due to high cost and complexity. The MAX concept appears to be a promising candidate, although more detailed information is required. The feasibility of adding neural computation capability to this architecture needs to be studied. Key impact issues for the architectural design of computing systems meant for planetary missions were also identified.

    Analysis of randomized load distribution for reproduction trees in linear arrays and rings

    High performance computing requires high quality load distribution of processes of a parallel application over processors in a parallel computer at runtime such that both maximum load and dilation are minimized. The performance of a simple randomized load distribution algorithm that dynamically supports tree-structured parallel computations on two simple static networks, namely, linear arrays and rings, is analyzed in this paper. The algorithm spreads newly created tree nodes to neighboring processors, which actually provides randomized dilation-1 tree embedding in a static network. We develop linear systems of equations that characterize expected loads on all processors, and find their closed form solutions under the reproduction tree model, which can generate trees of arbitrary size and shape. The main contribution of the paper is to show that the above simple randomized algorithm is able to generate high-quality dynamic tree embeddings even in very simple and sparse networks such as linear arrays and rings. In particular, we prove that as tree size becomes large, the asymptotic performance ratio of such a randomized dilation-1 tree embedding is N/(N−1) in linear arrays and is optimal in rings.
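
    As a rough illustration of the randomized placement idea described above (a sketch, not the paper's analysis), the following Python snippet grows random reproduction trees on a ring and places every newly created node on a uniformly chosen neighbor of its parent's processor, i.e. a dilation-1 embedding; the processor count, tree depth, and branching probability are made-up parameters.

        # Minimal simulation of randomized dilation-1 tree embedding on a ring
        # (illustrative parameters only).
        import random

        def simulate_ring(num_procs=8, depth=12, p_child=0.45, trials=200):
            """Grow random reproduction trees; each new child is placed on a
            neighbor of its parent's processor, chosen uniformly at random."""
            max_over_avg = []
            for _ in range(trials):
                load = [0] * num_procs
                frontier = [(0, 0)]          # (processor holding the node, tree level)
                load[0] += 1
                while frontier:
                    proc, level = frontier.pop()
                    if level >= depth:
                        continue
                    for _ in range(2):       # up to two children, each created with prob p_child
                        if random.random() < p_child:
                            child = random.choice([(proc - 1) % num_procs,
                                                   (proc + 1) % num_procs])
                            load[child] += 1
                            frontier.append((child, level + 1))
                total = sum(load)
                max_over_avg.append(max(load) / (total / num_procs))
            return sum(max_over_avg) / len(max_over_avg)

        if __name__ == "__main__":
            print("average max-load / average-load ratio:", round(simulate_ring(), 3))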

    Investigation of the robustness of star graph networks

    The star interconnection network has been known as an attractive alternative to the n-cube for interconnecting a large number of processors. It possesses many nice properties, such as vertex/edge symmetry, recursiveness, sublogarithmic degree and diameter, and maximal fault tolerance, which are all desirable when building an interconnection topology for a parallel and distributed system. Investigation of the robustness of the star network architecture is essential since the star network has the potential for use in critical applications. In this study, three different reliability measures are proposed to investigate the robustness of the star network. First, a constrained two-terminal reliability measure referred to as Distance Reliability (DR) between the source node u and the destination node I at the shortest distance in an n-dimensional star network, Sn, is introduced to assess the robustness of the star network. A combinatorial analysis of DR, especially for u having a single cycle, is performed under different failure models (node, link, and combined node/link failures). Lower bounds on the special case of DR, antipode reliability, are derived and compared with the n-cube, showing the star network to be more fault-tolerant than the n-cube. The degradation of a container in an Sn having at least one operational optimal path between u and I is also examined to measure system effectiveness in the presence of failures under different failure models. The values of MTTF for each transition state are calculated and compared with those of similar-size containers in the n-cube. Meanwhile, an upper bound under the probability fault model and an approximation under the fixed partitioning approach on the (n-1)-star reliability are derived and proved to be similarly accurate and close to the simulation results. Conservative comparisons between similar-size star networks and n-cubes show that the star network is more robust than the n-cube in terms of (n-1)-network reliability.
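
    For readers unfamiliar with the topology, the following small Python sketch (not taken from the study) builds the n-dimensional star graph Sn, whose vertices are permutations of 1..n connected by swapping the first symbol with the symbol in position i, and checks its size, degree, and diameter by breadth-first search for small n.

        # Construct S_n and measure its diameter by BFS (small n only).
        from collections import deque

        def star_neighbors(perm):
            p = list(perm)
            # swap position 0 with position i, for every i in 1..n-1
            return [tuple([p[i]] + p[1:i] + [p[0]] + p[i+1:]) for i in range(1, len(p))]

        def star_diameter(n):
            start = tuple(range(1, n + 1))
            dist = {start: 0}
            queue = deque([start])
            while queue:
                v = queue.popleft()
                for w in star_neighbors(v):
                    if w not in dist:
                        dist[w] = dist[v] + 1
                        queue.append(w)
            return max(dist.values()), len(dist)

        if __name__ == "__main__":
            for n in range(2, 6):
                diam, size = star_diameter(n)
                print(f"S_{n}: {size} vertices, degree {n-1}, diameter {diam}")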

    09451 Abstracts Collection -- Geometric Networks, Metric Space Embeddings and Spatial Data Mining

    From November 1 to 6, 2009, the Dagstuhl Seminar 09451 "Geometric Networks, Metric Space Embeddings and Spatial Data Mining" was held in Schloss Dagstuhl - Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

    Efficient Frequent Subtree Mining Beyond Forests

    A common paradigm in distance-based learning is to embed the instance space into some appropriately chosen feature space equipped with a metric and to define the dissimilarity between instances by the distance of their images in the feature space. If the instances are graphs, then frequent connected subgraphs are a well-suited pattern language to define such feature spaces. Identifying the set of frequent connected subgraphs and subsequently computing embeddings for graph instances, however, is computationally intractable. As a result, existing frequent subgraph mining algorithms either restrict the structural complexity of the instance graphs or require exponential delay between the output of subsequent patterns. Hence distance-based learners lack an efficient way to operate on arbitrary graph data. To resolve this problem, in this thesis we present a mining system that gives up the demand for completeness of the pattern set in order to guarantee a polynomial delay between subsequent patterns. Complementing this, we devise efficient methods to compute the embedding of arbitrary graphs into the Hamming space spanned by our pattern set. As a result, we present a system that makes it possible to efficiently apply distance-based learning methods to arbitrary graph databases. To overcome the computational intractability of the mining step, we consider only frequent subtrees for arbitrary graph databases. This restriction alone, however, does not suffice to make the problem tractable. We reduce the mining problem from arbitrary graphs to forests by replacing each graph with a polynomially sized forest obtained from a random sample of its spanning trees. This results in an incomplete mining algorithm. However, we prove that the probability of missing a frequent subtree pattern is low, and we show empirically that this holds in practice even for very small forests. As a result, our algorithm is able to mine frequent subtrees in a range of graph databases where state-of-the-art exact frequent subgraph mining systems fail to produce patterns in reasonable time or even at all. Furthermore, the predictive performance of our patterns is comparable to that of exact frequent connected subgraphs, where available. The above method considers polynomially many spanning trees for the forest, while many graphs have exponentially many spanning trees. The number of patterns found by our mining algorithm can be negatively influenced by this exponential gap. We hence propose a method that can (implicitly) consider forests of exponential size while remaining computationally tractable. This results in a higher recall for our incomplete mining algorithm. Furthermore, the methods extend the known positive results on the tractability of exact frequent subtree mining to a novel class of transaction graphs. We conjecture that the next natural extension of our results to a larger transaction graph class is at least as difficult as resolving whether P = NP. Regarding the graph embedding step, we apply a similar strategy as in the mining step: we represent a novel graph by a forest of its spanning trees and decide whether the frequent trees from the mining step are subgraph isomorphic to this forest. As a result, the embedding computation has one-sided error with respect to the exact subgraph isomorphism test but is computationally tractable. Furthermore, we show that we can leverage a partial order on the pattern set; this structure can be used to reduce the runtime of the embedding computation dramatically.
    For the special case of Jaccard similarity between graph embeddings, a further substantial reduction of runtime can be achieved using min-hashing. The Jaccard distance can be approximated using small sketch vectors that can be computed quickly, again using the partial order on the tree patterns.
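
    As a hedged illustration of the min-hashing step mentioned above (the pattern sets would come from the frequent-subtree miner, which is not reproduced here), the Python sketch below approximates the Jaccard similarity of two pattern-incidence sets with min-hash signatures; the pattern ids are synthetic.

        # Approximate Jaccard similarity of two pattern sets via min-hash signatures.
        import random

        def minhash_signature(items, hash_seeds):
            """One min-hash value per seed; items is a set of pattern ids."""
            return [min((hash((seed, x)) for x in items), default=0) for seed in hash_seeds]

        def estimated_jaccard(sig_a, sig_b):
            return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

        if __name__ == "__main__":
            random.seed(0)
            seeds = [random.random() for _ in range(256)]
            A = set(range(0, 60))          # patterns present in graph A (synthetic)
            B = set(range(30, 90))         # patterns present in graph B (synthetic)
            exact = len(A & B) / len(A | B)
            approx = estimated_jaccard(minhash_signature(A, seeds),
                                       minhash_signature(B, seeds))
            print(f"exact Jaccard {exact:.3f}, min-hash estimate {approx:.3f}")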

    Integrating topological features to enhance cardiac disease diagnosis from 3D CMR images

    Bachelor's degree final project in Mathematics, Faculty of Mathematics, Universitat de Barcelona, Year: 2023, Advisors: Carles Casacuberta and Polyxeni Gkontra. Persistent homology is a technique from the field of algebraic topology for the analysis and characterization of the shape and structure of datasets in multiple dimensions. Its use is based on the identification and quantification of topological patterns in the dataset across various scales. In this thesis, persistent homology is applied with the objective of extracting topological descriptors from three-dimensional cardiovascular magnetic resonance (CMR) imaging. Thereafter, the topological descriptors are used for the detection of cardiovascular diseases by means of Machine Learning (ML) techniques. Radiomics is one of the recently proposed approaches for disease diagnosis. This method involves the extraction and subsequent analysis of a large number of quantitative descriptors from medical images. These descriptors offer a characterization of the spatial distribution, texture, and intensity of the structures present in the images. This study demonstrates that radiomics and topological descriptors achieve comparable results, providing complementary insights into the underlying structures and characteristics of anatomical tissues. Moreover, the combination of these two methods leads to a further improvement in the performance of ML models, thereby enhancing medical diagnosis.
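
    A minimal sketch of the kind of descriptor extraction described above, assuming the GUDHI library and its cubical complex interface (the thesis pipeline may differ in preprocessing and feature choice): persistence is computed on a voxel volume and simple per-dimension summaries are collected.

        # Extract simple persistent-homology summaries from a 3D volume
        # (assumes `numpy` and `gudhi` are installed; API usage is an assumption).
        import numpy as np
        import gudhi

        def topological_descriptors(volume):
            """volume: 3D numpy array of voxel intensities (e.g. a CMR image)."""
            cc = gudhi.CubicalComplex(top_dimensional_cells=volume)
            cc.persistence()                       # compute all persistence pairs
            feats = {}
            for dim in range(3):                   # H0, H1, H2
                bars = cc.persistence_intervals_in_dimension(dim)
                finite = bars[np.isfinite(bars).all(axis=1)] if len(bars) else bars
                lifetimes = (finite[:, 1] - finite[:, 0]) if len(finite) else np.array([])
                feats[f"H{dim}_count"] = len(bars)
                feats[f"H{dim}_total_persistence"] = float(lifetimes.sum())
            return feats

        if __name__ == "__main__":
            toy = np.random.rand(16, 16, 16)       # stand-in for a CMR volume
            print(topological_descriptors(toy))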

    Redundancy management for efficient fault recovery in NASA's distributed computing system

    The management of redundancy in computer systems was studied, and guidelines were provided for the development of NASA's fault-tolerant distributed systems. Fault recovery and reconfiguration mechanisms were examined. A theoretical foundation was laid for redundancy management through efficient reconfiguration methods and algorithmic diversity. Algorithms were developed to optimize the resources for the embedding of computational graphs of tasks in the system architecture and for the reconfiguration of these tasks after a failure has occurred. The computational structures represented by the path and the complete binary tree were considered, and the mesh and hypercube architectures were targeted for their embeddings. The innovative concept of the Hybrid Algorithm Technique was introduced. This new technique provides a mechanism for obtaining fault tolerance while exhibiting improved performance.
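
    As a concrete, textbook illustration of embedding a task graph into a hypercube (a standard construction, not necessarily the algorithms developed in this work), the Python sketch below maps a path of tasks onto a d-cube with dilation 1 using the binary reflected Gray code.

        # Embed a path (linear task chain) into a d-dimensional hypercube with dilation 1.
        def gray_code(d):
            """Return the 2^d node labels of a d-cube in an order where consecutive
            labels differ in exactly one bit, i.e. a Hamiltonian path embedding."""
            return [i ^ (i >> 1) for i in range(1 << d)]

        def is_dilation_one(order):
            return all(bin(a ^ b).count("1") == 1 for a, b in zip(order, order[1:]))

        if __name__ == "__main__":
            d = 4
            order = gray_code(d)
            print("path of", len(order), "tasks mapped onto the", d,
                  "-cube, dilation 1:", is_dilation_one(order))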

    Aspects of practical implementations of PRAM algorithms

    The PRAM is a shared memory model of parallel computation which abstracts away from inessential engineering details. It provides a very simple, architecture-independent model and a good programming environment. Theoreticians of the computer science community have proved that it is possible to emulate the theoretical PRAM model using current technology: solutions have been found for effectively interconnecting processing elements, for routing data on these networks, and for distributing the data among memory modules without hotspots. This thesis reviews this emulation and the possibilities it provides for large scale general purpose parallel computation. The emulation employs a bridging model which acts as an interface between the actual hardware and the PRAM model. We review the evidence that such a scheme can achieve scalable parallel performance and portable parallel software, and that PRAM algorithms can be optimally implemented on such practical models. In the course of this review we present the following new results:
    1. Concerning parallel approximation algorithms, we describe an NC algorithm for finding an approximation to a minimum weight perfect matching in a complete weighted graph. The algorithm is conceptually very simple and it is also the first NC-approximation algorithm for the task with a sub-linear performance ratio.
    2. Concerning graph embedding, we describe dense edge-disjoint embeddings of the complete binary tree with n leaves in the following n-node communication networks: the hypercube, the de Bruijn and shuffle-exchange networks, and the 2-dimensional mesh. In the embeddings the maximum distance from a leaf to the root of the tree is asymptotically optimally short. The embeddings facilitate efficient implementation of many PRAM algorithms on networks employing these graphs as interconnection networks.
    3. Concerning bulk synchronous algorithmics, we describe scalable, transportable algorithms for the following three commonly required types of computation: balanced tree computations, Fast Fourier Transforms, and matrix multiplications.
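
    A small Python sketch of a balanced-tree style computation of the kind listed in item 3, assuming a Hillis-Steele parallel prefix sum whose O(log n) rounds are simulated sequentially; it is meant only to illustrate the round structure, not the thesis's bulk synchronous implementations.

        # Inclusive prefix sums with the parallel round structure made explicit.
        def prefix_sums(values):
            x = list(values)
            n = len(x)
            step = 1
            while step < n:                 # O(log n) rounds on a PRAM
                nxt = x[:]                  # in a PRAM round all updates happen "in parallel"
                for i in range(step, n):    # each index would be handled by one processor
                    nxt[i] = x[i] + x[i - step]
                x = nxt
                step *= 2
            return x                        # inclusive prefix sums

        if __name__ == "__main__":
            data = [3, 1, 4, 1, 5, 9, 2, 6]
            print(prefix_sums(data))        # [3, 4, 8, 9, 14, 23, 25, 31]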

    An attentive neural architecture for joint segmentation and parsing and its application to real estate ads

    In processing human-produced text using natural language processing (NLP) techniques, two fundamental subtasks that arise are (i) segmentation of the plain text into meaningful subunits (e.g., entities), and (ii) dependency parsing, to establish relations between subunits. In this paper, we develop a relatively simple and effective neural joint model that performs both segmentation and dependency parsing together, instead of one after the other as in most state-of-the-art works. We focus in particular on the real estate ad setting, aiming to convert an ad into a structured description, which we name a property tree, comprising the tasks of (1) identifying important entities of a property (e.g., rooms) from classifieds and (2) structuring them into a tree format. In this work, we propose a new joint model that is able to tackle the two tasks simultaneously and construct the property tree by (i) avoiding the error propagation that would arise from performing the subtasks one after the other in a pipelined fashion, and (ii) exploiting the interactions between the subtasks. For this purpose, we perform an extensive comparative study of the pipeline methods and the newly proposed joint model, reporting an improvement of over three percentage points in the overall edge F1 score of the property tree. We also propose attention methods to encourage our model to focus on salient tokens during the construction of the property tree. We thus experimentally demonstrate the usefulness of attentive neural architectures for the proposed joint model, showcasing a further improvement of two percentage points in edge F1 score for our application. (Preprint accepted for publication in Expert Systems with Applications.)
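
    The edge F1 score referred to above compares a predicted property tree with the gold tree as sets of labeled edges; the Python sketch below shows one plausible way to compute it (the entity and relation names are invented for illustration, not taken from the paper).

        # Edge F1 between predicted and gold property trees, treated as edge sets.
        def edge_f1(pred_edges, gold_edges):
            pred, gold = set(pred_edges), set(gold_edges)
            tp = len(pred & gold)
            precision = tp / len(pred) if pred else 0.0
            recall = tp / len(gold) if gold else 0.0
            if precision + recall == 0:
                return 0.0
            return 2 * precision * recall / (precision + recall)

        if __name__ == "__main__":
            gold = [("property", "floor_2", "part-of"),
                    ("floor_2", "bedroom", "part-of"),
                    ("bedroom", "12m2", "area")]
            pred = [("property", "floor_2", "part-of"),
                    ("floor_2", "bedroom", "part-of"),
                    ("bedroom", "bathroom", "part-of")]   # one incorrect edge
            print(f"edge F1: {edge_f1(pred, gold):.3f}")  # precision = recall = 2/3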