
    Sorting Integers on the AP1000

    Sorting is one of the classic problems of computer science. Whilst well understood on sequential machines, the diversity of architectures amongst parallel systems means that algorithms do not perform uniformly on all platforms. This document describes the implementation of a radix-based algorithm for sorting positive integers on a Fujitsu AP1000 supercomputer, constructed as an entry in the Joint Symposium on Parallel Processing (JSPP) 1994 Parallel Software Contest (PSC94). Brief consideration is also given to a full radix sort conducted in parallel across the machine. (1994 project report, 23 pages.)
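    As a point of reference, a minimal sequential sketch of least-significant-digit radix sort for unsigned integers is shown below. The report's AP1000 version distributes the counting and redistribution phases across the machine's cells, which this single-node sketch does not attempt to model.

```cpp
// Minimal sequential LSD radix sort for unsigned 32-bit integers,
// sorting one byte (256 buckets) per pass.
#include <cstdint>
#include <iostream>
#include <vector>

void radix_sort(std::vector<uint32_t>& a) {
    const int kBits = 8;              // digits are one byte wide
    const int kBuckets = 1 << kBits;  // 256 buckets per pass
    std::vector<uint32_t> tmp(a.size());
    for (int shift = 0; shift < 32; shift += kBits) {
        std::vector<size_t> count(kBuckets + 1, 0);
        for (uint32_t x : a) ++count[((x >> shift) & (kBuckets - 1)) + 1];
        for (int b = 0; b < kBuckets; ++b) count[b + 1] += count[b];  // prefix sums give bucket offsets
        for (uint32_t x : a) tmp[count[(x >> shift) & (kBuckets - 1)]++] = x;  // stable scatter
        a.swap(tmp);
    }
}

int main() {
    std::vector<uint32_t> v = {170, 45, 75, 90, 802, 24, 2, 66};
    radix_sort(v);
    for (uint32_t x : v) std::cout << x << ' ';  // 2 24 45 66 75 90 170 802
    std::cout << '\n';
}
```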

    Evaluating geometric queries using few arithmetic operations

    Let $\mathcal{P} := (P_1, \ldots, P_s)$ be a given family of $n$-variate polynomials with integer coefficients and suppose that the degrees and logarithmic heights of these polynomials are bounded by $d$ and $h$, respectively. Suppose furthermore that for each $1 \leq i \leq s$ the polynomial $P_i$ can be evaluated using $L$ arithmetic operations (additions, subtractions, multiplications and the constants 0 and 1). Assume that the family $\mathcal{P}$ is in a suitable sense \emph{generic}. We construct a database $\mathcal{D}$, supported by an algebraic computation tree, such that for each $x \in [0,1]^n$ the query for the signs of $P_1(x), \ldots, P_s(x)$ can be answered using $h\,d^{O(n^2)}$ comparisons and $nL$ arithmetic operations between real numbers. The arithmetic-geometric tools developed for the construction of $\mathcal{D}$ are then employed to exhibit example classes of systems of $n$ polynomial equations in $n$ unknowns whose consistency may be checked using only few arithmetic operations, admitting however an exponential number of comparisons.
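    The evaluation model the abstract assumes (a polynomial given as a straight-line program over additions, subtractions, multiplications and the constants 0 and 1, evaluated at a point of $[0,1]^n$) can be sketched as below. The gate encoding and the example polynomial are illustrative only; the paper's actual contribution, the precomputed database answering sign queries with few comparisons, is not reproduced here.

```cpp
// A polynomial represented as a straight-line program; the number of
// Add/Sub/Mul gates corresponds to the parameter L in the abstract.
#include <iostream>
#include <vector>

enum class Op { Const0, Const1, Input, Add, Sub, Mul };

struct Gate {
    Op op;
    int a = -1, b = -1;  // operand gate indices, or input index for Op::Input
};

// Evaluate the program at x and return the value of the last gate.
double evaluate(const std::vector<Gate>& prog, const std::vector<double>& x) {
    std::vector<double> val(prog.size());
    for (size_t i = 0; i < prog.size(); ++i) {
        switch (prog[i].op) {
            case Op::Const0: val[i] = 0.0; break;
            case Op::Const1: val[i] = 1.0; break;
            case Op::Input:  val[i] = x[prog[i].a]; break;
            case Op::Add:    val[i] = val[prog[i].a] + val[prog[i].b]; break;
            case Op::Sub:    val[i] = val[prog[i].a] - val[prog[i].b]; break;
            case Op::Mul:    val[i] = val[prog[i].a] * val[prog[i].b]; break;
        }
    }
    return val.back();
}

int main() {
    // P(x0, x1) = x0*x0 - x1, encoded with L = 2 arithmetic operations.
    std::vector<Gate> prog = {
        {Op::Input, 0}, {Op::Input, 1},
        {Op::Mul, 0, 0},   // x0 * x0
        {Op::Sub, 2, 1},   // x0^2 - x1
    };
    double v = evaluate(prog, {0.5, 0.3});
    std::cout << "sign of P(0.5, 0.3): " << (v > 0 ? '+' : (v < 0 ? '-' : '0')) << '\n';
}
```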

    Real-valued feature selection for process approximation and prediction

    The selection of features for classification, clustering and approximation is an important task in pattern recognition, data mining and soft computing. For real-valued features, this contribution shows how feature selection for a high number of features can be implemented using mutual information. In particular, the common problem in mutual-information computation of estimating joint probabilities over many dimensions from only a few samples is addressed by using the Rényi mutual information of order two as the computational base. For this, the Grassberger-Takens correlation integral is used, which was developed for estimating probability densities in chaos theory. Additionally, an adaptive procedure for computing the hypercube size is introduced and, for real-world applications, the treatment of missing values is included. The computation procedure is accelerated by exploiting the ranking of the set of real feature values, especially for time series. A small blackbox-glassbox example shows how the relevant features and their time lags are determined in the time series even if the input feature time series determine the output nonlinearly. A more realistic example from the chemical industry shows that this enables a better approximation of the input-output mapping than the best neural network approach developed for an international contest. Through this computationally efficient implementation, mutual information becomes an attractive tool for feature selection even for a high number of real-valued features.
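    A rough sketch of the kind of estimator described here follows: the order-2 Rényi entropy obtained from the correlation sum (the fraction of sample pairs falling within a hypercube of a given side), combined into a mutual-information estimate as I2 = H2(X) + H2(Y) - H2(X,Y). The fixed side length eps, the toy data and the naive pairwise loop are assumptions made for illustration; the contribution's adaptive hypercube size, missing-value handling and rank-based acceleration are not shown.

```cpp
// Order-2 Renyi entropy via the correlation sum, and a mutual-information
// estimate I2 = H2(X) + H2(Y) - H2(X,Y) on a toy dependent sample.
#include <algorithm>
#include <cmath>
#include <cstdlib>
#include <iostream>
#include <vector>

// Correlation sum with the maximum norm: fraction of point pairs closer than eps.
double correlation_sum(const std::vector<std::vector<double>>& pts, double eps) {
    size_t n = pts.size(), close = 0;
    for (size_t i = 0; i < n; ++i)
        for (size_t j = i + 1; j < n; ++j) {
            double dist = 0.0;
            for (size_t d = 0; d < pts[i].size(); ++d)
                dist = std::max(dist, std::fabs(pts[i][d] - pts[j][d]));
            if (dist < eps) ++close;
        }
    return 2.0 * close / (n * (n - 1.0));
}

double renyi_h2(const std::vector<std::vector<double>>& pts, double eps) {
    return -std::log(correlation_sum(pts, eps));
}

int main() {
    const size_t n = 2000;
    const double eps = 0.1;  // fixed hypercube size, chosen for illustration only
    std::vector<std::vector<double>> x, y, xy;
    for (size_t i = 0; i < n; ++i) {
        double a = std::rand() / double(RAND_MAX);
        double b = 0.5 * a + 0.5 * (std::rand() / double(RAND_MAX));  // y depends on x
        x.push_back({a});
        y.push_back({b});
        xy.push_back({a, b});
    }
    double i2 = renyi_h2(x, eps) + renyi_h2(y, eps) - renyi_h2(xy, eps);
    std::cout << "estimated order-2 Renyi mutual information: " << i2 << '\n';
}
```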

    Lightweight MPI Communicators with Applications to Perfectly Balanced Quicksort

    MPI uses the concept of communicators to connect groups of processes. It provides nonblocking collective operations on communicators to overlap communication and computation. Flexible algorithms demand flexible communicators: e.g., a process can work on different subproblems within different process groups simultaneously, new process groups can be created, or the members of a process group can change. Depending on the number of communicators, the time for communicator creation can drastically increase the running time of the algorithm. Furthermore, a new communicator synchronizes all processes, as communicator creation routines are blocking collective operations. We present RBC, a communication library based on MPI that creates range-based communicators in constant time without communication. These RBC communicators support (non)blocking point-to-point communication as well as (non)blocking collective operations. Our experiments show that the library reduces the time to create a new communicator by a factor of more than 400, whereas the running time of collective operations remains about the same. We propose Janus Quicksort, a distributed sorting algorithm that avoids any load imbalance. We improve the performance of this algorithm by a factor of 15 for moderate inputs by using RBC communicators. Finally, we discuss different approaches to bring nonblocking (local) creation of lightweight (range-based) communicators into MPI.
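    The range-based idea can be illustrated with plain MPI point-to-point calls, as in the sketch below. This is not RBC's actual API; RangeComm is a hypothetical toy struct showing why a group that is a contiguous rank range needs no communication to create: it is just local arithmetic on ranks.

```cpp
// A "communicator" as a contiguous rank range [first, last] inside an existing
// MPI communicator: creation is a local O(1) operation, no collective call.
// Messages translate group-local ranks to global ranks before sending.
#include <mpi.h>
#include <cstdio>

struct RangeComm {
    MPI_Comm parent;
    int first, last;  // global rank range forming the group
    int local_rank(int global) const { return global - first; }
    int global_rank(int local) const { return first + local; }
    int size() const { return last - first + 1; }
};

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    // Split the processes into a lower and an upper half without any
    // communication or synchronization -- just local arithmetic.
    int mid = nprocs / 2;
    RangeComm group = (rank < mid) ? RangeComm{MPI_COMM_WORLD, 0, mid - 1}
                                   : RangeComm{MPI_COMM_WORLD, mid, nprocs - 1};

    // Local rank 0 of each group sends a token to local rank 1 (if present).
    int my_local = group.local_rank(rank);
    if (group.size() > 1) {
        int token = my_local;
        if (my_local == 0)
            MPI_Send(&token, 1, MPI_INT, group.global_rank(1), 0, group.parent);
        else if (my_local == 1) {
            MPI_Recv(&token, 1, MPI_INT, group.global_rank(0), 0, group.parent,
                     MPI_STATUS_IGNORE);
            std::printf("global rank %d received a token inside its group\n", rank);
        }
    }
    MPI_Finalize();
}
```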

    Data broadcasting and reduction, prefix computation, and sorting on reduced hypercube (RH) parallel computers

    The binary hypercube parallel computer has been very popular due to its rich interconnection structure and small average internode distance, which allow the efficient embedding of frequently used topologies. Communication patterns of many parallel algorithms also match the hypercube topology. The hypercube has high VLSI complexity, however, due to the logarithmic increase in the number of connections to each node with the increase in the number of dimensions of the hypercube. The reduced hypercube (RH) interconnection network, which is obtained by a uniform reduction in the number of links for each hypercube node, yields lower-complexity interconnection networks when compared to hypercubes with the same number of nodes. It has been shown elsewhere that the RH interconnection network achieves performance comparable to that of the hypercube at lower hardware cost. The reduced VLSI complexity of the RH also permits the construction of larger systems, thus making the RH suitable for massively parallel processing. This thesis proposes algorithms for data broadcasting and reduction, prefix computation, and sorting on the RH parallel computer. All these operations are fundamental to many parallel algorithms. A worst-case analysis of each algorithm is given and compared with equivalent algorithms for the regular hypercube. It is shown that the proposed algorithms for the RH yield performance comparable to that for the regular hypercube.
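    For the regular-hypercube baseline, the standard prefix-sum pattern can be sketched as a sequential simulation, as below: in each round every node exchanges a running total with its neighbor across one dimension and adds it to its own prefix only when that neighbor has a smaller node id. The reduced-hypercube adaptations proposed in the thesis are not modeled here.

```cpp
// Sequential simulation of the classic hypercube prefix-sum pattern
// on 2^dims nodes, one round per dimension.
#include <iostream>
#include <vector>

std::vector<int> hypercube_prefix_sum(const std::vector<int>& value, int dims) {
    int p = 1 << dims;
    std::vector<int> prefix(value), total(value);
    for (int k = 0; k < dims; ++k) {
        std::vector<int> next_total(total), next_prefix(prefix);
        for (int node = 0; node < p; ++node) {
            int partner = node ^ (1 << k);   // neighbor across dimension k
            next_total[node] = total[node] + total[partner];
            if (partner < node)              // partner precedes us in node order
                next_prefix[node] = prefix[node] + total[partner];
        }
        total.swap(next_total);
        prefix.swap(next_prefix);
    }
    return prefix;  // inclusive prefix sums in node order
}

int main() {
    std::vector<int> v = {3, 1, 4, 1, 5, 9, 2, 6};  // one value per node of a 3-cube
    for (int x : hypercube_prefix_sum(v, 3)) std::cout << x << ' ';  // 3 4 8 9 14 23 25 31
    std::cout << '\n';
}
```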