57 research outputs found

    Matrix powers algorithms for trust evaluation in PKI architectures

    This paper deals with the evaluation of trust in public-key infrastructures. Different trust models have been proposed to interconnect the various PKI components in order to propagate trust between them. In this paper we provide a new polynomial-time algorithm using linear algebra to assess trust relationships in a network under different trust evaluation schemes. The advantages are twofold: first, the use of matrix computations instead of graph algorithms provides an optimized computational solution; second, our algorithm can be used for generic graphs, even in the presence of cycles. Our algorithm is designed to evaluate trust using all existing (finite) trust paths between entities as a preliminary to any exchanges between PKIs. This can give a precise evaluation of trust and accelerate, for instance, cross-certificate validation.
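    As a rough illustration of the matrix-powers idea (not the paper's exact algorithm), the sketch below propagates trust over all paths up to a fixed length using a (max, ×) analogue of matrix multiplication; the aggregation scheme and the toy trust values are assumptions for illustration only.

        import numpy as np

        def propagate_trust(T, max_len):
            # T[i, j] in [0, 1] is the direct trust entity i places in entity j (0 = none).
            # Path trust = product of edge trusts; aggregation across paths = max.
            # This (max, *) scheme is one assumed instantiation, not necessarily the paper's.
            best = T.copy()       # best trust over paths of length <= k
            walk = T.copy()       # best trust over paths of exactly length k
            for _ in range(1, max_len):
                # matrix "power" in the (max, *) semiring: extend every path by one edge
                walk = np.max(walk[:, :, None] * T[None, :, :], axis=1)
                best = np.maximum(best, walk)
            return best

        # toy 3-entity trust graph containing a cycle 0 -> 1 -> 2 -> 0
        T = np.array([[0.0, 0.9, 0.0],
                      [0.0, 0.0, 0.8],
                      [0.7, 0.0, 0.0]])
        print(propagate_trust(T, max_len=3))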

    Computational linear algebra over finite fields

    We present algorithms for the efficient solution of linear algebra problems over finite fields.
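    For concreteness, here is a minimal, unoptimized sketch of one such problem: the rank of an integer matrix over a prime field GF(p), computed by plain Gaussian elimination with modular inverses. Production libraries (e.g. FFLAS-FFPACK) instead rely on blocked, BLAS-based kernels with delayed modular reductions.

        import numpy as np

        def rank_mod_p(A, p):
            # Rank of an integer matrix over the prime field GF(p), by plain
            # Gaussian elimination with Fermat inverses (p must be prime).
            A = np.array(A, dtype=np.int64) % p
            rows, cols = A.shape
            r = 0
            for c in range(cols):
                pivot = next((i for i in range(r, rows) if A[i, c] != 0), None)
                if pivot is None:
                    continue                       # no pivot in this column
                A[[r, pivot]] = A[[pivot, r]]      # swap pivot row into place
                inv = pow(int(A[r, c]), p - 2, p)  # modular inverse via Fermat
                A[r] = (A[r] * inv) % p
                for i in range(rows):
                    if i != r and A[i, c] != 0:
                        A[i] = (A[i] - A[i, c] * A[r]) % p
                r += 1
            return r

        print(rank_mod_p([[2, 4], [1, 2]], 5))     # rows are dependent mod 5 -> 1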

    Exploiting Parallelization in Spatial Statistics: an Applied Survey using R.

    Computing tasks may be parallelized top-down by splitting them into per-node chunks when the tasks permit this kind of division, and particularly when there is little or no need for communication between the nodes. Another approach is to parallelize bottom-up, by substituting multi-threaded low-level functions for single-threaded ones in otherwise unchanged user-level functions. This survey examines the timings of typical spatial data analysis tasks across a range of data sizes and hardware under different combinations of these two approaches. Conclusions are drawn concerning choices among the alternatives for parallelization, and attention is drawn to factors conditioning those choices.
    Keywords: Statistical software; Parallelization; Optimized linear algebra subroutines; Multicore processors; Spatial statistics.
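    The two routes can be contrasted with a small sketch, written in Python rather than R and on a made-up workload rather than the survey's benchmarks: top-down chunking across independent worker processes versus an unchanged user-level call backed by a multi-threaded BLAS.

        import numpy as np
        from multiprocessing import Pool

        def per_chunk_sum(chunk):
            # "Top-down": each worker handles an independent slice of the data,
            # so no communication is needed while the chunks are being processed.
            return chunk.sum(axis=0)

        def top_down_column_sums(data, workers=4):
            chunks = np.array_split(data, workers)
            with Pool(workers) as pool:
                partial = pool.map(per_chunk_sum, chunks)
            return np.sum(partial, axis=0)

        def bottom_up_crossprod(data):
            # "Bottom-up": the user-level call is unchanged; parallelism comes from
            # the multi-threaded BLAS behind the matrix product (thread count is
            # typically controlled by environment variables such as OMP_NUM_THREADS).
            return data.T @ data

        if __name__ == "__main__":
            X = np.random.rand(100_000, 50)
            print(top_down_column_sums(X).shape, bottom_up_crossprod(X).shape)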

    QR Factorization of Tall and Skinny Matrices in a Grid Computing Environment

    Previous studies have reported that common dense linear algebra operations do not achieve speedup when using multiple geographical sites of a computational grid. Because such operations are the building blocks of most scientific applications, conventional supercomputers still strongly predominate in high-performance computing, and the use of grids for speeding up large-scale scientific problems is limited to applications exhibiting parallelism at a higher level. We have identified two performance bottlenecks in the distributed-memory algorithms implemented in ScaLAPACK, a state-of-the-art dense linear algebra library. First, because ScaLAPACK assumes a homogeneous communication network, the implementations of its algorithms lack locality in their communication pattern. Second, the number of messages sent by the ScaLAPACK algorithms is significantly greater than in other algorithms that trade flops for communication. In this paper, we present a new approach for computing a QR factorization -- one of the main dense linear algebra kernels -- of tall and skinny matrices in a grid computing environment that overcomes these two bottlenecks. Our contribution is to combine a recently proposed algorithm (Communication-Avoiding QR) with a topology-aware middleware (QCG-OMPI) in order to confine intensive communications (ScaLAPACK calls) within the different geographical sites. An experimental study conducted on the Grid'5000 platform shows that the resulting performance increases linearly with the number of geographical sites on large-scale problems (and is in particular consistently higher than ScaLAPACK's). Comment: Accepted at IPDPS'10 (IEEE International Parallel & Distributed Processing Symposium 2010, Atlanta, GA, USA).
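    The communication-avoiding core of the approach can be sketched as a two-level TSQR: an independent QR per row block ("site") followed by a single QR of the stacked R factors. The code below illustrates only this numerical idea; mapping it onto geographical sites via QCG-OMPI, as the paper does, is not shown.

        import numpy as np

        def tsqr_two_level(A, num_sites):
            # Local QR on each row block runs independently; only the small
            # n-by-n R factors are then gathered and factored once more.
            blocks = np.array_split(A, num_sites, axis=0)
            local = [np.linalg.qr(B) for B in blocks]            # no inter-site traffic
            R_stack = np.vstack([R for _, R in local])           # only R factors move
            Q_top, R = np.linalg.qr(R_stack)
            n = A.shape[1]
            Q = np.vstack([Qi @ Q_top[i * n:(i + 1) * n, :]      # recombine local Qs
                           for i, (Qi, _) in enumerate(local)])
            return Q, R

        A = np.random.rand(8000, 20)                             # tall and skinny
        Q, R = tsqr_two_level(A, num_sites=4)
        print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(20)))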

    Architecture-Aware Configuration and Scheduling of Matrix Multiplication on Asymmetric Multicore Processors

    Asymmetric multicore processors (AMPs) have recently emerged as an appealing technology for severely energy-constrained environments, especially in mobile appliances where heterogeneity in applications is mainstream. In addition, given the growing interest in low-power high-performance computing, this type of architecture is also being investigated as a means to improve the throughput-per-Watt of complex scientific applications. In this paper, we design and embed several architecture-aware optimizations into a multi-threaded general matrix multiplication (gemm), a key operation of the BLAS, in order to obtain a high-performance implementation for ARM big.LITTLE AMPs. Our solution is based on the reference implementation of gemm in the BLIS library, and integrates a cache-aware configuration as well as asymmetric static and dynamic scheduling strategies that carefully tune and distribute the operation's micro-kernels among the big and LITTLE cores of the target processor. The experimental results on a Samsung Exynos 5422, a system-on-chip with ARM Cortex-A15 and Cortex-A7 clusters that implements the big.LITTLE model, show that our cache-aware versions of gemm with asymmetric scheduling attain significant performance gains over their architecture-oblivious counterparts, while exploiting all the resources of the AMP to deliver considerable energy efficiency.
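    The asymmetric static split can be pictured with a small sketch: the columns of the result are divided between a "big" and a "LITTLE" worker in proportion to an assumed throughput ratio. The ratio, the two-thread setup, and the absence of core pinning below are simplifications for illustration, not the paper's configuration.

        import numpy as np
        from concurrent.futures import ThreadPoolExecutor

        def asymmetric_gemm(A, B, ratio_big=3.0):
            # Split the columns of B (and C) so that the "big" worker gets a share
            # proportional to ratio_big, an assumed big-vs-LITTLE throughput ratio.
            n = B.shape[1]
            split = int(round(n * ratio_big / (ratio_big + 1.0)))
            C = np.empty((A.shape[0], n))

            def work(lo, hi):
                C[:, lo:hi] = A @ B[:, lo:hi]   # stands in for the BLIS macro-kernel loop

            with ThreadPoolExecutor(max_workers=2) as pool:
                pool.submit(work, 0, split)     # share for the big cluster
                pool.submit(work, split, n)     # share for the LITTLE cluster
            return C                            # pool exit waits for both workers

        A, B = np.random.rand(512, 512), np.random.rand(512, 512)
        print(np.allclose(asymmetric_gemm(A, B), A @ B))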