4 research outputs found

    Founsure 1.0: An erasure code library with efficient repair and update features

    Founsure is an open-source software library that implements multi-dimensional, graph-based erasure coding built entirely on fast exclusive-OR (XOR) logic. Its implementation uses compiler optimizations and multi-threading to generate the right assembly code for a given multi-core CPU architecture with vector-processing capabilities. Founsure possesses important features that should find many applications in modern data storage, communication, and networked computer systems in which data needs protection against device, hardware, and node failures. As data sizes have reached unprecedented levels, these systems have become hungry for network bandwidth, computational resources, and average power consumption. To address this, the proposed library provides a three-dimensional design space that trades off computational complexity, coding overhead, and data/node repair bandwidth to meet the differing requirements of modern distributed data storage and processing systems. The Founsure library enables efficient encoding, decoding, repairs/rebuilds, and updates while all required data storage and computation are distributed across the network nodes.
    Funding: Turkiye Bilimsel ve Teknolojik Arastirma Kurumu (TUBITAK), grant numbers 115C111 and 119E235. WOS:000656825700019 · Scopus Affiliation ID: 60105072 · Science Citation Index Expanded (Q3) · Article · January 2021
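
    As a concrete illustration of the XOR logic that the library's codes are built on, the C sketch below encodes a single parity block as the XOR of k data blocks and rebuilds any one lost block from the survivors. This is a minimal sketch of the XOR repair primitive only, not Founsure's actual multi-dimensional graph-based construction or its API; all names are illustrative.

        /* Single-parity XOR: parity[i] = data[0][i] ^ ... ^ data[k-1][i]. */
        #include <stddef.h>
        #include <stdint.h>
        #include <string.h>

        static void xor_encode(uint8_t **data, size_t k, size_t len, uint8_t *parity)
        {
            memset(parity, 0, len);
            for (size_t b = 0; b < k; b++)
                for (size_t i = 0; i < len; i++)
                    parity[i] ^= data[b][i];   /* pure XOR: compilers vectorize this loop well */
        }

        /* Rebuild one lost data block by XOR-ing the parity with the survivors. */
        static void xor_repair(uint8_t **data, size_t k, size_t len,
                               const uint8_t *parity, size_t lost, uint8_t *out)
        {
            memcpy(out, parity, len);
            for (size_t b = 0; b < k; b++)
                if (b != lost)
                    for (size_t i = 0; i < len; i++)
                        out[i] ^= data[b][i];
        }

    Because every operation is a plain XOR over contiguous buffers, kernels of this kind map directly onto the vector units and multi-threading that the abstract mentions.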

    An erasure-resilient and compute-efficient coding scheme for storage applications

    Driven by rapid technological advancements, the amount of data that is created, captured, communicated, and stored worldwide has grown exponentially over the past decades. Along with this development, it has become critical for many disciplines of science and business to be able to gather and analyze large amounts of data. The sheer volume of the data often exceeds the capabilities of classical storage systems, with the result that current large-scale storage systems are highly distributed and composed of a large number of individual storage components. As with any other electronic device, the reliability of storage hardware is governed by certain probability distributions, which in turn are influenced by the physical processes used to store the information. The traditional way to deal with the inherent unreliability of combined storage systems is to replicate the data several times. Another popular approach to achieving failure tolerance is to calculate block-wise parity in one or more dimensions. With a better understanding of the different failure modes of storage components, it has become evident that sophisticated high-level error detection and correction techniques are indispensable for ever-growing distributed systems. The use of powerful cyclic error-correcting codes, however, comes with a high computational penalty, since the required operations over finite fields do not map well onto current commodity processors. This thesis introduces a versatile coding scheme with fully adjustable fault tolerance that is tailored specifically to modern processor architectures. To reduce stress on the memory subsystem, the conventional table-based algorithm for multiplication over finite fields has been replaced with a polynomial version. This arithmetically intensive algorithm is better suited to the wide SIMD units of currently available general-purpose processors, and it also shows significant benefits on modern many-core accelerator devices (for instance, the popular general-purpose graphics processing units). A CPU implementation using SSE and a GPU version using CUDA are presented. The performance of the multiplication depends on the distribution of the polynomial coefficients in the finite-field elements. This property has been used to create suitable matrices that generate a linear systematic erasure-correcting code with significantly increased multiplication performance for the relevant matrix elements. Several approaches to obtaining the optimized generator matrices are elaborated and their implications discussed. A Monte-Carlo-based construction method makes it possible to influence the specific shape of the generator matrices and thus to adapt them to particular storage and archiving workloads. Extensive benchmarks on CPU and GPU demonstrate the superior performance and the future application scenarios of this novel erasure-resilient coding scheme.
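
    The core idea of replacing table lookups with polynomial arithmetic can be illustrated with a scalar shift-and-XOR multiplication in GF(2^8), sketched below in C. The reduction polynomial 0x11D (x^8 + x^4 + x^3 + x^2 + 1) is a common choice in storage codes and is an assumption here; the thesis's actual field parameters and its SIMD/CUDA kernels are not reproduced.

        #include <stdint.h>

        /* Carry-less multiply of a and b in GF(2^8), reduced modulo 0x11D.
         * Unlike a log/exp table lookup, this loop performs no memory
         * accesses and widens naturally to SIMD lanes or GPU threads. */
        static uint8_t gf256_mul(uint8_t a, uint8_t b)
        {
            uint8_t p = 0;
            while (b) {
                if (b & 1)
                    p ^= a;                  /* addition in GF(2^m) is XOR */
                /* multiply a by x, then reduce modulo the field polynomial */
                a = (uint8_t)((a << 1) ^ ((a & 0x80) ? 0x1D : 0));
                b >>= 1;
            }
            return p;
        }

    This also hints at why multiplication performance depends on the distribution of the polynomial coefficients, as the abstract observes: low-weight operands trigger fewer XOR accumulations, so generator matrices can be shaped to favor cheap elements.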

    OpenMP and POSIX threads implementation of Jerasure 2.0

    Şefik Şuayb Arslan (MEF Author). In shared-memory multiprocessor architectures, threads can be used to implement parallelism. POSIX threads (pthreads) is a low-level, bare-bones programming interface for working with threads, giving extremely fine-grained control over thread management (create/join/etc.), mutexes, and so on. OpenMP, on the other hand, is a much higher-level and more portable shared-memory standard that makes it easier to use multi-threading capabilities and obtain satisfactory performance improvements. Since pthreads is more flexible, it helps programmers gain more control over performance optimizations. The encoding/decoding engines of the Jerasure 2.0 erasure coding library comprise independent "for"-loop iterations and hence have huge potential for multi-threaded processing. In this short paper, we investigate multi-threaded implementations of the encoder/decoder pair of Jerasure 2.0 using two different technologies: OpenMP and pthreads. We constrain our changes to the minimum possible and compare the pure encoding/decoding performance of the two versions against each other, as well as against the original single-threaded version, by running them on two different server systems.
    WOS:000427892400034 · Scopus Affiliation ID: 60105072 · Conference Proceedings Citation Index - Science · Proceedings Paper · June 2017
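
    The pattern the paper exploits can be sketched in C as follows: the per-block encode calls are independent, so the same loop can be parallelized either with a single OpenMP directive or with explicit pthreads. Here encode_block() is a hypothetical stand-in for Jerasure's per-block encoding work, not an actual Jerasure 2.0 function.

        #include <pthread.h>

        #define NBLOCKS  1024
        #define NTHREADS 8

        extern void encode_block(int idx);   /* hypothetical per-block encode kernel */

        /* OpenMP: one directive; the runtime creates, schedules, and joins threads. */
        void encode_openmp(void)
        {
            #pragma omp parallel for schedule(static)
            for (int i = 0; i < NBLOCKS; i++)
                encode_block(i);
        }

        /* pthreads: explicit create/join gives fine-grained control over the split. */
        static void *worker(void *arg)
        {
            for (int i = (int)(long)arg; i < NBLOCKS; i += NTHREADS)  /* cyclic split */
                encode_block(i);
            return NULL;
        }

        void encode_pthreads(void)
        {
            pthread_t tid[NTHREADS];
            for (long t = 0; t < NTHREADS; t++)
                pthread_create(&tid[t], NULL, worker, (void *)t);
            for (long t = 0; t < NTHREADS; t++)
                pthread_join(tid[t], NULL);
        }

    The contrast mirrors the paper's point: the OpenMP version is a one-line change to the original loop, while the pthreads version exposes thread creation, work splitting, and joining for manual tuning.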

    Scaling and Resilience in Numerical Algorithms for Exascale Computing

    The first Petascale supercomputer, the IBM Roadrunner, went online in 2008. Ten years later, the community is looking ahead to a new generation of Exascale machines. During the decade that has passed, several hundred Petascale-capable machines have been installed worldwide, yet despite the abundance of machines, applications that scale to their full size remain rare. Large clusters now routinely have 50,000+ cores; some have several million. This extreme level of parallelism, which has enabled a theoretical compute capacity in excess of a million billion operations per second, turns out to be difficult to exploit in many applications of practical interest. Processors often end up spending more time waiting for synchronization, communication, and other coordinating operations to complete than actually computing. Component reliability is another challenge facing HPC developers: if even a single processor among many thousands fails, the user is forced to restart a traditional application, wasting valuable compute time. These issues collectively manifest themselves as low parallel efficiency, resulting in wasted energy and computational resources. Future performance improvements are expected to continue to come in large part from increased parallelism. One may therefore speculate that the difficulties currently faced when scaling applications to Petascale machines will progressively worsen, making it difficult for scientists to harness the full potential of Exascale computing. The thesis comprises two parts, each consisting of several chapters that discuss modifications of numerical algorithms to make them better suited for future Exascale machines. In the first part, the use of Parareal, a Parallel-in-Time integration technique, for the scalable numerical solution of partial differential equations is considered. We propose a new adaptive scheduler that optimizes parallel efficiency by minimizing the time-subdomain length without making the communication of time-subdomains too costly. In conjunction with an appropriate preconditioner, we demonstrate that it is possible to obtain time-parallel speedup on the nonlinear shallow-water equation beyond what is possible using conventional spatial domain-decomposition techniques alone. The part concludes with the proposal of a new method for constructing Parallel-in-Time integration schemes better suited to convection-dominated problems. In the second part, new ways of mitigating the impact of hardware failures are developed and presented. The topic is introduced with the creation of a new fault-tolerant variant of Parareal. In the chapter that follows, a C++ library for multi-level checkpointing is presented. The library uses lightweight in-memory checkpoints, protected through the use of erasure codes, to mitigate the impact of failures by decreasing the overhead of checkpointing and minimizing the compute work lost. Erasure codes have the unfortunate property that if more blocks are lost than parity blocks were created, the data is effectively unrecoverable. The final chapter contains a preliminary study on partial information recovery for incomplete checksums. Under the assumption that some meta-knowledge exists about the structure of the encoded data, we show that the lost data may be recovered, at least partially. This result is of interest not only in HPC but also in data centers, where erasure codes are widely used to protect data efficiently.
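
    For readers unfamiliar with Parareal, the following C sketch shows the iteration the first part builds on: a cheap coarse propagator G sweeps serially, an expensive fine propagator F runs in parallel across the time subdomains, and each iteration applies the correction U[n+1] = G(U_new[n]) + F(U_old[n]) - G(U_old[n]). The scalar state and the propagators are illustrative placeholders, not the thesis's solver or scheduler.

        #define NSLICES 16   /* time subdomains */
        #define NITER   4    /* Parareal iterations */

        extern double coarse(double u);  /* cheap serial propagator G (placeholder) */
        extern double fine(double u);    /* accurate propagator F (placeholder) */

        void parareal(double u0, double u[NSLICES + 1])
        {
            double g_old[NSLICES], f_old[NSLICES];

            u[0] = u0;
            for (int n = 0; n < NSLICES; n++)        /* initial serial coarse sweep */
                u[n + 1] = g_old[n] = coarse(u[n]);

            for (int k = 0; k < NITER; k++) {
                /* Fine solves over the slices are independent: the parallel part. */
                #pragma omp parallel for
                for (int n = 0; n < NSLICES; n++)
                    f_old[n] = fine(u[n]);

                /* Serial correction sweep with fresh coarse values. */
                for (int n = 0; n < NSLICES; n++) {
                    double g_new = coarse(u[n]);
                    u[n + 1] = g_new + f_old[n] - g_old[n];
                    g_old[n] = g_new;
                }
            }
        }

    The serial coarse sweep is what limits parallel efficiency, which is why the length of the time subdomains, the quantity the proposed adaptive scheduler minimizes, matters.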