
    Spatial Implementation for Erasure Coding by Finite Radon Transform

    Fault tolerance has been widely studied in recent years in order to accommodate new kinds of applications running on unreliable systems such as the Internet. Erasure coding aims at recovering information that has been lost during a transmission (e.g. due to congestion). Considered an alternative to the Automatic Repeat-reQuest (ARQ) strategy, erasure coding differs by adding redundancy so that lost information can be recovered without retransmitting data. In this paper we propose a new approach using the Finite Radon Transform (FRT). The FRT is an exact and discrete transform that relies on simple additions to obtain a set of projections. The proposed erasure code is Maximum Distance Separable (MDS). We detail both the systematic and non-systematic implementations. As an optimization, we use the same algorithm, called "row-solving", both for creating the redundancy and for recovering missing data.
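    The projection step the abstract describes can be sketched compactly. This is a minimal, illustrative computation of the p + 1 FRT projections of a p x p array (p prime) using only additions; the function name and the projection ordering are choices made here, not the paper's exact formulation.

```python
def frt(image, p):
    """Return the p + 1 FRT projections of a p x p integer array."""
    projections = []
    for m in range(p):  # one projection per discrete slope m
        proj = [0] * p
        for x in range(p):
            for y in range(p):
                # each pixel is added into exactly one bin of this projection
                proj[(y + m * x) % p] += image[x][y]
        projections.append(proj)
    # the extra (p + 1)-th projection simply sums each row of the array
    projections.append([sum(row) for row in image])
    return projections
```

Every projection redistributes the full image sum, which is the redundancy an erasure decoder (such as the paper's "row-solving" algorithm) can exploit to rebuild missing data.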

    Utilization of forward error correction (FEC) techniques with extensible markup language (XML) schema-based binary compression (XSBC) technology

    In order to plug current open-source, open-standard Java programming technology into the building blocks of the US Navy's ForceNet, stove-piped systems first need to be made extensible to other pertinent applications; a new paradigm of adopting extensible, cross-platform open technologies can then begin to bridge gaps between old and new weapons systems. A real-time battle-space picture, with as much or as little detail as needed, is now a vital requirement, and access to this information via wireless laptop technology is already available. Transmission of data to increase the resolution of that battle-space snapshot will invariably take place over noisy links, such as those found in the shallow-water littoral regions where Autonomous Underwater Vehicles and Unmanned Underwater Vehicles (AUVs/UUVs) gather intelligence for the sea warrior who needs it. The battle-space picture built from data transmitted across these noisy and unpredictable acoustic regions demands efficiency and reliability features that remain abstract to the user. To realize this efficiency, Extensible Markup Language (XML) Schema-based Binary Compression (XSBC), in combination with Vandermonde-based Forward Error Correction (FEC) erasure codes, offers efficient streaming of plain-text XML documents in highly compressed form, together with a data self-healing capability should data be lost during transmission over unpredictable media. Both the XSBC and FEC libraries detailed in this thesis are open-source Java Application Program Interfaces (APIs) that can readily be adapted for extensible, cross-platform applications, adding functional capability to ForceNet that the sea warrior can access on demand, at sea, and in real time.
    These features are presented in the Autonomous Underwater Vehicle (AUV) Workbench (AUVW), a Java-based application intended to become a valuable tool for warriors involved in Undersea Warfare (UW). http://archive.org/details/utilizationoffor109451247 Lieutenant, United States Navy. Approved for public release; distribution is unlimited.
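    The core idea behind a Vandermonde-based erasure code can be illustrated in a few lines. This is a toy sketch, not the thesis's Java library: the k data symbols are treated as polynomial coefficients over the prime field GF(257) (a modulus chosen here for simplicity), each of the n shares is an evaluation at a distinct point, and any k shares recover the data by Lagrange interpolation.

```python
P = 257  # small prime modulus; data symbols must be < 257

def poly_mul(a, b):
    """Multiply two coefficient lists modulo P."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def encode(data, n):
    """Evaluate the data polynomial at points 1..n to get n shares."""
    shares = []
    for x in range(1, n + 1):
        y = 0
        for coeff in reversed(data):  # Horner's rule mod P
            y = (y * x + coeff) % P
        shares.append((x, y))
    return shares

def decode(shares, k):
    """Recover the k data symbols from any k shares (Lagrange interpolation)."""
    assert len(shares) == k
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(shares):
        num = [1]   # numerator polynomial: product of (X - xj) for j != i
        denom = 1
        for j, (xj, _) in enumerate(shares):
            if j == i:
                continue
            num = poly_mul(num, [(-xj) % P, 1])
            denom = denom * (xi - xj) % P
        scale = yi * pow(denom, P - 2, P) % P  # divide via Fermat inverse
        for d in range(k):
            coeffs[d] = (coeffs[d] + scale * num[d]) % P
    return coeffs
```

Losing up to n - k shares is harmless: any k surviving evaluations determine the degree-(k-1) polynomial, which is the "self-healing" property the abstract refers to.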

    An erasure-resilient and compute-efficient coding scheme for storage applications

    Driven by rapid technological advancements, the amount of data that is created, captured, communicated, and stored worldwide has grown exponentially over the past decades. Along with this development, it has become critical for many disciplines of science and business to be able to gather and analyze large amounts of data. The sheer volume of the data often exceeds the capabilities of classical storage systems, with the result that current large-scale storage systems are highly distributed and consist of a large number of individual storage components. As with any other electronic device, the reliability of storage hardware is governed by certain probability distributions, which in turn are influenced by the physical processes used to store the information. The traditional way to deal with the inherent unreliability of combined storage systems is to replicate the data several times. Another popular approach to achieving failure tolerance is to calculate block-wise parity in one or more dimensions. With a better understanding of the different failure modes of storage components, it has become evident that sophisticated high-level error detection and correction techniques are indispensable for ever-growing distributed systems. The use of powerful cyclic error-correcting codes, however, comes with a high computational penalty, since the required operations over finite fields do not map well onto current commodity processors. This thesis introduces a versatile coding scheme with fully adjustable fault tolerance that is tailored specifically to modern processor architectures. To reduce stress on the memory subsystem, the conventional table-based algorithm for multiplication over finite fields has been replaced with a polynomial version.
    This arithmetically intense algorithm is better suited to the wide SIMD units of currently available general-purpose processors, and it also shows significant benefits on modern many-core accelerator devices (for instance, popular general-purpose graphics processing units). A CPU implementation using SSE and a GPU version using CUDA are presented. The performance of the multiplication depends on the distribution of the polynomial coefficients in the finite-field elements. This property has been used to create suitable matrices that generate a linear systematic erasure-correcting code with significantly increased multiplication performance for the relevant matrix elements. Several approaches to obtaining the optimized generator matrices are elaborated and their implications discussed. A Monte-Carlo-based construction method makes it possible to influence the specific shape of the generator matrices and thus to adapt them to special storage and archiving workloads. Extensive benchmarks on CPU and GPU demonstrate the superior performance and the future application scenarios of this novel erasure-resilient coding scheme.
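    The contrast between table-based and polynomial finite-field multiplication can be shown with a short scalar sketch. This is not the thesis's SIMD code, merely the shape of the idea: multiply in GF(2^8) with shifts and XORs instead of lookup tables, reducing on overflow modulo an irreducible polynomial (the AES polynomial x^8 + x^4 + x^3 + x + 1 is the choice made here).

```python
def gf256_mul(a, b, poly=0x11B):
    """Multiply two GF(2^8) elements without lookup tables."""
    result = 0
    while b:
        if b & 1:            # add (XOR) a shifted copy for each set bit of b
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:        # degree exceeded 7: reduce modulo the field polynomial
            a ^= poly
    return result
```

Because the inner loop is pure bitwise arithmetic with no memory lookups, it vectorizes naturally across wide SIMD lanes, which is the property the thesis exploits.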

    Solving 3D relativistic hydrodynamical problems with WENO discontinuous Galerkin methods

    Discontinuous Galerkin (DG) methods coupled to WENO algorithms allow high-order convergence for smooth problems as well as the simulation of discontinuities and shocks. In this work, we investigate WENO-DG algorithms in the context of numerical general relativity, in particular for general relativistic hydrodynamics. We implement the standard WENO method at different orders, a compact (simple) WENO scheme, as well as an alternative subcell evolution algorithm. To evaluate the performance of the different numerical schemes, we study non-relativistic, special relativistic, and general relativistic testbeds. We present the first three-dimensional simulations of general relativistic hydrodynamics, albeit for a fixed spacetime background, within the framework of WENO-DG methods. The most important testbed is a single TOV star in three dimensions, showing that long-term stable simulations of single isolated neutron stars can be obtained with WENO-DG methods. Comment: 21 pages, 10 figures.

    Projections and Discrete Distances

    This work lies in the field of discrete geometry. Discrete tomography is approached through its links with information theory, illustrated by the application of the Mojette transform and the Finite Radon Transform to redundant coding of information for transmission and distributed storage. Discrete distances are presented from both a theoretical point of view (with a new class of distances constructed from paths with variable weights) and an algorithmic one (distance transform, medial axis, granulometry), in particular via single-scan ("streaming") image methods. The link with non-decreasing integer sequences and the Lambek-Moser inverse is highlighted.
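    For context on the distance transforms mentioned above, here is the classical two-pass city-block (L1) distance transform that single-scan streaming methods improve upon. This is a textbook sketch, not the thesis's algorithm: each pixel ends up holding its L1 distance to the nearest background (0) pixel.

```python
def distance_transform_l1(grid):
    """Two-pass chamfer-style L1 distance transform of a binary grid."""
    h, w = len(grid), len(grid[0])
    INF = h + w  # larger than any possible L1 distance inside the grid
    dist = [[0 if grid[y][x] == 0 else INF for x in range(w)] for y in range(h)]
    # forward raster scan: propagate distances from top/left neighbors
    for y in range(h):
        for x in range(w):
            if y > 0:
                dist[y][x] = min(dist[y][x], dist[y - 1][x] + 1)
            if x > 0:
                dist[y][x] = min(dist[y][x], dist[y][x - 1] + 1)
    # backward raster scan: propagate distances from bottom/right neighbors
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                dist[y][x] = min(dist[y][x], dist[y + 1][x] + 1)
            if x < w - 1:
                dist[y][x] = min(dist[y][x], dist[y][x + 1] + 1)
    return dist
```

The two raster scans touch the image twice; the streaming methods the abstract refers to aim to produce the same information in a single pass over the pixels.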

    Topics on Reliable and Secure Communication using Rank-Metric and Classical Linear Codes


    Computer Aided Verification

    This open access two-volume set, LNCS 13371 and 13372, constitutes the refereed proceedings of the 34th International Conference on Computer Aided Verification, CAV 2022, which was held in Haifa, Israel, in August 2022. The 40 full papers presented together with 9 tool papers and 2 case studies were carefully reviewed and selected from 209 submissions. The papers were organized in the following topical sections: Part I: invited papers; formal methods for probabilistic programs; formal methods for neural networks; software verification and model checking; hyperproperties and security; formal methods for hardware, cyber-physical, and hybrid systems. Part II: probabilistic techniques; automata and logic; deductive verification and decision procedures; machine learning; synthesis and concurrency. This is an open access book.

    Applications of Derandomization Theory in Coding

    Randomized techniques play a fundamental role in theoretical computer science and discrete mathematics, in particular for the design of efficient algorithms and the construction of combinatorial objects. The basic goal in derandomization theory is to eliminate or reduce the need for randomness in such randomized constructions. In this thesis, we explore applications of the fundamental notions of derandomization theory to problems outside the core of theoretical computer science, in particular to certain problems related to coding theory. First, we consider the wiretap channel problem, which involves a communication system in which an intruder can eavesdrop on a limited portion of the transmissions, and we construct efficient and information-theoretically optimal communication protocols for this model. Then we consider the combinatorial group testing problem. In this classical problem, one aims to determine a set of defective items within a large population by asking a number of queries, where each query reveals whether a defective item is present within a specified group of items. We use randomness condensers to explicitly construct optimal, or nearly optimal, group testing schemes for a setting where the query outcomes can be highly unreliable, as well as for the threshold model, where a query returns positive only if the number of defectives passes a certain threshold. Finally, we design ensembles of error-correcting codes that achieve the information-theoretic capacity of a large class of communication channels, and then use the obtained ensembles for the construction of explicit capacity-achieving codes. [This is a shortened version of the actual abstract in the thesis.] Comment: EPFL PhD thesis.
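    The classical group testing setting described above has a very simple noiseless baseline decoder, sometimes called COMP: an item is declared defective exactly when no test containing it came back negative. This toy sketch shows that baseline only; the thesis's contribution is explicit schemes that remain sound when outcomes are unreliable or thresholded.

```python
def comp_decode(tests, outcomes, n_items):
    """Naive noiseless group testing decoder.

    tests:    list of groups, each a list of item indices pooled together
    outcomes: list of booleans, True if the corresponding test was positive
    """
    candidates = set(range(n_items))
    for group, positive in zip(tests, outcomes):
        if not positive:          # a negative test clears every item it contains
            candidates -= set(group)
    return sorted(candidates)     # items never cleared are declared defective
```

With few defectives and well-chosen pools, far fewer than n_items tests suffice, which is what makes explicit constructions of the pooling scheme the interesting part.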