13 research outputs found

    Integration of FAPEC as data compressor stage in a SpaceFibre link

    SpaceFibre is a new technology for use onboard spacecraft that provides point-to-point and networked interconnections at 3.125 Gbit/s in flight-qualified technology. SpaceFibre is a European Space Agency (ESA) initiative and will replace the ubiquitous SpaceWire for high-speed applications in space. FAPEC is a lossless data compression algorithm that typically offers better ratios than the CCSDS 121.0 Lossless Data Compression Recommendation on realistic data sets. FAPEC was designed for space communications, where requirements on energy consumption and efficiency are very demanding. In this project we have demonstrated that FAPEC can be easily integrated on top of SpaceFibre to reduce the amount of information that the spacecraft network has to handle. The integration of FAPEC with SpaceFibre has been successfully validated on a representative FPGA platform. In the developed design, FAPEC operated at ~12 Msamples/s (~200 Mbit/s) on a Xilinx Spartan-6, and it is expected to reach Gbit/s speeds with some additional work. The speed of the algorithm has been improved by a factor of 6 while resource usage remains low, around 2% of a Xilinx Virtex-5QV or a Microsemi RTG4. The combination of these two technologies can help reduce the large amounts of data generated by some satellite instruments in a transparent way, without the need for user intervention, and provides a solution to the increasing data volumes on spacecraft. Consequently, the combination of FAPEC with SpaceFibre can help save mass and power consumption and reduce system complexity.
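    A quick sanity check on the quoted figures: at an assumed 16 bits per sample (the abstract does not state the sample width), ~12 Msamples/s works out to ~192 ≈ 200 Mbit/s. The minimal Python sketch below uses that assumption to estimate how much of a 3.125 Gbit/s SpaceFibre lane the compressed stream would occupy at a given compression ratio; all numbers are illustrative.

```python
# Illustrative arithmetic only; BITS_PER_SAMPLE = 16 is an assumption.
SAMPLE_RATE_MSPS = 12      # FAPEC throughput reported on the Spartan-6
BITS_PER_SAMPLE = 16       # assumed sample width
LINK_RATE_MBPS = 3125      # SpaceFibre lane rate (3.125 Gbit/s)

raw_rate_mbps = SAMPLE_RATE_MSPS * BITS_PER_SAMPLE   # 192, i.e. "~200" Mbit/s

def link_utilization(compression_ratio):
    """Fraction of the SpaceFibre lane used by the compressed stream."""
    return (raw_rate_mbps / compression_ratio) / LINK_RATE_MBPS

for cr in (1.0, 2.0, 3.0):
    print(f"ratio {cr:.1f}: {100 * link_utilization(cr):.2f}% of the lane")
```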

    High-Performance Lossless Compression of Hyperspectral Remote Sensing Scenes Based on Spectral Decorrelation

    The capacity of the downlink channel is a major bottleneck for applications based on remote sensing hyperspectral imagery (HSI). Data compression is an essential tool to maximize the amount of HSI scenes that can be retrieved on the ground. At the same time, energy and hardware constraints of spaceborne devices impose limitations on the complexity of practical compression algorithms. To avoid any distortion in the analysis of the HSI data, only lossless compression is considered in this study. This work aims at finding the most advantageous compression-complexity trade-off within the state of the art in HSI compression. To do so, a novel comparison of the most competitive spectral decorrelation approaches combined with the best-performing low-complexity compressors of the state of the art is presented. Compression performance and execution time results are obtained for a set of 47 HSI scenes produced by 14 different sensors in real remote sensing missions. Assuming only a limited amount of energy is available, the obtained data suggest that the FAPEC algorithm yields the best trade-off. When compared to the CCSDS 123.0-B-2 standard, FAPEC is 5.0 times faster and its compressed data rates are on average within 16% of the CCSDS standard. In scenarios where energy constraints can be relaxed, CCSDS 123.0-B-2 yields the best average compression results of all evaluated methods.
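    The quoted trade-off can be made concrete with back-of-the-envelope arithmetic. In the sketch below, only the two relative figures (5.0 times faster, rates within 16%) come from the study; the absolute time and rate values are invented for illustration.

```python
# Only the 5.0x speed factor and the 16% rate gap come from the abstract;
# the absolute values below are placeholders for illustration.
fapec_time = 1.0                      # arbitrary time units per scene
ccsds_time = 5.0 * fapec_time         # FAPEC reported as 5.0x faster
ccsds_rate = 4.0                      # hypothetical bits/sample after coding
fapec_rate = 1.16 * ccsds_rate        # "on average within 16%" of CCSDS

# If onboard energy scales with compute time, FAPEC spends ~5x less energy
# compressing; if downlink energy dominates, CCSDS sends ~14% fewer bits.
print("compute time  FAPEC/CCSDS:", fapec_time / ccsds_time)   # 0.2
print("downlink bits FAPEC/CCSDS:", fapec_rate / ccsds_rate)   # 1.16
```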

    Study, design and implementation of robust entropy coders


    High-Performance Compression of Multibeam Echosounders Water Column Data

    Over the last few decades, multibeam echosounders (MBES) have become the dominant technique to efficiently and accurately map the seafloor. They now allow water column acoustic images to be collected along with the bathymetry, providing a wealth of new possibilities in ocean exploration. However, water column imagery generates vast amounts of data, which poses obvious logistic, economic, and technical challenges. Surprisingly, very few studies have addressed this problem by providing efficient lossless or lossy data compression solutions. Currently, the available options are only lossless, providing low compression ratios at low speeds. In this paper, we adapt a data compression algorithm, the Fully Adaptive Prediction Error Coder (FAPEC), which was created to offer outstanding performance under the strong requirements of space data transmission. We have added to this entropy coder a specific pre-processing stage tailored to the Kongsberg Maritime water column file formats. Here, we test it on data acquired with Kongsberg MBES models EM302, EM710, and EM2040. With this bespoke pre-processing, FAPEC provides good lossless compression ratios at high speeds, whereas lossy ratios reach water column file sizes even smaller than raw bathymetry files, still with good image quality. We show the advantages over other lossless compression solutions, both in terms of compression ratios and speed. We illustrate the quality of water column images after lossy FAPEC compression, as well as its resilience to datagram errors and its potential for automatic detection of water column targets. We also show the successful integration on ARM microprocessors (like those used by smartphones and by autonomous underwater vehicles), which provides a real-time solution for MBES water column data compression.
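    FAPEC's internals are not described here, but the generic predict-then-encode principle behind any prediction error coder is easy to illustrate. A minimal sketch assuming a simple previous-sample predictor and the usual zig-zag mapping (not the actual Kongsberg-specific pre-processing stage described above):

```python
import numpy as np

def prediction_errors(samples):
    """Previous-sample predictor: residuals cluster near zero on smooth data."""
    pred = np.empty_like(samples)
    pred[0] = 0                       # no history for the first sample
    pred[1:] = samples[:-1]
    return samples - pred

def zigzag(e):
    """Map signed residuals to unsigned ints: 0,-1,1,-2,... -> 0,1,2,3,..."""
    return np.where(e >= 0, 2 * e, -2 * e - 1)

beam = np.array([1000, 1004, 1003, 1010, 2500, 1008], dtype=np.int64)
print(zigzag(prediction_errors(beam)))
# Residuals stay small once the predictor locks on; the 2500 outlier yields
# two large values, which a robust entropy coder must absorb gracefully.
```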

    Implementation of a GNSS-R payload based on software defined radio for the 3CAT-2 mission

    The 3CAT-2 nanosatellite aims at demonstrating global navigation satellite system reflectometry (GNSS-R) techniques for spaceborne applications in the small form of a six-unit CubeSat. There are many challenges involved from the size, processing, and power perspectives. The proposed solution for the payload uses a software-defined radio (SDR) connected to a nadir-looking array of dual-band, dual-frequency, and dual-polarization antennas to capture the reflected GNSS signals, and to a zenith-looking patch antenna to capture the direct ones. The SDR is controlled by the payload computer, which retrieves the binary samples and processes the raw data to obtain delay-Doppler maps (DDMs) via various techniques. DDMs are then compressed using the Fully Adaptive Prediction Error Coder algorithm, producing an output more suitable for the limited downlink capabilities of these small platforms.
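    The abstract does not spell out how the DDMs are computed, but a delay-Doppler map is conventionally built by correlating the reflected signal against delayed, Doppler-shifted replicas of the direct signal. A toy numpy sketch of that correlation, with all parameter values invented for the example:

```python
import numpy as np

def ddm(reflected, replica, fs, doppler_bins, max_delay):
    """Toy delay-Doppler map: |correlation| of the reflected signal with
    Doppler-shifted copies of a clean replica, over a grid of delays."""
    n = len(replica)
    t = np.arange(n) / fs
    out = np.empty((len(doppler_bins), max_delay))
    for i, fd in enumerate(doppler_bins):
        shifted = replica * np.exp(2j * np.pi * fd * t)
        for d in range(max_delay):                     # delay in samples
            out[i, d] = abs(np.vdot(shifted, reflected[d:d + n]))
    return out

# Synthetic check: an echo delayed 30 samples with a 500 Hz Doppler shift.
fs, n = 100_000, 1024
rng = np.random.default_rng(0)
replica = rng.choice([-1.0, 1.0], n)                   # PRN-like chip sequence
echo = np.zeros(n + 64, dtype=complex)
tt = np.arange(30, 30 + n) / fs
echo[30:30 + n] = replica * np.exp(2j * np.pi * 500 * tt)
m = ddm(echo, replica, fs, np.arange(-1000, 1001, 250), max_delay=64)
print(np.unravel_index(m.argmax(), m.shape))           # -> (6, 30): 500 Hz, 30
```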

    FAPEC integration as an HDF5 filter

    The Data Compression Group of the Institute for Space Studies of Catalonia (IEEC) has developed FAPEC, the Fully Adaptive Prediction Error Coder. It is a highly optimized adaptive entropy coder that can be applied as a data compression solution for satellite payloads owing to its very fast and autonomous operation, together with a high resilience against data outliers. FAPEC is also being prepared for on-ground applications, for instance within HDF5, a general-purpose library and file format for storing scientific data. In this work we propose a solution to some of the problems found in supercomputing environments by combining an extremely efficient, standard, open-source data management suite with a high-performance data compressor. We do not intend to use such an efficient file format and, later, compress the resulting files or data sets without further ado, as we would lose the file format benefits in the compression process. Our aim is to compress the small portions of data that make up the data sets stored inside the file (known as chunks), thus without losing any of the functionality offered by the data management suite. HDF5 is our choice for the data storage and management format, and FAPEC is the chosen high-performance data compressor. By integrating FAPEC as an HDF5 filter we offer a solution that can solve, in a smart, clean and efficient way, the storage and management problems found in supercomputing environments.
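    HDF5 chunk filters are the hook this integration relies on: the filter sees each chunk independently, so partial reads keep working. A minimal h5py sketch using the built-in gzip filter as a stand-in; an actual FAPEC filter would be registered with HDF5 as a third-party filter plugin under its own filter ID.

```python
import numpy as np
import h5py

data = np.random.randint(0, 2**16, size=(1024, 1024), dtype=np.uint16)

with h5py.File("scene.h5", "w") as f:
    # Each 128x128 chunk is compressed independently by the filter, so the
    # dataset keeps full slicing/partial-I/O functionality.
    f.create_dataset("image", data=data, chunks=(128, 128),
                     compression="gzip", compression_opts=4)

with h5py.File("scene.h5", "r") as f:
    tile = f["image"][:128, :128]     # decompresses only the chunks touched
    print(tile.shape)
```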

    Remote Sensing Data Compression

    A huge amount of data is acquired nowadays by the different remote sensing systems installed on satellites, aircraft, and UAVs. The acquired data then have to be transferred to image processing centres, stored, and/or delivered to customers. In restricted scenarios, data compression is strongly desired or necessary. A wide diversity of coding methods can be used, depending on the requirements and their priority. In addition, the types and properties of images differ a lot; thus, practical implementation aspects have to be taken into account. The Special Issue paper collection taken as the basis of this book touches on all of the aforementioned items to some degree, giving the reader an opportunity to learn about recent developments and research directions in the field of image compression. In particular, lossless and near-lossless compression of multi- and hyperspectral images remains a current topic, since such images constitute extremely large data arrays with rich information that can be retrieved from them for various applications. Another important aspect is the impact of lossless compression on image classification and segmentation, where a reasonable compromise between the characteristics of compression and the final tasks of data processing has to be achieved. The problems of data transmission from UAV-based acquisition platforms, as well as the use of FPGAs and neural networks, have become very important. Finally, attempts to apply compressive sensing approaches in remote sensing image processing with positive outcomes are observed. We hope that readers will find our book useful and interesting.

    High-throughput variable-to-fixed entropy codec using selective, stochastic code forests

    Efficient high-throughput (HT) compression algorithms are paramount to meet the stringent constraints of present and upcoming data storage, processing, and transmission systems. In particular, latency, bandwidth, and energy requirements are critical for those systems. Most HT codecs are designed to maximize compression speed and, secondarily, to minimize compressed lengths. On the other hand, decompression speed is often equally or more critical than compression speed, especially in scenarios where decompression is performed multiple times and/or at critical parts of a system. In this work, an algorithm to design variable-to-fixed (VF) codes is proposed that prioritizes decompression speed. Stationary Markov analysis is employed to generate multiple, jointly optimized codes (denoted code forests). Their average compression efficiency is on par with the state of the art in VF codes, e.g., within 1% of Yamamoto et al.'s algorithm. The proposed code forest structure enables the implementation of highly efficient codecs, with decompression speeds 3.8 times faster than other state-of-the-art HT entropy codecs with equal or better compression ratios for natural data sources. Compared to these HT codecs, the proposed forests yield similar compression efficiency and speeds.
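    The jointly optimized code forests are the paper's contribution; the plain variable-to-fixed baseline they improve on is the classic Tunstall code, sketched below (a textbook single-tree construction, not the proposed algorithm):

```python
import heapq

def tunstall(probs, code_bits):
    """Classic Tunstall construction: grow a parse tree by repeatedly
    expanding the most probable leaf, until a full expansion no longer
    fits within the 2**code_bits codeword budget."""
    heap = [(-p, s) for s, p in probs.items()]     # max-heap via negation
    heapq.heapify(heap)
    # Expanding one leaf replaces it with len(probs) children, so each
    # expansion adds len(probs) - 1 leaves.
    while len(heap) + len(probs) - 1 <= 2 ** code_bits:
        neg_p, s = heapq.heappop(heap)             # most probable leaf
        for sym, p in probs.items():               # expand into children
            heapq.heappush(heap, (neg_p * p, s + sym))
    words = sorted(s for _, s in heap)
    return {w: i for i, w in enumerate(words)}     # parse string -> fixed code

book = tunstall({"a": 0.7, "b": 0.2, "c": 0.1}, code_bits=3)
print(book)   # 7 parse strings here, each mapped to a fixed 3-bit index
```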

    Extending the PCIe Interface with Parallel Compression/Decompression Hardware for Energy and Performance Optimization

    PCIe is a high-performance interface used to move data from a central host PC to an accelerator such as a Field Programmable Gate Array (FPGA). This interface allows a system to perform fast data transfers in High-Performance Computing (HPC) and provides a performance boost. However, HPC systems normally require large datasets, and in these situations PCIe can become a bottleneck. To address this issue, we propose an open-source hardware compression/decompression system that can adapt to continuously streamed data with low latency and high throughput. We implement compressor and decompressor engines on an FPGA, scale up with multiple engines working in parallel, and evaluate the energy reduction and performance with different numbers of engines. To alleviate the performance bottleneck in the processor acting as a controller, we propose a hardware scheduler to fairly distribute the datasets among the engines. Our design reduces the transmission time over PCIe, and the results show an energy reduction of up to 48% in the PCIe transfers, thanks to the decrease in the number of bits that have to be transmitted. The overhead in terms of latency is kept to a minimum and is user-selectable depending on the tolerances of the intended application.
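    The scheduler in the paper is a hardware block; as a software analogue, a least-loaded policy (an assumption, since the abstract does not name the exact rule) distributes chunks fairly across engines:

```python
import heapq

def schedule(chunk_sizes, n_engines):
    """Toy fair scheduler: give each chunk to the engine with the least
    accumulated work, so no single engine becomes the bottleneck."""
    engines = [(0, i) for i in range(n_engines)]   # (work, engine_id) min-heap
    heapq.heapify(engines)
    assignment = []
    for size in chunk_sizes:
        work, eid = heapq.heappop(engines)         # least-loaded engine
        assignment.append(eid)
        heapq.heappush(engines, (work + size, eid))
    return assignment

print(schedule([8, 3, 5, 2, 7, 4], n_engines=3))   # -> [0, 1, 2, 1, 1, 2]
```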