
    A Universal Parallel Two-Pass MDL Context Tree Compression Algorithm

    Computing problems that handle large amounts of data necessitate lossless data compression for efficient storage and transmission. We present a novel lossless universal data compression algorithm that uses parallel computational units to increase the throughput. The length-N input sequence is partitioned into B blocks. Processing each block independently of the others can accelerate the computation by a factor of B, but degrades the compression quality. Instead, our approach is to first estimate the minimum description length (MDL) context tree source underlying the entire input, and then encode each of the B blocks in parallel based on the MDL source. With this two-pass approach, the compression loss incurred by using more parallel units is insignificant. Our algorithm is work-efficient, i.e., its computational complexity is O(N/B). Its redundancy is approximately B log(N/B) bits above Rissanen's lower bound on universal compression performance, with respect to any context tree source whose maximal depth is at most log(N/B). We improve the compression by using different quantizers for states of the context tree based on the number of symbols corresponding to those states. Numerical results from a prototype implementation suggest that our algorithm offers a better trade-off between compression and throughput than competing universal data compression algorithms.
    Comment: Accepted to Journal of Selected Topics in Signal Processing special issue on Signal Processing for Big Data (expected publication date June 2015). 10 pages double column, 6 figures, and 2 tables. arXiv admin note: substantial text overlap with arXiv:1405.6322. Version: Mar 2015: Corrected a typo
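    A minimal sketch of the two-pass idea, under simplifying assumptions: a fixed-depth byte-context model stands in for the estimated MDL context tree, ideal code lengths stand in for an actual arithmetic coder, and Python's multiprocessing handles the parallel block encoders. All names are illustrative, not the authors' implementation.

```python
import math
from collections import Counter, defaultdict
from concurrent.futures import ProcessPoolExecutor

K = 3           # fixed context depth -- a stand-in for the estimated MDL context tree
ALPHABET = 256  # byte alphabet

def estimate_model(data: bytes, k: int = K):
    """Pass 1: collect symbol counts conditioned on the previous k bytes of the whole input."""
    counts = defaultdict(Counter)
    for i in range(k, len(data)):
        counts[data[i - k:i]][data[i]] += 1
    return dict(counts)

def block_code_length(args):
    """Pass 2 (one block): ideal code length in bits under the shared model,
    using Laplace-smoothed conditional probabilities instead of a real coder."""
    start, end, data, counts, k = args
    bits = 0.0
    for i in range(max(start, k), end):
        c = counts.get(data[i - k:i], Counter())
        p = (c[data[i]] + 1) / (sum(c.values()) + ALPHABET)
        bits += -math.log2(p)
    return bits

def parallel_two_pass(data: bytes, num_blocks: int = 4, k: int = K) -> float:
    counts = estimate_model(data, k)        # shared model estimated from the entire input
    size = -(-len(data) // num_blocks)      # ceil(len(data) / num_blocks)
    jobs = [(s, min(s + size, len(data)), data, counts, k)
            for s in range(0, len(data), size)]
    with ProcessPoolExecutor() as pool:     # blocks are encoded independently, in parallel
        return sum(pool.map(block_code_length, jobs))

if __name__ == "__main__":
    text = b"abracadabra " * 500
    print(f"~{parallel_two_pass(text):.0f} bits for {len(text)} input bytes")
```

    Because every block is coded against the same globally estimated model, adding more blocks changes only how the work is split, not the statistics used for coding, which is why the parallelism costs little in compression.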

    Self-organizing lists on the Xnet

    The first parallel designs for implementing self-organizing lists on the Xnet interconnection network are presented. Self-organizing lists permute the order of list entries after an entry is accessed, according to some update heuristic. The heuristic attempts to place frequently requested entries closer to the front of the list. This paper outlines Xnet systems for self-organizing lists under the move-to-front and transpose update heuristics. Our novel designs can be used to achieve high-speed lossless text compression.
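    For reference, the two update heuristics on an ordinary sequential list look like the following sketch; this is a plain Python illustration of the heuristics themselves, not the Xnet-parallel designs described above.

```python
def access_move_to_front(lst, key):
    """Move-to-front: the accessed entry jumps to the head of the list."""
    i = lst.index(key)          # access cost grows with the entry's position
    lst.insert(0, lst.pop(i))
    return i

def access_transpose(lst, key):
    """Transpose: the accessed entry swaps with its immediate predecessor."""
    i = lst.index(key)
    if i > 0:
        lst[i - 1], lst[i] = lst[i], lst[i - 1]
    return i

# Frequently requested keys drift toward the front, so later accesses get cheaper;
# the sequence of access positions is what a compressor would go on to encode.
lst = list("edcba")
for k in "aaabba":
    access_move_to_front(lst, k)
print(lst)   # 'a' and 'b' now sit near the head
```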

    A very high speed lossless compression/decompression chip set

    A chip set is described that performs lossless compression and decompression using the Rice algorithm. The chip set is designed to compress and decompress source data in real time for many applications. The encoder is designed to code at 20 Msamples/second at MIL specifications, which corresponds to 280 Mbits/second at maximum quantization or approximately 500 Mbits/second under nominal conditions. The decoder is designed to decode at 10 Msamples/second at industrial specifications. A wide range of quantization levels is allowed (4 to 14 bits), and both nearest-neighbor prediction and external prediction are supported. When the pre- and post-processors are bypassed, the chip set performs high-speed entropy coding and decoding, which frees it from being tied to one modeling technique or specific application. Both the encoder and decoder are being fabricated in a 1.0 micron CMOS process that has been tested to survive 1 megarad of total radiation dosage. The CMOS chips are small, only 5 mm on a side, and both are estimated to consume less than a quarter of a watt while operating at maximum frequency.
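    The entropy-coding core of a Rice coder is compact enough to sketch in a few lines. The snippet below is an illustrative Python version of one split-sample option (a Golomb code with a power-of-two parameter) applied to nearest-neighbor prediction residuals; the parameter names, the residual mapping, and the fixed choice of k are our assumptions, not the chip set's exact bitstream format.

```python
def zigzag(d: int) -> int:
    """Map a signed prediction residual to a non-negative integer."""
    return 2 * d if d >= 0 else -2 * d - 1

def rice_encode(n: int, k: int) -> str:
    """Rice code: unary-coded quotient, then the k-bit binary remainder."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

def encode_samples(samples, k=2):
    """Nearest-neighbor prediction: code the mapped difference to the previous sample."""
    prev, bits = 0, []
    for s in samples:
        bits.append(rice_encode(zigzag(s - prev), k))
        prev = s
    return "".join(bits)

print(encode_samples([10, 11, 11, 13, 12, 12, 14]))
```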

    The implementation of a lossless data compression module in an advanced orbiting system: Analysis and development

    Data compression has been proposed for several flight missions as a means of reducing onboard mass data storage, increasing science data return through a bandwidth-constrained channel, reducing TDRSS access time, or easing ground archival mass storage requirements. Several issues arise with the implementation of this technology. These include the requirement for a clean channel, an onboard smoothing buffer, and onboard processing hardware, as well as requirements on the algorithm itself, such as adaptability to scene changes and perhaps even versatility across the various mission types. This paper gives an overview of an ongoing effort at Goddard Space Flight Center to implement a lossless data compression scheme for space flight. We provide analysis results on several data system issues, the performance of the selected lossless compression scheme, the status of the hardware processor, and the current development plan.

    EChO Payload electronics architecture and SW design

    EChO is a three-module (VNIR, SWIR, MWIR), highly integrated spectrometer covering the wavelength range from 0.55 μm to 11.0 μm. The baseline design includes the goal wavelength extension to 0.4 μm, while an optional LWIR module extends the range to the goal wavelength of 16.0 μm. An Instrument Control Unit (ICU) is foreseen as the main electronic subsystem interfacing the spacecraft and collecting data from all the payload spectrometer modules. The ICU is in charge of two main tasks: the overall payload control (Instrument Control Function) and the digital processing of housekeeping and scientific data (Data Processing Function), including lossless compression prior to storing the science data in the spacecraft's Solid State Mass Memory. These two main tasks are accomplished by the Payload On Board Software (P-OBSW) running on the ICU CPUs.
    Comment: Experimental Astronomy - EChO Special Issue 201
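    A minimal sketch of the Data Processing Function's flow as described above, assuming generic acquisition and storage interfaces and using zlib purely as a stand-in for the on-board lossless compressor; every name here is hypothetical and only the module list and the compress-before-storage step come from the abstract.

```python
import zlib  # stand-in for the actual on-board lossless compression algorithm

MODULES = ("VNIR", "SWIR", "MWIR")  # plus an optional LWIR module in the extended design

def data_processing_function(acquire_frame, store_to_mass_memory):
    """Illustrative loop: collect science frames from each spectrometer module,
    compress them losslessly, and hand the packets to the spacecraft's
    Solid State Mass Memory (both callbacks are hypothetical interfaces)."""
    for module in MODULES:
        frame = acquire_frame(module)        # raw science data from one module
        packet = zlib.compress(frame)        # lossless compression before storage
        store_to_mass_memory(module, packet)

# Example wiring with dummy callbacks:
data_processing_function(
    acquire_frame=lambda m: (m * 100).encode(),
    store_to_mass_memory=lambda m, p: print(m, len(p), "bytes stored"),
)
```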