
    New Algorithms and Lower Bounds for Sequential-Access Data Compression

    This thesis concerns sequential-access data compression, i.e., compression by algorithms that read the input one or more times from beginning to end. In one chapter we consider adaptive prefix coding, for which we must read the input character by character, outputting each character's self-delimiting codeword before reading the next one. We show how to encode and decode each character in constant worst-case time while producing an encoding whose length is worst-case optimal. In another chapter we consider one-pass compression with memory bounded in terms of the alphabet size and context length, and prove a nearly tight tradeoff between the amount of memory we can use and the quality of the compression we can achieve. In a third chapter we consider compression in the read/write streams model, which allows us passes and memory both polylogarithmic in the size of the input. We first show how to achieve universal compression using only one pass over one stream. We then show that one stream is not sufficient for achieving good grammar-based compression. Finally, we show that two streams are necessary and sufficient for achieving entropy-only bounds.
    Comment: draft of PhD thesis
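
The character-by-character setting described above, where each symbol gets a self-delimiting codeword before the next symbol is read, can be illustrated with a toy adaptive coder. The sketch below uses move-to-front ranks encoded with Elias gamma codes; this is purely an illustration of sequential self-delimiting coding, not the thesis's constant-time, worst-case-optimal algorithm.

```python
# Toy adaptive prefix coder: each character is encoded as the Elias-gamma
# code of its move-to-front rank. Gamma codes are self-delimiting, so the
# decoder recovers characters one at a time, mirroring the encoder's state.

def elias_gamma(n):
    """Self-delimiting code for an integer n >= 1."""
    b = bin(n)[2:]                       # binary without the '0b' prefix
    return "0" * (len(b) - 1) + b        # length-1 zeros, then the binary form

def elias_gamma_decode(bits, pos):
    """Decode one gamma codeword starting at bits[pos]; return (n, new_pos)."""
    zeros = 0
    while bits[pos + zeros] == "0":
        zeros += 1
    n = int(bits[pos + zeros:pos + 2 * zeros + 1], 2)
    return n, pos + 2 * zeros + 1

def encode(text, alphabet):
    table = list(alphabet)               # move-to-front list adapts to the input
    out = []
    for ch in text:
        r = table.index(ch)
        out.append(elias_gamma(r + 1))   # ranks are 1-based for gamma coding
        table.insert(0, table.pop(r))    # move the accessed symbol to the front
    return "".join(out)

def decode(bits, alphabet):
    table = list(alphabet)
    out, pos = [], 0
    while pos < len(bits):
        r, pos = elias_gamma_decode(bits, pos)
        ch = table[r - 1]
        out.append(ch)
        table.insert(0, table.pop(r - 1))
    return "".join(out)

# Round trip on a small example: frequent characters stay near the front
# of the table and therefore get short codewords.
assert decode(encode("abacabad", "abcdefgh"), "abcdefgh") == "abacabad"
```

Because every codeword is self-delimiting, the decoder never needs lookahead, which is exactly the constraint the adaptive prefix coding chapter works under.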

    Real-time and distributed applications for dictionary-based data compression

    The greedy approach to dictionary-based static text compression can be executed by a finite state machine. When it is applied in parallel to different blocks of data independently, it remains robust even on standard large-scale distributed systems with input files of arbitrary size. Beyond standard large scale, however, the very small size of the data blocks degrades compression effectiveness. This paper presents a robust approach for extreme distributed systems that fixes this problem by overlapping adjacent blocks and preprocessing the neighborhoods of the boundaries. Moreover, we introduce the notion of a pseudo-prefix dictionary, which allows optimal compression by means of a real-time semi-greedy procedure and slightly improves the compression ratio obtained by the distributed implementations.
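
The block-parallel greedy parsing described above can be sketched as follows. The dictionary and block layout are made-up illustrations; the paper's finite-state-machine execution, boundary overlapping, and pseudo-prefix dictionaries are not reproduced here.

```python
# Minimal sketch of greedy static dictionary parsing applied independently
# to fixed-size blocks, as in the distributed setting. Single characters
# are always emittable, so every block parses without reference to its
# neighbors (which is what makes the approach embarrassingly parallel).

def greedy_parse(block, dictionary, max_len):
    """Greedily emit the longest dictionary phrase matching at each position."""
    out, i = [], 0
    while i < len(block):
        match = block[i]                 # fall back to a single character
        for l in range(min(max_len, len(block) - i), 1, -1):
            if block[i:i + l] in dictionary:
                match = block[i:i + l]   # longest match wins
                break
        out.append(match)
        i += len(match)
    return out

def parallel_parse(text, dictionary, block_size):
    """Parse fixed-size blocks independently; each call could run on its own node."""
    max_len = max(len(p) for p in dictionary)
    blocks = [text[i:i + block_size] for i in range(0, len(text), block_size)]
    return [greedy_parse(b, dictionary, max_len) for b in blocks]
```

Note that a phrase straddling a block boundary is lost, e.g. `parallel_parse` splits a match that `greedy_parse` on the whole input would find; this is precisely the small-block effect the paper's overlapping of adjacent blocks is designed to repair.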

    Evaluation of GPU/CPU Co-Processing Models for JPEG 2000 Packetization

    With the bottom-line goal of increasing the throughput of a GPU-accelerated JPEG 2000 encoder, this paper evaluates whether the post-compression rate control and packetization routines should be carried out on the CPU or on the GPU. Three co-processing models that differ in how the workload is split among the CPU and GPU are introduced. Both routines are discussed and algorithms for executing them in parallel are presented. Experimental results for compressing a detail-rich UHD sequence to 4 bits/sample indicate speed-ups of 200x for the rate control and 100x for the packetization compared to the single-threaded implementation in the commercial Kakadu library. These two routines executed on the CPU take 4x as long as all remaining coding steps on the GPU and therefore present a bottleneck. Even if the CPU bottleneck could be avoided with multi-threading, it is still beneficial to execute all coding steps on the GPU, as this minimizes the required device-to-host transfer and thereby speeds up the critical path from 17.2 fps to 19.5 fps for 4 bits/sample and to 22.4 fps for 0.16 bits/sample.
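
The post-compression rate control referred to above can be sketched in the spirit of JPEG 2000's Lagrangian truncation-point selection. The truncation points and budget below are invented for illustration, and the GPU parallelization evaluated in the paper is not shown: for a multiplier `lam`, each code-block independently picks the truncation point minimizing distortion plus `lam` times rate, and `lam` is bisected until the total rate meets the budget.

```python
# Hedged sketch of Lagrangian post-compression rate control: blocks is a
# list of per-code-block truncation points, each a list of (rate, distortion)
# pairs with increasing rate and decreasing distortion. The per-block
# minimizations are independent, which is what makes the routine parallel.

def best_truncation(points, lam):
    """Pick the truncation point minimizing distortion + lam * rate."""
    return min(points, key=lambda p: p[1] + lam * p[0])

def rate_control(blocks, budget, iters=60):
    """Bisect the multiplier until the selected total rate fits the budget."""
    lo, hi = 0.0, 1e9                    # hi starts large enough to be feasible
    for _ in range(iters):
        lam = (lo + hi) / 2
        rate = sum(best_truncation(b, lam)[0] for b in blocks)
        if rate > budget:
            lo = lam                     # over budget: penalize rate more
        else:
            hi = lam                     # feasible: try spending more bits
    chosen = [best_truncation(b, hi) for b in blocks]  # hi stays feasible
    return chosen, sum(r for r, _ in chosen)
```

Because each code-block's minimization touches only its own rate-distortion points, the inner loop maps naturally onto one GPU thread per code-block, which is the kind of parallel execution the paper's co-processing models trade off against CPU execution.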

    Data compression for the microgravity experiments

    Researchers present the environment and conditions under which data compression is to be performed for the microgravity experiments, along with some coding techniques that would be useful in this environment. It should be emphasized that the researchers are currently at the beginning of this program and the toolkit mentioned is far from complete.

    Spectral Efficiency of MIMO Millimeter-Wave Links with Single-Carrier Modulation for 5G Networks

    Future wireless networks will extensively rely upon bandwidths centered on carrier frequencies larger than 10 GHz. Indeed, recent research has shown that, despite the large path-loss, millimeter wave (mmWave) frequencies can be successfully exploited to transmit very large data-rates over short distances to slowly moving users. Due to hardware complexity and cost constraints, single-carrier modulation schemes, as opposed to the popular multi-carrier schemes, are being considered for use at mmWave frequencies. This paper presents preliminary studies on the achievable spectral efficiency on a wireless MIMO link operating at mmWave in a typical 5G scenario. Two different single-carrier modem schemes are considered, i.e. a traditional modulation scheme with linear equalization at the receiver, and a single-carrier modulation with cyclic prefix, frequency-domain equalization and FFT-based processing at the receiver. Our results show that the former achieves a larger spectral efficiency than the latter. Results also confirm that the spectral efficiency increases with the dimension of the antenna array, as well as that performance gets severely degraded when the link length exceeds 100 meters and the transmit power falls below 0 dBW. Nonetheless, mmWave frequencies appear to be very well suited for providing very large data-rates over short distances.
    Comment: 8 pages, 8 figures, to appear in Proc. 20th International ITG Workshop on Smart Antennas (WSA2016)
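
The capacity expression behind these spectral-efficiency results, SE = log2 det(I + (snr/Nt) H H^H) with equal power allocation across transmit antennas, can be checked on a toy 2x2 channel. The channel matrix below is illustrative and not taken from the paper's mmWave channel model.

```python
# Back-of-the-envelope MIMO spectral efficiency for a 2x2 link, written
# with plain Python complex arithmetic to avoid external dependencies.
import math

def mimo_se(H, snr):
    """Spectral efficiency (bit/s/Hz): log2 det(I + (snr/nt) * H * H^H)."""
    nt = 2
    # Build G = I + (snr/nt) * H * H^H, where H^H is the conjugate transpose.
    g = [[(1.0 if i == j else 0.0) +
          (snr / nt) * sum(H[i][k] * H[j][k].conjugate() for k in range(nt))
          for j in range(2)] for i in range(2)]
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]   # 2x2 determinant
    return math.log2(abs(det))                    # det of I + PSD is real, >= 1

# Identity channel: two parallel unit-gain streams, so at snr = 2 each
# stream sees SNR 1 and contributes log2(2) = 1 bit/s/Hz.
I2 = [[1 + 0j, 0j], [0j, 1 + 0j]]
```

With `I2` and `snr = 2.0` the formula gives 2 bit/s/Hz, and the value grows monotonically with `snr` and with the array dimension, consistent with the trends the abstract reports.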