
    Low complexity intra video coding using transform domain prediction

    In this paper, a new low-complexity intra coding framework is presented. The proposed method is extremely computationally efficient as it performs intra prediction in the DCT domain. To facilitate finding a good predictor, we propose to extend the number of neighbouring blocks to be searched, based on the types of edges we can expect to observe in the pixel data. The best predictor can be selected from the candidate blocks without recourse to rate-distortion optimisation or pixel interpolation. To obtain better performance, we also propose to automatically adapt the entropy encoding block to the prediction mode used. Experimental results show that the encoding scheme compares favourably to H.264/AVC in terms of compression efficiency while significantly reducing overall computational complexity.
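    A minimal sketch of the core idea under stated assumptions: the function name, block size, and the plain sum-of-absolute-differences cost are illustrative choices, not the paper's exact rule. It shows predictor selection carried out entirely on DCT coefficients, with no rate-distortion search and no pixel interpolation; the abstract's extension would simply enlarge `candidate_blocks` beyond the immediate neighbours according to the expected edge orientations.

```python
import numpy as np
from scipy.fft import dctn

def best_dct_predictor(candidate_blocks, target_pixels):
    """Pick, among already-transformed neighbouring DCT blocks, the one
    closest to the target block's DCT coefficients, using a simple
    sum-of-absolute-differences cost (illustrative; no RD optimisation)."""
    target = dctn(target_pixels.astype(float), norm="ortho")  # transform once
    costs = [np.abs(cand - target).sum() for cand in candidate_blocks]
    best = int(np.argmin(costs))
    residual = target - candidate_blocks[best]  # residual left to entropy-code
    return best, residual
```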

    A Novel Rate Control Algorithm for Onboard Predictive Coding of Multispectral and Hyperspectral Images

    Predictive coding is attractive for onboard compression on spacecraft thanks to its low computational complexity, modest memory requirements, and the ability to accurately control quality on a pixel-by-pixel basis. Traditionally, predictive compression has focused on the lossless and near-lossless modes of operation, where the maximum error can be bounded but the rate of the compressed image is variable. Rate control is considered a challenging problem for predictive encoders due to the dependencies between quantization and prediction in the feedback loop, and the lack of a signal representation that packs the signal's energy into few coefficients. In this paper, we show that it is possible to design a rate control scheme suitable for onboard implementation. In particular, we propose a general framework to select quantizers in each spatial and spectral region of an image so as to achieve the desired target rate while minimizing distortion. The rate control algorithm achieves lossy and near-lossless compression, as well as any in-between type of compression, e.g., lossy compression with a near-lossless constraint. While this framework is independent of the specific predictor used, to demonstrate its performance we tailor it to the predictor adopted by the CCSDS-123 lossless compression standard, obtaining an extension that performs lossless, near-lossless, and lossy compression in a single package. We show that the rate controller has excellent accuracy in the output rate and rate-distortion characteristics, and is extremely competitive with state-of-the-art transform coding.
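    A toy sketch of per-region quantizer selection against a rate budget, assuming precomputed rate/distortion estimates per region and quantizer (the tables, function name, and Lagrangian bisection are illustrative stand-ins for the paper's actual scheme, which must also account for the quantization-prediction feedback loop):

```python
import numpy as np

def select_quantizers(rates, dists, target_rate, lam_lo=0.0, lam_hi=1e6):
    """Toy Lagrangian rate allocation: rates[r][q] and dists[r][q] are the
    estimated rate and distortion of quantizer q in region r. Bisect on the
    multiplier lam, picking per region the quantizer that minimizes
    D + lam * R, until the summed rate meets the target."""
    for _ in range(50):                         # bisection on the multiplier
        lam = 0.5 * (lam_lo + lam_hi)
        choice = [int(np.argmin(np.asarray(d) + lam * np.asarray(r)))
                  for r, d in zip(rates, dists)]
        total = sum(r[q] for r, q in zip(rates, choice))
        if total > target_rate:
            lam_lo = lam    # over budget: weight rate more heavily
        else:
            lam_hi = lam    # under budget: spend rate to cut distortion
    return choice

# e.g. two regions, three quantizer options each:
# select_quantizers([[8, 4, 2], [8, 4, 2]], [[1, 4, 16], [2, 6, 20]], target_rate=8)
```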

    Quantifying Potential Energy Efficiency Gain in Green Cellular Wireless Networks

    Conventional cellular wireless networks were designed to provide high throughput for the user and high capacity for the service provider, without any provisions for energy efficiency. As a result, these networks have an enormous carbon footprint. In this paper, we describe the sources of the inefficiencies in such networks. First, we present results of studies on how large a carbon footprint such networks generate. We also discuss how much mobile traffic is expected to grow, which will increase this carbon footprint even further. We then discuss specific sources of inefficiency and potential sources of improvement at the physical layer as well as at higher layers of the communication protocol hierarchy. In particular, considering that most of the energy inefficiency in cellular wireless networks is at the base stations, we discuss multi-tier networks and point to the potential of exploiting mobility patterns in order to use base station energy judiciously. We then investigate potential methods to reduce this inefficiency and quantify their individual contributions. Considering the combination of all potential gains, we conclude that an improvement in the energy consumption of cellular wireless networks by two orders of magnitude, or even more, is possible.
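    The two-orders-of-magnitude conclusion rests on independent savings composing multiplicatively; a toy illustration with invented reduction factors (not numbers from the paper):

```python
# Hypothetical per-mechanism energy-reduction factors (illustrative only,
# not the paper's figures):
gains = {
    "more efficient power amplifiers": 2.0,
    "sleep modes / traffic-adaptive operation": 4.0,
    "multi-tier deployments (small cells)": 5.0,
    "mobility-aware base-station management": 3.0,
}

combined = 1.0
for factor in gains.values():
    combined *= factor          # independent gains compose multiplicatively

print(f"combined reduction: {combined:.0f}x")  # 120x, i.e. two orders of magnitude
```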

    Optimizing Lossy Compression Rate-Distortion from Automatic Online Selection between SZ and ZFP

    With ever-increasing volumes of scientific data produced by HPC applications, significantly reducing data size is critical because of the limited capacity of storage space and potential I/O or network bottlenecks when writing, reading, or transferring data. SZ and ZFP are the two leading lossy compressors available for scientific data sets. However, their performance is not consistent across data sets, or even across the fields within a single data set: some fields compress better with SZ, while others are better compressed with ZFP. This situation raises the need for an automatic online (during compression) selection between SZ and ZFP with minimal overhead. In this paper, the automatic selection optimizes the rate-distortion, an important statistical quality metric based on the signal-to-noise ratio. To optimize for rate-distortion, we investigate the principles of SZ and ZFP. We then propose an efficient online, low-overhead selection algorithm that accurately predicts the compression quality of the two compressors in early processing stages and selects the best-fit compressor for each data field. We implement the selection algorithm in an open-source library, and we evaluate the effectiveness of our solution against plain SZ and ZFP in a parallel environment with 1,024 cores. Evaluation results on three data sets comprising about 100 fields show that our selection algorithm improves the compression ratio by up to 70% at the same level of data distortion, thanks to highly accurate selection (around 99%) of the best-fit compressor, with little overhead (less than 7% in the experiments).
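    A minimal sketch of the per-field selection idea under stated assumptions: `estimate_psnr_sz` and `estimate_psnr_zfp` are hypothetical callbacks standing in for the paper's early-stage quality predictors, not real SZ/ZFP APIs; the cheap prediction step is what keeps the selection overhead low.

```python
def pick_compressor(field, estimate_psnr_sz, estimate_psnr_zfp):
    """Predict, cheaply and before full compression, the rate-distortion
    (PSNR at a common rate) each compressor would achieve on this field,
    and route the field to the better one."""
    psnr_sz = estimate_psnr_sz(field)      # hypothetical estimator callback
    psnr_zfp = estimate_psnr_zfp(field)    # hypothetical estimator callback
    return "SZ" if psnr_sz >= psnr_zfp else "ZFP"

# Applied per field of a data set, so each field gets its best-fit compressor:
# choices = {name: pick_compressor(arr, est_sz, est_zfp)
#            for name, arr in fields.items()}
```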