
    Optimal trellis-based buffered compression and fast approximations

    The authors formalize the description of the buffer-constrained adaptive quantization problem. For a given set of admissible quantizers used to code a discrete nonstationary signal sequence in a buffer-constrained environment, they formulate the optimal solution. They also develop slightly suboptimal but much faster approximations. These solutions are valid for any globally minimum distortion criterion that is additive over the individual elements of the sequence. As a first step, they define the problem as one of constrained, discrete optimization and establish its equivalence to some of the problems studied in the field of integer programming. Forward dynamic programming using the Viterbi algorithm is shown to provide a way of computing the optimal solution. They then provide a heuristic algorithm based on Lagrangian optimization within an operational rate-distortion framework that, with computing complexity reduced by an order of magnitude, approaches the optimally achievable performance. The algorithms can serve as a benchmark for assessing the performance of buffer control strategies and are useful for applications such as multimedia workstation displays, video encoding for CD-ROMs, and buffered JPEG coding environments, where processing delay is not a concern but decoding buffer size has to be minimized.
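    As a rough illustration of the Lagrangian heuristic mentioned above, the sketch below selects one quantizer per block by minimizing D + λR and bisects on λ until the total rate fits a budget. It is a minimal sketch, not the authors' algorithm: the buffer constraint is collapsed into a single total-rate budget instead of a per-stage buffer trajectory, and the (rate, distortion) tables are invented.

```python
def lagrangian_select(rd_tables, rate_budget, iters=50):
    """Pick one quantizer per block by minimizing D + lambda*R for each block,
    then bisect on lambda until the total rate fits the budget."""
    lo, hi = 0.0, 1e9
    best = None
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        choice = [min(table, key=lambda rd: rd[1] + lam * rd[0])
                  for table in rd_tables]
        total_rate = sum(r for r, _ in choice)
        if total_rate > rate_budget:
            lo = lam            # over budget: penalize rate more heavily
        else:
            hi = lam            # feasible: remember it, then try spending more bits
            best = choice
    return best

# Three blocks, each offering (rate_bits, distortion) pairs for its quantizers.
tables = [
    [(100, 9.0), (60, 20.0), (30, 45.0)],
    [(120, 5.0), (70, 15.0), (40, 35.0)],
    [(90, 12.0), (50, 25.0), (25, 50.0)],
]
print(lagrangian_select(tables, rate_budget=200))
```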

    A Novel Rate Control Algorithm for Onboard Predictive Coding of Multispectral and Hyperspectral Images

    Predictive coding is attractive for onboard compression on spacecraft thanks to its low computational complexity, modest memory requirements, and the ability to accurately control quality on a pixel-by-pixel basis. Traditionally, predictive compression has focused on the lossless and near-lossless modes of operation, where the maximum error can be bounded but the rate of the compressed image is variable. Rate control is considered a challenging problem for predictive encoders due to the dependencies between quantization and prediction in the feedback loop, and the lack of a signal representation that packs the signal's energy into few coefficients. In this paper, we show that it is possible to design a rate control scheme intended for onboard implementation. In particular, we propose a general framework to select quantizers in each spatial and spectral region of an image so as to achieve the desired target rate while minimizing distortion. The rate control algorithm supports lossy compression, near-lossless compression, and any in-between type of compression, e.g., lossy compression with a near-lossless constraint. While this framework is independent of the specific predictor used, in order to show its performance we tailor it to the predictor adopted by the CCSDS-123 lossless compression standard, obtaining an extension that can perform lossless, near-lossless, and lossy compression in a single package. We show that the rate controller has excellent performance in terms of output-rate accuracy and rate-distortion characteristics, and is extremely competitive with state-of-the-art transform coding.
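    The following toy sketch conveys the flavor of region-by-region quantizer selection against a target rate; it is not the paper's actual framework. It walks a hyperspectral cube band by band and nudges a quantization step up or down so the cumulative rate tracks the target. The logarithmic rate model, the band-level granularity, and the step ladder are all assumptions made for illustration.

```python
import numpy as np

def estimated_bits(block, step):
    """Crude stand-in for the predictor's entropy coder: bits per sample fall
    roughly with the logarithm of the quantization step."""
    sigma = max(float(block.std()), 1e-3)
    return block.size * max(np.log2(1.0 + sigma / step), 0.05)

def slice_rate_control(cube, target_bpp, steps=(1, 2, 4, 8, 16, 32)):
    """Toy closed-loop controller: encode the cube band by band and, after each
    band, adjust the quantization step so the running rate tracks the target."""
    idx = 0                          # start from the finest quantizer
    bits_spent, samples_done, chosen = 0.0, 0, []
    for band in cube:                # one spectral band = one "region"
        bits_spent += estimated_bits(band, steps[idx])
        samples_done += band.size
        chosen.append(steps[idx])
        running_bpp = bits_spent / samples_done
        if running_bpp > target_bpp and idx < len(steps) - 1:
            idx += 1                 # over budget: coarsen
        elif running_bpp < target_bpp and idx > 0:
            idx -= 1                 # under budget: refine
    return chosen, bits_spent / samples_done

rng = np.random.default_rng(0)
cube = rng.normal(scale=20.0, size=(10, 64, 64))   # 10 bands, 64x64 pixels
print(slice_rate_control(cube, target_bpp=2.0))
```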

    Streaming Video over HTTP with Consistent Quality

    In conventional HTTP-based adaptive streaming (HAS), a video source is encoded at multiple constant-bitrate representations, and a client selects among them according to the measured network bandwidth. While this greatly simplifies adaptation to varying network conditions, it is not the best strategy for optimizing the video quality experienced by end users. Quality fluctuation can be reduced if the natural variability of the video content is taken into consideration. In this work, we study the design of a client rate adaptation algorithm that yields consistent video quality. We assume that clients have visibility into the incoming video within a finite horizon. We also take advantage of the client-side video buffer, using it as breathing room for both network bandwidth variability and video bitrate variability. The challenge, however, lies in how to balance these two variabilities to yield consistent video quality without risking a buffer underrun. We propose an optimization solution that uses an online algorithm to adapt the video bitrate step by step, applying dynamic programming at each step. We incorporate our solution into PANDA, a practical rate adaptation algorithm designed for HAS deployment at scale.
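    A brute-force stand-in for the windowed decision described above is sketched below; the paper itself uses an online algorithm with dynamic programming at each step. The sketch enumerates representation sequences over a short look-ahead window, rejects those that would underrun the buffer, and prefers the steadiest quality trace. The cost weighting, segment duration, and all numbers are illustrative.

```python
from itertools import product

def plan_window(sizes, qualities, bandwidth, buffer_s, seg_dur=2.0):
    """Try every representation sequence in the look-ahead window, discard any
    that would drain the playback buffer below zero, and keep the one whose
    quality trace is steadiest (with a small preference for higher quality).
    sizes[i][r]: bytes of segment i at representation r; qualities[i][r]: its
    quality score; bandwidth: bytes per second; buffer_s: seconds buffered."""
    best_seq, best_cost = None, float("inf")
    n_reps = len(sizes[0])
    for seq in product(range(n_reps), repeat=len(sizes)):
        buf, feasible = buffer_s, True
        for i, r in enumerate(seq):
            buf -= sizes[i][r] / bandwidth      # playback drains while downloading
            if buf < 0:                         # would underrun
                feasible = False
                break
            buf += seg_dur                      # downloaded segment refills the buffer
        if not feasible:
            continue
        qs = [qualities[i][r] for i, r in enumerate(seq)]
        fluctuation = sum(abs(a - b) for a, b in zip(qs, qs[1:]))
        cost = fluctuation - 0.1 * sum(qs)      # steadiness first, then level
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq

# Two segments, three representations each (illustrative numbers).
sizes     = [[2e6, 4e6, 8e6], [2.5e6, 5e6, 9e6]]
qualities = [[35.0, 40.0, 44.0], [34.0, 39.5, 43.5]]
print(plan_window(sizes, qualities, bandwidth=2e6, buffer_s=8.0))
```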

    Buffer Constrained Proactive Dynamic Voltage Scaling for Video Decoding Systems


    FAST rate allocation for JPEG2000 video transmission over time-varying channels

    This work introduces a rate allocation method for the transmission of pre-encoded JPEG2000 video over time-varying channels, whose capacity changes during video transmission due to network congestion, hardware failures, or router saturation. Such variations occur often in networks and are commonly unpredictable in practice. The optimization problem is posed for such networks and a rate allocation method is formulated to handle these variations. The main insight of the proposed method is to extend the complexity scalability features of the FAst rate allocation through STeepest descent (FAST) algorithm. Extensive experimental results suggest that the proposed transmission scheme achieves near-optimal performance while expending few computational resources.
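    The steepest-descent idea behind a FAST-style allocator can be sketched as a greedy loop that always transmits next the quality increment with the largest distortion reduction per byte, until the capacity currently available on the channel is spent. This is a simplified sketch under assumed per-frame increment lists, not the published FAST algorithm.

```python
import heapq

def allocate(frames, capacity_bytes):
    """Greedy, steepest-descent-style allocation. frames[i] is a list of
    quality increments (bytes, distortion_drop) in decoding order for frame i;
    the loop always picks the pending increment with the best slope."""
    heap, next_idx = [], [0] * len(frames)
    for i, incs in enumerate(frames):
        if incs:
            b, d = incs[0]
            heapq.heappush(heap, (-d / b, i))   # max-slope first
    spent, plan = 0, []
    while heap:
        _, i = heapq.heappop(heap)
        b, d = frames[i][next_idx[i]]
        if spent + b > capacity_bytes:
            break   # simple stop; a real allocator would keep probing smaller increments
        spent += b
        plan.append((i, next_idx[i]))           # (frame, increment) to transmit
        next_idx[i] += 1
        if next_idx[i] < len(frames[i]):
            nb, nd = frames[i][next_idx[i]]
            heapq.heappush(heap, (-nd / nb, i))
    return plan, spent

# Two frames, each with two quality increments (illustrative numbers).
frames = [[(800, 50.0), (600, 20.0)], [(1000, 70.0), (500, 10.0)]]
print(allocate(frames, capacity_bytes=2000))
```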

    The Telecommunications and Data Acquisition Report

    This quarterly publication provides archival reports on developments in programs managed by JPL's Telecommunications and Mission Operations Directorate (TMOD), which now includes the former Telecommunications and Data Acquisition (TDA) Office. In space communications, radio navigation, radio science, and ground-based radio and radar astronomy, it reports on activities of the Deep Space Network (DSN) in planning, supporting research and technology, implementation, and operations. Also included are standards activity at JPL for space data and information systems and reimbursable DSN work performed for other space agencies through NASA. The preceding work is all performed for NASA's Office of Space Communications (OSC).

    Enhanced Multicarrier Techniques for Professional Ad-Hoc and Cell-Based Communications (EMPhAtiC) Document Number D3.3 Reduction of PAPR and non linearities effects

    Deliverable of the EMPhAtiC European project. Like other multicarrier modulation techniques, FBMC suffers from a high peak-to-average power ratio (PAPR), which impacts its performance in the presence of a nonlinear high power amplifier (HPA) in two ways. The first impact is an in-band distortion affecting the error-rate performance of the link. The second is an out-of-band effect appearing as power spectral density (PSD) regrowth, making coexistence between FBMC-based broadband Professional Mobile Radio (PMR) systems and existing narrowband systems difficult to achieve. This report first addresses the theoretical analysis of in-band HPA distortions in terms of bit error rate. The out-of-band impact of HPA nonlinearities is then studied in terms of PSD regrowth prediction. Furthermore, the problem of PAPR reduction is addressed along with some HPA linearization techniques and nonlinearity compensation approaches.
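    For reference, PAPR itself is simply the ratio of the peak to the mean instantaneous power of a signal block. The sketch below measures it on a generic OFDM-like IFFT block used as a stand-in; it does not model the FBMC waveform, the HPA, or the reduction techniques studied in the deliverable.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband block, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# Generic multicarrier-like block: random QPSK subcarriers combined by an IFFT.
rng = np.random.default_rng(1)
symbols = (rng.choice([-1, 1], 512) + 1j * rng.choice([-1, 1], 512)) / np.sqrt(2)
block = np.fft.ifft(symbols) * np.sqrt(512)   # scaled to unit average power
print(f"PAPR = {papr_db(block):.1f} dB")
```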

    High ratio wavelet video compression through real-time rate-distortion estimation.

    Thesis (M.Sc.Eng.), University of Natal, Durban, 2003. The success of the wavelet transform in the compression of still images has prompted an expanding effort to apply this transform to the compression of video. Most existing video compression methods incorporate techniques from still image compression, such techniques being abundant, well defined, and successful. This dissertation commences with a thorough review and comparison of wavelet still image compression techniques. Thereafter, an examination of wavelet video compression techniques is presented. Currently, the most effective video compression systems are based on the DCT framework, so a comparison between these and the wavelet techniques is also given. Based on this review, the dissertation then presents a new, low-complexity wavelet video compression scheme. Noting from a complexity study that the generation of temporally decorrelated residual frames represents a significant computational burden, the scheme uses the simplest such technique: difference frames. In the case of local motion, these difference frames exhibit strong spatial clustering of significant coefficients. A simple spatial syntax is created by splitting the difference frame into tiles, so that the spatial clustering can be exploited through adaptive bit allocation between the tiles. This is the central idea of the method. In order to minimize the total distortion of the frame, the scheme uses the new ρ-domain rate-distortion estimation scheme with global numerical optimization to predict the optimal distribution of bits between tiles. Each tile is then independently wavelet transformed and compressed using the SPIHT technique. Throughout the design process, computational efficiency was the design imperative, leading to a real-time, software-only video compression scheme. The scheme is finally compared to both the current video compression standards and the leading wavelet schemes from the literature in terms of computational complexity and visual quality. It is found that for local-motion scenes the proposed algorithm executes approximately an order of magnitude faster than these methods, and produces output of similar quality. The algorithm is suitable for implementation in mobile and embedded devices due to its moderate memory and computational requirements.
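    The central idea of the thesis, splitting the difference frame into tiles and sharing the frame's bit budget among them, can be caricatured as below. Energy-proportional weighting stands in for the ρ-domain model and numerical optimization, the SPIHT coding of each tile is omitted, and sizes and budgets are illustrative.

```python
import numpy as np

def allocate_tile_bits(prev, curr, tile=32, total_bits=200_000):
    """Split the difference frame into tiles and share the frame's bit budget
    among them in proportion to each tile's energy, a crude stand-in for the
    rho-domain model and numerical optimization used in the thesis. Each tile
    would then be wavelet/SPIHT coded to its assigned budget."""
    diff = curr.astype(np.float64) - prev
    h, w = diff.shape
    rows, cols = h // tile, w // tile
    energy = np.empty((rows, cols))
    for r in range(rows):
        for c in range(cols):
            block = diff[r * tile:(r + 1) * tile, c * tile:(c + 1) * tile]
            energy[r, c] = np.sum(block * block)
    weights = energy / max(energy.sum(), 1e-12)
    return np.round(weights * total_bits).astype(int)

rng = np.random.default_rng(2)
prev = rng.integers(0, 256, (128, 128))
curr = prev.copy()
curr[32:64, 32:64] += 40      # local motion: only one region changes
print(allocate_tile_bits(prev, curr))   # bits concentrate on the moving tiles
```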