    A comparison of CABAC throughput for HEVC/H.265 vs. AVC/H.264

    The CABAC entropy coding engine is a well-known throughput bottleneck in the AVC/H.264 video codec. It was redesigned to achieve higher throughput for the latest video coding standard, HEVC/H.265. Various improvements were made, including a reduction in context-coded bins, a reduction in total bins, and grouping of bypass bins. This paper discusses and quantifies the impact of these techniques and introduces a new metric called Bjontegaard delta cycles (BD-cycle) to compare the CABAC throughput of HEVC vs. AVC. BD-cycle uses the Bjontegaard delta measurement method to compute the average difference between the cycles vs. bit-rate curves of HEVC and AVC. This metric is useful for estimating the throughput of an HEVC CABAC engine from an existing AVC CABAC design at a given bit-rate. Under the common conditions set by the JCT-VC standardization body, HEVC CABAC has an average BD-cycle reduction of 31.1% for all-intra, 24.3% for low-delay, and 25.9% for random-access configurations, when processing up to 8 bypass bins per cycle.
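    The BD-cycle metric described above reuses the standard Bjontegaard delta procedure, swapping PSNR for cycle counts. The sketch below is a minimal illustration of that idea, assuming the usual cubic fit in the log domain; the function names and the choice to fit log-cycles (so the result comes out as a percentage, as with BD-rate) are illustrative assumptions, not taken from the paper.

```python
import math

def _solve4(A, b):
    """Solve a 4x4 linear system by Gaussian elimination with partial pivoting."""
    n = 4
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def _cubic_fit(xs, ys):
    """Least-squares cubic fit (exact interpolation when given 4 points)."""
    A = [[x ** j for j in range(4)] for x in xs]
    AtA = [[sum(row[i] * row[j] for row in A) for j in range(4)] for i in range(4)]
    Aty = [sum(A[r][i] * ys[r] for r in range(len(xs))) for i in range(4)]
    return _solve4(AtA, Aty)

def _integral(c, lo, hi):
    """Integrate c0 + c1*x + c2*x^2 + c3*x^3 over [lo, hi]."""
    F = lambda x: sum(c[j] * x ** (j + 1) / (j + 1) for j in range(4))
    return F(hi) - F(lo)

def bd_cycles(rates_a, cycles_a, rates_b, cycles_b):
    """Average % difference in decoding cycles of codec B vs. codec A,
    computed Bjontegaard-style: fit log-cycles as a cubic in log-rate for
    each codec, integrate the gap over the overlapping rate range."""
    xa, ya = [math.log10(r) for r in rates_a], [math.log10(c) for c in cycles_a]
    xb, yb = [math.log10(r) for r in rates_b], [math.log10(c) for c in cycles_b]
    lo, hi = max(min(xa), min(xb)), min(max(xa), max(xb))
    ca, cb = _cubic_fit(xa, ya), _cubic_fit(xb, yb)
    avg = (_integral(cb, lo, hi) - _integral(ca, lo, hi)) / (hi - lo)
    return (10 ** avg - 1) * 100.0
```

    For example, if codec B needs exactly 80% of codec A's cycles at every rate point, `bd_cycles` reports a 20% reduction.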

    Robust and fast selective encryption for HEVC videos

    The emerging High Efficiency Video Coding (HEVC) standard is expected to be widely adopted in network applications for high-definition devices and mobile terminals. Consequently, the construction of HEVC encryption schemes that maintain format compliance and the bit rate of the encrypted bitstream has become an active area of security research. This paper presents a novel selective encryption technique for HEVC videos, based on enciphering the bins of selected Golomb–Rice code suffixes with the Advanced Encryption Standard (AES) in CBC operating mode. The scheme preserves the format compliance and size of the encrypted HEVC bitstream, and provides high visual degradation with an optimized encryption space defined by the selected Golomb–Rice suffixes. Experimental results show the reliability and robustness of the proposed technique.
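    The reason suffix bins are attractive encryption targets is that a Golomb–Rice suffix is a fixed-length field: any bit pattern of the same length decodes to a valid value, so enciphering it changes neither the code length nor format compliance. The toy sketch below demonstrates this property with a plain XOR keystream standing in for the AES-CBC cipher used in the paper; all function names are illustrative.

```python
def gr_encode(value, k):
    """Golomb-Rice code (k >= 1): unary quotient (q ones, then a zero)
    followed by k fixed-length suffix bits."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, "0{}b".format(k))

def gr_decode(bits, k):
    """Decode a single Golomb-Rice codeword produced by gr_encode."""
    q = bits.index("0")
    return (q << k) | int(bits[q + 1:q + 1 + k], 2)

def encrypt_suffix(bits, k, keystream):
    """XOR only the suffix bins (toy stand-in for AES-CBC on selected
    suffixes). The unary prefix stays in the clear, so the codeword keeps
    its length and remains decodable -- i.e. format compliance holds."""
    q = bits.index("0")
    prefix, suffix = bits[:q + 1], bits[q + 1:]
    cipher = "".join(str(int(b) ^ s) for b, s in zip(suffix, keystream))
    return prefix + cipher
```

    Encrypting the suffix of the codeword for 23 yields a same-length codeword that decodes to a different (scrambled) value, which is exactly the behaviour a format-compliant, size-preserving scheme needs.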

    A novel algorithm for compression of high-amplitude-resolution seismic data

    Renewable sources cannot meet the energy demand of a growing global market. It is therefore expected that oil and gas will remain substantial sources of energy in the coming years. To find new oil and gas deposits that would satisfy growing global energy demands, significant effort is continually invested in increasing the efficiency of seismic surveys. It is commonly considered that, in the initial phase of exploration and production of new fields, high-resolution and high-quality images of the subsurface are of great importance. As one part of the seismic data processing chain, efficient management and delivery of the large data sets produced by the industry during seismic surveys becomes extremely important in order to facilitate further seismic data processing and interpretation. In this respect, efficiency relies to a large extent on the compression scheme, which is often required to enable faster transfer of and access to data, as well as efficient data storage. Motivated by the superior performance of High Efficiency Video Coding (HEVC), and driven by the rapid growth in data volume produced by seismic surveys, this work explores a 32 bits per pixel (b/p) extension of the HEVC codec for compression of seismic data. It is proposed to reassemble seismic slices in a format that corresponds to a video signal and thus benefit from the coding gain achieved by the HEVC inter mode, in addition to the possible advantages of the (still-image) HEVC intra mode. To this end, this work modifies almost all components of the original HEVC codec to cater for high bit-depth coding of seismic data: the Lagrange multiplier used in optimization of the coding parameters has been adapted to the new data statistics, the core transform and quantization have been reimplemented to handle the increased bit-depth range, and a modified adaptive binary arithmetic coder has been employed for efficient entropy coding.
    In addition, optimized block selection, reduced intra prediction modes, and flexible motion estimation are tested to adapt to the structure of seismic data. Even though, after the proposed modifications, the new codec goes beyond the standardized HEVC, it still maintains a generic HEVC structure and is developed within the general HEVC framework. There is no similar work in the field of seismic data compression that uses HEVC as the base codec. A specific codec design has therefore been tailored which, compared to JPEG-XR and a commercial wavelet-based codec, significantly improves the peak signal-to-noise ratio (PSNR) vs. compression ratio performance for 32 b/p seismic data. Depending on the proposed configuration, the PSNR gain ranges from 3.39 dB up to 9.48 dB. Also, relying on the specific characteristics of seismic data, an optimized encoder is proposed in this work. It reduces encoding time by 67.17% for the All-I configuration on the trace image dataset, and by 67.39% for the All-I, 97.96% for the P2, and 98.64% for the B configuration on the 3D wavefield dataset, with negligible coding performance losses. As a side contribution, HEVC is analyzed across all of its functional units, so that the presented work can itself serve as a specific overview of the methods incorporated into the standard.
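    The core reassembly idea, treating successive seismic slices as consecutive video frames so that inter prediction can exploit slice-to-slice correlation, can be sketched as a simple pre-processing step. The sketch below is hypothetical (the thesis builds the 32 b/p handling into the codec itself): it maps floating-point amplitudes onto the full 32-bit integer sample range and returns the slices in frame order.

```python
def to_32bpp_frames(volume):
    """Map floating-point seismic amplitudes onto the full 32-bit integer
    sample range and order the slices of a 3D volume as consecutive
    'video frames', so adjacent frames are spatially adjacent slices and an
    HEVC-style inter mode can exploit their correlation. (Illustrative
    pre-processing sketch, not the thesis's actual codec-internal pipeline.)"""
    flat = [v for sl in volume for row in sl for v in row]
    lo, hi = min(flat), max(flat)
    # Scale the amplitude range [lo, hi] to [0, 2^32 - 1].
    scale = ((1 << 32) - 1) / (hi - lo) if hi > lo else 0.0
    return [[[int(round((v - lo) * scale)) for v in row] for row in sl]
            for sl in volume]
```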

    Parallelization of CABAC transform coefficient coding for HEVC

    Data dependencies in CABAC make it difficult to parallelize and thus limit its throughput. The majority of the bins processed by CABAC are used to represent the prediction error/residual in terms of quantized transform coefficients. This paper provides an overview of the various improvements to context selection and scans in transform coefficient coding that enable HEVC to potentially achieve higher throughput relative to AVC/H.264. Specifically, it describes changes that remove data dependencies in significance map and coefficient level coding. Proposed and adopted techniques up to HM-4.0 are discussed. This work illustrates that accounting for implementation cost when designing video coding algorithms can result in a design that enables higher processing speed and reduced hardware cost, while still delivering high coding efficiency.
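    One of the scan changes alluded to above is that HEVC processes transform coefficients of a sub-block in a fixed up-right diagonal order, which makes the position sequence (and hence context derivation) statically known rather than dependent on decoded data. A sketch of that scan order, assuming the standard up-right diagonal traversal over a 4x4 sub-block (positions given as (row, col)):

```python
def up_right_diag_scan(n=4):
    """HEVC-style up-right diagonal scan for an n x n transform sub-block:
    anti-diagonals visited in order of increasing row+col; within each
    diagonal, from bottom-left (high row) to top-right (high col)."""
    order = []
    for d in range(2 * n - 1):          # anti-diagonal index = row + col
        for col in range(d + 1):
            row = d - col
            if row < n and col < n:
                order.append((row, col))
    return order
```

    The order starts at the DC position (0, 0), then visits (1, 0) and (0, 1), and covers all 16 positions of a 4x4 sub-block exactly once.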

    Combined Industry, Space and Earth Science Data Compression Workshop

    The sixth annual Space and Earth Science Data Compression Workshop and the third annual Data Compression Industry Workshop were held as a single combined workshop. The workshop was held April 4, 1996 in Snowbird, Utah in conjunction with the 1996 IEEE Data Compression Conference, which was held at the same location March 31 - April 3, 1996. The Space and Earth Science Data Compression sessions seek to explore opportunities for data compression to enhance the collection, analysis, and retrieval of space and earth science data. Of particular interest is data compression research that is integrated into, or has the potential to be integrated into, a particular space or earth science data information system. Preference is given to data compression research that takes into account the scientist's data requirements and the constraints imposed by the data collection, transmission, distribution, and archival systems.

    Transparent encryption with scalable video communication: Lower-latency, CABAC-based schemes

    Selective encryption masks all of the content without completely hiding it, as full encryption would do at a cost in encryption delay and increased bandwidth. Many commercial applications of video encryption do not even require selective encryption, because greater utility can be gained from transparent encryption, i.e. allowing prospective viewers to glimpse a reduced-quality version of the content as a taster. Our lightweight selective encryption scheme, when applied to scalable video coding, is well suited to transparent encryption. The paper illustrates the gains in reduced delay and increased distortion arising from a transparent encryption that leaves a reduced-quality base layer in the clear. Reduced encryption of B-frames is a further step beyond transparent encryption, in which the computational overhead reduction is traded against content security and limited distortion. This spectrum of video encryption possibilities is analyzed in this paper, though all of the schemes maintain decoder compatibility and add no bitrate overhead, as a result of jointly encoding and encrypting the input video by virtue of carefully selecting the entropy coding parameters that are encrypted. The schemes are suitable for both H.264 and HEVC codecs, though demonstrated in the paper for H.264. Selected Context-Adaptive Binary Arithmetic Coding (CABAC) parameters are encrypted by a lightweight exclusive-OR technique, which is chosen for practicality.
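    The transparent-encryption structure described above, base layer in the clear, selected bins of the enhancement layers XORed with a keystream, can be sketched as a small pass over layered units. This is a toy model: the `(layer, bins)` representation and function name are illustrative, and the keystream stands in for the paper's lightweight XOR of chosen CABAC parameters.

```python
def transparent_encrypt(units, keystream):
    """Toy transparent-encryption pass over (layer, bins) pairs: base-layer
    units (layer 0) stay in the clear so viewers get a reduced-quality
    'taster', while the selected bins of enhancement-layer units are XORed
    with a keystream. Bit counts are unchanged, so no bitrate overhead is
    introduced and a standard decoder still parses the stream."""
    ks = iter(keystream)
    out = []
    for layer, bins in units:
        if layer == 0:
            out.append((layer, list(bins)))        # base layer: left in the clear
        else:
            out.append((layer, [b ^ next(ks) for b in bins]))
    return out
```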

    High throughput image compression and decompression on GPUs

    This work investigates possibilities to create a high-throughput, GPU-friendly, intra-only, wavelet-based video compression algorithm optimized for visually lossless applications. Addressing the key observation that JPEG 2000's entropy coder is a bottleneck and might be overly complex for a high bit-rate scenario, various algorithmic alterations are proposed. First, JPEG 2000's Selective Arithmetic Coding mode is realized on the GPU, but the gains in terms of increased throughput are shown to be limited. Instead, two independent alterations not compliant with the standard are proposed, which (1) give up the concept of intra-bit-plane truncation points and (2) introduce a true raw-coding mode that is fully parallelizable and does not require any context modeling. Next, an alternative block coder from the literature, the Bitplane Coder with Parallel Coefficient Processing (BPC-PaCo), is evaluated. Since it trades signal adaptiveness for increased parallelism, it is shown here how a stationary probability model averaged over a set of test sequences yields competitive compression efficiency. A combination of BPC-PaCo with the single-pass mode is proposed and shown to increase the speedup with respect to the original JPEG 2000 entropy coder from 2.15x (BPC-PaCo with two passes) to 2.6x (proposed BPC-PaCo with single-pass mode) at the marginal cost of increasing the PSNR penalty by 0.3 dB to at most 1 dB.
    Furthermore, a parallel algorithm is presented that determines the optimal code block bit stream truncation points (given an available bit-rate budget) and builds the entire code stream on the GPU, reducing the amount of data that has to be transferred back into host memory to a minimum. A theoretical runtime model is formulated that allows, based on benchmarking results on one GPU, prediction of the runtime of a kernel on another GPU. Lastly, the first-ever JPEG XS GPU decoder realization is presented. JPEG XS was designed to be a low-complexity codec and, for the first time, explicitly demanded GPU-friendliness already in the call for proposals. Starting at bit rates above 1 bpp, the decoder is around 2x faster compared to the original JPEG 2000 and 1.5x faster compared to JPEG 2000 with the fastest evaluated entropy coder (BPC-PaCo with single-pass mode). With a GeForce GTX 1080, a decoding throughput of around 200 fps is achieved for a UHD 4:4:4 sequence.
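    The "true raw-coding mode" mentioned above is attractive on a GPU precisely because each emitted bit depends only on its own coefficient, with no context modeling and no serial state. A minimal sketch of that idea, emitting magnitude bit-planes MSB-first (the function names and the plain bit-plane layout are illustrative, not the thesis's exact bitstream format):

```python
def raw_bitplanes(coeffs, n_planes):
    """Raw-coding sketch: emit magnitude bit-planes MSB-first with no context
    modeling. Each bit is (c >> p) & 1, a function of one coefficient only,
    so every sample of a plane can be produced independently in parallel."""
    return [[(c >> p) & 1 for c in coeffs] for p in range(n_planes - 1, -1, -1)]

def from_bitplanes(planes):
    """Reassemble coefficients from MSB-first bit-planes (the decoder side)."""
    vals = [0] * len(planes[0])
    for plane in planes:
        vals = [(v << 1) | b for v, b in zip(vals, plane)]
    return vals
```

    The round trip is exact for coefficients below 2^n_planes, which is the lossless property a raw mode needs; compression efficiency is then recovered elsewhere (e.g. by truncating low-order planes under a rate budget).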

    Hybrid Region-based Image Compression Scheme for Mammograms and Ultrasound Images

    The need for transmission and archiving of mammograms and ultrasound images has dramatically increased in tele-healthcare applications. Such images require a large amount of storage space, which affects transmission speed. Therefore, an effective compression scheme is essential. Compression of these images, in general, faces a great challenge in compromising between a higher compression ratio and the relevant diagnostic information. Out of the many studied compression schemes, lossless JPEG-LS and lossy SPIHT are found to be the most efficient ones. JPEG-LS and SPIHT were chosen based on a comprehensive experimental study carried out on a large number of mammograms and ultrasound images of different sizes and textures. The lossless schemes are evaluated based on compression ratio and compression speed. The distortion in image quality introduced by the lossy methods is evaluated based on objective criteria using Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR). It is found that lossless compression can achieve a modest compression ratio of 2:1 - 4:1. Lossy compression schemes can achieve higher compression ratios than lossless ones, but at the price of image quality, which may impede diagnostic conclusions. In this work, a new compression approach called the Hybrid Region-based Image Compression Scheme (HYRICS) is proposed for mammograms and ultrasound images to achieve higher compression ratios without compromising diagnostic quality. In HYRICS, a modification of JPEG-LS is introduced to encode the arbitrarily shaped disease-affected regions. Shape-adaptive SPIHT is then applied to the remaining non-region-of-interest area. The results clearly show that this hybrid strategy can yield high compression ratios with perfect reconstruction of the diagnostically relevant regions, achieving high-speed transmission and lower storage requirements.
    For the sample images considered in our experiment, the compression ratio increases approximately tenfold. However, this increase depends upon the size of the region of interest chosen. It is also found that pre-processing (contrast stretching) of the region of interest improves compression ratios for mammograms but not for ultrasound images.
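    The hybrid principle, exact reconstruction inside the diagnostically relevant region, lossy treatment outside it, can be illustrated with a very small sketch. The quantization step below is only a stand-in for the lossy shape-adaptive SPIHT branch, and the exact-copy path stands in for the modified JPEG-LS branch; neither reflects the actual HYRICS codecs.

```python
def hybrid_roi_code(image, mask, q=16):
    """HYRICS-style sketch on a 2D pixel array: pixels inside the region-of-
    interest mask are preserved exactly (stand-in for the lossless JPEG-LS
    branch), pixels outside it are coarsely quantized to multiples of q
    (stand-in for the lossy shape-adaptive SPIHT branch)."""
    return [[px if m else (px // q) * q
             for px, m in zip(row, mrow)]
            for row, mrow in zip(image, mask)]
```

    Because only the background loses precision, the diagnostic region decodes bit-exactly while the overall representation becomes much more compressible.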

    Contributions to Medical Image Segmentation and Signal Analysis Utilizing Model Selection Methods

    This thesis presents contributions to model selection techniques, especially those based on information-theoretic criteria, with the goal of solving problems appearing in signal analysis and in medical image representation, segmentation, and compression. The field of medical image segmentation is wide and is developing quickly to make use of the increasing available computational power. This thesis concentrates on several applications that allow the utilization of parametric models for image and signal representation. One important application is cell nuclei segmentation from histological images. We model nuclei contours by ellipses, so that the complicated problem of separating overlapping nuclei can be rephrased as a model selection problem, where the number of nuclei, their shapes, and their locations define one segmentation. In this thesis, we present methods for model selection in this parametric setting, where intuitive algorithms are combined with more principled ones, namely those based on the minimum description length (MDL) principle. The results of the introduced unsupervised segmentation algorithm are compared with human-subject segmentations and are also evaluated with the help of a pathology expert. Another medical image application considered is lossless compression. The objective has been to add the task of image segmentation to that of image compression, such that image regions can be transmitted separately, depending on the region of interest for diagnosis. Experiments performed on retinal color images show that our modeling, in which the MDL criterion selects the structure of the linear predictive models, outperforms publicly available image compressors such as the lossless version of JPEG 2000. For time series modeling, the thesis presents an algorithm which allows detection of changes in time series signals.
    The algorithm is based on one of the most recent implementations of the MDL principle, the sequentially normalized maximum likelihood (SNML) models. This thesis thus contributes new methods and algorithms in which the simplicity of information-theoretic principles is combined with rather complex, problem-dependent modeling formulations, resulting in both heuristically motivated and principled algorithmic solutions.
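    The MDL idea underlying the change-detection work can be shown with a deliberately simplified two-part-code sketch (not the SNML formulation used in the thesis): a split is accepted only if the total code length of the two segments, plus the bits needed to describe the split point, is shorter than coding the signal as one segment.

```python
import math

def _cost(seg):
    """Approximate two-part code length (bits) of a segment under a
    constant-mean Gaussian model: (n/2) log2(RSS/n) for the residuals plus
    (1/2) log2(n) for the mean parameter."""
    n = len(seg)
    mean = sum(seg) / n
    rss = sum((x - mean) ** 2 for x in seg)
    rss = max(rss, 1e-12)                    # guard against exactly-constant segments
    return 0.5 * n * math.log2(rss / n) + 0.5 * math.log2(n)

def mdl_change_point(x):
    """Single change-point detection by MDL: compare the code length of one
    segment against the best two-segment split, charging log2(n) bits to
    encode the split index. Returns the split index, or None if coding the
    whole signal as one segment is cheaper (no change detected)."""
    n = len(x)
    best_t, best = None, _cost(x)
    for t in range(2, n - 1):                # keep at least 2 samples per segment
        c = _cost(x[:t]) + _cost(x[t:]) + math.log2(n)
        if c < best:
            best, best_t = c, t
    return best_t
```

    On a toy signal that jumps from a level near 0 to a level near 5, the minimum-description split lands exactly at the jump, because any misaligned split mixes samples from both levels and inflates one segment's residual cost.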