
    A two-stage video coding framework with both self-adaptive redundant dictionary and adaptively orthonormalized DCT basis

    In this work, we propose a two-stage video coding framework as an extension of our previous one-stage framework in [1]. The two-stage framework uses two different dictionaries. Specifically, the first stage directly finds the sparse representation of a block with a self-adaptive dictionary consisting of all possible inter-prediction candidates by solving an L0-norm minimization problem using an improved orthogonal matching pursuit with embedded orthonormalization (eOMP) algorithm, and the second stage codes the residual using a DCT dictionary adaptively orthonormalized to the subspace spanned by the first-stage atoms. The transition from the first stage to the second stage is determined by the quantization step sizes of both stages and a threshold. We further propose a complete context-adaptive entropy coder to efficiently code the locations and coefficients of the chosen first-stage atoms. Simulation results show that the proposed coder significantly improves the RD performance over our previous one-stage coder. More importantly, the two-stage coder, using a fixed block size and inter-prediction only, outperforms the H.264 coder (x264) and is competitive with the HEVC reference coder (HM) over a large rate range.
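    A minimal sketch of plain orthogonal matching pursuit may help make the first stage concrete. The eOMP variant above additionally keeps the chosen atoms explicitly orthonormalized; here the dictionary, the sparsity budget, and the stopping threshold are illustrative assumptions, not the paper's actual inter-prediction candidate set.

```python
# Plain OMP: greedy L0 approximation of a block x over a dictionary D.
import numpy as np

def omp(D, x, max_atoms, tol=1e-6):
    """Pick the atom most correlated with the residual, re-fit all chosen
    atoms by least squares, and repeat until the residual is small."""
    residual = x.copy()
    chosen = []
    for _ in range(max_atoms):
        correlations = np.abs(D.T @ residual)
        correlations[chosen] = -np.inf          # never pick an atom twice
        chosen.append(int(np.argmax(correlations)))
        # Least-squares refit over the chosen subspace (this is the step
        # where eOMP would instead keep an orthonormalized atom set).
        A = D[:, chosen]
        coeffs, *_ = np.linalg.lstsq(A, x, rcond=None)
        residual = x - A @ coeffs
        if np.linalg.norm(residual) < tol:
            break
    return chosen, coeffs, residual

# Toy usage: a random unit-norm dictionary and an exactly 2-sparse block.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
x = 3.0 * D[:, 10] - 2.0 * D[:, 200]
atoms, coeffs, res = omp(D, x, max_atoms=4)
print(atoms, np.round(coeffs, 3), np.linalg.norm(res))
```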

    Algorithms and Hardware Co-Design of HEVC Intra Encoders

    The importance of digital video has grown enormously over the last two decades. Due to the rapid development of information and communication technologies, the demand for Ultra-High Definition (UHD) video applications keeps growing. However, H.264/AVC, the most prevalent video compression standard, released in 2003, is inefficient for UHD videos. The desire for compression efficiency superior to H.264/AVC led to the standardization of High Efficiency Video Coding (HEVC). Compared with H.264/AVC, HEVC offers double the compression ratio at the same level of video quality, or a substantial improvement of video quality at the same bitrate. Yet, while HEVC/H.265 possesses superior compression efficiency, its complexity is several times that of H.264/AVC, impeding high-throughput implementation. Most research to date has focused merely on algorithm-level adaptations of the HEVC/H.265 standard to reduce computational intensity without considering hardware feasibility, and the exploration of efficient hardware architectures is far from exhaustive: only a few works have investigated efficient hardware architectures for the HEVC/H.265 standard. In this dissertation, we investigate efficient algorithm adaptations and hardware architecture design of HEVC intra encoders. We also explore a deep learning approach to mode prediction. From the algorithm point of view, we propose three efficient hardware-oriented algorithm adaptations: mode reduction, fast coding unit (CU) cost estimation, and group-based CABAC (context-adaptive binary arithmetic coding) rate estimation. Mode reduction reduces the mode candidates of each prediction unit (PU) in the rate-distortion optimization (RDO) process, which is both computation-intensive and time-consuming. Fast CU cost estimation reduces the complexity of the rate-distortion (RD) calculation of each CU. Group-based CABAC rate estimation parallelizes syntax element processing to greatly improve rate estimation throughput. From the hardware design perspective, a fully parallel hardware architecture of an HEVC intra encoder is developed to sustain UHD video compression at 4K@30fps. The fully parallel architecture introduces four prediction engines (PEs), each of which independently performs the full cycle of mode prediction, transform, quantization, inverse quantization, inverse transform, reconstruction, and rate-distortion estimation; PU blocks of different sizes are processed by different PEs simultaneously. An efficient hardware implementation of the group-based CABAC rate estimator is also incorporated into the proposed HEVC intra encoder for accurate, high-throughput rate estimation. To take advantage of deep learning, we further propose a fully-connected-layer-based neural network (FCLNN) mode preselection scheme to reduce the number of RDO modes for luma prediction blocks. All angular prediction modes are classified into 7 prediction groups, each containing 3-5 modes that exhibit similar prediction angles. A rough angle detection algorithm determines the prediction direction of the current block, and a small-scale FCLNN then refines the mode prediction.
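    As a toy illustration of the rough angle detection that precedes the FCLNN refinement, the sketch below maps the dominant gradient orientation of a luma block to one of 7 angular groups. The gradient operator and the group boundaries are assumptions for illustration, not the dissertation's exact algorithm.

```python
# Rough angle detection: pick the angular prediction group whose direction
# best matches the block's dominant gradient orientation.
import numpy as np

def rough_angle_group(block, n_groups=7):
    # Finite-difference gradients; edge direction is perpendicular to them.
    gy, gx = np.gradient(block.astype(np.float64))
    # Dominant orientation in [0, pi), weighted by gradient magnitude.
    angles = np.arctan2(gy, gx) % np.pi
    weights = np.hypot(gx, gy)
    hist, _ = np.histogram(angles, bins=n_groups, range=(0.0, np.pi),
                           weights=weights)
    return int(np.argmax(hist))   # index of the most likely prediction group

# Usage: a block with a strong diagonal edge lands in one group; the small
# FCLNN would then choose among the 3-5 modes inside that group only.
blk = np.fromfunction(lambda r, c: (r > c) * 255.0, (8, 8))
print(rough_angle_group(blk))
```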

    On Sparse Coding as an Alternate Transform in Video Coding

    In video compression, specifically in the prediction process, a residual signal is calculated by subtracting the predicted signal from the original signal; it represents the error of this process. This residual signal is usually transformed from the pixel domain into the frequency domain by a discrete cosine transform (DCT). It is then quantized, which suppresses more or fewer high frequencies depending on a quality parameter. The quantized signal is then entropy encoded, usually by a context-adaptive binary arithmetic coding (CABAC) engine, and written into a bitstream. In the decoding phase the process is reversed. DCT and quantization in combination are efficient tools, but they perform poorly at lower bitrates and create distortion and side effects. The proposed method uses sparse coding as an alternate transform that compresses well at lower bitrates, but not at high bitrates. The decision of which transform to use is based on a rate-distortion optimization (RDO) cost calculation, so that each transform operates in its optimal performance range. The proposed method is implemented in the High Efficiency Video Coding (HEVC) test model HM-16.18 and the HEVC screen content coding (HEVC-SCC) test model HM-16.18+SCM-8.7, achieving a Bjontegaard rate difference (BD-rate) saving of up to 5.5% compared to the standard.
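    The transform switch described above reduces to comparing per-block rate-distortion costs J = D + λ·R. A minimal sketch, with the distortion values, bit counts, and λ as placeholders; in HM the encoder's own λ and CABAC bit counts would be used:

```python
# RDO-based selection between the DCT path and the sparse coding path.
def rd_cost(distortion_sse, rate_bits, lam):
    return distortion_sse + lam * rate_bits

def choose_transform(dct_result, sparse_result, lam=10.0):
    """Each result is a (distortion_sse, rate_bits) pair for one transform."""
    j_dct = rd_cost(*dct_result, lam)
    j_sparse = rd_cost(*sparse_result, lam)
    return ("sparse", j_sparse) if j_sparse < j_dct else ("dct", j_dct)

# At low bitrates the sparse path tends to win (few bits, acceptable
# distortion); at high bitrates the DCT path usually does.
print(choose_transform((500.0, 120), (650.0, 40)))   # -> ('sparse', 1050.0)
```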

    Application-Specific Cache and Prefetching for HEVC CABAC Decoding

    Context-based Adaptive Binary Arithmetic Coding (CABAC) is the entropy coding module in the HEVC/H.265 video coding standard. As in its predecessor, H.264/AVC, CABAC is a well-known throughput bottleneck due to its strong data dependencies. Besides other optimizations, the replacement of the context model memory by a smaller cache has been proposed for hardware decoders, resulting in an improved clock frequency. However, the effect of potential cache misses has not been properly evaluated. This work fills the gap by performing an extensive evaluation of different cache configurations. Furthermore, it demonstrates that application-specific context model prefetching can effectively reduce the miss rate and increase the overall performance. The best results are achieved with two cache lines consisting of four or eight context models. The 2 × 8 cache allows a performance improvement of 13.2 to 16.7 percent compared to a non-cached decoder, due to a 17 percent higher clock frequency and highly effective prefetching. The proposed HEVC/H.265 CABAC decoder allows the decoding of high-quality Full HD videos in real time using few hardware resources on a low-power FPGA.
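    A minimal sketch of the evaluated idea follows: a tiny cache in front of the context model memory, with application-specific prefetching hiding the miss latency. The geometry mirrors the best reported configuration (2 lines × 8 context models); the access trace, replacement policy, and prefetch trigger are invented for illustration.

```python
# Behavioural model of a small context-model cache with explicit prefetch.
class ContextModelCache:
    def __init__(self, n_lines=2, models_per_line=8):
        self.models_per_line = models_per_line
        self.lines = [None] * n_lines          # stored tags (line addresses)
        self.next_victim = 0                   # simple FIFO replacement
        self.hits = self.misses = 0

    def _line_of(self, ctx_index):
        return ctx_index // self.models_per_line

    def access(self, ctx_index):
        if self._line_of(ctx_index) in self.lines:
            self.hits += 1
        else:
            self.misses += 1                   # miss stalls the bin decoder
            self._fill(self._line_of(ctx_index))

    def _fill(self, tag):
        self.lines[self.next_victim] = tag
        self.next_victim = (self.next_victim + 1) % len(self.lines)

    def prefetch(self, ctx_index):
        """Fetch a line ahead of its use, e.g. once the upcoming syntax
        element is known from the decoding state; a later hit costs nothing."""
        tag = self._line_of(ctx_index)
        if tag not in self.lines:
            self._fill(tag)

cache = ContextModelCache()
cache.prefetch(40)                             # known upcoming syntax element
for ctx in [0, 1, 2, 9, 10, 40, 41, 0, 1]:    # hypothetical context indices
    cache.access(ctx)
print(cache.hits, cache.misses)
```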

    Transparent encryption with scalable video communication: Lower-latency, CABAC-based schemes

    Selective encryption masks all of the content without completely hiding it, as full encryption would do at the cost of encryption delay and increased bandwidth. Many commercial applications of video encryption do not even require selective encryption, because greater utility can be gained from transparent encryption, i.e. allowing prospective viewers to glimpse a reduced-quality version of the content as a taster. Our lightweight selective encryption scheme, when applied to scalable video coding, is well suited to transparent encryption. The paper illustrates the gains in reduced delay, and the increased distortion, arising from a transparent encryption that leaves the reduced-quality base layer in the clear. Reduced encryption of B-frames is a further step beyond transparent encryption, in which the reduction in computational overhead is traded against content security and limited distortion. This spectrum of video encryption possibilities is analyzed in this paper; all of the schemes maintain decoder compatibility and add no bitrate overhead, because the input video is jointly encoded and encrypted by carefully selecting which entropy coding parameters are encrypted. The schemes are suitable for both H.264 and HEVC codecs, though they are demonstrated in the paper for H.264. Selected Context-Adaptive Binary Arithmetic Coding (CABAC) parameters are encrypted by a lightweight exclusive-OR (XOR) technique, chosen for practicality.
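    The joint encoding/encryption step reduces, in essence, to XOR-ing a keystream onto carefully chosen equiprobable (bypass-coded) CABAC bins, which leaves the bitstream length and decodability untouched. A minimal sketch, assuming sign bins as the encryptable set and a toy keystream:

```python
# Lightweight XOR encryption of equiprobable CABAC bins.
import secrets

def xor_encrypt_bins(bypass_bins, keystream):
    """bypass_bins: list of 0/1 bins whose values are equiprobable, so
    flipping them changes neither the bin count nor CABAC decodability."""
    return [b ^ k for b, k in zip(bypass_bins, keystream)]

# The same function decrypts, since XOR is its own inverse.
bins = [1, 0, 0, 1, 1, 0, 1, 0]                # e.g. coefficient sign bins
key = [secrets.randbits(1) for _ in bins]      # shared-secret keystream
cipher = xor_encrypt_bins(bins, key)
assert xor_encrypt_bins(cipher, key) == bins   # round-trip check
```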

    Video compression based on the exploitation of a smart decoder

    This Ph.D. thesis studies the novel concept of the Smart Decoder (SDec), in which the decoder is given the ability to simulate the encoder and can conduct the R-D competition in the same way as the encoder. The proposed technique aims to reduce the signaling of competing coding modes and parameters. The general SDec coding scheme and several practical applications are proposed, followed by a longer-term approach exploiting machine learning concepts in video coding. The SDec coding scheme relies on a complex decoder able to reproduce the choice of the encoder based on causal references, thus eliminating the need to signal coding modes and associated parameters. Several practical applications of the general SDec scheme are tested, using different coding modes during the competition on the reference blocks. Although the choice of the SDec reference block is still simple and limited, interesting gains are observed. The longer-term research presents an innovative method that makes further use of the processing capacity of the decoder: machine learning techniques are exploited in video coding with the purpose of reducing the signaling overhead. Practical applications are given, using a classifier based on a support vector machine to predict the coding modes of a block. The block classification uses causal descriptors consisting of different types of histograms. Significant bit rate savings are obtained, which confirms the potential of the approach.
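    A minimal sketch of the machine learning part, assuming a normalized histogram of causal neighbouring pixels as the descriptor and a two-mode decision; the real system uses richer histogram descriptors and more modes:

```python
# SVM-based mode prediction from causal descriptors: because the decoder can
# rebuild the same descriptor from already-decoded data, the predicted mode
# need not be signalled in the bitstream.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def causal_descriptor(neighbour_pixels, bins=32):
    hist, _ = np.histogram(neighbour_pixels, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)           # normalized histogram

# Synthetic training data: flat neighbourhoods -> mode 0, textured -> mode 1.
X, y = [], []
for _ in range(200):
    flat = rng.normal(128, 4, size=64).clip(0, 255)
    textured = rng.uniform(0, 255, size=64)
    X += [causal_descriptor(flat), causal_descriptor(textured)]
    y += [0, 1]

clf = SVC(kernel="rbf").fit(X, y)
# Decoder side: rebuild the descriptor from causal data and predict the mode.
probe = causal_descriptor(rng.normal(128, 4, size=64).clip(0, 255))
print(clf.predict([probe]))                    # expected: [0]
```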

    Efficient Coding of Transform Coefficient Levels in Hybrid Video Coding

    All video coding standards of practical importance, such as Advanced Video Coding (AVC), its successor High Efficiency Video Coding (HEVC), and the state-of-the-art Versatile Video Coding (VVC), follow the basic principle of block-based hybrid video coding. In such an architecture, the video pictures are partitioned into blocks. Each block is first predicted by either intra-picture or motion-compensated prediction, and the resulting prediction errors, referred to as residuals, are compressed using transform coding. This thesis deals with the entropy coding of quantization indices for transform coefficients, also referred to as transform coefficient levels, as well as the entropy coding of directly quantized residual samples. The entropy coding of quantization indices is referred to as level coding in this thesis. The presented developments focus on both improving the coding efficiency and reducing the complexity of the level coding for HEVC and VVC. These goals were achieved by modifying the context modeling and the binarization of the level coding. The first development presented in this thesis is a transform coefficient level coding for variable transform block sizes, which was introduced in HEVC. It exploits the fact that non-zero levels are typically concentrated in certain parts of the transform block by partitioning blocks larger than 4×4 samples into 4×4 sub-blocks. Each 4×4 sub-block is then coded similarly to the level coding specified in AVC for 4×4 transform blocks. This sub-block processing improves coding efficiency and has the advantage that the number of required context models is independent of the set of supported transform block sizes. The maximum number of context-coded bins for a transform coefficient level is one indicator for the complexity of the entropy coding. An adaptive binarization of absolute transform coefficient levels using Rice codes is presented that reduces the maximum number of context-coded bins from 15 (as used in AVC) to three for HEVC. Based on the developed selection of an appropriate Rice code for each scanning position, this adaptive binarization achieves virtually the same coding efficiency as the binarization specified in AVC for bit-rate operation points typically used in consumer applications. The coding efficiency is improved for high bit-rate operation points, which are used in more advanced and professional applications. In order to further improve the coding efficiency for HEVC and VVC, the statistical dependencies among the transform coefficient levels of a transform block are exploited by a template-based context modeling developed in this thesis. Instead of selecting the context model for a current scanning position primarily based on its location inside a transform block, already coded neighboring locations inside a local template are utilized. To further increase the coding efficiency achieved by the template-based context modeling, the different coding phases of the initially developed level coding are merged into a single coding phase. As a consequence, the template-based context modeling can utilize the absolute levels of the neighboring frequency locations, which provides better conditional probability estimates and further improves coding efficiency. This template-based context modeling with a single coding phase is also suitable for trellis-coded quantization (TCQ), since TCQ is state-driven and derives the next state from the current state and the parity of the current level.
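    A minimal sketch of Rice binarization for absolute level values: a unary-coded quotient followed by k fixed bits. The rule used here for adapting k is a simplified assumption; HEVC's actual per-position derivation (and its escape to Exp-Golomb codes for large values) differs in detail.

```python
# Rice (Golomb-Rice) binarization: short codes for small levels, with the
# parameter k grown adaptively so large levels keep short unary prefixes.
def rice_bins(value, k):
    q, r = value >> k, value & ((1 << k) - 1)
    prefix = [1] * q + [0]                        # unary quotient, 0-terminated
    suffix = [(r >> i) & 1 for i in reversed(range(k))]  # k remainder bits
    return prefix + suffix

def next_rice_param(k, last_level):
    # Assumed update rule: grow k (up to 4) once levels exceed 3 << k.
    return min(k + 1, 4) if last_level > (3 << k) else k

k = 0
for level in [1, 2, 6, 14, 30]:
    print(level, rice_bins(level, k))
    k = next_rice_param(k, level)
```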
TCQ introduces different context model sets for coding the significance flag, depending on the current state. Based on statistical analyses, an extension of the state-dependent context modeling of TCQ is presented, which further improves the coding efficiency in VVC. After that, a method to reduce the complexity of the level coding at the decoder is presented. This method separates the level coding into one coding phase consisting exclusively of context-coded bins and another consisting of bypass-coded bins only. To retain the state-dependent context selection, which contributes significantly to the coding efficiency of TCQ, a dedicated parity flag is introduced and coded with context models in the first coding phase. An adaptive approach is then presented that further reduces the worst-case complexity, effectively lowering the maximum number of context-coded bins per transform coefficient to 1.75 without negatively affecting the coding efficiency. In the last development presented in this thesis, a dedicated level coding for transform skip blocks, which often occur in screen content applications, is introduced for VVC. This dedicated level coding better exploits the statistical properties of directly quantized residual samples for screen content. Various modifications to the level coding improve the coding efficiency for this type of content, for example a binarization with additional context-coded flags and the coding of the sign information with adaptive context models.
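    The state-driven nature of TCQ mentioned above can be sketched as a small state machine: the next state is a function of the current state and the parity of the current level, and the state selects the significance-flag context set (and the scalar quantizer). The transition table below follows the published VVC dependent-quantization design, but treat the sketch as illustrative rather than normative.

```python
# 4-state TCQ state machine: next_state = STATE_TRANSITION[state][parity].
STATE_TRANSITION = {
    0: (0, 2),
    1: (2, 0),
    2: (1, 3),
    3: (3, 1),
}

def walk_states(levels, state=0):
    """Yield (level, state) pairs; the state at each scanning position picks
    the significance-flag context set and the quantizer (Q0 for states 0/1,
    Q1 for states 2/3)."""
    for level in levels:
        yield level, state
        state = STATE_TRANSITION[state][level & 1]

for level, state in walk_states([3, 0, 2, 1, 1]):
    print(f"level={level} coded with context set of state {state}")
```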

    Feasibility Study of High-Level Synthesis : Implementation of a Real-Time HEVC Intra Encoder on FPGA

    High-Level Synthesis (HLS) is an automated design process that seeks to improve productivity over traditional design methods by raising the design abstraction from the register transfer level (RTL) to the behavioural level. Various commercial HLS tools have been available on the market since the 1990s, but only recently have they started to gain adoption across industry and academia. The slow adoption rate has mainly stemmed from a lower quality of results (QoR) than obtained with conventional hardware description languages (HDLs). However, the latest HLS tool generations have substantially narrowed the QoR gap. This thesis studies the feasibility of HLS in video codec development. It introduces several HLS implementations for High Efficiency Video Coding (HEVC), the key enabling technology for numerous modern media applications. HEVC doubles the coding efficiency over its predecessor, the Advanced Video Coding (AVC) standard, for the same subjective visual quality, but typically at the cost of considerably higher computational complexity. Therefore, real-time HEVC calls for automated design methodologies that can be used to minimize the hardware (HW) implementation and verification effort. This thesis proposes to use HLS throughout the whole encoder design process: from data-intensive coding tools, like intra prediction and discrete transforms, to more control-oriented tools, such as entropy coding. The C source code of the open-source Kvazaar HEVC encoder serves as a design entry point for the HLS flow, and it is also utilized in design verification. The performance results are gathered on and reported for a field-programmable gate array (FPGA). The main contribution of this thesis is an HEVC intra encoder prototype built on a Nokia AirFrame Cloud Server equipped with dual 2.4 GHz 14-core Intel Xeon processors and two Intel Arria 10 GX FPGA Development Kits, which can be connected to the server via peripheral component interconnect express (PCIe) generation 3 or 40 Gigabit Ethernet. The proof-of-concept system achieves real-time 4K coding speed up to 120 fps, which can be further scaled up by adding practically any number of network-connected FPGA cards. Overcoming the complexity of HEVC and customizing its rich features for a real-time HEVC encoder implementation on hardware is not a trivial task, as hardware development has traditionally proven very time-consuming. This thesis shows that HLS is able to shorten the development time, provide previously unseen design scalability, and still deliver competitive performance and QoR compared with state-of-the-art hardware implementations.