67 research outputs found

    Data Hiding of Motion Information in Chroma and Luma Samples for Video Compression

    Get PDF
    In 2010, new compression activities were launched to challenge the then-current video compression standard, H.264/AVC. Several improvements to this standard were already known, such as competition-based motion vector prediction; however, the targeted 50% bitrate saving at equivalent quality had not yet been achieved. In this context, this paper proposes to reduce the signaling information resulting from this vector competition by using data hiding techniques. As data hiding and video compression traditionally have contradictory goals, a study of data hiding is performed first. An efficient way of using data hiding for video compression is then proposed. The main idea is to hide the indices in appropriately selected chroma and luma transform coefficients. To minimize the prediction errors, the modification is performed via a rate-distortion optimization. Objective improvements (up to 2.3% bitrate saving) and a subjective assessment of chroma loss are reported and analyzed for several sequences.
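The core mechanism, hiding an index bit in the parity of a quantized transform coefficient, can be sketched as follows. This is an illustrative sketch only: the paper selects the carrier coefficient and the direction of modification via rate-distortion optimization, which is omitted here.

```python
def embed_index_bit(coeffs, bit, pos):
    """Hide one bit of a motion-vector-predictor index in the parity of
    the quantized coefficient at `pos` (carrier choice is illustrative;
    the paper picks it via rate-distortion optimization)."""
    c = coeffs[pos]
    if abs(c) % 2 != bit:          # parity mismatch: nudge by one level
        c += 1 if c >= 0 else -1   # move away from zero, keeping the sign
    coeffs[pos] = c
    return coeffs

def extract_index_bit(coeffs, pos):
    """Blind extraction: the hidden bit is simply the coefficient parity."""
    return abs(coeffs[pos]) % 2

block = [12, -5, 3, 0, 1, 0, 0, 0]           # toy quantized block
marked = embed_index_bit(list(block), 1, 1)   # hide bit 1 in coeffs[1]
assert extract_index_bit(marked, 1) == 1
```

Because the coefficient moves by at most one quantization level, the distortion added per hidden bit stays small, which is what makes the rate-distortion trade-off viable.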

    Guided Transcoding for Next-Generation Video Coding (HEVC)

    Get PDF
    Video content is the dominant traffic type on mobile networks today, and this portion is only expected to increase in the future. In this thesis we investigate ways of reducing bit rates for adaptive streaming applications in the latest video coding standard, H.265 / High Efficiency Video Coding (HEVC). The current models for offering different-resolution versions of video content in a dynamic way, so-called adaptive streaming, require either large amounts of storage capacity, where full encodings of the material are kept at all times, or extremely high computational power in order to regenerate content on demand. Guided transcoding aims at finding a middle ground where we can store and transmit less data, at full or near-full quality, while still keeping computational complexity low. This is achieved by shifting the computationally heavy operations to a preprocessing step where so-called side-information is generated. The side-information can then be used to quickly reconstruct sequences on demand, even when running on generic, non-specialized hardware. Two methods for generating side-information, pruning and deflation, are compared on a varying set of standardized HEVC test sequences, and the respective upsides and downsides of each method are discussed. By discarding certain information from a compressed video and then recreating the sequence in real time, we can reduce the storage requirements of adaptive video streaming by 20–30%, with fully preserved picture quality or only minor degradations. ==================== Adaptive streaming: Streaming is a popular way of sending video over the Internet in which a sequence is split into short segments that are sent continuously to the user. These segments can be sent at varying quality, and a model in which the network load is sensed automatically and the quality adapted dynamically is called adaptive streaming. This is the system used by SVT Play, TV4 Play and YouTube.
HD or UltraHD video must be compressed to be sent over a network; it simply takes up too much space otherwise. Video coded with the latest compression standard, HEVC/H.265, becomes up to 700 times smaller with minimal degradation of picture quality. A ten-second segment that takes 1.5 GB to send in raw form can then be compressed to just over 2 MB. To offer the viewer a video sequence, a film or a TV programme, at varying quality, different encodings of the material are created. In general, the quality of a sequence cannot be changed after the fact; re-encoding even a short HD video takes hours. So for adaptive streaming to work in practice, all versions are generated in advance and stored, which requires a lot of storage space. Guided transcoding offers a way to reduce the storage requirement by discarding certain information and recreating it on demand at a later stage. We do this for every lower-quality sequence but keep the highest quality as it is. A pruned low-quality video, together with the video of the highest quality, can then be used to reconstruct the sequence exactly. This process is very fast compared with ordinary re-encoding, so video encodings of varying quality can be generated on short notice. We have examined two methods for removing and recreating video information: pruning and deflation. The first causes small degradations in picture quality but saves close to 30% storage; the second has no impact on picture quality but saves only just over 20%.
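A quick check of the numbers quoted above (a 700× compression ratio applied to a ten-second, 1.5 GB raw segment):

```python
raw_bytes = 1.5e9             # ten-second raw segment, per the summary
compression_ratio = 700       # approximate factor quoted for HEVC
compressed_mb = raw_bytes / compression_ratio / 1e6
print(f"{compressed_mb:.1f} MB")   # ≈ 2.1 MB, i.e. "just over 2 MB"
```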

    Content Fragile Watermarking for H.264/AVC Video Authentication

    Get PDF
    The advances in multimedia technologies and digital processing tools have brought with them new challenges for source and content authentication. To ensure the integrity of the H.264/AVC video stream, we introduce an approach based on a content-fragile video watermarking method with independent authentication of each Group of Pictures (GOP) within the video. A Discrete Cosine Transform (DCT) is used to generate the authentication data, which are treated as a fragile watermark and embedded in the motion vectors (MVs). The technique uses robust visual features extracted from the video, pertaining to the set of selected macroblocks (MBs) that hold the best partition mode in a tree-structured motion compensation process. An additional degree of security is offered by using the keyed function HMAC-SHA-256 and randomly choosing candidates from the already selected MBs. Here, the watermark detection and verification processes are blind, whereas tampered-frame detection is not, since it needs the original frames within the tampered GOPs. The proposed scheme achieves accurate authentication with high fragility and fidelity while maintaining the original bitrate and perceptual quality. Furthermore, its ability to detect tampered frames under spatial, temporal and colour manipulations is confirmed.
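The keyed-digest step can be sketched with Python's standard hmac module. The feature-extraction and MV-embedding stages of the paper are not shown, and the feature bytes below are placeholders:

```python
import hmac
import hashlib

def gop_watermark(features: bytes, key: bytes) -> bytes:
    """Authentication data for one GOP: a keyed HMAC-SHA-256 digest of the
    visual features extracted from the selected macroblocks (the feature
    encoding here is a placeholder, not the paper's exact format)."""
    return hmac.new(key, features, hashlib.sha256).digest()

key = b"secret-key"
features = bytes([17, 4, 250, 33])   # stand-in for extracted MB features
tag = gop_watermark(features, key)
assert len(tag) == 32                # SHA-256 digest length
# Any change to the features invalidates the tag, which is what makes
# the watermark fragile to content tampering:
assert gop_watermark(bytes([17, 4, 250, 34]), key) != tag
```

Using a keyed digest rather than a plain hash means an attacker who re-encodes or alters a GOP cannot forge a matching tag without the key.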

    Optimal coding unit decision for early termination in high efficiency video coding using enhanced whale optimization algorithm

    Get PDF
    Video compression is an emerging research topic in the field of block-based video encoders. With the growth of video coding technologies, High Efficiency Video Coding (HEVC) delivers superior coding performance, improving rate-distortion (RD) performance at the cost of increased encoding complexity. In video compression, over-sized coding units (CUs) have higher encoding complexity; the computational encoding cost and complexity therefore remain vital concerns, which can be cast as an optimization task. In this manuscript, an enhanced whale optimization algorithm (EWOA) is implemented to reduce the computational time and complexity of HEVC. In the EWOA, a cosine function is incorporated into the controlling parameter A, and two correlation factors are added to the WOA to control the position of the whales and regulate the movement of the search mechanism during the optimization and search processes. The bit streams in the luma coding tree block are selected using the EWOA, which defines the CU neighbours and is used in the HEVC. The results indicate that the EWOA achieves the best bit rate (BR), time saving, and peak signal-to-noise ratio (PSNR), showing 0.006–0.012 dB higher PSNR than existing models on real-time videos.
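The "encircling prey" update at the heart of the standard WOA, with a cosine schedule on the control parameter, might look like the sketch below. This is a generic sketch under stated assumptions: the paper's exact cosine formula for A and its two correlation factors are not reproduced here.

```python
import math
import random

def woa_step(whale, best, t, t_max, rng=random.random):
    """One 'encircling prey' update from the whale optimization algorithm.
    A cosine schedule is used for the shrinking parameter `a` (an assumed
    form, standing in for the EWOA's cosine-modified control parameter A)."""
    a = 2 * math.cos(math.pi * t / (2 * t_max))   # decays from 2 to 0
    new = []
    for x, b in zip(whale, best):
        r = rng()
        A = 2 * a * r - a          # |A| < 1 pulls the whale toward `best`
        C = 2 * rng()              # random weight on the prey position
        D = abs(C * b - x)         # distance to the (weighted) best whale
        new.append(b - A * D)
    return new

# With r = 0.5 the coefficient A vanishes, so the whale lands on `best`:
pos = woa_step([1.0, 2.0], [3.0, 4.0], t=0, t_max=10, rng=lambda: 0.5)
assert pos == [3.0, 4.0]
```

In a CU-decision setting, `best` would encode the most promising CU configuration found so far, and the swarm converges on it to terminate the quadtree search early.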

    A Secure Method of Optimized Low-Complexity Video Watermarking

    Get PDF
    In recent years numerous video watermarking schemes have been proposed, but the majority of them operate on uncompressed video. Here we propose a blind digital watermarking scheme for the H.264 compressed domain that reduces the number of computations. Thanks to its high compressibility, H.264 achieves high quality at low bit rates, which is why many applications adopt the H.264 codec. The proposed method selects the macroblock using a Differential Evolution algorithm based on a predetermined threshold, and the coefficient chosen for watermark embedding is modified based on the parity of the coefficients after transformation and quantization. The method keeps the bit-rate increase within acceptable bounds by selecting appropriate non-zero quantized AC residuals for embedding the watermark. Experimental results show good control over the bit-rate increase while preserving perceptual quality even after various attacks.
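Carrier selection and blind extraction over the parity of non-zero quantized AC residuals can be sketched like this (illustrative only; the Differential Evolution macroblock selection described in the abstract is omitted):

```python
def embeddable_positions(block):
    """Non-zero quantized AC residuals (index 0 is the DC term). Restricting
    embedding to already non-zero levels avoids creating new non-zero
    coefficients, which is what keeps the bit-rate increase small."""
    return [i for i, c in enumerate(block) if i > 0 and c != 0]

def read_bits(block):
    """Blind extraction: each carrier's watermark bit is its parity."""
    return [abs(block[i]) % 2 for i in embeddable_positions(block)]

blk = [9, 0, 4, -3, 0, 1]            # toy quantized residual block
assert embeddable_positions(blk) == [2, 3, 5]
assert read_bits(blk) == [0, 1, 1]   # parities of 4, -3, 1
```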

    Improved intra-prediction for video coding

    Full text link
    This thesis focuses on improving the HEVC (High Efficiency Video Coding) standard. HEVC is the newest video coding standard developed by the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG), as a successor to the popular state-of-the-art H.264/MPEG-4 AVC (Advanced Video Coding) standard. HEVC makes use of prediction to exploit redundancies in the signal and thereby achieve high compression efficiency. In particular, the Intra-Picture prediction block consists of predicting a block in the current frame using reference information from neighbouring blocks in the same frame. It supports three different modes: the angular mode with 33 different directions, the planar mode and the DC mode. HEVC is reportedly able to achieve on average more than 50% higher efficiency than H.264/MPEG-4 AVC, but this comes at the cost of very high computational complexity. The contributions of this thesis mainly consist of improvements to the Intra-Picture prediction block, with the goal of drastically reducing computational complexity while achieving compression efficiency comparable to conventional HEVC. On average, 16.5% of encoding operations can be saved using the proposed approach, at the cost of relatively small compression efficiency losses.
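Of the three intra modes, DC is the simplest: every sample of the block is predicted as the rounded mean of the reconstructed reference samples in the row above and the column to the left. A sketch (the boundary smoothing HEVC applies to small luma blocks is omitted):

```python
def dc_prediction(top, left):
    """HEVC-style DC intra mode for an n×n block: one flat value, the
    rounded mean of the top reference row and left reference column."""
    n = len(top)
    dc = (sum(top) + sum(left) + n) // (2 * n)   # +n rounds to nearest
    return [[dc] * n for _ in range(n)]

top  = [100, 102, 104, 106]   # reconstructed samples above the block
left = [ 98, 100, 100, 102]   # reconstructed samples to the left
pred = dc_prediction(top, left)
assert pred[0][0] == 102      # (412 + 400 + 4) // 8
```

The planar and angular modes interpolate between the same reference samples rather than averaging them, which is why all three modes share the same neighbouring-block dependency.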

    Robust drift-free bit-rate preserving H.264 watermarking

    Get PDF
    This paper presents a novel method for open-loop watermarking of H.264/AVC bitstreams. Existing watermarking algorithms designed for previous encoders, such as MPEG-2, cannot be directly applied to H.264/AVC, as H.264/AVC implements numerous new features that were not considered in previous coders. In contrast to previous watermarking techniques for H.264/AVC bitstreams, which embed the information after the reconstruction loop and perform drift compensation, we propose a completely new intra-drift-free watermarking algorithm. The major design goals of this novel H.264/AVC watermarking algorithm are runtime efficiency, high perceptual quality, (almost) no bit-rate increase and robustness to re-compression. The watermark is embedded very efficiently in the compressed domain after the reconstruction loop, i.e., all prediction results are reused. Nevertheless, intra-drift is avoided, as the watermark is embedded in such a way that the pixels used for prediction are kept unchanged; there is no drift because the pixels used in the intra-prediction process of H.264/AVC are not modified. For watermark detection, we use a two-stage cross-correlation. Our simulation results confirm that the proposed technique is robust against re-encoding and shows a negligible impact on both the bit-rate and the visual quality.
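Correlation-based detection can be sketched as a single normalized-correlation stage (the paper uses a two-stage cross-correlation; the coefficient values below are illustrative):

```python
import math

def ncc(x, w):
    """Normalized correlation between an extracted coefficient sequence x
    and the reference watermark w; detection compares this score to a
    threshold. (Single stage only, for illustration.)"""
    mx, mw = sum(x) / len(x), sum(w) / len(w)
    num = sum((a - mx) * (b - mw) for a, b in zip(x, w))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - mw) ** 2 for b in w))
    return num / den if den else 0.0

w = [1, -1, 1, -1, 1, -1, 1, -1]                  # reference watermark
host = [5, 5, 2, 2, 7, 7, 1, 1]                   # unmarked coefficients
marked = [c + 0.5 * s for c, s in zip(host, w)]   # additive embedding
assert ncc(marked, w) > ncc(host, w)              # marked stream correlates
```

Because detection needs only the suspect coefficients and the reference pattern, it is blind in the sense that the original unmarked bitstream is not required.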

    Optimizing the Compression Efficiency and Performance of the Kvazaar HEVC Video Encoder

    Get PDF
    Growing video resolutions have led to an increasing volume of Internet video traffic, which has created a need for more efficient video compression. New video coding standards, such as High Efficiency Video Coding (HEVC), enable a higher level of compression, but the complexity of the corresponding encoder implementations is also higher. Therefore, encoders that are efficient in terms of both compression and complexity are required. In this work, we implement four optimizations to the Kvazaar HEVC encoder: 1) uniform inter and intra cost comparison; 2) concurrency-oriented SAO implementation; 3) resolution-adaptive thread allocation; and 4) fast cost estimation of coding coefficients. Optimization 1 changes the selection criterion of the prediction mode in fast configurations, which greatly improves the coding efficiency. Optimization 2 replaces the implementation of one of the in-loop filters with one that better supports concurrent processing; this allows removing some dependencies between encoding tasks, which provides more opportunities for parallel processing to increase coding speed. Optimization 3 reduces the overhead of thread management by spawning fewer threads when there is not enough work for all available threads. Optimization 4 speeds up the computation of residual coefficient coding costs by switching to a faster but less accurate estimation. The impact of the optimizations is measured with two coding configurations of Kvazaar: the ultrafast preset, which aims for the fastest coding speed, and the veryslow preset, which aims for the best coding efficiency. Together, the introduced optimizations give a 2.8× speedup in the ultrafast configuration and a 3.4× speedup in the veryslow configuration. The trade-off for the speedup with the veryslow preset is a 0.15% bit rate increase. However, with the ultrafast preset, the optimizations also improve coding efficiency by 14.39%.

    An Analysis of VP8, a new video codec for the web

    Get PDF
    Video is an increasingly ubiquitous part of our lives. Fast and efficient video codecs are necessary to satisfy the increasing demand for video on the web and mobile devices. However, open standards and patent grants are paramount to the adoption of video codecs across different platforms and browsers. Google released VP8, developed by On2 Technologies, in May 2010 to compete with H.264, the current standard among video codecs, complete with source code, a specification and a perpetual patent grant. As the amount of video being created every day grows rapidly, the choice of codec is paramount; if a low-quality or restrictively licensed codec is used, the recorded video might be of little to no use. We sought to study VP8 and its quality versus its resource consumption compared to H.264, the most popular current video codec, so that readers may make an informed decision for themselves or their organizations about whether to use H.264, VP8, or something else entirely. We examined VP8 in detail, compared its theoretical complexity to H.264's and measured the efficiency of its current implementation. VP8 shares many facets of its design with H.264 and other Discrete Cosine Transform (DCT) based video codecs. However, VP8 is both simpler and less feature-rich than H.264, which may allow for rapid hardware and software implementations. As it was designed for the Internet and newer mobile devices, it contains fewer legacy features, such as interlacing, than H.264 supports. To perform quality measurements, the open-source VP8 implementation libvpx, the reference implementation, was used. For H.264, the open-source encoder x264 was used; this encoder has very high performance and is often rated at the top of its field in efficiency. The JM reference encoder was used to establish a baseline quality for H.264. Our findings indicate that VP8 performs very well at low bitrates, at resolutions at and below CIF.
VP8 may be able to successfully displace H.264 Baseline in the mobile streaming video domain. It offers higher quality at a lower bitrate for low-resolution images due to its high-performing entropy coder and non-contiguous macroblock segmentation. At higher resolutions, VP8 still outperforms H.264 Baseline, but H.264 High profile leads. At HD resolution (720p and above), H.264 is significantly better than VP8 due to its superior motion estimation and adaptive coding. There is little significant difference in intra-coding performance between H.264 and VP8. VP8's in-loop deblocking filter outperforms H.264's version. H.264's inter-coding, with full support for B-frames and weighting, outperforms VP8's alternate reference scheme, although this may improve in the future. On average, VP8's feature set is less complex than H.264's equivalents, which, along with its open-source implementation, may spur development in the future. These findings indicate that VP8 has strong fundamentals compared with H.264, but that it lacks optimization and maturity. It will likely improve as engineers optimize VP8's reference implementation, or when a competing implementation is developed. We recommend several areas for the VP8 developers to focus on in the future.
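The quality comparisons above rest on PSNR, the standard objective metric for codec evaluation. For reference, its computation over two frames (flattened to sample lists) is:

```python
import math

def psnr(ref, test, max_val=255):
    """Peak signal-to-noise ratio in dB between a reference frame and a
    decoded frame, both given as flat lists of 8-bit samples."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    return float("inf") if mse == 0 else 10 * math.log10(max_val ** 2 / mse)

ref  = [100, 120, 130, 140]
test = [101, 119, 131, 139]
# every sample off by 1 → MSE = 1 → PSNR = 10·log10(255²) ≈ 48.13 dB
assert abs(psnr(ref, test) - 48.13) < 0.01
```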