
    Spatiotemporal adaptive quantization for the perceptual video coding of RGB 4:4:4 data

    Due to the spectral sensitivity phenomenon of the Human Visual System (HVS), the color channels of raw RGB 4:4:4 sequences contain significant psychovisual redundancies; these redundancies can be perceptually quantized. The default quantization systems in the HEVC standard, Uniform Reconstruction Quantization (URQ) and Rate Distortion Optimized Quantization (RDOQ), are not perceptually optimized for the coding of RGB 4:4:4 video data. In this paper, we propose a novel spatiotemporal perceptual quantization technique named SPAQ. Targeting RGB 4:4:4 video data, SPAQ exploits HVS spectral sensitivity-related color masking in addition to spatial and temporal masking; it operates at the Coding Block (CB) level and the Prediction Unit (PU) level. The proposed technique perceptually adjusts the Quantization Step Size (QStep) at the CB level when high-variance spatial data is detected in the G, B and R CBs and when high-magnitude motion vectors are detected in the PUs. Compared with anchor 1 (HEVC HM 16.17 RExt), SPAQ considerably reduces bitrates, with a maximum reduction of approximately 80%. The Mean Opinion Scores (MOS) from the subjective evaluations, together with the SSIM scores, show that SPAQ achieves perceptually lossless compression compared with the anchors.
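
    The paper defines the exact masking model; as a rough illustration of the CB-level idea described above, a minimal sketch might look like the following, where the thresholds, scaling factors and the `perceptual_qstep` helper are illustrative assumptions rather than the authors' formulation.

```python
import numpy as np

def perceptual_qstep(base_qstep, cb_g, cb_b, cb_r, pu_motion_vectors,
                     var_thresh=100.0, mv_thresh=8.0, max_scale=2.0):
    """Illustrative CB-level QStep adjustment in the spirit of SPAQ:
    coarsen quantization only where spatial variance in all three color
    CBs and PU motion are high, i.e. where masking hides the distortion."""
    # Spatial activity: use the smallest channel variance so that a flat
    # channel (where distortion would be visible) keeps quantization fine.
    spatial_act = min(np.var(cb_g), np.var(cb_b), np.var(cb_r))
    # Temporal activity: largest motion vector magnitude among the PUs.
    motion_act = max(np.hypot(mvx, mvy) for mvx, mvy in pu_motion_vectors)

    scale = 1.0
    if spatial_act > var_thresh:    # strong spatial/color masking
        scale *= 1.0 + 0.5 * min(spatial_act / var_thresh - 1.0, 1.0)
    if motion_act > mv_thresh:      # strong temporal masking
        scale *= 1.0 + 0.5 * min(motion_act / mv_thresh - 1.0, 1.0)
    return base_qstep * min(scale, max_scale)
```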

    Efficient HEVC-based video adaptation using transcoding

    In a video transmission system, it is important to take into account the great diversity of network and end-user constraints. On the one hand, video content is typically streamed over networks with different bandwidth capacities; in many cases, the bandwidth is insufficient to transfer the video at its original quality. On the other hand, a single video is often played on multiple devices such as PCs, laptops, and cell phones, and a single representation of the video cannot satisfy all of their constraints. This diversity of network and device capabilities leads to the need for video adaptation techniques, e.g., a reduction of the bit rate or spatial resolution. Video transcoding, which modifies a property of the video without changing the coding format, is well known as an efficient adaptation solution. However, this approach comes with high computational complexity, resulting in large energy consumption in the network and possibly additional latency. This presentation provides several optimization strategies for the transcoding of HEVC (High Efficiency Video Coding, the latest video coding standard) streams. First, the computational complexity of a bit rate transcoder (transrater) is reduced: several techniques are proposed to speed up the encoder of the transrater, notably a machine-learning-based approach and a novel coding-mode evaluation strategy. Moreover, the motion estimation process of the encoder is optimized using decision theory and the proposed fast search patterns. Second, the issues and challenges of a spatial transcoder are addressed using machine-learning algorithms. Thanks to their performance, the proposed techniques are expected to significantly help HEVC gain popularity in a wide range of modern multimedia applications.
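
    The presentation does not state which learning algorithm or features are used; the sketch below only illustrates the general idea of a machine-learning-based fast split decision inside a transrater, with the feature set, the `DecisionTreeClassifier` choice, and the helper names being assumptions for illustration.

```python
from sklearn.tree import DecisionTreeClassifier

def cu_features(cu):
    # Hypothetical features available cheaply from the decoded input stream.
    return [cu["depth"], cu["residual_energy"], cu["mv_variance"], cu["coded_bits"]]

def train_split_model(training_cus):
    """Fit a shallow tree that predicts whether the transrater's encoder
    needs to evaluate a CU split (labels taken from a full-RDO reference run)."""
    X = [cu_features(c) for c in training_cus]
    y = [c["was_split"] for c in training_cus]
    return DecisionTreeClassifier(max_depth=4).fit(X, y)

def should_evaluate_split(model, cu):
    # Online decision in the transrating loop: skip split evaluation
    # (and its RD cost computation) when the model predicts "no split".
    return bool(model.predict([cu_features(cu)])[0])
```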

    Towards visualization and searching: a dual-purpose video coding approach

    In modern video applications, the role of the decoded video is much more than filling a screen for visualization. To offer powerful video-enabled applications, it is increasingly critical not only to visualize the decoded video but also to provide efficient searching capabilities for similar content. Video surveillance and personal communication applications are critical examples of these dual visualization and searching requirements. However, current video coding solutions are strongly biased towards the visualization needs. In this context, the goal of this work is to propose a dual-purpose video coding solution targeting both visualization and searching needs by adopting a hybrid coding framework where the usual pixel-based coding approach is combined with a novel feature-based coding approach. In this dual-purpose video coding solution, some frames are coded using a set of keypoint matches, which not only allow decoding for visualization but also provide the decoder with valuable feature-related information, extracted at the encoder from the original frames, instrumental for efficient searching. The proposed solution is based on a flexible joint Lagrangian optimization framework where pixel-based and feature-based processing are combined to find the most appropriate trade-off between visualization and searching performance. Extensive experimental results for the assessment of the proposed dual-purpose video coding solution under meaningful test conditions are presented. The results show the flexibility of the proposed coding solution to achieve different optimization trade-offs, notably competitive performance with respect to the state-of-the-art HEVC standard in terms of both visualization and searching performance.
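
    Assuming the joint cost takes the common Lagrangian form with an extra weight on the searching-oriented distortion (the thesis may define it differently), a minimal sketch of the mode decision could look as follows; the symbols `beta` and `lam` and the dictionary layout are illustrative.

```python
def joint_rd_cost(d_visual, d_search, rate, lam, beta):
    """Hypothetical joint Lagrangian cost for choosing between pixel-based
    and keypoint/feature-based coding of a frame or block: beta weights the
    searching-oriented distortion against the visualization-oriented one,
    and lam is the usual rate multiplier."""
    return d_visual + beta * d_search + lam * rate

def pick_mode(candidates, lam, beta):
    # candidates: list of dicts with measured distortions and rate per mode,
    # e.g. {"mode": "pixel", "d_vis": ..., "d_search": ..., "rate": ...}.
    return min(candidates,
               key=lambda c: joint_rd_cost(c["d_vis"], c["d_search"], c["rate"], lam, beta))
```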

    Weighted Combination of Sample Based and Block Based Intra Prediction in Video Coding

    The latest video compression standard, HEVC/H.265, was released in 2013 and provides a significant improvement over its predecessor AVC/H.264. However, with a constantly increasing demand for high definition video and streaming of large video files, there are still improvements to be made. Difficult content in video sequences, for example smoke, leaves and water that move irregularly, is hard to predict and can be troublesome at the prediction stage of video compression. In this thesis, carried out at Ericsson in Stockholm, the combination of sample based intra prediction (SBIP) and block based intra prediction (BBIP) is tested to see if it can improve the prediction of video sequences containing difficult content, here focusing on water. The combined methods are compared to HEVC intra prediction. All implementations have been done in Matlab. The results show that the combination reduces the Mean Squared Error (MSE) and can improve the Visual Information Fidelity (VIF) and the mean Structural Similarity (MSSIM). Moreover, the visual quality was improved, with more details and fewer blocking artefacts.
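
    As a rough illustration of such a weighted combination (the actual weighting scheme and weight derivation in the thesis may differ), the sketch below blends the two predictors sample-wise and searches for the scalar weight that minimizes MSE against the original block; all names and the candidate weight grid are assumptions.

```python
import numpy as np

def combined_intra_prediction(sbip_pred, bbip_pred, w):
    """Weighted, sample-wise combination of a sample-based intra prediction
    and a block-based intra prediction (e.g. a regular HEVC angular/planar
    predictor). w may be a scalar or a per-sample weight map in [0, 1]."""
    w = np.clip(np.asarray(w, dtype=float), 0.0, 1.0)
    return w * np.asarray(sbip_pred, float) + (1.0 - w) * np.asarray(bbip_pred, float)

def best_weight(original, sbip_pred, bbip_pred, weights=np.linspace(0.0, 1.0, 11)):
    # Pick the scalar weight minimizing MSE against the original block;
    # an encoder would additionally need to signal or derive this weight.
    mse = lambda p: float(np.mean((np.asarray(original, float) - p) ** 2))
    return min(weights, key=lambda w: mse(combined_intra_prediction(sbip_pred, bbip_pred, w)))
```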

    Foveated Video Streaming for Cloud Gaming

    Video gaming is generally a computationally intensive application, and specialized hardware such as Graphics Processing Units may be required to provide a pleasant user experience. Computational resources and power consumption are constraints which limit visually complex gaming on, for example, laptops, tablets and smartphones. Cloud gaming is a possible approach towards providing a pleasant gaming experience on thin clients which have limited computational and energy resources. In a cloud gaming architecture, the gameplay video is rendered and encoded in the cloud and streamed to a client, where it is displayed. User inputs are captured at the client and streamed back to the server, where they are relayed to the game. High quality of experience requires the streamed video to be of high visual quality, which translates to substantial downstream bandwidth requirements. The visual perception of the human eye is non-uniform, being maximal along the optical axis of the eye and dropping off rapidly away from it. This phenomenon, called foveation, makes the practice of encoding all areas of a video frame at the same resolution wasteful. In this thesis, foveated video streaming from a cloud gaming server to a cloud gaming client is investigated, and a prototype cloud gaming system with foveated video streaming is implemented. The cloud gaming server of the prototype is configured to encode gameplay video in a foveated fashion based on gaze location data provided by the cloud gaming client. The effect of foveated encoding on the output bitrate of the streamed video is investigated. Measurements are performed using games from various genres and with different player points of view to explore changes in video bitrate with different parameters of foveation. Latencies involved in foveated video streaming for cloud gaming, including the latency of the eye tracker used in the thesis, are also briefly discussed.
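
    The prototype's exact gaze-to-quantization mapping is not given in the abstract; a minimal sketch of one plausible per-block QP offset map, with the block size, fovea radius and offset range as illustrative assumptions, might look like this. The resulting map would be fed to an encoder ROI/QP-map interface.

```python
import numpy as np

def foveated_qp_map(frame_w, frame_h, gaze_xy, block=64,
                    base_qp=30, max_offset=12, fovea_radius=200.0):
    """Illustrative per-block QP map for foveated encoding: blocks near the
    reported gaze location keep the base QP, blocks further away receive
    progressively coarser quantization (higher QP)."""
    cols = (frame_w + block - 1) // block
    rows = (frame_h + block - 1) // block
    qp_map = np.empty((rows, cols), dtype=int)
    gx, gy = gaze_xy
    for r in range(rows):
        for c in range(cols):
            # Distance from the block centre to the gaze point, in pixels.
            bx, by = (c + 0.5) * block, (r + 0.5) * block
            d = np.hypot(bx - gx, by - gy)
            offset = max_offset * min(max(d - fovea_radius, 0.0) / fovea_radius, 1.0)
            qp_map[r, c] = base_qp + int(round(offset))
    return qp_map
```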

    Challenges and solutions in H.265/HEVC for integrating consumer electronics in professional video systems


    Deep-learning-based image compression and quality assessment

    Waseda University degree record number: Shin 8427 (Waseda University)

    Advanced methods and deep learning for video and satellite data compression

    The abstract is provided in the attachment.