
    360-Degree Panoramic Video Coding

    Virtual reality (VR) creates an immersive experience of the real world in a virtual environment through a computer interface. Thanks to the technological advances of recent years, VR technology is growing very fast, and its industrial use is now feasible. The technology is used in many applications, for example gaming, education, and streaming of live events. Since VR visualizes a real-world experience, the image or video content must represent the characteristics of the whole 3D world. Omnidirectional images and videos exhibit such characteristics and hence are used in VR applications. However, this content is not suitable for conventional video coding standards, which operate only on 2D image/video formats. Accordingly, the omnidirectional content is projected onto a 2D image plane using cylindrical or pseudo-cylindrical projections. In this work, coding methods for two projection formats that are popular for VR content are studied: equirectangular panoramic projection and pseudo-cylindrical panoramic projection. The equirectangular projection is the most commonly used format in VR applications due to its rectangular image plane and its wide support in software development environments. However, this projection stretches the nadir and zenith areas of the panorama, so these areas contain a relatively large portion of redundant data. The redundant information causes extra bitrate as well as longer encoding/decoding times. Regional down-sampling (RDS) methods are used in this work to reduce the extra bitrate caused by the over-stretched polar areas. These methods are categorized into persistent regional down-sampling (P-RDS) and temporal regional down-sampling (T-RDS) methods.
    In the P-RDS method, down-sampling is applied to all frames of the video, whereas in the T-RDS method only inter frames are down-sampled and intra frames are coded at full resolution to maintain the highest possible quality of those frames. Pseudo-cylindrical projections map the 3D spherical domain to a non-rectangular 2D image plane in which the polar areas carry no redundant information, so a more realistic sample distribution of the 3D world is achieved. However, because of the non-rectangular image plane, pseudo-cylindrical panoramas are not well suited to image/video coding standards, and the compression performance suffers. Therefore, two methods are investigated for improving the intra-frame and inter-frame compression of these panorama formats. In the intra-frame coding method, border edges are smoothed by modifying the image content in the non-effective picture area. In the inter-frame coding method, exploiting the 360-degree property of the content, the non-effective picture area of reference frames at the border is filled with content of the effective picture area from the opposite border to improve the performance of motion compensation. As a final contribution, quality assessment methods for VR applications are studied. Since VR content is mainly displayed on head-mounted displays (HMDs), which use a 3D coordinate system, measuring the quality of the decoded image/video with conventional methods does not represent the quality fairly. In this work, spherical quality metrics are investigated for measuring the quality of the proposed coding methods for omnidirectional panoramas. Moreover, a novel spherical quality metric (USS-PSNR) is proposed for evaluating the quality of VR images/video.
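
    The polar over-stretching described in this abstract can be quantified directly from the projection geometry. The following sketch (a hypothetical illustration, not the thesis's code) maps a spherical direction to equirectangular pixel coordinates and computes how many times a latitude row is oversampled, which is the redundancy that RDS methods target:

    ```python
    import math

    def sphere_to_equirect(yaw, pitch, width, height):
        """Map a direction on the unit sphere (yaw in [-pi, pi],
        pitch in [-pi/2, pi/2]) to equirectangular pixel coordinates.
        Every pitch row spans the full image width regardless of
        latitude, which is why polar areas are oversampled."""
        u = (yaw + math.pi) / (2 * math.pi)   # horizontal fraction
        v = (math.pi / 2 - pitch) / math.pi   # vertical fraction
        return u * (width - 1), v * (height - 1)

    def row_oversampling(pitch, width):
        """Ratio of pixels stored in a row to the samples actually
        needed: a latitude circle's circumference shrinks with
        cos(pitch), but the equirectangular row stays `width` wide."""
        needed = max(1.0, width * math.cos(pitch))
        return width / needed
    ```

    Near the equator the ratio is 1, while at 80° latitude a row stores roughly six times more samples than the sphere requires, which motivates down-sampling those regions.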

    Improvements for Projection-based Point Cloud Compression

    Point clouds for immersive media technology have received substantial interest in recent years. Such a representation of three-dimensional (3D) scenery provides freedom of movement for the viewer. However, transmitting and/or storing such content requires a large amount of data, which is not feasible on today's network technology. Thus, efficient compression algorithms are necessary to facilitate proper transmission and storage of such content. Recently, projection-based methods have been considered for compressing point cloud data. In these methods, the point cloud data are projected onto a 2D image plane in order to utilize current 2D video coding standards for compressing such content. These coding schemes provide significant improvement over state-of-the-art methods in terms of compression efficiency. However, projection-based point cloud compression requires special handling of boundaries and sparsity in the 2D projections. This thesis addresses these issues by proposing two methods which improve the compression performance of both intra-frame and inter-frame coding for 2D video coding of volumetric data while also reducing coding artifacts. The conducted experiments show that the bitrate requirements are reduced by around 26% and 29% for geometry and color attributes, respectively, compared to coding without the proposed algorithms. In addition, the proposed techniques show negligible complexity impact in terms of encoding and decoding runtimes.
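
    As a rough illustration of the projection step, the toy function below (an assumption-laden sketch, not the thesis's actual pipeline) orthographically projects points onto a 2D depth map; the cells left empty are exactly the sparse regions and patch boundaries that projection-based compression must treat specially:

    ```python
    def project_to_depth_map(points, resolution=8):
        """Project (x, y, z) points in the unit cube onto the XY
        plane, keeping the depth of the nearest point per pixel.
        Cells that no point maps to stay None: the sparsity that
        projection-based point cloud coding has to handle."""
        depth = [[None] * resolution for _ in range(resolution)]
        for x, y, z in points:
            col = min(int(x * resolution), resolution - 1)
            row = min(int(y * resolution), resolution - 1)
            if depth[row][col] is None or z < depth[row][col]:
                depth[row][col] = z  # keep the point nearest the camera
        return depth
    ```

    Running this on a sparse cloud leaves most cells unoccupied; a real codec would pack such patches, pad the empty cells, and signal an occupancy map before handing the image to a 2D video encoder.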

    Deep learning and bidirectional optical flow based viewport predictions for 360° video coding

    The rapid development of virtual reality applications continues to urge better compression of 360° videos owing to the large volume of content. These videos are typically converted to 2-D formats using various projection techniques in order to benefit from ad-hoc coding tools designed to support conventional 2-D video compression. Although the recently emerged video coding standard, Versatile Video Coding (VVC), introduces 360° video specific coding tools, it fails to prioritize the user-observed regions in 360° videos, represented by the rectilinear images called viewports. This leads to the encoding of redundant regions in the video frames, escalating the bitrate cost of the videos. In response to this issue, this paper proposes a novel 360° video coding framework for VVC which exploits user-observed viewport information to alleviate pixel redundancy in 360° videos. In this regard, bidirectional optical flow, a Gaussian filter, and Spherical Convolutional Neural Networks (Spherical CNN) are deployed to extract perceptual features and predict user-observed viewports. By appropriately fusing the predicted viewports on the 2-D projected 360° video frames, a novel Region-of-Interest (ROI) aware weight map is developed which can be used to mask the source video and introduce adaptive changes to the Lagrange and quantization parameters in VVC. Comprehensive experiments conducted in the context of VVC Test Model (VTM) 7.0 show that the proposed framework improves bitrate reduction, achieving an average bitrate saving of 5.85% and up to 17.15% at the same perceptual quality, measured using the Viewport Peak Signal-to-Noise Ratio (VPSNR).
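
    The weight-map-driven adaptation of the quantization and Lagrange parameters can be sketched as follows. The weight-to-QP mapping and the `max_delta` of 6 are illustrative assumptions rather than the paper's exact rule; only the QP-to-lambda relation follows the usual HEVC/VVC convention:

    ```python
    def roi_adjusted_qp(base_qp, weight, max_delta=6):
        """Derive a per-block QP from an ROI weight in [0, 1].
        weight=1 (inside a predicted viewport) keeps the base QP;
        weight=0 (never observed) coarsens quantization by
        `max_delta` steps. Mapping and max_delta are hypothetical."""
        delta = round((1.0 - weight) * max_delta)
        return base_qp + delta

    def lagrange_lambda(qp):
        """Conventional HEVC/VVC-style QP-to-lambda relation:
        lambda doubles for every increase of 3 in QP."""
        return 0.57 * 2.0 ** ((qp - 12) / 3.0)
    ```

    Coarsening the QP outside predicted viewports spends fewer bits on regions the user is unlikely to look at, which is where the reported bitrate savings at equal viewport quality come from.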

    Machine Learning for Multimedia Communications

    Machine learning is revolutionizing the way multimedia information is processed and transmitted to users. After intensive and powerful training, impressive efficiency/accuracy improvements have been achieved all along the transmission pipeline. For example, the high model capacity of learning-based architectures enables us to accurately model image and video behavior such that tremendous compression gains can be achieved. Similarly, error concealment, streaming strategies, and even user perception modeling have widely benefited from recent learning-oriented developments. However, learning-based algorithms often imply drastic changes to the way data are represented or consumed, meaning that the overall pipeline can be affected even though only a subpart of it is optimized. In this paper, we review the recent major advances that have been proposed all across the transmission chain, and we discuss their potential impact and the research challenges that they raise.

    Texture and Colour in Image Analysis

    Research in colour and texture has experienced major changes in the last few years. This book presents some recent advances in the field, specifically in the theory and applications of colour texture analysis. This volume also features benchmarks, comparative evaluations, and reviews.

    Improving Mixed Reality through Scene Understanding with Multi-Task Learning and Data Augmentation

    Waseda University diploma number: Shin 9140. Waseda University.


    Final report on the evaluation of RRM/CRRM algorithms

    Public deliverable of the EVEREST project. This deliverable provides a definition and a complete evaluation of the RRM/CRRM algorithms selected in D11 and D15, evolved and refined through an iterative process. The evaluation is carried out by means of simulations using the simulators provided in D07 and D14. Preprint.