4 research outputs found

    Visual Distortions in 360-degree Videos.

    Omnidirectional (or 360°) images and videos are emergent signals used in many areas, such as robotics and virtual/augmented reality. In particular, for virtual reality applications, they allow an immersive experience in which the user, wearing a head-mounted display, can interactively navigate through a scene with three degrees of freedom. Current approaches for capturing, processing, delivering, and displaying 360° content, however, present many open technical challenges and introduce several types of distortions in the visual signal. Some of these distortions are specific to the nature of 360° images and often differ from those encountered in classical visual communication frameworks. This paper provides a first comprehensive review of the most common visual distortions that alter 360° signals going through the different processing elements of the visual communication pipeline. While their impact on viewers' visual perception and on the immersive experience at large is still unknown (and thus an open research topic), this review serves the purpose of proposing a taxonomy of the visual distortions that can be encountered in 360° signals. Their underlying causes in the end-to-end 360° content distribution pipeline are identified. This taxonomy is essential as a basis for comparing different processing techniques, such as visual enhancement, encoding, and streaming strategies, and for enabling the effective design of new algorithms and applications. It is also a useful resource for the design of psycho-visual studies aiming to characterize human perception of 360° content in interactive and immersive applications.

    Joint Source Encoding and Networking Optimization for Panoramic Video Streaming over LTE-A Downlink

    With the increasing capacity of wireless networks, more people would like to consume 360-degree panoramic video (PV) in virtual reality (VR) applications because of its immersive experience. However, due to the super-high resolution of PV and the dynamic characteristics of wireless networks, it is very difficult to deliver PVs efficiently over wireless links. Treating PV encoding and networking independently, as is traditionally done, can also deteriorate PV quality, since it neglects the interplay between source encoding and networking. In this paper, a joint source encoding and networking optimization scheme is proposed to transmit PV over the LTE-A downlink. The PV encoding parameters used during source compression, the modulation and coding scheme (MCS), and relay selection during networking are jointly considered to optimize end-to-end PV quality. In addition, video quality for the region of interest (RoI, the likely viewport region) is enhanced by allowing a larger latency bound in the joint source encoding and networking optimization. Experimental results show that the proposed scheme achieves significant quality improvements for the received PV over traditional PV streaming approaches. © 2017 IEEE
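    The abstract above describes jointly choosing encoding parameters, the MCS, and a relay to maximize end-to-end quality under a latency bound. A minimal sketch of such a joint selection (not the paper's actual algorithm; the quality/latency models, parameter ranges, and RoI weighting below are all illustrative assumptions) could look like:

    ```python
    # Hypothetical sketch: exhaustively search combinations of a quantization
    # parameter (QP), an MCS rate, and a relay, keeping the combination that
    # maximizes an estimated RoI-weighted quality while meeting a latency
    # bound. All models and numbers here are toy placeholders.
    from itertools import product

    def estimate_quality(qp, mcs_rate, roi_weight):
        # Toy quality model: lower QP -> higher quality; RoI gets extra weight.
        return roi_weight * (51 - qp) * min(mcs_rate / 10.0, 1.0)

    def estimate_latency(qp, mcs_rate, relay_delay):
        # Toy latency model: encoded size shrinks as QP grows; transmit time
        # scales inversely with the MCS rate; the relay adds a fixed delay.
        bits = 1e6 * (52 - qp) / 51
        return bits / (mcs_rate * 1e5) + relay_delay

    def joint_select(qps, mcs_rates, relays, latency_bound, roi_weight=1.5):
        best, best_q = None, -1.0
        for qp, rate, (relay, delay) in product(qps, mcs_rates, relays):
            if estimate_latency(qp, rate, delay) > latency_bound:
                continue  # this combination violates the latency bound
            q = estimate_quality(qp, rate, roi_weight)
            if q > best_q:
                best, best_q = (qp, rate, relay), q
        return best, best_q

    choice, quality = joint_select(
        qps=[22, 27, 32, 37],
        mcs_rates=[2.0, 4.0, 8.0],                # relative spectral efficiency
        relays=[("direct", 0.00), ("relay1", 0.02)],
        latency_bound=0.5,                         # seconds per segment
    )
    ```

    The key point mirrored from the abstract is that source-side choices (QP) and network-side choices (MCS, relay) are evaluated together against a single end-to-end objective, rather than optimized independently.
    
    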