
    Complexity measurement and characterization of 360-degree content

    Get PDF
    The appropriate characterization of the test material, used for subjective evaluation tests and for benchmarking image and video processing algorithms and quality metrics, can be crucial in order to perform comparative studies that provide useful insights. This paper focuses on the characterization of 360-degree images. We discuss why it is important to take into account the geometry of the signal and the interactive nature of 360-degree content navigation for a perceptual characterization of these signals. In particular, we show that the computation of classical indicators of spatial complexity, commonly used for 2D images, might lead to different conclusions depending on the geometrical domain used.
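The point about geometry-dependent complexity indicators can be illustrated with a small sketch (not from the paper): a Spatial Information (SI)-style measure, in the spirit of ITU-T P.910, computed once on the raw equirectangular plane and once with cos-latitude weights that compensate for the oversampled poles. The image, resolution, and weighting scheme here are all hypothetical.

```python
import numpy as np

def spatial_information(img, weights=None):
    # SI-style indicator: dispersion of the gradient magnitude.
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    if weights is None:
        return float(mag.std())           # classic plane-domain computation
    w = weights / weights.sum()
    mean = (w * mag).sum()
    return float(np.sqrt((w * (mag - mean) ** 2).sum()))

# Hypothetical equirectangular luma image: H rows spanning [-90 deg, +90 deg].
H, W = 180, 360
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(H, W))

# cos(latitude) weights down-weight the oversampled polar rows.
lat = np.linspace(-np.pi / 2, np.pi / 2, H)
w = np.cos(lat)[:, None] * np.ones((1, W))

si_plane = spatial_information(img)       # 2D plane-domain SI
si_sphere = spatial_information(img, w)   # sphere-aware SI
```

Because the equirectangular projection stretches content near the poles, the plane-domain and sphere-aware values generally disagree, which is why the computation domain should be reported alongside the indicator.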

    Rate-Splitting for Intelligent Reflecting Surface-Aided Multiuser VR Streaming

    Full text link
    The growing demand for virtual reality (VR) applications requires wireless systems to provide a high transmission rate to support 360-degree video streaming to multiple users simultaneously. In this paper, we propose an intelligent reflecting surface (IRS)-aided rate-splitting (RS) VR streaming system. In the proposed system, RS exploits the shared interests of the users in VR streaming, and the IRS creates additional propagation channels to support the transmission of high-resolution 360-degree videos. The IRS also mitigates the performance bottleneck caused by the requirement that all RS users must be able to decode the common message. We formulate an optimization problem for maximizing the achievable bitrate of the 360-degree video subject to the quality-of-service (QoS) constraints of the users. We propose a deep deterministic policy gradient with imitation learning (Deep-GRAIL) algorithm, in which we leverage deep reinforcement learning (DRL) and the hidden convexity of the formulated problem to optimize the IRS phase shifts, RS parameters, beamforming vectors, and bitrate selection of the 360-degree video tiles. We also propose RavNet, a deep neural network customized for policy learning in our Deep-GRAIL algorithm. Performance evaluation based on a real-world VR streaming dataset shows that the proposed IRS-aided RS VR streaming system outperforms several baseline schemes in terms of system sum-rate, achievable bitrate of the 360-degree videos, and online execution runtime. Our results also reveal the respective performance gains obtained from RS and IRS for improving the QoS in multiuser VR streaming systems. Comment: 20 pages, 12 figures. This paper has been submitted to an IEEE journal for possible publication
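The common-message bottleneck that the IRS is said to mitigate can be sketched with a toy rate model (hypothetical, not the paper's formulation): the common stream must be decodable by every user, so its rate is capped by the weakest user's channel, and improving that user's channel raises the rate delivered to everyone.

```python
import numpy as np

def rs_sum_rate(snrs, alpha):
    # Toy rate-splitting model: a fraction alpha of the power carries the
    # common stream, which every user must decode, so its rate is the
    # minimum over users; the remainder carries per-user private streams.
    # Interference, SIC ordering, and beamforming are deliberately ignored.
    r_common = min(np.log2(1 + alpha * s) for s in snrs)
    r_private = sum(np.log2(1 + (1 - alpha) * s) for s in snrs)
    return r_common + r_private

weak = rs_sum_rate([10.0, 2.0], alpha=0.5)     # user 2 caps the common rate
boosted = rs_sum_rate([10.0, 8.0], alpha=0.5)  # e.g. an IRS improves user 2
```

Even in this simplified model, boosting only the weakest user's SNR increases both the common rate and the sum-rate, which is the qualitative role the paper assigns to the IRS.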

    Relocation relocation: Does the use of virtual reality 360 degree images of a hospice improve perception at time of referral?

    Get PDF
    Background: Patients treated in the community and in hospitals may be offered transfer to a hospice for symptom management. Many of these patients may be unfamiliar with this setting, and some may feel anticipatory fear of this unknown environment. Our cancer centre uses virtual reality headsets and 360-degree photo/video technology on a digital media pad (tablet computer) to give patients a digital tour of what the regional hospices look like, in order to help decision-making. Aims: To evaluate whether the use of a 360-degree visual tour of the local hospices, similar to what estate agents may offer for virtual house viewings, is useful to patients and whether it is easily implementable; and to explore whether it impacts on palliative care patients' perception. Methods: 360-degree filming and high-resolution photography were undertaken as part of a quality improvement project in key areas of two local hospices, and uploaded to hospital and hospice websites, headsets, and media pads. An online survey was created to assess the patient experience of the 360-degree digital views. Over a 6-month period, patients on the ward in the hospital who were willing to participate, known to the palliative care team, and/or who had an active hospice referral in place were offered a digital tour. Results: Of 25 patients, 90% felt more informed about hospices after seeing the 360-degree views. 95% of patients stated they would recommend the digital hospice tour to other patients. All preferred the electronic 360-degree tour to the paper patient information leaflets. Staff members felt the 360-degree photo tour was easily integrated into their day-to-day work. Conclusions: The use of 360-degree hospice views can make a significant difference to patients' perception of what hospices look like and addresses the fear of the unknown. Whilst this evaluation was conducted prior to Covid-19, the use of the electronic media tour of hospices increased significantly in our inpatient unit during the pandemic, as patients and relatives were not able to visit the hospice before deciding on relocation to this setting.

    Bridge the Gap Between VQA and Human Behavior on Omnidirectional Video: A Large-Scale Dataset and a Deep Learning Model

    Full text link
    Omnidirectional video enables spherical stimuli with a 360×180° viewing range. Meanwhile, only the viewport region of omnidirectional video can be seen by the observer through head movement (HM), and an even smaller region within the viewport can be clearly perceived through eye movement (EM). Thus, the subjective quality of omnidirectional video may be correlated with the HM and EM of human behavior. To fill the gap between subjective quality and human behavior, this paper proposes a large-scale visual quality assessment (VQA) dataset of omnidirectional video, called VQA-OV, which collects 60 reference sequences and 540 impaired sequences. Our VQA-OV dataset provides not only the subjective quality scores of sequences but also the HM and EM data of subjects. By mining our dataset, we find that the subjective quality of omnidirectional video is indeed related to HM and EM. Hence, we develop a deep learning model, which embeds HM and EM, for objective VQA on omnidirectional video. Experimental results show that our model significantly improves the state-of-the-art performance of VQA on omnidirectional video. Comment: Accepted by ACM MM 201
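One way to see why HM data matters for objective VQA (a hypothetical pooling rule, not the paper's model) is to weight a per-pixel quality map by an empirical head-movement fixation density, so that the regions viewers actually attend to dominate the pooled score. All names and values below are illustrative.

```python
import numpy as np

def hm_weighted_score(quality_map, fixation_map):
    # Pool a per-pixel quality map with a non-negative head-movement
    # fixation density; heavily viewed regions dominate the pooled score.
    w = fixation_map / fixation_map.sum()
    return float((w * quality_map).sum())

q = np.array([[1.0, 0.0], [0.0, 0.0]])           # only top-left region is good
uniform = hm_weighted_score(q, np.ones((2, 2)))  # naive average pooling: 0.25
peaked = hm_weighted_score(q, q)                 # viewers fixate top-left: 1.0
```

The same distorted frame receives very different scores depending on where viewers look, which is the behavioral signal the VQA-OV dataset makes available.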

    Co-projection-plane based 3-D padding for polyhedron projection for 360-degree video

    Full text link
    The polyhedron projection for 360-degree video is becoming increasingly popular since it introduces much less geometric distortion than the equirectangular projection. However, in the polyhedron projection, very obvious texture discontinuity can be observed in the area near the face boundary. Such a texture discontinuity may lead to serious quality degradation when motion compensation crosses the discontinuous face boundary. To solve this problem, in this paper, we first propose to fill the corresponding neighboring faces in the suitable positions as the extension of the current face, to keep approximate texture continuity. Then a co-projection-plane based 3-D padding method is proposed to project the reference pixels in the neighboring face onto the current face to guarantee exact texture continuity. Under the proposed scheme, the reference pixel is always projected onto the same plane as the current pixel when performing motion compensation, so that the texture discontinuity problem is solved. The proposed scheme is implemented in the reference software of High Efficiency Video Coding. Compared with the existing method, the proposed algorithm significantly improves the rate-distortion performance. The experimental results clearly demonstrate that the texture discontinuity at the face boundary is well handled by the proposed algorithm. Comment: 6 pages, 9 figures
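The co-projection-plane idea can be sketched geometrically (a simplified unit-cube model, not the paper's implementation): a reference pixel on a neighboring cube face is re-projected, along the ray through the cube centre, onto the extended plane of the current face, so motion compensation never crosses a change of plane.

```python
import numpy as np

def project_to_current_plane(point):
    # Re-project a 3-D point lying on any cube face onto the current
    # face's plane z = 1 by scaling along the ray through the cube centre.
    p = np.asarray(point, dtype=float)
    assert p[2] > 0, "point must lie in front of the current face"
    return p / p[2]

# A pixel on the neighbouring +x face (x = 1), just past the shared edge:
q = project_to_current_plane([1.0, 0.2, 0.8])
# q = (1.25, 0.25, 1.0): the sample now sits on the current face's
# extended plane, outside the face's own [-1, 1] extent.
```

Padding the current face with such re-projected samples keeps the reference texture geometrically continuous across the edge, instead of the abrupt discontinuity produced by naively copying the neighboring face.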