Spherical clustering of users navigating 360° content
In Virtual Reality (VR) applications, understanding how users explore the
omnidirectional content is important to optimize content creation, to develop
user-centric services, or even to detect disorders in medical applications.
Clustering users based on their common navigation patterns is a first step
towards understanding user behaviour. However, classical clustering techniques
fail to identify these common paths, since they usually focus on minimizing a
simple distance metric. In this paper, we argue that minimizing such a distance
metric does not necessarily identify users who experience similar navigation
paths in the VR domain. Therefore, we propose a graph-based method to
identify clusters of users who are attending the same portion of the spherical
content over time. The proposed solution takes into account the spherical
geometry of the content and aims at clustering users based on the actual
overlap of displayed content among users. Our method is tested on real VR user
navigation patterns. Results show that our solution leads to clusters in which
at least 85% of the content displayed by one user is shared among the other
users belonging to the same cluster.
Comment: 5 pages. Published in: ICASSP 2019 - 2019 IEEE International
Conference on Acoustics, Speech and Signal Processing (ICASSP).
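The abstract above proposes clustering users by the actual overlap of displayed spherical content rather than a simple distance metric. The paper's exact construction is not given here, so the following is only an illustrative sketch: it builds a graph whose edges connect users whose viewing directions are close enough on the sphere (a crude linear proxy for viewport overlap, since the true overlap of spherical caps is nonlinear), then returns connected components as clusters. The function name, the field of view, and the overlap threshold are all assumptions, not values from the paper.

```python
import numpy as np

def angular_distance(u, v):
    """Great-circle angle (radians) between two unit view directions."""
    return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))

def overlap_clusters(directions, fov=np.deg2rad(100), threshold=0.5):
    """Cluster users whose circular viewports plausibly overlap.

    directions: (n_users, 3) array of unit view-direction vectors.
    An edge links two users when their angular separation implies an
    overlap fraction above `threshold` (linear approximation). Clusters
    are the connected components of the resulting graph.
    """
    n = len(directions)
    max_sep = fov * (1.0 - threshold)  # max separation for "enough" overlap
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if angular_distance(directions[i], directions[j]) <= max_sep:
                adj[i].append(j)
                adj[j].append(i)
    # Connected components via iterative DFS.
    labels, cluster = [-1] * n, 0
    for s in range(n):
        if labels[s] != -1:
            continue
        stack = [s]
        while stack:
            v = stack.pop()
            if labels[v] == -1:
                labels[v] = cluster
                stack.extend(adj[v])
        cluster += 1
    return labels
```

With two users looking roughly in the same direction and a third looking 90° away, the first two fall into one cluster and the third into its own.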
Dynamic Adaptive Point Cloud Streaming
High-quality point clouds have recently gained interest as an emerging form
of representing immersive 3D graphics. Unfortunately, these 3D media are bulky
and severely bandwidth intensive, which makes it difficult to stream them to
resource-constrained and mobile devices. This has led researchers to propose
efficient and adaptive approaches for streaming high-quality point clouds.
In this paper, we run a pilot study towards dynamic adaptive point cloud
streaming, and extend the concept of dynamic adaptive streaming over HTTP
(DASH) towards DASH-PC, a dynamic adaptive bandwidth-efficient and view-aware
point cloud streaming system. DASH-PC can tackle the huge bandwidth demands of
dense point cloud streaming while, at the same time, semantically linking to
human visual acuity to maintain high visual quality when needed. In order to
describe the various quality representations, we propose multiple thinning
approaches to spatially sub-sample point clouds in the 3D space, and design a
DASH Media Presentation Description (MPD) manifest specific to point cloud
streaming. Our initial evaluations show that we can achieve significant
bandwidth and performance improvement on dense point cloud streaming with minor
negative quality impacts compared to the baseline scenario in which no
adaptation is applied.
Comment: 6 pages. 23rd ACM Packet Video Workshop (PV'18), June 12-15, 2018,
Amsterdam, The Netherlands.
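The abstract above mentions "thinning approaches to spatially sub-sample point clouds in the 3D space" to produce multiple quality representations. The paper's specific thinning methods are not reproduced here; the sketch below shows one generic strategy, voxel-grid sub-sampling, that replaces all points in each occupied voxel with their centroid. The function name and voxel size are illustrative assumptions.

```python
import numpy as np
from collections import defaultdict

def voxel_thin(points, voxel_size):
    """Spatial sub-sampling: one representative point per occupied voxel.

    points: (n, 3) array of XYZ coordinates. Each point is assigned to
    the voxel containing it, and the centroid of every occupied voxel is
    returned. A larger voxel_size yields a sparser (lower-quality,
    lower-bitrate) representation.
    """
    buckets = defaultdict(list)
    for p in points:
        key = tuple(np.floor(p / voxel_size).astype(int))
        buckets[key].append(p)
    return np.array([np.mean(b, axis=0) for b in buckets.values()])
```

Running this at several voxel sizes would give a ladder of density levels, analogous to the multiple quality representations a DASH-style manifest describes.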
Bridge the Gap Between VQA and Human Behavior on Omnidirectional Video: A Large-Scale Dataset and a Deep Learning Model
Omnidirectional video enables spherical stimuli spanning the full viewing range. Meanwhile, only the viewport region of omnidirectional
video can be seen by the observer through head movement (HM), and an even
smaller region within the viewport can be clearly perceived through eye
movement (EM). Thus, the subjective quality of omnidirectional video may be
correlated with viewers' HM and EM. To bridge the gap between
subjective quality and human behavior, this paper proposes a large-scale visual
quality assessment (VQA) dataset of omnidirectional video, called VQA-OV, which
collects 60 reference sequences and 540 impaired sequences. Our VQA-OV dataset
provides not only the subjective quality scores of sequences but also the HM
and EM data of subjects. By mining our dataset, we find that the subjective
quality of omnidirectional video is indeed related to HM and EM. Hence, we
develop a deep learning model, which embeds HM and EM, for objective VQA on
omnidirectional video. Experimental results show that our model significantly
improves the state-of-the-art performance of VQA on omnidirectional video.
Comment: Accepted by ACM MM 201
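The abstract above argues that subjective quality correlates with head and eye movement, and embeds HM/EM into a deep model. The paper's architecture is not described here, so the sketch below illustrates only the underlying idea in its simplest form: weighting per-pixel distortion by an attention map (e.g. a fixation heatmap derived from HM/EM data) so that errors in frequently viewed regions count more. This is an assumed toy formulation, not the paper's model.

```python
import numpy as np

def weighted_mse(ref, dist, attention):
    """Mean squared error weighted by a non-negative attention map.

    ref, dist: reference and distorted frames (same shape).
    attention: e.g. a head/eye-movement fixation heatmap; it is
    normalized to sum to 1 so the result stays comparable across maps.
    """
    w = attention / attention.sum()
    return float(np.sum(w * (ref - dist) ** 2))
```

With a uniform attention map this reduces to the ordinary MSE; concentrating the map on a region makes distortions there dominate the score.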