360° videos have received widespread attention due to the realistic
and immersive experiences they offer users. To date, accurately modeling user
perception on 360° displays remains a challenging problem. In this paper,
we exploit the visual characteristics of 360° projection and display and
extend the popular just noticeable difference (JND) model to spherical JND
(SJND). First, we propose a quantitative 2D-JND model by jointly considering
spatial contrast sensitivity, luminance adaptation, and the texture masking effect.
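The abstract does not give the exact combination; a common pixel-domain sketch, in the style of the nonlinear additivity model for masking (NAMM) and offered here only as an assumed form, sums a luminance-adaptation term and a texture-masking term and subtracts their overlap:

\[
\mathrm{JND}_{2D}(x,y) \;=\; T_{l}(x,y) + T_{t}(x,y) - C_{l,t}\cdot \min\{T_{l}(x,y),\, T_{t}(x,y)\},
\]

where $T_{l}$ and $T_{t}$ are the luminance-adaptation and texture-masking thresholds, $C_{l,t}\in(0,1)$ gauges the overlap between the two effects, and spatial contrast sensitivity enters through how $T_{t}$ is estimated from local gradients.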
In particular, our model introduces an entropy-based region classification and
utilizes different parameters for different types of regions for better
modeling performance. Second, we extend our 2D-JND model to SJND by jointly
exploiting latitude projection and field of view during 360° display.
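One plausible instantiation, assuming equirectangular projection (the exact mapping is not given in the abstract), follows from the fact that a pixel at latitude $\phi$ is stretched by $1/\cos\phi$ on the sphere, so its visibility threshold can be relaxed accordingly, with a further relaxation outside the viewport:

\[
\mathrm{SJND}(x,y) \;=\; \frac{\mathrm{JND}_{2D}(x,y)}{\cos\phi(y)}\cdot g_{\mathrm{FOV}}(x,y),
\]

where $\phi(y)$ is the latitude of pixel row $y$ and $g_{\mathrm{FOV}}(x,y)\ge 1$ raises the threshold for content outside the current field of view.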
With this operation, SJND reflects both the characteristics of the human visual
system and those of 360° display. Third, our SJND model is more consistent
with user perceptions in subjective tests and tolerates more distortion at
lower bit rates during 360° video compression. To
further examine the effectiveness of our SJND model, we embed it in Versatile
Video Coding (VVC) compression. Compared with state-of-the-art methods, our
SJND-VVC framework significantly reduces the bit rate with negligible loss in
visual quality.
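The abstract does not detail how the SJND model is embedded in VVC; a common way to embed a JND model in an encoder is to suppress prediction residuals that fall below the visibility threshold before transform and quantization. The following is a minimal NumPy sketch of that idea under this assumption; the function and its interface are illustrative, not part of VVC or its reference software.

```python
import numpy as np

def suppress_residual(residual: np.ndarray, sjnd: np.ndarray) -> np.ndarray:
    """Zero out residual samples the SJND model predicts are invisible.

    residual : prediction residual block
    sjnd     : per-sample SJND visibility thresholds (same shape)
    """
    visible = np.abs(residual) > sjnd
    # Keep only the supra-threshold part of each visible sample,
    # preserving its sign; sub-threshold residuals are dropped entirely.
    return np.where(visible, np.sign(residual) * (np.abs(residual) - sjnd), 0.0)

# Toy usage: a flat SJND threshold of 3 over a 4x4 residual block.
res = np.array([[ 5., -2.,  0.,  7.],
                [ 1., -8.,  3., -3.],
                [ 0.,  4., -1.,  2.],
                [ 6., -5.,  9., -4.]])
print(suppress_residual(res, np.full_like(res, 3.0)))
```

Because the suppressed residual has smaller magnitude, it quantizes to fewer and smaller coefficients, which is how such schemes trade imperceptible distortion for bit-rate savings.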