Stereoscopic Omnidirectional Image Quality Assessment Based on Predictive Coding Theory
Objective quality assessment of stereoscopic omnidirectional images is a
challenging problem since it is influenced by multiple aspects such as
projection deformation, field of view (FoV) range, binocular vision, visual
comfort, etc. Existing studies show that classic 2D or 3D image quality
assessment (IQA) metrics are not able to perform well for stereoscopic
omnidirectional images. Moreover, very few research works have focused on
evaluating the perceptual visual quality of omnidirectional images, especially
for stereoscopic omnidirectional images. In this paper, based on the predictive
coding theory of the human visual system (HVS), we propose a stereoscopic
omnidirectional image quality evaluator (SOIQE) to cope with the
characteristics of 3D 360-degree images. SOIQE comprises two modules: a
predictive coding theory based binocular rivalry module and a multi-view fusion
module. In the binocular rivalry module, we introduce predictive coding theory
to simulate the competition between high-level patterns and calculate the
similarity and rivalry dominance to obtain the quality scores of viewport
images. Moreover, we develop the multi-view fusion module to aggregate the
quality scores of viewport images with the help of both content weight and
location weight. The proposed SOIQE is a parametric model that requires no
regression learning, which ensures its interpretability and generalization
performance. Experimental results on our published stereoscopic omnidirectional
image quality assessment database (SOLID) demonstrate that our proposed SOIQE
method outperforms state-of-the-art metrics. Furthermore, we also verify the
effectiveness of each proposed module on both public stereoscopic image
datasets and panoramic image datasets.
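As a rough illustration of the multi-view fusion module described above, the sketch below aggregates per-viewport quality scores using a content weight and a location weight. The specific weighting choices here (texture variance for content, cosine of latitude for location) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fuse_viewport_scores(scores, viewports, latitudes_deg):
    """Aggregate per-viewport quality scores into a global score.

    scores        : (N,) quality score of each viewport image
    viewports     : list of N grayscale viewport images (2D arrays)
    latitudes_deg : (N,) latitude of each viewport centre, in degrees

    Content weight: texture-rich viewports (higher variance) count more.
    Location weight: near-equatorial viewports, which are viewed more
    often, count more. Both weightings are assumptions for illustration.
    """
    content_w = np.array([v.var() for v in viewports], dtype=np.float64)
    location_w = np.cos(np.deg2rad(np.asarray(latitudes_deg, dtype=np.float64)))
    w = content_w * location_w
    w /= w.sum() + 1e-12                      # normalize to a convex combination
    return float(np.dot(w, scores))
```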
Quality Assessment of Stereoscopic 360-degree Images from Multi-viewports
Objective quality assessment of stereoscopic panoramic images has become a
challenging problem owing to the rapid growth of 360-degree content. Different
from traditional 2D image quality assessment (IQA), more complex aspects are
involved in 3D omnidirectional IQA, especially the unlimited field of view (FoV)
and extra depth perception, which make it difficult to evaluate the quality of
experience (QoE) of 3D omnidirectional images. In this paper, we propose a
multi-viewport based full-reference stereo 360 IQA model. Due to the freely
changeable viewports when browsing in the head-mounted display (HMD), our
proposed approach processes the image inside FoV rather than the projected one
such as equirectangular projection (ERP). In addition, since overall QoE
depends on both image quality and depth perception, we utilize features
estimated from the difference map between the left and right views, which can
reflect disparity. The depth perception features, along with binocular image qualities,
are employed to further predict the overall QoE of 3D 360 images. The
experimental results on our public Stereoscopic OmnidirectionaL Image quality
assessment Database (SOLID) show that the proposed method achieves a
significant improvement over some well-known IQA metrics and can accurately
reflect the overall QoE of perceived images.
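To make the disparity cue concrete, here is a minimal sketch in which simple statistics of the left/right difference map stand in for the depth perception features; the exact features used in the paper are not specified in the abstract.

```python
import numpy as np

def lr_difference_features(left, right):
    """Coarse depth-perception features from the difference map between
    the left and right views (illustrative stand-ins, assumed here).

    left, right : 2D grayscale arrays of the same shape.
    Returns the mean and standard deviation of the absolute difference,
    which grow with the amount of binocular disparity in the scene.
    """
    diff = np.abs(left.astype(np.float64) - right.astype(np.float64))
    return float(diff.mean()), float(diff.std())
```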
No-Reference Quality Assessment for 360-degree Images by Analysis of Multi-frequency Information and Local-global Naturalness
360-degree/omnidirectional images (OIs) have attracted remarkable attention
due to the increasing applications of virtual reality (VR). Compared to
conventional 2D images, OIs can provide a more immersive experience to
consumers, benefiting from their higher resolution and plentiful fields of view
(FoVs). Moreover, OIs are usually observed in a head-mounted display (HMD)
without references. Therefore, an efficient blind quality assessment method, which is
specifically designed for 360-degree images, is urgently desired. In this
paper, motivated by the characteristics of the human visual system (HVS) and
the viewing process of VR visual contents, we propose a novel and effective
no-reference omnidirectional image quality assessment (NR OIQA) algorithm based
on Multi-Frequency Information and Local-Global Naturalness (MFILGN).
Specifically, inspired by the frequency-dependent property of visual cortex, we
first decompose the projected equirectangular projection (ERP) maps into
wavelet subbands. Then, the entropy intensities of low and high frequency
subbands are exploited to measure the multi-frequency information of OIs.
In addition to considering the global naturalness of ERP maps, and motivated by
the FoVs actually browsed, we extract natural scene statistics features from each
viewport image as a measure of local naturalness. With the proposed
multi-frequency information measurement and local-global naturalness
measurement, we utilize support vector regression as the final image quality
regressor to train the quality evaluation model from visual quality-related
features to human ratings. To our knowledge, the proposed model is the first
no-reference quality assessment method for 360-degree images that combines
multi-frequency information and image naturalness. Experimental results on two
publicly available OIQA databases demonstrate that our proposed MFILGN
outperforms state-of-the-art approaches.
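A minimal sketch of the multi-frequency information measurement, assuming the PyWavelets package and a histogram-based Shannon entropy; the wavelet family, decomposition level, and bin count are illustrative choices, not values from the paper.

```python
import numpy as np
import pywt  # PyWavelets

def subband_entropies(erp_gray, wavelet="db2", level=2, bins=256):
    """Entropy of the wavelet subbands of an ERP map.

    Decomposes the image with a 2D discrete wavelet transform, then
    measures the Shannon entropy of each subband's coefficient
    histogram: one value for the low-frequency (approximation) subband
    and one per level for the pooled high-frequency (detail) subbands.
    """
    def entropy(x):
        hist, _ = np.histogram(x.ravel(), bins=bins)
        p = hist[hist > 0].astype(np.float64)
        p /= p.sum()
        return float(-(p * np.log2(p)).sum())

    coeffs = pywt.wavedec2(erp_gray, wavelet, level=level)
    low = entropy(coeffs[0])                                  # approximation
    high = [entropy(np.concatenate([c.ravel() for c in d]))   # details per level
            for d in coeffs[1:]]
    return low, high
```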
Blind Omnidirectional Image Quality Assessment with Viewport Oriented Graph Convolutional Networks
Quality assessment of omnidirectional images has become increasingly urgent
due to the rapid growth of virtual reality applications. Different from
traditional 2D images and videos, omnidirectional contents can provide
consumers with freely changeable viewports and a larger field of view covering
the spherical surface, which makes the objective
quality assessment of omnidirectional images more challenging. In this paper,
motivated by the characteristics of the human visual system (HVS) and the
viewing process of omnidirectional contents, we propose a novel Viewport
oriented Graph Convolution Network (VGCN) for blind omnidirectional image
quality assessment (IQA). Generally, observers tend to give the subjective
rating of a 360-degree image after viewing and aggregating information from
different viewports while browsing the spherical scenery. Therefore, in order to model
the mutual dependency of viewports in the omnidirectional image, we build a
spatial viewport graph. Specifically, the graph nodes are first defined as
selected viewports with a higher probability of being seen, inspired by the
observation that the HVS is more sensitive to structural information. Then,
these nodes are connected by spatial relations to capture interactions among
them. Finally, reasoning on the proposed graph is performed via graph
convolutional networks. Moreover, we simultaneously obtain global quality using
the entire omnidirectional image without viewport sampling to boost the
performance according to the viewing experience. Experimental results
demonstrate that our proposed model outperforms state-of-the-art full-reference
and no-reference IQA metrics on two public omnidirectional IQA databases.
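The abstract does not spell out VGCN's layers, but reasoning on a viewport graph typically builds on the standard graph-convolution propagation rule (Kipf and Welling). A minimal NumPy sketch with assumed shapes:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step over the spatial viewport graph:
    H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W).

    A : (N, N) adjacency matrix encoding spatial relations of viewports
    H : (N, F) node features, e.g. one feature vector per viewport
    W : (F, F') weight matrix (learned in practice; arbitrary here)
    """
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)    # ReLU activation
```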
Complexity measurement and characterization of 360-degree content
The appropriate characterization of the test material, used for subjective evaluation tests and for benchmarking image and video processing algorithms and quality metrics, can be crucial in order to perform comparative studies that provide useful insights. This paper focuses on the characterization of 360-degree images. We discuss why it is important to take into account the geometry of the signal and the interactive nature of 360-degree content navigation for a perceptual characterization of these signals. In particular, we show that the computation of classical indicators of spatial complexity, commonly used for 2D images, might lead to different conclusions depending on the geometrical domain used.
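To make the point concrete, the sketch below computes the classical SI indicator (standard deviation of the Sobel gradient magnitude, cf. ITU-T P.910) on an equirectangular map, optionally discounting the oversampled polar rows with a cosine-of-latitude weight. The cosine weighting is one plausible spherical correction, used here only to illustrate how the geometrical domain changes the measurement.

```python
import numpy as np
from scipy import ndimage

def spatial_information(erp_gray, sphere_weighted=True):
    """Spatial complexity (SI) of an equirectangular 360-degree image.

    Plain SI treats the ERP map as an ordinary 2D image; the weighted
    variant down-weights rows near the poles, which the ERP projection
    oversamples relative to the sphere.
    """
    img = erp_gray.astype(np.float64)
    mag = np.hypot(ndimage.sobel(img, axis=1), ndimage.sobel(img, axis=0))
    if not sphere_weighted:
        return float(mag.std())
    h = img.shape[0]
    lat = (np.arange(h) + 0.5) / h * np.pi - np.pi / 2   # latitude per row
    w = np.broadcast_to(np.cos(lat)[:, None], mag.shape)
    mean = (w * mag).sum() / w.sum()
    var = (w * (mag - mean) ** 2).sum() / w.sum()
    return float(np.sqrt(var))
```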
Perceptual Quality Assessment of Omnidirectional Audio-visual Signals
Omnidirectional videos (ODVs) play an increasingly important role in the
application fields of medicine, education, advertising, tourism, etc. Assessing
the quality of ODVs is important for service providers seeking to improve the
user's Quality of Experience (QoE). However, most existing quality assessment studies
for ODVs only focus on the visual distortions of videos, while ignoring that
the overall QoE also depends on the accompanying audio signals. In this paper,
we first establish a large-scale audio-visual quality assessment dataset for
omnidirectional videos, which includes 375 distorted omnidirectional
audio-visual (A/V) sequences generated from 15 high-quality pristine
omnidirectional A/V contents, and the corresponding perceptual audio-visual
quality scores. Then, we design three baseline methods for full-reference
omnidirectional audio-visual quality assessment (OAVQA), which combine existing
state-of-the-art single-mode audio and video QA models via multimodal fusion
strategies. We validate the effectiveness of the A/V multimodal fusion method
for OAVQA on our dataset, which provides a new benchmark for omnidirectional
QoE evaluation. Our dataset is available at https://github.com/iamazxl/OAVQA.
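The abstract does not specify the fusion strategies; a weighted linear combination of the single-mode scores is one of the simplest late-fusion baselines and is sketched below, with an assumed trade-off parameter.

```python
def fuse_av_quality(video_q, audio_q, alpha=0.7):
    """Late fusion of single-mode quality predictions into an overall
    audio-visual score. alpha is an assumed trade-off between the
    visual and audio contributions, not a value from the paper.
    """
    return alpha * video_q + (1.0 - alpha) * audio_q
```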
Camera positioning for 3D panoramic image rendering
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London. Virtual camera realisation and the proposition of a trapezoidal camera architecture are the two broad contributions of this thesis. Firstly, multiple cameras and their arrangement constitute a critical component that affects the integrity of visual content acquisition for multi-view video. Currently, linear, convergence, and divergence arrays are the prominent camera topologies adopted. However, the large number of cameras required and their synchronisation are two of the prominent challenges usually encountered. The use of virtual cameras can significantly reduce the number of physical cameras used with respect to any of the known camera structures, hence alleviating some of the other implementation issues. This thesis explores the use of image-based rendering, with and without geometry, in the implementations leading to the realisation of virtual cameras. The virtual camera implementation was carried out from the perspective of a depth map (geometry) and the use of multiple image samples (no geometry). Prior to the virtual camera realisation, the generation of depth maps was investigated using region match measures widely known for solving the image point correspondence problem. The constructed depth maps were compared with ones generated using the dynamic programming approach. In both the geometry and no-geometry approaches, the virtual cameras lead to the rendering of views from a textured depth map, the construction of a 3D panoramic image of a scene by stitching multiple image samples and performing superposition on them, and the computation of a virtual scene from a stereo pair of panoramic images. The quality of these rendered images was assessed through either objective or subjective analysis in the Imatest software. Furthermore, metric reconstruction of a scene was performed by re-projecting pixel points from multiple image samples with a single centre of projection, using the sparse bundle adjustment algorithm. The statistical summary obtained after applying this algorithm provides a gauge of the efficiency of the optimisation step. The optimised data was then visualised in the Meshlab software environment, providing the reconstructed scene.
Secondly, with any of the well-established camera arrangements, all cameras are usually constrained to the same horizontal plane. Occlusion therefore becomes an extremely challenging problem, and a robust camera set-up is required in order to recover the hidden parts of scene objects. To adequately meet the visibility condition for scene objects, given that occlusion of the same scene objects can occur, a multi-plane camera structure is highly desirable. This thesis therefore also explores a trapezoidal camera structure for image acquisition. The approach is to assess the feasibility and potential of several physical cameras of the same model sparsely arranged on the edges of an efficient trapezoid graph. This is implemented in both Matlab and Maya; the depth maps rendered in Matlab are of better quality.
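As an illustration of the region match measures mentioned for the point correspondence problem, the sketch below estimates a dense disparity map by brute-force block matching with the sum of absolute differences (SAD); block size and disparity search range are assumed parameters.

```python
import numpy as np

def sad_disparity(left, right, block=9, max_disp=64):
    """Dense disparity from a rectified stereo pair via SAD block
    matching (brute force, illustrative only).

    left, right : 2D grayscale arrays of the same shape.
    Returns an integer disparity map (0 where not computed).
    """
    h, w = left.shape
    r = block // 2
    L, R = left.astype(np.float64), right.astype(np.float64)
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = L[y - r:y + r + 1, x - r:x + r + 1]
            costs = [np.abs(patch - R[y - r:y + r + 1,
                                      x - d - r:x - d + r + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```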
Mobile Robots Navigation
Mobile robots navigation includes different interrelated activities: (i) perception, as obtaining and interpreting sensory information; (ii) exploration, as the strategy that guides the robot in selecting the next direction to go; (iii) mapping, involving the construction of a spatial representation using the sensory information perceived; (iv) localization, as the strategy to estimate the robot's position within the spatial map; (v) path planning, as the strategy to find a path, optimal or not, towards a goal location; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research work of several authors all over the world. Research cases are documented in 32 chapters organized into 7 categories, described next.
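As a small, self-contained taste of the path planning activity listed above, the sketch below finds a shortest path on an occupancy grid with breadth-first search; the grid representation and 4-connectivity are illustrative assumptions, not the book's method.

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid (0 = free cell,
    1 = obstacle). Returns the list of (row, col) cells from start to
    goal, or None if the goal is unreachable.
    """
    h, w = len(grid), len(grid[0])
    prev = {start: None}                      # visited set + back-pointers
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                      # walk back-pointers to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        y, x = cell
        for nxt in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            ny, nx = nxt
            if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] == 0 and nxt not in prev:
                prev[nxt] = cell
                queue.append(nxt)
    return None
```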