    Blind assessment for stereo images considering binocular characteristics and deep perception map based on deep belief network

    In recent years, blind image quality assessment has gained popularity for 2D images and video, but its application to 3D content remains to be generalized. In this paper, we propose an effective blind metric that evaluates stereo images via a deep belief network (DBN). The method is based on the wavelet transform and uses both 2D features extracted from each monocular image as image content descriptions and 3D features from a novel depth perception map (DPM) as a depth perception description. In particular, the DPM is introduced to quantify longitudinal depth information in a way that aligns with human stereo visual perception. More specifically, the 2D features are local histogram of oriented gradient (HoG) features from the high-frequency wavelet coefficients together with global statistical features including magnitude, variance, and entropy, while the global statistical features of the DPM serve as the 3D features. Subsequently, considering binocular characteristics, an effective binocular weight model based on multiscale energy estimation of the left and right images is adopted to obtain the content quality. In the training and testing stages, three DBN models, one per feature type, are used to obtain the final score. Experimental results demonstrate that the proposed stereo image quality evaluation model clearly outperforms existing methods and achieves higher consistency with subjective quality assessments.
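
    The binocular weight model is only described at a high level above; the sketch below shows one plausible reading, assuming the weight of each view is its share of the total multiscale wavelet detail energy. The function names and the choice of wavelet are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of energy-based binocular weighting, assuming the
# weight is the per-view share of multiscale wavelet detail energy.
import numpy as np
import pywt  # PyWavelets

def multiscale_energy(img: np.ndarray, wavelet: str = "db2", levels: int = 3) -> float:
    """Sum of squared detail coefficients over all decomposition levels."""
    coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=levels)
    # coeffs[0] is the approximation band; the rest are (cH, cV, cD) tuples.
    return sum(float(np.sum(c ** 2)) for bands in coeffs[1:] for c in bands)

def binocular_quality(left: np.ndarray, right: np.ndarray,
                      q_left: float, q_right: float) -> float:
    """Energy-weighted combination of per-view content quality scores."""
    e_l, e_r = multiscale_energy(left), multiscale_energy(right)
    w_l = e_l / (e_l + e_r + 1e-12)
    return w_l * q_left + (1.0 - w_l) * q_right
```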

    Binocular Rivalry Oriented Predictive Auto-Encoding Network for Blind Stereoscopic Image Quality Measurement

    Stereoscopic image quality measurement (SIQM) has become increasingly important for guiding stereo image processing and communication systems due to the widespread use of 3D content. Compared with conventional methods that rely on hand-crafted features, deep-learning-oriented measurements have achieved remarkable performance in recent years. However, most existing deep SIQM evaluators are not specifically built for stereoscopic content and consider little prior domain knowledge of the 3D human visual system (HVS) in their network design. In this paper, we develop a Predictive Auto-encoDing Network (PAD-Net) for blind/no-reference stereoscopic image quality measurement. In the first stage, inspired by the predictive coding theory that the cognition system tries to match bottom-up visual signals with top-down predictions, we adopt an encoder-decoder architecture to reconstruct the distorted inputs. In addition, motivated by the binocular rivalry phenomenon, we leverage the likelihood and prior maps generated from the predictive coding process in a Siamese framework to assist SIQM. In the second stage, a quality regression network is applied to the fusion image to acquire the perceptual quality prediction. The performance of PAD-Net has been extensively evaluated on three benchmark databases, and its superiority has been well validated on both symmetrically and asymmetrically distorted stereoscopic images under various distortion types.
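
    As a rough illustration of the first stage, the sketch below builds a weight-shared (Siamese) encoder-decoder that reconstructs each distorted view; the reconstruction residual stands in crudely for the likelihood map. All layer sizes are illustrative assumptions and not the published PAD-Net architecture.

```python
# Toy PyTorch sketch of a Siamese predictive auto-encoder (assumed layout).
import torch
import torch.nn as nn

class ViewAutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

ae = ViewAutoEncoder()  # one module reused for both views = shared weights
left, right = torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256)
recon_l, recon_r = ae(left), ae(right)
# Per-pixel reconstruction error as a crude likelihood map for rivalry.
likelihood_l = (left - recon_l).abs().mean(dim=1, keepdim=True)
```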

    Towards Top-Down Stereoscopic Image Quality Assessment via Stereo Attention

    Stereoscopic image quality assessment (SIQA) plays a crucial role in evaluating and improving the visual experience of 3D content. Existing binocular-property and attention-based methods for SIQA have achieved promising performance. However, these bottom-up approaches are inadequate in exploiting the inherent characteristics of the human visual system (HVS). This paper presents a novel network for SIQA via stereo attention, employing a top-down perspective to guide the quality assessment process. Our proposed method realizes guidance from high-level binocular signals down to low-level monocular signals, while the binocular and monocular information is calibrated progressively throughout the processing pipeline. We design a generalized Stereo AttenTion (SAT) block to implement the top-down philosophy in stereo perception. This block utilizes the fusion-generated attention map as a high-level binocular modulator, influencing the representation of two low-level monocular features. Additionally, we introduce an Energy Coefficient (EC) to account for recent findings that binocular responses in the primate primary visual cortex are less than the sum of the monocular responses. The adaptive EC can flexibly tune the magnitude of the binocular response, thus enhancing the formation of robust binocular features within our framework. To extract the most discriminative quality information from the summation and subtraction of the two branches of monocular features, we utilize a dual-pooling strategy that applies min-pooling and max-pooling to the respective branches. Experimental results highlight the superiority of our top-down method in simulating the properties of visual perception and advancing the state of the art in the SIQA field. The code of this work is available at https://github.com/Fanning-Zhang/SATNet
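
    The sketch below is one illustrative reading of the SAT block: a fusion-derived attention map modulates both monocular feature maps, a learnable energy coefficient scales the binocular (summation) response, and min/max pooling are applied to the two branches. The layer shapes and which branch receives which pooling are assumptions, since the abstract does not specify them.

```python
# Illustrative stereo-attention block (assumed shapes and layout).
import torch
import torch.nn as nn

class StereoAttentionSketch(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, 1)
        self.attn = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())
        self.ec = nn.Parameter(torch.tensor(0.5))  # learnable energy coefficient

    def forward(self, f_left, f_right):
        # High-level binocular attention map from the fused features.
        a = self.attn(self.fuse(torch.cat([f_left, f_right], dim=1)))
        f_left, f_right = a * f_left, a * f_right    # top-down modulation
        summation = self.ec * (f_left + f_right)     # binocular response < plain sum
        difference = f_left - f_right
        # Dual pooling: max over the sum branch, min over the difference branch.
        return torch.cat([summation.amax(dim=(2, 3)),
                          difference.amin(dim=(2, 3))], dim=1)  # (N, 2*channels)
```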

    Stereoscopic video quality assessment based on 3D convolutional neural networks

    Research on stereoscopic video quality assessment (SVQA) plays an important role in promoting the development of stereoscopic video systems. Existing SVQA metrics rely on hand-crafted features, which are inaccurate and time-consuming because of the diversity and complexity of stereoscopic video distortions. This paper introduces a 3D convolutional neural network (CNN) based SVQA framework that can model not only local spatio-temporal information but also global temporal information, with cubic difference video patches as input. First, instead of using hand-crafted features, we design a 3D CNN architecture to automatically and effectively capture local spatio-temporal features. Then we employ a quality score fusion strategy that considers global temporal clues to obtain the final video-level predicted score. Extensive experiments on two public stereoscopic video quality datasets show that the proposed method correlates highly with human perception and outperforms state-of-the-art methods by a large margin. We also show that our 3D CNN features have more desirable properties for SVQA than the hand-crafted features of previous methods, and that our 3D CNN features combined with support vector regression (SVR) can further boost performance. In addition, without complex preprocessing or GPU acceleration, the proposed method is demonstrated to be computationally efficient and easy to use.
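
    To make the pipeline concrete, the sketch below regresses a patch-level score from cubic difference patches with a small 3D CNN. The depths and widths are illustrative, not the paper's architecture, and "difference patch" is read here as the per-pixel left-right difference of short clips.

```python
# Toy 3D-CNN quality regressor over cubic video patches (assumed sizes).
import torch
import torch.nn as nn

class SVQA3DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.regressor = nn.Linear(32, 1)

    def forward(self, x):            # x: (N, 1, T, H, W) difference patches
        return self.regressor(self.features(x).flatten(1))

left_clip = torch.rand(2, 1, 8, 32, 32)    # grayscale clips, T=8 frames
right_clip = torch.rand(2, 1, 8, 32, 32)
patch_scores = SVQA3DCNN()(left_clip - right_clip)
# Video-level quality would then fuse patch scores over time.
```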

    No reference quality assessment of stereo video based on saliency and sparsity

    With the popularity of video technology, stereoscopic video quality assessment (SVQA) has become increasingly important. Existing SVQA methods cannot achieve good performance because the information in the videos is not fully utilized. In this paper, we consider various kinds of information in the videos together and construct a simple model, based on saliency and sparsity, to combine and analyze the diverse features. First, we utilize the 3D saliency map of the sum map, which retains the basic information of the stereoscopic video, as a valid tool for evaluating video quality. Second, we use sparse representation to decompose the sum map of the 3D saliency into coefficients, and then calculate features based on the sparse coefficients to obtain an effective expression of the video's information. Next, in order to reduce the correlation between the features, we feed them into a stacked auto-encoder, mapping the vectors to a higher-dimensional space and adding a sparsity constraint, and subsequently input them into a support vector machine to obtain the final quality assessment scores. Throughout this process, we take advantage of saliency and sparsity to extract and simplify the features. The experiments show that the proposed method fits well with the subjective scores.
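
    The sparse-decomposition step can be sketched as follows. Note this is a stand-in: it uses scikit-learn dictionary learning with OMP to obtain sparse coefficients over sum-map patches and summarizes them into features, whereas the paper additionally passes the features through a stacked auto-encoder before the SVM. Patch size, dictionary size, and the statistics taken are assumptions.

```python
# Stand-in for the sparse-representation feature step (assumed parameters).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
patches = rng.random((500, 64))    # placeholder for 8x8 saliency sum-map patches
dico = MiniBatchDictionaryLearning(n_components=32,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=5,
                                   random_state=0)
codes = dico.fit(patches).transform(patches)   # sparse coefficients per patch
# Summarize sparse codes into a fixed-length per-video feature vector.
feats = np.concatenate([np.abs(codes).mean(axis=0), codes.var(axis=0)])
# With per-video features X and subjective scores y, a regressor such as
# sklearn.svm.SVR(kernel="rbf").fit(X, y) would play the role of the
# final quality predictor.
```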

    Stereoscopic image quality assessment by deep convolutional neural network

    In this paper, we propose a no-reference (NR) quality assessment method for stereoscopic images based on a deep convolutional neural network (DCNN). It is inspired by the internal generative mechanism (IGM) in the human brain, according to which the brain first analyzes perceptual information and then extracts effective visual information. To simulate the interaction process in the human visual system (HVS) when perceiving the visual quality of stereoscopic images, we construct a two-channel DCNN to evaluate the visual quality of stereoscopic images. First, we design a Siamese network to extract high-level semantic features of the left- and right-view images, simulating the process of information extraction in the brain. Second, to imitate the information interaction process in the HVS, we combine the high-level features of the left- and right-view images by convolutional operations. Finally, the information after interactive processing is used to estimate the visual quality of the stereoscopic image. Experimental results show that the proposed method estimates the visual quality of stereoscopic images accurately, which also demonstrates the effectiveness of the proposed two-channel convolutional neural network in simulating the perception mechanism of the HVS. This work was supported in part by the Natural Science Foundation of China under Grants 61822109 and 61571212, the Fok Ying Tung Education Foundation under Grant 161061, and the Natural Science Foundation of Jiangxi under Grant 20181BBH80002.
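
    A minimal sketch of the two-channel idea is given below: one shared (Siamese) convolutional branch per view, convolutional fusion of the concatenated features to model the interaction, and a regression head. Layer sizes are illustrative assumptions.

```python
# Toy two-channel (Siamese) quality network with convolutional fusion.
import torch
import torch.nn as nn

class TwoChannelSIQA(nn.Module):
    def __init__(self):
        super().__init__()
        self.branch = nn.Sequential(            # shared between both views
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fusion = nn.Sequential(            # interaction by convolution
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, left, right):
        f = torch.cat([self.branch(left), self.branch(right)], dim=1)
        return self.head(self.fusion(f).flatten(1))

score = TwoChannelSIQA()(torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128))
```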

    Stereoscopic Omnidirectional Image Quality Assessment Based on Predictive Coding Theory

    Objective quality assessment of stereoscopic omnidirectional images is a challenging problem since it is influenced by multiple aspects such as projection deformation, field of view (FoV) range, binocular vision, and visual comfort. Existing studies show that classic 2D or 3D image quality assessment (IQA) metrics do not perform well for stereoscopic omnidirectional images. However, very few research works have focused on evaluating the perceptual visual quality of omnidirectional images, especially stereoscopic omnidirectional images. In this paper, based on the predictive coding theory of the human visual system (HVS), we propose a stereoscopic omnidirectional image quality evaluator (SOIQE) to cope with the characteristics of 3D 360-degree images. Two modules are involved in SOIQE: a predictive coding theory based binocular rivalry module and a multi-view fusion module. In the binocular rivalry module, we introduce predictive coding theory to simulate the competition between high-level patterns and calculate the similarity and rivalry dominance to obtain the quality scores of viewport images. Moreover, we develop the multi-view fusion module to aggregate the quality scores of viewport images with the help of both content weights and location weights. The proposed SOIQE is a parametric model without the need for regression learning, which ensures its interpretability and generalization performance. Experimental results on our published stereoscopic omnidirectional image quality assessment database (SOLID) demonstrate that our proposed SOIQE method outperforms state-of-the-art metrics. Furthermore, we also verify the effectiveness of each proposed module on both public stereoscopic image datasets and panoramic image datasets.
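
    The multi-view fusion module reduces to a weighted average once per-viewport scores and weights are available; a minimal sketch is below, where the multiplicative combination of content and location weights is an assumption.

```python
# Minimal multi-view fusion sketch (multiplicative weights assumed).
import numpy as np

def fuse_viewports(scores, content_w, location_w):
    """Aggregate per-viewport quality scores with content and location weights."""
    scores = np.asarray(scores, dtype=float)
    w = np.asarray(content_w, dtype=float) * np.asarray(location_w, dtype=float)
    return float(np.sum(w * scores) / np.sum(w))

# Example: equatorial viewports weighted higher than a polar one.
overall = fuse_viewports(scores=[3.8, 4.1, 2.9],
                         content_w=[1.0, 0.8, 0.6],
                         location_w=[1.0, 1.0, 0.4])
```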

    Event-Driven Technologies for Reactive Motion Planning: Neuromorphic Stereo Vision and Robot Path Planning and Their Application on Parallel Hardware

    Robotics is increasingly becoming a key factor in technological progress. Despite impressive advances in recent decades, mammalian brains still outperform even the most powerful machines in vision and motion planning. Industrial robots are very fast and precise, but their planning algorithms are not capable enough for highly dynamic environments such as those required for human-robot collaboration (HRC). Without fast and adaptive motion planning, safe HRC cannot be guaranteed. Neuromorphic technologies, including visual sensors and hardware chips, operate asynchronously and thus process spatio-temporal information very efficiently. Event-based visual sensors in particular already outperform conventional, synchronous cameras in many applications. Event-based methods therefore have great potential to enable faster and more energy-efficient motion control algorithms for HRC. This thesis presents an approach to flexible, reactive motion control of a robot arm, in which exteroception is achieved through event-based stereo vision and path planning is implemented in a neural representation of the configuration space. The multi-view 3D reconstruction is evaluated through a qualitative analysis in simulation and transferred to a stereo system of event-based cameras. A demonstrator with an industrial robot is used to evaluate the reactive, collision-free online planning; it is also used for a comparative study of sampling-based planners. This is complemented by a benchmark of parallel hardware solutions, with robotic path planning chosen as the test scenario. The results show that the proposed neural solutions are an effective way to realize robot control for dynamic scenarios. This work lays a foundation for neural solutions in adaptive manufacturing processes, including in collaboration with humans, without sacrificing speed or safety. It thereby paves the way for the integration of brain-inspired hardware and algorithms into industrial robotics and HRC.
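
    Since the neural planner itself is specialized neuromorphic hardware, the sketch below shows only the underlying principle with a conventional stand-in: wavefront (breadth-first) propagation over a discretized configuration-space grid, from which a path is obtained by descending the distance field. This is an assumption-laden illustration, not the thesis's implementation.

```python
# Wavefront planning over a discretized configuration space (stand-in).
from collections import deque
import numpy as np

def wavefront(grid: np.ndarray, goal: tuple) -> np.ndarray:
    """BFS distance-to-goal field; grid cells with value 1 are obstacles."""
    dist = np.full(grid.shape, -1, dtype=int)
    dist[goal] = 0
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < grid.shape[0] and 0 <= nc < grid.shape[1]
                    and grid[nr, nc] == 0 and dist[nr, nc] < 0):
                dist[nr, nc] = dist[r, c] + 1
                queue.append((nr, nc))
    return dist  # follow decreasing distances from any start cell to the goal

grid = np.zeros((5, 5), dtype=int)
grid[2, 1:4] = 1                      # a wall the wavefront must flow around
field = wavefront(grid, goal=(4, 4))
```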