56 research outputs found

    PEA265: Perceptual Assessment of Video Compression Artifacts

    Full text link
    The most widely used video encoders share a common hybrid coding framework that includes block-based motion estimation/compensation and block-based transform coding. Despite their high coding efficiency, the encoded videos often exhibit visually annoying artifacts, denoted as Perceivable Encoding Artifacts (PEAs), which significantly degrade the visual Quality-of-Experience (QoE) of end users. To monitor and improve visual QoE, it is crucial to develop subjective and objective measures that can identify and quantify various types of PEAs. In this work, we make the first attempt to build a large-scale subject-labelled database composed of H.265/HEVC compressed videos containing various PEAs. The database, namely the PEA265 database, includes four types of spatial PEAs (i.e., blurring, blocking, ringing and color bleeding) and two types of temporal PEAs (i.e., flickering and floating), each with at least 60,000 image or video patches carrying positive and negative labels. To objectively identify these PEAs, we train Convolutional Neural Networks (CNNs) using the PEA265 database. The state-of-the-art ResNeXt proves capable of identifying each type of PEA with high accuracy. Furthermore, we define PEA pattern and PEA intensity measures to quantify the PEA levels of compressed video sequences. We believe that the PEA265 database and our findings will benefit the future development of video quality assessment methods and perceptually motivated video encoders. Comment: 10 pages, 15 figures, 4 tables
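    The abstract above mentions patch-level PEA classification and a per-type intensity measure. As an illustrative sketch only (the paper's actual PEA pattern/intensity definitions may differ), one simple way to aggregate per-patch classifier decisions into an intensity score is the fraction of patches flagged positive per artifact type:

```python
# Hypothetical sketch: quantify a per-type PEA "intensity" as the fraction of
# patches a (pre-trained) classifier flags as containing that artifact.
# This is an assumption for illustration, not the PEA265 paper's definition.

PEA_TYPES = ["blurring", "blocking", "ringing", "color_bleeding",
             "flickering", "floating"]

def pea_intensity(detections):
    """detections: dict mapping artifact type -> list of 0/1 patch labels."""
    return {t: sum(d) / len(d) if d else 0.0
            for t, d in detections.items()}

example = {"blocking": [1, 0, 1, 1], "ringing": [0, 0, 0, 1]}
print(pea_intensity(example))  # {'blocking': 0.75, 'ringing': 0.25}
```

    A per-frame vector of such intensities over the six artifact types could then serve as a simple "PEA pattern" descriptor for a compressed sequence.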

    Visual Distortions in 360-degree Videos.

    Get PDF
    Omnidirectional (or 360°) images and videos are emergent signals being used in many areas, such as robotics and virtual/augmented reality. In particular, for virtual reality applications, they allow an immersive experience in which the user can interactively navigate through a scene with three degrees of freedom, wearing a head-mounted display. Current approaches for capturing, processing, delivering, and displaying 360° content, however, present many open technical challenges and introduce several types of distortions in the visual signal. Some of the distortions are specific to the nature of 360° images and often differ from those encountered in classical visual communication frameworks. This paper provides a first comprehensive review of the most common visual distortions that alter 360° signals going through the different processing elements of the visual communication pipeline. While their impact on viewers' visual perception and the immersive experience at large is still unknown (and thus remains an open research topic), this review serves the purpose of proposing a taxonomy of the visual distortions that can be encountered in 360° signals. Their underlying causes in the end-to-end 360° content distribution pipeline are identified. This taxonomy is essential as a basis for comparing different processing techniques, such as visual enhancement, encoding, and streaming strategies, and allowing the effective design of new algorithms and applications. It is also a useful resource for the design of psycho-visual studies aiming to characterize human perception of 360° content in interactive and immersive applications.

    Visual Distortions in 360-degree Videos

    Get PDF
    Omnidirectional (or 360-degree) images and videos are emergent signals in many areas such as robotics and virtual/augmented reality. In particular, for virtual reality, they allow an immersive experience in which the user is provided with a 360-degree field of view and can navigate throughout a scene, e.g., through the use of Head Mounted Displays. Since it represents the full 360-degree field of view from one point of the scene, omnidirectional content is naturally represented as spherical visual signals. Current approaches for capturing, processing, delivering, and displaying 360-degree content, however, present many open technical challenges and introduce several types of distortions in these visual signals. Some of the distortions are specific to the nature of 360-degree images, and often differ from those encountered in the classical image communication framework. This paper provides a first comprehensive review of the most common visual distortions that alter 360-degree signals undergoing state-of-the-art processing in common applications. While their impact on viewers' visual perception and on the immersive experience at large is still unknown (and thus remains an open research topic), this review serves the purpose of identifying the main causes of visual distortions in the end-to-end 360-degree content distribution pipeline. It is essential as a basis for benchmarking different processing techniques, allowing the effective design of new algorithms and applications. It is also necessary for the deployment of proper psychovisual studies to characterise the human perception of these new images in interactive and immersive applications.

    Stereoscopic video quality assessment using binocular energy

    Get PDF
    Stereoscopic imaging is becoming increasingly popular. However, to ensure the best quality of experience, there is a need to develop more robust and accurate objective metrics for stereoscopic content quality assessment. Existing stereoscopic image and video metrics are either extensions of conventional 2D metrics (with added depth or disparity information) or are based on relatively simple perceptual models. Consequently, they tend to lack the accuracy and robustness required for stereoscopic content quality assessment. This paper introduces full-reference stereoscopic image and video quality metrics based on a Human Visual System (HVS) model incorporating important physiological findings on binocular vision. The proposed approach is based on the following three contributions. First, it introduces a novel HVS model extending previous models to include the phenomena of binocular suppression and recurrent excitation. Second, an image quality metric based on the novel HVS model is proposed. Finally, an optimised temporal pooling strategy is introduced to extend the metric to the video domain. Both image and video quality metrics are obtained via a training procedure to establish a relationship between subjective scores and objective measures of the HVS model. The metrics are evaluated using publicly available stereoscopic image/video databases as well as a new stereoscopic video database. An extensive experimental evaluation demonstrates the robustness of the proposed quality metrics. This indicates a considerable improvement with respect to the state-of-the-art, with average correlations with subjective scores of 0.86 for the proposed stereoscopic image metric and 0.89 and 0.91 for the proposed stereoscopic video metrics.
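    The abstract describes extending an image metric to video via an optimised temporal pooling strategy. The paper's pooling is learned via training; purely as a generic illustration of the pooling idea (not the authors' method), per-frame scores can be combined with a Minkowski mean, whose exponent controls how strongly poor frames dominate the video score:

```python
# Illustrative temporal pooling: collapse per-frame quality scores into one
# video-level score. A Minkowski mean with p=1 is the plain average; larger p
# emphasises the frames with the largest score magnitudes. This is a generic
# sketch, not the optimised pooling strategy proposed in the paper.

def minkowski_pool(frame_scores, p=2.0):
    n = len(frame_scores)
    return (sum(s ** p for s in frame_scores) / n) ** (1.0 / p)

scores = [0.8, 0.9, 0.7, 0.85]
print(round(minkowski_pool(scores, p=2.0), 4))  # 0.8159
```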

    Perceptual video quality assessment: the journey continues!

    Get PDF
    Perceptual Video Quality Assessment (VQA) is one of the most fundamental and challenging problems in the field of Video Engineering. Along with video compression, it has become one of the two dominant theoretical and algorithmic technologies in television streaming and social media. Over the last two decades, the volume of video traffic over the internet has grown exponentially, powered by rapid advancements in cloud services, faster video compression technologies, and increased access to high-speed, low-latency wireless internet connectivity. This has given rise to issues related to delivering extraordinary volumes of picture and video data to an increasingly sophisticated and demanding global audience. Consequently, developing algorithms to measure the quality of pictures and videos as perceived by humans has become increasingly critical, since these algorithms can be used to perceptually optimize trade-offs between quality and bandwidth consumption. VQA models have evolved from algorithms developed for generic 2D videos to specialized algorithms explicitly designed for on-demand video streaming, user-generated content (UGC), virtual and augmented reality (VR and AR), cloud gaming, high dynamic range (HDR), and high frame rate (HFR) scenarios. Along the way, we also describe the advances in algorithm design, beginning with traditional hand-crafted feature-based methods and finishing with the current deep-learning models powering accurate VQA algorithms. We also discuss the evolution of subjective video quality databases containing videos and human-annotated quality scores, which are the necessary tools to create, test, compare, and benchmark VQA algorithms. To finish, we discuss emerging trends in VQA algorithm design and general perspectives on the evolution of Video Quality Assessment in the foreseeable future.
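    The survey above notes that subjective databases are the tool used to benchmark VQA algorithms. The standard benchmark procedure correlates a model's predicted scores against human mean opinion scores, commonly with Spearman's rank correlation. A minimal self-contained implementation (assuming no tied scores, for simplicity):

```python
# Benchmarking sketch: Spearman rank correlation between a VQA model's
# predictions and subjective (human) quality scores. Values near 1 mean the
# model ranks videos the way viewers do. Ties are not handled here.

def spearman(pred, subj):
    def ranks(xs):
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0] * len(xs)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rp, rs = ranks(pred), ranks(subj)
    n = len(pred)
    d2 = sum((a - b) ** 2 for a, b in zip(rp, rs))
    return 1 - 6 * d2 / (n * (n * n - 1))

print(spearman([1.0, 2.0, 3.0], [10, 20, 30]))  # 1.0 (perfect rank agreement)
```

    In practice Pearson's linear correlation (after a nonlinear fit) and RMSE are reported alongside it, but rank correlation is the headline number in most VQA comparisons.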

    Perceptual Quality-of-Experience of Stereoscopic 3D Images and Videos

    Get PDF
    With the fast development of 3D acquisition, communication, processing and display technologies, automatic quality assessment of 3D images and videos has become increasingly important. Nevertheless, recent progress on 3D image quality assessment (IQA) and video quality assessment (VQA) remains limited. The purpose of this research is to investigate various aspects of human visual quality-of-experience (QoE) when viewing stereoscopic 3D images/videos and to develop objective quality assessment models that automatically predict the visual QoE of 3D images/videos. First, we create a new subjective 3D-IQA database with two features that are lacking in the literature, i.e., the inclusion of both 2D and 3D images, and the inclusion of mixed distortion types. We observe a strong distortion-type-dependent bias when using the direct average of 2D image quality to predict 3D image quality. We propose a binocular-rivalry-inspired multi-scale model to predict the quality of stereoscopic images, and the results show that the proposed model eliminates the prediction bias, leading to significantly improved quality predictions. Second, we carry out two subjective studies on depth perception of stereoscopic 3D images. The first one follows a traditional framework where subjects are asked to rate depth quality directly on distorted stereopairs. The second one uses a novel approach, where the stimuli are synthesized independently of the background image content and the subjects are asked to identify depth changes and label the polarities of depth. Our analysis shows that the second approach is much more effective at singling out the contributions of stereo cues in depth perception. We introduce the notion of a depth perception difficulty index (DPDI) and propose a novel computational model for DPDI prediction. The results show that the proposed model leads to highly promising DPDI prediction performance. 
    Third, we carry out subjective 3D-VQA experiments on two databases that contain various asymmetrically compressed stereoscopic 3D videos. We then compare different mixed-distortion asymmetric stereoscopic video coding schemes with symmetric coding methods and verify their potential coding gains. We propose a model to account for the prediction bias from using the direct average of 2D video quality to predict 3D video quality. The results show that the proposed model leads to significantly improved quality predictions and can help us predict the coding gain of mixed-distortion asymmetric video compression. Fourth, we investigate the problem of objective quality assessment of multi-view-plus-depth (MVD) images, with a main focus on the pre-depth-image-based-rendering (pre-DIBR) case. We find that existing IQA methods are difficult to employ as a guiding criterion in the optimization of MVD video coding and transmission systems when applied post-DIBR. We propose a novel pre-DIBR method based on information content weighting of both texture and depth images, which demonstrates competitive performance against state-of-the-art IQA models applied post-DIBR.
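    The final contribution above pools quality over texture and depth images with information content weighting. Purely as a sketch of the weighted-pooling idea (the weights and combination rule here are assumptions for illustration, not the thesis's actual formulation), local quality values can be averaged with weights reflecting how informative each region is:

```python
# Sketch of information-content-weighted pooling for MVD (pre-DIBR) quality:
# local quality values are averaged with weights standing in for the local
# information content of texture and depth regions. The equal 0.5/0.5 split
# between texture and depth is an illustrative assumption.

def weighted_quality(quality_map, info_weights):
    total_w = sum(info_weights)
    if total_w == 0:
        return sum(quality_map) / len(quality_map)  # fall back to plain mean
    return sum(q * w for q, w in zip(quality_map, info_weights)) / total_w

texture_q = [0.9, 0.6, 0.8]
texture_w = [1.0, 3.0, 2.0]   # higher weight = more informative region
depth_q   = [0.7, 0.5]
depth_w   = [1.0, 1.0]

combined = 0.5 * weighted_quality(texture_q, texture_w) \
         + 0.5 * weighted_quality(depth_q, depth_w)
print(round(combined, 4))
```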

    Quality Taxonomy for Scalable Algorithms of Free Viewpoint Video Objects

    Get PDF
    This thesis intends to make a contribution to the quality assessment of free viewpoint video objects within the context of video communication systems. The current work analyzes opportunities and obstacles, focusing on users' subjective quality of experience in this special case. Quality estimation of emerging free viewpoint video object technology in video communication has not yet been assessed and adequate approaches are missing. The challenges are to define factors that influence quality, to formulate an adequate measure of quality, and to link the quality of experience to the technical realization within an undefined and ever-changing technical realization process. There are two advantages of interlinking the quality of experience with the quality of service: first, it can benefit the technical realization process, in order to allow adaptability (e.g., based on systems used by the end users); second, it provides an opportunity to support scalability in a user-centered way, e.g., based on a cost or resource limitation. The thesis outlines the theoretical background and introduces a user-centered quality taxonomy in the form of an interlinking model. A description of the related project Skalalgo3d is included, which offered a framework for application. The outlined results consist of a systematic definition of factors that influence quality, including a research framework, and evaluation activities involving more than 350 participants, as well as quality features, defined by evaluations of free viewpoint video object quality, for video communication applications. 
    Based on these quality features, a model that links these results with the technical creation process, including a formalized quality measure, is presented. Finally, a flow chart and a slope field are proposed as a graphical approximation of a differentiable function of the potential relationships, intended as a starting point for further investigations.