
    A regression method for real-time video quality evaluation

    No-Reference (NR) metrics provide a mechanism to assess video quality in ever-growing wireless networks. Their low computational complexity and functional characteristics make them the primary choice for real-time content management and mobile streaming control. Unfortunately, common NR metrics suffer from poor accuracy, particularly in network-impaired video streams. In this work, we introduce a regression-based video quality metric that is simple enough for real-time computation on thin clients and comparable in accuracy to state-of-the-art Full-Reference (FR) metrics, which are functionally and computationally infeasible for real-time streaming. We benchmark our metric against the FR metric VQM (Video Quality Metric), finding a very strong correlation factor.
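
    As a rough illustration of the approach described above (not the authors' actual feature set or training data), the following Python sketch fits a regression model that maps a few hypothetical no-reference features to full-reference VQM scores and reports the resulting correlation:

```python
# A minimal sketch of a regression-based NR metric: learn a mapping from cheap
# no-reference features to a full-reference quality score (VQM plays the role of
# the training target). Feature names and values are hypothetical placeholders.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression

# Hypothetical per-sequence features: [blur, blockiness, bitrate in Mbit/s].
X_train = np.array([
    [0.12, 0.30, 2.5],
    [0.45, 0.80, 0.8],
    [0.20, 0.10, 4.0],
    [0.60, 0.95, 0.5],
    [0.33, 0.50, 1.5],
])
y_train = np.array([0.9, 3.2, 0.6, 4.1, 2.0])   # VQM scores (higher = worse)

model = LinearRegression().fit(X_train, y_train)

# Correlate predictions with VQM on held-out sequences.
X_test = np.array([[0.25, 0.40, 2.0], [0.55, 0.85, 0.7], [0.15, 0.20, 3.0]])
y_test = np.array([1.4, 3.8, 0.8])
r, _ = pearsonr(model.predict(X_test), y_test)
print(f"Pearson correlation against VQM: {r:.3f}")
```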

    Constructing a no-reference H.264/AVC bitstream-based video quality metric using genetic programming-based symbolic regression

    In order to ensure optimal quality of experience toward end users during video streaming, automatic video quality assessment has become an important field of interest to video service providers. Objective video quality metrics try to estimate perceived quality with high accuracy and in an automated manner. In traditional approaches, these metrics model the complex properties of the human visual system. More recently, however, it has been shown that machine learning approaches can also yield competitive results. In this paper, we present a novel no-reference bitstream-based objective video quality metric that is constructed by genetic programming-based symbolic regression. A key benefit of this approach is that it produces reliable white-box models that allow us to determine the importance of the parameters. Additionally, these models can provide human insight into the underlying principles of subjective video quality assessment. Numerical results show that perceived quality can be modeled with high accuracy using only parameters extracted from the received video bitstream.
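
    The following Python sketch shows the symbolic-regression idea, using the gplearn library as a stand-in for the authors' genetic-programming setup; the bitstream parameters and quality targets are synthetic placeholders, not the paper's data:

```python
# A minimal sketch of GP-based symbolic regression for a bitstream-based NR metric.
# gplearn is used here only as a convenient stand-in; the feature names and the
# synthetic MOS targets are assumptions for illustration.
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.uniform(20, 45, n),    # average quantization parameter (QP)
    rng.uniform(0.3, 6.0, n),  # bitrate in Mbit/s
    rng.uniform(0.0, 1.0, n),  # fraction of skipped macroblocks
    rng.uniform(0.0, 1.0, n),  # normalized motion magnitude
])
# Synthetic MOS: quality drops with QP, rises with bitrate (illustration only).
y = 5.0 - 0.08 * X[:, 0] + 0.3 * np.log1p(X[:, 1]) - 0.5 * X[:, 2] \
    + rng.normal(0, 0.1, n)

gp = SymbolicRegressor(
    population_size=1000, generations=20,
    function_set=('add', 'sub', 'mul', 'div', 'log'),
    parsimony_coefficient=0.01, random_state=0,
)
gp.fit(X, y)
print(gp._program)      # the evolved white-box formula
print(gp.score(X, y))   # R^2 on the training data
```

    The white-box character comes from the evolved expression itself: the printed formula exposes which bitstream parameters dominate the prediction.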

    Fully-automatic inverse tone mapping algorithm based on dynamic mid-level tone mapping

    High Dynamic Range (HDR) displays can show images with higher color contrast levels and peak luminosities than the common Low Dynamic Range (LDR) displays. However, most existing video content is recorded and/or graded in LDR format. To show LDR content on HDR displays, it needs to be up-scaled using a so-called inverse tone mapping algorithm. Several techniques for inverse tone mapping have been proposed in recent years, ranging from simple approaches based on global and local operators to more advanced algorithms such as neural networks. Drawbacks of existing inverse tone mapping techniques include the need for human intervention, the high computation time of the more advanced algorithms, limited peak brightness, and the failure to preserve artistic intent. In this paper, we propose a fully-automatic inverse tone mapping operator based on mid-level mapping that is capable of real-time video processing. Our proposed algorithm expands LDR images into HDR images with a peak brightness of over 1000 nits while preserving the artistic intentions inherent to the HDR domain. We assessed our results using the full-reference objective quality metrics HDR-VDP-2.2 and DRIM, and by carrying out a subjective pairwise comparison experiment. We compared our results with those obtained with the most recent methods in the literature. Experimental results demonstrate that our proposed method outperforms the current state of the art in simple inverse tone mapping methods, and its performance is similar to that of more complex and time-consuming advanced techniques.
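
    As a simplified illustration of inverse tone mapping in general (not the paper's dynamic mid-level operator), the sketch below expands an LDR frame to an assumed 1000-nit peak with a plain global power-law expansion:

```python
# A minimal sketch of a *global* inverse tone mapping step: expand an sRGB-encoded
# LDR frame to absolute luminance with a target peak of 1000 nits. This crude
# power-law expansion only illustrates the general idea; the paper's dynamic
# mid-level mapping is not reproduced here.
import numpy as np

def expand_ldr_to_hdr(ldr_srgb: np.ndarray, peak_nits: float = 1000.0,
                      gamma: float = 2.4) -> np.ndarray:
    """ldr_srgb: float array in [0, 1]; returns linear luminance in cd/m^2."""
    linear = np.clip(ldr_srgb, 0.0, 1.0) ** gamma   # crude sRGB -> linear
    return linear * peak_nits                        # scale to the HDR peak

# Example: a synthetic 2x2 grey-level frame.
frame = np.array([[0.1, 0.5], [0.8, 1.0]])
print(expand_ldr_to_hdr(frame))
```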

    Quantifying subjective quality evaluations for mobile video watching in a semi-living lab context

    This paper discusses results from an exploratory study in which Quality of Experience aspects related to mobile video watching were investigated in a semi-living lab setting. More specifically, we zoom in on usage patterns in a natural research context and on the subjective evaluation of high- and low-resolution movie trailers that are transferred to a mobile device using two video transmission protocols (i.e., the Real-time Transport Protocol and progressive download over HTTP). User feedback was collected by means of short questionnaires on the mobile device, combined with traditional pen-and-paper diaries. The subjective evaluations of general technical quality, perceived distortion, fluency of the video, and loading speed are studied, and the influence of the transmission protocol and video resolution on these evaluations is analyzed. Multinomial logistic regression yields a model to estimate the subjective evaluations of perceived distortion and loading speed based on objectively measured parameters of the video session.
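
    A minimal Python sketch of the modelling step, assuming hypothetical session parameters and synthetic ratings rather than the study's dataset, could look like this:

```python
# A minimal sketch of the multinomial-logistic-regression idea: predict an ordinal
# rating category (e.g., loading speed rated 1-5) from objectively measured
# session parameters. Feature names and data are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 300
X = np.column_stack([
    rng.uniform(0.2, 8.0, n),   # initial loading time in seconds
    rng.uniform(0.0, 0.1, n),   # packet loss ratio
    rng.choice([360, 720], n),  # video resolution (p)
    rng.choice([0, 1], n),      # protocol: 0 = RTP, 1 = progressive download
])
# Synthetic ratings driven mainly by loading time (illustration only).
y = np.clip(5 - X[:, 0].astype(int), 1, 5)

# With a multi-class target, scikit-learn's lbfgs solver fits a multinomial model.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([[1.0, 0.01, 720, 1]]))        # predicted rating category
print(clf.predict_proba([[6.0, 0.05, 360, 0]]))  # class probabilities
```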

    VNect: Real-time 3D Human Pose Estimation with a Single RGB Camera

    We present the first real-time method to capture the full global 3D skeletal pose of a human in a stable, temporally consistent manner using a single RGB camera. Our method combines a new convolutional neural network (CNN) based pose regressor with kinematic skeleton fitting. Our novel fully-convolutional pose formulation regresses 2D and 3D joint positions jointly in real time and does not require tightly cropped input frames. A real-time kinematic skeleton fitting method uses the CNN output to yield temporally stable 3D global pose reconstructions on the basis of a coherent kinematic skeleton. This makes our approach the first monocular RGB method usable in real-time applications such as 3D character control; thus far, the only monocular methods for such applications employed specialized RGB-D cameras. Our method's accuracy is quantitatively on par with the best offline 3D monocular RGB pose estimation methods. Our results are qualitatively comparable to, and sometimes better than, results from monocular RGB-D approaches, such as the Kinect. However, we show that our approach is more broadly applicable than RGB-D solutions, i.e., it works for outdoor scenes, community videos, and low-quality commodity RGB cameras.
    Comment: Accepted to SIGGRAPH 2017
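
    As a loose, simplified stand-in for the kinematic-fitting stage (not the authors' actual optimization over joint angles and 2D/3D evidence), the sketch below temporally smooths per-frame 3D joint predictions and re-imposes assumed fixed bone lengths:

```python
# A minimal sketch of post-processing for temporally stable pose output:
# exponentially smooth per-frame 3D joint predictions and enforce constant bone
# lengths. This is a crude proxy for VNect's kinematic skeleton fitting.
import numpy as np

# Hypothetical 4-joint chain with assumed bone lengths in metres.
BONES = {(0, 1): 0.25, (1, 2): 0.30, (2, 3): 0.28}  # parent -> child

def stabilize(prev: np.ndarray, raw: np.ndarray, alpha: float = 0.7) -> np.ndarray:
    """prev/raw: (J, 3) joint positions; returns a smoothed, length-corrected pose."""
    pose = alpha * prev + (1.0 - alpha) * raw        # temporal smoothing
    for (parent, child), length in BONES.items():    # enforce constant bone lengths
        d = pose[child] - pose[parent]
        d /= (np.linalg.norm(d) + 1e-8)
        pose[child] = pose[parent] + d * length
    return pose

# Example: smooth a noisy frame against the previous estimate.
prev = np.array([[0, 0.00, 0], [0, 0.25, 0], [0, 0.55, 0], [0, 0.83, 0]], float)
raw = prev + np.random.default_rng(2).normal(0, 0.02, prev.shape)
print(stabilize(prev, raw))
```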

    A Matlab-Based Tool for Video Quality Evaluation without Reference

    This paper deals with the design of a Matlab-based tool for measuring video quality without using a reference sequence. The main goals are described, and the tool and its features are presented. The paper begins with a description of existing pixel-based no-reference quality metrics. Then, a novel algorithm for simple PSNR estimation of H.264/AVC coded videos is presented as an alternative. The algorithm was designed and tested using a publicly available database of H.264/AVC coded videos. Cross-validation was used to confirm the consistency of the results.
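
    For context, a generic pixel-based no-reference measure of the kind the paper surveys (not its PSNR-estimation algorithm, and sketched in Python rather than Matlab) could estimate blockiness along the 8x8 coding grid like this:

```python
# A minimal sketch of a pixel-based no-reference blockiness measure: compare
# luminance gradients at the 8x8 block boundaries of an H.264/AVC-coded frame
# with gradients elsewhere. Generic illustration only.
import numpy as np

def blockiness(luma: np.ndarray, block: int = 8) -> float:
    """luma: 2-D luminance frame. Returns mean boundary gradient minus mean
    non-boundary gradient (larger = more visible blocking)."""
    dh = np.abs(np.diff(luma.astype(float), axis=1))  # horizontal gradients
    cols = np.arange(dh.shape[1])
    at_boundary = (cols + 1) % block == 0             # columns on the 8-px grid
    return dh[:, at_boundary].mean() - dh[:, ~at_boundary].mean()

# Example: a synthetic frame with artificial 8x8 block structure.
rng = np.random.default_rng(3)
frame = rng.integers(0, 256, (64, 64)).astype(float)
frame += np.kron(rng.integers(-10, 10, (8, 8)), np.ones((8, 8)))  # per-block DC offsets
print(f"blockiness score: {blockiness(frame):.2f}")
```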