Measuring quality of video of internet protocol television (IPTV)
141 p. The motivation for this thesis is the need to monitor the quality of experience of video delivered over an IPTV (Internet Protocol Television) network. This need arises from telecommunications operators' desire to provide a more satisfactory service to their customers and to achieve greater market penetration. These services can only succeed if quality of experience is guaranteed. IPTV networks are by nature susceptible to data packet losses that degrade the video quality received by the user. Factors contributing to packet loss include network congestion, inadequate network planning, and the failure of network equipment. The quality of experience of a video is affected by a number of factors, such as resolution, the absence of errors in the images, the quality of the television set, the user's prior expectations, and many other factors studied in this thesis
Adaptive deinterlacing of video sequences using motion data
In this work an efficient motion adaptive deinterlacing method with a considerable improvement in picture quality is proposed. A temporal deinterlacing method performs well on static images, while a spatial method performs better in dynamic parts. In the proposed deinterlacing method, a motion adaptive interpolator combines the results of a spatial method and a temporal method based on the motion activity level of the video sequence.
A high-performance, low-complexity algorithm for motion detection is introduced. This algorithm uses five consecutive interlaced video fields for motion detection and is able to capture a wide range of motion, from slow to fast. The algorithm benefits from a hierarchical structure: it starts by detecting motion in large partitions of a given field and, depending on the detected motion activity level for a partition, may be applied recursively to sub-blocks of the original partition. Two different low-pass filters are used during motion detection to increase the algorithm's accuracy. The result of motion detection is then used in the proposed motion adaptive interpolator.
The performance of the proposed deinterlacing algorithm is compared to previous methods in the literature. In experiments with several standard video sequences, the method proposed in this work shows excellent motion detection and deinterlacing performance
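The blending step described above can be sketched as follows. This is a minimal, hypothetical illustration of motion-adaptive interpolation for a single missing pixel, not the thesis's exact algorithm; the function name, threshold, and linear blend are assumptions.

```python
def deinterlace_pixel(above, below, prev_same, next_same, motion_thresh=10):
    """Motion-adaptive interpolation of one missing pixel (illustrative sketch).

    above/below: vertically adjacent pixels in the current field (spatial cue).
    prev_same/next_same: co-located pixels in the previous/next fields (temporal cue).
    """
    # Motion level: luminance change between temporally adjacent fields.
    motion = abs(next_same - prev_same)
    spatial = (above + below) / 2.0            # line averaging (better for motion)
    temporal = (prev_same + next_same) / 2.0   # field averaging (better for static)
    # Blend weight grows with motion activity, clamped to [0, 1].
    alpha = min(motion / motion_thresh, 1.0)
    return alpha * spatial + (1.0 - alpha) * temporal
```

With no temporal change the result equals the field average (static case); with large motion it falls back entirely to spatial line averaging.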
High definition systems in Japan
The successful implementation of a strategy to produce high-definition systems within the Japanese economy will favorably affect the fundamental competitiveness of Japan relative to the rest of the world. The development of an infrastructure necessary to support high-definition products and systems in that country involves major commitments of engineering resources, plants and equipment, educational programs and funding. The results of these efforts appear to affect virtually every aspect of the Japanese industrial complex. The results of assessments of the current progress of Japan toward the development of high-definition products and systems are presented. The assessments are based on the findings of a panel of U.S. experts made up of individuals from U.S. academia and industry, and derived from a study of the Japanese literature combined with visits to the primary relevant industrial laboratories and development agencies in Japan. Specific coverage includes an evaluation of progress in R&D for high-definition television (HDTV) displays that are evolving in Japan; high-definition standards and equipment development; Japanese intentions for the use of HDTV; economic evaluation of Japan's public policy initiatives in support of high-definition systems; management analysis of Japan's strategy of leverage with respect to high-definition products and systems
Evaluation of the color image and video processing chain and visual quality management for consumer systems
With the advent of novel digital display technologies, color processing is increasingly becoming a key aspect of consumer video applications. Today’s state-of-the-art displays require sophisticated color and image reproduction techniques in order to achieve larger screen sizes, higher luminance and higher resolution than ever before. However, from a color science perspective, there are clear opportunities for improvement in the color reproduction capabilities of various emerging and conventional display technologies. This research seeks to identify potential areas for improvement in color processing in a video processing chain. As part of this research, the various processes involved in a typical video processing chain in consumer video applications were reviewed. Several published color and contrast enhancement algorithms were evaluated, and a novel algorithm was developed to enhance color and contrast in images and videos in an effective and coordinated manner. Further, a psychophysical technique was developed and implemented for the visual evaluation of color image and consumer video quality. Based on the performance analysis and visual experiments involving various algorithms, guidelines were proposed for the development of an effective color and contrast enhancement method for image and video applications. It is hoped that the knowledge gained from this research will help build a better understanding of color processing and color quality management methods in consumer video
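A coordinated color-and-contrast adjustment of the kind evaluated above can be illustrated generically. The sketch below is not the thesis's algorithm (which is not reproduced here); it simply shows one common way to keep the two operations coordinated, by applying contrast to luma and saturation to chroma. The Rec. 601 luma weights are standard; everything else is an assumption.

```python
def enhance_pixel(r, g, b, contrast=1.1, saturation=1.2):
    """Coordinated contrast and saturation boost for one 8-bit RGB pixel
    (generic illustration only)."""
    def clamp(v):
        return max(0.0, min(255.0, v))
    y = 0.299 * r + 0.587 * g + 0.114 * b   # Rec. 601 luma
    y2 = 128 + contrast * (y - 128)         # contrast stretch about mid-grey
    # Scale each channel's chroma (distance from luma), then re-add new luma,
    # so the saturation change does not fight the contrast change.
    return tuple(clamp(y2 + saturation * (ch - y)) for ch in (r, g, b))
```

Neutral greys are left untouched (their chroma is zero), which is one simple criterion for calling the two enhancements "coordinated".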
Intelligent image cropping and scaling
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University, 2011. Nowadays, there exists a huge number of end devices with different screen properties for watching television content, which is either broadcast or transmitted over the internet. To allow the best viewing conditions on each of these devices, different image formats have to be provided by the broadcaster. Producing content for every single format is, however, not practicable for the broadcaster, as it is much too laborious and costly.
The most obvious solution for providing multiple image formats is to produce one high-resolution format and derive formats of lower resolution from it. One possibility is to simply scale video images to the resolution of the target image format. Two significant drawbacks are the loss of image detail through downscaling and possibly unused image areas due to letter- or pillarboxes. A preferable solution is first to find the contextually most important region in the high-resolution format and then crop this area with the aspect ratio of the target image format. On the other hand, defining the contextually most important region manually is very time-consuming, and applying that to live productions would be nearly impossible. Therefore, some approaches exist that define cropping areas automatically. To do so, they extract visual features, such as moving areas in a video, and define regions of interest (ROIs) based on those. The ROIs are finally used to define an enclosing cropping area. The extraction of features is done without any knowledge of the type of content. Hence, these approaches are not able to distinguish between features that might be important in a given context and those that are not.
The work presented within this thesis tackles the problem of extracting visual features based on prior knowledge about the content. Such knowledge is fed into the system in the form of metadata that is available from TV production environments. Based on the extracted features, ROIs are then defined and filtered depending on the analysed content. As a proof of concept, the application finally adapts SDTV (Standard Definition Television) sports productions automatically to image formats of lower resolution through intelligent cropping and scaling. If no content information is available, the system can still be applied to any type of content through a default mode. The presented approach is based on the principle of a plug-in system. Each plug-in represents a method for analysing video content information, either on a low level by extracting image features or on a higher level by processing extracted ROIs. The combination of plug-ins is determined by the incoming descriptive production metadata and hence can be adapted to each type of sport individually. The application has been comprehensively evaluated by comparing the results of the system against alternative cropping methods. This evaluation utilised videos that were manually cropped by a professional video editor, statically cropped videos, and simply scaled, non-cropped videos. In addition to purely subjective evaluations, the gaze positions of subjects watching sports videos have been measured and compared to the region-of-interest positions extracted by the system
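The "enclosing cropping area" step described above can be sketched as a small geometric computation: take the bounding box of all ROIs, expand it to the target aspect ratio, and clamp it to the frame. This is a hypothetical illustration, not the thesis's implementation; the function name and centring/clamping policy are assumptions.

```python
def crop_window(rois, src_w, src_h, target_ar):
    """Compute a crop enclosing all ROIs with the target aspect ratio.

    rois: list of (x, y, w, h) boxes; target_ar = width / height.
    Returns (x, y, w, h) of the crop, clamped to the source frame.
    """
    # Bounding box around every region of interest.
    x0 = min(x for x, y, w, h in rois)
    y0 = min(y for x, y, w, h in rois)
    x1 = max(x + w for x, y, w, h in rois)
    y1 = max(y + h for x, y, w, h in rois)
    bw, bh = x1 - x0, y1 - y0
    # Grow the shorter dimension until the box matches the target ratio.
    if bw / bh < target_ar:
        bw = bh * target_ar
    else:
        bh = bw / target_ar
    bw, bh = min(bw, src_w), min(bh, src_h)
    # Centre the crop on the ROI bounding box, keeping it inside the frame.
    cx = min(max((x0 + x1) / 2, bw / 2), src_w - bw / 2)
    cy = min(max((y0 + y1) / 2, bh / 2), src_h - bh / 2)
    return (cx - bw / 2, cy - bh / 2, bw, bh)
```

The resulting window can then be scaled to the target resolution without letter- or pillarboxing, since its aspect ratio already matches the target format.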
Standard-Compliant Low-Pass Temporal Filter to Reduce the Perceived Flicker Artifact
Flicker is a common video-compression-related temporal artifact. It occurs when co-located regions of consecutive frames are not encoded in a consistent manner, especially when Intra frames are periodically inserted at low and medium bit rates. In this paper we propose a flicker reduction method which aims to make the luminance changes between pixels in the same area of consecutive frames less noticeable. To this end, a temporal low-pass filter is proposed that smooths these luminance changes on a block-by-block basis. The proposed method has several advantages over other state-of-the-art methods. It has been designed to be compliant with conventional video coding standards, i.e., to generate a bitstream that is decodable by any standard decoder implementation. The filter strength is estimated on the fly to limit the PSNR loss and thus the appearance of a noticeable blurring effect. The proposed method has been implemented on the H.264/AVC reference software and thoroughly assessed in comparison with two state-of-the-art methods. The flicker reduction achieved by the proposed method (calculated using an objective measurement) is notably higher than that of the compared methods: 18.78% versus 5.32%, and 31.96% versus 8.34%, in exchange for slight losses in coding efficiency. In terms of subjective quality, the proposed method is perceived as more than twice as good as the compared methods. This work has been partially supported by the National Grant TEC2011-26807 of the Spanish Ministry of Science and Innovation.
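The block-by-block temporal low-pass filtering described above can be sketched as follows. This is an illustrative simplification, not the paper's exact algorithm: the strength parameter, the per-pixel cap, and the function name are assumptions standing in for the paper's on-the-fly strength estimation.

```python
def flicker_filter_block(cur_block, prev_block, strength=0.5, max_delta=8):
    """Temporal low-pass filter over a pair of co-located blocks.

    Blends each pixel of the current block toward the co-located pixel of
    the previous (reconstructed) frame, capping the shift so the filter
    cannot introduce a visible blurring effect / large PSNR loss.
    """
    out = []
    for c, p in zip(cur_block, prev_block):
        # Low-pass: move the pixel toward its temporal predecessor.
        delta = strength * (p - c)
        # Cap the change per pixel to limit blur.
        delta = max(-max_delta, min(max_delta, delta))
        out.append(c + delta)
    return out
```

Because the filtering happens before encoding, the output is an ordinary picture: the bitstream produced from it remains decodable by any standard-compliant decoder, which is the compliance property the paper emphasises.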
Complexity management of H.264/AVC video compression.
The H.264/AVC video coding standard offers significantly improved compression efficiency and flexibility compared to previous standards. However, the high computational complexity of H.264/AVC is a problem for codecs running on low-power handheld devices and general-purpose computers. This thesis presents new techniques to reduce, control and manage the computational complexity of an H.264/AVC codec. A new complexity reduction algorithm for H.264/AVC is developed. This algorithm predicts "skipped" macroblocks prior to motion estimation by estimating a Lagrangian rate-distortion cost function. Complexity savings are achieved by not processing the macroblocks that are predicted as "skipped". The Lagrange multiplier is adaptively modelled as a function of the quantisation parameter and video sequence statistics. Simulation results show that this algorithm achieves significant complexity savings with a negligible loss in rate-distortion performance. The complexity reduction algorithm is further developed to achieve complexity-scalable control of the encoding process. The Lagrangian cost estimation is extended to incorporate computational complexity. A target level of complexity is maintained by using a feedback algorithm to update the Lagrange multiplier associated with complexity. Results indicate that scalable complexity control of the encoding process can be achieved whilst maintaining near-optimal complexity-rate-distortion performance. A complexity management framework is proposed for maximising the perceptual quality of coded video in a real-time, processing-power-constrained environment. A real-time frame-level control algorithm and a per-frame complexity control algorithm are combined in order to manage the encoding process such that a high frame rate is maintained without significantly losing frame quality.
Subjective evaluations show that the managed complexity approach results in higher perceptual quality compared to a reference encoder that drops frames in computationally constrained situations. These novel algorithms are likely to be useful in implementing real-time H.264/AVC standard encoders in computationally constrained environments such as low-power mobile devices and general-purpose computers
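The Lagrangian skip prediction described above can be sketched as follows. This is a hedged illustration, not the thesis's exact model: the lambda formula below is the widely used H.264/AVC reference-software model tied to the quantisation parameter, whereas the thesis additionally adapts lambda to sequence statistics; the function names and the INTER-cost comparison are assumptions.

```python
def lagrangian_cost(distortion, rate_bits, qp, c=0.85):
    """Rate-distortion cost J = D + lambda * R, with the common
    H.264/AVC lambda model: lambda = c * 2**((QP - 12) / 3)."""
    lam = c * 2 ** ((qp - 12) / 3)
    return distortion + lam * rate_bits

def should_skip(skip_distortion, inter_distortion_est, inter_rate_est, qp):
    """Flag a macroblock as SKIP before motion estimation.

    SKIP costs almost no bits (rate ~ 1 here); when its estimated cost
    undercuts the estimated INTER cost, motion estimation can be bypassed,
    which is where the complexity saving comes from.
    """
    return (lagrangian_cost(skip_distortion, 1, qp)
            <= lagrangian_cost(inter_distortion_est, inter_rate_est, qp))
```

At higher QP, lambda grows, so the bit-hungry INTER mode is penalised more and SKIP is predicted more often, which matches the intuition that low-rate operating points skip more macroblocks.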