
    Quality Evaluations and Algorithmic Improvement of the Next Generation Video Coding - HEVC

    The increased processing power and screen sizes of mobile devices have made it desirable to watch multimedia presentations on the go. On such devices the data network bandwidth is usually the limiting factor, which imposes a tradeoff between quality and resolution of the presented content. A new video compression system called High Efficiency Video Coding (HEVC) is currently under development. The vision of HEVC is to create a compression system that achieves the same quality at half the bit rate compared to the existing H.264/AVC standard [2]. The goal of this thesis is to investigate how HEVC performs compared to H.264/AVC, using mobile platforms and sport content as the scenario. The subjective test was conducted on an Apple iPad. It indicated that HEVC has a clear gain in compression compared to H.264/AVC: on average, at a resolution of 640x368, HEVC achieved a good quality rating at approximately 550 kilobits per second, while H.264/AVC only almost reached this quality at 1000 kilobits per second. However, the subjective quality gain was shown to vary across content. The objective measurements showed an overall reduction in bit rate of 32% for the luma component, although the reduction was highly variable across content and resolution. A high correlation between the subjective and objective measurements was found, indicating an almost linear relationship between the reported subjective and objective results. In addition, a proposed deblocking filter was implemented. The filter applies a new filter function to the luma samples and performs a line-based filtering decision. On average the reduction in bit rate was reported to be 0.4%, with a maximum reduction of 0.8% for the luma component. The decoding time relative to the second version of the HEVC test model was reported to be 1.5% higher, most likely due to the line-based filtering decision. The general impression of HEVC is that it has the ability to reach, and perhaps even surpass, the stated vision when finalized.
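
    As a purely illustrative aside (not the evaluation code used in the thesis), the following Python sketch shows how a correlation between subjective scores and an objective metric such as PSNR could be computed; the MOS and PSNR values below are hypothetical.

        import numpy as np

        # Hypothetical data: mean opinion scores (subjective) and PSNR in dB
        # (objective) for the same encoded sequences; values are illustrative only.
        mos  = np.array([1.8, 2.5, 3.1, 3.7, 4.2, 4.6])
        psnr = np.array([29.0, 31.5, 33.2, 35.0, 36.8, 38.1])

        # Pearson correlation coefficient; a value close to 1 would be consistent
        # with an almost linear relationship between subjective and objective results.
        r = np.corrcoef(mos, psnr)[0, 1]
        print(f"Pearson correlation: {r:.3f}")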

    Video Content-Based QoE Prediction for HEVC Encoded Videos Delivered over IP Networks

    The recently released High Efficiency Video Coding (HEVC) standard, which halves the transmission bandwidth requirement of encoded video for almost the same quality when compared to H.264/AVC, and the availability of increased network bandwidth (e.g. from 2 Mbps for 3G networks to almost 100 Mbps for 4G/LTE) have led to the proliferation of video streaming services. Based on these major innovations, the prevalence and diversity of video applications are set to increase over the coming years. However, the popularity and success of current and future video applications will depend on the perceived quality of experience (QoE) of end users. How to measure or predict the QoE of delivered services therefore becomes an important and inevitable task for both service and network providers. Video quality can be measured either subjectively or objectively. Subjective quality measurement is the most reliable method of determining the quality of multimedia applications because of its direct link to users’ experience; however, this approach is time consuming and expensive, hence the need for an objective method that can produce results comparable with those of subjective testing. In general, video quality is impacted by impairments introduced by the encoder and the transmission network. However, videos encoded and transmitted over an error-prone network can have different quality measurements even under the same encoder settings and network quality of service (NQoS). This indicates that, in addition to encoder settings and network impairments, there may be other key parameters that impact video quality. In this project, it is hypothesised that video content type is one of these key parameters. Based on this assertion, parameters related to video content type are extracted and used to develop a single metric that quantifies the content type of different video sequences. The proposed content type metric is then used together with encoding parameter settings and NQoS to develop content-based video quality models that estimate the quality of different video sequences delivered over IP-based networks. This project led to the following main contributions: (1) a new metric for quantifying video content type based on spatiotemporal features extracted from the encoded bitstream; (2) the development of a novel subjective test approach for video streaming services; (3) new content-based video quality prediction models for predicting the QoE of video sequences delivered over IP-based networks. The models have been evaluated using both subjective and objective methods.
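
    To make the modelling idea concrete, the sketch below fits a toy linear quality model that predicts MOS from a content-type metric, the encoding bit rate and the packet loss rate. It is a minimal illustration under made-up data, not the models developed in this project, and all variable names are assumptions.

        import numpy as np

        # Illustrative training data: each row is (content-type metric, bit rate in
        # kbps, packet loss rate in %) for one test sequence; values are made up.
        X = np.array([
            [0.2,  512, 0.0],
            [0.2,  512, 1.0],
            [0.6, 1024, 0.0],
            [0.6, 1024, 2.0],
            [0.9, 2048, 0.5],
            [0.9, 2048, 3.0],
        ])
        mos = np.array([3.8, 3.1, 4.2, 2.9, 4.5, 2.4])  # subjective scores

        # Add an intercept column and fit MOS = w0 + w1*content + w2*bitrate + w3*loss.
        A = np.hstack([np.ones((X.shape[0], 1)), X])
        w, *_ = np.linalg.lstsq(A, mos, rcond=None)

        # Predict quality for a new sequence (hypothetical feature values).
        new = np.array([1.0, 0.5, 768, 1.5])
        print("Predicted MOS:", float(new @ w))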

    Proceedings of the Space Shuttle Integrated Electronics Conference, volume 3

    Proceedings of the Space Shuttle Integrated Electronics Conference, with emphasis on data systems design - Vol. 3

    How To Touch a Running System

    The increasing importance of distributed and decentralized software architectures draws more and more attention to adaptive software. Obtaining adaptiveness, however, is a difficult task, as the software design needs to foresee and cope with a variety of situations. Using reconfiguration of components facilitates this task, as the adaptivity is handled on the architecture level instead of directly in the code. This results in a separation of concerns: the appropriate reconfiguration can be devised on a coarse level, while the implementation of the components can remain largely unaware of reconfiguration scenarios. We study reconfiguration in component frameworks based on formal theory. We first discuss programming with components, exemplified by the development of the cmc model checker. This highly efficient model checker is made of C++ components and serves as an example of component-based software development practice in general, while also providing insights into the principles of adaptivity. However, its component model focuses on high performance and is not geared towards using the structuring principle of components for controlled reconfiguration. We therefore complement this highly optimized model with a message-passing-based component model that takes reconfigurability as its central principle. Supporting reconfiguration in a framework is about relieving the programmer of caring about the peculiarities as much as possible. We utilize the formal description of the component model to provide a reconfiguration algorithm that retains as much flexibility as possible while avoiding most problems that arise due to concurrency. This algorithm is embedded in a general four-stage adaptivity model inspired by physical control loops. The reconfiguration is devised to work with stateful components, retaining their data and unprocessed messages. Reconfiguration plans, which are given a formal semantics, form the input of the reconfiguration algorithm. We show that the algorithm achieves perceived atomicity of the reconfiguration process for an important class of plans, i.e., the whole process of reconfiguration is perceived as one atomic step, while minimizing the blocking of components. We illustrate the applicability of our approach to reconfiguration with several examples such as fault tolerance and automated resource control.
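
    A minimal sketch of the execute step of such a control-loop-driven reconfiguration is shown below. It is an assumption-laden illustration in Python (the thesis itself works with C++ components and a message-passing component model), showing only how a stateful component could be swapped while retaining its data and unprocessed messages and being blocked only briefly; all names here are hypothetical.

        from collections import deque
        import threading

        class Component:
            """A minimal stateful, message-processing component (illustrative only)."""
            def __init__(self, state=None):
                self.state = state or {}
                self.inbox = deque()          # unprocessed messages
                self.lock = threading.Lock()  # held only during reconfiguration

            def handle(self, msg):
                self.state["count"] = self.state.get("count", 0) + 1

        def reconfigure(old: Component, new_cls) -> Component:
            """Replace `old` with an instance of `new_cls`, retaining state and
            unprocessed messages. The old component is blocked only for the
            duration of the swap, so the change appears as one atomic step."""
            with old.lock:
                new = new_cls(state=old.state)   # carry over component state
                new.inbox.extend(old.inbox)      # carry over unprocessed messages
                old.inbox.clear()
                return new

        class ImprovedComponent(Component):
            def handle(self, msg):
                self.state["count"] = self.state.get("count", 0) + 2

        # Usage sketch: the monitor/analyse/plan stages of the control loop would
        # decide when to trigger this execute step.
        c = Component()
        c.inbox.extend(["m1", "m2"])
        c = reconfigure(c, ImprovedComponent)
        print(c.state, list(c.inbox))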

    Data analysis and machine learning approaches for time series pre- and post- processing pipelines

    In the industrial domain, time series are usually generated continuously by sensors that constantly capture and monitor the operation of machines in real time. It is therefore important that cleaning algorithms support near-real-time operation. Moreover, as the data evolve, the cleaning strategy must change adaptively and incrementally, to avoid having to restart the cleaning process from scratch each time. The goal of this thesis is to examine the possibility of applying machine learning workflows to the data preprocessing stages. To this end, this work proposes methods capable of selecting optimal preprocessing strategies, trained on the available historical data by minimizing empirical loss functions. Specifically, this thesis studies the processes of time series compression, variable joining, observation imputation and surrogate model generation. In each of them, the optimal selection and combination of multiple strategies is pursued. This approach is defined as a function of the characteristics of the data and of the system properties and constraints defined by the user.
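
    As a small illustration of data-driven strategy selection (a sketch under assumed data, not the methods proposed in the thesis), the following Python code chooses between two imputation strategies by measuring their empirical error on artificially masked values of a historical series.

        import numpy as np

        rng = np.random.default_rng(0)
        history = np.sin(np.linspace(0, 8 * np.pi, 500)) + 0.1 * rng.normal(size=500)

        # Artificially mask 10% of the historical observations to evaluate strategies.
        mask = rng.random(500) < 0.1
        observed = history.copy()
        observed[mask] = np.nan

        def impute_mean(x):
            out = x.copy()
            out[np.isnan(out)] = np.nanmean(out)
            return out

        def impute_linear(x):
            out = x.copy()
            idx = np.arange(len(out))
            nans = np.isnan(out)
            out[nans] = np.interp(idx[nans], idx[~nans], out[~nans])
            return out

        # Empirical loss (MSE on the masked positions) drives the strategy selection.
        strategies = {"mean": impute_mean, "linear": impute_linear}
        losses = {name: np.mean((f(observed)[mask] - history[mask]) ** 2)
                  for name, f in strategies.items()}
        best = min(losses, key=losses.get)
        print(losses, "-> selected:", best)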

    Steganography-based secret and reliable communications : improving steganographic capacity and imperceptibility

    Unlike encryption, steganography hides the very existence of secret information rather than only hiding its meaning. Image-based steganography is the most commonly used system, since digital images are widely used over the Internet and the Web. However, the capacity is mostly limited and restricted by the size of the cover images, and there is a tradeoff between steganographic capacity and stego image quality. Increasing steganographic capacity and enhancing stego image quality therefore remain challenges, and this is exactly the main aim of our research. Related to this, we also investigate hiding secret information in communication protocols, namely Simple Object Access Protocol (SOAP) messages, rather than in conventional digital files. To obtain a high steganographic capacity, two novel steganography methods were proposed. The first method was based on using 16x16 non-overlapping blocks and a corresponding quantisation table for Joint Photographic Experts Group (JPEG) compression instead of 8x8. The quality of the JPEG stego images was then enhanced by using optimised quantisation tables instead of the default tables. The second, hybrid method was based on using optimised quantisation tables and two hiding techniques: JSteg together with our first proposed method. To increase the steganographic capacity further, the impact of hiding data within the image chrominance was investigated and explained. Since the peak signal-to-noise ratio (PSNR) is extensively used as a quality measure of stego images, the reliability of PSNR for stego images was also evaluated in the work described in this thesis. Finally, to eliminate any detectable traces that traditional steganography may leave in stego files, a novel and undetectable steganography method based on SOAP messages was proposed. All proposed methods have been empirically validated to indicate their utility and value. The results revealed that our methods and suggestions improved the main aspects of image steganography. Nevertheless, PSNR was found not to be a reliable quality evaluation measure for stego images. On the other hand, information hiding in SOAP messages represents a distinctive way of achieving undetectable and secret communication.
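
    Since the reliability of PSNR as a stego-image quality measure is questioned above, the sketch below shows the standard PSNR computation between a cover image and a stego image; the images here are synthetic stand-ins, and this is not the evaluation code from the thesis.

        import numpy as np

        def psnr(cover: np.ndarray, stego: np.ndarray, max_val: float = 255.0) -> float:
            """Peak signal-to-noise ratio in dB between two 8-bit images."""
            mse = np.mean((cover.astype(np.float64) - stego.astype(np.float64)) ** 2)
            if mse == 0:
                return float("inf")
            return 10.0 * np.log10(max_val ** 2 / mse)

        # Synthetic stand-ins for a cover image and a stego image with LSB changes.
        rng = np.random.default_rng(1)
        cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
        stego = cover ^ rng.integers(0, 2, size=(64, 64), dtype=np.uint8)  # flip some LSBs
        print(f"PSNR: {psnr(cover, stego):.2f} dB")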
