
    End to end Multi-Objective Optimisation of H.264 and HEVC Codecs

    All multimedia devices now incorporate video CODECs that comply with international video coding standards such as H.264/MPEG-4 AVC and the new High Efficiency Video Coding (HEVC) standard, otherwise known as H.265. Although the standard CODECs have been designed to include algorithms with optimal efficiency, a large number of coding parameters can be used to fine-tune their operation within known constraints such as available computational power, bandwidth and consumer QoS requirements. With so many parameters involved, determining which of them play a significant role in providing optimal quality of service within given constraints is a challenge in itself. A further important question is how to select the values of the significant parameters so that the CODEC performs optimally under the given constraints. This thesis proposes a framework that uses machine learning algorithms to model the performance of a video CODEC based on the significant coding parameters. Means of modelling both encoder and decoder performance are proposed. We define objective functions that model the performance-related properties of a CODEC, i.e., video quality, bit-rate and CPU time, and show that these objective functions can be practically utilised in video encoder/decoder design, in particular in performance optimisation within given operational and practical constraints. A multi-objective optimisation framework based on genetic algorithms is thus proposed to optimise the performance of a video CODEC. The framework is designed to jointly minimise the CPU time and bit-rate and to maximise the quality of the compressed video stream. The thesis presents the use of this framework in the performance modelling and multi-objective optimisation of the most widely used video coding standard in practice at present, H.264, and the latest video coding standard, H.265/HEVC. When a communication network is used to transmit video, performance-related parameters of the communication channel impact the end-to-end performance of the video CODEC. Network delays and packet loss degrade the quality of the video received at the decoder, i.e., even if a video CODEC is optimally configured, network conditions can make the experience sub-optimal. The thesis therefore proposes the design, integration and testing of a novel approach to simulating a wired network that uses the UDP protocol for the transmission of video data. This network is subsequently used to simulate the impact of packet loss and network delays on optimally coded video, based on the framework previously proposed for the modelling and optimisation of video CODECs. The quality of received video under different levels of packet loss and network delay is simulated, and conclusions are drawn on the impact on transmitted video based on its content and features
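    To make the optimisation idea concrete, the following is a minimal sketch of a genetic, Pareto-based search over codec parameters. The parameter names (qp, search_range, ref_frames) and the three surrogate objective models are illustrative assumptions, not the thesis's learned models, which would be fitted from real encoder runs.

```python
# A toy multi-objective genetic search over hypothetical H.264-style parameters.
# The objective models are placeholders standing in for the thesis's
# machine-learned models of bit-rate, CPU time and quality.
import random

PARAM_SPACE = {
    "qp": range(18, 42),           # quantisation parameter
    "search_range": range(8, 65),  # motion-estimation search range
    "ref_frames": range(1, 5),     # number of reference frames
}

def random_individual():
    return {k: random.choice(list(v)) for k, v in PARAM_SPACE.items()}

def objectives(p):
    # Placeholder surrogate models (all objectives to be minimised).
    bitrate = 5000.0 / p["qp"]                 # lower QP -> more bits
    cpu = p["search_range"] * p["ref_frames"]  # wider search -> slower
    distortion = p["qp"] * 1.5                 # lower QP -> better quality
    return (bitrate, cpu, distortion)

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(pop):
    scored = [(p, objectives(p)) for p in pop]
    return [p for p, f in scored
            if not any(dominates(g, f) for _, g in scored if g != f)]

def mutate(p):
    child = dict(p)
    k = random.choice(list(PARAM_SPACE))
    child[k] = random.choice(list(PARAM_SPACE[k]))
    return child

pop = [random_individual() for _ in range(40)]
for _ in range(50):  # evolve for 50 generations
    front = pareto_front(pop)
    pop = front + [mutate(random.choice(front)) for _ in range(40 - len(front))]
print(pareto_front(pop))  # non-dominated parameter settings
```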

    Enhanced Statistical Modelling For Variable Bit Rate Video Traffic Generated From Scalable Video Codec

    Designing an effective, high-performance network requires accurate characterisation and modelling of the network traffic. This work involves the analysis and modelling of Variable Bit Rate (VBR) video traffic, which underpins protocol design and efficient network utilisation for video transmission. In this context, an Enhanced Discrete Autoregressive (EDAR(1)) model is proposed for VBR video traffic encoded with a Scalable Video Codec (SVC). The EDAR(1) model is able to generate video sequences that closely match the actual video traffic. The model is validated using statistical tests that compare simulated and original traces. The validation uses a graphical measure (Quantile-Quantile plot) and statistical measures (Kolmogorov-Smirnov, Sum of Squared Errors and Relative Efficiency), as well as cross-validation
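    For intuition, the following is a minimal sketch of a plain DAR(1)-style frame-size generator, the model family that the proposed EDAR(1) enhances. The toy marginal histogram and the correlation parameter rho are illustrative assumptions, not values from the thesis.

```python
# A DAR(1)-style trace generator: keep the previous frame size with
# probability rho, otherwise draw afresh from the empirical marginal.
import random

def dar1_trace(marginal_sizes, rho, n):
    """Generate n frame sizes with lag-1 correlation controlled by rho."""
    trace = [random.choice(marginal_sizes)]
    for _ in range(n - 1):
        if random.random() < rho:
            trace.append(trace[-1])                    # repeat previous size
        else:
            trace.append(random.choice(marginal_sizes))  # fresh marginal draw
    return trace

# Example: a toy empirical marginal (bytes per frame) and lag-1 correlation 0.8.
sizes = [1200, 1500, 2300, 4100, 5200, 8000]
print(dar1_trace(sizes, rho=0.8, n=20))
```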

    Computational inference and control of quality in multimedia services

    Quality is the degree of excellence we expect of a service or a product; it is also one of the key factors that determine its value. For multimedia services, understanding the experienced quality means understanding how the delivered fidelity, precision and reliability correspond to the users' expectations. Yet the quality of multimedia services is inextricably linked to the underlying technology. It is developments in video recording, compression, transport and display technologies that enable high-quality multimedia services to become ubiquitous. The constant evolution of these technologies delivers a steady increase in performance, but also a growing level of complexity. As new technologies stack on top of each other, the interactions between them and their components become more intricate and obscure. In this environment, optimising the delivered quality of multimedia services becomes increasingly challenging. The factors that affect the experienced quality, or Quality of Experience (QoE), tend to have complex non-linear relationships. The subjectively perceived QoE is hard to measure directly and continuously evolves with the user's expectations. Faced with the difficulty of designing an expert system for QoE management that relies on painstaking measurements and intricate heuristics, we turn to an approach based on learning, or inference. The set of solutions presented in this work relies on computational intelligence techniques that perform inference over the large set of signals coming from the system to deliver QoE models based on user feedback. We furthermore present solutions for inferring optimised control in systems with no guarantees of resource availability. This approach offers the opportunity to be more accurate in assessing the perceived quality, to incorporate more factors, and to adapt as technology and user expectations evolve. In a similar fashion, the inferred control strategies can uncover more intricate patterns coming from the sensors and therefore implement farther-reaching decisions. As in natural systems, this continuous adaptation and learning makes these systems more robust to perturbations in the environment, gives them longer-lasting accuracy and makes them more efficient in dealing with increased complexity. Overcoming this increasing complexity and diversity is crucial for addressing the challenges of future multimedia systems. Through experiments and simulations, this work demonstrates that adopting a learning approach can improve subjective and objective QoE estimation and enable the implementation of efficient and scalable QoE management as well as efficient control mechanisms
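    The core idea, learning a QoE model from system signals and user feedback rather than hand-crafting heuristics, can be sketched as a regression problem. The feature set, the model choice and the synthetic ratings below are illustrative assumptions, not the thesis's data or method.

```python
# A minimal learning-based QoE sketch: fit a regressor mapping system-level
# signals to user ratings (MOS). All data here is synthetic.
from sklearn.ensemble import RandomForestRegressor
import numpy as np

rng = np.random.default_rng(0)
n = 500
bitrate = rng.uniform(0.3, 8.0, n)       # Mbit/s
packet_loss = rng.uniform(0.0, 5.0, n)   # percent
stall_time = rng.uniform(0.0, 10.0, n)   # seconds of rebuffering

# Synthetic, non-linear "ground truth" MOS standing in for real user feedback.
mos = np.clip(1 + 2.5 * np.log1p(bitrate) - 0.4 * packet_loss
              - 0.3 * stall_time + rng.normal(0, 0.2, n), 1, 5)

X = np.column_stack([bitrate, packet_loss, stall_time])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, mos)

# Estimate QoE for a 2 Mbit/s stream with 1% loss and 2 s of stalling.
print(model.predict([[2.0, 1.0, 2.0]]))
```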

    Individual camera device identification from JPEG images

    The goal of this paper is to investigate the problem of source camera device identification for natural images in JPEG format. We propose an improved signal-dependent noise model describing the statistical distribution of pixels in a JPEG image. The noise model relies on heteroscedastic noise parameters that relate the variance of a pixel's noise to its expectation, and these parameters are treated as unique fingerprints. It is also shown that the non-linear response of pixels can be captured by characterising the linear relation between the heteroscedastic parameters, which are then used to identify the source camera device. The identification problem is cast within the framework of hypothesis testing theory. In an ideal context where all model parameters are perfectly known, the Likelihood Ratio Test (LRT) is presented and its performance is established theoretically; the statistical performance of the LRT serves as an upper bound on the detection power. In a practical identification setting, when the nuisance parameters are unknown, two generalised LRTs based on estimating those parameters are established. Numerical results on simulated data and real natural images highlight the relevance of the proposed approach. While these results provide a first positive proof of concept, the method still needs to be extended for a meaningful comparison with PRNU-based approaches, which benefit from years of experience
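    The fingerprint idea can be illustrated by estimating the linear mean-variance relation from flat image regions. The block size, the synthetic scene and the fitting procedure below are illustrative assumptions; the paper's full LRT machinery is not reproduced here.

```python
# A sketch of the heteroscedastic-noise fingerprint: fit var ~= a * mean + b
# over image blocks and compare cameras by their (a, b) parameters.
import numpy as np

def noise_params(img, block=8):
    """Fit var = a * mean + b over non-overlapping blocks of a grayscale image."""
    h, w = img.shape
    means, variances = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = img[y:y + block, x:x + block]
            means.append(patch.mean())
            variances.append(patch.var())
    a, b = np.polyfit(means, variances, 1)  # least-squares line
    return a, b

# Synthetic camera: shot-like noise whose variance grows with intensity,
# applied to a smooth ramp so block variance is dominated by noise.
rng = np.random.default_rng(1)
scene = np.tile(np.linspace(20.0, 200.0, 256), (256, 1))
shot = scene + rng.normal(0, np.sqrt(scene))  # heteroscedastic noise
print(noise_params(shot))                     # (a, b) fingerprint estimate
```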

    Methods and Tools for Image and Video Quality Assessment

    The doctoral thesis is focused on methods and tools for image quality assessment in video sequences, a very topical theme undergoing rapid evolution, particularly in connection with digital video signal processing. Although a variety of metrics for objective (automated) video sequence quality measurement has been developed recently, these methods are mostly based on a comparison of the processed (damaged, e.g. by compression) and original video sequences. There are very few methods operating without a reference, i.e. only on the processed video material. Moreover, such methods usually analyse signal values (typically luminance) in picture elements of the decoded signal, which is hardly applicable to modern compression algorithms such as H.264/AVC, as they use sophisticated techniques to remove compression artifacts.
The thesis first gives a brief overview of the available metrics for objective quality measurement of compressed video sequences, emphasising the different approaches of full-reference and no-reference methods. Based on an analysis of possible approaches to measuring the quality of video sequences compressed with modern compression algorithms, the thesis describes the design of a new quality metric for video sequences compressed with the H.264/AVC algorithm. The new method is based on monitoring several parameters that are present in the transport stream of the compressed video and are directly related to the encoding process. The impact of these bitstream parameters on video quality is considered first. Subsequently, an algorithm is designed that employs an artificial neural network to estimate the peak signal-to-noise ratio (PSNR) of the compressed video sequences; a full-reference metric is thus replaced by a no-reference metric. Several neural network configurations are verified, ranging from the simplest to three-layer feedforward networks. Two sets of video sequences are constructed to train the networks and to analyse their performance and the fidelity of the estimated PSNR values. The sequences are compressed using the H.264/AVC algorithm with variable encoder configurations. The final part of the thesis analyses the behaviour of the newly designed algorithm when the properties of the processed video are changed (resolution, cut) or the encoder configuration is altered (format of the group of pictures coded together). The analysis is done on video sequences with resolutions up to full HD (1920 × 1080 pixels, progressive)
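    The no-reference approach can be sketched as a small regression network over bitstream-level features. The feature names (qp, bits, mv_energy), the network shape and the synthetic training data below are illustrative assumptions, not the thesis's H.264/AVC streams or trained configuration.

```python
# A sketch of no-reference PSNR estimation: a small feedforward network maps
# transport-stream parameters to PSNR. All data here is synthetic.
from sklearn.neural_network import MLPRegressor
import numpy as np

rng = np.random.default_rng(2)
n = 1000
qp = rng.uniform(20, 45, n)           # average quantisation parameter
bits = rng.uniform(0.5, 12.0, n)      # Mbit/s
mv_energy = rng.uniform(0.0, 1.0, n)  # normalised motion-vector energy

# Synthetic "ground truth" PSNR: drops with QP and motion, rises with bit-rate.
psnr = (55 - 0.6 * qp + 1.5 * np.log1p(bits) - 4 * mv_energy
        + rng.normal(0, 0.5, n))

X = np.column_stack([qp, bits, mv_energy])
net = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000,
                   random_state=0).fit(X, psnr)
print(net.predict([[28.0, 4.0, 0.3]]))  # estimated PSNR in dB
```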

    Error resilient packet switched H.264 video telephony over third generation networks.

    Real-time video communication over wireless networks is a challenging problem because wireless channels suffer from fading, additive noise and interference, which translate into packet loss and delay. Since modern video encoders deliver video packets with decoding dependencies, packet loss and delay can significantly degrade the video quality at the receiver. Many error resilience mechanisms have been proposed to combat packet loss in wireless networks, but only a few were specifically designed for packet switched video telephony over Third Generation (3G) networks. The first part of the thesis presents an error resilience technique for packet switched video telephony that combines application layer Forward Error Correction (FEC) with rateless codes, Reference Picture Selection (RPS) and cross layer optimization. Rateless codes have lower encoding and decoding computational complexity than traditional error correcting codes, so they can be used on complexity-constrained hand-held devices. Also, their redundancy does not need to be fixed in advance, and any number of encoded symbols can be generated on the fly. Reference picture selection is used to limit the effect of spatio-temporal error propagation, which results in better video quality. Cross layer optimization is used to minimize the data loss at the application layer when data is lost at the data link layer. Experimental results on a High Speed Packet Access (HSPA) network simulator for H.264 compressed standard video sequences show that the proposed technique achieves significant Peak Signal to Noise Ratio (PSNR) and Percentage Degraded Video Duration (PDVD) improvements over a state-of-the-art error resilience technique known as Interactive Error Control (IEC), which is a combination of Error Tracking and feedback-based Reference Picture Selection. The improvement is obtained at the cost of higher end-to-end delay. The proposed technique is then improved by making the FEC (rateless code) redundancy channel adaptive. Automatic Repeat Request (ARQ) is used to adjust the redundancy of the rateless codes according to the channel conditions. Experimental results show that the channel adaptive scheme achieves significant PSNR and PDVD improvements over the static scheme for a simulated Long Term Evolution (LTE) network. In the third part of the thesis, the performance of the previous two schemes is improved by making the transmitter predict when rateless decoding will fail; in this case, reference picture selection is invoked early and transmission of encoded symbols for that source block is aborted. Simulations for an LTE network show that this results in video quality improvement and bandwidth savings. In the last part of the thesis, the performance of the adaptive technique is improved by exploiting the history of the wireless channel. In a Rayleigh fading wireless channel, the RLC-PDU losses are correlated under certain conditions. This correlation is exploited to adjust the redundancy of the rateless code, resulting in a higher rateless decoding success rate and higher video quality. Simulations for an LTE network show that the improvement was significant when the packet loss rate in the two wireless links was 10%. To facilitate the implementation of the proposed error resilience techniques in practical scenarios, RTP/UDP/IP level packetization schemes are also proposed for each error resilience technique.
Compared to existing work, the proposed error resilience techniques provide better video quality. Also, more emphasis is given to implementation issues in 3G networks
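    The channel-adaptive redundancy idea can be sketched as choosing how many rateless-encoded symbols to send so that enough survive the observed loss rate. The 5% decoding overhead, the target success probability and the i.i.d. loss assumption below are illustrative, not the thesis's exact scheme (which also exploits loss correlation).

```python
# A sketch of channel-adaptive rateless redundancy: find the smallest number
# of encoded symbols n so that, at loss rate p, at least k*(1+overhead)
# symbols are received with probability >= target (losses assumed i.i.d.).
from math import comb

def symbols_needed(k, loss_rate, target=0.99, overhead=0.05):
    """Smallest n with P(received >= k*(1+overhead)) >= target."""
    need = int(k * (1 + overhead)) + 1  # symbols a rateless decoder needs
    n = need
    while True:
        p_ok = sum(comb(n, r) * (1 - loss_rate) ** r * loss_rate ** (n - r)
                   for r in range(need, n + 1))
        if p_ok >= target:
            return n
        n += 1

# Example: a 100-symbol source block on links with 1% and 10% RLC-PDU loss.
print(symbols_needed(100, 0.01), symbols_needed(100, 0.10))
```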