
    Error Correction and Concealment of Block-Based, Motion-Compensated Temporal Prediction, Transform Coded Video

    Error Correction and Concealment of Block-Based, Motion-Compensated Temporal Prediction, Transform Coded Video. David L. Robie. 133 pages. Directed by Dr. Russell M. Mersereau. The use of the Internet and wireless networks to bring multimedia to the consumer continues to expand. The transmission of these products is always subject to corruption due to errors such as bit errors or lost and ill-timed packets; however, in many cases, such as real-time video transmission, automatic repeat request (ARQ) is not practical. Receivers must therefore be capable of recovering from corrupted data. Errors can be mitigated using forward error correction in the encoder or error concealment techniques in the decoder. This thesis investigates the use of forward error correction (FEC) techniques in the encoder and error concealment in the decoder in block-based, motion-compensated, temporal prediction, transform codecs. It shows improvement over standard FEC applications and improvements in error concealment relative to the Moving Picture Experts Group (MPEG) standard. To this end, this dissertation describes the following contributions and proofs of concept in the area of error concealment and correction in block-based video transmission: (1) a temporal error concealment algorithm which uses motion-compensated macroblocks from previous frames; (2) a spatial error concealment algorithm which uses the Hough transform to detect edges in both foreground and background colors and applies directional interpolation or directional filtering to provide improved edge reproduction; (3) a codec which uses data hiding to transmit error correction information; (4) an enhanced codec which builds upon the last by improving performance in the error-free environment while maintaining excellent error recovery capabilities; (5) a method to allocate Reed-Solomon (R-S) packet-based forward error correction that decreases distortion (using a PSNR metric) at the receiver compared to standard FEC techniques. Finally, under the constraint of a constant bit rate, the tradeoff between traditional R-S FEC and alternate forward concealment information (FCI) is evaluated. Each of these developments is compared and contrasted with state-of-the-art techniques and shows improvements using widely accepted metrics. The dissertation concludes with a discussion of future work. Ph.D. Committee Chair: Mersereau, Russell; Committee Members: Altunbasak, Yucel; Fekri, Faramarz; Lanterman, Aaron; Zhou, Haomin
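    To make the first contribution concrete, the following is a minimal sketch of motion-compensated temporal concealment in the spirit described above: a lost macroblock is replaced by the candidate from the previous frame whose border best matches the intact pixels around the hole (boundary matching). The function name, block size, search range, and the SAD-based matching criterion are illustrative assumptions, not the thesis's exact algorithm.

```python
import numpy as np

def conceal_macroblock(prev_frame, cur_frame, x, y, mb=16, search=8):
    """Replace the lost mb x mb block at column x, row y of cur_frame with
    the candidate from prev_frame whose outer rows/columns best match the
    intact one-pixel border surrounding the hole (boundary matching)."""
    h, w = cur_frame.shape
    # Intact one-pixel border around the lost block in the current frame.
    top    = cur_frame[y - 1, x:x + mb]  if y > 0      else None
    bottom = cur_frame[y + mb, x:x + mb] if y + mb < h else None
    left   = cur_frame[y:y + mb, x - 1]  if x > 0      else None
    right  = cur_frame[y:y + mb, x + mb] if x + mb < w else None

    best_cost, best = np.inf, (y, x)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            py, px = y + dy, x + dx
            if py < 0 or px < 0 or py + mb > h or px + mb > w:
                continue
            cand = prev_frame[py:py + mb, px:px + mb].astype(np.float64)
            cost = 0.0
            # Compare the candidate's border with the intact surrounding pixels.
            if top    is not None: cost += np.abs(cand[0, :]  - top).sum()
            if bottom is not None: cost += np.abs(cand[-1, :] - bottom).sum()
            if left   is not None: cost += np.abs(cand[:, 0]  - left).sum()
            if right  is not None: cost += np.abs(cand[:, -1] - right).sum()
            if cost < best_cost:
                best_cost, best = cost, (py, px)
    py, px = best
    return prev_frame[py:py + mb, px:px + mb].copy()
```

    In use, the decoder would call this per lost macroblock and paste the returned block into the current frame; the boundary-matching cost is what lets the search recover an approximate motion vector even though the block's own motion vector was lost.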

    Application and Theory of Multimedia Signal Processing Using Machine Learning or Advanced Methods

    This Special Issue is a book collecting peer-reviewed research on various advanced technologies related to applications and theories of signal processing for multimedia systems using machine learning (ML) or other advanced methods. Multimedia signal processing covers image, video, and audio signals, as well as character recognition and the optimization of communication channels for networks. The specific topics included in this book are data hiding, encryption, object detection, image classification, and character recognition. Academics and colleagues who are interested in these topics will find it worthwhile to read.

    Detection of simple and complex deceits through facial micro-expressions: a comparison between human beings’ performances and machine learning techniques

    Micro-expressions have gained increasing interest in the last few years, both in scientific and professional contexts. Theoretically, their emergence suggests ongoing concealment, making them arguably one of the most reliable cues for lie detection (e.g., Yan, Wang, Liu, Wu & Fu, 2014; Venkatesh, Ramachandra & Bours, 2019). Given their fast onset, they are almost imperceptible to the eye of an untrained observer, making it necessary to work on automatic detection tools. Machine learning models have shown promising results in this domain; thus, the aim of the study at hand was to compare the performances that human judges and machine learning models obtain on the same dataset of stimuli. Regrettably, the machine learning performances ended up being around chance level, raising the question of why previous, similar studies have obtained better results. Briefly, insights on how to properly organize an experimental paradigm and collect a dataset for lie detection studies are discussed, while concluding that, among several other necessary cues, it is still crucial to consider micro-expressions when dealing with lie detection procedures.

    Selected topics on distributed video coding

    Distributed Video Coding (DVC) is a new paradigm for video compression based on the information theoretical results of Slepian and Wolf (SW), and Wyner and Ziv (WZ). While conventional coding has a rigid complexity allocation, as most of the complex tasks are performed at the encoder side, DVC enables a flexible complexity allocation between the encoder and the decoder. The most novel and interesting case is low complexity encoding and complex decoding, which is the opposite of conventional coding. While the latter is suitable for applications where the cost of the decoder is more critical than that of the encoder, DVC opens the door to a new range of applications where low complexity encoding is required and the decoder's complexity is not critical. This is particularly relevant given the widespread deployment of small, battery-powered multimedia mobile devices in our daily lives. Further, since DVC operates as a reversed-complexity scheme when compared to conventional coding, DVC also enables the interesting scenario of low complexity encoding and decoding at both ends by transcoding between DVC and conventional coding. More specifically, DVC enables low complexity encoding at one end; the resulting stream is then decoded and conventionally re-encoded to enable low complexity decoding at the other end. Multiview video is attractive for a wide range of applications such as free viewpoint television, which is a system that allows viewing the scene from a viewpoint chosen by the viewer. Moreover, multiview can be beneficial for monitoring purposes in video surveillance. The increased use of multiview video systems is mainly due to the improvements in video technology and the reduced cost of cameras. While a conventional multiview codec will try to exploit the correlation among the different cameras at the encoder side, DVC allows for separate encoding of correlated video sources. Therefore, DVC requires no communication between the cameras in a multiview scenario. This is an advantage since communication is time consuming (i.e., it adds delay) and requires complex networking. Another appealing feature of DVC is the fact that it is based on a statistical framework. Moreover, DVC behaves as a natural joint source-channel coding solution. This results in improved error resilience when compared to conventional coding. Further, DVC-based scalable codecs do not require deterministic knowledge of the lower layers. In other words, the enhancement layers are completely independent of the base layer codec. This is called the codec-independent scalability feature, which offers high flexibility in the way the various layers are distributed in a network. This thesis addresses the following topics: First, the theoretical foundations of DVC as well as the practical DVC scheme used in this research are presented. The potential applications for DVC are also outlined. DVC-based schemes use conventional coding to compress parts of the data, while the rest is compressed in a distributed fashion. Thus, several conventional codecs are studied in this research and compared in terms of compression efficiency over a rich set of sequences. This includes fine tuning the compression parameters such that the best performance is achieved for each codec. Further, DVC tools for improved Side Information (SI) and Error Concealment (EC) are introduced for monoview DVC using a partially decoded frame.
The improved SI results in a significant gain in reconstruction quality for video with high activity and motion. This is done by re-estimating the erroneous motion vectors using the partially decoded frame to improve the SI quality. The latter is then used to enhance the reconstruction of the finally decoded frame. Further, the introduced spatio-temporal EC improves the quality of decoded video in the case of erroneously received packets, outperforming both spatial and temporal EC. Moreover, it also outperforms error-concealed conventional coding in different modes. Then, multiview DVC is studied in terms of SI generation, which differentiates it from the monoview case. More specifically, different multiview prediction techniques for SI generation are described and compared in terms of prediction quality, complexity and compression efficiency. Further, a technique for iterative multiview SI is introduced, where the final SI is used in an enhanced reconstruction process. The iterative SI outperforms the other SI generation techniques, especially for high motion video content. Finally, fusion techniques for temporal and inter-view side information are introduced as well, which improve the performance of multiview DVC over monoview coding. DVC is also used to enable scalability for image and video coding. Since DVC is based on a statistical framework, the base and enhancement layers are completely independent, which is an interesting property called codec-independent scalability. Moreover, the introduced DVC scalable schemes show good robustness to errors, as the quality of decoded video decreases steadily as the error rate increases. On the other hand, conventional coding exhibits a cliff effect, as the performance drops dramatically after a certain error rate value. Further, the issue of privacy protection is addressed for DVC by transform domain scrambling, which is used to alter regions of interest in video such that the scene is still understood while privacy is preserved. The proposed scrambling techniques are shown to provide a good level of security without impairing the performance of the DVC scheme when compared to the scheme without scrambling. This is particularly attractive for video surveillance scenarios, which is one of the most promising applications for DVC. Finally, a practical DVC demonstrator built during this research is described, where the main requirements as well as the observed limitations are presented. Furthermore, it is set up as closely as possible to a complete, real application scenario. This shows that it is actually possible to implement a complete end-to-end practical DVC system relying only on realistic assumptions. Even though DVC is, for the moment, inferior in compression efficiency to state-of-the-art conventional coding, its strengths reside in its good error resilience properties and the codec-independent scalability feature. Therefore, DVC offers promising possibilities for video compression when transmission over error-prone environments is required, as it significantly outperforms conventional coding in this case.
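    Since SI generation is central to the scheme above, here is a minimal sketch of a common baseline: side information built by block-based motion-compensated interpolation between two decoded key frames. The block size, search range, SAD criterion, and the halving of the motion vector (linear motion assumption) are illustrative choices, not this thesis's particular SI or fusion techniques.

```python
import numpy as np

def side_information(key_prev, key_next, blk=8, search=4):
    """Estimate the Wyner-Ziv frame lying midway between two key frames:
    find block motion between the key frames, then average the reference
    placed at half the motion vector with the co-located block of the next
    key frame (a simplification of full bidirectional interpolation).
    Assumes frame dimensions are divisible by blk."""
    h, w = key_prev.shape
    si = np.zeros((h, w), dtype=np.float64)
    for by in range(0, h - blk + 1, blk):
        for bx in range(0, w - blk + 1, blk):
            block = key_next[by:by + blk, bx:bx + blk].astype(np.float64)
            best_sad, mv = np.inf, (0, 0)
            # Full search for the best match in the previous key frame.
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    py, px = by + dy, bx + dx
                    if py < 0 or px < 0 or py + blk > h or px + blk > w:
                        continue
                    cand = key_prev[py:py + blk, px:px + blk].astype(np.float64)
                    sad = np.abs(cand - block).sum()
                    if sad < best_sad:
                        best_sad, mv = sad, (dy, dx)
            # The missing frame sits halfway in time, so use half the motion.
            hy = min(max(by + mv[0] // 2, 0), h - blk)
            hx = min(max(bx + mv[1] // 2, 0), w - blk)
            half = key_prev[hy:hy + blk, hx:hx + blk].astype(np.float64)
            si[by:by + blk, bx:bx + blk] = 0.5 * (half + block)
    return si
```

    The WZ decoder would then treat this SI frame as a noisy version of the original and correct it with the received parity bits; the better the SI, the fewer parity bits are needed, which is why the iterative and fusion-based SI techniques above pay off.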

    Multimedia over wireless IP networks: distortion estimation and applications

    2006/2007. This thesis deals with multimedia communication over unreliable and resource-constrained IP-based packet-switched networks. The focus is on estimating, evaluating and enhancing the quality of streaming media services, with particular regard to video services. The original contributions of this study involve mainly the development of three video distortion estimation techniques and the successive definition of some application scenarios used to demonstrate the benefits obtained by applying such algorithms. The material presented in this dissertation is the result of the studies performed within the Telecommunication Group of the Department of Electronic Engineering at the University of Trieste during the course of the Doctorate in Information Engineering. In recent years, multimedia communication over wired and wireless packet-based networks has exploded. Applications such as BitTorrent, music file sharing and multimedia podcasting are the main sources of traffic on the Internet. Internet radio, for example, is now evolving into peer-to-peer television such as CoolStreaming. Moreover, web sites such as YouTube have made publishing videos on demand available to anyone owning a home video camera. Another challenge in the multimedia evolution is inside the home, where video is distributed over local WiFi networks to many end devices. More generally, we are witnessing an all-media-over-IP revolution, with radio, television, telephony and stored media all being delivered over IP wired and wireless networks. All the presented applications require extremely high bandwidth and often low delay, especially for interactive applications. Unfortunately, the Internet and wireless networks provide only limited support for multimedia applications. Variations in network conditions can have considerable consequences for real-time multimedia applications and can lead to an unsatisfactory user experience. In fact, multimedia applications are usually delay-sensitive, bandwidth-intensive and loss-tolerant applications. To overcome these limitations, efficient adaptation mechanisms must be derived to bridge the application requirements with the transport medium characteristics. Several approaches have been proposed for the robust transmission of multimedia packets; they range from source coding solutions to the addition of redundancy with forward error correction and retransmissions. Additionally, other techniques are based on developing efficient QoS architectures at the network layer or at the data link layer, where routers or specialized devices apply different forwarding behaviors to packets depending on the value of some field in the packet header. Using such a network architecture, video packets are assigned to classes in order to obtain different treatment by the network; in particular, packets assigned to the most privileged class will be lost with a very small probability, while packets belonging to the lowest priority class will experience the traditional best-effort service. The key problem in this solution, however, is how to optimally assign video packets to the network classes. One way to perform the assignment is to proceed on a packet-by-packet basis, to exploit the highly non-uniform distortion impact of compressed video. Working on the distortion impact of each individual video packet has been shown in recent years to deliver better performance than relying on the average error sensitivity of each bitstream element.
The distortion impact of a video packet can be expressed as the distortion that would be introduced at the receiver by its loss, taking into account the effects of both error concealment and error propagation due to temporal prediction. The estimation algorithms proposed in this dissertation are able to reproduce accurately the distortion envelope deriving from multiple losses on the network, and the computational complexity required is negligible with respect to that of the algorithms proposed in the literature. Several tests are run to validate the distortion estimation algorithms and to measure the influence of the main encoder-decoder settings. Different application scenarios are described and compared to demonstrate the benefits obtained using the developed algorithms. The packet distortion impact is inserted in each video packet and transmitted over the network, where specialized agents manage the video packets using the distortion information. In particular, the internal structure of the agents is modified to allow video packet prioritization using primarily the distortion impact estimated by the transmitter. The results obtained show that, in each scenario, a significant improvement may be obtained with respect to traditional transmission policies. The thesis is organized in two parts. The first provides the background material and represents the basis of the following arguments, while the other is dedicated to the original results obtained during the research activity. In the first part, the first chapter provides an introduction to the principles of and challenges for multimedia transmission over packet networks. The most recent advances in video compression technologies are detailed in the second chapter, focusing in particular on aspects that involve resilience to packet loss impairments. The third chapter deals with the main techniques adopted to protect the multimedia flow to mitigate the packet loss corruption due to channel failures. The fourth chapter introduces the more recent advances in network adaptive media transport, detailing the techniques that prioritize the video packet flow. The fifth chapter reviews the literature on existing distortion estimation techniques, focusing mainly on their limitations. The second part of the thesis describes the original results obtained in modelling the video distortion deriving from transmission over an error-prone network. In particular, the sixth chapter presents three new distortion estimation algorithms able to estimate the video quality and shows the results of validation tests performed to measure the accuracy of the employed algorithms. The seventh chapter proposes different application scenarios where the developed algorithms may be used to quickly enhance the video quality at the end user side. Finally, the eighth chapter summarizes the thesis contributions and highlights the most important conclusions. It also derives some directions for future improvements. The intent of the entire work presented hereafter is to develop video distortion estimation algorithms able to predict the quality perceived by the user under network losses, as well as to present some useful applications able to enhance the user experience during a video streaming session.
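    To make the per-packet distortion impact more concrete, here is a minimal sketch of how it might be computed offline at the transmitter, assuming simple previous-frame concealment and a geometric attenuation of the propagated error through motion-compensated prediction. The attenuation factor, propagation horizon, and SSE metric are illustrative assumptions, not the three estimation algorithms developed in this thesis.

```python
import numpy as np

def packet_distortion_impact(frames, packet_blocks, alpha=0.9, horizon=10):
    """Estimate receiver-side distortion caused by losing one packet.

    frames        : list of decoded reference frames (2-D arrays), f >= 1
    packet_blocks : (frame_index, [(y, x, h, w), ...]) blocks the packet carries
    alpha         : per-frame attenuation of the propagated error (illustrative)
    horizon       : predicted frames over which the error propagates, e.g.
                    until the next intra refresh
    """
    f, blocks = packet_blocks
    # Concealment error: assume the decoder replaces each lost block with
    # the co-located block of the previous frame.
    sse0 = 0.0
    for (y, x, h, w) in blocks:
        diff = frames[f][y:y + h, x:x + w].astype(np.float64) \
             - frames[f - 1][y:y + h, x:x + w].astype(np.float64)
        sse0 += (diff ** 2).sum()
    # Error propagation: temporal prediction carries an attenuated copy of
    # the concealment error into each following frame.
    return sum(sse0 * alpha ** k for k in range(horizon))
```

    A network agent could then sort queued packets by this precomputed value (carried in the packet itself, as described above) and drop or deprioritize the packets with the smallest impact first.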

    Packet prioritizing and delivering for multimedia streaming

    Ph.D. (Doctor of Philosophy)

    Image Quality Improvement of Medical Images using Deep Learning for Computer-aided Diagnosis

    Retina image analysis is an important screening tool for early detection of multiple diseases such as diabetic retinopathy, which greatly impairs visual function. Image analysis and pathology detection can be accomplished both by ophthalmologists and by the use of computer-aided diagnosis systems. Advancements in hardware technology have led to more portable and less expensive imaging devices for medical image acquisition. This promotes large-scale remote diagnosis by clinicians as well as the implementation of computer-aided diagnosis systems for local routine disease screening. However, lower-cost equipment generally results in inferior quality images. This may jeopardize the reliability of the acquired images and thus hinder the overall performance of the diagnostic tool. To address this open challenge, we carried out an in-depth study on using different deep learning-based frameworks for improving retina image quality while maintaining the underlying morphological information for the diagnosis. Our results demonstrate that using a Cycle Generative Adversarial Network for unpaired image-to-image translation leads to successful transformations of retina images from a low- to a high-quality domain. The visual evidence of this improvement was quantitatively affirmed by the two proposed validation methods. The first used a retina image quality classifier to confirm a significant prediction label shift towards quality enhancement: on average, a 50% increase in images being classified as high quality was verified. The second analysed the performance modifications of a diabetic retinopathy detection algorithm upon being trained with the quality-improved images. The latter led to strong evidence that the proposed solution satisfies the requirement of maintaining the images' original information for diagnosis, and that it ensures a pathology assessment more sensitive to the presence of pathological signs. These experimental results confirm the potential effectiveness of our solution in improving retina image quality for diagnosis. Along with the addressed contributions, we analysed how the construction of the datasets representing the low-quality domain impacts the quality translation efficiency. Our findings suggest that by tackling the problem more selectively, that is, constructing datasets more homogeneous in terms of their image defects, we can obtain more accentuated quality transformations.
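    The first validation method above (measuring the prediction-label shift) is easy to express in code. The following is a minimal sketch assuming a pretrained binary quality classifier and a trained enhancement generator are available as callables; quality_classifier and enhance are placeholder names, not components named in the thesis.

```python
import numpy as np

def high_quality_fraction(images, quality_classifier):
    """Fraction of images a binary classifier labels as high quality (label 1)."""
    labels = [quality_classifier(img) for img in images]
    return float(np.mean(labels))

def label_shift(low_quality_images, enhance, quality_classifier):
    """Classify each image before and after enhancement and report both
    rates; the shift between them quantifies the quality translation."""
    before = high_quality_fraction(low_quality_images, quality_classifier)
    after = high_quality_fraction(
        [enhance(img) for img in low_quality_images], quality_classifier)
    # The reported 50% average increase would appear as after ~= 1.5 * before.
    return before, after
```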

    Low delay video coding

    Analogue wireless cameras have been employed for decades; however, they have not become a universal solution due to their difficulty of setup and use. The main problem is link robustness, which depends mainly on the requirement of a line-of-sight path between transmitter and receiver, a working condition that is not always possible. Despite the use of tracking antenna systems such as the Portable Intelligent Tracking Antenna (PITA [1]), if strong multipath fading occurs (e.g. obstacles between transmitter and receiver) the picture rapidly falls apart. Digital wireless cameras based on Orthogonal Frequency Division Multiplexing (OFDM) modulation schemes offer a valid solution to the above problem. OFDM offers strong multipath protection due to the insertion of the guard interval; in particular, the OFDM-based DVB-T standard has proven to offer excellent performance for the broadcasting of multimedia streams with bit rates over 10 Mbps in difficult terrestrial propagation channels, for fixed and portable applications. However, in typical conditions, the latency needed to compress/decompress a digital video signal at Standard Definition (SD) resolution is of the order of 15 frames, which corresponds to ≃ 0.5 sec. This delay introduces a serious problem when wireless and wired cameras have to be interfaced. Cabled cameras do not use compression, because the cable that directly links transmitter and receiver does not impose restrictive bandwidth constraints; therefore, the only latency affecting a cabled camera link is the propagation delay on the cable, which is almost insignificant. When switching between wired and wireless cameras, the residual latency makes it impossible to achieve audio-video synchronization, with consequent disagreeable effects. A way to solve this problem is to provide a low delay digital processing scheme based on a video coding algorithm which avoids massive intermediate data storage. The analysis of the most recent MPEG-based coding standards highlights a series of problems which limit the real performance of a low delay MPEG coding system. The first effort of this work is to study the MPEG standard to understand its limits from both the coding-delay and implementation-complexity points of view. This thesis also investigates an alternative solution based on the HERMES codec, a proprietary algorithm which is described, implemented and evaluated. HERMES achieves better results than MPEG in terms of latency and implementation complexity, at the price of lower compression efficiency, which means higher output bit rates. The use of the HERMES codec together with an enhanced OFDM system [2] leads to a competitive solution for wireless digital professional video applications.
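    As a quick check of the figures quoted above, the 15-frame codec latency and the cable propagation delay differ by about six orders of magnitude (assuming a 30 fps source, a 100 m cable, and a propagation speed of roughly two-thirds the speed of light; at 25 fps the codec delay would be 0.6 s):

```latex
t_{\text{codec}} = \frac{N_{\text{frames}}}{f} = \frac{15}{30\ \text{fps}} = 0.5\ \text{s},
\qquad
t_{\text{cable}} \approx \frac{L}{\tfrac{2}{3}c} = \frac{100\ \text{m}}{2\times10^{8}\ \text{m/s}} = 0.5\ \mu\text{s}
```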