Advances in Video Coding for Broadcast Applications
1Department of Biomedical Engineering, Electronics and Telecommunications, Università Politecnica delle Marche, Via Brecce Bianche 12, 60131 Ancona, Italy; 2Video Broadcasting Coding, Basingstoke, UK; 3STMicroelectronics, Research & Innovation, 20041 Milano, Italy; 4Image and Video Processing Group (GPIV), Multimedia Applications and Telecommunications Institute (iTEAM), Technical University of Valencia, Camino de Vera, 46022 Valencia, Spain; 5Electrical and Computer Engineering, Indiana University-Purdue University, 723 West Michigan Street, SL160, Indianapolis, IN 46202, USA
Multimedia over wireless IP networks: distortion estimation and applications.
2006/2007. This thesis deals with multimedia communication over unreliable and resource-constrained IP-based packet-switched networks. The focus is on estimating, evaluating and enhancing the quality of streaming media services, with particular regard to video services. The original contributions of this study are mainly the development of three video distortion estimation techniques and the subsequent definition of several application scenarios used to demonstrate the benefits obtained by applying such algorithms. The material presented in this dissertation is the result of studies performed within the Telecommunication Group of the Department of Electronic Engineering at the University of Trieste during the Doctorate in Information Engineering.
In recent years, multimedia communication over wired and wireless packet-based networks has grown explosively. Applications such as BitTorrent, music file sharing and multimedia podcasting are among the main sources of traffic on the Internet. Internet radio, for example, is now evolving into peer-to-peer television such as CoolStreaming. Moreover, web sites such as YouTube have made publishing videos on demand available to anyone owning a home video camera. Another challenge in the multimedia evolution lies inside the house, where videos are distributed over local WiFi networks to many end devices. More generally, we are witnessing an all-media-over-IP revolution, with radio, television, telephony and stored media all being delivered over wired and wireless IP networks. All these applications require extremely high bandwidth and often low delay, especially for interactive applications. Unfortunately, the Internet and wireless networks provide only limited support for multimedia applications. Variations in network conditions can have considerable consequences for real-time multimedia applications and can lead to an unsatisfactory user experience. In fact, multimedia applications are usually delay-sensitive, bandwidth-intensive and loss-tolerant. To overcome these limitations, efficient adaptation mechanisms must be devised to bridge the application requirements with the characteristics of the transport medium.
Several approaches have been proposed for the robust transmission of multimedia
packets; they range from source coding solutions to the addition of redundancy with forward error correction and retransmissions. Additionally, other techniques
are based on developing efficient QoS architectures at the network layer or at the
data link layer where routers or specialized devices apply different forwarding
behaviors to packets depending on the value of some field in the packet header.
Using such a network architecture, video packets are assigned to classes so that the network treats them differently; in particular, packets assigned to the most privileged class will be lost with very small probability, while packets belonging to the lowest-priority class will experience the traditional best-effort service. The key problem in this solution is how to optimally assign video packets to the network classes. One way to perform the assignment is to proceed on a packet-by-packet basis, exploiting the highly non-uniform distortion impact of compressed video.
of compressed video. Working on the distortion impact of each individual video
packet has been shown in recent years to deliver better performance than relying
on the average error sensitivity of each bitstream element. The distortion impact
of a video packet can be expressed as the distortion that would be introduced at
the receiver by its loss, taking into account the effects of both error concealment
and error propagation due to temporal prediction.
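As a rough sketch of this packet-by-packet assignment (the two-class setup, field names and impact values below are illustrative, not taken from the thesis), packets carrying a precomputed distortion impact can be ranked and mapped to network classes:

```python
# Sketch: assign video packets to priority classes by distortion impact.
# The impact of each packet (the distortion its loss would introduce at
# the receiver, after error concealment and error propagation) is
# assumed to be precomputed by the encoder.

def assign_classes(packets, num_classes):
    """Map packets to classes 0 (most protected) .. num_classes-1
    (best effort), giving higher protection to higher impact."""
    ranked = sorted(packets, key=lambda p: p["impact"], reverse=True)
    per_class = max(1, len(ranked) // num_classes)
    assignment = {}
    for rank, pkt in enumerate(ranked):
        assignment[pkt["id"]] = min(rank // per_class, num_classes - 1)
    return assignment

packets = [
    {"id": 0, "impact": 41.2},  # e.g. an intra-coded slice
    {"id": 1, "impact": 3.7},
    {"id": 2, "impact": 12.9},
    {"id": 3, "impact": 0.8},
]
classes = assign_classes(packets, num_classes=2)
```

With two classes, the two highest-impact packets land in the protected class and the rest fall back to best effort.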
The estimation algorithms proposed in this dissertation accurately reproduce the distortion envelope resulting from multiple losses on the network, and their computational complexity is negligible compared with that of the techniques proposed in the literature. Several tests are run to validate the distortion estimation algorithms and to measure the influence of the main encoder and decoder settings. Different application scenarios are described and compared to demonstrate the benefits obtained using the developed algorithms. The packet distortion impact is inserted in each video packet and transmitted over the network, where specialized agents manage the video packets using the distortion information. In particular, the internal structure of the agents is modified to allow video packet prioritization based primarily on the distortion impact estimated by the transmitter. The results show that, in each scenario, a significant improvement can be obtained with respect to traditional transmission policies.
The thesis is organized in two parts. The first provides the background material underpinning the topics that follow, while the second is dedicated to the original results obtained during the research activity.
In the first part, the first chapter gives an introduction to the principles and challenges of multimedia transmission over packet networks. The most recent advances in video compression technologies are detailed in the second chapter, focusing in particular on aspects related to resilience against packet losses. The third chapter deals with the main techniques adopted to protect the multimedia flow and mitigate the corruption caused by packet losses due to channel failures. The fourth chapter introduces the most recent advances in network-adaptive media transport, detailing the techniques that prioritize the video packet flow. The fifth chapter reviews the literature on existing distortion estimation techniques, focusing mainly on their limitations.
The second part of the thesis describes the original results obtained in modelling the video distortion deriving from transmission over an error-prone network. In particular, the sixth chapter presents three new distortion estimation algorithms able to estimate the video quality and shows the results of validation tests performed to measure their accuracy. The seventh chapter proposes different application scenarios where the developed algorithms may be used to enhance the video quality at the end-user side. Finally, the eighth chapter summarizes the thesis contributions, highlights the most important conclusions and outlines some directions for future work.
The intent of the work presented hereafter is to develop video distortion estimation algorithms able to predict the user-perceived quality resulting from losses on the network, and to provide the results of some useful applications able to enhance the user experience during a video streaming session.
Error-resilient multi-view video plus depth based 3-D video coding
Three Dimensional (3-D) video, by definition, is a collection of signals that can provide depth perception of a 3-D scene. With the development of 3-D display
technologies and interactive multimedia systems, 3-D video has attracted significant interest from both industries and academia with a variety of applications. In order to provide desired services in various 3-D video applications, the multiview video plus depth (MVD) representation, which can facilitate the generation of virtual views, has been determined to be the best format for 3-D video data.
Similar to 2-D video, compressed 3-D video is highly sensitive to transmission errors due to errors propagated from the current frame to the future predicted frames. Moreover, since the virtual views required for auto-stereoscopic displays are rendered from the compressed texture videos and depth maps, transmission
errors in the distorted texture videos and depth maps can propagate further into the virtual views. Moreover, distortions in texture and depth affect the rendered views differently. Therefore, compared with the reliable transmission of 2-D video, error-resilient coding of texture videos and depth maps faces major new challenges.
This research concentrates on improving the error-resilience performance of MVD-based 3-D video in packet loss scenarios. Based on an analysis of the propagation behaviour of transmission errors, a Wyner-Ziv (WZ)-based error-resilient algorithm is first designed for coding the multi-view video data or depth data. In this scheme, an auxiliary redundant stream encoded according to the WZ principle is employed to protect a primary stream encoded with a standard multi-view video coding codec. Then, considering that different combinations of texture and depth coding modes exhibit varying robustness to transmission errors, a rate-distortion optimized mode switching scheme is proposed to strike the optimal trade-off between robustness and compression efficiency. In this approach, the texture and depth modes are jointly optimized by minimizing the overall distortion of both the coded and synthesized views subject to a given bit rate. Finally, this study extends the research to the reliable transmission of view synthesis prediction (VSP)-based 3-D video. To mitigate the prediction position error caused by packet losses in the depth map, a novel disparity vector correction algorithm is developed, where the corrected disparity vector is calculated from the depth error. To facilitate decoder error concealment, the depth error is recursively estimated at the decoder.
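The joint texture/depth mode decision can be illustrated by a standard Lagrangian selection, minimizing D + λR over the candidate mode pairs. The sketch below is a generic illustration under assumed distortion and rate figures, not the thesis's actual model:

```python
# Sketch: rate-distortion optimized mode switching under packet loss.
# For each candidate (texture mode, depth mode) pair, an estimate of the
# expected synthesized-view distortion D (including loss effects) and
# the bit cost R is assumed available; the pair minimizing D + lambda*R
# is selected.

def best_mode(candidates, lam):
    """candidates: list of (mode_name, D, R); return the Lagrangian-optimal one."""
    return min(candidates, key=lambda c: c[1] + lam * c[2])

candidates = [
    ("intra/intra", 2.1, 900.0),  # robust but expensive in bits
    ("inter/inter", 6.4, 300.0),  # cheap but sensitive to losses
    ("inter/intra", 3.9, 500.0),
]
mode = best_mode(candidates, lam=0.01)
```

Lowering λ (i.e. caring less about rate) pushes the decision toward the more robust, more expensive intra modes, which matches the robustness/efficiency trade-off described above.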
The contributions of this dissertation are manifold. First, the proposed WZ-based error-resilient algorithm can accurately characterize the effect of transmission errors on multi-view distortion in the transform domain, taking both temporal and inter-view error propagation into account; based on the estimated distortion, it performs optimal WZ bit allocation at the encoder through an explicitly developed rate allocation strategy. The proposed algorithm provides fine-grained rate adaptivity and unequal error protection for multi-view data, not only at the frame level but also at the bit-plane level. Second, in the proposed mode switching scheme, a new analytic model is formulated to estimate the view synthesis distortion due to packet losses, in which the compound impact of the transmission distortions of both the texture video and the depth map on the quality of the synthesized view is mathematically analysed. The accuracy of this view synthesis distortion model is demonstrated via simulation results, and the estimated distortion is further integrated into a rate-distortion framework for optimal mode switching, achieving substantial performance gains over state-of-the-art algorithms. Last but not least, this dissertation provides a preliminary investigation of VSP-based 3-D video over unreliable channels. In the proposed disparity vector correction algorithm, the pixel-level depth map error can be precisely estimated at the decoder without deterministic knowledge of the error-free reconstructed depth. The approximation of the innovation term involved in depth error estimation is proved theoretically. This algorithm is very useful for concealing position-erroneous pixels whose disparity vectors are correctly received.
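In MVD systems, the disparity used for view synthesis is commonly derived from the quantized depth sample via the near/far clipping planes, so an estimated depth error translates directly into a disparity correction. The sketch below uses that standard relation; the camera parameters and depth values are illustrative assumptions, not figures from the dissertation:

```python
# Sketch: deriving a disparity from an 8-bit depth sample, as commonly
# done in MVD view synthesis. f = focal length (pixels), B = camera
# baseline, z_near/z_far = depth clipping planes; all values here are
# illustrative.

def depth_to_disparity(v, f, B, z_near, z_far):
    """Map a quantized depth sample v in [0, 255] to a horizontal disparity."""
    inv_z = (v / 255.0) * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far
    return f * B * inv_z

f, B, z_near, z_far = 1000.0, 0.05, 1.0, 100.0

d_received = depth_to_disparity(200, f, B, z_near, z_far)
d_true = depth_to_disparity(180, f, B, z_near, z_far)
# Once the depth error (here 20 quantization levels) has been estimated
# at the decoder, a disparity correction follows directly:
correction = d_true - d_received
```

This is the sense in which a recursively estimated depth error at the decoder yields a corrected disparity vector for concealment.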
Content-Aware Multimedia Communications
The demands for fast, economic and reliable dissemination of multimedia
information are steadily growing within our society. While people and
the economy increasingly rely on communication technologies, engineers
still struggle with their growing complexity.
Complexity in multimedia communication originates from several sources. The
most prominent is the unreliability of packet networks like the Internet.
Recent advances in scheduling and error control mechanisms for streaming
protocols have shown that the quality and robustness of multimedia delivery
can be improved significantly when protocols are aware of the content they
deliver. However, the proposed mechanisms require close cooperation between
transport systems and application layers which increases the overall system
complexity. Current approaches also require computationally expensive
metrics and focus only on specific encoding formats. A general and
efficient model has so far been missing.
This thesis presents efficient and format-independent solutions to support
cross-layer coordination in system architectures. In particular, the first
contribution of this work is a generic dependency model that enables
transport layers to access content-specific properties of media streams,
such as dependencies between data units and their importance. The second
contribution is the design of a programming model for streaming
communication and its implementation as a middleware architecture. The
programming model hides the complexity of protocol stacks behind simple
programming abstractions, but exposes cross-layer control and monitoring
options to application programmers. For example, our interfaces allow
programmers to choose appropriate failure semantics at design time while
they can refine error protection and visibility of low-level errors at
run-time.
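A minimal sketch of such a dependency model (the class and function names are hypothetical, not the thesis's API) could describe each data unit by its importance and the units it depends on, letting a transport layer reason about what remains decodable after losses without knowing the encoding format:

```python
# Sketch: a format-independent dependency model for media data units.
# Each unit carries an importance value and the ids of the units it
# depends on; a transport layer can use this to decide which units are
# still decodable after losses, regardless of the codec.

class DataUnit:
    def __init__(self, uid, importance, deps=()):
        self.uid = uid
        self.importance = importance
        self.deps = tuple(deps)

def decodable(units, lost):
    """Return the ids of units whose full dependency chain survived."""
    memo = {}
    by_id = {u.uid: u for u in units}
    def alive(uid):
        if uid in memo:
            return memo[uid]
        u = by_id[uid]
        memo[uid] = uid not in lost and all(alive(d) for d in u.deps)
        return memo[uid]
    return {u.uid for u in units if alive(u.uid)}

stream = [
    DataUnit("I0", importance=10),             # intra frame
    DataUnit("P1", importance=5, deps=["I0"]),
    DataUnit("B2", importance=1, deps=["I0", "P1"]),
]
usable = decodable(stream, lost={"P1"})
```

A scheduler could combine these importance values with the dependency graph to decide which units deserve retransmission or stronger protection.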
Based on some examples we show how our middleware simplifies the
integration of stream-based communication into large-scale application
architectures. An important result of this work is that despite cross-layer
cooperation, neither application nor transport protocol designers
experience an increase in complexity. Application programmers can even
reuse existing streaming protocols which effectively increases system
robustness.
Scalable and network aware video coding for advanced communications over heterogeneous networks
This thesis was submitted for the degree of Doctor of Philosophy and was awarded by Brunel University.
This work addresses the issues concerned with the provision of scalable video services over heterogeneous networks, particularly with regard to dynamic adaptation and the user's acceptable quality of service.
In order to provide and sustain an adaptive and network-friendly multimedia communication service, a suite of techniques achieving automatic scalability and adaptation is developed. These techniques are evaluated objectively and subjectively to assess the Quality of Service (QoS) provided to diverse users with variable constraints and dynamic resources. The research ensured the consideration of various levels of user-acceptable QoS. The techniques are further evaluated with a view to establishing their performance against state-of-the-art scalable and non-scalable techniques.
To further improve the adaptability of the designed techniques, several experiments and real time simulations are conducted with the aim of determining the optimum performance with various coding parameters and scenarios. The coding parameters and scenarios are evaluated and analyzed to determine their performance using various types of video content and formats. Several algorithms are developed to provide a dynamic adaptation of coding tools and parameters to specific video content type, format and bandwidth of transmission.
Because channel conditions, terminals, and users' capabilities and preferences in heterogeneous networks change unpredictably, limiting the adaptability of any single technique, a Dynamic Scalability Decision Making Algorithm (SADMA) is developed. The algorithm autonomously selects one of the designed scalability techniques, basing its decision on the monitored and reported channel conditions. Experiments were conducted using a purpose-built heterogeneous network simulator, and the network-aware selection of the scalability techniques is based on real-time simulation results. A technique with minimum delay, low bit-rate, low frame rate and low quality is adopted as a reactive measure to a predicted bad channel condition. If the use of these techniques is not favoured due to reported deteriorating channel conditions, a reduced layered stream or the base layer is used. If the network status does not allow even the use of the base layer, the stream uses high-efficiency parameter identifiers to improve the scalability and adaptation of the video service.
To further improve the flexibility and efficiency of the algorithm, a dynamic de-blocking filter and lambda value selection are analyzed and introduced into the algorithm. Various methods, interfaces and algorithms are defined for transcoding from one technique to another and for extracting sub-streams when the network conditions do not allow the transmission of the entire bit-stream.
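A hypothetical rendering of this decision logic is sketched below; the condition levels, thresholds and technique names are illustrative placeholders, not the values used in the thesis:

```python
# Sketch: network-aware selection of a scalability technique, in the
# spirit of the decision algorithm described above. The loss-rate and
# bandwidth thresholds are illustrative placeholders.

def select_technique(loss_rate, bandwidth_kbps):
    """Pick a delivery strategy from reported channel conditions."""
    if loss_rate > 0.20:
        return "high-efficiency parameter identifiers"
    if loss_rate > 0.10 or bandwidth_kbps < 256:
        return "base layer only"
    if loss_rate > 0.05:
        return "reduced layered stream"
    return "full scalable stream"

choice = select_technique(loss_rate=0.08, bandwidth_kbps=1000)
```

In a real system the inputs would come from the monitoring and reporting machinery, and the decision would be re-evaluated as conditions change.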
Research and developments of Dirac video codec
This thesis was submitted for the degree of Doctor of Philosophy and was awarded by Brunel University.
In digital video compression, apart from storage, successful transmission of the compressed video data over bandwidth-limited, error-prone channels is another important issue. To enable a video codec for broadcast applications, the corresponding coding tools (e.g. error-resilient coding, rate control, etc.) must be implemented. These are normally non-normative parts of a video codec, and hence their specifications are not defined in the standard. In Dirac too, the original codec is optimized for storage only, so several non-normative encoding tools are still required before it can be used in other types of application.
Under the research title "Research and Developments of the Dirac Video Codec", phase I of the project focuses mainly on error-resilient transmission over a noisy channel. The error-resilient coding method used here is a simple, low-complexity scheme that provides error-resilient transmission of the compressed bitstream of the Dirac video encoder over a packet-erasure wired network. The scheme combines source and channel coding: error-resilient source coding is achieved by data partitioning in the wavelet-transform domain, and channel coding through either a Rate-Compatible Punctured Convolutional (RCPC) code or a Turbo Code (TC), with unequal error protection between the header plus motion vectors (MV) and the data. The scheme is designed mainly for the packet-erasure channel, i.e. it targets Internet broadcasting applications.
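The unequal error protection between the two partitions can be sketched at the level of code-rate bookkeeping (the rates below are illustrative; actual RCPC or Turbo encoding is not implemented here):

```python
# Sketch: unequal error protection by partition. The header-plus-motion-
# vector partition gets a stronger (lower-rate) channel code than the
# wavelet coefficient data. The code rates are illustrative placeholders.

RATES = {"header_mv": 1 / 3, "data": 2 / 3}  # code rate = info bits / coded bits

def protected_size(partition, info_bits):
    """Coded size in bits after applying the partition's code rate."""
    return int(round(info_bits / RATES[partition]))

def packet_budget(header_mv_bits, data_bits):
    """Total coded bits for one packet under unequal protection."""
    return (protected_size("header_mv", header_mv_bits)
            + protected_size("data", data_bits))

total = packet_budget(header_mv_bits=200, data_bits=1000)
```

The point of the arrangement is that the small but critical header/MV partition pays a proportionally higher redundancy cost than the bulkier coefficient data.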
For a bandwidth-limited channel, however, it is still necessary to limit the number of bits generated by the encoder according to the available bandwidth, in addition to the error-resilient coding. In the second phase of the project, therefore, a rate control algorithm is presented. The algorithm is based on a Quality Factor (QF) optimization method, in which the QF of the encoded video changes adaptively so as to achieve an average bitrate that is constant over each Group of Pictures (GOP). A relation between the bitrate R and the QF, called the Rate-QF (R-QF) model, is derived in order to estimate the optimum QF of the current frame for a given target bitrate R.
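As an illustration of how such a model can drive rate control, the sketch below assumes an exponential R-QF relation R(QF) = a·exp(b·QF) (this functional form is an assumption for illustration; the thesis derives its own R-QF model), fits it from two observations and inverts it for a target bitrate:

```python
# Sketch: rate control via an assumed Rate-QF model R = a*exp(b*QF).
# Two (QF, rate) observations fit the model; inverting it yields the QF
# to use for a given target bitrate.
import math

def fit_rqf(qf1, r1, qf2, r2):
    """Fit R = a*exp(b*QF) through two (QF, rate) observations."""
    b = math.log(r2 / r1) / (qf2 - qf1)
    a = r1 / math.exp(b * qf1)
    return a, b

def qf_for_target(a, b, r_target):
    """Invert the model to get the QF achieving the target bitrate."""
    return math.log(r_target / a) / b

a, b = fit_rqf(qf1=4.0, r1=500.0, qf2=8.0, r2=2000.0)
qf = qf_for_target(a, b, r_target=1000.0)
```

In practice the model parameters would be refreshed as frames are encoded, keeping the average bitrate constant over each GOP.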
In some applications, such as video conferencing, real-time encoding and decoding with minimum delay is crucial, but the ability to perform real-time encoding/decoding is largely determined by the complexity of the encoder/decoder. The motion estimation process inside the encoder is well known to be the most time-consuming stage, so reducing its complexity brings the codec one step closer to real-time operation. As a partial contribution toward real-time application, the final phase of the research designs and implements a fast Motion Estimation (ME) strategy, combining a modified adaptive search with a semi-hierarchical approach to motion estimation. The same strategy was implemented in both Dirac and H.264 in order to investigate its performance on different codecs. Together with this fast ME strategy, a method called partial cost function calculation is presented to further reduce the computational load of the cost function. The calculation is based on predefined sets of pixel positions chosen to give the maximum possible coverage of the whole block.
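The idea of partial cost function calculation can be sketched as evaluating the SAD only at a predefined pattern of pixel positions; the checkerboard pattern and 4x4 block below are illustrative choices, not the thesis's actual patterns:

```python
# Sketch: partial cost function calculation for motion estimation.
# The SAD is evaluated only at a predefined subset of pixel positions
# covering the block, instead of at every pixel. A checkerboard pattern
# on a 4x4 block is used here purely for illustration.

def sad_full(cur, ref):
    """Full sum of absolute differences over the whole block."""
    return sum(abs(c - r) for row_c, row_r in zip(cur, ref)
               for c, r in zip(row_c, row_r))

def sad_partial(cur, ref, pattern):
    """SAD sampled only at the (y, x) positions in `pattern`."""
    return sum(abs(cur[y][x] - ref[y][x]) for y, x in pattern)

N = 4
checkerboard = [(y, x) for y in range(N) for x in range(N) if (y + x) % 2 == 0]

cur = [[10, 12, 11, 13], [9, 10, 12, 11], [8, 9, 10, 12], [7, 8, 9, 10]]
ref = [[11, 12, 10, 13], [9, 11, 12, 10], [8, 9, 11, 12], [7, 9, 9, 10]]

cost = sad_partial(cur, ref, checkerboard)  # 8 samples instead of 16
```

Halving the sampled positions roughly halves the cost-function work per candidate, which is where the speed-up of this technique comes from.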
In summary, this research work has contributed to the error-resilient transmission of compressed bitstreams of the Dirac video encoder over a bandwidth-limited, error-prone channel. In addition, the final phase of the research has partially contributed toward real-time application of the Dirac video codec by implementing a fast motion estimation strategy together with the partial cost function calculation idea.
BBC R&D and Brunel University