Mechanisms for QoE optimisation of video traffic: a review paper
Transmission of video traffic over the Internet has grown exponentially in the past few years with no sign of waning. This increasing demand for video services has changed user expectations of quality. Various mechanisms have been proposed to optimise the Quality of Experience (QoE) of end users' video. Studying these approaches is necessary so that new methods can be proposed or existing ones combined and tailored. We discuss the challenges facing the optimisation of QoE for video traffic in this paper. It surveys and classifies these mechanisms based on their functions. The limitations of each are identified and future directions are highlighted.
Multimedia delivery in the future internet
The term "Networked Media" implies that all kinds of media, including text, images, 3D graphics, audio and video, are produced, distributed, shared, managed and consumed on-line through various networks, like the Internet, fibre, WiFi, WiMAX, GPRS, 3G and so on, in a convergent manner [1]. This white paper is the contribution of the Media Delivery Platform (MDP) cluster and aims to cover the challenges of Networked Media in the transition to the Future Internet.
The Internet has evolved and changed the way we work and live. End users of the Internet have been confronted with a bewildering range of media, services and applications, and with technological innovations concerning media formats, wireless networks, and terminal types and capabilities. And there is little evidence that the pace of this innovation is slowing. Today, over one billion users access the Internet on a regular basis, more than 100 million users have downloaded at least one (multi)media file, and over 47 million of them do so regularly, searching in more than 160 Exabytes of content. In the near future these numbers are expected to rise exponentially. Internet content is expected to increase by at least a factor of 6, rising to more than 990 Exabytes before 2012, fuelled mainly by the users themselves. Moreover, it is envisaged that in the near- to mid-term future, the Internet will provide the means to share and distribute (new) multimedia content and services with superior quality and striking flexibility, in a trusted and personalised way, improving citizens' quality of life, working conditions, edutainment and safety.
In this evolving environment, new transport protocols, new multimedia encoding schemes, cross-layer in-network adaptation, machine-to-machine communication (including RFIDs), rich 3D content, as well as community networks and the use of peer-to-peer (P2P) overlays, are expected to generate new models of interaction and cooperation, and to support enhanced perceived quality-of-experience (PQoE) and innovative applications "on the move", like virtual collaboration environments, personalised services/media, virtual sport groups, on-line gaming and edutainment. In this context, interaction with content, combined with interactive/multimedia search capabilities across distributed repositories, opportunistic P2P networks and dynamic adaptation to the characteristics of diverse mobile terminals, is expected to contribute towards such a vision.
Based on work that has taken place in a number of EC co-funded projects in Framework Programme 6 (FP6) and Framework Programme 7 (FP7), a group of experts and technology visionaries have voluntarily contributed to this white paper, aiming to describe the status, the state of the art, the challenges and the way ahead in the area of content-aware media delivery platforms.
3D multiple description coding for error resilience over wireless networks
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Mobile communications has gained growing interest from both customers and service providers alike in the last two decades. Visual information is used in many application domains such as remote health care, video-on-demand, broadcasting, video surveillance, etc. In order to enhance the visual effect of digital video content, depth perception needs to be provided along with the actual visual content. 3D video has earned significant interest from the research community in recent years, due to the tremendous impact it has on viewers and its enhancement of the user's quality of experience (QoE). In the near future, 3D video is likely to be used in most video applications, as it offers a greater sense of immersion and perceptual experience. When 3D video is compressed and transmitted over error-prone channels, the associated packet loss leads to visual quality degradation. When a picture is lost or corrupted so severely that the concealment result is not acceptable, the receiver typically pauses video playback and waits for the next INTRA picture to resume decoding. Error propagation caused by predictive coding may degrade the video quality severely. There are several ways to mitigate the effects of such transmission errors. One widely used technique in international video coding standards is error resilience.
The motivation behind this research work is that existing schemes for 2D colour video compression, such as MPEG, JPEG and H.263, cannot be applied to 3D video content. 3D video signals contain depth as well as colour information and are bandwidth demanding, as they require the transmission of multiple high-bandwidth 3D video streams. On the other hand, the capacity of wireless channels is limited, and wireless links are prone to various types of errors caused by noise, interference, fading, handoff, error bursts and network congestion. Given a maximum bit-rate budget to represent the 3D scene, the bit-rate allocation between texture and depth information should be optimised so that rendering distortion/losses are minimised. To mitigate the effect of these errors on perceptual 3D video quality, error-resilient video coding needs to be investigated further to offer a better quality of experience (QoE) to end users.
This research work aims at enhancing the error resilience of compressed 3D video transmitted over mobile channels, using Multiple Description Coding (MDC), in order to improve the user's quality of experience (QoE).
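As a toy illustration of the MDC principle referred to above (not the thesis' actual 3D scheme), a stream can be split into two independently decodable descriptions, e.g. even and odd frames; if one description is lost, the receiver conceals the gaps from the surviving one. Frames are plain integers here, standing in for real frame data.

```python
# Minimal temporal MDC sketch, assuming frames form an ordered list.
def split_descriptions(frames):
    """Split a frame sequence into two descriptions: even and odd frames."""
    return frames[0::2], frames[1::2]

def reconstruct(even, odd, total):
    """Merge both descriptions; if one is missing (None), conceal each lost
    frame by copying its nearest neighbour from the surviving description."""
    out = []
    for i in range(total):
        desc = even if i % 2 == 0 else odd
        if desc is not None:
            out.append(desc[i // 2])
        else:  # description lost: copy the nearest frame from the other one
            other = odd if i % 2 == 0 else even
            out.append(other[min(i // 2, len(other) - 1)])
    return out

frames = [10, 11, 12, 13, 14, 15]
even, odd = split_descriptions(frames)
print(reconstruct(even, odd, 6))   # both received: perfect reconstruction
print(reconstruct(None, odd, 6))   # even description lost: concealed copy
```

The point of the split is that either description alone still yields a watchable half-rate sequence, which is exactly the graceful-degradation property MDC targets over lossy channels.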
Furthermore, this thesis examines the sensitivity of the human visual system (HVS) when viewing 3D video scenes. The approach used in this study is subjective testing, rating people's perception of 3D video under error-free and error-prone conditions through a carefully designed bespoke questionnaire. Petroleum Technology Development Fund (PTDF)
Content-Aware Multimedia Communications
The demand for fast, economical and reliable dissemination of multimedia information is steadily growing within our society. While people and the economy increasingly rely on communication technologies, engineers still struggle with their growing complexity.
Complexity in multimedia communication originates from several sources. The
most prominent is the unreliability of packet networks like the Internet.
Recent advances in scheduling and error control mechanisms for streaming
protocols have shown that the quality and robustness of multimedia delivery
can be improved significantly when protocols are aware of the content they
deliver. However, the proposed mechanisms require close cooperation between
transport systems and application layers which increases the overall system
complexity. Current approaches also require computationally expensive metrics and focus only on specific encoding formats. A general and efficient model has so far been missing.
This thesis presents efficient and format-independent solutions to support
cross-layer coordination in system architectures. In particular, the first
contribution of this work is a generic dependency model that enables
transport layers to access content-specific properties of media streams,
such as dependencies between data units and their importance. The second
contribution is the design of a programming model for streaming
communication and its implementation as a middleware architecture. The
programming model hides the complexity of protocol stacks behind simple
programming abstractions, but exposes cross-layer control and monitoring
options to application programmers. For example, our interfaces allow
programmers to choose appropriate failure semantics at design time while
they can refine error protection and visibility of low-level errors at
run-time.
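The generic dependency model described above can be pictured with a small sketch: data units carry an importance value and references to the units they depend on, and a transport layer uses that knowledge to decide what to drop under congestion. The names, fields and drop policy here are illustrative assumptions, not the thesis' actual interfaces.

```python
# Hypothetical sketch of a content-dependency model for a media stream.
from dataclasses import dataclass, field

@dataclass
class DataUnit:
    uid: int
    importance: int                            # higher = more important
    deps: list = field(default_factory=list)   # uids this unit depends on

def droppable(units, budget):
    """Pick units to drop under congestion: discard the least important
    first, then never keep a unit whose dependency was dropped."""
    dropped = set()
    for u in sorted(units, key=lambda u: u.importance):
        if len(dropped) >= budget:
            break
        dropped.add(u.uid)
    # cascade: a unit is unusable if any of its dependencies is dropped
    changed = True
    while changed:
        changed = False
        for u in units:
            if u.uid not in dropped and any(d in dropped for d in u.deps):
                dropped.add(u.uid)
                changed = True
    return dropped

# An I-frame (uid 0) is most important; P-frames depend on it; a B-frame on a P.
stream = [DataUnit(0, 10), DataUnit(1, 5, [0]), DataUnit(2, 5, [0]),
          DataUnit(3, 1, [1])]
print(sorted(droppable(stream, 1)))   # only the dispensable B-frame: [3]
```

The cascade step is what makes the model useful to a transport layer: dropping a reference picture implicitly invalidates everything predicted from it, so the protocol can account for that cost before discarding.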
Based on several examples, we show how our middleware simplifies the
integration of stream-based communication into large-scale application
architectures. An important result of this work is that despite cross-layer
cooperation, neither application nor transport protocol designers
experience an increase in complexity. Application programmers can even
reuse existing streaming protocols which effectively increases system
robustness.

Our society's demand for inexpensive and reliable communication grows steadily. While we make ourselves ever more dependent on modern communication technologies, the engineers of these technologies must both satisfy the demand for rapid introduction of new products and master the growing complexity of the systems.

The transmission of multimedia content such as video and audio data, in particular, is not trivial. One of the most prominent reasons for this is the unreliability of today's networks, such as the Internet. Packet losses and fluctuating transit times can massively impair presentation quality. As recent developments in the area of streaming protocols show, however, the quality and robustness of transmission can be controlled efficiently if streaming protocols exploit information about the content of the transported data. Existing approaches that describe the content of multimedia data streams are, however, mostly specialised to individual compression schemes and use computationally intensive metrics. This considerably reduces their practical usefulness. Moreover, the information exchange requires close cooperation between applications and transport layers. Since the interfaces of current system architectures are not prepared for this, either the interfaces must be extended or alternative architectural concepts must be created. The danger of both variants, however, is that the complexity of a system can thereby increase further.

The central goal of this dissertation is therefore to achieve cross-layer coordination while at the same time reducing complexity. Here the work makes two contributions to the current state of research. First, it defines a universal model for describing content attributes, such as importance values and dependency relations within a data stream. Transport layers can use this knowledge for efficient error control. Second, the work describes the Noja programming model for multimedia middleware. Noja defines abstractions for the transmission and control of multimedia streams that enable the coordination of streaming protocols with applications. For example, programmers can select suitable failure semantics and communication topologies and then refine and control the concrete error protection at run-time.
Scalable and network aware video coding for advanced communications over heterogeneous networks
This thesis was submitted for the degree of Doctor of Philosophy and was awarded by Brunel University. This work addresses the issues concerned with the provision of scalable video services over heterogeneous networks, particularly with regard to dynamic adaptation and the user's acceptable quality of service.
In order to provide and sustain an adaptive and network-friendly multimedia communication service, a suite of techniques that achieve automatic scalability and adaptation is developed. These techniques are evaluated objectively and subjectively to assess the Quality of Service (QoS) provided to diverse users with variable constraints and dynamic resources. The research ensured the consideration of various levels of user-acceptable QoS. The techniques are further evaluated with a view to establishing their performance against state-of-the-art scalable and non-scalable techniques.
To further improve the adaptability of the designed techniques, several experiments and real-time simulations are conducted with the aim of determining the optimum performance with various coding parameters and scenarios. The coding parameters and scenarios are evaluated and analysed to determine their performance using various types of video content and formats. Several algorithms are developed to provide dynamic adaptation of coding tools and parameters to specific video content types, formats and transmission bandwidths.
Due to the nature of heterogeneous networks, where channel conditions, terminals, users' capabilities and preferences, etc., change unpredictably, limiting the adaptability of any single technique, a Dynamic Scalability Decision Making Algorithm (SADMA) is developed. The algorithm autonomously selects one of the designed scalability techniques, basing its decision on the monitored and reported channel conditions. Experiments were conducted using a purpose-built heterogeneous network simulator, and the network-aware selection of the scalability techniques is based on real-time simulation results. A technique with minimum delay, low bit-rate, low frame rate and low quality is adopted as a reactive measure to a predicted bad channel condition. If the use of these techniques is not favoured due to reported deteriorating channel conditions, a reduced layered stream or the base layer is used. If the network status does not allow the use of the base layer, then the stream uses parameter identifiers with high efficiency to improve the scalability and adaptation of the video service.
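A channel-driven decision ladder of this kind can be sketched as a simple mapping from reported conditions to a delivery mode. The thresholds, parameter names and mode labels below are illustrative assumptions, not the actual SADMA logic.

```python
# Illustrative sketch of a scalability decision driven by channel reports.
def select_mode(loss_rate, bandwidth_kbps):
    """Map monitored channel conditions to a delivery mode (toy thresholds)."""
    if loss_rate < 0.01 and bandwidth_kbps >= 1000:
        return "full scalable stream"
    if loss_rate < 0.05 and bandwidth_kbps >= 500:
        return "low-delay low-rate technique"       # reactive measure
    if bandwidth_kbps >= 200:
        return "base layer only"                    # reduced layered stream
    return "high-efficiency parameter identifiers"  # last resort

print(select_mode(0.002, 2000))   # good channel: full scalable stream
print(select_mode(0.03, 600))     # degrading: low-delay low-rate technique
print(select_mode(0.10, 100))     # very poor: high-efficiency identifiers
```

The essential design point mirrored here is that the decision is recomputed from the most recent channel reports, so the delivered stream steps down (and back up) as conditions change.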
To further improve the flexibility and efficiency of the algorithm, dynamic de-blocking filter and lambda value selection are analysed and introduced into the algorithm. Various methods, interfaces and algorithms are defined for transcoding from one technique to another and for extracting sub-streams when the network conditions do not allow the transmission of the entire bit-stream.
Motion Scalability for Video Coding with Flexible Spatio-Temporal Decompositions
PhD. The research presented in this thesis aims to extend the scalability range of wavelet-based video coding systems in order to achieve fully scalable coding with a wide range of available decoding points. Since temporal redundancy regularly comprises the main portion of the overall video sequence redundancy, the techniques that can be generally termed motion decorrelation techniques have a central role in the overall compression performance. For this reason scalable motion modelling and coding are of utmost importance, and in this thesis possible solutions are identified and analysed.
The main contributions of the presented research are grouped into two interrelated and complementary topics. First, a flexible motion model with a rate-optimised estimation technique is introduced. The proposed motion model is based on tree structures and allows the high adaptability needed for layered motion coding. The flexible structure for motion compensation allows optimisation at different stages of the adaptive spatio-temporal decomposition, which is crucial for scalable coding that targets decoding at different resolutions. By utilising an adaptive choice of wavelet filterbank, the model enables high compression based on efficient mode selection. Second, solutions for scalable motion modelling and coding are developed. These solutions are based on precision limiting of motion vectors and the creation of a layered motion structure that describes hierarchically coded motion. The solution based on precision limiting relies on layered bit-plane coding of motion vector values. The second solution builds on recently established techniques that impose scalability on a motion structure. The new approach is based on two major improvements: the evaluation of distortion in temporal subbands, and motion search in temporal subbands that finds the optimal motion vectors for the layered motion structure.
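The precision-limiting idea can be sketched with plain integers: each enhancement layer carries one more bit-plane of the motion vector values, so a decoder that receives fewer layers gets coarser but still usable vectors. Signs, prediction and entropy coding are omitted; this is a toy model of bit-plane layering, not the thesis' actual coder.

```python
# Sketch of precision-limited, layered coding of motion-vector magnitudes.
def to_bitplanes(values, nbits=4):
    """Split non-negative magnitudes into bit-planes, most significant first."""
    return [[(v >> b) & 1 for v in values] for b in range(nbits - 1, -1, -1)]

def from_bitplanes(planes, nbits=4):
    """Reconstruct from however many planes were received; missing low-order
    planes simply read as zero (i.e. reduced precision, not failure)."""
    values = [0] * len(planes[0])
    for layer, plane in enumerate(planes):
        shift = nbits - 1 - layer
        for i, bit in enumerate(plane):
            values[i] |= bit << shift
    return values

mv = [5, 12, 3, 9]                  # magnitudes within a 4-bit range
planes = to_bitplanes(mv)
print(from_bitplanes(planes))       # all four planes: [5, 12, 3, 9]
print(from_bitplanes(planes[:2]))   # base + one layer: coarse [4, 12, 0, 8]
```

Truncating the layer stack quantises every vector at once, which is what makes the representation naturally scalable: rate and motion precision trade off by simply dropping trailing planes.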
Exhaustive tests of rate-distortion performance in demanding scalable video coding scenarios show the benefits of applying both the developed flexible motion model and the various solutions for scalable motion coding.
Design of interface selection protocols for multi-homed wireless networks
This thesis was submitted for the degree of Doctor of Philosophy and was awarded by Brunel University on 10 December 2010. Wireless communication stations conformant to the IEEE 802.11/802.16 standards have multi-homing transmission capability. To achieve greater communication efficiency, multi-homing-capable stations use a handover mechanism to select an appropriate transmission channel according to variations in channel quality. This thesis presents three interlinked handover schemes: (1) the Interface Selection Protocol (ISP), for the Wireless Local Area Network (WLAN)/Worldwide Interoperability for Microwave Access (WiMAX) environment, and (2) Fast Channel Scanning (FCS) and (3) the Traffic Manager (TM), both for the WiMAX environment. The proposed schemes use a novel mechanism for providing a reliable communication route. This solution is based on a cross-layer communication framework, in which the interface selection module uses various network-related parameters from the Medium Access Control (MAC) sub-layer/Physical Layer (PHY) across the protocol suite for decision making at the network layer. The proposed solutions are highly responsive compared with existing multi-homed schemes; responsiveness is one of the key factors in the design of such protocols. The route selected under these schemes is based on the most up-to-date link-layer information. Therefore, such a route is not only reliable in terms of route optimisation but also fulfils application demands in terms of throughput and delay. The design of the ISP protocol uses probing frames during the route discovery process. 802.11 mandates the use of different rates for data transmission frames. The ISP metric can be incorporated into various routing aspects, and its applicability is determined by the availability of the MAC-dependent parameters that are used to determine the best path metric values.
In many cases, higher device density, interference and mobility cause variable medium access delays. This leads to the creation of "unreachable zones", where the destination is marked as unreachable. However, by use of the best path metric, the destination can be made reachable anytime and anywhere, because of the intelligent use of probing frames and the interface selection algorithm implemented. IEEE 802.16e introduces several MAC-level queues for different access categories, maintaining service requirements within these queues; this implies that frames from a higher-priority queue, e.g. video frames, are serviced more frequently than those belonging to lower-priority queues. Such an enhancement at the MAC sub-layer introduces uneven queuing delays. Conventional routing protocols are unaware of such MAC-specific constraints; as a result, these factors are not considered, which degrades channel performance. To meet such challenges, the thesis presents the FCS and TM schemes for WiMAX. FCS improves mobile WiMAX handover by addressing scanning latency, since minimising scanning time is the most important issue in the handover process. This handover scheme aims to utilise the channel efficiently and to reduce the time it takes to scan the neighbouring access stations. TM uses MAC- and physical-layer (PHY) specific information in the interface metric and maintains a separate path to the destination by applying an alternative interface operation. Simulation tests and comparisons with existing multi-homed protocols and handover schemes demonstrate the effectiveness of incorporating the medium-dependent parameters, and show that the suggested schemes achieve better performance in terms of end-to-end delay and throughput, with efficiency gains of up to 40% in specific test scenarios.
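A cross-layer interface selection of this kind can be sketched as a scoring function: the network layer computes a path metric for each interface from MAC/PHY readings and picks the lowest-cost one. The field names, weights and sample values below are illustrative assumptions, not the actual ISP metric.

```python
# Hypothetical sketch of cross-layer interface selection from link-layer stats.
def path_metric(link):
    """Lower is better: penalise frame loss and MAC queuing delay, and
    reward raw link capacity (weights are arbitrary for illustration)."""
    return (link["frame_loss"] * 100        # loss ratio, heavily weighted
            + link["mac_delay_ms"]          # medium-access delay
            + 1000.0 / link["rate_mbps"])   # inverse of link rate

def select_interface(links):
    """Choose the interface whose current readings give the best metric."""
    return min(links, key=lambda name: path_metric(links[name]))

links = {
    "wlan":  {"frame_loss": 0.02, "mac_delay_ms": 12, "rate_mbps": 54},
    "wimax": {"frame_loss": 0.01, "mac_delay_ms": 25, "rate_mbps": 20},
}
print(select_interface(links))   # the WLAN link scores better here
```

Because the metric is recomputed from the latest link-layer readings, the selected route tracks channel quality instead of relying on stale network-layer state, which is the responsiveness property the schemes above emphasise.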
Scalable Video Streaming with Prioritised Network Coding on End-System Overlays
PhD. Distribution over the Internet is destined to become a standard approach for live broadcasting of TV or events of nation-wide interest. The demand for high-quality live video with personal requirements is destined to grow exponentially over the next few years. End-system multicast is a desirable option for relieving the content server from bandwidth bottlenecks and computational load, by allowing decentralised allocation of resources to the users and distributed service management. Network coding provides innovative solutions for a multitude of issues related to multi-user content distribution, such as the coupon-collection problem and the allocation and scheduling procedure. This thesis tackles the problem of streaming scalable video on end-system multicast overlays with prioritised push-based streaming.
We analyse the characteristics arising from a random coding process as a linear channel operator, and present a novel error detection and correction system for error-resilient decoding, providing one of the first practical frameworks for joint source-channel-network coding. Our system outperforms both network error correction and traditional FEC coding when performed separately. We then present a content distribution system based on end-system multicast. Our data exchange protocol makes use of network coding as a way to collaboratively deliver data to several peers. Prioritised streaming is performed by means of hierarchical network coding and a dynamic chunk selection for optimised rate allocation based on goodput statistics at the application layer. We prove, by simulated experiments, the efficient allocation of resources for adaptive video delivery. Finally, we describe the implementation of our coding system. We highlight the use of rateless coding properties, discuss the application in collaborative and distributed coding systems, and provide an optimised implementation of the decoding algorithm with advanced CPU instructions. We analyse computational load and packet loss protection via lab tests and simulations, complementing the overall analysis of the video streaming system in all its components.
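The network-coding principle underlying such a system can be illustrated with a toy random linear code over GF(2): peers forward random XOR combinations of source chunks, and a receiver decodes by Gaussian elimination once it has collected enough linearly independent combinations. This shows the principle only; the thesis' system uses prioritised/hierarchical coding over real media data, not this sketch.

```python
# Toy random linear network coding over GF(2); chunks are ints for simplicity.
import random

def encode(chunks, rng):
    """Produce one coded packet: (non-zero coefficient vector, XOR payload)."""
    while True:
        coeffs = [rng.randint(0, 1) for _ in chunks]
        if any(coeffs):
            break
    payload = 0
    for c, chunk in zip(coeffs, chunks):
        if c:
            payload ^= chunk
    return coeffs, payload

def decode(packets, n):
    """Gauss-Jordan elimination over GF(2); returns chunks, or None if the
    received packets do not yet span the full n-dimensional space."""
    rows = [[list(c), p] for c, p in packets]
    basis = []
    for col in range(n):
        pivot = next((r for r in rows if r[0][col] == 1), None)
        if pivot is None:
            return None                       # rank deficient: keep collecting
        rows.remove(pivot)
        for r in rows + basis:                # clear this column everywhere else
            if r[0][col] == 1:
                r[0] = [a ^ b for a, b in zip(r[0], pivot[0])]
                r[1] ^= pivot[1]
        basis.append(pivot)
    out = [0] * n                             # each basis row has a single 1
    for r in basis:
        out[r[0].index(1)] = r[1]
    return out

rng = random.Random(1)
chunks = [0xAA, 0x55, 0x0F]
received = []
while True:                                   # collect packets until decodable
    received.append(encode(chunks, rng))
    result = decode(received, len(chunks))
    if result is not None:
        break
print(result)   # → [170, 85, 15], the original chunks recovered
```

The coupon-collection advantage mentioned above is visible here: any sufficiently large set of independent combinations decodes the whole block, so no peer needs to chase one specific missing chunk.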