55 research outputs found
Evaluation of unidirectional background push content download services for the delivery of television programs
This thesis presents background push Content Download Services as an
efficient mechanism to deliver pre-produced television content through existing broadcast
networks. Nowadays, network operators dedicate a considerable amount of network
resources to the live delivery of television content, through both broadcast and unicast connections. This
service offering responds solely to commercial requirements: Content must be available
anytime and anywhere. However, from a strictly academic point of view, live streaming is
only a requirement for live content and not for pre-produced content. Moreover,
broadcasting is only efficient when the content is sufficiently popular.
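The break-even point between unicast and broadcast delivery can be made concrete with a toy cost model (a hypothetical sketch; the function and numbers below are illustrative assumptions, not figures from the thesis):

```python
def delivery_cost_gb(num_viewers: int, content_size_gb: float,
                     broadcast: bool) -> float:
    """Total network transfer under a naive cost model: unicast sends one
    copy per viewer, broadcast sends a single copy regardless of audience."""
    if broadcast:
        return content_size_gb
    return num_viewers * content_size_gb

size = 1.5  # GB, one pre-produced programme (illustrative)
# Broadcast wins as soon as more than one viewer wants the content,
# and the advantage grows linearly with popularity.
print(delivery_cost_gb(1000, size, broadcast=False))  # 1500.0 GB
print(delivery_cost_gb(1000, size, broadcast=True))   # 1.5 GB
```

Conversely, for content with a single interested viewer, broadcast wastes the channel on receivers that discard the data, which is why the services studied here push only sufficiently popular pre-produced content.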
The services under study in this thesis use residual capacity in broadcast networks to push
popular, pre-produced content to storage capacity in customer premises equipment. The
proposal responds only to efficiency requirements. On one hand, it creates value from
network resources otherwise unused. On the other hand, it delivers popular pre-produced
content in the most efficient way: through broadcast download services.
The results include models for the popularity and the duration of television content,
valuable for any research work dealing with file-based delivery of television content. Later,
the thesis evaluates the residual capacity available in broadcast networks through empirical
studies. These results are used in simulations to evaluate the performance of background
push content download services in different scenarios and for different applications. The
evaluation shows that this kind of service can become a great asset for the delivery of television content.
Fraile Gil, F. (2013). Evaluation of unidirectional background push content download services for the delivery of television programs [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/31656
Performance analysis of an ATM network with multimedia traffic: a simulation study
Traffic and congestion control are important in enabling ATM networks to maintain the Quality of Service (QoS) required by end users. A Call Admission Control (CAC) strategy ensures that the network has sufficient resources available at the start of each call, but this does not prevent a traffic source from violating the negotiated contract. A policing strategy (User Parameter Control (UPC)) is also required to enforce the negotiated rates for a particular connection and to protect conforming users from network overload.
The aim of this work is to investigate traffic policing and bandwidth management at the User to Network Interface (UNI). A policing function based on the leaky bucket (LB) is proposed which offers improved performance for both real time (RT) traffic, such as speech and video, and non-real time (non-RT) traffic, mainly data, by taking their QoS requirements into account. A video cell in violation of the negotiated bit rate causes the remainder of the slice to be discarded; this 'tail clipping' protects the decoder from damaged video slices. Speech cells are coded using a frequency domain coder, which places the most significant bits of a double speech sample into a high priority cell and the least significant bits into a low priority cell. In the case of congestion, the low priority cell can be discarded with little impact on the intelligibility of the received speech. Data cells, however, require loss-free delivery and are buffered rather than being discarded or tagged for subsequent deletion. This triple strategy is termed the super leaky bucket (SLB).
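The triple policing strategy can be sketched as follows (a hypothetical implementation; the class name, token accounting and cell field names are illustrative assumptions, not the thesis's design):

```python
from collections import deque

class SuperLeakyBucket:
    """Per-class policing sketch: video violations trigger tail clipping,
    violating speech cells lose only their low-priority half, and data
    cells are buffered rather than dropped."""

    def __init__(self, rate: float, depth: float):
        self.tokens = depth          # bucket starts full
        self.depth = depth
        self.rate = rate             # tokens replenished per time unit
        self.data_queue = deque()    # loss-free buffering for data cells
        self.clipping_slice = None   # video slice currently being discarded

    def tick(self, dt: float = 1.0):
        """Replenish tokens as time passes."""
        self.tokens = min(self.depth, self.tokens + self.rate * dt)

    def police(self, cell) -> str:
        kind = cell["kind"]          # 'video' | 'speech' | 'data'
        # Tail clipping: once a video slice violates, drop its remainder.
        if kind == "video" and cell["slice"] == self.clipping_slice:
            return "discard"
        if self.tokens >= 1:
            self.tokens -= 1
            return "send"
        if kind == "video":
            self.clipping_slice = cell["slice"]
            return "discard"
        if kind == "speech":
            # The low-priority half (least significant bits) is expendable.
            return "discard" if cell["priority"] == "low" else "send_tagged"
        # Data needs loss-free delivery: buffer instead of dropping.
        self.data_queue.append(cell)
        return "buffered"
```

The point of the sketch is that one policing mechanism takes three different violation actions depending on the traffic class.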
Separate queues for RT and non-RT traffic are also proposed at the multiplexer, with non-pre-emptive priority service for RT traffic if its queue exceeds a predetermined threshold. If the RT queue continues to grow beyond a second threshold, all low priority cells (mainly speech) are discarded. This scheme protects non-RT traffic from being tagged and subsequently discarded, by queueing the cells and also by throttling back non-RT sources during periods of congestion. It also prevents the RT cells from being delayed excessively in the multiplexer queue.
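The dual-queue service discipline can be sketched in a few lines (the threshold values and field names are illustrative assumptions):

```python
from collections import deque

def serve_one_cell(rt_q: deque, nrt_q: deque, t1: int = 10, t2: int = 20):
    """Serve one multiplexer slot. RT cells get non-pre-emptive priority
    once their queue length exceeds t1; beyond a second threshold t2,
    low-priority RT cells (mainly speech) are shed to relieve congestion.
    Non-RT (data) cells are never dropped, only delayed."""
    if len(rt_q) > t2:
        kept = deque(c for c in rt_q if c.get("priority") != "low")
        rt_q.clear()
        rt_q.extend(kept)
    if rt_q and (len(rt_q) > t1 or not nrt_q):
        return rt_q.popleft()
    if nrt_q:
        return nrt_q.popleft()
    return None
```

Below the first threshold the two queues share service (the cyclic alternation of the proposal is approximated here by serving the non-RT queue whenever RT is not backlogged).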
A simulation model has been designed and implemented to test the proposal. Realistic sources have been incorporated into the model to simulate the types of traffic which could be expected on an ATM network.
The results show that the SLB outperforms the standard LB for video cells: the number of cells discarded and the resulting number of damaged video slices are significantly reduced. Dual queues with cyclic service at the multiplexer also reduce the delays experienced by RT cells. The QoS for all categories of traffic is preserved.
Adaptation of variable-bit-rate compressed video for transport over a constant-bit-rate communication channel in broadband networks.
by Chi-yin Tse. Thesis (M.Phil.), Chinese University of Hong Kong, 1995. Includes bibliographical references (leaves 118-[121]).
Contents: 1. Introduction (Video Compression and Transport; VBR-CBR Adaptation of Video Traffic; Research Contributions: Spatial Smoothing: Video Aggregation, Temporal Smoothing: A Control-Theoretic Study; Organization of Thesis). 2. Preliminaries (MPEG Compression Scheme; Problems of Transmitting MPEG Video; Two-layer Coding and Transport Strategy: Framework of MPEG-based Layering, Transmission of GS and ES, Problems of Two-layer Video Transmission). 3. Video Aggregation (Motivation and Basic Concept of Video Aggregation; MPEG Video Aggregation System: Shortcomings of the MPEG Video Bundle Scenario with Two-Layer Coding and Cell-Level Multiplexing, MPEG Video Aggregation, System Architecture; Variations of the MPEG Video Aggregation System; Experimental Results: Comparison of Video Aggregation and Cell-level Multiplexing, Varying Amount of the Allocated Bandwidth, Varying Number of Sequences; Conclusion; Appendix: Alternative Implementations of MPEG Video Aggregation: Profile Approach, Bit-Plane Approach). 4. A Control-Theoretic Study of Video Traffic Adaptation (Review of Previous Adaptation Schemes: A Generic Model for Adaptation Schemes, Objectives of the Adaptation Controller; Motivation for Control-Theoretic Study; Linear Feedback Controller Model: Encoder Model, Adaptation Controller Model; Analysis: Stability, Robustness against Coding-mode Switching, Unit-Step and Unit-Sample Responses; Implementation; Experimental Results: Overall Performance of the Adaptation Scheme, Weak Control versus Strong Control, Varying Amount of Reserved Bandwidth; Conclusion; Appendix I: Further Research; Appendix II: Review of Previous Adaptation Schemes: Watanabe et al.'s Scheme, MPEG's Scheme, Lee et al.'s Modification, Chen's Adaptation Scheme). 5. Conclusion. Bibliography.
Dynamic bandwidth allocation in ATM networks
Includes bibliographical references. This thesis investigates bandwidth allocation methodologies to transport new emerging bursty traffic types in ATM networks. Existing ATM traffic management solutions are not readily able to handle the inevitable congestion that results from the bursty traffic of these new emerging services. This research addresses bandwidth allocation for bursty traffic by proposing and exploring the concept of dynamic bandwidth allocation and comparing it to traditional static bandwidth allocation schemes.
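The contrast between static and dynamic allocation can be sketched numerically (a toy model; the rates and source count are illustrative assumptions, not results from the thesis):

```python
import random

def static_reservation(peak_rates):
    """Static scheme: each connection reserves its peak rate for its
    whole lifetime, so the idle periods of bursty sources are wasted."""
    return sum(peak_rates)

def dynamic_reservation(instantaneous_demands):
    """Dynamic scheme: the reservation tracks current demand, reclaiming
    capacity whenever a bursty source goes quiet."""
    return sum(instantaneous_demands)

random.seed(1)
peaks = [10.0] * 8                                     # Mb/s, bursty sources
demands = [random.choice([0.0, 10.0]) for _ in peaks]  # on-off bursts
print(static_reservation(peaks))     # 80.0 Mb/s reserved regardless of load
print(dynamic_reservation(demands))  # only the currently active sources
```

The gap between the two totals is the capacity a dynamic scheme can hand to other connections, at the cost of the signalling and measurement machinery needed to track demand.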
Robust and efficient video/image transmission
The Internet has become a primary medium for information transmission. The unreliability of channel conditions, limited channel bandwidth and the explosive growth of information transmission requests, however, hinder its further development. Hence, research on robust and efficient delivery of video/image content is in strong demand.
Three aspects of this task, error burst correction, efficient rate allocation and random error protection, are investigated in this dissertation. A novel technique, called successive packing, is proposed for combating multi-dimensional (M-D) bursts of errors. A new concept of a basis interleaving array is introduced; by combining different basis arrays, effective M-D interleaving can be realized. It has been shown that this algorithm needs to be implemented only once and yet remains optimal for a set of error bursts of different sizes within a given two-dimensional (2-D) array.
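The burst-dispersing effect of 2-D interleaving can be illustrated with a plain row-in/column-out block interleaver (a textbook construction used here only for illustration; the successive-packing algorithm proposed in the dissertation is a different, more general design):

```python
def interleave(rows):
    """Write symbols row by row, transmit them column by column (a
    transpose). A contiguous burst on the channel then maps to isolated
    symbols spread across many rows, which an ECC can correct."""
    return [list(col) for col in zip(*rows)]

data = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
sent = interleave(data)
# Losing one transmitted row (e.g. sent[0] == [0, 3, 6]) costs every
# original row just one symbol instead of a contiguous run of three.
recovered = interleave(sent)  # transposing twice restores the layout
print(recovered == data)      # True
```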
To adapt to variable channel conditions, a novel rate allocation technique is proposed for Fine Granular Scalability (FGS) coded video, in which rate-distortion modeling based on real data is developed, a constant-quality constraint is adopted, and a sliding-window approach is proposed to adapt to the variable channel conditions. By using the proposed technique, constant quality is realized among frames by solving a set of linear functions; thus, significant computational simplification is achieved compared with the state-of-the-art techniques, while the overall distortion is reduced at the same time. To combat random errors during transmission, an unequal error protection (UEP) method and a robust error-concealment strategy are proposed for scalable coded video bitstreams.
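A simple rate-distortion model shows how a constant-quality constraint can reduce to linear equations (the hyperbolic form D_i = a_i / R_i below is an illustrative assumption; the dissertation fits its model to real data):

```python
def constant_quality_rates(complexities, budget):
    """Equalize per-frame distortion D_i = a_i / R_i subject to
    sum(R_i) = budget. Setting every D_i to a common value D gives
    R_i = a_i / D, and the budget constraint then yields the linear
    solution R_i = a_i * budget / sum(a)."""
    total = sum(complexities)
    return [a * budget / total for a in complexities]

a = [2.0, 1.0, 1.0]                         # frame complexities (illustrative)
rates = constant_quality_rates(a, budget=8.0)
print(rates)                                # [4.0, 2.0, 2.0]
print([ai / r for ai, r in zip(a, rates)])  # equal distortion per frame
```

No iterative optimization is needed: complex frames simply receive bits in proportion to their complexity, which is why such an allocation is computationally cheap.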
Low delay video coding
Analogue wireless cameras have been employed for decades; however, they have not become a universal solution due to the difficulty of their setup and use. The main problem is the robustness of the link, which depends chiefly on the requirement of a line-of-sight view between transmitter and receiver, a working condition that is not always possible. Despite the use of tracking antenna systems such as the Portable Intelligent Tracking Antenna (PITA [1]), if strong multipath fading occurs (e.g. obstacles between transmitter and receiver) the picture rapidly falls apart. Digital wireless cameras based on Orthogonal Frequency Division Multiplexing (OFDM) modulation schemes give a valid solution to the above problem. OFDM offers strong multipath protection due to the insertion of the guard interval; in particular, the OFDM-based DVB-T standard has proven to offer excellent performance for the broadcasting of multimedia streams with bit rates over 10 Mbps in difficult terrestrial propagation channels, for fixed and portable applications. However, in typical conditions, the latency needed to compress/decompress a digital video signal at Standard Definition (SD) resolution is of the order of 15 frames, which corresponds to ≃ 0.5 sec. This delay introduces a serious problem when wireless and wired cameras have to be interfaced. Cabled cameras do not use compression, because the cable that directly links transmitter and receiver does not impose restrictive bandwidth constraints; therefore, the only latency that affects a cabled camera link is the propagation delay on the cable, which is almost insignificant. When switching between wired and wireless cameras, the residual latency makes it impossible to achieve audio-video synchronization, with consequent disagreeable effects. A way to solve this problem is to provide a low-delay digital processing scheme based on a video coding algorithm that avoids massive intermediate data storage.
The analysis of the latest MPEG-based coding standards highlights a series of problems which limit the real performance of a low-delay MPEG coding system. The first effort of this work is to study the MPEG standard in order to understand its limits from both the coding delay and the implementation complexity points of view. This thesis also investigates an alternative solution based on the HERMES codec, a proprietary algorithm which is described, implemented and evaluated. HERMES achieves better results than MPEG in terms of latency and implementation complexity, at the price of lower compression efficiency, which means higher output bit rates. The use of the HERMES codec together with an enhanced OFDM system [2] leads to a competitive solution for wireless digital professional video applications.
Supporting real time video over ATM networks
Includes bibliographical references. In this project, we propose and evaluate an approach to delimit and tag independent video slices at the ATM layer for early discard. This involves the use of a tag cell, differentiated from the rest of the data by its PTI value, and a modified tag switch to facilitate the selective discarding of affected cells within each video slice, as opposed to dropping cells at random from multiple video frames.
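The slice-level early-discard idea can be sketched as follows (field names and the congestion trigger are illustrative assumptions; in the proposal a tag cell's PTI value and a modified switch implement this at the ATM layer):

```python
def slice_aware_discard(cells, congested_at: int):
    """Once one cell of a video slice is dropped (here, the cell at index
    `congested_at`), the remainder of that slice is undecodable, so the
    switch discards those cells too instead of wasting bandwidth on them."""
    doomed_slices = set()
    delivered = []
    for i, cell in enumerate(cells):
        if i == congested_at:
            doomed_slices.add(cell["slice"])
            continue
        if cell["slice"] in doomed_slices:
            continue
        delivered.append(cell)
    return delivered

cells = [{"slice": 1}, {"slice": 1}, {"slice": 2}, {"slice": 1}]
out = slice_aware_discard(cells, congested_at=1)
print(len(out))  # 2: the rest of slice 1 is dropped, slice 2 survives intact
```

Compared with random cell drops scattered across many frames, the damage is confined to a single slice, which is far easier for the decoder to conceal.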
Some aspects of traffic control and performance evaluation of ATM networks
The emerging high-speed Asynchronous Transfer Mode (ATM) networks are expected to integrate, through statistical multiplexing, large numbers of traffic sources having a broad range of statistical characteristics and different Quality of Service (QOS) requirements. To achieve high utilisation of network resources while maintaining the QOS, efficient traffic management strategies have to be developed. This thesis considers the problem of traffic control for ATM networks. It studies the application of neural networks to various ATM traffic control issues such as feedback congestion control, traffic characterization, bandwidth estimation, and Call Admission Control (CAC). A novel adaptive congestion control approach based on a neural network that uses reinforcement learning is developed; it is shown that the neural controller is very effective in providing general QOS control. A Finite Impulse Response (FIR) neural network is proposed to adaptively predict the traffic arrival process by learning the relationship between past and future traffic variations. On the basis of this prediction, a feedback flow control scheme at the input access nodes of the network is presented. Simulation results demonstrate significant performance improvement over conventional control mechanisms. In addition, an accurate yet computationally efficient approach to effective bandwidth estimation for multiplexed connections is investigated. In this method, a feedforward neural network is employed to model the nonlinear relationship between the effective bandwidth, the traffic situation and a QOS measure. Applications of this approach to admission control, bandwidth allocation and dynamic routing are also discussed. A detailed investigation has indicated that CAC schemes based on the effective bandwidth approximation can be very conservative and prevent optimal use of network resources.
A modified effective bandwidth CAC approach is therefore proposed to overcome the drawbacks of conventional methods. Considering statistical multiplexing between traffic sources, we directly calculate the effective bandwidth of the aggregate traffic, which is modelled by a two-state Markov modulated Poisson process, via matching four important statistics. We use the theory of large deviations to provide a unified description of effective bandwidths for various traffic sources and the associated ATM multiplexer queueing performance approximations, illustrating their strengths and limitations. In addition, a more accurate estimation method for ATM QOS parameters based on the Bahadur-Rao theorem is proposed, which is a refinement of the original effective bandwidth approximation and can lead to higher link utilisation.
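The effective-bandwidth idea can be illustrated numerically with the simplest possible source, an i.i.d. on-off emitter (a deliberate simplification; the thesis works with a two-state Markov modulated Poisson process, whose effective bandwidth has a more involved closed form):

```python
import math

def effective_bandwidth(p: float, peak: float, s: float) -> float:
    """a(s) = (1/s) * log E[exp(s * X)] for a per-slot source that emits
    `peak` with probability p and 0 otherwise. The space parameter s
    encodes QoS strictness: s -> 0 recovers the mean rate, s -> infinity
    the peak rate, so stricter loss targets cost more bandwidth."""
    return math.log((1.0 - p) + p * math.exp(s * peak)) / s

# Mean rate is 3 Mb/s, peak 10 Mb/s; the allocation slides between them.
print(round(effective_bandwidth(0.3, 10.0, 1e-6), 3))  # close to the mean, 3.0
print(round(effective_bandwidth(0.3, 10.0, 5.0), 3))   # near the peak of 10.0
```

Allocating each source its effective bandwidth rather than its peak is exactly where the statistical multiplexing gain studied in the thesis comes from.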
Designing new network adaptation and ATM adaptation layers for interactive multimedia applications
Multimedia services, audiovisual applications composed of a combination of discrete and continuous data streams, will be a major part of the traffic flowing in the next generation of high-speed networks. The cornerstones for multimedia are Asynchronous Transfer Mode (ATM), foreseen as the technology for the future Broadband Integrated Services Digital Network (B-ISDN), and audio and video compression algorithms such as MPEG-2 that reduce applications' bandwidth requirements. The powerful desktop computers available today can seamlessly integrate network access and applications, and thus bring the new multimedia services to home and business users. Among these services, those based on multipoint capabilities are expected to play a major role. Unlike traditional data transfer applications, interactive multimedia applications have stringent simultaneous requirements in terms of loss and delay jitter due to the nature of audiovisual information. In addition, such stream-based applications deliver data at a variable rate, in particular if constant quality is required. ATM is able to integrate traffic of different natures within a single network, creating interactions of different types that translate into delay jitter and loss. Traditional protocol layers do not have the appropriate mechanisms to provide the required network quality of service (QoS) for such interactive variable bit rate (VBR) multimedia multipoint applications. This lack of functionality calls for the design of protocol layers with the appropriate functions to handle the stringent requirements of multimedia. This thesis contributes to the solution of this problem by proposing new Network Adaptation and ATM Adaptation Layers for interactive VBR multimedia multipoint services. The foundations on which these new multimedia protocol layers are built are twofold: the requirements of real-time multimedia applications and the nature of compressed audiovisual data.
On this basis, we present a set of design principles we consider mandatory for a generic Multimedia AAL (MAAL) capable of handling interactive VBR multimedia applications in point-to-point as well as multicast environments. These design principles are then used as a foundation to derive a first set of functions for the MAAL, namely: cell loss detection via sequence numbering, packet delineation, dummy cell insertion, and cell loss correction via RSE FEC techniques. The proposed functions, partly based on theoretical studies, are implemented and evaluated in a simulated environment. Performance is evaluated from the network point of view using classic metrics such as cell and packet loss. We also study the behavior of the cell loss process in order to evaluate the efficiency to be expected from the proposed cell loss correction method. We also discuss the difficulties of mapping network QoS parameters to user QoS parameters for multimedia applications, and especially for video information. In order to present a complete performance evaluation that is also meaningful to the end-user, we make use of the MPQM metric to map the obtained network performance results to a user level. We evaluate the impact that cell loss has on video and also the improvements achieved with the MAAL. All performance results are compared to an equivalent implementation based on AAL5, as specified by the current ITU-T and ATM Forum standards. An AAL has to be, by definition, generic. But to fully exploit the functionalities of the AAL layer, it is necessary to have a protocol layer that efficiently interfaces the network and the applications. This role is devoted to the Network Adaptation Layer. The network adaptation layer (NAL) we propose aims at efficiently interfacing the applications to the underlying network to achieve a reliable but low-overhead transmission of video streams.
Since this requires an a priori knowledge of the information structure to be transmitted, we propose that the NAL be codec specific. The NAL targets interactive multimedia applications. These applications share a set of common requirements independent of the encoding scheme used. This calls for the definition of a set of design principles that should be shared by any NAL, even if the implementation of the functions themselves is codec specific. On the basis of these design principles, we derive the common functions that NALs have to perform, which are mainly two: the segmentation and reassembly of data packets, and selective data protection. On this basis, we develop an MPEG-2-specific NAL. It provides perceptual syntactic information protection, the PSIP, which results in an intelligent, minimum-overhead protection of video information. The PSIP takes advantage of the hierarchical organization of compressed video data, common to the majority of compression algorithms, to perform a selective data protection based on the perceptual relevance of the syntactic information. Transmission over the combined NAL-MAAL layers shows significant improvement in terms of CLR and perceptual quality compared to equivalent transmissions over AAL5 with the same overhead. The usage of the MPQM as a performance metric, which is one of the main contributions of this thesis, leads to a very interesting observation: the experimental results show that for unexpectedly high CLRs, the average perceptual quality remains close to the original value. The economic potential of such an observation is very important. Given that the data flows are VBR, it is possible to improve network utilization by means of statistical multiplexing. It is therefore possible to reduce the cost per communication by increasing the number of connections with a minimal loss in quality.
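The perceptual, syntax-driven protection idea can be sketched as a mapping from syntactic importance to FEC strength (the element names, levels and cost figure below are illustrative assumptions in the spirit of PSIP, not the thesis's actual tables):

```python
# Redundancy levels per MPEG-2 syntax element: more perceptually critical
# information gets stronger FEC; the bulk texture data gets little or none.
PROTECTION_LEVEL = {
    "sequence_header": 3,   # loss makes the whole stream undecodable
    "picture_header": 2,    # loss kills an entire frame
    "motion_vectors": 1,    # loss degrades prediction over several frames
    "dct_coefficients": 0,  # loss is localized and easily concealed
}

def fec_overhead(elements, cost_per_level=0.02):
    """Total relative overhead of selective protection (illustrative):
    protecting only the thin, critical syntax layers stays cheap because
    the unprotected DCT coefficients dominate the bitstream."""
    return sum(PROTECTION_LEVEL.get(e, 0) * cost_per_level for e in elements)

stream = ["sequence_header", "picture_header", "motion_vectors",
          "dct_coefficients", "dct_coefficients", "dct_coefficients"]
print(round(fec_overhead(stream), 3))  # 0.12
```

This is the design choice behind "minimum-overhead" protection: redundancy is spent where a loss would be perceptually catastrophic, not uniformly across the stream.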
This conclusion could not have been derived without the combined usage of perceptual and network QoS metrics, which unveiled the economic potential of perceptually protected streams. The proposed concepts are finally tested in a real environment, where a proof-of-concept implementation of the MAAL has shown behavior close to the simulated results, thereby validating the proposed multimedia protocol layers.