35 research outputs found

    Maximizing the number of users in an interactive video-on-demand system

    Get PDF
    Video prefetching is a technique that has been proposed for the transmission of variable-bit-rate (VBR) videos over packet-switched networks. The objective of these protocols is to prefetch future frames into the customers' set-top box (STB) during periods of light load. Experimental results have shown that video prefetching is very effective and achieves much higher network utilization (and a potentially larger number of simultaneous connections) than traditional video smoothing schemes. The previously proposed prefetching algorithms, however, can only be implemented efficiently when there is a single centralized server; in a distributed environment their performance degrades considerably. In this paper we introduce a new scheme that combines smoothing with prefetching to overcome the problem of distributed prefetching. We show that our scheme performs almost as well as the centralized prefetching protocol even though it is implemented in a distributed environment. In addition, we introduce a call admission control algorithm for a fully interactive Video-on-Demand (VoD) system that builds on this concept of distributed video prefetching. Using the theory of effective bandwidths, we develop an admission control algorithm for new requests based on the user's viewing behavior and the required Quality of Service (QoS).
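
    As an illustration of the admission-control idea above, the following is a minimal sketch, not the paper's algorithm, of admitting a new VBR stream only if the aggregate effective bandwidth fits on the link. The effective bandwidth of a stream at space parameter s > 0 is (1/s) log E[exp(s X)], where X is the per-frame workload; s is tied to the QoS target (buffer size, loss probability) by large-deviations arguments. The frame rate, the value of s, and the synthetic traces below are assumptions made for the example.

        # Minimal sketch: effective-bandwidth-based call admission (illustrative only).
        import numpy as np

        def effective_bandwidth(frame_sizes_bits, frame_rate_hz, s):
            """Empirical effective bandwidth (bits/s) of one VBR stream."""
            x = np.asarray(frame_sizes_bits, dtype=float)
            # scaled cumulant generating function of the per-frame workload
            alpha_bits_per_frame = np.log(np.mean(np.exp(s * x))) / s
            return alpha_bits_per_frame * frame_rate_hz

        def admit(new_trace, active_traces, link_capacity_bps, frame_rate_hz=25.0, s=1e-5):
            """Admit the new request only if the aggregate effective bandwidth fits."""
            total = sum(effective_bandwidth(t, frame_rate_hz, s) for t in active_traces)
            total += effective_bandwidth(new_trace, frame_rate_hz, s)
            return total <= link_capacity_bps

        # Hypothetical usage with synthetic lognormal frame-size traces:
        rng = np.random.default_rng(0)
        traces = [rng.lognormal(mean=11.0, sigma=0.5, size=2000) for _ in range(10)]
        print(admit(traces[0], traces[1:], link_capacity_bps=34e6))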

    Layer-based coding, smoothing, and scheduling of low-bit-rate video for teleconferencing over tactical ATM networks

    Get PDF
    This work investigates issues related to the distribution of low-bit-rate video within the context of a teleconferencing application deployed over a tactical ATM network. The main objective is to develop mechanisms that support transmission of low-bit-rate video streams as a series of scalable layers that progressively improve quality. The hierarchical nature of the layered video stream is actively exploited along the transmission path from the sender to the recipients to facilitate transmission. A new layered coder design tailored to video teleconferencing in the tactical environment is proposed. Macroblocks selected due to scene motion are layered via subband decomposition using the fast Haar transform. A generalized layering scheme groups the subbands to form an arbitrary number of layers. Because a layering scheme suitable for low-motion video is unsuitable for static slides, the coder adapts the layering scheme to the video content. A suboptimal rate control mechanism is investigated that reduces the κ-dimensional rate-distortion problem arising from the use of a separate quantizer per layer to a 1-dimensional problem, by building a single rate-distortion curve for the coder from a suboptimal set of κ-dimensional quantizer vectors. Rate control is thus simplified to a table lookup in a codebook containing the suboptimal quantizer vectors. The rate controller is well suited to real-time video and limits fluctuations in the bit stream with no corresponding visible fluctuations in perceptual quality. A traffic smoother applied prior to network entry is developed to increase queuing and scheduler efficiency. Three levels of smoothing are studied: frame, layer, and cell interarrival. Frame-level smoothing occurs via rate control at the application. Interleaving and cell-interarrival smoothing are accomplished using a leaky bucket mechanism inserted prior to or within the adaptation layer. (Full text: http://www.archive.org/details/layerbasedcoding00park. Lieutenant Commander, United States Navy. Approved for public release; distribution is unlimited.)
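
    The subband step above can be pictured with a one-level fast Haar transform. The sketch below is my own illustration, assuming 16x16 macroblocks: it splits a motion-selected macroblock into four subbands (LL, LH, HL, HH), which a layering scheme can then group into an arbitrary number of layers, e.g. base layer = LL and enhancement layers = the detail subbands.

        # Minimal sketch: one level of the separable fast Haar transform on a macroblock.
        import numpy as np

        def haar_2d_one_level(block):
            """Split a 2-D block into LL, LH, HL, HH subbands (one Haar level)."""
            b = np.asarray(block, dtype=float)
            # rows: pairwise averages (low-pass) and differences (high-pass)
            lo_r = (b[:, 0::2] + b[:, 1::2]) / 2.0
            hi_r = (b[:, 0::2] - b[:, 1::2]) / 2.0
            # columns: repeat the split on both row outputs
            return {
                "LL": (lo_r[0::2, :] + lo_r[1::2, :]) / 2.0,
                "LH": (lo_r[0::2, :] - lo_r[1::2, :]) / 2.0,
                "HL": (hi_r[0::2, :] + hi_r[1::2, :]) / 2.0,
                "HH": (hi_r[0::2, :] - hi_r[1::2, :]) / 2.0,
            }

        # Hypothetical usage: decompose a 16x16 macroblock and group subbands into layers.
        macroblock = np.random.default_rng(1).integers(0, 256, size=(16, 16))
        subbands = haar_2d_one_level(macroblock)
        layers = [["LL"], ["LH", "HL"], ["HH"]]   # an arbitrary 3-layer grouping
        print({name: sb.shape for name, sb in subbands.items()})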

    Video traffic modeling and delivery

    Get PDF
    Video is becoming a major component of network traffic, and there has thus been great interest in modeling video traffic. It is known that video traffic possesses short-range dependence (SRD) and long-range dependence (LRD) properties, which can drastically affect network performance. By decomposing a video sequence into three parts according to its motion activity, a Markov-modulated self-similar process model is first proposed to capture the autocorrelation function (ACF) characteristics of MPEG video traffic. Furthermore, a generalized Beta distribution is proposed to model the probability density functions (PDFs) of MPEG video traffic. It is observed that the ACF of MPEG video traffic fluctuates around three envelopes, reflecting the fact that different coding methods reduce the data dependency by different amounts. This observation has led to a more accurate model, the structurally modulated self-similar process model, which captures the ACF of the traffic, both SRD and LRD, by exploiting the MPEG structure. This model is subsequently simplified by directly modulating three self-similar processes, resulting in a much simpler model with the same accuracy as the structurally modulated self-similar process model. To validate the proposed models for video transmission, the cell loss ratios (CLRs) of a server with a limited buffer size driven by the empirical trace are compared to those driven by the proposed models. The differences are within one order of magnitude, which is hardly achievable by other models, even in the case of JPEG video traffic. In the second part of this dissertation, two dynamic bandwidth allocation algorithms are proposed, for pre-recorded and real-time video delivery respectively. One is based on scene-change identification, and the other on frame differences. The proposed algorithms can increase bandwidth utilization by a factor of two to five compared to constant bit rate (CBR) service using peak-rate assignment.
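
    The frame-difference-based bandwidth allocation mentioned last is, in spirit, a renegotiation rule. The sketch below is my own loose illustration, not the dissertation's algorithm, and the margin, trigger and window parameters are assumptions: the reserved rate is renegotiated whenever the smoothed frame rate drifts outside a tolerance band, instead of reserving the peak rate for the whole connection.

        # Minimal sketch: dynamic bandwidth re-allocation driven by frame-size changes.
        import numpy as np

        def dynamic_allocations(frame_sizes_bits, frame_rate_hz=25.0,
                                margin=1.2, trigger=0.3, window=25):
            """Return a per-frame bandwidth allocation (bits/s).

            margin  - headroom on top of the recent average rate
            trigger - relative drift of the smoothed rate that forces renegotiation
            window  - number of recent frames used for the running average
            """
            frames = np.asarray(frame_sizes_bits, dtype=float)
            allocations = np.empty_like(frames)
            current = frames[0] * frame_rate_hz * margin
            for i in range(frames.size):
                start = max(0, i - window + 1)
                smoothed = frames[start:i + 1].mean() * frame_rate_hz
                # renegotiate when the smoothed rate leaves the tolerance band
                if abs(smoothed * margin - current) > trigger * current:
                    current = smoothed * margin
                allocations[i] = current
            return allocations

        # Hypothetical usage with a synthetic two-scene trace:
        rng = np.random.default_rng(2)
        trace = np.concatenate([rng.normal(4e4, 5e3, 500), rng.normal(1.2e5, 1e4, 500)])
        alloc = dynamic_allocations(trace)
        print(f"mean allocation {alloc.mean():.3e} bits/s vs peak rate {trace.max() * 25:.3e} bits/s")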

    Continuous-Time Collaborative Prefetching of Continuous Media

    Full text link

    Some aspects of traffic control and performance evaluation of ATM networks

    Get PDF
    The emerging high-speed Asynchronous Transfer Mode (ATM) networks are expected to integrate, through statistical multiplexing, large numbers of traffic sources having a broad range of statistical characteristics and different Quality of Service (QOS) requirements. To achieve high utilisation of network resources while maintaining the QOS, efficient traffic management strategies have to be developed. This thesis considers the problem of traffic control for ATM networks. It studies the application of neural networks to various ATM traffic control issues such as feedback congestion control, traffic characterization, bandwidth estimation, and Call Admission Control (CAC). A novel adaptive congestion control approach based on a neural network that uses reinforcement learning is developed, and the neural controller is shown to be very effective in providing general QOS control. A Finite Impulse Response (FIR) neural network is proposed to adaptively predict the traffic arrival process by learning the relationship between past and future traffic variations. On the basis of this prediction, a feedback flow control scheme at the input access nodes of the network is presented. Simulation results demonstrate significant performance improvement over conventional control mechanisms. In addition, an accurate yet computationally efficient approach to effective bandwidth estimation for multiplexed connections is investigated. In this method, a feed-forward neural network is employed to model the nonlinear relationship between the effective bandwidth, the traffic characteristics, and a QOS measure. Applications of this approach to admission control, bandwidth allocation and dynamic routing are also discussed. A detailed investigation indicates that CAC schemes based on the effective bandwidth approximation can be very conservative and prevent optimal use of network resources. A modified effective bandwidth CAC approach is therefore proposed to overcome this drawback of conventional methods. Taking into account statistical multiplexing between traffic sources, we directly calculate the effective bandwidth of the aggregate traffic, which is modelled by a two-state Markov-modulated Poisson process obtained by matching four important statistics. We use the theory of large deviations to provide a unified description of effective bandwidths for various traffic sources and of the associated ATM multiplexer queueing performance approximations, illustrating their strengths and limitations. In addition, a more accurate estimation method for ATM QOS parameters based on the Bahadur-Rao theorem is proposed; it refines the original effective bandwidth approximation and can lead to higher link utilisation.
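
    For the two-state MMPP aggregate model mentioned above, the large-deviations effective bandwidth has a standard matrix form: it is the Perron-Frobenius eigenvalue of Q + Λ(e^s − 1) divided by s, where Q is the generator and Λ = diag(λ1, λ2) holds the per-state Poisson rates. The sketch below computes it numerically; the example rates are assumptions, not values from the thesis.

        # Minimal sketch: effective bandwidth of a two-state MMPP via its dominant eigenvalue.
        import numpy as np

        def mmpp2_effective_bandwidth(r12, r21, lam1, lam2, s):
            """Effective bandwidth (arrivals per unit time) of a 2-state MMPP at parameter s."""
            Q = np.array([[-r12, r12],
                          [r21, -r21]], dtype=float)      # state transition rates
            L = np.diag([lam1, lam2]).astype(float)        # per-state Poisson arrival rates
            M = Q + L * (np.exp(s) - 1.0)
            return np.max(np.linalg.eigvals(M).real) / s   # Perron-Frobenius root / s

        # Hypothetical usage: the effective bandwidth exceeds the mean rate and grows with s
        # (stricter QoS), approaching that of a pure Poisson source at the higher rate.
        r12, r21, lam1, lam2 = 0.5, 1.5, 10.0, 100.0
        mean_rate = (r21 * lam1 + r12 * lam2) / (r12 + r21)
        for s in (0.01, 0.1, 1.0):
            print(s, mean_rate, mmpp2_effective_bandwidth(r12, r21, lam1, lam2, s))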

    Resource dimensioning in a mixed traffic environment

    Get PDF
    An important goal of modern data networks is to support multiple applications over a single network infrastructure. The combination of data, voice, video and conference traffic, each requiring a unique Quality of Service (QoS), makes resource dimensioning a very challenging task. Guaranteeing QoS by mere over-provisioning of bandwidth is not viable in the long run, as network resources are expensive. The aim of proper resource dimensioning is to provide the required QoS while making optimal use of the allocated bandwidth. The dimensioning parameters used by service providers today are based on best-practice recommendations and are not necessarily optimal. This dissertation focuses on resource dimensioning for the DiffServ network architecture. Four predefined traffic classes, i.e. Real Time (RT), Interactive Business (IB), Bulk Business (BB) and General Data (GD), needed to be dimensioned in terms of bandwidth allocation and traffic regulation. To perform this task, a study was made of the DiffServ mechanism and the QoS requirements of each class. Traffic generators were required for each class to perform simulations. Our investigations show that the dominating Transport Layer protocol for the RT class is UDP, while TCP is mostly used by the other classes. This led to a separate analysis of, and requirement for, traffic models for UDP and TCP traffic. Analysis of real-world data shows that modern network traffic is characterized by long-range dependency, self-similarity and a very bursty nature. Our evaluation of various traffic models indicates that the Multi-fractal Wavelet Model (MWM) is best for TCP due to its ability to capture long-range dependency and self-similarity, while the Markov Modulated Poisson Process (MMPP) is able to model the occasional long OFF-periods and burstiness present in UDP traffic. Hence, these two models were used in the simulations. A test bed was implemented to evaluate the performance of the four traffic classes defined in DiffServ. Traffic was sent through the test bed while delay and loss were measured. For single-class simulations, dimensioning values were obtained that conform to the QoS specifications. Multi-class simulations investigated the effects of statistical multiplexing on the obtained values. Simulation results for various numerical provisioning factors (PF) were obtained; these factors are used to determine the link data rate as a function of the required average bandwidth and the QoS. The use of class-based differentiation for QoS showed that strict delay and loss bounds can be guaranteed, even at very high (up to 90%) bandwidth utilization. Simulation results showed small deviations from best-practice PF values: a value of 4 is currently used for both the RT and IB classes and 2 for the BB class, whereas this dissertation indicates that 3.89 for RT, 3.81 for IB and 2.48 for BB achieve the prescribed QoS more accurately. It was concluded that either the bandwidth distribution among classes or the quality guarantees for the BB class should be adjusted, since the RT and IB classes over-performed while BB under-performed. The results contribute to the process of resource dimensioning by adding value to dimensioning parameters through simulation rather than mere intuition or educated guessing. (Dissertation (MEng (Electronic Engineering)), Department of Electrical, Electronic and Computer Engineering, University of Pretoria, 2007.)
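
    Applying the provisioning factors reported above is a short calculation: the link rate reserved for a class is PF times the class's average offered rate. The sketch below uses the PF values quoted in the abstract; the traffic mix and the PF of 1.0 assumed for the best-effort GD class are hypothetical.

        # Minimal sketch: per-class link dimensioning from provisioning factors (PF).
        PF = {"RT": 3.89, "IB": 3.81, "BB": 2.48, "GD": 1.0}  # GD value is an assumption

        def required_link_rate(avg_rates_mbps):
            """Dimension each class and the total link from average offered rates (Mbit/s)."""
            per_class = {cls: PF[cls] * rate for cls, rate in avg_rates_mbps.items()}
            return per_class, sum(per_class.values())

        # Hypothetical traffic mix (average rates in Mbit/s):
        per_class, total = required_link_rate({"RT": 2.0, "IB": 5.0, "BB": 8.0, "GD": 10.0})
        print(per_class, f"total link rate ~ {total:.1f} Mbit/s")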

    A taxonomy of the parameters used by decision methods for adaptive video transmission

    No full text
    Nowadays, video data transfers account for much of the Internet traffic, and a huge number of users use this service on a daily basis. Although videos are usually stored at several bitrates on servers, the sending rate does not take into account network conditions, which change dynamically during transmission. As a result, the best bitrate is not always used, causing sub-optimal video quality when the chosen bitrate is below the available bandwidth and packet loss when it is above it. One solution is to deploy adaptive video, which adapts video parameters such as bitrate or frame resolution to the network conditions. Many ideas have been proposed in the literature, yet no paper provides a global view of adaptation methods in order to classify them. This article fills this gap by discussing several adaptation methods through a taxonomy of the parameters used for adaptation. We show that, in the research community, the sender generally takes the adaptation decision, whereas in the solutions backed by major companies the receiver takes this decision. We also suggest, without evaluation, a valuable and realistic adaptation method that gathers the advantages of the presented methods.
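
    A minimal sketch, under my own assumptions about the bitrate ladder and the safety margin, of the receiver-driven adaptation the article attributes to industry solutions: after each downloaded segment the client measures throughput and picks the highest stored bitrate that fits under a fraction of that estimate.

        # Minimal sketch: throughput-based bitrate selection at the receiver.
        def select_bitrate(available_bitrates_kbps, measured_throughput_kbps, safety=0.8):
            """Highest stored bitrate not exceeding safety * measured throughput."""
            candidates = [b for b in sorted(available_bitrates_kbps)
                          if b <= safety * measured_throughput_kbps]
            return candidates[-1] if candidates else min(available_bitrates_kbps)

        # Hypothetical ladder of stored bitrates and a fluctuating throughput estimate:
        ladder = [300, 750, 1500, 3000, 6000]
        for throughput in (500, 2000, 8000):
            print(throughput, "->", select_bitrate(ladder, throughput), "kbit/s")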

    Statistical characterisation and stochastic modelling of 1-layer variable bit rate H.261 video codec traffic

    Get PDF
    The Integrated Services Digital Network (ISDN) is being re-designed to provide the flexibility needed to ensure efficient network utilisation in the provision of broadband services. The main broadband services envisaged for provision on the Broadband ISDN (B-ISDN) are videophone, videoconferencing, television and high-definition TV. The B-ISDN will be a packet-switched network in which the packets (cells) are transferred using the Asynchronous Transfer Mode (ATM) concept. Unlike that of voice and data services, the impact video services will have on the B-ISDN is unknown, and hence loss of information is difficult to predict. Present videophone terminals are based on the CCITT H.261 video coding standard, so the picture quality is variable because the codec output is transmitted at a constant rate. To maintain a constant-quality picture, the codec output must be transmitted at a variable rate; alternatively, for constant-rate video codecs, extra information must be made available to achieve constant picture quality. This latter technique is 2-layer video coding, where the first layer transmits at a constant rate and the second layer at a variable rate. The ATM B-ISDN promises constant-picture-quality video services; to achieve this aim, the impact variable-rate video sources will have on the network must be determined by network simulation, and thus variable-rate video source models must be derived. To statistically characterise and stochastically model 1-layer VBR (Variable Bit Rate) H.261 video codec traffic, a videophone sequence is analysed here using two alternative strategies: Talk-Listen and Motion Level. This analysis also found that 2-layer H.261 video codec traffic can be stochastically modelled via a 1-layer VBR H.261 video codec traffic model. Numerous hierarchical stochastic models capable of capturing the statistical characteristics of long video sequences, in particular the short-term and long-term autocorrelations, are presented. One such model was simulated and the resulting traffic analysed to confirm the advantage hierarchical stochastic models have over non-hierarchical stochastic models in modelling video source traffic.
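
    The statistic these models are fitted to is the empirical autocorrelation function (ACF) of the frame-size trace, which exposes both the short-term and the slowly decaying long-term dependence. The sketch below computes it on a synthetic two-time-scale trace of my own, not the thesis's data.

        # Minimal sketch: empirical ACF of a (synthetic) VBR frame-size trace.
        import numpy as np

        def acf(frame_sizes, max_lag):
            """Empirical autocorrelation of a frame-size sequence for lags 1..max_lag."""
            x = np.asarray(frame_sizes, dtype=float)
            x = x - x.mean()
            denom = np.dot(x, x)
            return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, max_lag + 1)])

        # Hypothetical hierarchical trace: a slowly switching scene-level mean plus
        # frame-level AR(1) fluctuations, mimicking the two time scales.
        rng = np.random.default_rng(3)
        scene_means = np.repeat(rng.choice([2e4, 6e4, 1.2e5], size=40), 250)
        ar = np.zeros(scene_means.size)
        for i in range(1, ar.size):
            ar[i] = 0.9 * ar[i - 1] + rng.normal(0, 3e3)
        trace = scene_means + ar
        print(acf(trace, max_lag=5).round(3))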