
    Joint in-network video rate adaptation and measurement-based admission control: algorithm design and evaluation

    The important new revenue opportunities that multimedia services offer to network and service providers come with significant management challenges. For providers, it is important to control the video quality that is offered to and perceived by the user, typically known as the quality of experience (QoE). Both admission control and scalable video coding techniques can control the QoE by blocking connections or adapting the video rate, but they influence each other's performance. In this article, we propose an in-network video rate adaptation mechanism that enables a provider to define a policy on how the video rate adaptation should be performed to maximize the provider's objective (e.g., a maximization of revenue or QoE). We discuss the need for a close interaction of the video rate adaptation algorithm with a measurement-based admission control system, allowing both algorithms to be effectively orchestrated and a timely switch to be made from video rate adaptation to the blocking of connections. We propose two different rate adaptation decision algorithms that calculate which videos need to be adapted: an optimal one in terms of the provider's policy and a heuristic based on the utility of each connection. Through an extensive performance evaluation, we show the impact of both algorithms on the rate adaptation, network utilisation and the stability of the video rate adaptation. We show that both algorithms outperform other configurations by at least 10%. Moreover, we show that the proposed heuristic is about 500 times faster than the optimal algorithm while incurring a performance drop of only approximately 2%, given the investigated video delivery scenario.
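
    To make the heuristic idea above concrete, the sketch below shows a generic utility-driven downgrade loop: each connection exposes a few (rate, utility) operating points, and the connection whose downgrade costs the least utility per megabit saved is adapted first, until the aggregate rate fits the link; when nothing is left to adapt, the decision is handed over to admission control. The connection names, utility numbers and the greedy rule are illustrative assumptions, not the authors' published algorithms.

```python
# A minimal, hypothetical sketch of a utility-driven rate adaptation heuristic
# in the spirit of the abstract above. Connection names, utility values and the
# greedy "cheapest utility loss per saved Mbps" rule are illustrative
# assumptions, not the paper's actual algorithm.

from dataclasses import dataclass


@dataclass
class Connection:
    name: str
    layers: list          # available (rate_mbps, utility) points, highest quality first
    level: int = 0        # index of currently selected operating point

    @property
    def rate(self):
        return self.layers[self.level][0]


def adapt(connections, capacity_mbps):
    """Greedily downgrade connections until the aggregate rate fits the link.

    Returns True if a feasible allocation was found, False if even the lowest
    layers exceed capacity (i.e. admission control should start blocking)."""
    while sum(c.rate for c in connections) > capacity_mbps:
        candidates = [c for c in connections if c.level < len(c.layers) - 1]
        if not candidates:
            return False  # nothing left to adapt -> hand over to admission control

        def cost(c):
            # utility lost per Mbps saved when stepping down one layer
            r0, u0 = c.layers[c.level]
            r1, u1 = c.layers[c.level + 1]
            return (u0 - u1) / max(r0 - r1, 1e-9)

        best = min(candidates, key=cost)
        best.level += 1
    return True


if __name__ == "__main__":
    conns = [
        Connection("video-A", [(6.0, 1.0), (3.0, 0.8), (1.5, 0.5)]),
        Connection("video-B", [(4.0, 1.0), (2.0, 0.7)]),
        Connection("video-C", [(2.0, 1.0), (1.0, 0.6)]),
    ]
    ok = adapt(conns, capacity_mbps=8.0)
    for c in conns:
        print(c.name, "->", c.rate, "Mbps")
    print("feasible:", ok)
```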

    A Hierarchical Scheduling Model for Dynamic Soft-Realtime System

    We present a new hierarchical approximation and scheduling approach for applications and tasks with multiple modes on a single processor. Our model allows for a temporal and spatial distribution of the feasibility problem for a variable set of tasks with non-deterministic and fluctuating costs at runtime. In case of overloads, an optimal degradation strategy selects one of several application modes or even temporarily deactivates applications. Hence, transient and permanent bottlenecks can be overcome with an optimal system quality, which is decided dynamically. This paper gives the first comprehensive and complete overview of all aspects of our research, including a novel CBS concept to confine entire applications, an evaluation of our system using a video-on-demand application, an outline for adding further resource dimensions, and aspects of our prototype implementation based on RTSJ.
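
    As a rough illustration of the degradation idea, the sketch below assumes hypothetical applications with (CPU cost, quality) mode tables and exhaustively picks one mode per application, including an "off" mode, so that the total cost fits the currently available budget while total quality is maximal. The paper's actual CBS-based hierarchical scheduler is not reproduced here.

```python
# A hypothetical, simplified sketch of the mode-degradation idea described
# above: each application offers several modes (cpu_share, quality) plus an
# "off" mode, and under overload we pick the combination with maximal total
# quality whose total cost fits the processor budget. Names and the brute-force
# search are illustrative assumptions, not the paper's scheduler.

from itertools import product

# mode lists per application: (cpu_share, quality); (0.0, 0.0) models "deactivated"
APPS = {
    "video_decoder": [(0.50, 1.0), (0.30, 0.7), (0.0, 0.0)],
    "audio_decoder": [(0.15, 1.0), (0.10, 0.8), (0.0, 0.0)],
    "gui":           [(0.20, 1.0), (0.05, 0.4), (0.0, 0.0)],
}


def degrade(apps, budget):
    """Exhaustively pick one mode per application maximizing total quality
    subject to the CPU budget (feasible for a handful of applications)."""
    names = list(apps)
    best, best_quality = None, -1.0
    for combo in product(*(apps[n] for n in names)):
        cost = sum(c for c, _ in combo)
        quality = sum(q for _, q in combo)
        if cost <= budget and quality > best_quality:
            best, best_quality = dict(zip(names, combo)), quality
    return best


if __name__ == "__main__":
    # transient overload: only 70% of the CPU is available to these applications
    selection = degrade(APPS, budget=0.70)
    for name, (cost, quality) in selection.items():
        state = "off" if cost == 0.0 else f"mode cost={cost:.2f}"
        print(f"{name}: {state}, quality={quality}")
```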

    Distributed multimedia systems

    A distributed multimedia system (DMS) is an integrated communication, computing, and information system that enables the processing, management, delivery, and presentation of synchronized multimedia information with quality-of-service guarantees. Multimedia information may include discrete media data, such as text, data, and images, and continuous media data, such as video and audio. Such a system enhances human communications by exploiting both visual and aural senses and provides the ultimate flexibility in work and entertainment, allowing one to collaborate with remote participants, view movies on demand, access on-line digital libraries from the desktop, and so forth. In this paper, we present a technical survey of a DMS. We give an overview of distributed multimedia systems, examine the fundamental concept of digital media, identify the applications, and survey the important enabling technologies.

    ATOM : a distributed system for video retrieval via ATM networks

    The convergence of high speed networks, powerful personal computer processors and improved storage technology has led to the development of video-on-demand services to the desktop that provide interactive controls and deliver Client-selected video information on a Client-specified schedule. This dissertation presents the design of a video-on-demand system for Asynchronous Transfer Mode (ATM) networks, incorporating an optimised topology for the nodes in the system and an architecture for Quality of Service (QoS). The system is called ATOM, which stands for Asynchronous Transfer Mode Objects.

    Real-time video playback over a network consumes large bandwidth and requires strict bounds on delay and error in order to satisfy the visual and auditory needs of the user. Streamed video is a fundamentally different type of traffic from conventional IP (Internet Protocol) data, since files are viewed in real time rather than downloaded and then viewed. This streaming data must arrive at the Client decoder when needed or it loses its interactive value. Characteristics of multimedia data are investigated, including the use of compression to reduce the excessive bit rates and storage requirements of digital video, and the suitability of MPEG-1 for video-on-demand is presented.

    Having considered the bandwidth, delay and error requirements of real-time video, the next step in designing the system is to evaluate current models of video-on-demand. The distributed nature of four such models is considered, focusing on how Clients discover Servers and locate videos. This evaluation eliminates a centralized approach in which Servers have no logical or physical connection to any other Servers in the network, and also introduces the concept of a selection strategy to find alternative Servers when Servers are fully loaded. During this investigation, it becomes clear that another entity (called a Broker) could provide a central repository for Server information: Clients have logical access to all videos on every Server simply by connecting to a Broker.

    The ATOM Model for distributed video-on-demand is then presented by way of a diagram of the topology showing the interconnection of Servers, Brokers and Clients; a description of each node in the system; a list of the connectivity rules; a description of the protocol; a description of the Server selection strategy; and the protocol if a Broker fails. A sample network is provided with an example of video selection, and design issues are raised and solved, including how nodes discover each other, a justification for using a mesh topology for the Broker connections, how Connection Admission Control (CAC) is achieved, how customer billing is achieved and how information security is maintained. A calculation of the number of Servers and Brokers required to service a particular number of Clients is presented.

    The advantages of ATOM are described. The underlying distributed connectivity is abstracted away from the Client. Redundant Server/Broker connections are eliminated and the total number of connections in the system is minimized by the rule stating that Clients and Servers may only connect to one Broker at a time. This reduces the total number of Switched Virtual Circuits (SVCs), which are a performance hindrance in ATM. ATOM can be easily scaled by adding more Servers, which increases the total system capacity in terms of storage and bandwidth.

    In order to transport video satisfactorily, a guaranteed end-to-end Quality of Service architecture must be in place. The design methodology for such an architecture is investigated, starting with a review of current QoS architectures in the literature which highlights important definitions including a flow, a service contract and flow management. A flow is a single media source which traverses resource modules between Server and Client; the concept of a flow is important because it enables the identification of the areas requiring consideration when designing a QoS architecture. It is shown that ATOM adheres to the principles motivating the design of a QoS architecture, namely the Integration, Separation and Transparency principles. The issue of mapping human requirements to network QoS parameters is investigated and the action of a QoS framework is introduced, including several possible causes of QoS degradation.

    The design of the ATOM Quality of Service Architecture (AQOSA) is then presented. AQOSA consists of 11 modules which interact to provide end-to-end QoS guarantees for each stream. Several important results arise from the design. It is shown that an intelligent choice of stored videos in respect of peak bandwidth can improve overall system capacity. The concept of disk striping over a disk array is introduced and a Data Placement Strategy is designed which eliminates disk hot spots (i.e. overuse of some disks whilst others lie idle). A novel parameter (the B-P Ratio) is presented which can be used by the Server to predict future bursts from each video stream. The use of Traffic Shaping to decrease the load on the network from each stream is presented. Having investigated four rewind and fast-forward algorithms from the literature, a rewind and fast-forward algorithm is presented; the method produces a significant decrease in bandwidth, and the resultant stream is nearly constant, reducing the chance that the stream will add to network congestion.

    The C++ classes of the Server, Broker and Client are described, emphasizing the interaction between classes. The use of ATOM in the Virtual Private Network and the multimedia teaching laboratory is considered. Conclusions and recommendations for future work are presented. It is concluded that digital video applications require high-bandwidth, low-error, low-delay networks; that a video-on-demand system to support large Client volumes must be distributed, not centralized; that control and operation (transport) must be separated; that the number of ATM Switched Virtual Circuits (SVCs) must be minimized; that the increased connections caused by the Broker mesh are justified by the distributed information gain; and that a Quality of Service solution must address end-to-end issues. It is recommended that a web front-end for Brokers be developed; that the system be tested in a wide-area ATM network; that the Broker protocol be tested by forcing failure of a Broker; and that a proprietary file format for disk striping be implemented.
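
    The Broker-mediated Server selection described above can be illustrated with a small sketch. It assumes hypothetical class and method names (Broker, Server, select_server) and a simple least-loaded rule; it is not the dissertation's C++ design, only a toy model of a Broker that holds Server information and falls back to alternative Servers when one is fully loaded.

```python
# A minimal, hypothetical sketch of Broker-mediated Server selection in the
# spirit of the ATOM abstract. Class and method names are illustrative
# assumptions, not the dissertation's actual classes or selection strategy.

from dataclasses import dataclass, field


@dataclass
class Server:
    name: str
    capacity: int                      # max concurrent streams
    videos: set = field(default_factory=set)
    active: int = 0

    def has_slot(self):
        return self.active < self.capacity


class Broker:
    """Central repository of Server information; Clients ask it for a Server."""

    def __init__(self):
        self.servers = []

    def register(self, server):
        self.servers.append(server)

    def select_server(self, video):
        """Return the least-loaded Server that stores `video` and has a free
        stream slot, or None if every candidate is fully loaded."""
        candidates = [s for s in self.servers if video in s.videos and s.has_slot()]
        if not candidates:
            return None
        chosen = min(candidates, key=lambda s: s.active / s.capacity)
        chosen.active += 1
        return chosen


if __name__ == "__main__":
    broker = Broker()
    broker.register(Server("server-1", capacity=2, videos={"movie-A", "movie-B"}))
    broker.register(Server("server-2", capacity=3, videos={"movie-B"}))
    for _ in range(4):
        s = broker.select_server("movie-B")
        print("movie-B ->", s.name if s else "blocked")
```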

    Quality of Service based Retrieval Strategy for Distributed Video on Demand on Multiple Servers

    The recent advances and development of inexpensive computers and high-speed networking technology have enabled the Video on Demand (VoD) application to connect to shared-computing servers, replacing the traditional computing environments where each application had its own dedicated computing hardware. The VoD application enables the viewer to select his favorite video file from a list of video files and watch its reproduction at will. Early video-on-demand applications were based on a single video server from which all video streams were initiated; as the number of clients interested in VoD services grew, the focus shifted to Distributed VoD (DVoD) architectures, where the context of distribution may be distributed system components, distributed streaming servers, distributed media content, and so on.

    The VoD server must handle several issues in order to present a successful service. It has to receive the clients' requests and analyze them, calculate the necessary resources for each request, and decide whether a request can be admitted or not. Once the request is admitted, the server must schedule the request, retrieve the required video data and send it in a timely manner so that the client does not suffer data starvation in his buffer during the video reproduction. The overall objective of a VoD service provider is therefore to provide a better Quality of Service (QoS); issues related to QoS include efficient use of bandwidth and better throughput.

    One of the important issues is to retrieve the video data from the servers in minimum time and to start the playback of the video at the client side with a minimum waiting time. The overall time elapsed in retrieving the video data and starting the playback is known as the access time. The thesis presents an efficient retrieval strategy for a distributed VoD environment whose basic objective is to minimize the access time while maintaining presentation continuity at the client side. We neglect some of the network parameters which may affect the access time by assuming a high-speed network between the servers and the client. The performance of the strategy has been analyzed and compared with the referred PAR (Play After Retrieval) strategy. Further, the strategy is also analyzed under an availability condition, which is a more realistic approach.
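
    A back-of-the-envelope model helps show what the access time depends on. The sketch below, under the simplifying assumption of constant download and playback rates, compares the referenced Play After Retrieval (PAR) baseline with a pipelined start that begins playback after the minimal prefetch delay ruling out buffer underflow; the function names and numbers are illustrative, not the thesis's strategy.

```python
# A hedged, simplified access-time comparison: PAR waits for the whole video,
# while a pipelined strategy starts playback as soon as buffer underflow can be
# ruled out. The constant-rate model and names are illustrative assumptions.

def par_access_time(size_mb, download_mbps):
    """PAR: wait until the whole video has been retrieved before playing."""
    return (size_mb * 8) / download_mbps


def pipelined_access_time(size_mb, download_mbps, playback_mbps):
    """Start playback after the minimal prefetch delay that guarantees the
    download finishes no later than playback needs the last byte:
    delay = max(0, S/d - S/p) for size S, download rate d, playback rate p."""
    size_mbit = size_mb * 8
    return max(0.0, size_mbit / download_mbps - size_mbit / playback_mbps)


if __name__ == "__main__":
    # example: a 700 MB video over a 3 Mbps path, played back at 4 Mbps
    size_mb, down, play = 700.0, 3.0, 4.0
    print(f"PAR access time:       {par_access_time(size_mb, down):7.1f} s")
    print(f"pipelined access time: {pipelined_access_time(size_mb, down, play):7.1f} s")
```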

    A Robust Wireless Mesh Access Environment For Mobile Video Users

    The rapid advances in networking technology have enabled large-scale deployments of online video streaming services in today's Internet. In particular, wireless Internet access technology has been one of the most transforming and empowering technologies in recent years. We have witnessed a dramatic increase in the number of mobile users who access online video services through wireless access networks, such as wireless mesh networks and 3G cellular networks. Unlike in a wired environment, using a dedicated stream for each video service request is very expensive for wireless networks. This simple strategy also has limited scalability when popular content is demanded by a large number of users. It is desirable to have a robust wireless access environment that can sustain a sudden spurt of interest in certain videos due to, say, a current event. Moreover, due to the mobility of the video users, smooth streaming performance during handoff is a key requirement for the robustness of wireless access networks for mobile video users.

    In this dissertation, the author focuses on the robustness of the wireless mesh access (WMA) environment for mobile video users. Novel video sharing techniques are proposed to reduce the burden of video streaming in different WMA environments. The author proposes a cross-layer framework for scalable Video-on-Demand (VoD) service in multi-hop WiMax mesh networks, and also studies the optimization problems for video multicast in general wireless mesh networks. The WMA environment is modeled as a connected graph with a video source at one of the nodes and video requests randomly generated from the other nodes. The optimal video multicast problem in such an environment is formulated as two sub-problems, and the proposed solutions to the sub-problems are justified using simulation and numerical study.

    In the case of online video streaming, the online video server does not cooperate with the access network, so centralized data sharing techniques fail since they assume cooperation between the video server and the network. To tackle this problem, a novel distributed video sharing technique called Dynamic Stream Merging (DSM) is proposed. DSM improves the robustness of the WMA environment without cooperation from the online video server, and optimizes the per-link sharing performance with small time complexity and message complexity. The performance of DSM has been studied using simulations in Network Simulator 2 (NS2) as well as real experiments in a wireless mesh testbed, with the Mobile YouTube website (http://m.youtube.com) used as the online video website in the experiment. Last but not least, a cross-layer scheme is proposed to avoid degradation of the video quality during handoff in the WMA environment. Novel video-quality-related triggers and the routing metrics at the mesh routers are utilized in the handoff decision-making process, and a redirection scheme is also proposed to eliminate packet loss caused by handoff.
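
    To give a feel for per-link stream sharing, the following toy sketch merges requests for the same video that arrive within a short window of an ongoing stream instead of opening a new one. The window-based rule and the class name are assumptions made for illustration; the actual DSM algorithm described in the dissertation is more involved than this.

```python
# A hypothetical, much-simplified sketch of per-link stream sharing in the
# spirit of Dynamic Stream Merging (DSM): requests for the same video arriving
# within a short window of an ongoing stream are merged onto it instead of
# opening a new stream. The rule and names are illustrative assumptions only.

class LinkStreamTable:
    def __init__(self, merge_window_s=10.0):
        self.merge_window = merge_window_s
        self.streams = {}          # video_id -> list of stream start times

    def request(self, video_id, t):
        """Return 'merged' if the request can share a recent stream on this
        link, otherwise start a new stream and return 'new'."""
        for start in self.streams.get(video_id, []):
            if 0.0 <= t - start <= self.merge_window:
                return "merged"
        self.streams.setdefault(video_id, []).append(t)
        return "new"


if __name__ == "__main__":
    link = LinkStreamTable(merge_window_s=10.0)
    arrivals = [("news-clip", 0.0), ("news-clip", 4.0), ("news-clip", 9.0),
                ("news-clip", 25.0), ("movie-B", 5.0)]
    for vid, t in arrivals:
        print(f"t={t:5.1f}s  {vid:10s} -> {link.request(vid, t)}")
```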

    Quality-driven management of video streaming services in segment-based cache networks

    Shared content addressing protocol (SCAP): optimizing multimedia content distribution at the transport layer

    In recent years, the networking community has put a significant research effort into identifying new ways to distribute content to multiple users in a better-than-unicast manner. Scalable delivery is more important now that video is the dominant traffic type and further growth is expected. To make content distribution scalable, in-network optimization functions such as caches are needed. The established transport-layer protocols are end-to-end and do not allow the delivery to be optimized below the application layer, hence the popularity of overlay application-layer solutions located in the network. In this paper, we introduce a novel transport protocol, the Shared Content Addressing Protocol (SCAP), that allows in-network intermediate elements to participate in optimizing the delivery process using only the transport layer. SCAP runs on top of standard IP networks, and SCAP optimization functions can be plugged into the network transparently as needed. As such, only transport-protocol-based intermediate functions need to be deployed in the network, and the applications can stay at the topological end points. We define and evaluate a prototype version of the SCAP protocol using both simulation and a prototype implementation of a transparent SCAP-only intermediate optimization function.
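
    The core idea of addressing shared content below the application layer can be caricatured as follows: if requests carry a content identifier and byte range rather than an opaque end-to-end byte stream, an intermediate element can serve repeated ranges from a local cache and only forward misses upstream. The sketch below is a toy cache model with made-up class names; it reflects none of SCAP's actual packet format or state machine.

```python
# A toy, hypothetical sketch of a shared-content transport intermediary:
# requests are keyed by (content_id, byte range), so the in-network element can
# answer repeated ranges from its cache. Names and structure are assumptions.

class UpstreamServer:
    """Stand-in for the origin: returns the requested bytes of a content item."""
    def __init__(self, store):
        self.store = store
        self.requests = 0

    def fetch(self, content_id, start, end):
        self.requests += 1
        return self.store[content_id][start:end]


class IntermediateCache:
    def __init__(self, upstream):
        self.upstream = upstream
        self.cache = {}            # (content_id, start, end) -> bytes

    def get(self, content_id, start, end):
        key = (content_id, start, end)
        if key not in self.cache:                 # miss: fetch and keep a copy
            self.cache[key] = self.upstream.fetch(content_id, start, end)
        return self.cache[key]


if __name__ == "__main__":
    origin = UpstreamServer({"video-1": b"x" * 4000})
    cache = IntermediateCache(origin)
    # three clients request the same first range; only one upstream fetch occurs
    for _ in range(3):
        cache.get("video-1", 0, 2000)
    print("upstream fetches:", origin.requests)   # -> 1
```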

    An HTTP/2 push-based approach for low-latency live streaming with super-short segments

    Over the last few years, streaming of multimedia content has become more prominent than ever. To meet increasing user requirements, the concept of HTTP Adaptive Streaming (HAS) has recently been introduced. In HAS, video content is temporally divided into multiple segments, each encoded at several quality levels. A rate adaptation heuristic selects the quality level for every segment, allowing the client to take into account the observed available bandwidth and the buffer filling level when deciding the most appropriate quality level for every new video segment. Despite the ability of HAS to deal with changing network conditions, a low average quality and a large camera-to-display delay are often observed in live streaming scenarios. In the meantime, the HTTP/2 protocol was standardized in February 2015, providing new features which target a reduction of the page loading time in web browsing. In this paper, we propose a novel push-based approach for HAS, in which HTTP/2's push feature is used to actively push segments from server to client. Using this approach with video segments of sub-second duration, referred to as super-short segments, it is possible to reduce the startup time and end-to-end delay in HAS live streaming. Evaluation of the proposed approach, through emulation of a multi-client scenario with highly variable bandwidth and latency, shows that the startup time can be reduced by 31.2% compared to traditional solutions over HTTP/1.1 in mobile, high-latency networks. Furthermore, the end-to-end delay in live streaming scenarios can be reduced by 4 s, while providing the content at similar video quality.
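
    A simple model clarifies why pushing super-short segments reduces startup time: with pull-based HAS each startup segment pays a request round trip plus its transfer time, while with server push only the initial request pays the round trip. The sketch below uses assumed example numbers (segment duration, bitrate, RTT, bandwidth) rather than the paper's measurements.

```python
# A hedged back-of-the-envelope model of pull vs. push startup delay for HAS
# with super-short segments. The model and numbers are illustrative
# assumptions, not the paper's evaluation setup or results.

def pull_startup(k_segments, seg_duration_s, bitrate_mbps, rtt_s, bw_mbps):
    """Sequential pull: every startup segment costs a request RTT plus transfer."""
    transfer = seg_duration_s * bitrate_mbps / bw_mbps
    return k_segments * (rtt_s + transfer)


def push_startup(k_segments, seg_duration_s, bitrate_mbps, rtt_s, bw_mbps):
    """Server push: only the initial request pays the RTT; segments then arrive
    back-to-back."""
    transfer = seg_duration_s * bitrate_mbps / bw_mbps
    return rtt_s + k_segments * transfer


if __name__ == "__main__":
    # e.g. 500 ms segments, 3 segments to fill the startup buffer,
    # 2 Mbps video over an 8 Mbps mobile link with a 300 ms RTT
    args = dict(k_segments=3, seg_duration_s=0.5, bitrate_mbps=2.0,
                rtt_s=0.3, bw_mbps=8.0)
    print(f"pull startup: {pull_startup(**args):.2f} s")
    print(f"push startup: {push_startup(**args):.2f} s")
```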