
    Control of Multiple Remote Servers for Quality-Fair Delivery of Multimedia Contents

    This paper proposes a control scheme for the quality-fair delivery of several encoded video streams to mobile users sharing a common wireless resource. The targets are fair video quality across streams as well as similar delivery delays. The proposed controller is implemented within an aggregator located near the bottleneck of the network. The transmission rate is shared among streams based on the quality of the already encoded packets buffered in the aggregator. Encoding rate targets are either evaluated by the aggregator and fed back to each remote video server (fully centralized solution) or evaluated directly by each server (partially distributed solution). Each stream's encoding rate target is adjusted independently based on the corresponding buffer level or buffering delay in the aggregator. Communication delays between the servers and the aggregator are taken into account. The transmission and encoding rate control problems are studied from a control-theoretic perspective. The system is described by a multi-input multi-output model, and Proportional-Integral (PI) controllers are used to adjust the video quality and regulate the aggregator buffer levels. The equilibrium and stability properties of the system are studied, which provides guidelines for choosing the parameters of the PI controllers. Experimental results show the convergence of the proposed control system and demonstrate improved video quality fairness compared with a classical transmission-rate-fair streaming solution and with a utility max-min fair approach.
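
    The per-stream control loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the discrete-time PI gains, the 2-second delay setpoint, and the stream fields (buffer_bits, tx_rate_bps, enc_rate_bps) are all assumptions made for the example. Each stream's encoding rate target is adjusted independently from the buffering delay measured in the aggregator.

```python
class PIController:
    """Discrete-time PI controller (illustrative gains and setpoint)."""
    def __init__(self, kp, ki, setpoint):
        self.kp, self.ki, self.setpoint = kp, ki, setpoint
        self.integral = 0.0

    def update(self, measured, dt):
        error = self.setpoint - measured      # delay error in seconds
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral


def control_step(streams, controllers, dt):
    """One control interval: each stream's encoding rate target is adjusted
    independently from its buffering delay in the aggregator; the new target
    would then be fed back to the corresponding remote server."""
    for s, ctrl in zip(streams, controllers):
        delay = s["buffer_bits"] / s["tx_rate_bps"]   # current buffering delay
        s["enc_rate_bps"] = max(100e3,                # keep the target positive
                                s["enc_rate_bps"] + ctrl.update(delay, dt))


# Example: two streams regulated toward a 2-second aggregator delay.
streams = [{"buffer_bits": 6e6, "tx_rate_bps": 2e6, "enc_rate_bps": 2e6},
           {"buffer_bits": 2e6, "tx_rate_bps": 2e6, "enc_rate_bps": 2e6}]
controllers = [PIController(kp=2e5, ki=5e4, setpoint=2.0) for _ in streams]
control_step(streams, controllers, dt=0.5)
print([round(s["enc_rate_bps"]) for s in streams])
# The over-full stream is told to encode at a lower rate, the under-full
# stream at a higher rate, pushing both buffering delays toward the setpoint.
```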

    A Framework for Quality-Driven Delivery in Distributed Multimedia Systems

    In this paper, we propose a framework for Quality-Driven Delivery (QDD) in distributed multimedia environments. Quality-driven delivery refers to the capacity of a system to deliver documents, or more generally objects, while taking the users' expectations in terms of non-functional requirements into account. For this QDD framework, we propose a model-driven approach focused on QoS information modeling and transformation. QoS information models and meta-models are used during different QoS activities: mapping requirements to system constraints, exchanging QoS information, checking compatibility between QoS information, and more generally making QoS decisions. We also investigate which model transformation operators have to be implemented to support QoS activities such as QoS mapping.
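
    As an illustration of the QoS mapping activity mentioned above, the following sketch translates user-level quality terms into system-level constraints and flags terms the model cannot map (a simple compatibility check). The vocabulary (smooth_playback, fast_startup) and the threshold values are hypothetical; the paper's actual models and meta-models are not reproduced here.

```python
# Hypothetical user-level QoS terms mapped to system-level constraints.
USER_TO_SYSTEM = {
    "smooth_playback": {"min_bandwidth_kbps": 1500, "max_jitter_ms": 30},
    "fast_startup":    {"max_startup_delay_ms": 800},
}

def map_requirements(user_reqs):
    """Map user-level QoS requirements to merged system constraints,
    returning any terms that have no mapping in the model."""
    constraints, unmapped = {}, []
    for req in user_reqs:
        if req in USER_TO_SYSTEM:
            constraints.update(USER_TO_SYSTEM[req])
        else:
            unmapped.append(req)
    return constraints, unmapped

print(map_requirements(["smooth_playback", "fast_startup"]))
```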

    Efficient memory management in VOD disk array servers using Per-Storage-Device Buffering

    We present a buffering technique that reduces video-on-demand server memory requirements by more than an order of magnitude. This technique, Per-Storage-Device Buffering (PSDB), allocates a fixed number of buffers per storage device, as opposed to existing solutions based on per-stream buffer allocation. The combination of this technique with disk array servers is studied in detail, as well as the influence of Variable Bit Rate (VBR) streams. We also present an interleaved data placement strategy, Constant Time Length declustering, that results in optimal performance when serving VBR streams. PSDB is evaluated by extensive simulation of a disk array server model that incorporates a simulation-based admission test. This research was supported in part by the National R&D Program of Spain, Project Number TIC97-0438.
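
    The Constant Time Length declustering idea can be illustrated as below: a VBR stream is split into blocks that each hold a constant playback duration, so block sizes vary with the bit rate, and blocks are placed round-robin across the disks of the array. The function name, the synthetic frame-size trace, and the parameters are illustrative assumptions, not the paper's code.

```python
import random

def ctl_decluster(frame_sizes_bytes, fps, block_seconds, num_disks):
    """Group frames into constant-playback-time blocks (variable size)
    and assign the blocks to disks in round-robin order."""
    frames_per_block = int(fps * block_seconds)
    placement = []                                    # (disk, block_size_bytes)
    for i in range(0, len(frame_sizes_bytes), frames_per_block):
        block = frame_sizes_bytes[i:i + frames_per_block]
        disk = (i // frames_per_block) % num_disks    # round-robin placement
        placement.append((disk, sum(block)))
    return placement

# Example: a short synthetic VBR trace at 25 fps, 1-second blocks, 4 disks.
random.seed(0)
trace = [random.randint(2_000, 20_000) for _ in range(100)]  # bytes per frame
print(ctl_decluster(trace, fps=25, block_seconds=1.0, num_disks=4))
# Each block covers exactly 1 second of playback; its byte size varies,
# which is what distinguishes CTL from constant-size (CDL) placement.
```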

    Using Dedicated and Opportunistic Networks in Synergy for a Cost-effective Distributed Stream Processing Platform

    This paper presents a case for exploiting the synergy of dedicated and opportunistic network resources in a distributed hosting platform for data stream processing applications. Our previous studies have demonstrated the benefits of combining dedicated reliable resources with opportunistic resources for high-throughput computing applications, where timely allocation of the processing units is the primary concern. Since distributed stream processing applications demand large volumes of data transmission between the processing sites at a consistent rate, adequate control over the network resources is important here to assure a steady flow of processing. In this paper, we propose a system model for the hybrid hosting platform in which stream processing servers installed at distributed sites are interconnected by a combination of dedicated links and the public Internet. Decentralized algorithms are developed to allocate the two classes of network resources among competing tasks, with the objective of higher task throughput and better utilization of the expensive dedicated resources. Results from an extensive simulation study show that, with proper management, systems exploiting the synergy of dedicated and opportunistic resources yield considerably higher task throughput, and thus a higher return on investment, than systems solely using expensive dedicated resources.
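
    One plausible allocation policy consistent with the stated objective, packing tasks onto the expensive dedicated link first and overflowing the remainder onto the public Internet, is sketched below. This is a hedged illustration: the paper's decentralized algorithms are not reproduced, and the capacities and rate demands are invented for the example.

```python
def allocate(tasks_mbps, dedicated_capacity_mbps):
    """Greedy split of per-task rate demands between a dedicated link and
    best-effort Internet, serving the largest demands from the dedicated
    link first to keep the expensive resource fully utilized."""
    remaining = dedicated_capacity_mbps
    assignment = {}
    for task, demand in sorted(tasks_mbps.items(), key=lambda kv: -kv[1]):
        on_dedicated = min(demand, remaining)     # fill the dedicated link
        remaining -= on_dedicated
        assignment[task] = {"dedicated": on_dedicated,
                            "internet": demand - on_dedicated}
    return assignment

# Example: three stream tasks sharing an 80 Mbps dedicated link.
print(allocate({"t1": 40, "t2": 25, "t3": 50}, dedicated_capacity_mbps=80))
# t3 and part of t1 ride the dedicated link; the overflow uses the Internet.
```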