164 research outputs found

    Control of Multiple Remote Servers for Quality-Fair Delivery of Multimedia Contents

    This paper proposes a control scheme for the quality-fair delivery of several encoded video streams to mobile users sharing a common wireless resource. The goals are fair video quality and similar delivery delays across streams. The proposed controller is implemented within an aggregator located near the bottleneck of the network. The transmission rate is allocated among streams based on the quality of the packets already encoded and buffered in the aggregator. Encoding rate targets are either evaluated by the aggregator and fed back to each remote video server (fully centralized solution) or evaluated directly by each server in a distributed way (partially distributed solution). The encoding rate target of each stream is adjusted independently, based on the corresponding buffer level or buffering delay in the aggregator, and communication delays between the servers and the aggregator are taken into account. The transmission and encoding rate control problems are studied from a control-theoretic perspective: the system is described with a multi-input multi-output model, and Proportional-Integral (PI) controllers are used to adjust the video quality and control the aggregator buffer levels. The study of the system equilibrium and stability properties provides guidelines for choosing the parameters of the PI controllers. Experimental results show the convergence of the proposed control system and demonstrate improved video quality fairness compared to a classical transmission-rate-fair streaming solution and to a utility max-min fair approach.
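
    The per-stream adjustment described above can be pictured as a simple PI loop. The sketch below is only an illustration of that idea, assuming hypothetical gains, bounds and variable names rather than the paper's tuned controller.

```python
# Minimal per-stream PI loop in the spirit of the abstract's encoding-rate
# control: the target sent back to a server is driven by the error between a
# reference buffering delay at the aggregator and the measured one.
# Gains, bounds and names are illustrative assumptions, not the paper's values.

class EncodingRatePI:
    def __init__(self, kp, ki, rate_nominal, rate_min, rate_max):
        self.kp, self.ki = kp, ki
        self.rate_nominal = rate_nominal      # encoding rate (bit/s) when the delay is on target
        self.rate_min, self.rate_max = rate_min, rate_max
        self.integral = 0.0

    def update(self, delay_ref, delay_meas, dt):
        """Return the new encoding-rate target (bit/s) for one stream."""
        error = delay_meas - delay_ref        # too much buffered data -> slow the encoder down
        self.integral += error * dt
        rate = self.rate_nominal - self.kp * error - self.ki * self.integral
        return min(max(rate, self.rate_min), self.rate_max)
```

    In the fully centralized variant the aggregator would run one such loop per stream and feed the targets back to the servers; in the partially distributed variant each server would evaluate its own target from the delay measurements it receives.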

    Control of Distributed Servers for Quality-Fair Delivery of Multiple Video Streams

    This paper proposes a quality-fair video delivery system able to transmit several encoded video streams to mobile users sharing a wireless resource. The goals are fair video quality and similar delivery delays across streams. The proposed control system is implemented within an aggregator located near the bottleneck of the network and allocates the transmission rate among streams based on the quality of the packets already encoded and buffered in the aggregator. Encoding rate targets are either evaluated by the aggregator and fed back to each remote video server or evaluated directly by each server in a distributed way. The encoding rate target of each stream is adjusted independently, based on the corresponding buffering delay in the aggregator. The transmission and encoding rate control problems are addressed from a control-theoretic perspective: the system is described with a multi-input multi-output model, and several Proportional-Integral (PI) controllers are used to adjust the video quality as well as the buffering delay. The study of the system equilibrium and stability provides guidelines for choosing the parameters of the PI controllers. Experimental results show that better quality fairness is obtained compared to classical transmission-rate-fair streaming solutions while keeping similar buffering delays.
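
    For intuition about the quality-driven rate split itself, the toy function below divides a shared link capacity so that streams whose buffered packets currently have lower quality receive a larger share. It is a sketch of the idea only, with an assumed deficit-based weighting, not the controller analysed in the paper.

```python
# Toy quality-driven split of a shared wireless capacity among streams.
# The weighting scheme and all names are assumptions made for illustration.

def quality_fair_shares(capacity_bps, buffered_quality, epsilon=1e-6):
    """buffered_quality: dict stream_id -> mean quality of the packets queued
    in the aggregator for that stream. Returns dict stream_id -> bit/s."""
    q_max = max(buffered_quality.values())
    # Weight each stream by its quality deficit w.r.t. the best stream.
    weights = {s: (q_max - q) + epsilon for s, q in buffered_quality.items()}
    total = sum(weights.values())
    return {s: capacity_bps * w / total for s, w in weights.items()}

print(quality_fair_shares(10e6, {"stream_a": 3.2, "stream_b": 4.1, "stream_c": 3.7}))
```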

    Resource Management in Distributed Camera Systems

    The aim of this work is to investigate different methods to solve the problem of allocating the correct amount of resources (network bandwidth and storage space) to video camera systems. Here we explore the intersection between two research areas: automatic control and game theory. Camera systems are a good example of the emergence of the Internet of Things (IoT) and its impact on our daily lives and the environment. We aim to improve today’s systems, shifting from over-provisioning resources to dynamically allocating them where they are needed most. We optimize the storage and bandwidth allocation of camera systems to limit the impact on the environment as well as to provide the best visual quality attainable within the resource limitations. This thesis is written as a collection of papers. It begins by introducing the problem with today’s camera systems, and continues with background information about resource allocation, automatic control and game theory. The third chapter describes the models of the considered systems, their limitations and challenges. It then continues by providing more background on the automatic control and game theory techniques used in the proposed solutions. Finally, the proposed solutions are provided in five papers. Paper I proposes an approach to estimate the amount of data needed by surveillance cameras given camera and scenario parameters. This model is used for calculating the quasi Worst-Case Transmission Times of videos over a network. Papers II and III apply control concepts to camera network storage and bandwidth assignment. They provide simple, yet elegant solutions to the allocation of these resources in distributed camera systems. Paper IV combines pricing theory with control techniques to force the video quality of camera systems to converge to a common value based solely on the compression parameter of the provided videos. Paper V uses the VCG auction mechanism to solve the storage space allocation problem in competitive camera systems; given the limited system knowledge, trust and resource constraints, it allows for a better system-wide visual quality than a simple split allocation.
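
    To make the VCG idea mentioned for Paper V concrete, the following toy sketch allocates discrete storage units to the cameras that report the highest value for them and charges each camera the externality it imposes on the others (the Clarke pivot rule). The valuations, unit granularity and brute-force search are assumptions for illustration, not the thesis implementation.

```python
# Toy VCG allocation of storage units among competing cameras.
from itertools import product

def vcg_storage(values, total_units):
    """values[i][k] = camera i's reported value for receiving k storage units.
    Returns (allocation, payments) under the Clarke pivot rule."""
    n = len(values)

    def best(excluded=None):
        # Enumerate feasible allocations (exponential; fine for a small toy).
        options = [range(len(v)) if i != excluded else [0]
                   for i, v in enumerate(values)]
        best_val, best_alloc = float("-inf"), None
        for alloc in product(*options):
            if sum(alloc) > total_units:
                continue
            val = sum(values[i][alloc[i]] for i in range(n) if i != excluded)
            if val > best_val:
                best_val, best_alloc = val, alloc
        return best_val, best_alloc

    _, alloc = best()
    payments = []
    for i in range(n):
        without_i, _ = best(excluded=i)                     # others' best welfare if i were absent
        with_i = sum(values[j][alloc[j]] for j in range(n) if j != i)
        payments.append(without_i - with_i)                 # externality camera i imposes
    return alloc, payments

# Three cameras, values for receiving 0, 1 or 2 units, 3 units available.
print(vcg_storage([[0, 5, 8], [0, 4, 9], [0, 3, 4]], total_units=3))
```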

    On Transmission System Design for Wireless Broadcasting

    This thesis considers aspects related to the design and standardisation of transmission systems for wireless broadcasting, comprising terrestrial and mobile reception. The purpose is to identify which factors influence the technical decisions and what issues could be better considered in the design process in order to assess different use cases, service scenarios and end-user quality. Further, the necessity of cross-layer optimisation for efficient data transmission is emphasised and means to take this into consideration are suggested. The work is mainly related to terrestrial and mobile digital video broadcasting systems, but many of the findings can also be generalised to other transmission systems and design processes. The work has led to three main conclusions. First, it is discovered that there are no sufficiently accurate error criteria for measuring the subjectively perceived audiovisual quality that could be utilised in transmission system design. Means for designing new error criteria for mobile TV (television) services are suggested, and similar work related to other services is recommended. Second, it is suggested that in addition to commercial requirements there should be technical requirements setting the framework for the design process of a new transmission system. The technical requirements should include the assessed reception conditions, technical quality of service and service functionalities. Reception conditions comprise radio channel models, receiver types and antenna types. Technical quality of service consists of bandwidth, timeliness and reliability. Of these, the thesis focuses on radio channel models and error criteria (reliability) as two of the most important design challenges and provides means to optimise transmission parameters based on them. Third, the thesis argues that the most favourable development for wireless broadcasting would be a single system suitable for all scenarios of wireless broadcasting. It is claimed that there are no major technical obstacles to achieving this and that the recently published second-generation digital terrestrial television broadcasting system provides a good basis. The challenges and opportunities of a universal wireless broadcasting system are discussed mainly from technical, but briefly also from commercial and regulatory, aspects.

    Application of a Bi-Geometric Transparent Composite Model to HEVC: Residual Data Modelling and Rate Control

    Among various transforms, the discrete cosine transform (DCT) is the most widely used in multimedia compression technologies across different image and video coding standards. During the development of image and video compression, much interest has been devoted to understanding the statistical distribution of DCT coefficients, which is useful for designing compression techniques such as quantization, entropy coding and rate control. Recently, a bi-geometric transparent composite model (BGTCM) has been developed to model the distribution of DCT coefficients with both simplicity and accuracy. It has been reported that for DCT coefficients obtained from original images, as in image coding, a transparent composite model (TCM) provides better modelling than the Laplacian distribution. In video compression, such as H.264/AVC, the DCT is performed on residual images obtained after prediction, with different transform sizes. Moreover, in High Efficiency Video Coding (HEVC), the newest video coding standard, besides the DCT as the main transform tool, the discrete sine transform (DST) and transform skip (TS) techniques may also be applied to residual data in small blocks. As such, the distribution of transformed residual data differs from that of transformed original image data. In this thesis, the distribution of coefficients, including those from all DCT, DST and TS blocks, is analysed based on the BGTCM. Specifically, the distribution of all the coefficients of a whole frame is first examined. Second, in HEVC, entropy coding is implemented based on a new encoding concept, the coefficient group (CG) of size 4x4, where quantized coefficients are encoded with context models based on their scan indices in each CG. To reflect the encoding process, coefficients at the same scan indices among different CGs are grouped together to form a set, and the distribution of coefficients in each set is analysed. Based on our results, the BGTCM is better than other widely used distributions, such as the Laplacian and Cauchy distributions, in both chi-square and KL-divergence testing. Furthermore, unlike approaches based on the Laplacian and Cauchy distributions, the BGTCM can be used to build rate-quantization (R-Q) and distortion-quantization (D-Q) models without approximation expressions, so the R-Q and D-Q models reflect the actual distribution of the coefficients, which is important in rate control. In video coding, rate control uses these two models to generate a suitable quantization parameter without multi-pass encoding, in order to maintain coding efficiency while producing the rate required to satisfy the rate constraint. In this thesis, based on the BGTCM, rate control in HEVC is revised, yielding a considerable increase in coding efficiency and a decrease in rate fluctuation, measured as the rate variance among frames under a constant bit-rate requirement.
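
    The chi-square and KL-divergence testing mentioned above can be illustrated with a small sketch: fit a candidate model (here a Laplacian, one of the baselines named in the abstract) to transform-coefficient samples and score the fit against the empirical histogram. This shows the evaluation style only and does not implement the BGTCM itself; all names are assumptions.

```python
# Fit a Laplacian to coefficient samples and score it with chi-square and KL
# divergence against the empirical histogram (illustration of the comparison
# methodology only, not the BGTCM).
import numpy as np

def laplacian_fit_scores(coeffs, bins=101):
    coeffs = np.asarray(coeffs, dtype=float)
    mu = np.median(coeffs)                    # ML location estimate for a Laplacian
    b = np.mean(np.abs(coeffs - mu))          # ML scale estimate
    hist, edges = np.histogram(coeffs, bins=bins)
    n = hist.sum()
    # Bin probabilities under the fitted Laplacian, from its CDF.
    cdf = lambda x: np.where(x < mu, 0.5 * np.exp((x - mu) / b),
                             1.0 - 0.5 * np.exp(-(x - mu) / b))
    p_model = np.diff(cdf(edges))
    p_emp = hist / n
    ok = (hist > 0) & (p_model > 0)           # skip empty bins for numerical safety
    chi2 = np.sum((hist[ok] - n * p_model[ok]) ** 2 / (n * p_model[ok]))
    kl = np.sum(p_emp[ok] * np.log(p_emp[ok] / p_model[ok]))
    return chi2, kl

print(laplacian_fit_scores(np.random.laplace(0.0, 4.0, size=20000)))
```

    A lower chi-square statistic and a lower KL divergence indicate a better fit; the thesis reports that the BGTCM scores better than the Laplacian and Cauchy baselines under both measures.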