
    Foveated Video Streaming for Cloud Gaming

    Video gaming is generally a computationally intensive application, and specialized hardware such as Graphics Processing Units may be required to provide a pleasant user experience. Computational resources and power consumption are constraints which limit visually complex gaming on, for example, laptops, tablets and smartphones. Cloud gaming may be a possible approach towards providing a pleasant gaming experience on thin clients which have limited computational and energy resources. In a cloud gaming architecture, the game-play video is rendered and encoded in the cloud and streamed to a client, where it is displayed. User inputs are captured at the client and streamed back to the server, where they are relayed to the game. High quality of experience requires the streamed video to be of high visual quality, which translates to substantial downstream bandwidth requirements. The visual perception of the human eye is non-uniform, being maximal along the optical axis of the eye and dropping off rapidly away from it. This phenomenon, called foveation, makes the practice of encoding all areas of a video frame at the same resolution wasteful. In this thesis, foveated video streaming from a cloud gaming server to a cloud gaming client is investigated. A prototype cloud gaming system with foveated video streaming is implemented. The cloud gaming server of the prototype is configured to encode gameplay video in a foveated fashion based on gaze location data provided by the cloud gaming client. The effect of foveated encoding on the output bitrate of the streamed video is investigated. Measurements are performed using games from various genres and with different player points of view to explore changes in video bitrate with different parameters of foveation. Latencies involved in foveated video streaming for cloud gaming, including the latency of the eye tracker used in the thesis, are also briefly discussed.
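The gaze-driven encoding described above can be sketched as a mapping from a block's distance to the gaze point onto an encoder quantizer offset. This is an illustrative sketch only: the function name, the Gaussian falloff, and both parameter values are assumptions, not details taken from the thesis.

```python
import math

def foveated_qp_offset(block_x, block_y, gaze_x, gaze_y,
                       max_offset=10.0, sigma=120.0):
    """Map a block's distance from the gaze point (in pixels) to a
    quantizer offset: ~0 near the gaze (full quality), rising toward
    max_offset in the periphery, where coarser quantization saves bits.
    The Gaussian falloff and both parameter values are illustrative."""
    d2 = (block_x - gaze_x) ** 2 + (block_y - gaze_y) ** 2
    return max_offset * (1.0 - math.exp(-d2 / (2.0 * sigma * sigma)))
```

A server could add such an offset to the base quantizer of each macroblock before encoding a frame, with the gaze coordinates supplied by the client's eye tracker.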

    Effective and Economical Content Delivery and Storage Strategies for Cloud Systems

    Cloud computing has proved to be an effective infrastructure to host various applications and provide reliable and stable services. Content delivery and storage are two main services provided by the cloud. A high-performance cloud can reduce the costs of both cloud providers and customers while providing high application performance to cloud clients. The performance of such cloud-based services is closely related to three issues. First, when delivering contents from the cloud to users, it is important to reduce payment costs and transmission time. Second, when transferring contents between cloud datacenters, it is important to reduce the payment costs to Internet service providers (ISPs). Third, when storing contents in the datacenters, it is crucial to reduce file read latency and the power consumption of the datacenters. In this dissertation, we study how to effectively deliver and store contents on the cloud, with a focus on cloud gaming and video streaming services. In particular, we aim to address three problems. i) Cost-efficient cloud computing system to support thin-client Massively Multiplayer Online Games (MMOGs): how to achieve high Quality of Service (QoS) in cloud gaming and reduce cloud bandwidth consumption; ii) Cost-efficient inter-datacenter video scheduling: how to reduce the bandwidth payment cost by fully utilizing link bandwidth when cloud providers transfer videos between datacenters; iii) Energy-efficient adaptive file replication: how to adapt to time-varying file popularities to achieve a good tradeoff between data availability and efficiency, as well as reduce the power consumption of the datacenters. In this dissertation, we propose methods to solve each of the aforementioned challenges on the cloud.
As a result, we build a cloud system that comprises a cost-efficient system to support cloud clients, an inter-datacenter video scheduling algorithm for video transmission on the cloud, and an adaptive file replication algorithm for cloud storage. This system not only benefits cloud providers by reducing cloud cost, but also benefits cloud customers by reducing their payment cost and improving cloud application performance (i.e., user experience). Finally, we conducted extensive experiments on many testbeds, including PeerSim, PlanetLab, EC2 and a real-world cluster, which demonstrate the efficiency and effectiveness of our proposed methods. In future work, we will study how to further improve user experience in receiving contents and reduce the cost of content transfer.
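The popularity-adaptive replication idea in challenge iii) can be illustrated with a minimal sketch: scale a file's replica count with its current request rate, so hot files gain availability and read throughput while cold files shrink toward a floor, letting spare servers power down. The function name, capacity figure, and bounds below are all illustrative assumptions, not the dissertation's algorithm.

```python
import math

def target_replicas(request_rate, per_replica_capacity=100.0,
                    min_replicas=1, max_replicas=5):
    """Pick a replica count that tracks a file's current popularity:
    hot files get more replicas (better availability, lower read
    latency); cold files shrink toward the minimum so idle servers
    can power down. All parameter values here are illustrative."""
    needed = math.ceil(request_rate / per_replica_capacity)
    return max(min_replicas, min(max_replicas, needed))
```

Re-evaluating this target periodically against measured request rates is what makes the replication adaptive to time-varying popularity.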

    An Overview of the Networking Issues of Cloud Gaming: A Literature Review

    With the increasing prevalence of video games come innovations that aim to evolve them. Cloud gaming is poised as the next phase of gaming. It enables users to play video games on any internet-enabled device. Such an improvement could effectively extend the processing power of existing devices and remove the need to spend large amounts of money on the latest gaming equipment. However, others argue that it may be far from practically functional. Since cloud gaming places a dependency on networks, new issues emerge. Accordingly, this paper reviews cloud gaming from a networking perspective. Specifically, the paper analyzes its issues and challenges along with possible solutions. To accomplish the study, a literature review was performed. Results show that there are numerous issues and challenges regarding cloud gaming networks. Generally, cloud gaming has problems with its network quality of service (QoS) and quality of experience (QoE). The poor QoS and QoE of cloud gaming can be linked to unsatisfactory latency, bandwidth, delay, packet loss, and graphics quality. Moreover, the cost of providing the service and the complexity of implementing cloud gaming were considered challenges. For these issues and challenges, solutions were found. The solutions include lag or latency compensation, compression with encoding techniques, client computing power, edge computing, machine learning, frame adaptation, and GPU-based server selection. However, these have limitations and may not always be applicable. Thus, even if solutions exist, it would be beneficial to analyze the networking side of cloud gaming further.

    Evaluation of Adaptive Video Playback Algorithms in Mobile Cloud Gaming

    Mobile cloud gaming has recently gained popularity as a result of improvements in the quality of internet connections and mobile networks. Under stable conditions, current LTE networks can provide a suitable platform for the demanding requirements of mobile cloud gaming. However, since the quality of mobile network connections changes constantly, the network may be unable to always provide the best possible service to all clients. Thus, a mobile cloud gaming platform must be able to adapt in order to compensate for changing bandwidth conditions in mobile networks. One approach is to change the quality of the video stream to match the available bandwidth of the network. This thesis evaluates an adaptive streaming method implemented on a mobile cloud gaming platform called GamingAnywhere and provides an alternative approach for estimating the available bandwidth by measuring the signal strength values of a mobile device. Experimentation was conducted in a real LTE network to determine the best approach to reconfiguring the encoder of the video stream to match the bandwidth of the network. The results show that increasing the constant-rate-factor parameter of the video encoder by 12 reduces the necessary bandwidth to about half. Thus, changing this video encoder parameter provides an effective means to compensate for significant changes in the bandwidth. However, high values of the constant-rate-factor parameter can considerably reduce the quality of the video stream, so the frame rate of the video should be lowered instead if the constant-rate-factor is already high.
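The measured relationship above, that raising the constant-rate-factor (CRF) by 12 roughly halves the bitrate, implies bitrate scales as 2 ** (-delta_crf / 12). Under that assumption, the CRF change needed to hit a target bitrate can be sketched as follows; the function name and parameterization are hypothetical, not from the thesis.

```python
import math

def crf_adjustment(current_kbps, target_kbps, halving_step=12):
    """Estimate the change in the encoder's constant-rate-factor
    needed to move from the current bitrate to a target bitrate,
    assuming bitrate scales as 2 ** (-delta_crf / halving_step).
    halving_step=12 reflects the thesis measurement that +12 CRF
    roughly halves the required bandwidth."""
    ratio = target_kbps / current_kbps
    # Solve ratio = 2 ** (-delta / halving_step) for delta.
    return -halving_step * math.log2(ratio)
```

Per the thesis, if the resulting CRF would already be high, lowering the frame rate is the better lever, since very high CRF values visibly degrade the stream.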

    Energy-aware Adaptive Multimedia for Game-based e-learning

    Thanks to their motivational potential, video games have started to be used increasingly for e-learning. However, as e-learning gradually shifts towards mobile learning, there is a growing need for innovative techniques to deliver rich learning material such as educational games to resource-constrained devices. In particular, the limited battery capacity of mobile devices stands out as a key issue that can significantly limit players' access to educational games. This paper proposes an Energy-aware Adaptive Multimedia Game-based E-learning (EAMGBL) framework that aims to enable energy-efficient delivery of educational games to mobile devices over wireless networks. The framework builds on the idea of rendering the game on the server side and streaming a recording of it to the player's device over the Internet. To reduce the mobile device's energy consumption and enable the player to play for longer, the framework adapts both the educational game elements and the game's recorded multimedia stream.

    SoC-Cluster as an Edge Server: an Application-driven Measurement Study

    Huge electricity consumption is a severe issue for edge data centers. To this end, we propose a new form of edge server, namely SoC-Cluster, that orchestrates many low-power mobile system-on-chips (SoCs) through an on-chip network. For the first time, we have developed a concrete SoC-Cluster server that consists of 60 Qualcomm Snapdragon 865 SoCs in a 2U rack. Such a server has been commercialized successfully and deployed at large scale on edge clouds. The current dominant workload on those deployed SoC-Clusters is cloud gaming, as mobile SoCs can seamlessly run native mobile games. The primary goal of this work is to demystify whether SoC-Cluster can efficiently serve more general-purpose, edge-typical workloads. Therefore, we built a benchmark suite that leverages state-of-the-art libraries for two killer edge workloads, i.e., video transcoding and deep learning inference. The benchmark comprehensively reports the performance, power consumption, and other application-specific metrics. We then performed a thorough measurement study and directly compared SoC-Cluster with traditional edge servers (with Intel CPU and NVIDIA GPU) with respect to physical size, electricity consumption, and billing. The results reveal the advantages of SoC-Cluster, especially its high energy efficiency and its ability to scale energy consumption proportionally with various incoming loads, as well as its limitations. The results also provide insightful implications and valuable guidance to further improve SoC-Cluster and land it in broader edge scenarios.

    An objective and subjective quality assessment for passive gaming video streaming

    Gaming video streaming has become increasingly popular in recent times. Along with the rise and popularity of cloud gaming services and e-sports, passive gaming video streaming services such as Twitch.tv, YouTubeGaming, etc., where viewers watch the gameplay of other gamers, have seen increasing acceptance. Twitch.tv alone has over 2.2 million monthly streamers and 15 million daily active users, with almost a million average concurrent users, making Twitch.tv the 4th biggest internet traffic generator, just after Netflix, YouTube and Apple. Despite the increasing importance and popularity of such live gaming video streaming services, they have until recently not caught the attention of the quality assessment research community. For the continued success of such services, it is imperative to maintain and satisfy end user Quality of Experience (QoE), which can be measured using various Video Quality Assessment (VQA) methods. Gaming videos are synthetic and artificial in nature and have different streaming requirements compared to traditional non-gaming content. While there exist many subjective and objective studies in the field of quality assessment of Video-on-Demand (VOD) streaming services, such as Netflix and YouTube, along with the design of many VQA metrics, no work had previously been done on quality assessment of live passive gaming video streaming applications. The research work in this thesis addresses this gap through various subjective and objective quality assessment studies. A codec comparison using the three most popular and widely used compression standards is performed to determine their compression efficiency. Furthermore, a subjective and objective comparative study is carried out to find the difference between gaming and non-gaming videos in terms of the trade-off between quality and data-rate after compression.
This is followed by the creation of an open source gaming video dataset, which is then used for a performance evaluation study of the eight most popular VQA metrics. Different temporal pooling strategies and content-based classification approaches are evaluated to assess their effect on the VQA metrics. Finally, due to the low performance of existing No-Reference (NR) VQA metrics on gaming video content, two machine learning based NR models are designed using NR features and existing NR metrics, which are shown to outperform existing NR metrics while performing on par with state-of-the-art Full-Reference (FR) VQA metrics.

    Improving Adaptive Real-Time Video Communication Via Cross-layer Optimization

    An effective Adaptive BitRate (ABR) algorithm or policy is of paramount importance for Real-Time Video Communication (RTVC) amid this pandemic in pursuit of uncompromised quality of experience (QoE). Existing ABR methods mainly separate network bandwidth estimation from video encoder control and fine-tune the video bitrate towards the estimated bandwidth, assuming that maximizing bandwidth utilization yields the optimal QoE. However, the QoE of an RTVC system is jointly determined by the quality of the compressed video, the fluency of video playback, and the interaction delay. Solely maximizing bandwidth utilization, without comprehensively considering the compound impacts of both the network and video application layers, does not assure satisfactory QoE, and the decoupling of the network and video layers further degrades the user experience due to network-codec incoordination. This work therefore proposes Palette, a reinforcement learning based ABR scheme that unifies the processing of the network and video application layers to directly maximize the QoE, formulated as a weighted function of video quality, stalling rate, and delay. To this end, a cross-layer optimization is proposed to derive a fine-grained compression factor for upcoming frame(s) using cross-layer observations such as network conditions, video encoding parameters, and video content complexity. As a result, Palette resolves the network-codec incoordination and best catches up with network fluctuations. Compared with state-of-the-art schemes in real-world tests, Palette reduces the stalling rate by 3.1%-46.3% and the delay by 20.2%-50.8%, while improving video quality by 0.2%-7.2% with comparable bandwidth consumption, under a variety of application scenarios.
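The objective Palette maximizes, a weighted function of video quality, stalling rate, and delay, can be sketched as a linear reward of the kind commonly used in reinforcement learning ABR work. The linear form, function name, and weights below are illustrative assumptions; the abstract only states that QoE is a weighted function of these three terms.

```python
def palette_qoe(video_quality, stall_rate, delay, w_q=1.0, w_s=1.0, w_d=1.0):
    """QoE as a weighted combination of video quality (reward),
    stalling rate (penalty), and delay (penalty), in the spirit of
    Palette's objective. Weights and the linear form are illustrative
    assumptions, not values from the paper."""
    return w_q * video_quality - w_s * stall_rate - w_d * delay
```

An RL agent would receive this value as its per-step reward, so raising bitrate only pays off when the quality gain outweighs any induced stalling or added delay.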

    A high performance vector rendering pipeline

    Vector images are images which encode the visible surfaces of a 3D scene in a resolution-independent format. Prior to this work, generation of such an image was not real time, so the benefits of using vector images in the graphics pipeline were not fully realized. In this thesis we propose methods for addressing the following questions: how can we introduce vector images into the graphics pipeline, namely, how can we produce them in real time; how can we take advantage of resolution independence; and how can we render vector images to a pixel display as efficiently as possible and with the highest quality? There are three main contributions of this work. First, we have designed a real-time vector rendering system: a GPU-accelerated pipeline which takes as input a scene with 3D geometry and outputs a vector image. We call this system SVGPU: Scalable Vector Graphics on the GPU. Second, because vector images are resolution independent, we have designed a cloud pipeline for streaming them: a system design and optimizations for streaming vector images across interconnection networks, which reduce the bandwidth required for transporting real-time 3D content from server to client. Lastly, this thesis introduces another benefit of vector images: a method for rendering them with the highest possible quality. We have designed a new set of operations on vector images which allows us to anti-alias them during rendering to a canonical 2D image. Our contributions provide the system design, optimizations, and algorithms required to bring vector image utilization and its benefits much closer to the real-time graphics pipeline. Together they form an end-to-end pipeline for this purpose, i.e., "A High Performance Vector Rendering Pipeline."