9 research outputs found

    A network analysis on cloud gaming: Stadia, GeForce Now and PSNow

    Cloud gaming is a new class of services that promises to revolutionize the videogame market. It allows the user to play a videogame with basic equipment while using a remote server for the actual execution. The multimedia content is streamed through the network from the server to the user. This service requires low latency and large bandwidth to work properly with low response time and high-definition video. Three leading tech companies (Google, Sony, and NVIDIA) have entered this market with their own products, and others, like Microsoft and Amazon, plan to launch their own platforms in the near future. However, these companies have so far released little information about their cloud gaming operation and how they utilize the network. In this work, we study these new cloud gaming services from the network point of view. We collect more than 200 packet traces under different application settings and network conditions for 3 cloud gaming services, namely Stadia from Google, GeForce Now from NVIDIA, and PS Now from Sony. We analyze the employed protocols and the workload they impose on the network. We find that GeForce Now and Stadia use the RTP protocol to stream the multimedia content, with the latter relying on the standard WebRTC APIs. Both prove bandwidth-hungry, consuming up to 45 Mbit/s depending on the network and video quality. PS Now instead uses only undocumented protocols and never exceeds 13 Mbit/s.
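    The paper's headline numbers (RTP detection, per-service bitrates) come from analyzing raw packet captures. Below is a minimal sketch of that kind of analysis, assuming Python with scapy and a hypothetical capture file name; the authors' actual tooling is not described in the abstract.

```python
# A minimal sketch, not the authors' tooling: flag RTP-like UDP flows in a
# capture and estimate the peak rate. Assumes scapy is installed and that
# "stadia_session.pcap" (a hypothetical file name) exists locally.
from collections import defaultdict
from scapy.all import rdpcap, IP, UDP

def looks_like_rtp(payload: bytes) -> bool:
    """Heuristic: an RTP v2 fixed header starts with version bits '10'."""
    return len(payload) >= 12 and (payload[0] >> 6) == 2

packets = rdpcap("stadia_session.pcap")
bytes_per_second = defaultdict(int)   # 1-second bins of UDP payload bytes
rtp_flows = set()

for pkt in packets:
    if IP in pkt and UDP in pkt:
        payload = bytes(pkt[UDP].payload)
        if looks_like_rtp(payload):
            rtp_flows.add((pkt[IP].src, pkt[UDP].sport,
                           pkt[IP].dst, pkt[UDP].dport))
        bytes_per_second[int(pkt.time)] += len(payload)

peak_mbit = max(bytes_per_second.values()) * 8 / 1e6
print(f"RTP-like flows: {len(rtp_flows)}, peak rate: {peak_mbit:.1f} Mbit/s")
```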

    Internet multimedia traffic classification from QoS perspective using semi-supervised dictionary learning models

    To address the issue of fine-grained classification of Internet multimedia traffic from a Quality of Service (QoS) perspective at a suitable granularity, this paper defines a new set of QoS classes and presents a modified K-Singular Value Decomposition (K-SVD) method for multimedia identification. After analyzing several instances of typical Internet multimedia traffic captured in a campus network, the paper defines the QoS classes according to differences in downstream/upstream rates and proposes a modified K-SVD method that automatically searches for underlying structural patterns in the QoS characteristic space. We define bag-QoS-words as the set of specific QoS local patterns, which can be expressed by core QoS characteristics. After the dictionary is constructed from an excess quantity of bag-QoS-words, Locality Constrained Feature Coding (LCFC) features of the QoS classes are extracted. By associating a set of characteristics with a percentage of error, an objective function is formulated. Using the modified K-SVD, Internet multimedia traffic can then be classified into the corresponding QoS class with a linear Support Vector Machine (SVM) classifier. Our experimental results demonstrate the feasibility of the proposed classification method.
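    A minimal sketch of the pipeline's overall shape follows: learn a dictionary over flow-level QoS features, sparsely encode each flow, and classify the codes with a linear SVM. Here scikit-learn's DictionaryLearning stands in for the paper's modified K-SVD, and the synthetic features and labels are placeholders for the real QoS characteristics.

```python
# Sketch of dictionary-learning-based traffic classification. The synthetic
# data and sklearn's DictionaryLearning are stand-ins; the paper uses a
# modified K-SVD and real flow-level QoS features.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))          # placeholder QoS feature vectors
y = rng.integers(0, 4, size=600)        # placeholder QoS class labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

dico = DictionaryLearning(n_components=32, transform_algorithm="lasso_lars",
                          transform_alpha=0.1, random_state=0)
codes_tr = dico.fit_transform(X_tr)     # sparse codes over the learned atoms
codes_te = dico.transform(X_te)

clf = LinearSVC().fit(codes_tr, y_tr)   # linear SVM on the sparse codes
print("held-out accuracy:", clf.score(codes_te, y_te))
```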

    On the Quality of Service of Cloud Gaming Systems


    Thin to win? Network performance analysis of the OnLive thin client game system


    A Performance Comparison of VMware GPU Virtualization Techniques in Cloud Gaming

    Cloud gaming is an application deployment scenario in which an interactive gaming application runs remotely in a cloud according to commands received from a thin client, streaming the rendered scenes back to the client as a video sequence over the Internet. It is of interest to both the research community and industry. The academic community has developed open-source cloud gaming systems such as GamingAnywhere for research, while industrial pioneers such as OnLive and Gaikai have succeeded in gaining a large user base in the cloud gaming market. Graphics Processing Unit (GPU) virtualization plays an important role in such an environment, as it is the critical component that allows virtual machines to run 3D applications with performance guarantees. Currently, GPU pass-through and GPU sharing are the two main techniques of GPU virtualization. The former gives a single virtual machine direct and exclusive access to a physical GPU, while the latter makes a physical GPU shareable by multiple virtual machines. VMware Inc., one of the most popular virtualization solution vendors, provides concrete implementations of both: a GPU pass-through solution called Virtual Dedicated Graphics Acceleration (vDGA) and a GPU-sharing solution called Virtual Shared Graphics Acceleration (vSGA). Moreover, VMware recently introduced another GPU-sharing solution called vGPU. Nevertheless, the feasibility and performance of these solutions in cloud gaming had not been studied. In this work, an experimental study is conducted to evaluate the feasibility and performance of VMware's GPU pass-through and GPU-sharing solutions in cloud gaming scenarios. The results confirm that the vDGA and vGPU techniques can meet the demands of cloud gaming; in particular, these two solutions achieved good performance in the tested graphics card benchmarks and delivered acceptable image quality and response delay for the tested games.
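    The abstract reports image quality and response delay for games running on vDGA/vGPU virtual machines. One client-side way to approximate such a response-delay measurement is sketched below; it injects a key press and times how long the screen takes to change. pyautogui and mss are assumed available, and the study's actual measurement harness is not described, so this is illustrative only.

```python
# A minimal, illustrative sketch (not the paper's harness): measure response
# delay by injecting a key press and timing the first visible screen change.
# Assumes pyautogui and mss are installed and a game window is in focus.
import time
import numpy as np
import mss
import pyautogui

def screen_frame(sct):
    """Grab the primary monitor as a numpy array."""
    return np.asarray(sct.grab(sct.monitors[1]))

with mss.mss() as sct:
    before = screen_frame(sct)
    pyautogui.press("w")                       # hypothetical in-game action key
    t0 = time.perf_counter()
    while time.perf_counter() - t0 < 5.0:      # give up after 5 s
        frame = screen_frame(sct)
        if np.mean(frame != before) > 0.01:    # >1% of pixel bytes changed
            delay_ms = (time.perf_counter() - t0) * 1000
            print(f"response delay: {delay_ms:.1f} ms")
            break
```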

    Improving Usability of Mobile Applications Through Speculation and Distraction Minimization

    We live in a world where mobile computing systems are increasingly integrated with our day-to-day activities. People use mobile applications virtually everywhere they go, executing them on mobile devices such as smartphones, tablets, and smart watches. People commonly interact with mobile applications while performing other primary tasks such as walking and driving (e.g., using turn-by-turn directions while driving a car). Unfortunately, as an application becomes more mobile, it can experience resource scarcity (e.g., poor wireless connectivity) that is atypical in a traditional desktop environment. When critical resources become scarce, the usability of the mobile application deteriorates significantly. In this dissertation, I create system support that enables users to interact smoothly with mobile applications when wireless network connectivity is poor and when the user's attention is limited. First, I show that speculative execution can mitigate user-perceived delays in application responsiveness caused by high-latency wireless network connectivity. I focus on cloud-based gaming, because the smooth usability of such applications is highly dependent on low latency. User studies have shown that players are sensitive to as little as 60 ms of additional latency and are aggravated by latencies in excess of 100 ms. For cloud-based gaming, which relies on powerful servers to generate high-quality graphical gaming content, a slow network frustrates the user, who must wait a long time to see input actions reflected in the game. I show that by predicting the user's future gaming inputs and by performing visual misprediction compensation at the client, cloud-based gaming can maintain good usability even with 120 ms of network latency. Next, I show that the usability of mobile applications in an attention-limited environment (i.e., while driving a vehicle) can be improved by automatically checking whether interfaces meet best-practice guidelines and by adding attention-aware scheduling of application interactions. When a user is driving, any application that demands too much attention is an unsafe distraction. I first develop a model checker that systematically explores all reachable screens of an application and determines whether the application conforms to best-practice vehicular UI guidelines. I find that even well-known vehicular applications (e.g., Google Maps and TomTom) can often demand too much of the driver's attention. Next, I consider the case where applications run in the background and initiate interactions with the driver. I show that by quantifying the driver's available attention and the attention demand of an interaction, real-time scheduling can be used to prevent attention overload in varying driving conditions.

    PhD dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/136989/1/kyminlee_1.pd
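    The speculation idea is concrete enough to sketch: the client (or server) guesses the player's next input so a frame can be rendered one round-trip early, and mispredictions are repaired visually. Below is a minimal first-order Markov input predictor illustrating the mechanism; the dissertation's actual predictor and compensation pipeline are more sophisticated.

```python
# A minimal sketch of input speculation: predict the next input from observed
# input-to-input transitions. Misprediction compensation (rolling back and
# visually patching a wrongly rendered frame) is not shown.
from collections import Counter, defaultdict

class MarkovInputPredictor:
    def __init__(self):
        self.transitions = defaultdict(Counter)  # prev input -> next-input counts
        self.prev = None

    def observe(self, event: str) -> None:
        """Record the real input after each speculation round."""
        if self.prev is not None:
            self.transitions[self.prev][event] += 1
        self.prev = event

    def predict(self) -> str | None:
        """Speculate: most frequent successor of the last input, if any."""
        counts = self.transitions.get(self.prev)
        return counts.most_common(1)[0][0] if counts else None

pred = MarkovInputPredictor()
trace = ["w", "w", "a", "w", "w", "a", "w"]   # toy input trace
hits = 0
for event in trace:
    hits += pred.predict() == event           # speculate, then check reality
    pred.observe(event)
print(f"speculation hit rate: {hits}/{len(trace)}")
```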

    Management and Visualisation of Non-linear History of Polygonal 3D Models

    The research presented in this thesis concerns the problems of maintenance and revision control of large-scale three-dimensional (3D) models over the Internet. As the models grow in size and the authoring tools grow in complexity, standard approaches to collaborative asset development become impractical. The prevalent paradigm of sharing files on a file system poses serious risks with regard to, among other things, ensuring the consistency and concurrency of multi-user 3D editing. Although modifications might be tracked manually using naming conventions or automatically in a version control system (VCS), understanding the provenance of a large 3D dataset is hard because revision metadata is not associated with the underlying scene structures. Some tools and protocols enable seamless synchronisation of file and directory changes across remote locations. However, the existing web-based technologies do not yet fully exploit modern design patterns for access to and management of shared resources online. Therefore, four distinct but highly interconnected conceptual tools are explored. The first is the organisation of 3D assets within recent document-oriented No Structured Query Language (NoSQL) databases. These "schemaless" databases, unlike their relational counterparts, do not represent data in rigid table structures. Instead, they rely on polymorphic documents composed of key-value pairs that are much better suited to the diverse nature of 3D assets. Hence, a domain-specific non-linear revision control system, 3D Repo, is built around a NoSQL database to enable asynchronous editing similar to traditional VCSs. The second concept is that of visual 3D differencing and merging. The accompanying 3D Diff tool supports interactive conflict resolution at the level of scene graph nodes, which are de facto the delta changes stored in the repository. The third is the utilisation of the HyperText Transfer Protocol (HTTP) for the purposes of 3D data management. The XML3DRepo daemon application exposes the contents of the repository and the version control logic in a Representational State Transfer (REST) style of architecture. At the same time, it demonstrates the effects of various 3D encoding strategies on file sizes and download times in modern web browsers. The fourth and final concept is the reverse-engineering of an editing history. Even when the models are version controlled, the extracted provenance is limited to additions, deletions, and modifications. The 3D Timeline tool therefore infers a plausible history of common modelling operations such as duplications, transformations, etc. Given a collection of 3D models, it estimates a part-based correspondence and visualises it in a temporal flow. The prototype tools developed as part of the research were evaluated in pilot user studies suggesting that they are usable by end users and well suited to their respective tasks. Together, the results constitute a novel framework that demonstrates the feasibility of domain-specific 3D version control.
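    The node-level delta model that 3D Diff operates on can be illustrated with a short sketch: each revision maps node ids to schemaless documents, and a diff is computed per scene-graph node rather than per file. The field names below are illustrative, not 3D Repo's actual schema.

```python
# A minimal sketch of node-level 3D diffing: revisions are dictionaries of
# schemaless node documents, and the diff reports additions, deletions, and
# modifications per node id. Field names are illustrative only.
def diff_revisions(old: dict, new: dict) -> dict:
    """Compare two revisions, each mapping node id -> node document."""
    old_ids, new_ids = set(old), set(new)
    return {
        "added":    sorted(new_ids - old_ids),
        "deleted":  sorted(old_ids - new_ids),
        "modified": sorted(n for n in old_ids & new_ids if old[n] != new[n]),
    }

rev_a = {
    "mesh1": {"type": "mesh", "vertices": 1024, "parent": "root"},
    "mat1":  {"type": "material", "diffuse": [0.8, 0.1, 0.1]},
}
rev_b = {
    "mesh1": {"type": "mesh", "vertices": 2048, "parent": "root"},  # edited
    "cam1":  {"type": "camera", "fov": 60.0},                       # added
}
print(diff_revisions(rev_a, rev_b))
# {'added': ['cam1'], 'deleted': ['mat1'], 'modified': ['mesh1']}
```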