22 research outputs found

    Network Traffic Adaptation For Cloud Games

    With the arrival of cloud technology, game accessibility and ubiquity have a bright future: games can be hosted on a centralized server and accessed through the Internet by a thin client on a wide variety of devices with modest capabilities. This is cloud gaming. However, current cloud gaming systems place very strong requirements on network resources, which reduces the accessibility and ubiquity of cloud games: devices with little bandwidth, and people located in areas with limited and unstable network connectivity, cannot take advantage of these cloud services. In this paper we present an adaptation technique inspired by the level-of-detail (LoD) approach in 3D graphics. It delivers multi-platform accessibility and network adaptability while improving the user's quality of experience (QoE) by reducing the impact of poor and unstable network parameters (delay, packet loss, jitter) on game interactivity. We validate our approach using a prototype game in a controlled environment and characterize the user QoE in a pilot experiment. The results show that the proposed framework provides a significant QoE enhancement.
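
    A minimal sketch of how such LoD-inspired adaptation might map measured network conditions to a detail level; the level names, `NetworkSample` fields, weights, and thresholds are illustrative assumptions, not the paper's actual parameters.

```python
from dataclasses import dataclass

# Illustrative detail levels, ordered from richest to most degraded.
LEVELS = ["full_3d", "reduced_geometry", "sprite_based", "text_only"]

@dataclass
class NetworkSample:
    rtt_ms: float        # measured round-trip delay
    loss_rate: float     # fraction of packets lost (0.0-1.0)
    jitter_ms: float     # delay variation

def score(sample: NetworkSample) -> float:
    """Combine delay, loss and jitter into a single penalty score.
    The weights are arbitrary placeholders for illustration."""
    return sample.rtt_ms / 100.0 + sample.loss_rate * 10.0 + sample.jitter_ms / 50.0

def choose_level(sample: NetworkSample) -> str:
    """Map the penalty score onto a level of detail: the worse the
    network, the coarser the representation sent to the thin client."""
    s = score(sample)
    if s < 1.0:
        return LEVELS[0]
    if s < 2.0:
        return LEVELS[1]
    if s < 4.0:
        return LEVELS[2]
    return LEVELS[3]

if __name__ == "__main__":
    print(choose_level(NetworkSample(rtt_ms=60, loss_rate=0.0, jitter_ms=5)))    # full_3d
    print(choose_level(NetworkSample(rtt_ms=250, loss_rate=0.05, jitter_ms=40))) # degraded level
```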

    Scalable high-capacity high-fan-out optical networks for constrained environments

    The investigations carried out as part of this dissertation address the architecture and application of optical access networks for high-capacity, high-fan-out applications such as in-flight entertainment (IFE) and video-gaming environments. High-capacity, high-fan-out optical networks have a multitude of applications, such as expo centers, train area networks (TAN), video gaming competitions, and other settings that require a large number of connected users. To keep the scope of the dissertation within limits, however, we have concentrated this work on IFE systems. IFE systems have been part of the passenger experience for a while now and present unique challenges at the physical and application layers alike. Currently available systems can be considered bare-bones at best due to a lack of adequate performance and support infrastructure. According to Electronic Arts (EA), one of the largest video game developers in the world, demand for electronically distributed video games will exceed that for boxed games in just a few years, which shows a shift towards electronic rather than physical distribution of video game content. Against this backdrop, the dissertation project involved defining a novel system architecture and capacity, based on the requirements, for the development of a novel physical-layer architecture using optical networks for high-speed, high-fan-out distribution of content. At the physical layer of the stacked communication model, a novel high-fan-out optical network was proposed and simulated for high data rates. Having defined the physical layer, the protocol stack was identified through rigorous observation and analysis of a large set of traffic traces obtained from various sources, in order to understand the distribution and behavior of video game traffic compared with regular Internet traffic. Data requirements were laid down based on this analysis, keeping in mind that bandwidth requirements are increasing at a tremendous pace and that the network should also be able to support future high-definition and 3D gaming. Based on the data analysis, analytical models and latency-analysis models were developed for bandwidth allocation in the high-fan-out network architectures. Analytical modeling gives insight into the performance of the technique as a function of incoming traffic, whereas latency analysis exposes the delay factors involved in running the technique over time. "State-full bandwidth allocation" (SBA) was proposed as part of the network-layer design for upstream transmission. The novel technique keeps state information from previous allocation rounds for use in future allocation. The results show that the proposed high-fan-out, high-capacity physical-layer architecture can be used to distribute video-gaming content. Latency analysis and the design and development of the novel SBA algorithm were also carried out. The results were quite promising, in that a large number of users can be supported on a single-channel network, and the SBA criteria can be applied to multi-channel networks such as the physical architecture proposed, simulated, and investigated in this project. In summary, the project involved the design of a novel physical layer, network layer, and protocol stack of the communication model, verified by simulation and mathematical modeling while adhering to application-layer requirements.
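
    The dissertation's SBA algorithm is not reproduced in this abstract; the following is only a toy sketch of the general idea of stateful upstream allocation, where each cycle blends newly reported demand with state retained from previous cycles before sharing the channel. The capacity, smoothing factor, and user names are assumptions made for illustration.

```python
# Toy stateful upstream bandwidth allocator: each allocation cycle blends the
# current request with state remembered from previous cycles, then shares the
# channel capacity proportionally. CAPACITY_MBPS and ALPHA are illustrative
# assumptions, not values from the dissertation.

CAPACITY_MBPS = 1000.0   # total upstream capacity of the shared channel
ALPHA = 0.5              # weight given to newly reported demand

class StatefulAllocator:
    def __init__(self):
        self.history = {}  # user id -> smoothed demand carried over between cycles

    def allocate(self, requests):
        # Blend new requests with remembered demand (the "state" kept by SBA).
        for user, demand in requests.items():
            prev = self.history.get(user, demand)
            self.history[user] = ALPHA * demand + (1 - ALPHA) * prev
        total = sum(self.history.values())
        if total <= CAPACITY_MBPS:
            return dict(self.history)
        # Oversubscribed: scale grants proportionally to smoothed demand.
        scale = CAPACITY_MBPS / total
        return {user: d * scale for user, d in self.history.items()}

if __name__ == "__main__":
    alloc = StatefulAllocator()
    print(alloc.allocate({"onu1": 400, "onu2": 300}))  # fits within capacity
    print(alloc.allocate({"onu1": 900, "onu2": 600}))  # scaled down proportionally
```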

    Effective and Economical Content Delivery and Storage Strategies for Cloud Systems

    Cloud computing has proved to be an effective infrastructure for hosting various applications and providing reliable, stable services. Content delivery and storage are two main services provided by the cloud. A high-performance cloud can reduce costs for both cloud providers and customers while delivering high application performance to cloud clients. The performance of such cloud-based services is thus closely related to three issues. First, when delivering contents from the cloud to users, it is important to reduce payment costs and transmission time. Second, when transferring contents between cloud datacenters, it is important to reduce the payment costs to the Internet service providers (ISPs). Third, when storing contents in the datacenters, it is crucial to reduce file read latency and the power consumption of the datacenters. In this dissertation, we study how to effectively deliver and store contents on the cloud, with a focus on cloud gaming and video streaming services. In particular, we aim to address three problems: i) a cost-efficient cloud computing system to support thin-client Massively Multiplayer Online Games (MMOGs): how to achieve high Quality of Service (QoS) in cloud gaming while reducing cloud bandwidth consumption; ii) cost-efficient inter-datacenter video scheduling: how to reduce the bandwidth payment cost by fully utilizing link bandwidth when cloud providers transfer videos between datacenters; and iii) energy-efficient adaptive file replication: how to adapt to time-varying file popularities to achieve a good tradeoff between data availability and efficiency, while reducing the power consumption of the datacenters. We propose methods to solve each of these challenges and, as a result, build a cloud system comprising a cost-efficient system to support cloud clients, an inter-datacenter video scheduling algorithm for video transmission on the cloud, and an adaptive file replication algorithm for the cloud storage system. The cloud system not only benefits cloud providers by reducing cloud cost, but also benefits cloud customers by reducing their payment cost and improving cloud application performance (i.e., user experience). Finally, we conducted extensive experiments on many testbeds, including PeerSim, PlanetLab, EC2, and a real-world cluster, which demonstrate the efficiency and effectiveness of our proposed methods. In future work, we will study how to further improve user experience in receiving contents and reduce the cost of content transfer.
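
    The dissertation's actual replication algorithm is not given in this abstract; below is only a minimal sketch of the underlying idea of popularity-adaptive replication, where hot files get more replicas for availability and latency and cold files shrink back so storage and power can be saved. The bounds and the logarithmic rule are assumptions chosen for illustration.

```python
import math

# Toy popularity-adaptive replication: the replica count of a file grows with
# its recent request rate, within fixed bounds. MIN/MAX_REPLICAS and the
# log2 rule are illustrative assumptions, not parameters from the dissertation.

MIN_REPLICAS = 1   # always keep at least one copy for availability
MAX_REPLICAS = 8   # cap to bound storage and power consumption

def target_replicas(requests_per_hour: float) -> int:
    """More popular files get more replicas (better read latency and
    availability); cold files fall back toward a single copy so idle
    servers can be powered down."""
    if requests_per_hour <= 0:
        return MIN_REPLICAS
    n = 1 + int(math.log2(1 + requests_per_hour))
    return max(MIN_REPLICAS, min(MAX_REPLICAS, n))

if __name__ == "__main__":
    for rate in (0, 1, 10, 100, 10000):
        print(rate, "req/h ->", target_replicas(rate), "replicas")
```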

    Management and Visualisation of Non-linear History of Polygonal 3D Models

    The research presented in this thesis concerns the problems of maintenance and revision control of large-scale three-dimensional (3D) models over the Internet. As the models grow in size and the authoring tools grow in complexity, standard approaches to collaborative asset development become impractical. The prevalent paradigm of sharing files on a file system poses serious risks with regard to, among other things, ensuring the consistency and concurrency of multi-user 3D editing. Although modifications might be tracked manually using naming conventions, or automatically in a version control system (VCS), understanding the provenance of a large 3D dataset is hard because revision metadata is not associated with the underlying scene structures. Some tools and protocols enable seamless synchronisation of file and directory changes across remote locations. However, existing web-based technologies do not yet fully exploit modern design patterns for access to and management of alternative shared resources online. Therefore, four distinct but highly interconnected conceptual tools are explored. The first is the organisation of 3D assets within recent document-oriented No Structured Query Language (NoSQL) databases. These "schemaless" databases, unlike their relational counterparts, do not represent data in rigid table structures. Instead, they rely on polymorphic documents composed of key-value pairs that are much better suited to the diverse nature of 3D assets. Hence, a domain-specific non-linear revision control system, 3D Repo, is built around a NoSQL database to enable asynchronous editing similar to traditional VCSs. The second concept is that of visual 3D differencing and merging. The accompanying 3D Diff tool supports interactive conflict resolution at the level of scene graph nodes, which are de facto the delta changes stored in the repository. The third is the utilisation of HyperText Transfer Protocol (HTTP) for the purposes of 3D data management. The XML3DRepo daemon application exposes the contents of the repository and the version control logic in a Representational State Transfer (REST) style of architecture. At the same time, it manifests the effects of various 3D encoding strategies on file sizes and download times in modern web browsers. The fourth and final concept is the reverse engineering of an editing history. Even if the models are version controlled, the extracted provenance is limited to additions, deletions and modifications. The 3D Timeline tool therefore infers a plausible history of common modelling operations such as duplications, transformations, etc. Given a collection of 3D models, it estimates a part-based correspondence and visualises it in a temporal flow. The prototype tools developed as part of the research were evaluated in pilot user studies which suggest that they are usable by the end users and well suited to their respective tasks. Together, the results constitute a novel framework that demonstrates the feasibility of domain-specific 3D version control.
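
    A minimal sketch of the document-oriented idea described above: each revision of a scene-graph node is an immutable key-value document carrying its own revision metadata. The field names ("rev_id", "shared_id", etc.) are illustrative assumptions, not the actual 3D Repo schema, and an in-memory list stands in for a database collection such as one in MongoDB.

```python
import uuid
from datetime import datetime, timezone

HISTORY = []  # stands in for a database collection of node revisions

def commit_node(shared_id, node_type, fields, author):
    """Append a new immutable revision of a scene-graph node as a
    schemaless key-value document."""
    doc = {
        "rev_id": str(uuid.uuid4()),          # unique id of this revision
        "shared_id": shared_id,               # stable id linking revisions of one node
        "type": node_type,                    # e.g. "mesh", "transformation"
        "author": author,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        **fields,                             # arbitrary key-value payload
    }
    HISTORY.append(doc)
    return doc

def latest(shared_id):
    """Return the most recent revision of a node (non-linear merge logic omitted)."""
    revs = [d for d in HISTORY if d["shared_id"] == shared_id]
    return max(revs, key=lambda d: d["timestamp"]) if revs else None

if __name__ == "__main__":
    commit_node("chair-01", "mesh", {"vertices": 1204, "material": "oak"}, "alice")
    commit_node("chair-01", "mesh", {"vertices": 1189, "material": "walnut"}, "bob")
    print(latest("chair-01")["material"])  # walnut
```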

    Neural Video Recovery for Cloud Gaming

    Cloud gaming is a multi-billion dollar industry. A client in cloud gaming sends its movements to the game server on the Internet, which renders and transmits the resulting video back. To provide a good gaming experience, a latency below 80 ms is required. This means that video rendering, encoding, transmission, decoding, and display have to finish within that time frame, which is especially challenging due to server overload, network congestion, and losses. In this paper, we propose a new method for recovering lost or corrupted video frames in cloud gaming. Unlike traditional video frame recovery, our approach uses game states to significantly enhance recovery accuracy and utilizes partially decoded frames to recover the lost portions. We develop a holistic system that consists of (i) efficiently extracting game states, (ii) modifying an H.264 video decoder to generate a mask indicating which portions of video frames need recovery, and (iii) designing a novel neural network to recover either complete or partial video frames. Our approach is extensively evaluated using iPhone 12 and laptop implementations, and we demonstrate the utility of game states in game video recovery and the effectiveness of our overall design.
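
    The paper's actual network design is not described in this abstract; the sketch below only illustrates the interface implied by the three components: a partially decoded frame, a decoder-produced mask of lost regions, and rasterized game-state features are fed to a model that fills in only the masked pixels. The tiny convolutional net and the shape of the game-state features are assumptions for illustration (PyTorch is assumed to be available).

```python
import torch
import torch.nn as nn

class ToyRecoveryNet(nn.Module):
    """Stand-in for mask-guided frame recovery conditioned on game state."""

    def __init__(self, state_channels: int = 4):
        super().__init__()
        in_ch = 3 + 1 + state_channels  # RGB frame + mask + game-state features
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
        )

    def forward(self, partial_frame, mask, state_feats):
        x = torch.cat([partial_frame, mask, state_feats], dim=1)
        pred = self.net(x)
        # Keep correctly decoded pixels; only fill in the masked (lost) regions.
        return partial_frame * (1 - mask) + pred * mask

if __name__ == "__main__":
    model = ToyRecoveryNet()
    frame = torch.rand(1, 3, 64, 64)                  # partially decoded frame
    mask = (torch.rand(1, 1, 64, 64) > 0.9).float()   # 1 where data was lost
    state = torch.rand(1, 4, 64, 64)                  # rasterized game-state features
    recovered = model(frame, mask, state)
    print(recovered.shape)                            # torch.Size([1, 3, 64, 64])
```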

    Comparative Study of Anti-cheat Methods in Video Games

    Online gaming is more popular than ever, and many video game companies rely on the cash flow generated by online games. For a game to be successful, it has to be resilient against cheating, the presence of which can ruin an otherwise successful game. Cheating in a video game can bankrupt an entire company as the non-cheating players leave the game because of unscrupulous individuals using cheats to gain an unfair advantage. Cheating can also involve criminal activity, where maliciously acquired in-game items are traded for real money online. Commercial cheat programs are sold on online black markets and are available even to players who have no deep technical knowledge. The widespread availability and easy accessibility of cheats compounds the issue. This thesis will categorize different anti-cheat techniques and give a brief history of anti-cheat starting from the early 1980s. The history section describes how the fight against online cheating began and how it has evolved over the years. The thesis will compare different anti-cheat methods, both client-side and server-side, and draw conclusions about their viability. It will also look at scenarios where different anti-cheat methods are combined to create more powerful systems. All the anti-cheat methods will be evaluated against five criteria on a scale of 1 to 4, with 1 being the lowest score and 4 the highest. The thesis will use a custom-built client-server game as an example to illustrate many of the anti-cheat techniques. The requirements of different types of games, such as first-person shooters and strategy games, will also be considered when reviewing the anti-cheat techniques. Lastly, the thesis will look into the future of anti-cheat and introduce video game streaming and the use of machine learning as possible new solutions to tackle cheating. The conclusion will summarize the advantages and disadvantages of the different methods and show which techniques are preferable based on the analysis.
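
    The thesis's custom client-server game is not reproduced here; as a minimal illustration of one classic server-side technique it discusses, the sketch below shows an authoritative movement check in which the server distrusts client-reported positions and rejects moves that exceed the game's speed limit. The limit and tolerance values are assumptions for illustration.

```python
import math

MAX_SPEED = 7.0    # game units per second (illustrative assumption)
TOLERANCE = 1.10   # allow 10% slack for latency and rounding

def validate_move(old_pos, new_pos, dt_seconds):
    """Return True if the reported move could have been made legitimately.
    The server never trusts the client; it only checks physical plausibility."""
    if dt_seconds <= 0:
        return False
    dist = math.dist(old_pos, new_pos)
    return dist <= MAX_SPEED * dt_seconds * TOLERANCE

if __name__ == "__main__":
    print(validate_move((0.0, 0.0), (0.5, 0.3), 0.1))   # True: plausible step
    print(validate_move((0.0, 0.0), (50.0, 0.0), 0.1))  # False: speed hack / teleport
```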

    Interactive Visualization on High-Resolution Tiled Display Walls with Network Accessible Compute- and Display-Resources

    Papers 2-7 and appendices B and C of this thesis are not available in Munin:
    2. Hagen, T-M.S., Johnsen, E.S., Stødle, D., Bjorndalen, J.M. and Anshus, O.: 'Liberating the Desktop', First International Conference on Advances in Computer-Human Interaction (2008), pp. 89-94. Available at http://dx.doi.org/10.1109/ACHI.2008.20
    3. Tor-Magne Stien Hagen, Oleg Jakobsen, Phuong Hoai Ha, and Otto J. Anshus: 'Comparing the Performance of Multiple Single-Cores versus a Single Multi-Core' (manuscript)
    4. Tor-Magne Stien Hagen, Phuong Hoai Ha, and Otto J. Anshus: 'Experimental Fault-Tolerant Synchronization for Reliable Computation on Graphics Processors' (manuscript)
    5. Tor-Magne Stien Hagen, Daniel Stødle and Otto J. Anshus: 'On-Demand High-Performance Visualization of Spatial Data on High-Resolution Tiled Display Walls', Proceedings of the International Conference on Imaging Theory and Applications and International Conference on Information Visualization Theory and Applications (2010), pp. 112-119. Available at http://dx.doi.org/10.5220/0002849601120119
    6. Bård Fjukstad, Tor-Magne Stien Hagen, Daniel Stødle, Phuong Hoai Ha, John Markus Bjørndalen and Otto Anshus: 'Interactive Weather Simulation and Visualization on a Display Wall with Many-Core Compute Nodes', Para 2010 – State of the Art in Scientific and Parallel Computing. Available at http://vefir.hi.is/para10/extab/para10-paper-60
    7. Tor-Magne Stien Hagen, Daniel Stødle, John Markus Bjørndalen, and Otto Anshus: 'A Step towards Making Local and Remote Desktop Applications Interoperable with High-Resolution Tiled Display Walls', Lecture Notes in Computer Science (2011), Volume 6723/2011, pp. 194-207. Available at http://dx.doi.org/10.1007/978-3-642-21387-8_15
    The vast volume of scientific data produced today requires tools that enable scientists to explore large amounts of data and extract meaningful information. One such tool is interactive visualization. The amount of data that can be simultaneously visualized on a computer display is proportional to the display's resolution. While computer systems in general have seen a remarkable increase in performance over the last decades, display resolution has not evolved at the same rate. Increased resolution can be provided by tiling several displays in a grid. A system comprised of multiple displays tiled in such a grid is referred to as a display wall. Display walls provide orders of magnitude more resolution than typical desktop displays, and can provide insight into problems not possible to visualize on desktop displays. However, their distributed and parallel architecture creates several challenges for designing systems that can support interactive visualization. One challenge is compatibility with existing software designed for personal desktop computers. Another set of challenges involves identifying characteristics of visualization systems that can: (i) maintain synchronous state and display output when executed over multiple display nodes; (ii) scale to multiple display nodes without being limited by shared interconnect bottlenecks; (iii) utilize additional computational resources such as desktop computers, clusters and supercomputers for workload distribution; and (iv) use data from local and remote compute- and data-resources with interactive performance. This dissertation presents Network Accessible Compute (NAC) resources and Network Accessible Display (NAD) resources for interactive visualization of data on displays ranging from laptops to high-resolution tiled display walls.
A NAD is a display with functionality that enables its use over a network connection. A NAC is a computational resource that can produce content for network accessible displays. A system consisting of NACs and NADs is either push-based (NACs provide NADs with content) or pull-based (NADs request content from NACs). To attack the compatibility challenge, a push-based system was developed. The system enables several simultaneous users to mirror multiple regions from the desktops of their computers (NACs) onto nearby NADs (among them a 22-megapixel display wall) without requiring separate DVI/VGA cables, permanent installation of third-party software, or opening firewall ports. The system has lower performance than a DVI/VGA cable approach, but increases flexibility, such as the possibility to share network accessible displays from multiple computers. At a resolution of 800 by 600 pixels, the system can mirror dynamic content between a NAC and a NAD at 38.6 frames per second (FPS). At 1600x1200 pixels, the refresh rate is 12.85 FPS. The bottleneck of the system is frame buffer capturing and the encoding/decoding of pixels. These two functional parts are executed in sequence, limiting the usage of additional CPU cores. By pipelining and executing these parts on separate CPU cores, higher frame rates can be expected, by up to a factor of two in the best case. To attack all of the presented challenges, a pull-based system, WallScope, was developed. WallScope enables interactive visualization of local and remote data sets on high-resolution tiled display walls. The WallScope architecture comprises a compute-side and a display-side. The compute-side comprises a set of static and dynamic NACs. Static NACs are considered permanent to the system once added; this type of NAC typically has strict underlying security and access policies. Examples of such NACs are clusters, grids and supercomputers. Dynamic NACs are compute resources that can register on-the-fly to become compute nodes in the system. Examples of this type of NAC are laptops and desktop computers. The display-side comprises a set of NADs and a data set containing data customized for the particular application domain of the NADs. NADs are based on a sort-first rendering approach where a visualization client is executed on each display node. The state of these visualization clients is provided by a separate state server, enabling central control of load and refresh rate. Based on the state received from the state server, the visualization clients request content from the data set. The data set is live in that it translates these requests into compute messages and forwards them to available NACs. The results of the computations are returned to the NADs for the final rendering. The live data set is close to the NADs, both in terms of bandwidth and latency, to enable interactive visualization. WallScope can visualize the Earth, gigapixel images, and other data available through the live data set. When visualizing the Earth on a 28-node display wall by combining the Blue Marble data set with the Landsat data set using a set of static NACs, the bottleneck of WallScope is the computation involved in combining the data sets. However, the time used to combine data sets on the NACs decreases by a factor of 23 when going from 1 to 26 compute nodes. The display-side can decode 414.2 megapixels of images per second (19 frames per second) when visualizing the Earth.
The decoding process is multi-threaded, and higher frame rates are expected using multi-core CPUs. WallScope can rasterize a 350-page PDF document into 550 megapixels of image tiles and display these tiles on a 28-node display wall in 74.66 seconds (PNG) and 20.66 seconds (JPG) using a single quad-core desktop computer as a dynamic NAC. This time is reduced to 4.20 seconds (PNG) and 2.40 seconds (JPG) using 28 quad-core NACs. This shows that the application output from personal desktop computers can be decoupled from the resolution of the local desktop and display for usage on high-resolution tiled display walls. It also shows that performance can be increased by adding computational resources, giving a resulting speedup of 17.77 (PNG) and 8.59 (JPG) using 28 compute nodes. Three principles are formulated based on the concepts and systems researched and developed: (i) establishing the end-to-end principle through customization states that the setup of, and interaction between, a display-side and a compute-side in a visualization context can be performed by customizing one or both sides; (ii) Personal Computer (PC) – Personal Compute Resource (PCR) duality states that a user's computer is both a PC and a PCR, implying that desktop applications can be utilized locally using attached interaction devices and display(s), or remotely by other visualization systems for domain-specific production of data based on a user's personal desktop install; and (iii) domain-specific best-effort synchronization states that for distributed visualization systems running on tiled display walls, state handling can be performed using a best-effort synchronization approach, where visualization clients will eventually get the correct state after a given period of time. Compared to state-of-the-art systems presented in the literature, the contributions of this dissertation enable the utilization of a broader range of compute resources from a display wall, while at the same time providing better control over where to provide functionality and where to distribute workload between compute nodes and display nodes in a visualization context.
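
    A structural sketch of the pipelining idea mentioned for the push-based mirroring system: frame-buffer capture and pixel encoding run as separate stages connected by a bounded queue instead of in sequence. The stage functions are placeholders, not the dissertation's code; a real implementation would run the stages on distinct CPU cores (native threads or processes) to approach the expected up-to-2x improvement.

```python
import queue
import threading
import time

FRAME_COUNT = 5
frames = queue.Queue(maxsize=2)  # bounded queue provides back-pressure between stages

def capture_stage():
    for i in range(FRAME_COUNT):
        time.sleep(0.01)               # stand-in for grabbing the frame buffer
        frames.put(f"frame-{i}")
    frames.put(None)                   # sentinel: no more frames

def encode_stage():
    while True:
        frame = frames.get()
        if frame is None:
            break
        time.sleep(0.01)               # stand-in for encoding the pixels
        print("encoded", frame)

if __name__ == "__main__":
    t1 = threading.Thread(target=capture_stage)
    t2 = threading.Thread(target=encode_stage)
    t1.start(); t2.start()
    t1.join(); t2.join()
```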

    Service-centric networking

    This chapter introduces a new paradigm for service-centric networking. Building upon recent proposals in the area of information-centric networking, a similar treatment of services – where networked software functions, rather than content, are dynamically deployed, replicated and invoked – is discussed. Service-centric networking provides the mechanisms required to deploy replicated service instances across highly distributed networked cloud infrastructures and to route client requests to the closest instance, while providing more efficient use of the network infrastructure, improved QoS, and new business opportunities for application and service providers.
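
    The chapter's actual resolution mechanism is not specified in this abstract; the sketch below only illustrates the request-routing idea in the simplest possible form: given several replicated instances of a named service, pick the one with the lowest measured cost. The service name, hostnames, and RTT values are assumptions for illustration.

```python
# Toy service resolution: route a client request for a named service to the
# replicated instance with the lowest measured round-trip time.

SERVICE_REGISTRY = {
    "video-transcode": [
        {"host": "eu-west.example.net", "rtt_ms": 18.0},
        {"host": "us-east.example.net", "rtt_ms": 92.0},
        {"host": "ap-south.example.net", "rtt_ms": 210.0},
    ],
}

def resolve(service_name: str) -> str:
    """Return the host of the closest registered instance of the service."""
    instances = SERVICE_REGISTRY.get(service_name, [])
    if not instances:
        raise LookupError(f"no instance registered for {service_name!r}")
    best = min(instances, key=lambda inst: inst["rtt_ms"])
    return best["host"]

if __name__ == "__main__":
    print(resolve("video-transcode"))  # eu-west.example.net
```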

    Reconciling Intellectual and Personal Property

    This Article examines both the forces undermining copy ownership and the important functions it serves within the copyright system in order to construct a workable notion of consumer property rights in digital media. Part I begins by examining the relationship between intellectual and personal property. Sometimes courts have treated those rights as inseparable, as if transfer of a copy entails transfer of the intangible right, or retention of the copyright entails ongoing control over particular copies. But Congress and most courts have recognized personal and intellectual property as interests that can be transferred separately. Although this is the better view, it frequently overstates the independence of copyrights and rights in copies. Those interests interact; each helps to define the boundary of the other. The exhaustion principle, though historically associated with a clear distinction between copy and copyright, is in fact the primary tool in copyright law for mediating the somewhat indistinct line separating the copy and the work. Part II begins to outline the breakdown of this once-stable equilibrium, focusing on the erosion of the notion of consumer ownership. In recent decades, courts have created two distinct regimes for resolving questions of copy ownership: one that applies to software and one that applies to everything else. The software regime endorses rightsholders' efforts to "license" particular copies of their works, in contrast to the general skepticism with which courts regard such efforts. This dichotomy is driven in part by software exceptionalism—the notion that for a variety of reasons software should be treated differently. But the growing acceptance of the licensing model also reflects changing views of property. Those shifts opened the door to the substitution of statutory property rights with unilateral contract terms. As the line separating software from other media becomes increasingly blurred, the thinking reflected in the software cases suggests a creeping erosion of copy ownership. But as Part III details, the erosion of ownership is only half of the problem. The copy, once the uncontroversial locus of consumer property rights, has transformed as well. Copies were once persistent, valuable, readily identifiable, and easily accounted for. But the days of the unitary copy are numbered. Today, copies are discarnate, ephemeral, ubiquitous, and of little value in themselves. In part these changes reflect shifting consumer demand. But more importantly, they signal the increasing disconnect between copyright law's conception of the copy and today's technological reality. We are witnessing a blurring of the formerly clear distinction between the intangible work and the tangible copy, a distinction that has been central to copyright law's approach to balancing intellectual and personal property rights. In light of the challenge of squaring the existing doctrinal framework for exhaustion with these developments, we turn in Part IV to an examination of the functions served by copy ownership. With a better understanding of the role of copy ownership within the copyright system, we will be better positioned to craft an approach to consumer property rights in the post-copy era. We identify three primary functions of copy ownership.
First, locating consumer rights in a particular copy helps preserve the rivalry that distinguishes real property from intellectual property, thus preventing consumer rights from unduly interfering with the copyright holder's ability to exploit the work. Second, copy ownership encourages consumers to participate in copyright markets rather than rely on unauthorized sources of content. Third, a stable and reliable notion of copy ownership reduces information cost externalities by eliminating idiosyncratic transfers of rights in copies. Part V argues that while these three functions historically have been bound up in the single, unitary copy that defined distribution in the nineteenth and twentieth centuries, the copy is not essential to achieving those goals. Instead, the copy served as a token, signaling that each of the three functional concerns at the heart of exhaustion was satisfied. With this new understanding of the place of the copy, we outline the structure of a new exhaustion doctrine that more carefully and transparently interrogates exhaustion's underlying policy concerns, rather than using the unitary copy as a proxy for consumer rights.