96 research outputs found

    A cloud gaming framework for dynamic graphical rendering towards achieving distributed game engines

    Cloud gaming has gained growing success in recent years in delivering games-as-a-service by leveraging cloud resources. Existing cloud gaming frameworks deploy the entire game engine within Virtual Machines (VMs) due to the tight coupling of game engine subsystems (graphics, physics, AI). The effectiveness of such an approach depends heavily on the cloud VM providing consistently high levels of performance, availability, and reliability. However, this assumption is difficult to guarantee due to QoS degradation within and outside of the cloud (from system failure and network connectivity to consumer data caps), all of which may result in game service outage. We present a cloud gaming framework that creates a distributed game engine by loosely coupling the graphical renderer from the game engine, allowing it to execute dynamically across cloud VMs and client devices. Our framework allows games to continue operating during performance degradation and cloud service failure, and enables game developers to exploit heterogeneous graphics APIs unrestricted by operating system and hardware constraints. Our initial experiments show that the framework improves game frame rates by up to 33% via frame interlacing between cloud and client systems.
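    To make the frame-interlacing idea concrete, here is a minimal C++ sketch (not the authors' implementation; `QosMonitor`, `assignFrame`, and the 50 ms threshold are hypothetical) of how frames might be assigned alternately to cloud and client renderers, falling back to client-only rendering when cloud QoS degrades:

```cpp
// Minimal sketch: alternating frame assignment between a cloud renderer
// and a local client renderer, with graceful fallback to local-only
// rendering when cloud QoS degrades. All names are illustrative.
#include <cstdint>
#include <iostream>

enum class Renderer { Cloud, Client };

struct QosMonitor {
    double cloudLatencyMs = 20.0;  // measured round-trip latency (assumed metric)
    bool   cloudReachable = true;  // liveness probe result
    bool cloudHealthy() const { return cloudReachable && cloudLatencyMs < 50.0; }
};

// Interlace even/odd frames across cloud and client while the cloud is
// healthy; otherwise keep the game running on the client alone.
Renderer assignFrame(uint64_t frameIndex, const QosMonitor& qos) {
    if (!qos.cloudHealthy()) return Renderer::Client;
    return (frameIndex % 2 == 0) ? Renderer::Cloud : Renderer::Client;
}

int main() {
    QosMonitor qos;
    for (uint64_t f = 0; f < 6; ++f) {
        Renderer r = assignFrame(f, qos);
        std::cout << "frame " << f << " -> "
                  << (r == Renderer::Cloud ? "cloud\n" : "client\n");
        if (f == 3) qos.cloudReachable = false;  // simulated cloud outage
    }
}
```

    A real framework would also have to migrate renderer state and re-synchronize once the cloud VM recovers; the sketch only shows the per-frame placement decision.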

    Mobile graphics: SIGGRAPH Asia 2017 course

    Peer reviewed. Postprint (published version).

    FleXR: A System Enabling Flexibly Distributed Extended Reality

    Extended reality (XR) applications require computationally demanding functionality with low end-to-end latency and high throughput. To enable XR on commodity devices, a number of distributed-systems solutions offload XR workloads onto remote servers. However, they make a priori decisions about which functionality to offload based on assumptions about operating factors, so their benefits are restricted to specific deployment contexts. To realize the benefits of offloading in varied distributed environments, we present FleXR, a distributed stream processing system specialized for real-time, interactive workloads that enables flexible distribution of XR functionality. In building FleXR, we identified and resolved several issues in expressing XR functionality as distributed pipelines. FleXR provides a framework for flexibly distributing XR pipelines while streamlining the development and deployment phases. We evaluate FleXR with three XR use cases in four different distribution scenarios. In the results, the best-case distribution scenario shows up to 50% lower end-to-end latency and 3.9× the pipeline throughput compared to the alternatives.
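    The pipeline view of XR functionality that FleXR argues for can be illustrated with a small sketch (an illustration of the concept, not FleXR's actual API; `Kernel`, `Placement`, and `runPipeline` are invented names):

```cpp
// Illustrative sketch only: an XR workload expressed as a pipeline of
// kernels whose placement (local device vs remote server) is chosen at
// deployment time rather than hard-coded into the application.
#include <functional>
#include <iostream>
#include <string>
#include <vector>

enum class Placement { LocalDevice, RemoteServer };

struct Kernel {
    std::string name;
    Placement   placement;  // decided per deployment scenario
    std::function<std::string(const std::string&)> work;
};

// Run the pipeline; a real system would route messages over the network
// for Remote kernels instead of calling them in-process.
std::string runPipeline(const std::vector<Kernel>& kernels, std::string frame) {
    for (const auto& k : kernels) {
        std::cout << k.name << " on "
                  << (k.placement == Placement::LocalDevice ? "device" : "server")
                  << '\n';
        frame = k.work(frame);
    }
    return frame;
}

int main() {
    // One hypothetical distribution scenario: capture and display stay on
    // the device, the heavy tracking stage is offloaded to a server.
    std::vector<Kernel> pipeline = {
        {"camera_capture",  Placement::LocalDevice,  [](const std::string& f){ return f + "|captured"; }},
        {"object_tracking", Placement::RemoteServer, [](const std::string& f){ return f + "|tracked"; }},
        {"overlay_render",  Placement::LocalDevice,  [](const std::string& f){ return f + "|rendered"; }},
    };
    std::cout << runPipeline(pipeline, "frame0") << '\n';
}
```

    The appeal of such a design is that one pipeline description can serve all four distribution scenarios in the evaluation: only the per-kernel placement changes, not the kernels themselves.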

    Triangle Dropping: An occluded-geometry predictor for energy-efficient mobile GPUs

    This article proposes a novel micro-architectural approach for mobile GPUs that removes occluded geometry from a scene early by leveraging frame-to-frame coherence, thus reducing overall energy consumption. Mobile GPUs commonly implement a Tile-Based Rendering (TBR) architecture with two main phases: the Geometry Pipeline, where all the geometry of a scene is processed, and the Raster Pipeline, where primitives are rendered into a framebuffer. After the Geometry Pipeline, only non-culled primitives inside the camera's frustum are stored in the Parameter Buffer, a data structure held in DRAM. However, a significant fraction of those non-culled primitives are rendered yet not visible at all, resulting in useless computation: on average, 60% of them are completely occluded in our benchmarks. Although TBR architectures use on-chip caches for the Parameter Buffer, about 46% of DRAM traffic still comes from accesses to that buffer. The proposed Triangle Dropping technique leverages the visibility information computed along the Raster Pipeline to predict primitive visibility in the next frame and discard early those primitives that will be totally occluded, drastically reducing Parameter Buffer accesses. On average, our approach achieves 14.5% overall energy savings, 28.2% energy-delay-product savings, and a 20.2% speedup. This work has been supported by the CoCoUnit ERC Advanced Grant of the EU's Horizon 2020 program (grant no. 833057), the Spanish State Research Agency (MCIN/AEI) under grant PID2020-113172RB-I00 (AEI/FEDER, EU), and the ICREA Academia program. D. Corbalán-Navarro has also been supported by a PhD research fellowship from the University of Murcia's "Plan Propio de Investigación".
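    The prediction loop at the heart of Triangle Dropping can be sketched in software as follows (the real mechanism is a hardware micro-architecture unit; the class and method names here are hypothetical illustrations of the frame-to-frame coherence idea):

```cpp
// Software sketch of a frame-to-frame occlusion predictor: triangles
// found fully occluded by the raster pipeline in frame N are predicted
// occluded in frame N+1 and dropped before reaching the Parameter Buffer.
#include <cstdint>
#include <unordered_set>

class TriangleDropPredictor {
    std::unordered_set<uint32_t> occludedLastFrame_;  // fully occluded IDs
    std::unordered_set<uint32_t> occludedThisFrame_;
public:
    // Geometry pipeline query: drop triangles predicted to stay occluded.
    bool shouldDrop(uint32_t triangleId) const {
        return occludedLastFrame_.count(triangleId) != 0;
    }
    // Raster pipeline feedback: record triangles found fully occluded.
    void reportOccluded(uint32_t triangleId) {
        occludedThisFrame_.insert(triangleId);
    }
    // End of frame: this frame's visibility becomes next frame's prediction.
    void endFrame() {
        occludedLastFrame_.swap(occludedThisFrame_);
        occludedThisFrame_.clear();
    }
};

int main() {
    TriangleDropPredictor p;
    p.reportOccluded(7);  // frame 0: raster finds triangle 7 fully hidden
    p.endFrame();
    return p.shouldDrop(7) ? 0 : 1;  // frame 1: triangle 7 is dropped early
}
```

    A triangle wrongly predicted as occluded would be missing from the frame, so a real implementation must predict conservatively or recover from mispredictions; the sketch omits that handling.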

    Vr & Web Gui Shell: Interactive Web-System For Virtual Reality

    In this article, we present the idea of an interactive web system for virtual reality (VR). First, we discuss the role of a web system for VR and its relevance and importance. Then, we give a brief overview of the existing technologies that allow this idea to be implemented at the software level. In particular, we review how the system's functions could be gathered in one place: display of standard and 360° video, with dynamic playlists and description windows; online maps, with a focus on «street view» mode; output of web pages, based on automatic transformation or the ready-made «VR mode» of WebVR technology; and the adaptation of the browser to VR. The latter must address two important problems: user-friendly navigation and high performance, since VR on the web involves many extra layers and instruction-translation steps, all of which reduce performance. We therefore attempt to invert the usual arrangement and build the web system inside the virtual reality system. Alongside this, we present concepts that clearly demonstrate our idea. Finally, we outline possible directions for the future development of our web system.

    Shader optimization and specialization

    In the field of real-time graphics for computer games, performance has a significant effect on the player's enjoyment and immersion. Graphics processing units (GPUs) are hardware accelerators that run small parallelized shader programs to speed up computationally expensive rendering calculations. This thesis examines optimizing shader programs and explores ways in which data patterns on both the CPU and GPU can be analyzed to automatically speed up rendering in games. Initially, the effect of traditional compiler optimizations on shader source code was explored. Techniques such as loop unrolling or arithmetic reassociation provided speed-ups on several devices, but different GPU hardware responded differently to each set of optimizations. Analyzing execution traces from numerous popular PC games revealed that much of the data passed from CPU-based API calls to GPU-based shaders is either unused or remains constant. A system was developed to capture this constant data and fold it into the shaders' source code. Re-running the games' rendering code using these specialized shader variants resulted in performance improvements in several commercial games without impacting their visual quality.
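    The constant-detection step described above can be sketched as follows (a minimal illustration assuming uniform uploads are observed from an API trace; `UniformConstantTracker` and the example uniform names are invented):

```cpp
// Sketch of constant-uniform detection: watch the values an application
// binds to each uniform across draw calls; a uniform whose value never
// changes is a candidate for folding into a specialized shader variant.
#include <optional>
#include <string>
#include <unordered_map>

class UniformConstantTracker {
    struct Entry { float value; bool constant; };
    std::unordered_map<std::string, Entry> seen_;
public:
    // Called for every uniform upload observed in the captured trace.
    void observe(const std::string& name, float value) {
        auto [it, inserted] = seen_.try_emplace(name, Entry{value, true});
        if (!inserted && it->second.value != value)
            it->second.constant = false;  // value varied: not foldable
    }
    // Returns the foldable value if the uniform was constant for all draws.
    std::optional<float> foldableValue(const std::string& name) const {
        auto it = seen_.find(name);
        if (it != seen_.end() && it->second.constant) return it->second.value;
        return std::nullopt;
    }
};

int main() {
    UniformConstantTracker t;
    t.observe("fogDensity", 0.02f);  // draw 1
    t.observe("fogDensity", 0.02f);  // draw 2: same value, foldable
    t.observe("time", 1.0f);         // draw 1
    t.observe("time", 2.0f);         // draw 2: varies, not foldable
    return t.foldableValue("fogDensity").has_value() ? 0 : 1;
}
```

    A uniform reported as foldable could then have its declaration rewritten in the specialized variant, e.g. `uniform float fogDensity;` becoming `const float fogDensity = 0.02;`, letting the shader compiler constant-fold the downstream arithmetic.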

    A Modular and Open-Source Framework for Virtual Reality Visualisation and Interaction in Bioimaging

    Life science today involves computational analysis of a large amount and variety of data, such as volumetric data acquired with state-of-the-art microscopes, or mesh data derived from the analysis of such data or from simulations. The advent of new imaging technologies, such as lightsheet microscopy, confronts users with an ever-growing amount of data, with terabytes of imaging data created within a single day. With the possibility of gentler and higher-performance imaging, the spatiotemporal complexity of the model systems or processes of interest is increasing as well. Visualisation is often the first step in making sense of this data, and a crucial part of building and debugging analysis pipelines. It is therefore important that visualisations can be quickly prototyped, as well as developed into or embedded in full applications. To better judge spatiotemporal relationships, immersive hardware, such as Virtual or Augmented Reality (VR/AR) headsets and associated controllers, is becoming an invaluable tool. In this work we present scenery, a modular and extensible visualisation framework for the Java VM that can handle mesh and large volumetric data containing multiple views, timepoints, and color channels. scenery is free and open-source software, works on all major platforms, and uses the Vulkan or OpenGL rendering APIs. We introduce scenery's main features, and discuss its use with VR/AR hardware and in distributed rendering. In addition to the visualisation framework, we present a series of case studies where scenery provides tangible benefit in developmental and systems biology: with Bionic Tracking, we demonstrate a new technique for tracking cells in 4D volumetric datasets via eye gaze in a virtual reality headset, with the potential to speed up manual tracking tasks by an order of magnitude. We further introduce ideas for moving towards virtual reality-based laser ablation, and perform a user study to gain insight into performance, acceptance, and issues when performing ablation tasks with virtual reality hardware in fast-developing specimens. To tame the amount of data originating from state-of-the-art volumetric microscopes, we present ideas for rendering the highly efficient Adaptive Particle Representation, and finally we present sciview, an ImageJ2/Fiji plugin that makes the features of scenery available to a wider audience.

    Cross-platform 2D game framework

    One of the most useful tools for game development is a game framework: usually complex software that offers abstractions over game components such as rendering, physics, sound, user input, or AI. The goal of this thesis is to create a simple game framework for 2D games, focused on performance, extensibility, and cross-platform support. A second goal of this thesis is the implementation of an example game to demonstrate the framework's functions and functionality. The framework was developed in C++ together with parts of the SDL library, with Windows and Linux chosen as the target platforms. The example game was successfully implemented and tested on both platforms, exercising most of the framework's capabilities. (Department of Software and Computer Science Education, Faculty of Mathematics and Physics)
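    As a rough illustration of the kind of window and event-loop boilerplate such a framework abstracts away, here is a minimal SDL2 skeleton (a generic sketch, not the thesis code):

```cpp
// Minimal SDL2 skeleton of the window/event/render loop a 2D game
// framework would typically wrap behind its rendering and input subsystems.
#include <SDL.h>

int main(int, char**) {
    if (SDL_Init(SDL_INIT_VIDEO) != 0) return 1;
    SDL_Window* window = SDL_CreateWindow("Demo", SDL_WINDOWPOS_CENTERED,
                                          SDL_WINDOWPOS_CENTERED, 800, 600, 0);
    SDL_Renderer* renderer = SDL_CreateRenderer(window, -1,
                                                SDL_RENDERER_ACCELERATED);
    if (!window || !renderer) return 1;

    bool running = true;
    while (running) {
        SDL_Event event;                    // user-input subsystem
        while (SDL_PollEvent(&event))
            if (event.type == SDL_QUIT) running = false;

        SDL_SetRenderDrawColor(renderer, 30, 30, 30, 255);
        SDL_RenderClear(renderer);          // rendering subsystem
        // ... game objects would draw themselves here ...
        SDL_RenderPresent(renderer);
    }
    SDL_DestroyRenderer(renderer);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}
```

    A framework hides this boilerplate so that game code only registers scenes, sprites, and input handlers, which is what makes the same game portable across Windows and Linux.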

    A Framework for Client-Server Objects Streaming in VR

    In the wake of the technological improvements of the last decade, such as the development of network infrastructure, and owing to the need for connection created by the lockdowns used to restrict the spread of the COVID-19 pandemic, interest in VR and AR technologies has increased. Many tech companies are investing in designing their own version of a "Metaverse": a multi-user virtual environment in which people can interact remotely and share experiences. These prototypes share a common problem: before entering a room or world, the client device needs to download the data for the entire environment, which slows down the user experience and can exhaust the memory of the local device. The aim of this thesis is to develop a possible solution to these problems, namely a 3D object streaming system that shares with clients, and keeps synchronized, only the entities that they can currently see, perceive, and interact with.
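    The core of such an interest-management scheme can be sketched as follows (an illustration of the idea, not the thesis implementation; a simple perception radius stands in for whatever visibility test the system actually uses):

```cpp
// Sketch of server-side interest management: stream to each client only
// the entities within its perception radius, instead of sending the
// whole world up front.
#include <cmath>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

float distance(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

struct Entity { uint32_t id; Vec3 position; };

// Entities the client can currently perceive; only these are sent and
// kept in sync, so join time and client memory stay bounded.
std::vector<uint32_t> visibleSet(const std::vector<Entity>& world,
                                 const Vec3& clientPos, float radius) {
    std::vector<uint32_t> ids;
    for (const auto& e : world)
        if (distance(e.position, clientPos) <= radius) ids.push_back(e.id);
    return ids;
}

int main() {
    std::vector<Entity> world = {{1, {0, 0, 0}}, {2, {100, 0, 0}}};
    Vec3 client{1, 0, 0};
    return visibleSet(world, client, 10.0f).size() == 1 ? 0 : 1;
}
```

    A real server would also diff this set against what each client already holds, streaming only newly visible entities and evicting those that have left the client's perception range.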