216 research outputs found

    QuadStream: A Quad-Based Scene Streaming Architecture for Novel Viewpoint Reconstruction

    Get PDF

    A framework for realistic real-time walkthroughs in a VR distributed environment

    Get PDF
    Virtual and augmented reality (VR/AR) are increasingly being used in various business scenarios and are important driving forces in technology development. However, the use of these technologies in the home environment is restricted by several factors, including the lack of low-cost (from the client's point of view), high-performance solutions. In this paper we present a general client/server rendering architecture based on real-time concepts, including support for a wide range of client platforms and applications. The idea of focusing on the real-time behaviour of all components involved in distributed IP-based VR scenarios is new and has not been addressed before, except for simple sub-solutions; this is considered “the most significant problem with the IP environment” [1]. Thus, the most important contribution of this research is its holistic approach, in which networking, end-system and rendering aspects are integrated into a cost-effective infrastructure for building distributed real-time VR applications on IP-based networks.
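
    The abstract describes the architecture only at a high level; as an illustration, the sketch below shows the kind of deadline-aware client/server render loop such a real-time architecture implies. It is a minimal stand-alone sketch, not the authors' system: the viewpoint type, the `render_frame` stand-in and the 33 ms frame deadline are all hypothetical assumptions.

```python
import time
from dataclasses import dataclass

FRAME_DEADLINE_MS = 33.0  # ~30 fps budget; hypothetical real-time constraint


@dataclass
class Viewpoint:
    x: float
    y: float
    z: float
    yaw: float


def render_frame(view: Viewpoint) -> bytes:
    """Stand-in for the server-side renderer: returns an encoded frame."""
    time.sleep(0.01)  # pretend rendering takes ~10 ms
    return f"frame@({view.x:.1f},{view.y:.1f},{view.z:.1f})".encode()


def serve_client(view: Viewpoint, last_frame: bytes) -> bytes:
    """Render within the frame deadline; otherwise reuse the previous frame,
    one simple way to keep the end-to-end loop real-time."""
    start = time.monotonic()
    frame = render_frame(view)
    elapsed_ms = (time.monotonic() - start) * 1000.0
    if elapsed_ms > FRAME_DEADLINE_MS:
        return last_frame  # deadline missed: degrade gracefully
    return frame


if __name__ == "__main__":
    last = b"<empty>"
    for step in range(3):
        view = Viewpoint(x=step * 0.5, y=0.0, z=2.0, yaw=0.0)
        last = serve_client(view, last)
        print(last.decode())
```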

    AN INTERACTIVE REMOTE VISUALIZATION SYSTEM FOR MOBILE APPLICATION ACCESS

    Get PDF
    This paper introduces a remote visualization approach that enables the visualization of data sets on mobile devices or in web environments. With this approach, the necessary computing power can be outsourced to a server environment. The developed system allows the rendering of 2D and 3D graphics on mobile phones or in web browsers at high quality, independent of the size of the original data set. Compared to known terminal-server or other proprietary remote systems, our approach offers a very simple way to integrate with a large variety of applications, which makes it useful for real-life application scenarios in business processes.
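
    As a rough illustration of the server-side rendering idea (compute on the server, ship finished images to thin clients), the sketch below serves a synthetically rendered PPM frame over plain HTTP using only the Python standard library. It is a hedged stand-in, not the paper's system; `render_ppm`, the resolution and the single-endpoint protocol are assumptions.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

WIDTH, HEIGHT = 320, 240  # hypothetical output resolution for a thin client


def render_ppm(width: int, height: int) -> bytes:
    """Stand-in renderer: produces a gradient as a binary PPM image.
    A real system would rasterize the 2D/3D data set here on the server."""
    header = f"P6 {width} {height} 255\n".encode()
    pixels = bytearray()
    for y in range(height):
        for x in range(width):
            pixels += bytes((x * 255 // width, y * 255 // height, 128))
    return header + bytes(pixels)


class FrameHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Every request re-renders on the server; the mobile/web client
        # only needs to decode and display the returned image.
        frame = render_ppm(WIDTH, HEIGHT)
        self.send_response(200)
        self.send_header("Content-Type", "image/x-portable-pixmap")
        self.send_header("Content-Length", str(len(frame)))
        self.end_headers()
        self.wfile.write(frame)


if __name__ == "__main__":
    HTTPServer(("localhost", 8080), FrameHandler).serve_forever()
```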

    Workload balancing in distributed virtual reality environments

    Get PDF
    Virtual Reality (VR) has grown to become state-of-the-art technology in many business- and consumer-oriented e-commerce applications. One of the major design challenges of VR environments is the placement of the rendering process. The rendering process converts the abstract description of a scene, as contained in an object database, to an image. This is usually done at the client side, as in VRML [1], a technology that requires the client's computational power for smooth rendering. The vision of VR is also strongly connected to the issue of Quality of Service (QoS), as the perceived realism depends on an interactive frame rate ranging from 10 to 30 frames per second (fps), real-time feedback mechanisms and realistic image quality. These requirements push traditional home computers, and even sophisticated graphical workstations, beyond their limits. Our work therefore introduces an approach for a distributed rendering architecture that gracefully balances the workload between the client and a cluster-based server. We believe that a distributed rendering approach as described in this paper has three major benefits: it reduces the client's workload, it decreases the network traffic and it allows already rendered scenes to be re-used.
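
    One way to picture the client/server workload split the abstract argues for is a simple budget-based partition: keep objects on the client until its per-frame rendering budget is exhausted and hand the remainder to the server cluster. The sketch below is only an illustrative policy, not the paper's balancing algorithm; the `ClientProfile` fields and the greedy rule are assumptions.

```python
from dataclasses import dataclass


@dataclass
class ClientProfile:
    triangles_per_frame: int   # rough client rendering capacity at target fps
    bandwidth_kbps: int        # downstream bandwidth for server-rendered frames


@dataclass
class SceneObject:
    name: str
    triangle_count: int


def split_workload(objects: list[SceneObject], client: ClientProfile):
    """Greedy split: keep objects on the client until its per-frame triangle
    budget is exhausted, then assign the rest to the server cluster.
    Hypothetical policy; a production balancer would also weigh network
    traffic and whether already rendered scenes can be re-used."""
    client_side, server_side, used = [], [], 0
    for obj in sorted(objects, key=lambda o: o.triangle_count):
        if used + obj.triangle_count <= client.triangles_per_frame:
            client_side.append(obj.name)
            used += obj.triangle_count
        else:
            server_side.append(obj.name)
    return client_side, server_side


if __name__ == "__main__":
    scene = [SceneObject("terrain", 800_000),
             SceneObject("avatar", 40_000),
             SceneObject("furniture", 120_000)]
    profile = ClientProfile(triangles_per_frame=200_000, bandwidth_kbps=2_000)
    print(split_workload(scene, profile))
```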

    An asynchronous method for cloud-based rendering

    Get PDF
    Interactive high-fidelity rendering is still unachievable on many consumer devices. Cloud gaming services have shown promise in delivering interactive graphics beyond the individual capabilities of user devices. However, a number of shortcomings are manifest in these systems: high network bandwidth is required for higher resolutions, and input lag due to network fluctuations heavily disrupts the user experience. In this paper, we present a scalable solution for interactive high-fidelity graphics based on a distributed rendering pipeline in which direct lighting is computed on the client device and indirect lighting in the cloud. The client device keeps a local cache for indirect lighting that is asynchronously updated using an object-space representation; this allows us to achieve interactive rates that are unconstrained by network performance for a wide range of display resolutions and are robust to input lag. Furthermore, in multi-user environments, the computation of indirect lighting is amortised over participating clients.
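
    The split the abstract describes (direct lighting computed per frame on the client, indirect lighting refreshed asynchronously from the cloud into a local cache) can be pictured with the toy sketch below, where a background thread stands in for the cloud update. It is a conceptual sketch only; `indirect_cache`, the refresh interval and the placeholder radiance values are assumptions, not the paper's pipeline.

```python
import threading
import time

# Local cache of indirect lighting, keyed by an object-space identifier.
# In the paper's setting this would be filled by the cloud; here a background
# thread stands in for the asynchronous network update.
indirect_cache: dict[str, float] = {}
cache_lock = threading.Lock()


def cloud_indirect_update(object_ids: list[str]) -> None:
    """Pretend cloud service: periodically refreshes indirect lighting."""
    while True:
        for oid in object_ids:
            value = 0.2 + 0.1 * (hash(oid) % 5)  # placeholder radiance
            with cache_lock:
                indirect_cache[oid] = value
        time.sleep(0.5)  # network round-trip; deliberately slower than a frame


def shade(object_id: str, direct: float) -> float:
    """Per-frame client shading: direct light now, indirect from the cache.
    A missing cache entry falls back to 0, so the frame rate never blocks
    on network performance."""
    with cache_lock:
        indirect = indirect_cache.get(object_id, 0.0)
    return direct + indirect


if __name__ == "__main__":
    objects = ["wall", "floor", "statue"]
    threading.Thread(target=cloud_indirect_update, args=(objects,), daemon=True).start()
    for frame in range(5):
        print([round(shade(oid, direct=0.6), 2) for oid in objects])
        time.sleep(0.2)
```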

    Fair Use, Fair Play: Video Game Performances and Let's Plays as Transformative Use

    Get PDF
    With the advent of social video upload sites like YouTube, what constitutes fair use has become a hotly debated and often litigated subject. Major content rights holders in the movie and music industry assert ownership rights of content on video upload platforms, and the application of the fair use doctrine to such content is largely unclear. Amid these disputes over what constitutes fair use, new genres of digital content have arrived in the form of “Let’s Play” videos and other related media. In particular, “Let’s Plays”—videos in which prominent gamers play video games for the entertainment of others—are big business in the streaming and video upload world. Many video game producers vigorously assert the right to prevent the publishing of Let’s Play videos or to demand a cut of the revenues. This article discusses who legally possesses the right to distribute or profit from Let’s Play content under current law, and the way that courts ought to approach these disputes consistent with the principles of copyright protection. I conclude that the nature of video game content produces conceptual challenges not necessarily present in movies and music, and that these differences have a bearing on fair use analysis as it applies to Let’s Play videos.

    Mobile three-dimensional city maps

    Get PDF
    Maps are visual representations of environments and the objects within them, depicting their spatial relations. They are mainly used in navigation, where they act as external information sources, supporting observation and decision-making processes. Map design, or the art-science of cartography, has led to simplification of the environment, where the naturally three-dimensional environment has been abstracted to a two-dimensional representation populated with simple geometrical shapes and symbols. However, abstract representation requires a map-reading ability. Modern technology has reached the level where maps can be expressed in digital form, having selectable, scalable, browsable and updatable content. Maps may no longer even be limited to two dimensions, nor to an abstract form. When a virtual environment based on the real world is created, a 3D map is born. Given a realistic representation, would the user no longer need to interpret the map, and be able to navigate in an inherently intuitive manner? To answer this question, one needs a mobile test platform. But can a 3D map, a resource-hungry real virtual environment, exist on such resource-limited devices? This dissertation approaches the technical challenges posed by mobile 3D maps in a constructive manner, identifying the problems, developing solutions and providing answers by creating a functional system. The case focuses on urban environments. First, optimization methods for rendering large, static 3D city models are researched and a solution suited for mobile 3D maps is provided by combining visibility culling, level-of-detail management and out-of-core rendering. Then, the potential of mobile networking is addressed, developing efficient and scalable methods for progressive content downloading and dynamic entity management. Finally, a 3D navigation interface is developed for mobile devices, and the research is validated with measurements and field experiments. It is found that near-realistic mobile 3D city maps can exist on current mobile phones, and the rendering rates are excellent on devices with 3D hardware. Such 3D maps can also be transferred and rendered on-the-fly sufficiently fast for navigation use over cellular networks. Real-world entities such as pedestrians or public transportation can be tracked and presented in a scalable manner. Mobile 3D maps are useful for navigation, but their usability depends heavily on interaction methods: the potentially intuitive representation does not imply, for example, faster navigation than with a professional 2D street map. In addition, the physical interface limits the usability.
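
    The three rendering optimizations named in the abstract (visibility culling, level-of-detail management and out-of-core loading) can be combined per frame roughly as in the sketch below. It is a deliberately simplified illustration, not the dissertation's implementation: the radius-based culling test, the LOD distance thresholds and the tile structure are assumptions.

```python
import math
from dataclasses import dataclass


@dataclass
class Tile:
    name: str
    x: float
    y: float
    resident: bool = False  # whether geometry is already loaded in memory


def select_visible(tiles, cam_x, cam_y, view_radius=500.0):
    """Visibility culling reduced to a radius test; a real mobile 3D map
    would use view-frustum and occlusion culling against the city model."""
    return [t for t in tiles
            if math.hypot(t.x - cam_x, t.y - cam_y) <= view_radius]


def choose_lod(tile, cam_x, cam_y):
    """Distance-based level of detail: nearer tiles get finer geometry."""
    d = math.hypot(tile.x - cam_x, tile.y - cam_y)
    return 0 if d < 100 else 1 if d < 300 else 2


def frame(tiles, cam_x, cam_y):
    visible = select_visible(tiles, cam_x, cam_y)
    load_queue = [t.name for t in visible if not t.resident]  # out-of-core fetch
    draw_list = [(t.name, choose_lod(t, cam_x, cam_y)) for t in visible if t.resident]
    return draw_list, load_queue


if __name__ == "__main__":
    city = [Tile("block_a", 50, 40, resident=True),
            Tile("block_b", 250, 0),
            Tile("block_c", 900, 900, resident=True)]
    print(frame(city, cam_x=0.0, cam_y=0.0))
```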

    SPRITE TREE: AN EFFICIENT IMAGE-BASED REPRESENTATION FOR NETWORKED VIRTUAL ENVIRONMENTS

    Get PDF
    Ph.D. thesis (Doctor of Philosophy)

    Selection strategies for peer-to-peer 3D streaming

    Full text link
    In multi-user networked virtual environments such as Second Life, 3D streaming techniques have been used to progressively download and render 3D objects and terrain, so that a full download or prior installation is not necessary. As existing client-server architectures may not scale easily, 3D streaming based on peer-to-peer (P2P) delivery has recently been proposed to allow users to acquire 3D content from other users instead of the server. However, discovering the peers who possess relevant data and have enough bandwidth to answer data requests is non-trivial. A naive query-response approach thus may be inefficient and could incur unnecessary latency and message overhead. In this paper, we propose a peer selection strategy for P2P-based 3D streaming, where peers exchange information on content availability incrementally with neighbors. Requestors can thus discover suppliers quickly and avoid time-consuming queries. A multi-level area of interest (AOI) request is also adopted to avoid request contention due to concentrated requests. Simulation results show that our strategies achieve better system scalability and streaming performance than a naive query-response approach.
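
    The peer selection idea can be pictured as follows: each neighbor advertises which pieces of 3D content it holds, and a requester picks a supplier that has the piece and still has spare upload capacity, falling back to the server otherwise. The sketch below is an illustrative simplification, not the paper's protocol; the `Peer` fields, the slot counting and the fallback rule are assumptions.

```python
from dataclasses import dataclass, field


@dataclass
class Peer:
    name: str
    upload_slots: int                                # crude stand-in for spare bandwidth
    pieces: set[str] = field(default_factory=set)    # advertised 3D content pieces


def select_supplier(piece: str, neighbors: list[Peer]) -> str:
    """Pick a neighbor that advertises the piece and still has upload capacity;
    otherwise fall back to the server. A real strategy would also rank peers by
    latency and refresh availability incrementally as neighbors exchange updates."""
    candidates = [p for p in neighbors if piece in p.pieces and p.upload_slots > 0]
    if not candidates:
        return "server"
    best = max(candidates, key=lambda p: p.upload_slots)
    best.upload_slots -= 1  # reserve the slot for this request
    return best.name


if __name__ == "__main__":
    neighbors = [Peer("alice", 2, {"terrain_7", "house_12"}),
                 Peer("bob", 0, {"terrain_7"}),
                 Peer("carol", 1, {"house_12"})]
    for piece in ["terrain_7", "terrain_7", "terrain_7", "statue_3"]:
        print(piece, "->", select_supplier(piece, neighbors))
```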