    Analysis domain model for shared virtual environments

    The field of shared virtual environments, which also encompasses online games and social 3D environments, has a system landscape consisting of multiple solutions that share great functional overlap. However, there is little system interoperability between the different solutions. A shared virtual environment has an associated problem domain that is highly complex, raising difficult challenges for the development process, starting with the architectural design of the underlying system. This paper has two main contributions. The first contribution is a broad domain analysis of shared virtual environments, which enables developers to have a better understanding of the whole rather than the part(s). The second contribution is a reference domain model for discussing and describing solutions: the Analysis Domain Model.

    Multistep-Ahead Neural-Network Predictors for Network Traffic Reduction in Distributed Interactive Applications

    Predictive contract mechanisms such as dead reckoning are widely employed to support scalable remote entity modeling in distributed interactive applications (DIAs). By employing a form of controlled inconsistency, a reduction in network traffic is achieved. However, by relying on the distribution of instantaneous derivative information, dead reckoning trades remote extrapolation accuracy for low computational complexity and ease of implementation. In this article, we present a novel extension of dead reckoning, termed neuro-reckoning, that seeks to replace the use of instantaneous velocity information with predictive velocity information in order to improve the accuracy of entity position extrapolation at remote hosts. Under our proposed neuro-reckoning approach, each controlling host employs a bank of neural network predictors trained to estimate future changes in entity velocity up to and including some maximum prediction horizon. The effect of each estimated change in velocity on the current entity position is simulated to produce an estimate for the likely position of the entity over some short time span. Upon detecting an error threshold violation, the controlling host transmits a predictive velocity vector that extrapolates through the estimated position, as opposed to transmitting the instantaneous velocity vector. Such an approach succeeds in reducing the spatial error associated with remote extrapolation of entity state. Consequently, a further reduction in network traffic can be achieved. Results from simulations conducted with several human users in a highly interactive DIA indicate significant potential for improved scalability when compared to the use of IEEE DIS standard dead reckoning. Our proposed neuro-reckoning framework exhibits low computational resource overhead for real-time use and can be seamlessly integrated into many existing dead reckoning mechanisms.
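    The mechanism invites a short sketch. The Python below is our own illustration of the idea, not the authors' code: predict_dvel stands in for the trained bank of neural-network predictors, and all names and parameters are hypothetical.

        import numpy as np

        def dead_reckon(pos, vel, dt):
            # Standard first-order dead reckoning: extrapolate along the
            # last transmitted instantaneous velocity.
            return pos + vel * dt

        def neuro_reckon_velocity(pos, vel, predict_dvel, horizon, dt):
            # Neuro-reckoning sketch: roll the predicted per-step velocity
            # changes forward to the horizon, then derive the single
            # "predictive" velocity that extrapolates through the estimated
            # future position instead of the instantaneous velocity.
            p, v = np.copy(pos), np.copy(vel)
            for step in range(horizon):
                v = v + predict_dvel(p, v, step)
                p = p + v * dt
            return (p - pos) / (horizon * dt)

        def controlling_host_tick(true_pos, remote_estimate, threshold, send):
            # As in dead reckoning, an update is transmitted only when the
            # modelled remote estimate violates the spatial error threshold.
            if np.linalg.norm(true_pos - remote_estimate) > threshold:
                send()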

    Consensus Based Networking of Distributed Virtual Environments

    Distributed Virtual Environments (DVEs) are challenging to create, as the goals of consistency and responsiveness become contradictory under increasing latency. DVEs have been considered as both distributed transactional databases and force-reflection systems. Both are good approaches, but they have drawbacks. Transactional systems do not support Level 3 (L3) collaboration: manipulating the same degree of freedom at the same time. Force reflection requires a client-server architecture and stabilisation techniques. With Consensus Based Networking (CBN), we suggest DVEs be considered as a distributed data-fusion problem. Many simulations run in parallel and exchange their states, with remote states integrated under continuous authority. Over time the exchanges average out local differences, performing a distributed average that converges on a consistent, shared state. CBN aims to build simulations that are highly responsive, yet consistent enough for use cases such as the piano-movers problem. CBN's support for heterogeneous nodes can transparently couple different input methods, avoid the requirement of determinism, and provide more options for personal control over the shared experience. Our work is at an early stage; however, we demonstrate many successes, including L3 collaboration in room-scale VR, thousands of interacting objects, complex configurations such as stacking, and transparent coupling of haptic devices. These have been shown before, but each with a different technique; CBN supports them all within a single, unified system.
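    The distributed-averaging idea admits a toy illustration. The sketch below is ours, not the paper's implementation; the per-exchange authority weight and the scalar state are simplifying assumptions.

        def cbn_merge(local_value, remote_values, authority=0.5):
            # Consensus-style integration: blend each received remote copy
            # of a degree of freedom into the local one, weighted by a
            # continuous authority in [0, 1]. Repeated exchanges average
            # out local differences toward a consistent, shared state.
            value = local_value
            for remote in remote_values:
                value = (1.0 - authority) * value + authority * remote
            return value

        # Example: a node merging two peers' drifted copies of a coordinate.
        print(cbn_merge(1.0, [1.2, 0.9], authority=0.3))  # ~1.012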

    Effects of Local Latency on Games

    Video games are a major type of entertainment for millions of people, and feature a wide variety of genres. Many genres of video games require quick reactions, and in these games it is critical for player performance and player experience that the game is responsive. One of the major contributing factors that can make games less responsive is local latency, the total delay between input and a resulting change on the screen. Local latency is produced by a combination of delays from input devices, software processing, and displays. Due to latency, game companies spend considerable time and money play-testing their games to ensure the game is both responsive and that the in-game difficulty is reasonable. Past studies have made it clear that local latency negatively affects both player performance and experience, but there is still little knowledge about local latency's exact effects on games. In this thesis, we address this problem by providing game designers with more knowledge about local latency's effects. First, we performed a study to examine latency's effects on performance and experience for popular pointing input devices used with games. Our results show significant differences between devices based on the task and the amount of latency. We then provide design guidelines based on our findings. Second, we performed a study to understand latency's effects on 'atoms' of interaction in games. The study varied both latency and game speed, and found game speed to affect a task's sensitivity to latency. Third, we used our findings to build a model to help designers quickly identify latency-sensitive game atoms, thus saving time during play-testing. We built and validated a model that predicts error rates in a game atom based on latency and game speed. Our work helps game designers by providing new insight into latency's varied effects and by modelling and predicting those effects.
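    To make the latency/game-speed interaction concrete, here is a purely illustrative model in the spirit of the one described; the logistic form and all constants are our assumptions, and only the inputs (latency, game speed) and output (error rate) come from the abstract.

        import math

        def predicted_error_rate(latency_ms, game_speed, k=0.02, midpoint=200.0):
            # Illustrative only: error rate rises sigmoidally with latency,
            # and a faster game speed scales the effective latency, making
            # the game atom more latency-sensitive.
            effective_latency = latency_ms * game_speed
            return 1.0 / (1.0 + math.exp(-k * (effective_latency - midpoint)))

        for latency in (50, 150, 300):
            print(latency, round(predicted_error_rate(latency, game_speed=1.5), 3))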

    Referee-based architectures for massively multiplayer online games

    Networked computer games are played amongst players on different hosts across the Internet. Massively Multiplayer Online Games (MMOG) are network games in which thousands of players participate simultaneously in each instance of the virtual world. Current commercial MMOG use a Client/Server (C/S) architecture in which the server simulates and validates the game, and notifies players about the current game state. While C/S is very popular, it has several limitations: (i) C/S has poor scalability, as the server is a bandwidth and processing bottleneck; (ii) all updates must be routed through the server, reducing responsiveness; (iii) players with lower client-to-server delay than their opponents have an unfair advantage, as they can respond to game events faster; and (iv) the server is a single point of failure.

    The Mirrored Server (MS) architecture uses multiple mirrored servers connected via a private network. MS achieves better scalability, responsiveness, fairness, and reliability than C/S; however, as updates are still routed through the mirrored servers, these problems are not eliminated. P2P network game architectures allow players to exchange updates directly, maximising scalability, responsiveness, and fairness, while removing the single point of failure. However, P2P games are vulnerable to cheating. Several P2P architectures have been proposed to detect and/or prevent game cheating. Nevertheless, they address only a subset of cheating methods. Further, these solutions require costly distributed validation algorithms that increase game delay and bandwidth, and prevent players with high latency from participating.

    In this thesis we propose a new cheat classification that reflects the levels at which cheats occur: game, application, protocol, or infrastructure. We also propose three network game architectures: the Referee Anti-Cheat Scheme (RACS), the Mirrored Referee Anti-Cheat Scheme (MRACS), and the Distributed Referee Anti-Cheat Scheme (DRACS), which maximise game scalability, responsiveness, and fairness while maintaining cheat detection/prevention equal to that in C/S. Each proposed architecture utilises one or more trusted referees to validate the game simulation (similar to the server in C/S) while allowing players to exchange updates directly (similar to peers in P2P).

    RACS is a hybrid C/S and P2P architecture that improves C/S by using a referee in the server. RACS allows honest players to exchange updates directly with each other, with a copy sent to the referee for validation. By allowing P2P communication, RACS has better responsiveness and fairness than C/S. Further, as the referee is not required to forward updates, it has better bandwidth and processing scalability. The RACS protocol could be applied to any existing C/S game. Compared to P2P protocols, RACS has lower delay and allows players with high delay to participate. As in many P2P architectures, RACS divides time into rounds. We have proposed two efficient solutions to find the optimal round length such that the total system delay is minimised.

    MRACS combines the RACS and MS architectures. A referee is used at each mirror to validate player updates, while allowing players to exchange updates directly. By using multiple mirrored referees, the bandwidth required by each referee and the player-to-mirror delays are reduced; this improves the scalability, responsiveness, and fairness of RACS while removing its single point of failure. With direct communication, MRACS improves on MS in terms of responsiveness, fairness, and scalability. To maximise responsiveness, we have defined and solved the Client-to-Mirror Assignment (CMA) problem to assign clients to mirrors such that the total delay is minimised and no mirror is overloaded. We have proposed two sets of efficient solutions to CMA: the optimal J-SA/L-SA and the faster heuristic J-Greedy/L-Greedy.

    DRACS uses referees distributed to player hosts to minimise the publisher/developer infrastructure, and to maximise responsiveness and/or fairness. To prevent colluding players from cheating, DRACS requires every update to be validated by multiple unaffiliated referees, providing cheat detection/prevention equal to that in C/S. We have formally defined the Referee Selection Problem (RSP) to select a set of referees from the untrusted peers such that responsiveness and/or fairness are maximised, while ensuring that the probability of a majority of referees colluding is below a pre-defined threshold. We have proposed two efficient algorithms, SRS-1 and SRS-2, to solve the problem.

    We have evaluated the performance of RACS, MRACS, and DRACS analytically and using simulations. We have shown analytically that RACS, MRACS, and DRACS have cheat detection/prevention equivalent to that in C/S. Our analysis shows that RACS has better scalability and responsiveness than C/S, and that MRACS has better scalability and responsiveness than C/S, RACS, and MS. As there are currently no publicly available traces from MMOG, we have constructed artificial and realistic inputs, and used them in all simulations in this thesis to show the benefits of our proposed architectures and algorithms.
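    The RACS update flow described above can be sketched as follows. This is our own simplification (hypothetical names; rounds and loss recovery omitted), not the thesis's protocol specification.

        class Referee:
            # Trusted referee: validates the same updates the peers
            # exchange, playing the role the server plays in C/S.
            def __init__(self, is_valid):
                self.is_valid = is_valid

            def receive(self, update, sender):
                if not self.is_valid(update):
                    print(f"possible cheat by {sender}")

        def racs_send(update, sender, peers, referee):
            # Deliver updates peer-to-peer for responsiveness and fairness,
            # with a copy sent to the referee for validation.
            for peer in peers:
                peer.receive(update, sender)
            referee.receive(update, sender)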

    Personality, presence, and the virtual self: A five-factor model approach to behavioral analysis within a virtual environment

    For several decades, researchers have explored the existence of the virtual self, or digital embodiment of self found within an avatar. It was surmised that this new component of one's overall identity not only existed in conjunction with the public and private persona, but was replete with the necessary physical and psychological characteristics that facilitate a broad range of cognitive, cultural, and socio-emotional outcomes found within a virtual environment (e.g., Second Life, World of Warcraft). However, little is known about whether these characteristics do indeed impact behavioral outcomes. For this reason, this study employed an observational assessment method to explore the virtual self as more than a set of characteristics attributed to an avatar, but rather as a relationship between personality (i.e., individual and avatar) and actualized behavior exhibited within a virtual environment. Further, presence measures were introduced to better understand whether feelings of immersion impact this relationship. Results indicated a burgeoning virtual self, linking personality with behavior along the domain of agreeableness. In other words, behavior is not solely the product of the environment but also is influenced by participant predispositions. Findings also suggest that the construct of presence may now need to incorporate variables that account for this virtual self. Implications for educators, instructional designers, and psychologists are discussed.

    A Stochastic Model of Plausibility in Live-Virtual-Constructive Environments

    Distributed live-virtual-constructive simulation promises a number of benefits for the test and evaluation community, including reduced costs, access to simulations of limited-availability assets, the ability to conduct large-scale multi-service test events, and recapitalization of existing simulation investments. However, geographically distributed systems are subject to fundamental state consistency limitations that make assessing the data quality of live-virtual-constructive experiments difficult. This research presents a data quality model based on the notion of plausible interaction outcomes. This model explicitly accounts for the lack of absolute state consistency in distributed real-time systems and offers system designers a means of estimating data quality and fitness for purpose. Experiments with World of Warcraft player trace data validate the plausibility model and exceedance probability estimates. Additional experiments with synthetic data illustrate the model's use in ensuring fitness for purpose of live-virtual-constructive simulations and estimating the quality of data obtained from live-virtual-constructive experiments.
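    The exceedance-probability idea can be illustrated in a few lines of Python. This empirical estimator is our sketch, not the paper's stochastic model.

        def exceedance_probability(divergences, threshold):
            # Fraction of observed cross-site state divergences (e.g.
            # position differences between live and remote copies of an
            # entity) that exceed a plausibility threshold.
            divergences = list(divergences)
            return sum(d > threshold for d in divergences) / len(divergences)

        # Example with synthetic divergence samples (metres):
        samples = [0.1, 0.4, 0.05, 0.9, 0.3, 1.2, 0.2]
        print(exceedance_probability(samples, threshold=0.5))  # 2/7 ~ 0.286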

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy logic based method to track user satisfaction without the need for devices to monitor users' physiological conditions. User satisfaction is the key to any product's acceptance; computer applications and video games provide a unique opportunity to provide a tailored environment for each user to better suit their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature that suggests physiological measurements are needed. We show that it is possible to use a software-only method to estimate user emotion.
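    A fuzzy-logic estimate of this kind can be sketched briefly. The rules and game variables below are our own toy choices (FLAME-inspired, but not El-Nasr's rule base or the authors' implementation).

        def triangular(x, a, b, c):
            # Triangular fuzzy membership function peaking at b.
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def estimate_satisfaction(health, score_rate):
            # Fuzzify two game variables, fire two rules with min-AND, and
            # defuzzify by weighted average into a [0, 1] satisfaction score.
            doing_well = min(triangular(health, 40, 100, 160),
                             triangular(score_rate, 0.5, 1.0, 1.5))
            doing_badly = min(triangular(health, -60, 0, 60),
                              triangular(score_rate, -1.5, -1.0, -0.5))
            total = doing_well + doing_badly
            if total == 0:
                return 0.5  # neutral when no rule fires
            return (doing_well * 1.0 + doing_badly * 0.0) / total

        print(estimate_satisfaction(health=85, score_rate=0.9))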