
    Bounding inconsistency using a novel threshold metric for dead reckoning update packet generation

    Human-to-human interaction across distributed applications requires that sufficient consistency be maintained among participants in the face of network characteristics such as latency and limited bandwidth. The level of inconsistency arising from the network is proportional to the network delay, and thus a function of bandwidth consumption. Distributed simulation has often used a bandwidth reduction technique known as dead reckoning, which combines approximation and estimation in the communication of entity movement to reduce network traffic and thus improve consistency. However, unless carefully tuned to application and network characteristics, such an approach can introduce more inconsistency than it avoids. The key tuning metric is the distance threshold. This paper questions the suitability of the standard distance threshold as a metric for use in the dead reckoning scheme. Using a model relating entity path curvature and inconsistency, a major performance-related limitation of the distance threshold technique is highlighted. We then propose an alternative time-space threshold criterion. The time-space threshold is demonstrated, through simulation, to perform better for low-curvature movement. However, it too has a limitation. Based on this, we further propose a novel hybrid scheme. Through simulation and live trials, this scheme is shown to perform well across a range of curvature values, and places bounds on both the spatial and absolute inconsistency arising from dead reckoning.
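
    As a rough illustration of the two update criteria compared in this abstract, the sketch below (Python, my own notation, not the authors' code) contrasts the standard distance threshold with one plausible reading of a time-space criterion, namely spatial error integrated over time:

```python
import numpy as np

def distance_threshold_violated(actual_pos, predicted_pos, dist_threshold):
    """Standard distance-threshold criterion: send an update when the
    instantaneous deviation between the actual and dead-reckoned
    positions exceeds a fixed spatial threshold."""
    return np.linalg.norm(np.asarray(actual_pos) - np.asarray(predicted_pos)) > dist_threshold

class TimeSpaceThreshold:
    """Assumed form of the time-space criterion: integrate spatial error
    over time and fire when the accumulated error (metres x seconds)
    exceeds a bound, resetting once the update is sent."""
    def __init__(self, ts_threshold):
        self.ts_threshold = ts_threshold
        self.accumulated = 0.0

    def violated(self, actual_pos, predicted_pos, dt):
        err = np.linalg.norm(np.asarray(actual_pos) - np.asarray(predicted_pos))
        self.accumulated += err * dt  # error integrated over the last tick
        if self.accumulated > self.ts_threshold:
            self.accumulated = 0.0
            return True
        return False
```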

    Mobile Online Gaming via Resource Sharing

    Mobile gaming presents a number of open issues, mainly concerned with connectivity, computational capacity, memory, and battery constraints. In this paper, we discuss the design of a fully distributed approach for the support of mobile Multiplayer Online Games (MOGs). In mobile environments, several features might be exploited to enable resource sharing among multiple devices / game consoles owned by different mobile users. We show the advantages of trading computing / networking facilities among mobile players. This operation mode opens up a wide number of interesting sharing scenarios, thus promoting the deployment of novel mobile online games. In particular, once mobile nodes make their resources available to the community, it becomes possible to distribute the software modules that compose the game engine, and hence the workload of managing game advancement. We claim that resource sharing is in keeping with the idea of ludic activity that is behind MOGs. Hence, such schemes can be profitably employed in these contexts. Comment: Proceedings of 3rd ICST/CREATE-NET Workshop on DIstributed SImulation and Online gaming (DISIO 2012). In conjunction with SIMUTools 2012. Desenzano, Italy, March 2012. ISBN: 978-1-936968-47-

    Dead Reckoning Using Play Patterns in a Simple 2D Multiplayer Online Game

    In today’s gaming world, a player expects the same play experience whether playing on a local network or online with many geographically distant players on congested networks. Because of delay and loss, there may be discrepancies in the simulated environment from player to player, likely resulting in incorrect perception of events. It is desirable to develop methods that minimize this problem. Dead reckoning is one such method. Traditional dead reckoning schemes typically predict a player’s position linearly by assuming players move with constant force or velocity. In this paper, we consider team-based 2D online action games. In such games, player movement is rarely linear. Consequently, we implemented such a game to act as a test harness, which we used to collect a large amount of data from playing sessions involving many experienced players. From analyzing this data, we identified play patterns, which we used to create three dead reckoning algorithms. We then used an extensive set of simulations to compare our algorithms with the IEEE standard dead reckoning algorithm and with the recent “Interest Scheme” algorithm. Our results are promising, especially with respect to the average export error and the number of hits.
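
    For reference, the "linear" prediction that the paper's play-pattern algorithms are measured against amounts to polynomial extrapolation from the last received state; a minimal sketch (my notation, not the paper's code):

```python
import numpy as np

def extrapolate_first_order(p0, v0, dt):
    """Constant-velocity dead reckoning, the usual IEEE-standard
    baseline: p(t0 + dt) = p0 + v0 * dt."""
    return np.asarray(p0) + np.asarray(v0) * dt

def extrapolate_second_order(p0, v0, a0, dt):
    """Constant-acceleration variant:
    p(t0 + dt) = p0 + v0 * dt + 0.5 * a0 * dt**2."""
    return np.asarray(p0) + np.asarray(v0) * dt + 0.5 * np.asarray(a0) * dt**2
```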

    Using Neural Networks to Reduce Entity State Updates in Distributed Interactive Applications

    Dead reckoning is the most commonly used predictive contract mechanism for the reduction of network traffic in Distributed Interactive Applications (DIAs). However, this technique often ignores available contextual information that may be influential to the state of an entity, sacrificing remote predictive accuracy in favour of low computational complexity. In this paper, we present a novel extension of dead reckoning by employing neural networks to take into account expected future entity behaviour during the transmission of entity state updates (ESUs) for remote entity modeling in DIAs. This proposed method succeeds in reducing network traffic through a decrease in the frequency of ESU transmission required to maintain consistency. Validation is achieved through simulation in a highly interactive DIA, and results indicate significant potential for improved scalability when compared to the use of the IEEE DIS standard dead reckoning technique. The new method exhibits relatively low computational overhead and seamless integration with current dead reckoning schemes.
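
    The claimed traffic reduction comes from lowering the ESU transmission frequency needed to keep remote error inside the threshold. A toy harness for measuring that, under an assumed predictor interface (the `predictor` callable is a stand-in for any extrapolation model, including a trained neural network):

```python
import numpy as np

def count_esus(positions, predictor, threshold, dt):
    """Replay a recorded trajectory and count how many entity state
    updates (ESUs) the controlling host would transmit: one at the
    start, then one whenever the remote prediction drifts past the
    error threshold. `predictor(sent_pos, elapsed)` is a hypothetical
    interface returning the remotely extrapolated position."""
    positions = [np.asarray(p, dtype=float) for p in positions]
    sent_pos, sent_time = positions[0], 0.0
    esus = 1
    for i, actual in enumerate(positions[1:], start=1):
        t = i * dt
        predicted = predictor(sent_pos, t - sent_time)
        if np.linalg.norm(actual - predicted) > threshold:
            sent_pos, sent_time = actual, t  # transmit and re-anchor
            esus += 1
    return esus
```

    A better predictor drives `count_esus` down for the same threshold, which is exactly the scalability gain the abstract reports.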

    An Information-Theoretic Framework for Consistency Maintenance in Distributed Interactive Applications

    Distributed Interactive Applications (DIAs) enable geographically dispersed users to interact with each other in a virtual environment. A key factor in the success of a DIA is the maintenance of a consistent view of the shared virtual world for all participants. However, maintaining consistent states in DIAs is difficult over real networks. State changes communicated by messages over such networks suffer latency, leading to inconsistency across the application. Predictive Contract Mechanisms (PCMs) combat this problem by reducing the number of messages transmitted in return for perceptually tolerable inconsistency. This thesis examines the operation of PCMs using concepts and methods derived from information theory. This information theory perspective results in a novel information model of PCMs that quantifies and analyzes the efficiency of such methods in communicating the reduced state information, and a new adaptive multiple-model-based framework for improving consistency in DIAs. The first part of this thesis introduces information measurements of user behavior in DIAs and formalizes the information model for PCM operation. In presenting the information model, the statistical dependence in the entity state, which makes using extrapolation models to predict future user behavior possible, is evaluated. The efficiency of a PCM in exploiting such predictability to reduce the amount of network resources required to maintain consistency is also investigated. It is demonstrated that, from the information theory perspective, PCMs can be interpreted as a form of information reduction and compression. The second part of this thesis proposes an Information-Based Dynamic Extrapolation Model for dynamically selecting between extrapolation algorithms based on information evaluation and inferred network conditions. This model adapts PCM configurations to both user behavior and network conditions, and makes the most information-efficient use of the available network resources. In doing so, it improves PCM performance and consistency in DIAs.
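
    To make the "PCMs as information reduction" reading concrete, one simple way to quantify how predictable entity motion is (my illustration, not necessarily the thesis's exact formulation) is the Shannon entropy of quantised state changes:

```python
import numpy as np
from collections import Counter

def motion_entropy(deltas, bin_width=0.1):
    """Quantise per-tick position changes into discrete symbols and
    compute their Shannon entropy in bits. Low entropy indicates
    statistically dependent, predictable motion that a PCM can exploit
    to suppress updates. The bin width is an arbitrary choice here."""
    symbols = [tuple(np.round(np.asarray(d) / bin_width).astype(int)) for d in deltas]
    counts = Counter(symbols)
    total = sum(counts.values())
    probs = np.array([c / total for c in counts.values()])
    return float(-np.sum(probs * np.log2(probs)))
```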

    Using User Perception to Determine Suitable Error Thresholds for Dead Reckoning in Distributed Interactive Applications

    Entity state update mechanisms are readily employed in Distributed Interactive Applications (DIAs), particularly in networked games. These mechanisms use prediction techniques in order to reduce the number of update packets sent across the network, while maintaining a high level of consistency from the remote user’s point of view. They only send update packets when the local user’s actual behaviour differs from the predicted behaviour by a certain value, often referred to as the error threshold. In practice, this value is chosen arbitrarily and typically reflects what ‘appears’ to be suitable. It has been illustrated in various other media that psycho-perceptual measures can be used to greatly improve compression techniques while maintaining a satisfactory end-user experience; the best-known example is MP3 audio compression. This paper describes a preliminary study designed to collect information relating to a subject’s perception of a networked computer game. The main motivation behind this work is to investigate whether psycho-perceptual measures can be used to obtain appropriate error threshold measures for entity state prediction mechanisms. Here, we employ dead reckoning, as it is the simplest and most commonly used of these mechanisms in distributed gaming. The experiment outlined in this paper attempts to determine whether an error threshold can be chosen from the users’ perception, where user feedback is determined via linguistic variables. Furthermore, the effects of convergence, the speed of the entity, and the shape of the entity trajectory are also examined from a psycho-perceptual viewpoint. The results are presented within this paper.

    Multistep-Ahead Neural-Network Predictors for Network Traffic Reduction in Distributed Interactive Applications

    Predictive contract mechanisms such as dead reckoning are widely employed to support scalable remote entity modeling in distributed interactive applications (DIAs). By employing a form of controlled inconsistency, a reduction in network traffic is achieved. However, by relying on the distribution of instantaneous derivative information, dead reckoning trades remote extrapolation accuracy for low computational complexity and ease of implementation. In this article, we present a novel extension of dead reckoning, termed neuro-reckoning, that seeks to replace the use of instantaneous velocity information with predictive velocity information in order to improve the accuracy of entity position extrapolation at remote hosts. Under our proposed neuro-reckoning approach, each controlling host employs a bank of neural network predictors trained to estimate future changes in entity velocity up to and including some maximum prediction horizon. The effect of each estimated change in velocity on the current entity position is simulated to produce an estimate for the likely position of the entity over some short time-span. Upon detecting an error threshold violation, the controlling host transmits a predictive velocity vector that extrapolates through the estimated position, as opposed to transmitting the instantaneous velocity vector. Such an approach succeeds in reducing the spatial error associated with remote extrapolation of entity state. Consequently, a further reduction in network traffic can be achieved. Simulation results conducted using several human users in a highly interactive DIA indicate significant potential for improved scalability when compared to the use of the IEEE DIS standard dead reckoning. Our proposed neuro-reckoning framework exhibits low computational resource overhead for real-time use and can be seamlessly integrated into many existing dead reckoning mechanisms.
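
    The core mechanism can be captured in a few lines. The sketch below is a simplification under stated assumptions: `predict_dv` is a hypothetical stand-in for the bank of trained predictors, and the tick length `dt` is fixed; it is not the authors' implementation.

```python
import numpy as np

def predictive_velocity(pos, vel, dt, horizon_steps, predict_dv):
    """Neuro-reckoning sketch: roll the current state forward through
    per-step predicted velocity changes, then return the single
    velocity vector that, applied from `pos`, extrapolates through the
    estimated position at the horizon. On an error-threshold violation
    this vector would be transmitted instead of the instantaneous one."""
    p = np.asarray(pos, dtype=float).copy()
    v = np.asarray(vel, dtype=float).copy()
    for k in range(horizon_steps):
        v = v + predict_dv(p, v, k)  # estimated change in velocity at step k
        p = p + v * dt               # simulate its effect on position
    return (p - np.asarray(pos, dtype=float)) / (horizon_steps * dt)
```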

    An Information-Based Dynamic Extrapolation Model for Networked Virtual Environments

    Various Information Management techniques have been developed to help maintain a consistent shared virtual world in a Networked Virtual Environment. However, such techniques have to be carefully adapted to the application state dynamics and the underlying network. This work presents a novel framework that minimizes inconsistency by optimizing bandwidth usage to deliver useful information. This framework measures the state evolution using an information model and dynamically switches extrapolation models and the packet rate to make the most information-efficient usage of the available bandwidth. The results demonstrate that this approach can help optimize consistency under constrained and time-varying network conditions.
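
    A minimal sketch of the switching idea (my own simplification, not the paper's algorithm): score candidate extrapolation models on a recent window of observed states and keep the best one. A fuller scheme would, as the abstract describes, also weigh packet-rate cost against the measured information gain.

```python
import numpy as np

# Candidate extrapolation models over (pos, vel) states; hypothetical,
# minimal interfaces for illustration only.
def zero_order(state, dt):   # hold last position
    return state[0]

def first_order(state, dt):  # constant velocity
    return state[0] + state[1] * dt

def select_model(history, candidates, dt):
    """Replay each candidate over a window of (pos, vel) samples and
    return the one with the lowest mean one-step prediction error."""
    def mean_error(model):
        errs = [np.linalg.norm(history[i + 1][0] - model(history[i], dt))
                for i in range(len(history) - 1)]
        return float(np.mean(errs))
    return min(candidates, key=mean_error)

# Example: pick between the two models above on a short synthetic track.
# history = [(np.array([x, 0.0]), np.array([1.0, 0.0])) for x in range(5)]
# best = select_model(history, [zero_order, first_order], dt=1.0)
```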