19 research outputs found

    Latency Thresholds for Usability in Games: A Survey

    User interactions in interactive applications are time-critical operations; late response will degrade the experience. Sensitivity to delay does, however, vary greatly between games. This paper surveys existing literature on the specifics of this limitation. We find a classification where games are grouped with others of roughly the same requirements. In addition, we find some numbers on how much latency is acceptable. These numbers are, however, inconsistent between studies, indicating inconsistent methodology or insufficient classification of games and interactions. To improve classification, we suggest some changes. In general, research is too sparse to draw any strong or statistically significant conclusions. In some of the most time-critical games, latency seems to degrade the experience at about 50 ms.
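
    The survey's per-genre grouping lends itself to a simple lookup. The sketch below is only illustrative: the class names follow the common avatar/omnipresent taxonomy, and the threshold figures are assumptions in line with commonly cited values, not numbers fixed by the paper.

        # Hypothetical latency-tolerance table; class names and thresholds are
        # illustrative assumptions, not values taken from the survey.
        LATENCY_THRESHOLDS_MS = {
            "first-person avatar": 100,   # most time critical; degradation may begin near 50 ms
            "third-person avatar": 500,
            "omnipresent": 1000,          # e.g. real-time strategy
        }

        def latency_acceptable(game_class: str, latency_ms: float) -> bool:
            """Return True if the measured latency is within the assumed tolerance."""
            return latency_ms <= LATENCY_THRESHOLDS_MS[game_class]

        print(latency_acceptable("first-person avatar", 80))  # True under these assumptions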

    Latency and player actions in online games

    The growth and penetration of broadband access networks to the home has fueled the growth of online games played over the Internet. As we write this article, it is 5am on a typical weekday morning and Gamespy Arcade reports more than 250,000 players online playing about 75,000 games! This proliferation of online games has been matched by an equivalent growth in both th…

    Cheating in networked computer games: a review

    The increasing popularity of Massively Multiplayer Online Games (MMOG) - games involving thousands of players participating simultaneously in a single virtual world - has highlighted the scalability bottlenecks present in centralised Client/Server (C/S) architectures. Researchers are proposing Peer-to-Peer (P2P) architectures as a scalable alternative to C/S; however, P2P is more vulnerable to cheating as it decentralises the game state and logic to untrusted peer machines, rather than using trusted centralised servers. Cheating is a major concern for online games, as a minority of cheaters can potentially ruin the game for all players. In this paper we present a review and classification of known cheats, and provide real-world examples where possible. Further, we discuss countermeasures used by C/S architectures to prevent cheating. Finally, we discuss several P2P architectures designed to prevent cheating, highlighting their strengths and weaknesses.
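
    One family of C/S countermeasures the paper surveys is server-side validation: an authoritative server checks each client-reported action against the game rules instead of trusting the client. The sketch below is a minimal, hypothetical speed-hack check; the movement limit and position format are assumptions.

        import math

        MAX_SPEED = 7.0  # assumed game-specific movement limit, units per second

        def validate_move(prev_pos, new_pos, dt):
            """Authoritative server check: reject moves faster than the rules allow."""
            dx = new_pos[0] - prev_pos[0]
            dy = new_pos[1] - prev_pos[1]
            speed = math.hypot(dx, dy) / dt
            return speed <= MAX_SPEED  # False suggests a speed hack; reset the player

        # The server commits a move to the game state only if it validates:
        print(validate_move((0.0, 0.0), (3.0, 4.0), dt=1.0))  # True: 5.0 <= 7.0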

    A survey on network game cheats and P2P solutions

    The increasing popularity of Massively Multiplayer Online Games (MMOG) - games involving thousands of players participating simultaneously in a single virtual world - has highlighted the scalability bottlenecks present in centralised Client/Server (C/S) architectures. Researchers are proposing Peer-to-Peer (P2P) game technologies as a scalable alternative to C/S; however, P2P is more vulnerable to cheating as it decentralises the game state and logic to untrusted peer machines, rather than using trusted centralised servers. Cheating is a major concern for online games, as a minority of cheaters can potentially ruin the game for all players. In this paper we present a review and classification of known cheats, and provide real-world examples where possible. Further, we discuss countermeasures used by C/S game technologies to prevent cheating. Finally, we discuss several P2P architectures designed to prevent cheating, highlighting their strengths and weaknesses.
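
    A representative P2P anti-cheat building block in this literature is commit-reveal (as used in lockstep protocols): each peer publishes a hash commitment to its move, and reveals the move only after all commitments have arrived, so no peer can base its move on the others'. A minimal sketch, with the surrounding message exchange assumed:

        import hashlib, os

        def commit(move: bytes) -> tuple[bytes, bytes]:
            """Commit phase: publish sha256(nonce + move); keep nonce and move secret."""
            nonce = os.urandom(16)
            return hashlib.sha256(nonce + move).digest(), nonce

        def verify(digest: bytes, nonce: bytes, move: bytes) -> bool:
            """Reveal phase: every peer checks the revealed move against the commitment."""
            return hashlib.sha256(nonce + move).digest() == digest

        d, n = commit(b"move:north")
        print(verify(d, n, b"move:north"))  # True: honest reveal
        print(verify(d, n, b"move:south"))  # False: a swapped move is detected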

    Using Opponent Modeling to Adapt Team Play in American Football

    An issue with learning effective policies in multi-agent adversarial games is that the size of the search space can be prohibitively large when the actions of both teammates and opponents are considered simultaneously. Opponent modeling, predicting an opponent’s actions in advance of execution, is one approach for selecting actions in adversarial settings, but it is often performed in an ad hoc way. In this chapter, we introduce several methods for using opponent modeling, in the form of predictions about the players’ physical movements, to learn team policies. To explore the problem of decision-making in multi-agent adversarial scenarios, we use our approach for both offline play generation and real-time team response in the Rush 2008 American football simulator. Simultaneously predicting the movement trajectories, future reward, and play strategies of multiple players in real-time is a daunting task but we illustrate how it is possible to divide and conquer this problem with an assortment of data-driven models.
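
    As a toy illustration of the idea of opponent modeling (far simpler than the discriminative models the chapter learns from spatio-temporal traces), the sketch below predicts the opponent's next play from observed frequencies; the class and play names are hypothetical.

        from collections import Counter

        class FrequencyOpponentModel:
            """Predict the opponent's next play as the one observed most often."""

            def __init__(self):
                self.counts = Counter()

            def observe(self, play: str) -> None:
                self.counts[play] += 1

            def predict(self):
                return self.counts.most_common(1)[0][0] if self.counts else None

        model = FrequencyOpponentModel()
        for play in ["blitz", "zone", "blitz"]:
            model.observe(play)
        print(model.predict())  # "blitz": select a counter-play before the snap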

    Exploiting Opponent Modeling For Learning In Multi-agent Adversarial Games

    An issue with learning effective policies in multi-agent adversarial games is that the size of the search space can be prohibitively large when the actions of both teammates and opponents are considered simultaneously. Opponent modeling, predicting an opponent’s actions in advance of execution, is one approach for selecting actions in adversarial settings, but it is often performed in an ad hoc way. In this dissertation, we introduce several methods for using opponent modeling, in the form of predictions about the players’ physical movements, to learn team policies. To explore the problem of decision-making in multi-agent adversarial scenarios, we use our approach for both offline play generation and real-time team response in the Rush 2008 American football simulator. Simultaneously predicting the movement trajectories, future reward, and play strategies of multiple players in real-time is a daunting task but we illustrate how it is possible to divide and conquer this problem with an assortment of data-driven models. By leveraging spatio-temporal traces of player movements, we learn discriminative models of defensive play for opponent modeling. With the reward information from previous play matchups, we use a modified version of UCT (Upper Confidence Bounds applied to Trees) to create new offensive plays and to learn play repairs to counter predicted opponent actions. In team games, players must coordinate effectively to accomplish tasks while foiling their opponents either in a preplanned or emergent manner. An effective team policy must generate the necessary coordination, yet considering all possibilities for creating coordinating subgroups is computationally infeasible. Automatically identifying and preserving the coordination between key subgroups of teammates can make search more productive by pruning policies that disrupt these relationships. We demonstrate that combining opponent modeling with automatic subgroup identification can be used to create team policies with a higher average yardage than either the baseline game or domain-specific heuristics.
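
    The play generation described above relies on UCT, whose core is the UCB1 rule for choosing which candidate play to try next: exploit plays with high mean reward while still exploring rarely tried ones. A minimal sketch of that selection rule follows; the (visits, total reward) representation is an assumption, not the dissertation's data structure.

        import math

        def ucb1_select(children, total_visits, c=math.sqrt(2)):
            """UCB1: pick the index of the child maximizing mean reward plus an
            exploration bonus. children: (visits, total_reward) pairs, visits >= 1."""
            def score(visits, total_reward):
                exploit = total_reward / visits
                explore = c * math.sqrt(math.log(total_visits) / visits)
                return exploit + explore

            return max(range(len(children)), key=lambda i: score(*children[i]))

        # Three candidate plays as (times tried, cumulative reward, e.g. yardage):
        plays = [(10, 42.0), (5, 30.0), (2, 9.0)]
        print(ucb1_select(plays, total_visits=17))  # index of the play to try next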

    Effects of Local Latency on Games

    Video games are a major type of entertainment for millions of people, and feature a wide variety of genres. Many genres of video games require quick reactions, and in these games it is critical for player performance and player experience that the game is responsive. One of the major contributing factors that can make games less responsive is local latency: the total delay between input and a resulting change to the screen. Local latency is produced by a combination of delays from input devices, software processing, and displays. Due to latency, game companies spend considerable time and money play-testing their games to ensure the game is both responsive and that the in-game difficulty is reasonable. Past studies have made it clear that local latency negatively affects both player performance and experience, but there is still little knowledge about local latency’s exact effects on games. In this thesis, we address this problem by providing game designers with more knowledge about local latency’s effects. First, we performed a study to examine latency’s effects on performance and experience for popular pointing input devices used with games. Our results show significant differences between devices based on the task and the amount of latency. We then provide design guidelines based on our findings. Second, we performed a study to understand latency’s effects on ‘atoms’ of interaction in games. The study varied both latency and game speed, and found game speed to affect a task’s sensitivity to latency. Third, we used our findings to build a model to help designers quickly identify latency-sensitive game atoms, thus saving time during play-testing. We built and validated a model that predicts error rates in a game atom based on latency and game speed. Our work helps game designers by providing new insight into latency’s varied effects and by modelling and predicting those effects.
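
    The abstract does not give the fitted model, so the sketch below is only a shape illustration of the thesis's central claim that game speed modulates latency sensitivity: a logistic error-rate curve in the product of latency and game speed, with made-up coefficients.

        import math

        def predicted_error_rate(latency_ms: float, game_speed: float,
                                 a: float = 0.015, b: float = 4.0) -> float:
            """Toy logistic model: faster game atoms are more latency-sensitive.

            a and b are hypothetical coefficients; a real model would be fit to data.
            """
            x = a * latency_ms * game_speed - b
            return 1.0 / (1.0 + math.exp(-x))

        # At the same 100 ms of local latency, the faster atom shows more errors:
        print(predicted_error_rate(100, game_speed=1.0))  # slower atom, lower rate
        print(predicted_error_rate(100, game_speed=3.0))  # faster atom, higher rate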

    Detection and Mitigation of Impairments for Real-Time Multimedia Applications

    Measures of Quality of Service (QoS) for multimedia services should focus on phenomena that are observable to the end-user. Metrics such as delay and loss may have little direct meaning to the end-user because knowledge of specific coding and/or adaptive techniques is required to translate delay and loss to the user-perceived performance. Impairment events, as defined in this dissertation, are observable by the end-users independent of coding, adaptive playout or packet loss concealment techniques employed by their multimedia applications. Methods for detecting real-time multimedia (RTM) impairment events from end-to-end measurements are developed here and evaluated using 26 days of PlanetLab measurements collected over nine different Internet paths. Furthermore, methods for detecting impairment-causing network events like route changes and congestion are also developed. The advanced detection techniques developed in this work can be used by applications to detect and match their response to network events. The heuristics-based techniques for detecting congestion and route changes were evaluated using PlanetLab measurements. It was found that congestion events occurred for 6-8 hours per day on weekdays on two paths. The heuristics-based route change detection algorithm detected 71% of the visible layer 2 route changes and did not detect the events that occurred too close together in time or the events for which the minimum RTT change was small. A practical model-based route change detector named the parameter unaware detector (PUD) is also developed in this dissertation because it was expected that model-based detectors would perform better than the heuristics-based detector. Also, the optimal detector named the parameter aware detector (PAD) is developed and is useful because it provides the upper bound on the performance of any detector. The analysis for predicting the performance of PAD is another important contribution of this work. Simulation results prove that the model-based PUD algorithm has acceptable performance over a larger region of the parameter space than the heuristics-based algorithm and this difference in performance increases with an increase in the window size. Also, it is shown that both practical algorithms have a smaller acceptable performance region compared to the optimal algorithm. The model-based algorithms proposed in this dissertation are based on the assumption that RTTs have a Gamma density function. This Gamma distribution assumption may not hold when there are wireless links in the path. A study of CDMA 1xEVDO networks was initiated to understand the delay characteristics of these networks. During this study, it was found that the widely deployed proportional-fair (PF) scheduler can be corrupted accidentally or deliberately to cause RTM impairments. This is demonstrated using measurements conducted over both in-lab and deployed CDMA 1xEVDO networks. A new variant to PF that solves the impairment vulnerability of the PF algorithm is proposed and evaluated using ns-2 simulations. It is shown that this new scheduler solution together with a new adaptive-alpha initialization strategy reduces the starvation problem of the PF algorithm.
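
    As a much-simplified analogue of the heuristics-based route change detector described above, the sketch below flags a route change when the windowed minimum RTT shifts by more than a threshold (transient queueing inflates individual RTTs, but a sustained change in the minimum suggests a new path). The window size and threshold are assumptions.

        def detect_route_changes(rtts, window=50, min_shift_ms=5.0):
            """Flag sample indices where the windowed minimum RTT jumps by >= min_shift_ms."""
            events = []
            for i in range(window, len(rtts) - window + 1, window):
                before = min(rtts[i - window:i])
                after = min(rtts[i:i + window])
                if abs(after - before) >= min_shift_ms:
                    events.append(i)
            return events

        # Synthetic trace: 40 ms baseline RTT; a route change adds 10 ms at sample 100
        trace = [40.0] * 100 + [50.0] * 100
        print(detect_route_changes(trace))  # [100]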

    Network Performance Management Using Application-centric Key Performance Indicators

    The Internet and intranets are viewed as capable of supplying Anything, Anywhere, Anytime, and e-commerce, e-government, e-community, and military C4I are now deploying many and varied applications to serve their needs. Network management is currently centralized in operations centers. To assure customer satisfaction with network performance, operators typically plan, configure and monitor the network devices to ensure an excess of bandwidth, that is, they overprovision. If this proves uneconomical, or if complex and poorly understood interactions of equipment, protocols and application traffic degrade performance creating customer dissatisfaction, another, more application-centric, way of managing the network will be needed. This research investigates a new qualitative class of network performance measures derived from the current quantitative metrics known as quality of service (QoS) parameters. The proposed class of qualitative indicators focuses on utilizing current network performance measures (QoS values) to derive abstract quality of experience (QoE) indicators by application class. These measures may provide a more user- or application-centric means of assessing network performance even when some individual QoS parameters approach or exceed specified levels. The mathematics of functional analysis suggests treating QoS performance values as a vector, and, by mapping the degradation of the application performance to a characteristic lp-norm curve, a qualitative QoE value (good/poor) can be calculated for each application class. A similar procedure could calculate a QoE node value (satisfactory/unsatisfactory) to represent the service level of the switch or router for the current mix of application traffic. To demonstrate the utility of this approach, a discrete event simulation (DES) test-bed, in the OPNET telecommunications simulation environment, was created modeling the topology and traffic of three semi-autonomous networks connected by a backbone. Scenarios, designed to degrade performance by under-provisioning links or nodes, are run to evaluate QoE for an access network. The application classes and traffic load are held constant. Future research would include refinement of the mathematics, many additional simulations, and scenarios varying other independent variables. Finally, collaboration with researchers in areas as diverse as human computer interaction (HCI), software engineering, teletraffic engineering, and network management will enhance the concepts modeled.
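
    The proposed QoS-vector-to-QoE mapping can be sketched as follows: normalize each QoS parameter's degradation to [0, 1], take a weighted lp-norm over the vector, and threshold the result into good/poor. The weights, p, and threshold below are assumptions for illustration, not values from the research.

        def qoe_label(degradations, weights, p=2.0, threshold=0.5):
            """Map normalized QoS degradations (0 = perfect, 1 = worst) to a QoE label."""
            s = sum(w * (d ** p) for d, w in zip(degradations, weights))
            score = (s / sum(weights)) ** (1.0 / p)
            return "good" if score < threshold else "poor"

        # Hypothetical application class with delay, jitter and loss degradations:
        print(qoe_label([0.2, 0.1, 0.05], weights=[0.5, 0.3, 0.2]))  # "good"
        print(qoe_label([0.9, 0.7, 0.6], weights=[0.5, 0.3, 0.2]))   # "poor"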