Predicting the Perceived Quality of a First Person Shooter Game: the Team Fortress 2 T-Model
This paper describes the development of a model which quantifies the effects of latency on a player's perceived game quality in the networked first person shooter game Team Fortress 2, in an attempt to replicate and extend previous research done on other games. We conducted a study to measure the subjective quality of gameplay under various induced latency conditions. We determined that latency had a significant effect on a player's perceived quality of the game, yet did not significantly affect player performance. Using these data, we constructed the Team Fortress 2 T-model to predict the mean opinion score of a Team Fortress 2 game based on known network conditions.
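The latency-to-MOS mapping described above can be illustrated with a minimal sketch. The linear form, coefficients, and clamping range below are invented placeholders for demonstration, not the fitted Team Fortress 2 model from the paper.

```python
# Hypothetical illustration of a latency-based MOS predictor.
# The linear form and coefficients are assumptions for demonstration;
# they are NOT the fitted model from the paper.

def predict_mos(latency_ms: float, intercept: float = 4.5,
                slope: float = 0.005) -> float:
    """Map one-way latency (ms) to a Mean Opinion Score clamped to [1, 5]."""
    mos = intercept - slope * latency_ms
    return max(1.0, min(5.0, mos))

for latency in (0, 100, 300):
    print(latency, round(predict_mos(latency), 2))  # 4.5, 4.0, 3.0
```

In practice such coefficients would be fitted by regressing subjective quality ratings against induced latency, as the study's design suggests.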
Latency and player actions in online games
The growth and penetration of broadband access networks to the home has fueled the growth of online games played over the Internet. As we write this article, it is 5am on a typical weekday morning and Gamespy Arcade reports more than 250,000 players online playing about 75,000 games! This proliferation of online games has been matched by an equivalent growth in both th
Distributed virtual environment scalability and security
Distributed virtual environments (DVEs) have been an active area of research and engineering for more than 20 years. The most widely deployed DVEs are network games such as Quake, Halo, and World of Warcraft (WoW), with millions of users and billions of dollars in annual revenue. Deployed DVEs remain expensive centralized implementations despite significant research outlining ways to distribute DVE workloads.
This dissertation shows previous DVE research evaluations are inconsistent with deployed DVE needs. Assumptions about avatar movement and proximity - fundamental scale factors - do not match WoW's workload, and likely the workload of other deployed DVEs. Alternate workload models are explored and preliminary conclusions presented. Using realistic workloads, it is shown that a fully decentralized DVE cannot be deployed to today's consumers, regardless of its overhead.
Residential broadband speeds are improving, and this limitation will eventually disappear. When it does, appropriate security mechanisms will be a fundamental requirement for technology adoption.
A trusted auditing system ("Carbon") is presented which has good security, scalability, and resource characteristics for decentralized DVEs. When performing exhaustive auditing, Carbon adds 27% network overhead to a decentralized DVE with a WoW-like workload. This resource consumption can be reduced significantly, depending upon the DVE's risk tolerance.
Finally, the Pairwise Random Protocol (PRP) is described. PRP enables adversaries to fairly resolve probabilistic activities, an ability missing from most decentralized DVE security proposals.
Thus, this dissertation's contribution is to address two of the obstacles to deploying research on decentralized DVE architectures: first, the lack of evidence that research results apply to existing DVEs; second, the lack of security systems combining appropriate security guarantees with acceptable overhead.
Quantifying the Effect of Latency on Game Actions in BZFlag
Latency's effect on performance in online games is becoming an increasingly relevant area of research. While much research has been performed on the broad topic of latency in games, this project studied the effect of latency on game actions based on their precision and deadline characteristics. We modified a publicly available game to allow for artificially induced latency, ran experiments in that game with varying precision and deadline characteristics, and analyzed the resulting data to determine the trends.
On the effectiveness of an optimization method for the traffic of TCP-based multiplayer online games
This paper studies the feasibility of using an optimization method, based on multiplexing and header compression, for the traffic of Massively Multiplayer Online Role Playing Games (MMORPGs) using TCP at the Transport Layer. Different scenarios where a number of flows share a common network path are identified. The adaptation of the multiplexing method is explained, and a formula for the savings is devised. The header compression ratio is obtained using real traces of a popular game, and a statistical model of its traffic is used to obtain the bandwidth saving as a function of the number of players and the multiplexing period. The obtained savings can be up to 60% for IPv4 and 70% for IPv6. A Mean Opinion Score model from the literature is employed to calculate the limits of the multiplexing period that can be used without harming the user experience. The interactions between multiplexed and non-multiplexed flows, sharing a bottleneck with different kinds of background traffic, are studied through simulations. As a result of the tests, some limits for the multiplexing period are recommended: the unfairness between players can be low if the value of the multiplexing period is kept under 10 or 20 ms. TCP background flows using SACK (Selective Acknowledgment) and Reno yield better results, in terms of fairness, than Tahoe and New Reno. When UDP is used for background traffic, high values of the multiplexing period may worsen the unfairness between flows if network congestion is severe.
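The per-packet arithmetic behind header-compression savings can be sketched as follows. The compressed-header size and payload size are illustrative assumptions, not the paper's measured values; only the standard IPv4+TCP header size is a known quantity.

```python
# Illustrative estimate of the bandwidth saving from multiplexing N
# small game flows into one packet per multiplexing period.
# IPv4 (20 B) + TCP (20 B) headers are standard; the compressed-header
# and payload sizes are ASSUMED values, not measurements from the paper.

IP_TCP_HEADER = 40      # bytes: IPv4 (20) + TCP (20)
COMPRESSED_HDR = 4      # bytes per multiplexed sub-packet (assumed)
PAYLOAD = 15            # bytes of game data per packet (assumed, small)

def saving(num_players: int) -> float:
    """Fraction of bytes saved by sending one multiplexed packet
    instead of num_players native packets."""
    native = num_players * (IP_TCP_HEADER + PAYLOAD)
    muxed = IP_TCP_HEADER + num_players * (COMPRESSED_HDR + PAYLOAD)
    return 1 - muxed / native

print(f"{saving(20):.0%}")  # → 62%
```

With tiny MMORPG payloads the 40-byte header dominates each native packet, which is why savings in the paper's reported range become possible; the saving grows with the number of flows sharing the path.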
A Stochastic Model of Plausibility in Live-Virtual-Constructive Environments
Distributed live-virtual-constructive simulation promises a number of benefits for the test and evaluation community, including reduced costs, access to simulations of limited availability assets, the ability to conduct large-scale multi-service test events, and recapitalization of existing simulation investments. However, geographically distributed systems are subject to fundamental state consistency limitations that make assessing the data quality of live-virtual-constructive experiments difficult. This research presents a data quality model based on the notion of plausible interaction outcomes. This model explicitly accounts for the lack of absolute state consistency in distributed real-time systems and offers system designers a means of estimating data quality and fitness for purpose. Experiments with World of Warcraft player trace data validate the plausibility model and exceedance probability estimates. Additional experiments with synthetic data illustrate the model's use in ensuring fitness for purpose of live-virtual-constructive simulations and estimating the quality of data obtained from live-virtual-constructive experiments.
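The plausibility idea can be illustrated with a hedged sketch: an observed interaction outcome is treated as plausible if the remote entity could have reached the observed position within the state-update delay, and an exceedance probability is estimated empirically from a trace of position errors. The speed bound and functions below are illustrative assumptions, not the paper's actual stochastic model.

```python
# Minimal sketch of a plausibility check for a distributed simulation.
# An outcome is "plausible" if the entity could have moved from its last
# known position to the observed one within the update delay.
# The max_speed bound is an assumed parameter, not from the paper.
import math

def plausible(last_pos, observed_pos, delay_s, max_speed):
    """True if the observed position is reachable within the delay."""
    return math.dist(last_pos, observed_pos) <= max_speed * delay_s

def exceedance(errors, threshold):
    """Empirical probability that a position error exceeds threshold."""
    return sum(e > threshold for e in errors) / len(errors)

print(plausible((0, 0), (3, 4), 1.0, 5.0))   # reachable: 5 m in 1 s at 5 m/s
print(exceedance([1, 2, 3, 4], 2.5))         # fraction of errors > 2.5
```

In this framing, system designers would tune the acceptable exceedance probability to the fitness-for-purpose requirements of a given test event.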
Latency Thresholds for Usability in Games: A Survey
User interactions in interactive applications are time-critical operations; late response will degrade the experience. Sensitivity to delay does, however, vary greatly between games. This paper surveys existing literature on the specifics of this limitation. We find a classification where games are grouped with others of roughly the same requirements. In addition, we find some numbers on how long latency is acceptable. These numbers are, however, inconsistent between studies, indicating inconsistent methodology or insufficient classification of games and interactions. To improve classification, we suggest some changes. In general, research is too sparse to draw any strong or statistically significant conclusions. In some of the most time-critical games, latency seems to degrade the experience at about 50 ms.
Elements of Infrastructure Demand in Multiplayer Video Games
With the advent of organized eSports, game streaming, and always-online video games, there exist new and more pronounced demands on players, developers, publishers, spectators, and other video game actors. By identifying and exploring elements of infrastructure in multiplayer games, this paper augments Bowman's (2018) conceptualization of demands in video games by introducing a new category of "infrastructure demand" of games. This article describes how the infrastructure increasingly built around video games creates demands upon those interacting with these games, either as players, spectators, or facilitators of multiplayer video game play. We follow the method described by Susan Leigh Star (1999), who writes that infrastructure is as mundane as it is a critical part of society and as such is particularly deserving of academic study. When infrastructure works properly it fades from view, but in doing so loses none of its importance to human endeavor. This work therefore helps to make visible the invisible elements of infrastructure present in and around multiplayer video games and explicates the demands these elements create on people interacting with those games.