
    Measuring the impact of game controllers on player experience in FPS games

    An increasing number of games are released on multiple platforms, and game designers face the challenge of integrating different interaction paradigms for console and PC users while preserving a game's core mechanics. However, little research has addressed the influence of game controls on player experience. In this paper, we examine the impact of mouse-and-keyboard versus gamepad control in first-person shooters using the PC and PlayStation 3 versions of Battlefield: Bad Company 2. We conducted a study with 45 participants to compare the player experience and game usability issues of participants who had previously played similar games on one of the respective gaming systems, while also exploring the effects of forcing players to switch to an unfamiliar platform. The results show that players switching to a new platform experience more usability issues and consider themselves more challenged, but report an equally positive overall experience as players on their comfort platform. © 2011 ACM

    Is movement better? Comparing sedentary and motion-based game controls for older adults

    Providing cognitive and physical stimulation for older adults is critical for their well-being. Video games offer an opportunity to engage seniors, and research has shown a variety of positive effects of motion-based video games for older adults. However, little is known about the suitability of motion-based game controls for older adults and how their use is affected by age-related changes. In this paper, we present a study evaluating sedentary and motion-based game controls, with a focus on differences between younger and older adults. Our results show that older adults can use motion-based game controls efficiently and that they enjoy motion-based interaction. We present design implications based on our study and demonstrate how our findings can be applied both to motion-based game design and to general interaction design for older adults. Copyright held by authors.

    Temporal shape super-resolution by intra-frame motion encoding using high-fps structured light

    One solution for depth imaging of a moving scene is to project a static pattern onto the object and use just a single image for reconstruction. However, if the object's motion is too fast relative to the exposure time of the image sensor, the patterns in the captured image are blurred and reconstruction fails. In this paper, we impose multiple projection patterns onto each single captured image to realize temporal super-resolution of depth image sequences. With our method, multiple patterns are projected onto the object at a higher frame rate than the camera can capture. The observed pattern then varies with the depth and motion of the object, so temporal information about the scene can be extracted from each single image. The decoding process is realized using a learning-based approach in which no geometric calibration is needed. Experiments confirm the effectiveness of our method, with sequential shapes reconstructed from a single image. Quantitative evaluations and comparisons with recent techniques were also conducted. Comment: 9 pages; published at the International Conference on Computer Vision (ICCV 2017).
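
    The encoding idea can be illustrated with a toy simulation (our own sketch, not the authors' implementation): a single camera exposure integrates several high-fps projected patterns, each shifted by a depth- and motion-dependent disparity, so the captured image mixes temporal information that a learned decoder could try to invert. The function name and the simple horizontal-shift model below are assumptions.

    import numpy as np

    def simulate_exposure(patterns, disparities):
        # patterns: K float HxW projector patterns shown within one exposure
        # disparities: per-pattern horizontal pixel shift standing in for the
        # object's depth/motion at each sub-frame instant (toy geometry model)
        acc = np.zeros_like(patterns[0], dtype=float)
        for pat, d in zip(patterns, disparities):
            acc += np.roll(pat, shift=d, axis=1)
        return acc / len(patterns)  # the single blurred image the camera records

    rng = np.random.default_rng(0)
    pats = [rng.random((64, 64)) for _ in range(4)]          # 4 random dot patterns
    captured = simulate_exposure(pats, disparities=[0, 1, 2, 3])  # object moving right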

    Human mobility monitoring in very low resolution visual sensor network

    This paper proposes an automated system for monitoring mobility patterns using a network of very low resolution visual sensors (30 × 30 pixels). The use of very low resolution sensors reduces privacy concerns, cost, computational requirements, and power consumption. The core of our proposed system is a robust people tracker that uses the low resolution videos provided by the visual sensor network. The distributed processing architecture of our tracking system allows all image processing tasks to be done on the digital signal controller in each visual sensor. In this paper, we experimentally show that reliable tracking of people is possible using very low resolution imagery. We also compare the performance of our tracker against a state-of-the-art tracking method and show that our method outperforms it. Moreover, mobility statistics such as total distance traveled and average speed derived from the trajectories are compared with those derived from ground truth given by Ultra-Wide Band sensors. The results of this comparison show that the trajectories from our system are accurate enough to obtain useful mobility statistics.
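
    For instance, the two trajectory-derived statistics named above can be computed along the following lines (a generic sketch under assumed units; the function name and sample data are ours, not the paper's):

    import math

    def mobility_stats(track):
        # track: time-ordered list of (t_seconds, x_meters, y_meters) samples
        dist = sum(math.hypot(x2 - x1, y2 - y1)
                   for (_, x1, y1), (_, x2, y2) in zip(track, track[1:]))
        duration = track[-1][0] - track[0][0]
        avg_speed = dist / duration if duration > 0 else 0.0
        return dist, avg_speed  # total distance (m), average speed (m/s)

    total, speed = mobility_stats([(0, 0.0, 0.0), (2, 1.5, 0.0), (4, 1.5, 2.0)])
    # 3.5 m traveled over 4 s -> average speed 0.875 m/s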

    Characterizing the Effects of Local Latency on Aim Performance in First Person Shooters

    Real-time games such as first-person shooters (FPS) are sensitive to even small amounts of lag. The effects of network latency have been studied, but less is known about local latency -- that is, the lag caused by local sources such as input devices, displays, and the application itself. While local latency is important to gamers, we do not know how it affects aiming performance and whether its negative effects can be reduced. To explore these issues, we measured local latency in a variety of real-world gaming systems and carried out a controlled study focusing on targeting and tracking activities in an FPS game with varying degrees of local latency. In addition, we tested the ability of a lag compensation technique (based on aim assistance) to mitigate the negative effects. To motivate the need for these studies, we also examined how aim in FPS games differs from pointing in standard 2D tasks, showing significant differences in performance metrics. Our studies found local latencies in real-world systems ranging from 23 to 243 ms, which cause significant and substantial degradation in performance (even at latencies as low as 41 ms). The studies also showed that our compensation technique worked well, reducing the problems caused by lag in the case of targeting, and removing the problem altogether in the case of tracking. Our work shows that local latency is a real and substantial problem -- but game developers can mitigate it with appropriate compensation methods.
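
    The paper's compensation technique is based on aim assistance; one common aim-assistance mechanism, shown here purely as our own hedged sketch (names and constants are assumed, and this is not necessarily the authors' exact method), is "sticky targeting", which lowers the effective input gain near a target so that small lag-induced aiming errors matter less:

    import math

    def assisted_gain(cursor, target, base_gain=1.0, radius=30.0, slow_factor=0.4):
        # "Sticky target" style aim assist: reduce mouse gain when the crosshair
        # is within `radius` pixels of the target, damping lag-induced overshoot.
        dist = math.hypot(cursor[0] - target[0], cursor[1] - target[1])
        return base_gain * slow_factor if dist < radius else base_gain

    # Crosshair ~14 px from the target: gain drops from 1.0 to 0.4
    print(assisted_gain(cursor=(110.0, 60.0), target=(100.0, 50.0)))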

    Stereo and ToF Data Fusion by Learning from Synthetic Data

    Time-of-Flight (ToF) sensors and stereo vision systems are both capable of acquiring depth information, but they have complementary characteristics and issues. A more accurate representation of the scene geometry can be obtained by fusing the two depth sources. In this paper we present a novel framework for data fusion in which the contribution of the two depth sources is controlled by confidence measures that are jointly estimated using a Convolutional Neural Network. The two depth sources are fused by enforcing the local consistency of the depth data, taking into account the estimated confidence information. The deep network is trained on a synthetic dataset, and we show that the classifier is able to generalize to different data, obtaining reliable estimates not only on synthetic data but also on real-world scenes. Experimental results show that the proposed approach increases the accuracy of depth estimation on both synthetic and real data and that it outperforms state-of-the-art methods.
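
    The confidence-weighted combination step might look like the following sketch (our assumed formulation; the paper additionally enforces local consistency, which is omitted here, and all names are illustrative):

    import numpy as np

    def fuse_depth(d_tof, d_stereo, c_tof, c_stereo, eps=1e-6):
        # Per-pixel weighted average of the two depth maps, with weights given
        # by confidence maps (e.g. as a CNN might predict), each in [0, 1].
        w = c_tof + c_stereo + eps
        return (c_tof * d_tof + c_stereo * d_stereo) / w

    d_tof = np.full((4, 4), 2.0)      # ToF estimate: 2.0 m everywhere
    d_stereo = np.full((4, 4), 2.4)   # stereo estimate: 2.4 m everywhere
    fused = fuse_depth(d_tof, d_stereo,
                       np.full((4, 4), 0.8), np.full((4, 4), 0.2))
    # High ToF confidence pulls the result toward 2.0 m (2.08 m here)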