
    A survey of haptics in serious gaming

    Serious gaming often requires a high level of realism for training and learning purposes. Haptic technology has proved useful in many applications, adding a perceptual modality complementary to audio and vision. It provides a novel user experience that enhances the immersion of virtual reality with a physical control layer. This survey focuses on haptic technology and its applications in serious gaming. Several categories of related applications are listed and discussed in detail, primarily those in which haptics acts as a cognitive aid or as a main component of serious game design. We categorize haptic devices into tactile, force-feedback, and hybrid ones to suit different haptic interfaces, followed by a description of common haptic gadgets in gaming. Haptic modeling methods, in particular available SDKs and libraries for commercial or academic use, are summarized. We also analyze the existing research difficulties and technology bottlenecks of haptics and foresee future research directions.

    Exploring the effects of replicating shape, weight and recoil effects on VR shooting controllers

    Commercial Virtual Reality (VR) controllers with realistic force feedback are becoming available to increase the realism and immersion of first-person shooting (FPS) games in VR. These controllers attempt to mimic not only the shape and weight of real guns but also their recoil effects (linear force feedback parallel to the barrel when the gun is shot). As these controllers become more popular and affordable, this paper investigates the actual effects that these properties (shape, weight, and especially directional force feedback) have on performance for general VR users (e.g. users with no marksmanship experience), drawing conclusions for both consumers and device manufacturers. We created a prototype replicating the properties exploited by commercial VR controllers (i.e. shape, weight, and adjustable force feedback) and used it to assess the effect of these parameters on user performance across a series of user studies. We first analysed the benefits to user performance of adding weight and shape versus a conventional controller (e.g. the Vive controller). We then explored the implications of adding linear force feedback (LFF) as well as replicating the shape and weight. Our studies show negligible effects on immediate shooting performance but some improvements in subjective appreciation, which are already present at low levels of LFF. While higher levels of LFF do not increase subjective appreciation any further, they lead users to reach their maximum distance skillset more quickly. This indicates that while adding low levels of LFF can be enough to influence users' immersion/engagement in gaming contexts, controllers with higher levels of LFF might be better suited for training environments and/or particularly demanding aiming tasks.
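
    The abstract does not specify the recoil model itself; as a rough illustration of what an adjustable linear force feedback pulse along the barrel axis could look like, here is a minimal Python sketch (the peak-force and decay parameters are hypothetical, not taken from the paper):

        import math

        def recoil_force(t_s, peak_n=30.0, decay_s=0.02):
            """Hypothetical LFF recoil pulse: force in newtons along the
            barrel axis at t_s seconds after the trigger pull. peak_n is
            the adjustable LFF level; decay_s sets how fast it dies off."""
            if t_s < 0:
                return 0.0
            return peak_n * math.exp(-t_s / decay_s)

        # Sample the pulse at 1 kHz for the first 10 ms of the shot.
        pulse = [recoil_force(i / 1000.0) for i in range(10)]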

    Assessment of haptics-based interaction for assembly tasks in virtual reality

    This thesis examines the benefits of haptics-based interaction for performing assembly-related tasks in a virtual environment. A software application that combined freeware and open-source software development kits was developed and demonstrated principles of physics-based modeling in a haptics-enabled immersive virtual environment. A user study was designed to evaluate subjects performing a series of experiments relevant to the assembly engineering process, including weight recognition, part positioning, and assembly simulation. Each experiment featured a structure based on factorial combinations of effects, resulting in a series of designed trials. Methods of assessing user performance were established based on task completion time and accuracy. Using a randomized complete block design, a sample population of forty individuals performed all trials within the experiments in random sequences. Statistical methods were used to analyze the performances of individuals upon the conclusion of the study. When compared to visuals-only methods, the results show that haptics-based interaction is beneficial in improving performance, including reduced completion times for weight comparisons, higher placement accuracy when positioning virtual objects, and steadier hand motions along three-dimensional trajectories. Furthermore, the results indicate that accuracy in weight identification depends on both the hand controlling the object and the sensory modality used. The study was inconclusive in determining the effect of haptics-based interaction on completion times when positioning objects or completing manual assembly tasks.
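
    The abstract mentions a randomized complete block design over factorial trial combinations; the following minimal Python sketch shows how such per-subject trial sequences can be generated (the factor names and levels are hypothetical placeholders, not the thesis's actual factors):

        import itertools
        import random

        # Hypothetical factors for illustration only.
        modalities = ["visuals-only", "haptics"]
        hands = ["dominant", "non-dominant"]
        tasks = ["weight recognition", "part positioning", "assembly"]

        # One block = every factorial combination performed once.
        trials = list(itertools.product(modalities, hands, tasks))

        def block_sequence(subject_id):
            """Randomized complete block design: each subject (block)
            performs every trial exactly once, in a random order."""
            rng = random.Random(subject_id)  # reproducible per subject
            order = trials[:]
            rng.shuffle(order)
            return order

        sequences = [block_sequence(s) for s in range(40)]  # forty subjects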

    HCI models, theories, and frameworks: Toward a multidisciplinary science

    Motivation. The movement of body and limbs is inescapable in human-computer interaction (HCI). Whether browsing the web or intensively entering and editing text in a document, our arms, wrists, and fingers are at work on the keyboard, mouse, and desktop. Our head, neck, and eyes move about, attending to feedback marking our progress. This chapter is motivated by the need to match the movement limits, capabilities, and potential of humans with input devices and interaction techniques on computing systems. Our focus is on models of human movement relevant to human-computer interaction. Some of the models discussed emerged from basic research in experimental psychology, whereas others emerged from, and were motivated by, the specific need in HCI to model the interaction between users and physical devices, such as mice and keyboards. As much as we focus on specific models of human movement and user interaction with devices, this chapter is also about models in general. We will say a lot about the nature of models, what they are, and why they are important tools for the research and development of human-computer interfaces.

    Overview: Models and Modeling. By its very nature, a model is a simplification of reality. However, a model is useful only if it helps in designing, evaluating, or otherwise providing a basis for understanding the behaviour of a complex artifact such as a computer system. It is convenient to think of models as lying on a continuum, with analogy and metaphor at one end and mathematical equations at the other. Most models lie somewhere in between. Toward the metaphoric end are descriptive models; toward the mathematical end are predictive models. These two categories are our particular focus in this chapter, and we shall visit a few examples of each. Two models will be presented in detail and in case studies: Fitts' model of the information-processing capability of the human motor system and Guiard's model of bimanual control. Fitts' model is a mathematical expression emerging from the rigors of probability theory. It is a predictive model at the mathematical end of the continuum, to be sure, yet when applied as a model of human movement it has characteristics of a metaphor. Guiard's model emerged from a detailed analysis of how humans use their hands in everyday tasks, such as writing, drawing, playing a sport, or manipulating objects. It is a descriptive model, lacking in mathematical rigor but rich in expressive power.
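
    The abstract names Fitts' model without stating it; in the Shannon formulation standard in HCI, the predicted movement time MT to a target of width W at distance (amplitude) D is

        MT = a + b \log_2 (D/W + 1)

    where a and b are empirically fitted regression constants and the logarithmic term is the index of difficulty, measured in bits.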

    Augmenting Visual Feedback Using Sensory Substitution

    Direct interaction in virtual environments can be realized using relatively simple hardware, such as standard webcams and monitors. The result is a large gap between the stimuli present in real-world interactions and those provided in the virtual environment, which reduces efficiency and effectiveness when performing tasks. Conceivably, these missing stimuli might be supplied through a visual modality, using sensory substitution. This work suggests a display technique that attempts to usefully and non-detrimentally employ sensory substitution to display proximity, tactile, and force information. We solve three problems with existing feedback mechanisms. In attempting to add information to existing visuals, we need to balance three goals: not occluding the existing visual output; not causing the user to look away from the existing visual output, or otherwise distracting the user; and displaying as much new information as possible. We assume the user interacts with a virtual environment consisting of a manually controlled probe and a set of surfaces. Our solution is a pseudo-shadow: a shadow-like projection of the user's probe onto the surface being explored or manipulated. Instead of drawing the probe, we draw only the pseudo-shadow and use it as a canvas on which to add other information. Static information is displayed by varying the parameters of a procedural texture rendered in the pseudo-shadow. The probe velocity and probe-surface distance modify this texture to convey dynamic information. Much of the computation occurs on the GPU, so the pseudo-shadow renders quickly enough for real-time interaction. As a result, this work contains three contributions: a simple collision detection and handling mechanism that can generalize to distance-based force fields; a way to display content during probe-surface interaction that reduces occlusion and spatial distraction; and a way to visually convey small-scale tactile texture.
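
    The abstract states that probe velocity and probe-surface distance modulate a procedural texture in the pseudo-shadow; here is a minimal Python sketch of one plausible mapping (the parameter names and ranges are hypothetical, not taken from the work):

        def pseudo_shadow_params(distance_mm, speed_mm_s,
                                 max_distance_mm=50.0, max_speed_mm_s=200.0):
            """Map probe state to hypothetical procedural-texture inputs.
            A closer probe yields a denser, sharper texture (proximity),
            and faster motion stretches it (dynamic information), so the
            shadow conveys data without occluding the scene."""
            proximity = max(0.0, 1.0 - distance_mm / max_distance_mm)  # 0..1
            motion = min(1.0, speed_mm_s / max_speed_mm_s)             # 0..1
            return {
                "density": 0.2 + 0.8 * proximity,  # dot density of the texture
                "sharpness": proximity,            # edge contrast near contact
                "stretch": 1.0 + 2.0 * motion,     # elongation along velocity
            }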

    Effectiveness of haptic feedback coupled with the use of a head-mounted display for the evaluation of virtual mechanisms

    Adequate immersion in virtual environments is key to a successful virtual simulation experience. As people have more of a sense of being there (telepresence) when they experience a virtual simulation, their experience becomes more realistic and they are therefore able to make valid assessments of their environments. This thesis presents the results of a study focused on the evaluation of participants' perceptional and preferential differences between a haptic and non-haptic virtual experience coupled with the use and non-use of a head-mounted display (HMD). Several measurements were used in order to statistically compare the performance of participants from four groups: haptic with the HMD, non-haptic with the HMD, haptic without the HMD, and non-haptic without the HMD. The study found that the virtual environment (VE) display type, either HMD or desktop monitor, affected participants' ability to detect mechanism differences related to motion, arm length, and distances (mechanism length and location), and influenced the amount of time required to evaluate each mechanism design during trial one. The treatment type (haptic or non-haptic) affected participants' ability to estimate mechanism differences, influenced the detection of mechanism arm length differences, and resulted in differences in the amount of time needed to evaluate each mechanism design. Regardless of which treatment participants initially experienced, participants overwhelmingly preferred the haptic treatment to the non-haptic treatment. The results of this study will help scientists make more informed decisions related to haptic device utilization, head-mounted display use, and the interaction of the two. Several recommendations for future human factors studies related to haptic sensation, HMD use, and virtual reality are also included.

    Evaluating 3D pointing techniques

    "This dissertation investigates various issues related to the empirical evaluation of 3D pointing interfaces. In this context, the term ""3D pointing"" is appropriated from analogous 2D pointing literature to refer to 3D point selection tasks, i.e., specifying a target in three-dimensional space. Such pointing interfaces are required for interaction with virtual 3D environments, e.g., in computer games and virtual reality. Researchers have developed and empirically evaluated many such techniques. Yet, several technical issues and human factors complicate evaluation. Moreover, results tend not to be directly comparable between experiments, as these experiments usually use different methodologies and measures. Based on well-established methods for comparing 2D pointing interfaces this dissertation investigates different aspects of 3D pointing. The main objective of this work is to establish methods for the direct and fair comparisons between 2D and 3D pointing interfaces. This dissertation proposes and then validates an experimental paradigm for evaluating 3D interaction techniques that rely on pointing. It also investigates some technical considerations such as latency and device noise. Results show that the mouse outperforms (between 10% and 60%) other 3D input techniques in all tested conditions. Moreover, a monoscopic cursor tends to perform better than a stereo cursor when using stereo display, by as much as 30% for deep targets. Results suggest that common 3D pointing techniques are best modelled by first projecting target parameters (i.e., distance and size) to the screen plane.

    Multi-touch Detection and Semantic Response on Non-parametric Rear-projection Surfaces

    The ability of human beings to physically touch our surroundings has had a profound impact on our daily lives. Young children learn to explore their world by touch; likewise, many simulation and training applications benefit from natural touch interactivity. As a result, modern interfaces supporting touch input are ubiquitous. Typically, such interfaces are implemented on integrated touch-display surfaces with simple geometry that can be mathematically parameterized, such as planar surfaces and spheres; for more complicated non-parametric surfaces, such parameterizations are not available. In this dissertation, we introduce a method for generalizable optical multi-touch detection and semantic response on uninstrumented non-parametric rear-projection surfaces using an infrared-light-based multi-camera multi-projector platform. In this paradigm, touch input allows users to manipulate complex virtual 3D content that is registered to and displayed on a physical 3D object. Detected touches trigger responses with specific semantic meaning in the context of the virtual content, such as animations or audio responses. The broad problem of touch detection and response can be decomposed into three major components: determining if a touch has occurred, determining where a detected touch has occurred, and determining how to respond to a detected touch. Our fundamental contribution is the design and implementation of a relational lookup table architecture that addresses these challenges through the encoding of coordinate relationships among the cameras, the projectors, the physical surface, and the virtual content. Detecting the presence of touch input primarily involves distinguishing between touches (actual contact events) and hovers (near-contact proximity events). We present and evaluate two algorithms for touch detection and localization utilizing the lookup table architecture. One of the algorithms, a bounded plane sweep, is additionally able to estimate hover-surface distances, which we explore for interactions above surfaces. The proposed method is designed to operate with low latency and to be generalizable. We demonstrate touch-based interactions on several physical parametric and non-parametric surfaces, and we evaluate both system accuracy and the accuracy of typical users in touching desired targets on these surfaces. In a formative human-subject study, we examine how touch interactions are used in the context of healthcare and present an exploratory application of this method in patient simulation. A second study highlights the advantages of touch input on content-matched physical surfaces achieved by the proposed approach, such as decreases in induced cognitive load, increases in system usability, and increases in user touch performance. In this experiment, novice users were nearly as accurate when touching targets on a 3D head-shaped surface as when touching targets on a flat surface, and their self-perception of their accuracy was higher.
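
    The dissertation's central contribution is a relational lookup table encoding coordinate relationships among the cameras, projectors, physical surface, and virtual content; here is a minimal Python sketch of what such a table might look like (the key and field layout is a hypothetical illustration, not the actual architecture):

        from typing import NamedTuple, Optional

        class SurfaceEntry(NamedTuple):
            surface_xyz: tuple   # 3D point on the physical non-parametric surface
            projector_px: tuple  # (projector_id, u, v) illuminating that point
            content_uv: tuple    # coordinate in the registered virtual content

        # (camera_id, x, y) -> SurfaceEntry, built offline during calibration.
        lookup: dict = {}

        def respond_to_touch(camera_id, x, y, semantic_map) -> Optional[str]:
            """Resolve a detected touch at a camera pixel to a semantic
            response (e.g. an animation or audio cue) attached to the
            virtual content at that surface location."""
            entry = lookup.get((camera_id, x, y))
            if entry is None:
                return None  # this pixel does not observe the surface
            return semantic_map.get(entry.content_uv)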

    Ubiquitous haptic feedback in human-computer interaction through electrical muscle stimulation

    [no abstract]