
    Research on real-time physics-based deformation for haptic-enabled medical simulation

    This study developed an effective visuo-haptic surgical engine that handles a variety of surgical manipulations in real time. Soft-tissue models are based on biomechanical experiments and continuum mechanics for greater accuracy. Such models will increase the realism of future training systems and of VR/AR/MR implementations for the operating room.
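
    To illustrate the kind of real-time deformation loop such an engine needs, the sketch below advances a toy mass-spring soft-tissue mesh by one explicit time step. It is a minimal sketch only: the study itself uses continuum-mechanics-based models fitted to biomechanical experiments, and the node positions, spring topology, and constants here are hypothetical.

```python
# Minimal sketch, not the paper's model: a toy explicit mass-spring update for
# soft-tissue deformation. The actual engine uses continuum-mechanics-based
# models; node positions, spring topology, and constants here are hypothetical.
import numpy as np

def step(positions, velocities, springs, rest_lengths, k=500.0, mass=0.01,
         damping=0.98, dt=1e-3, gravity=np.array([0.0, -9.81, 0.0])):
    """Advance the mesh by one explicit Euler step (positions: (N, 3) array)."""
    forces = np.tile(gravity * mass, (len(positions), 1))
    for (i, j), rest in zip(springs, rest_lengths):
        d = positions[j] - positions[i]
        length = np.linalg.norm(d) + 1e-9
        f = k * (length - rest) * (d / length)    # Hooke's law along the spring
        forces[i] += f                            # equal and opposite forces
        forces[j] -= f
    velocities = (velocities + dt * forces / mass) * damping
    return positions + dt * velocities, velocities
```

    A haptic rendering loop would typically call such a step at a high, fixed rate (on the order of 1 kHz) and feed the resulting contact forces back to the haptic device.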

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy-logic-based method to track user satisfaction without the need for devices to monitor users' physiological conditions. User satisfaction is the key to any product's acceptance; computer applications and video games offer a unique opportunity to tailor the environment to each user's needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests that physiological measurements are needed: we show that it is possible to use a software-only method to estimate user emotion.
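
    The fuzzy-inference idea behind a FLAME-style emotion estimate can be sketched as follows: game events are fuzzified by how desirable and how expected they are, and a small rule base maps them to emotion intensities. The membership functions, rule weights, and emotion labels below are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a FLAME-style fuzzy emotion estimate; membership functions,
# rules, and the joy/distress mapping are illustrative assumptions, not the
# paper's implementation.

def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def estimate_emotion(desirability, expectation):
    """Map a fuzzified game event to joy/distress intensities (min/max rules)."""
    desirable   = triangular(desirability, 0.0, 1.0, 2.0)    # event helps the player
    undesirable = triangular(desirability, -2.0, -1.0, 0.0)  # event hurts the player
    unexpected  = 1.0 - min(max(expectation, 0.0), 1.0)      # low expectation

    # Rule base: joy for desirable events, amplified when they are unexpected;
    # distress for undesirable events.
    joy = max(desirable * 0.7, min(desirable, unexpected))
    distress = undesirable
    return {"joy": joy, "distress": distress}

print(estimate_emotion(desirability=1.2, expectation=0.3))  # mostly joy
```

    In a game integration, the event desirability and expectation values would be derived from in-game signals (damage taken, kills, objectives), which is what allows the estimate to work without physiological sensors.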

    Digital Fabrication Approaches for the Design and Development of Shape-Changing Displays

    Interactive shape-changing displays enable dynamic representations of data and information through physically reconfigurable geometry. The actuated physical deformations of these displays can be utilised in a wide range of new application areas, such as dynamic landscape and topographical modelling, architectural design, physical telepresence and object manipulation. Traditionally, shape-changing displays have had a high development cost in mechanical complexity, technical skills and the time and finances required for fabrication. There is still a limited number of robust shape-changing displays that go beyond one-off prototypes. Specifically, there is limited focus on low-cost, accessible design and development approaches involving digital fabrication (e.g. 3D printing). To address this challenge, this thesis presents accessible digital fabrication approaches that support the development of shape-changing displays with a range of application examples, such as physical terrain modelling and interior design artefacts. Both laser cutting and 3D printing methods have been explored to ensure generalisability and accessibility for a range of potential users. The first design-led content-generation explorations show that novice users from the general public can successfully design and present their own application ideas using the physical animation features of the display. By engaging with domain experts in designing shape-changing content to represent data specific to their work domains, the thesis demonstrates the utility of shape-changing displays beyond novel systems and describes practical use-case scenarios and applications through rapid prototyping methods. The thesis then demonstrates new ways of designing and building shape-changing displays that go beyond currently available implementation examples (e.g. pin arrays and continuous-surface shape-changing displays). To achieve this, the thesis demonstrates how laser cutting and 3D printing can be utilised to rapidly fabricate deformable surfaces for shape-changing displays with embedded electronics. The thesis concludes with a discussion of research implications and future directions for this work.
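
    A common form of shape-changing display content is terrain data driven onto a pin array. The sketch below shows that mapping under assumed values for the grid size and actuator travel, with a placeholder send_pin_height() command standing in for the real actuator interface; it is not the fabrication pipeline or hardware described in the thesis.

```python
# Illustrative sketch only: mapping terrain data onto a pin-array display.
# GRID_W/GRID_H, MAX_TRAVEL_MM, and send_pin_height() are assumed placeholders,
# not the hardware or pipeline described in the thesis.
import numpy as np

GRID_W, GRID_H = 16, 16     # pins across the display (assumed)
MAX_TRAVEL_MM = 40.0        # physical actuator travel (assumed)

def heightmap_to_pin_targets(heightmap):
    """Downsample a 2D heightmap to the pin grid and scale to millimetres."""
    h, w = heightmap.shape
    ys = np.linspace(0, h - 1, GRID_H).astype(int)
    xs = np.linspace(0, w - 1, GRID_W).astype(int)
    sampled = heightmap[np.ix_(ys, xs)]
    span = sampled.max() - sampled.min() + 1e-9
    return (sampled - sampled.min()) / span * MAX_TRAVEL_MM

def send_pin_height(x, y, mm):
    """Placeholder for whatever serial/servo command drives one pin."""
    print(f"pin({x},{y}) -> {mm:.1f} mm")

terrain = np.random.rand(256, 256)   # stand-in for real terrain data
for (y, x), mm in np.ndenumerate(heightmap_to_pin_targets(terrain)):
    send_pin_height(x, y, mm)
```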

    Supporting Eyes-Free Human–Computer Interaction with Vibrotactile Haptification

    The sense of touch is crucial when we use our hands in complex tasks. Some tasks we even learn to perform without sight, relying only on the sense of touch in our fingers and hands. Modern touchscreen devices, however, have lost some of that tactile feeling by removing physical controls from the interaction. Touch is also underutilized in interactions with technology and could offer new ways of supporting users. In certain situations, users of information technology cannot focus completely, visually or mentally, on the interaction. Humans can utilize their sense of touch more comprehensively and learn to understand tactile information while interacting with information technology. This thesis introduces a set of experiments that evaluate human capabilities to notice and understand tactile information provided by current actuator technology, and it further introduces examples of haptic user interfaces (HUIs) for eyes-free use scenarios. The experiments evaluate the benefits of such interfaces for users, and the thesis concludes with guidelines and methods for creating this kind of user interface. The experiments can be divided into three groups. In the first group, two experiments evaluated the detection of vibrotactile stimuli and the interpretation of the abstract meaning of vibrotactile feedback. Experiments in the second group evaluated how to design rhythmic vibrotactile tactons to serve as basic vibrotactile primitives for HUIs. The last group of two experiments evaluated how these HUIs benefit users in distracted, eyes-free interaction scenarios. The primary aim of this series of experiments was to evaluate whether current actuation technology could be used more comprehensively than in present-day solutions limited to simple haptic alerts and notifications, that is, whether comprehensive use of vibrotactile feedback in interaction provides additional benefits for users compared with current haptic and non-haptic interaction methods. The main finding of this research is that with more comprehensive HUIs in eyes-free, distracted-use scenarios, such as driving a car, the user's main task of driving is performed better. Furthermore, users liked the comprehensively haptified user interfaces.
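
    A rhythmic vibrotactile tacton can be thought of as a named on/off pulse pattern sent to an actuator. The sketch below illustrates that idea; the vibrate() call and the example patterns are placeholders for an actuator API, not the stimuli used in the thesis.

```python
# Minimal sketch of rhythmic vibrotactile tactons as named on/off pulse
# patterns. vibrate() is a placeholder for an actuator API; the patterns are
# illustrative, not the stimuli used in the thesis.
import time

TACTONS = {
    # (on_ms, off_ms) pairs forming a rhythmic pattern
    "short-short-long": [(80, 120), (80, 120), (400, 0)],
    "heartbeat":        [(60, 80), (60, 600)],
}

def vibrate(duration_ms):
    """Placeholder: drive the vibrotactile actuator for duration_ms."""
    print(f"buzz {duration_ms} ms")
    time.sleep(duration_ms / 1000.0)

def play_tacton(name, repeats=2):
    """Repeat a rhythmic tacton a few times so its rhythm is recognisable."""
    for _ in range(repeats):
        for on_ms, off_ms in TACTONS[name]:
            vibrate(on_ms)
            time.sleep(off_ms / 1000.0)

play_tacton("short-short-long")
```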

    Multi-touch Detection and Semantic Response on Non-parametric Rear-projection Surfaces

    Our ability to physically touch our surroundings has had a profound impact on our daily lives. Young children learn to explore their world by touch; likewise, many simulation and training applications benefit from natural touch interactivity. As a result, modern interfaces supporting touch input are ubiquitous. Typically, such interfaces are implemented on integrated touch-display surfaces with simple geometry that can be mathematically parameterized, such as planar surfaces and spheres; for more complicated non-parametric surfaces, such parameterizations are not available. In this dissertation, we introduce a method for generalizable optical multi-touch detection and semantic response on uninstrumented non-parametric rear-projection surfaces using an infrared-light-based multi-camera multi-projector platform. In this paradigm, touch input allows users to manipulate complex virtual 3D content that is registered to and displayed on a physical 3D object. Detected touches trigger responses with specific semantic meaning in the context of the virtual content, such as animations or audio responses. The broad problem of touch detection and response can be decomposed into three major components: determining if a touch has occurred, determining where a detected touch has occurred, and determining how to respond to a detected touch. Our fundamental contribution is the design and implementation of a relational lookup table architecture that addresses these challenges through the encoding of coordinate relationships among the cameras, the projectors, the physical surface, and the virtual content. Detecting the presence of touch input primarily involves distinguishing between touches (actual contact events) and hovers (near-contact proximity events). We present and evaluate two algorithms for touch detection and localization utilizing the lookup table architecture. One of the algorithms, a bounded plane sweep, is additionally able to estimate hover-surface distances, which we explore for interactions above surfaces. The proposed method is designed to operate with low latency and to be generalizable. We demonstrate touch-based interactions on several physical parametric and non-parametric surfaces, and we evaluate both system accuracy and the accuracy of typical users in touching desired targets on these surfaces. In a formative human-subject study, we examine how touch interactions are used in the context of healthcare and present an exploratory application of this method in patient simulation. A second study highlights the advantages of touch input on content-matched physical surfaces achieved by the proposed approach, such as decreases in induced cognitive load, increases in system usability, and increases in user touch performance. In this experiment, novice users were nearly as accurate when touching targets on a 3D head-shaped surface as when touching targets on a flat surface, and their self-perceived accuracy was higher.
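
    The relational lookup-table idea can be sketched as a mapping from a calibrated camera pixel to a 3D surface point, a projector pixel, and a semantic region of the virtual content, so that a detected touch can be localised and given a content-specific response without a mathematical parameterization of the surface. The calibration entries, region names, and response handlers below are hypothetical, chosen only to show the table's structure.

```python
# Sketch of the relational lookup-table idea under assumed data: each calibrated
# camera pixel maps to a 3D surface point, a projector pixel, and a semantic
# region, so a detected touch can be localised and given a content-specific
# response. Entries, region names, and handlers here are hypothetical.
from dataclasses import dataclass

@dataclass
class SurfaceEntry:
    surface_xyz: tuple   # 3D point on the physical object (metres)
    projector_px: tuple  # corresponding projector pixel
    region: str          # semantic region of the registered virtual content

# Built offline from camera/projector/surface calibration; keyed by camera pixel.
lookup = {
    (412, 308): SurfaceEntry((0.02, 0.11, 0.31), (640, 512), "left_cheek"),
    (415, 310): SurfaceEntry((0.03, 0.11, 0.31), (644, 514), "left_cheek"),
}

def respond_to_touch(camera_px, responses):
    """Resolve a detected touch pixel to a semantic response via the table."""
    entry = lookup.get(camera_px)
    if entry is None:
        return None                              # touch outside calibrated area
    handler = responses.get(entry.region)
    return handler(entry) if handler else None

responses = {"left_cheek": lambda e: f"play wince animation near {e.surface_xyz}"}
print(respond_to_touch((412, 308), responses))
```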