446 research outputs found

    Mechanism of Integrating Force and Vibrotactile Cues for 3D User Interaction within Virtual Environments

    Proper integration of sensory cues facilitates 3D user interaction within virtual environments (VEs). Studies have shown that the integration of visual and haptic cues follows maximum likelihood estimation (MLE). However, little effort has focused on the mechanism of integrating force and vibrotactile cues. We thus investigated MLE's suitability for integrating these cues. Within a VE, human users undertook a 3D interaction task: navigating a flying drone along a high-voltage transmission line for inspection. The users received individual force or vibrotactile cues, and their combinations in collocated and dislocated settings. The users' task performance, including completion time and accuracy, was assessed under each individual cue and setting. The presence of the vibrotactile cue promoted better performance than the force cue alone. This agreed with the applicability of tactile cues for sensing 3D surfaces, setting a baseline for using MLE. The task performance under the collocated setting indicated a degree of combination of the individual cues. In contrast, the performance under the dislocated setting was similar to that under the individual vibrotactile cue alone. These observations imply a possible role of MLE in integrating force and vibrotactile cues for 3D user interaction within VEs.
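The MLE account tested in this abstract has a standard form: each cue contributes an estimate weighted by its reliability (inverse variance), and the integrated estimate has lower variance than either cue alone. A minimal sketch, with illustrative cue means and variances that are not values from the study:

```python
def mle_integrate(mu_force, var_force, mu_tactile, var_tactile):
    """Combine two sensory estimates under the standard MLE model:
    each cue is weighted in proportion to its inverse variance."""
    w_force = (1.0 / var_force) / (1.0 / var_force + 1.0 / var_tactile)
    w_tactile = 1.0 - w_force
    mu = w_force * mu_force + w_tactile * mu_tactile
    # The integrated variance is lower than either single-cue variance.
    var = (var_force * var_tactile) / (var_force + var_tactile)
    return mu, var

# Illustrative values: a noisy force cue and a more reliable vibrotactile cue.
mu, var = mle_integrate(10.0, 4.0, 12.0, 1.0)
```

Under this model, the behavioral signature the study looks for is that the combined-cue condition yields an estimate pulled toward the more reliable cue, with reduced variability.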

    An exploration on the integration of vibrotactile and force cues for 3D interactive tasks

    Vibrotactile and force cues of the haptic modality are increasingly used to facilitate interactive tasks in three-dimensional (3D) virtual environments (VEs). While maximum likelihood estimation (MLE) explains the integration of multi-sensory cues in many studies, an existing work yielded mean and amplitude mismatches when using MLE to interpret the integration of vibrotactile and force cues. To investigate these mismatches, we proposed mean-shifted MLE and conducted a study comparing MLE and mean-shifted MLE. Mean-shifted MLE shared the same additive assumption about the cues as MLE, but took into account the mean differences of both cues. In a VE, the study replicated the visual scene, the 3D interactive task, and the cues from the existing work. All human participants in the study were biased to rely on the vibrotactile cue for their task, departing from the unbiased reliance on both cues in the existing work. After validating the replications, we applied MLE and mean-shifted MLE to interpret the integration of the vibrotactile and force cues. As in the existing work, MLE failed to explain the mean mismatch. Mean-shifted MLE remedied this mismatch, but the amplitude mismatch remained. Further examination revealed that the integration of the vibrotactile and force cues might violate the additive assumption of MLE and mean-shifted MLE. This sheds light on modeling the integration of vibrotactile and force cues to aid 3D interactive tasks within VEs.
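The abstract does not give the exact formulation of mean-shifted MLE, but its stated assumptions (the same additive inverse-variance weighting as MLE, plus an explicit account of the mean difference between cues) suggest a sketch like the following; the offset parameter `delta` and all numeric values are illustrative assumptions, not the paper's definitions:

```python
def mean_shifted_mle(mu_vib, var_vib, mu_force, var_force, delta):
    """Plausible sketch of a mean-shifted MLE: identical inverse-variance
    weighting, but the force cue is first shifted by an estimated mean
    offset `delta` so both cues refer to a common mean."""
    mu_force_shifted = mu_force + delta
    w_vib = (1.0 / var_vib) / (1.0 / var_vib + 1.0 / var_force)
    mu = w_vib * mu_vib + (1.0 - w_vib) * mu_force_shifted
    # Variance combines exactly as in standard MLE (additive assumption).
    var = (var_vib * var_force) / (var_vib + var_force)
    return mu, var
```

Note how this construction can repair a mean mismatch (via `delta`) while leaving the predicted variance, and hence any amplitude mismatch, unchanged, which matches the pattern of results the abstract reports.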

    The effects of substitute multisensory feedback on task performance and the sense of presence in a virtual reality environment

    Objective and subjective measures of performance in virtual reality environments increase as more sensory cues are delivered and as simulation fidelity increases. Some cues (colour or sound) are easier to present than others (object weight, vestibular cues), so substitute cues can be used to enhance informational content in a simulation at the expense of simulation fidelity. This study evaluates how substituting cues in one modality by alternative cues in another modality affects subjective and objective performance measures in a highly immersive virtual reality environment. Participants performed a wheel change in a virtual reality (VR) environment. Auditory, haptic and visual cues, signalling critical events in the simulation, were manipulated in a factorial design. Subjective ratings were recorded via questionnaires. The time taken to complete the task was used as an objective performance measure. The results show that participants performed best and felt an increased sense of immersion and involvement, collectively referred to as 'presence', when substitute multimodal sensory feedback was provided. Significant main effects of audio and tactile cues on task performance and on participants' subjective ratings were found. A significant negative relationship was found between the objective (overall completion times) and subjective (ratings of presence) performance measures. We conclude that increasing informational content, even if it disrupts fidelity, enhances performance and users' overall experience. On this basis we advocate the use of substitute cues in VR environments as an efficient method to enhance performance and user experience.

    Wearable haptic systems for the fingertip and the hand: taxonomy, review and perspectives

    In the last decade, we have witnessed a drastic change in the form factor of audio and vision technologies, from heavy and grounded machines to lightweight devices that naturally fit our bodies. However, only recently have haptic systems started to be designed with wearability in mind. The wearability of haptic systems enables novel forms of communication, cooperation, and integration between humans and machines. Wearable haptic interfaces are capable of communicating with their human wearers during interaction with the environment they share, in a natural and yet private way. This paper presents a taxonomy and review of wearable haptic systems for the fingertip and the hand, focusing on those systems directly addressing wearability challenges. The paper also discusses the main technological and design challenges for the development of wearable haptic interfaces, and it reports on the future perspectives of the field. Finally, the paper includes two tables summarizing the characteristics and features of the most representative wearable haptic systems for the fingertip and the hand.

    Investigation of dynamic three-dimensional tangible touchscreens: Usability and feasibility

    It may soon be possible for touchscreen controls to move from two physical dimensions to three. Though solutions exist for enhanced tactile touchscreen interaction using vibrotactile devices, no definitive commercial solution yet exists for providing real, physical shape to the virtual buttons on a touchscreen display. Of the many next steps in interface technology, this paper concentrates on the path leading to tangible, dynamic touchscreen surfaces. An experiment was performed that explores the usage differences between a flat-surface touchscreen and one augmented with raised surface controls. The results were mixed. The combination of tactile-visual modalities had a negative effect on task completion time when visual attention was focused on a single task (single-target task time increased by 8% and serial-target task time increased by 6%). On the other hand, the dual modality had a positive effect on error rate when visual attention was divided between two tasks (the serial-target error rate decreased by 50%). In addition to the experiment, this study also investigated the feasibility of creating a dynamic, three-dimensional, tangible touchscreen. A new interface solution may be possible by inverting the traditional touchscreen architecture and integrating emerging technologies such as organic light emitting diode (OLED) displays and electrorheological-fluid-based tactile pins.

    Master of Science

    Haptic feedback in modern game controllers is limited to vibrotactile feedback. The addition of skin-stretch feedback would significantly improve the type and quality of haptic feedback provided by game controllers. Skin-stretch feedback requires small forces (around a few newtons) and translations (as small as 0.5 mm) to provide identifiable direction cues. Prior work has developed skin-stretch mechanisms in two form factors: a flat form factor and a tall but compact (cubic) form factor. These mechanisms have been shown to be effective actuators for skin-stretch feedback, and are small enough to fit inside a game controller. Additional prior work has shown that the cubic skin-stretch mechanism can be integrated into a thumb joystick for use with game controllers. This thesis presents the design, characterization, and testing of two skin-stretch game controllers. The first game controller provides skin stretch via a 2-axis mechanism integrated into its thumb joysticks. This controller uses the cubic skin-stretch mechanism to drive the skin stretch. Concerns that users' motions of the joystick could negatively impact the saliency of skin stretch rendered from the joystick prompted the design of a controller that provides 2-axis skin stretch to users' middle fingers on the back side of the controller. Two experiments were conducted with the two controllers. One experiment had participants identify the direction of skin stretch from a selection of 8 possible directions. This test compared users' accuracies with both controllers, and with five different finger restraints on the back-tactor controller. Results showed that users' identification accuracy was similar across feedback conditions. A second experiment used skin stretch to rotationally guide participants to a randomized target angle. Three different feedback strategies were tested. Results showed that a strategy called sinusoidal feedback, which provided feedback that varied in frequency and amplitude as a function of the user's relative position to the tactor, performed significantly better on all performance metrics than the other feedback strategies. It is important to note that the sinusoidal feedback only requires two 1-axis skin-stretch actuators, which are spatially separated, in order to provide feedback. The other, lower-performing feedback strategies used two 2-axis skin-stretch actuators.
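The sinusoidal feedback strategy described above varies both the frequency and the amplitude of the tactor signal with the user's angular distance from the target. The thesis's actual control law is not given in the abstract; the following is a hypothetical sketch of that idea, where the function name, parameter ranges, and mapping are illustrative assumptions:

```python
import math

def sinusoidal_feedback(angle_error, max_error=math.pi,
                        min_freq=1.0, max_freq=10.0, max_amp=1.0):
    """Hypothetical sketch of frequency/amplitude guidance: drive a 1-axis
    skin-stretch tactor with a sinusoid whose frequency and amplitude both
    scale with the angular distance to the target, so the signal fades to
    nothing as the user converges on the target angle."""
    # Normalize the absolute angular error to [0, 1].
    e = min(abs(angle_error), max_error) / max_error
    freq = min_freq + e * (max_freq - min_freq)  # Hz, higher when far off
    amp = e * max_amp                            # displacement scale, zero on target
    return freq, amp
```

A controller loop would sample the user's orientation each frame, call this mapping, and render the resulting sinusoid on each of the two spatially separated 1-axis tactors.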

    Robotic simulators for tissue examination training with multimodal sensory feedback

    Tissue examination by hand remains an essential technique in clinical practice. Its effective application depends on skills in sensorimotor coordination, mainly involving haptic, visual, and auditory feedback. The skills clinicians have to learn can be as subtle as regulating finger pressure with breathing, choosing palpation actions, monitoring involuntary facial and vocal expressions in response to palpation, and using pain expressions both as a source of information and as a constraint on physical examination. Patient simulators can provide a safe learning platform for novice physicians before they examine real patients. This paper presents the first review of state-of-the-art medical simulators for tissue examination training with a consideration of providing multimodal feedback, so as to cover as many manual examination techniques as possible. The study summarizes current advances in tissue examination training devices simulating different medical conditions and providing different types of feedback modalities. Opportunities in the development of pain expression, tissue modeling, actuation, and sensing are also analyzed to support the future design of effective tissue examination simulators.

    A Person-Centric Design Framework for At-Home Motor Learning in Serious Games

    In motor learning, real-time multi-modal feedback is a critical element in guided training. Serious games have been introduced as a platform for at-home motor training due to their highly interactive and multi-modal nature. This dissertation explores the design of a multimodal environment for at-home training in which an autonomous system observes and guides the user in the place of a live trainer, providing real-time assessment, feedback and difficulty adaptation as the subject masters a motor skill. After an in-depth review of the latest solutions in this field, this dissertation proposes a person-centric approach to the design of this environment, in contrast to the standard techniques implemented in related work, to address many of the limitations of those approaches. The unique advantages and restrictions of this approach are presented in the form of a case study in which a system entitled the "Autonomous Training Assistant", consisting of both hardware and software for guided at-home motor learning, is designed and adapted for a specific individual and trainer. In this work, the design of an autonomous motor learning environment is approached from three areas: motor assessment, multimodal feedback, and serious game design. For motor assessment, a 3-dimensional assessment framework is proposed which comprises two spatial (posture, progression) and one temporal (pacing) domains of real-time motor assessment. For multimodal feedback, a rod-shaped device called the "Intelligent Stick" is combined with an audio-visual interface to provide feedback to the subject in three domains (audio, visual, haptic). Feedback domains are mapped to modalities, and feedback is provided whenever the user's performance deviates from the ideal performance level by an adaptive threshold. Approaches for multi-modal integration and feedback fading are discussed. Finally, a novel approach for stealth adaptation in serious game design is presented. This approach allows serious games to incorporate motor tasks in a more natural way, facilitating self-assessment by the subject. Three different stealth adaptation approaches are presented and evaluated using the flow-state ratio metric. The dissertation concludes with directions for future work on the integration of stealth adaptation techniques across the field of exergames.
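The threshold-gated feedback described in the abstract (feedback fires when performance deviates from the ideal by more than an adaptive threshold, with fading as the user improves) can be sketched as follows. This is an illustrative sketch, not the dissertation's implementation; the function name, adaptation rule, and rate are assumptions:

```python
def feedback_events(performance, ideal, threshold, adapt_rate=0.1):
    """Illustrative threshold-gated feedback: fire a feedback event when
    performance deviates from the ideal by more than the current threshold,
    then adapt the threshold so cueing fades as performance stabilizes."""
    events = []
    for t, p in enumerate(performance):
        deviation = abs(p - ideal)
        if deviation > threshold:
            events.append(t)                 # trigger an audio/visual/haptic cue
            threshold *= (1.0 + adapt_rate)  # relax: avoid overwhelming the user
        else:
            threshold *= (1.0 - adapt_rate)  # tighten as performance stabilizes
    return events
```

In a full system, each fired event would be routed to one of the three feedback modalities according to the assessment domain (posture, progression, or pacing) in which the deviation occurred.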