
    HAPTIC VISUALIZATION USING VISUAL TEXTURE INFORMATION

    Haptics enables users to interact with and manipulate virtual objects. Although haptics research has influenced many areas, the inclusion of computer haptics in computer vision, especially content-based image retrieval (CBIR), remains limited. The purpose of this research is to design and validate a haptic texture search framework that allows texture retrieval to be performed not just visually but also haptically, addressing the gap between the computer haptics and CBIR fields. The research focuses on cloth textures. The proposed framework comprises a haptic texture rendering algorithm and a query algorithm, integrating computer haptics and CBIR so that haptic texture rendering is performed from extracted cloth data. For query purposes, the data are characterized and texture similarity is calculated. Wavelet decomposition is used to extract feature information from the texture data, and in the search process the data are retrieved based on their distribution. The experiments validating the framework show that haptic texture rendering can be performed with techniques that use either a simple waveform or visual texture information. The unstable forces generated during rendering were due to limitations of the device. In the query process, accuracy is determined by the number of feature vector elements, the data extraction method, and the similarity measurement algorithm. User testing shows that users' perception of haptic feedback differs depending on the rendering algorithm: a simple algorithm, i.e. a sine wave, produces more stable force feedback but lacks surface detail compared to the visual texture information approach.
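
    A minimal sketch of the wavelet-based texture query idea described above, assuming PyWavelets and plain Euclidean distance; the feature choice (sub-band energy and spread) and all function names are illustrative, not the thesis's actual pipeline.

```python
import numpy as np
import pywt  # PyWavelets

def texture_features(image, wavelet="db4", levels=3):
    """Describe a grayscale texture by the energy and spread of each
    wavelet detail sub-band (a common CBIR feature choice)."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    feats = []
    for band in coeffs[1:]:            # skip approximation coefficients
        for sub in band:               # horizontal, vertical, diagonal
            feats.append(np.mean(np.abs(sub)))  # sub-band energy
            feats.append(np.std(sub))           # distribution spread
    return np.asarray(feats)

def rank_by_similarity(db_features, probe):
    """Return database indices ordered from most to least similar."""
    return np.argsort(np.linalg.norm(db_features - probe, axis=1))
```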

    MetaSpace II: Object and full-body tracking for interaction and navigation in social VR

    MetaSpace II (MS2) is a social Virtual Reality (VR) system where multiple users can not only see and hear but also interact with each other, grasp and manipulate objects, walk around in space, and get tactile feedback. MS2 allows walking in physical space by tracking each user's skeleton in real-time and allows users to feel by employing passive haptics i.e., when users touch or manipulate an object in the virtual world, they simultaneously also touch or manipulate a corresponding object in the physical world. To enable these elements in VR, MS2 creates a correspondence in spatial layout and object placement by building the virtual world on top of a 3D scan of the real world. Through the association between the real and virtual world, users are able to walk freely while wearing a head-mounted device, avoid obstacles like walls and furniture, and interact with people and objects. Most current virtual reality (VR) environments are designed for a single user experience where interactions with virtual objects are mediated by hand-held input devices or hand gestures. Additionally, users are only shown a representation of their hands in VR floating in front of the camera as seen from a first person perspective. We believe, representing each user as a full-body avatar that is controlled by natural movements of the person in the real world (see Figure 1d), can greatly enhance believability and a user's sense immersion in VR.Comment: 10 pages, 9 figures. Video: http://living.media.mit.edu/projects/metaspace-ii
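
    The real-to-virtual correspondence underlying MS2's passive haptics can be sketched, under assumptions, as a rigid calibration between the tracker frame and the 3D-scan frame; the transform (R, t) and the function name below are hypothetical, not the system's actual API.

```python
import numpy as np

def to_virtual(p_real, R, t):
    """Map a tracked point (e.g., a skeleton joint) from tracker
    coordinates into the virtual world, which shares the coordinate
    frame of the 3D scan of the room."""
    return R @ p_real + t

# With the same (R, t) applied to every joint and every scanned object,
# touching a real table places the avatar's hand on its virtual twin.
```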

    Enhancing the use of Haptic Devices in Education and Entertainment

    This research was part of the two-year Horizon 2020 European project "weDRAW". The premise of the project was that "specific sensory systems have specific roles to learn specific concepts". This work explores the use of the haptic modality, stimulated by means of force-feedback devices, to convey abstract concepts inside virtual reality. After a review of the current use of haptic devices in education and of available haptic software and game engines, we focus on the implementation of a haptic plugin for game engines (HPGE, based on the state-of-the-art rendering library CHAI3D) and its evaluation in human perception and multisensory integration experiments.
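
    As a rough illustration of what a force-feedback plugin such as HPGE must compute each millisecond, here is a minimal penalty-based rendering step of the kind libraries like CHAI3D implement; the stiffness value and the device calls named in the comments are assumptions, not HPGE's actual interface.

```python
import numpy as np

STIFFNESS = 700.0  # N/m; kept below typical device stability limits

def surface_force(proxy, tool):
    """Penalty-based rendering: a spring pulls the haptic tool back
    toward the proxy point constrained to the object's surface."""
    return STIFFNESS * (proxy - tool)

# Hypothetical 1 kHz device loop:
#   tool  = device.position()
#   proxy = project_onto_surface(tool)   # collision/constraint step
#   device.apply_force(surface_force(proxy, tool))
```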

    High Frequency Acceleration Feedback Significantly Increases the Realism of Haptically Rendered Textured Surfaces

    Almost every physical interaction generates high-frequency vibrations, especially if one of the objects is a rigid tool. Previous haptics research has hinted that the inclusion or exclusion of these signals plays a key role in the realism of haptically rendered surface textures, but this connection had not been formally investigated until now. This paper presents a human subject study that compares the performance of a variety of surface rendering algorithms for a master-slave teleoperation system; each controller provides the user with a different combination of position and acceleration feedback, and subjects compared the renderings with direct tool-mediated exploration of the real surface. We use analysis of variance to examine quantitative performance metrics and qualitative realism ratings across subjects. The results show that algorithms combining high-frequency acceleration feedback with position feedback achieve significantly higher realism ratings than traditional position feedback alone. Furthermore, we present a frequency-domain metric for quantifying a controller's acceleration feedback performance; given a constant surface stiffness, the median of this metric across subjects was found to have a significant positive correlation with the median realism rating.
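
    The central idea, position feedback augmented with high-frequency acceleration playback, can be sketched as follows; the gains, cutoff frequency, and filter order are illustrative assumptions, not the study's actual controllers.

```python
import numpy as np
from scipy.signal import butter, lfilter

FS = 1000.0                                      # haptic update rate (Hz)
b, a = butter(2, 20.0 / (FS / 2), btype="high")  # keep content above 20 Hz

def feedback_forces(x_desired, x_slave, accel, kp=300.0, ka=0.05):
    """Per-sample force: proportional position feedback plus a vibration
    term taken from the high-pass filtered slave-tool acceleration."""
    hf = lfilter(b, a, accel)                    # high-frequency component
    return kp * (x_desired - x_slave) + ka * hf
```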

    Collision Awareness Using Vibrotactile Arrays

    What is often missing from many virtual worlds is a physical sense of the confinement and constraint of the virtual environment. To address this issue, we present a method for providing localized cutaneous vibratory feedback to the user's right arm. We created a sleeve of tactors linked to a real-time human model; the tactors activate when the corresponding body area collides with an object. The hypothesis is that vibrotactile feedback to body areas gives the wearer sufficient guidance to ascertain the existence and physical realism of access paths and body configurations. The results of human subject experiments clearly show that full-arm vibrotactile feedback improves performance over purely visual feedback in navigating the virtual environment, validating the empirical performance of this concept.
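
    A minimal sketch of the collision-to-tactor mapping such a sleeve needs; the tactor layout, activation radius, and function name are illustrative assumptions.

```python
import numpy as np

# Tactor index -> position along the sleeve, in arm-local coordinates (m).
TACTORS = {
    0: np.array([0.00, 0.0, 0.0]),  # wrist
    1: np.array([0.15, 0.0, 0.0]),  # mid forearm
    2: np.array([0.30, 0.0, 0.0]),  # elbow
}

def tactors_to_fire(contact_point, radius=0.08):
    """Drive every tactor near the collision point, so vibration is
    felt locally on the part of the arm that hit the obstacle."""
    return [i for i, p in TACTORS.items()
            if np.linalg.norm(p - contact_point) < radius]
```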

    The embodied user : corporeal awareness & media technology

    Human beings are proficient users of tools and technology. At times, our interactions with a technological artifact appear so effortless that the distinction between the artifact and the body starts to fade. When operating anthropomorphically designed teleoperation systems, for example, some people develop the vivid experience that they are physically present at the remote site (i.e., telepresence). Others might even come to sense the slave robot's arms and hands as their own. The process in which the central nervous system categorizes an object as a part of the body, and in which a discrimination is made between what is contained within and outside the bodily boundaries, is called self-attribution. The aim of this thesis is twofold: (a) to determine the personal factors (e.g., the characteristics of an individual's psychological makeup) and situational factors (e.g., the appearance of objects) that constrain or facilitate self-attribution, and (b) to determine the degree to which these factors affect people's experiences with media technology.
    In Chapter 2, we describe the theoretical framework of our research, which is centered on a conception of the user of technology as an embodied agent. In this chapter we distinguish two important, but often confused, aspects of embodiment: the body schema and the body image. The body schema is defined as a dynamic distributed network of procedures aimed at guiding behavior. In contrast, we define the body image as a part of the process of consciousness and, thus, as consisting of those higher-order discriminations (or qualia) that pertain to the body and one's self-perception thereof.
    To investigate the individual and situational factors that constrain or facilitate self-attribution (i.e., incorporation into the body image), we employ the experimental paradigm of the rubber-hand illusion (Botvinick & Cohen, 1998). In this illusion, which is induced by stroking a person's concealed hand together with a visible fake one, some people start to sense the fake hand as an actual part of their body. In Chapter 3, we investigate the rubber-hand illusion under two mediated conditions: (1) a virtual reality condition, where both the fake hand and its stimulation were projected on the table in front of the participant, and (2) a mixed reality condition, where the fake hand was projected, but its stimulation was unmediated. Our experiment reveals that people can develop the rubber-hand illusion under mediated conditions, but the resulting illusion may, depending on the technology used, be less vivid than in the traditional unmediated setup. In Chapter 4, we investigate the extent to which visual discrepancies between the foreign object and a human hand affect people's ability to develop a vivid rubber-hand illusion. We found that people experience a more vivid illusion when the foreign object resembles the human hand in both shape and texture. Taken together, the experiments in Chapters 3 and 4 support the view that the rubber-hand illusion is not merely governed by a bottom-up process (i.e., based on visuotactile integration), but is affected, top-down, by a cognitive representation of what the human body is like (e.g., Tsakiris & Haggard, 2005).
    In the rubber-hand illusion, people commonly misperceive the location of their concealed hand toward the direction of the fake hand (Tsakiris & Haggard, 2005). As such, this so-called proprioceptive drift is often used as an alternative to self-reports in assessing the vividness of the illusion (e.g., Tsakiris & Haggard, 2005). In Chapter 5, we investigate the extent to which the observed shift in the felt position of the concealed hand can be attributed to experiencing the illusion. For this purpose, we test how various features of the experimental setup of the rubber-hand illusion, which in themselves are not sufficient to elicit the illusion, affect proprioceptive drift. We corroborate existing research demonstrating that looking at a fake hand or a tabletop for five minutes, in the absence of visuotactile stimulation, is sufficient to induce a change in the felt position of an unseen hand (e.g., Gross et al., 1974). Moreover, our experiments indicate that using proprioceptive drift as a measure of the strength of the rubber-hand illusion yields different conclusions than an assessment by means of self-reports. Based on these results, we question the validity of proprioceptive drift as an alternative measure of the vividness of the rubber-hand illusion.
    In Chapter 6, we propose and test a model of the vividness of the rubber-hand illusion. In two experiments, we successfully modeled people's self-reported experiences related to the illusion (e.g., "the fake hand felt as my own") based on three estimates: (a) a person's susceptibility to the rubber-hand illusion, (b) the processing demand required for a particular experience, and (c) the suppression/constraints imposed by the situation. We demonstrate that the impressions related to the rubber-hand illusion, and by inference the processes behind them, are comparable across persons. This is a non-trivial finding, as such invariance is required for an objective scaling of individual susceptibility and situational impediment on the basis of self-reported experiences. Regarding the validity of our vividness model, we confirm that asynchrony (e.g., Botvinick & Cohen, 1998) and information-poor stimulation (e.g., Armel & Ramachandran, 2003) constrain the development of a vivid rubber-hand illusion. Moreover, we demonstrate that the correlation between a person's susceptibility to the rubber-hand illusion and the extent of his or her proprioceptive drift is fairly moderate, confirming our conclusions from Chapter 5 regarding the limited validity of proprioceptive drift as a measure of the vividness of the illusion.
    In Chapter 7, we investigate the extent to which the large individual differences in people's susceptibility to the illusion can be explained by body image instability and by the ability to engage in motor imagery of the hand (i.e., in mental own-hand transformations). In addition, we investigate whether the vividness of the illusion depends on the anatomical implausibility of the fake hand's orientation. With respect to body image instability, we corroborate a small but significant correlation between susceptibility and body image aberration scores: as expected, people with a more unstable body image are also more susceptible to the rubber-hand illusion (cf. Burrack & Brugger, 2005). With respect to the position and orientation of the fake hand on the table, we demonstrate that people experience a less vivid rubber-hand illusion when the fake hand is oriented in an anatomically impossible, as compared to an anatomically possible, manner. This finding suggests that the attribution of foreign objects to the self is constrained by the morphological capabilities of the human body. With respect to motor imagery, our results indicate a small but significant correlation between susceptibility and response times in a speeded left- and right-hand identification task. In other words, people who are more attuned to engaging in mental own-hand transformations are also better equipped to develop vivid rubber-hand illusions.
    In Chapter 8, we examine the role of self-attribution in the experience of telepresence. For this purpose, we introduce the technological domain of mediated social touch (i.e., interpersonal touching over a distance). We anticipated that, compared to a morphologically incongruent input medium, a morphologically congruent medium would be more easily attributed to the self. As a result, we expected our participants to develop a stronger sense of telepresence when they could see their interaction partner performing the touches on a sensor-equipped mannequin as opposed to a touch screen. Our participants, as expected, reported higher levels of telepresence and demonstrated more physiological arousal with the mannequin input medium. At the same time, our experiment revealed that these effects might not have resulted from self-attribution, and thus that other psychological mechanisms of identification might play a role in telepresence experiences. In Chapter 9, the epilogue, we discuss the main contributions and limitations of this thesis, while taking a broader perspective on the field of research on media technologies and corporeal awareness.
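
    One hedged reading of the Chapter 6 vividness model is a Rasch-style formulation in which the probability of endorsing an illusion statement rises with personal susceptibility and falls with processing demand and situational suppression; the logistic link and parameter names below are assumptions, not the thesis's stated equation.

```python
import math

def p_endorse(susceptibility, demand, suppression):
    """Probability that a participant endorses a given illusion
    statement (hypothetical logistic reading of the three estimates)."""
    return 1.0 / (1.0 + math.exp(-(susceptibility - demand - suppression)))
```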

    Haptics Rendering and Applications

    There has been significant progress in haptic technologies, but the incorporation of haptics into virtual environments is still in its infancy. A wide range of human activities, including communication, education, art, entertainment, commerce, and science, would change forever if we learned how to capture, manipulate, and reproduce haptic sensory stimuli that are nearly indistinguishable from reality. For the field to move forward, many commercial and technological barriers need to be overcome. By rendering how objects feel through haptic technology, we communicate information in a physically based language that has never been explored before. Given the constant improvement in haptics technology and increasing levels of research into and development of haptics-related algorithms, protocols, and devices, there is reason to believe that haptics technology has a promising future.

    Hand-Tool-Tissue Interaction Forces in Neurosurgery for Haptic Rendering

    Haptics provides sensory stimuli that represent interaction with a virtual or telemanipulated object, and it is considered a valuable navigation and manipulation tool during tele-operated surgical procedures. Haptic feedback can be provided to the user via cutaneous information and kinesthetic feedback. Sensory subtraction removes the kinesthetic component of the haptic feedback, providing only the cutaneous component to the user. This technique guarantees a stable haptic feedback loop while keeping the transparency of the tele-operation system high, meaning that the system faithfully replicates and renders back the user's directives. This work investigates whether the interaction forces during a bench-model neurosurgery operation lie within the range of solely cutaneous perception of the human finger pads. If this assumption holds, it would be possible to exploit sensory subtraction techniques to provide surgeons with feedback from neurosurgery. We measured the forces exerted on surgical tools by three neurosurgeons performing typical actions on a brain phantom, using contact force sensors, while the forces exerted by the tools on the phantom tissue were recorded using a load cell placed under the brain phantom box. The measured surgeon-tool contact forces were 0.01 - 3.49 N for the thumb and 0.01 - 6.6 N for the index and middle fingers, whereas the measured tool-tissue interaction forces were six to eleven times smaller than the contact forces, i.e., 0.01 - 0.59 N. The measured contact forces fit the range of cutaneous sensitivity of the human finger pad; thus, in a tele-operated robotic neurosurgery scenario, it would be possible to render forces at the fingertip level by conveying haptic cues solely through the cutaneous channel of the surgeon's finger pads. This approach would allow high transparency.
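
    A quick arithmetic check of the reported ranges, using only the maxima quoted in the abstract, confirms the stated six-to-eleven-fold gap between contact and tool-tissue forces.

```python
# Reported maxima from the study (N).
contact_max = {"thumb": 3.49, "index/middle": 6.6}
tissue_max = 0.59  # tool-tissue force measured by the load cell

for finger, f in contact_max.items():
    print(f"{finger}: contact/tissue ratio = {f / tissue_max:.1f}x")
# thumb: 5.9x, index/middle: 11.2x, matching the six-to-eleven claim
```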

    Haptics in Robot-Assisted Surgery: Challenges and Benefits

    Robotic surgery is transforming current surgical practice, not only by improving conventional surgical methods but also by introducing innovative robot-enhanced approaches that broaden the capabilities of clinicians. Being mainly of a man-machine collaborative type, surgical robots are seen as media that transfer pre- and intra-operative information to the operator and reproduce his/her motion, with appropriate filtering, scaling, or limitation, to physically interact with the patient. The field, however, is far from maturity and, more critically, is still a subject of controversy in medical communities. Limited or absent haptic feedback is reputed to be among the reasons that impede the further spread of surgical robots. In this paper, the objectives and challenges of deploying haptic technologies in surgical robotics are discussed, and a systematic review is performed of works that have studied the effects of providing haptic information to users in the major branches of robotic surgery. We have tried to encompass both classical works and state-of-the-art approaches, aiming to deliver a comprehensive and balanced survey both for researchers starting their work in this field and for experts.