9 research outputs found

    The addition of the haptic modality to the virtual reality modeling language

    Thesis (S.B. and M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Includes bibliographical references (p. 40-43). By Evan F. Wies, S.B. and M.Eng.

    Shape-independent hardness estimation using deep learning and a GelSight tactile sensor

    Hardness is among the most important attributes of an object that humans learn about through touch. However, robots' ability to estimate hardness is limited by the information current tactile sensors provide. In this work, we address these limitations by introducing a novel method for hardness estimation based on the GelSight tactile sensor; the method does not require accurate control of the contact conditions or of the objects' shapes. GelSight has a soft contact interface and provides high-resolution tactile images of contact geometry, as well as contact force and slip conditions. In this paper, we use the sensor to measure the hardness of objects of multiple shapes under loosely controlled contact conditions. Contact is made manually or by a robot hand, while the force and trajectory are unknown and uneven. We analyze the data using a deep convolutional (and recurrent) neural network. Experiments show that the model can estimate the hardness of objects with different shapes and hardness values ranging from 8 to 87 on the Shore 00 scale.
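
    The paper's actual estimator is a convolutional/recurrent network trained on GelSight image sequences. As a minimal sketch of the underlying intuition only (not the paper's model, and with a hypothetical calibration constant), a softer object's contact area grows faster with indentation, so the inverse of that growth rate serves as a crude hardness proxy:

```python
# Toy illustration: softer objects -> faster contact-area growth per press
# frame -> lower hardness proxy. All values and the scale are assumptions.

def contact_area_growth(areas):
    """Mean frame-to-frame growth of contact area over a press sequence."""
    deltas = [b - a for a, b in zip(areas, areas[1:])]
    return sum(deltas) / len(deltas)

def hardness_proxy(areas, scale=100.0):
    """Map area growth to a Shore-00-like 0-100 scale (hypothetical calibration)."""
    growth = contact_area_growth(areas)
    return max(0.0, min(100.0, scale / (1.0 + growth)))

soft = [10, 30, 55, 85]    # contact area grows quickly under load
hard = [10, 14, 17, 19]    # contact area grows slowly
assert hardness_proxy(soft) < hardness_proxy(hard)
```

    The learned network replaces this hand-crafted feature with representations extracted directly from the raw tactile image sequence.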

    Multimodal Human-Machine Interface For Haptic-Controlled Excavators

    The goal of this research is to develop a human-excavator interface for the haptic-controlled excavator that makes use of multiple human sensing modalities (visual, auditory, haptic) and efficiently integrates these modalities to ensure an intuitive, efficient interface that is easy to learn and use and is responsive to operator commands. Two empirical studies were conducted to investigate conflict in the haptic-controlled excavator interface and to identify the level of force feedback that yields the best operator performance.

    Virtual environments for medical training : graphic and haptic simulation of tool-tissue interactions

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2004. Includes bibliographical references (leaves 122-127). For more than 2,500 years, surgical teaching has been based on the so-called "see one, do one, teach one" paradigm, in which the surgical trainee learns by operating on patients under close supervision of peers and superiors. However, higher demands on the quality of patient care and rising malpractice costs have made it increasingly risky to train on patients. Minimally invasive surgery, in particular, has made it more difficult for an instructor to demonstrate the required manual skills. It has been recognized that, similar to flight simulators for pilots, virtual reality (VR) based surgical simulators promise a safer and more comprehensive way to train the manual skills of medical personnel in general and surgeons in particular. One of the major challenges in the development of VR-based surgical trainers is the real-time, realistic simulation of interactions between surgical instruments and biological tissues. It involves multi-disciplinary research areas, including soft tissue mechanical behavior, tool-tissue contact mechanics, computer haptics, computer graphics, and robotics, integrated into VR-based training systems. The research described in this thesis addresses many of the problems of simulating tool-tissue interactions in medical virtual environments. First, two kinds of physically based real-time soft tissue models - the local deformation model and the hybrid deformation model - were developed to compute interaction forces and visual deformation fields that provide real-time feedback to the user. Second, a system to measure in vivo mechanical properties of soft tissues was designed, and eleven sets of animal experiments were performed to measure in vivo and in vitro biomechanical properties of porcine intra-abdominal organs. Viscoelastic tissue parameters were then extracted by matching finite element model predictions with the empirical data. Finally, the tissue parameters were combined with geometric organ models segmented from the Visible Human Dataset and integrated into a minimally invasive surgical simulation system consisting of haptic interface devices inside a mannequin and a graphic display. This system was used to demonstrate deformation and cutting of the esophagus, where the user can haptically interact with the virtual soft tissues and simultaneously see the corresponding organ deformation on the visual display. By Jung Kim, Ph.D.
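
    The thesis's local and hybrid deformation models are far richer than any one-line law, but the basic loop of real-time haptic rendering can be sketched with a single viscoelastic element. The following is an illustration only, with assumed stiffness and damping values, not the thesis's fitted tissue parameters: a Kelvin-Voigt element (spring in parallel with a damper) maps tool penetration into tissue to a reaction force at each haptic update.

```python
# Kelvin-Voigt viscoelastic sketch: F = k*x + c*dx/dt, with illustrative
# gains k (N/m) and c (N*s/m), evaluated at a 1 kHz haptic update rate.

def kelvin_voigt_force(depth, prev_depth, dt, k=300.0, c=5.0):
    """Reaction force (N) for tool penetration depth (m) at timestep dt (s)."""
    velocity = (depth - prev_depth) / dt
    return k * depth + c * velocity

# Tool indenting at constant velocity (0.5 mm per 1 ms step):
dt, depth = 0.001, 0.0
forces = []
for _ in range(10):
    prev, depth = depth, depth + 0.0005
    forces.append(kelvin_voigt_force(depth, prev, dt))
assert forces[-1] > forces[0]   # force grows with depth under steady pressing
```

    In the thesis, the spring-damper parameters of the real models were identified by matching finite element predictions against the in vivo measurement data.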

    User-Defined Gestures with Physical Props in Virtual Reality

    ©2021 Association for Computing Machinery. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in Proceedings of the ACM on Human-Computer Interaction, https://doi.org/10.1145/3486954. When interacting with virtual reality (VR) applications like CAD and open-world games, people may want to use gestures as a means of leveraging their knowledge from the physical world. However, people may prefer physical props over handheld controllers to input gestures in VR. We present an elicitation study where 21 participants chose from 95 props to perform manipulative gestures for 20 CAD-like and open-world game-like referents. When analyzing this data, we found existing methods for elicitation studies were insufficient to describe gestures with props, or to measure agreement with prop selection (i.e., agreement between sets of items). We proceeded by describing gestures as context-free grammars, capturing how different props were used in similar roles in a given gesture. We present gesture and prop agreement scores using a generalized agreement score that we developed to compare multiple selections rather than a single selection. We found that props were selected based on their resemblance to virtual objects and the actions they afforded; that gesture and prop agreement depended on the referent, with some referents leading to similar gesture choices, while others led to similar prop choices; and that a small set of carefully chosen props can support multiple gestures. NSERC, Discovery Grant 2016-04422 || NSERC, Discovery Grant 2019-06589 || NSERC, Discovery Accelerator Grant 492970-2016 || NSERC, CREATE Saskatchewan-Waterloo Games User Research (SWaGUR) Grant 479724-2016 || Ontario Ministry of Colleges and Universities, Ontario Early Researcher Award ER15-11-18
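
    The abstract notes that classic elicitation agreement rates compare single selections, while prop selection yields a *set* of items per participant. One plausible way to generalize agreement to sets (this is an illustration; the paper's exact formula may differ) is to average a set-similarity measure, such as Jaccard similarity, over all participant pairs:

```python
# Set-valued agreement sketch: mean pairwise Jaccard similarity across
# participants' selections for one referent. 1.0 = everyone chose the
# same set; values near 0 = little overlap.
from itertools import combinations

def jaccard(a, b):
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def set_agreement(selections):
    """Mean pairwise Jaccard similarity over all participant pairs."""
    pairs = list(combinations(selections, 2))
    if not pairs:
        return 1.0
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Three hypothetical participants choosing props for the same referent:
props = [{"ruler", "block"}, {"ruler", "block"}, {"ruler", "pen"}]
assert abs(set_agreement(props) - 5 / 9) < 1e-9
```

    With a single selection per participant, Jaccard similarity reduces to exact match, so a score of this shape degrades gracefully to the classic single-selection setting.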

    Robotic Trajectory Tracking: Position- and Force-Control

    This thesis employs a bottom-up approach to develop robust and adaptive learning algorithms for trajectory tracking: position and torque control. In the first phase, the focus is on following a freeform surface in a discontinuous manner. In addition to the resulting switching constraints, disturbances, and uncertainties, the case of unknown robot models is addressed. In the second phase, once contact has been established between the surface and the end effector and the freeform path is being followed, a desired force is applied. In order to react to changing circumstances, the manipulator needs to exhibit the features of an intelligent agent, i.e. it needs to learn and adapt its behaviour based on a combination of constant interaction with its environment and preprogrammed goals or preferences. The robotic manipulator mimics human behaviour using bio-inspired algorithms. In this way, the know-how and experience of human operators are exploited, as their knowledge is translated into robot skills. A selection of promising concepts is explored, developed, and combined to extend the application areas of robotic manipulators from monotonous, basic tasks in stiff environments to complex constrained processes. Conventional concepts (Sliding Mode Control, PID) are combined with bio-inspired learning (BELBIC, reinforcement-based learning) for robust and adaptive control. Independence from robot parameters is guaranteed through approximated robot functions using a neural network with online update laws and through model-free algorithms. The performance of the concepts is evaluated through simulations and experiments. In complex freeform trajectory-tracking applications, excellent absolute mean position errors (<0.3 rad) are achieved. Position and torque control are combined in a parallel concept with minimized absolute mean torque errors (<0.1 Nm).
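
    Of the conventional concepts named above, PID is the simplest to sketch. The following toy loop regulates a contact torque toward a setpoint; the gains and the one-dimensional first-order actuator model are illustrative assumptions, not the thesis's tuned controller or robot model.

```python
# Minimal PID torque regulation sketch. The integral term removes
# steady-state error; the derivative term damps the transient.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=0.01)
torque, target = 0.0, 1.0   # Nm
for _ in range(2000):       # 20 s of simulated time
    command = pid.update(target, torque)
    torque += (command - torque) * 0.05   # toy first-order actuator dynamics
assert abs(target - torque) < 0.1   # within the reported torque-error band
```

    The thesis layers sliding-mode and bio-inspired learning components on top of such baselines precisely because fixed gains like these cannot adapt to unknown robot models or switching contact constraints.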

    Measuring Tool Embodiment in Ready-to-Hand and Unready-to-Hand Situations Using Virtual and Physical Tools

    Virtual environments can provide access to a variety of information that can be designed to mimic physical attributes or afford physical-like actions. Virtual reality and other interaction modalities, such as multi-touch, tangible interaction, and mid-air gestures, often promise to be more natural, with the technology becoming invisible. However, there has been little investigation of how to quantitatively measure this invisibility when interacting with technology. For example, virtual reality provides physical-like actions that mimic physical-world interaction, but there is no direct methodology for measuring these complex interactions. As a result, designers of novel interactive technologies lack a clear understanding of how to measure these phenomena. Current research in human-computer interaction focuses on performance measures or self-report questionnaires to evaluate interactive technologies. Research in psychology and philosophy, on the other hand, provides an understanding of the human condition in the physical environment. Consequently, the aim of this dissertation is to provide an effective methodology, drawing on both experimental psychology and HCI research, for measuring the invisibility of technology. Study 1 used the after-effect phenomenon as a measure of object embodiment, whereby interaction with physical objects can produce haptic changes in perception. Study 2 investigated tool embodiment to measure interaction with physical and virtual tools, using change in attention as the measure. Finally, study 3 further examined tool embodiment with different tool states (broken or working) and different input alternatives. Over the past decade, multi-touch surfaces have become commonplace, with many researchers and practitioners describing the benefits of their natural, physical-like interactions.
    Study 1 presents an empirical investigation of the psychophysical effects of direct interaction with both physical and virtual objects. The phenomenon of Kinesthetic Figural After-Effects, a change in the perceived physical size of an object after a period of exposure to an object of a different size, was used as a measure. While this effect is robustly reproducible with physical artefacts, it does not manifest when manipulating virtual objects on a direct, multi-touch tabletop display. Study 2 leveraged the phenomenon of tool embodiment as a measure of interaction: tool embodiment occurs when a tool becomes an extension of one's body and attention shifts to the task at hand rather than the tool itself. This study tested a tool embodiment framework, incorporating philosophical and psychological concepts, to measure the extent to which a tool becomes part of the body. The framework was applied to design and conduct study 2, which uses attention to measure readiness-to-hand with both a physical and a virtual tool. The study introduced a novel task in which participants use a tool to rotate an object while simultaneously responding to visual stimuli both near their hand and near the task. The results demonstrated that participants paid more attention to the task than to either the virtual or the physical tool. Study 3 further investigated tool embodiment in ready-to-hand and unready-to-hand situations. A locus of attention index (LAI) was used to measure the level of tool embodiment in virtual environments, with three different input modalities used to control the virtual tool. The results showed that the LAI is higher with the working tool, indicating an increased level of tool embodiment, and lower with the broken tool, indicating a decreased level. Overall, the research presented in this dissertation investigated embodied interactions with both physical and virtual environments.
    The contributions include the construction of an evaluation measure of object interaction (using the after-effect measure with physical and virtual tools) and of tool interaction (using the measures of attention and LAI with physical and virtual tools). The empirical results of study 1 revealed that the after-effect measure may not be practical for evaluating embodied interactions in virtual environments. However, studies 2 and 3 provided a reliable method for measuring embodied interactions when using tools in virtual environments. The dissertation also provides a tool embodiment framework that designers can use as a guide to evaluate the invisibility of technology.
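
    The dissertation's task pairs stimuli near the tool/hand with stimuli near the task, so a locus of attention index can be computed from relative responsiveness to the two stimulus locations. The normalized-difference form below is a plausible illustration with made-up numbers, not necessarily the dissertation's exact definition:

```python
# LAI sketch: +1 means attention is entirely on the task (strong tool
# embodiment), -1 entirely on the tool, 0 evenly split.

def lai(task_hits, task_trials, tool_hits, tool_trials):
    """Normalized difference of detection rates at task vs. tool stimuli."""
    task_rate = task_hits / task_trials
    tool_rate = tool_hits / tool_trials
    return (task_rate - tool_rate) / (task_rate + tool_rate)

# Hypothetical data: a working tool pulls attention toward the task,
# a broken tool pulls it back to the tool itself.
working = lai(task_hits=45, task_trials=50, tool_hits=30, tool_trials=50)
broken = lai(task_hits=32, task_trials=50, tool_hits=38, tool_trials=50)
assert working > broken   # more embodiment -> attention shifts to the task
```

    This matches the reported pattern: a higher index with the working tool and a lower index with the broken one.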

    Role of mechanics in tactile sensing of shape

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 1995. Includes bibliographical references (leaves 199-205). By Kiran Dandekar, Ph.D.

    Relevant Stimuli and their Relationships to Primate SA-I Mechanoreceptive Responses under Static Sinusoidal Indentation

    No full text
    The key event in tactile sensing of an object in contact with the skin surface is how the stresses and strains produced by mechanical loading are transduced into neural impulses by the mechanoreceptors within the skin. To investigate this transduction mechanism, a biomechanically validated three-dimensional finite element model of primate fingertips was used to simulate neurophysiological experiments involving static indentations by sinusoidal objects. Altogether, eighteen mechanical invariants were obtained from the computed stress and strain components at nine receptor locations. The influence of the external load, the size of the indentor, and the vertical location of the mechanoreceptors on the spatial response profiles was further investigated.
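
    Mechanical invariants are scalar quantities of the stress or strain tensor that do not change under rotation of the coordinate frame, which makes them natural candidates for the "relevant stimulus" a receptor might encode. As a small illustration (the paper's eighteen invariants are not enumerated here), the three classical invariants of a symmetric 3x3 stress tensor are its trace, the sum of its principal minors, and its determinant:

```python
# Classical invariants I1, I2, I3 of a symmetric stress tensor, computed
# directly from the six independent components.

def stress_invariants(s):
    """s = [[sxx, sxy, sxz], [sxy, syy, syz], [sxz, syz, szz]] (symmetric)."""
    sxx, sxy, sxz = s[0]
    _, syy, syz = s[1]
    _, _, szz = s[2]
    i1 = sxx + syy + szz                                   # trace
    i2 = (sxx * syy + syy * szz + szz * sxx
          - sxy**2 - syz**2 - sxz**2)                      # sum of principal minors
    i3 = (sxx * (syy * szz - syz**2)
          - sxy * (sxy * szz - syz * sxz)
          + sxz * (sxy * syz - syy * sxz))                 # determinant
    return i1, i2, i3

# Sanity check with a hydrostatic state, pressure p = 2:
# invariants should equal 3p, 3p^2, p^3.
assert stress_invariants([[2, 0, 0], [0, 2, 0], [0, 0, 2]]) == (6, 12, 8)
```

    Evaluating such invariants at each modeled receptor location, as the finite element simulation does, lets their spatial profiles be compared against the recorded neural response profiles.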