
    Delivering Expressive And Personalized Fingertip Tactile Cues

    Wearable haptic devices have seen growing interest in recent years, but providing realistic tactile feedback remains a challenge far from solved. Daily interactions with physical objects elicit complex sensations at the fingertips. Furthermore, human fingertips exhibit a broad range of physical dimensions and perceptive abilities, adding increased complexity to the task of simulating haptic interactions in a compelling manner. However, as the applications of wearable haptic feedback grow, concerns of wearability and generalizability often persuade tactile device designers to simplify the complexities associated with rendering realistic haptic sensations. As such, wearable devices tend to be optimized for particular uses and average users, rendering only the most salient dimensions of tactile feedback for a given task and assuming all users interpret the feedback in a similar fashion. We propose that providing more realistic haptic feedback will require in-depth examinations of higher-dimensional tactile cues and personalization of these cues for individual users. In this thesis, we aim to provide hardware- and software-based solutions for rendering more expressive and personalized tactile cues to the fingertip. We first explore the idea of rendering six-degree-of-freedom (6-DOF) tactile fingertip feedback via a wearable device, such that any possible fingertip interaction with a flat surface can be simulated. We highlight the potential of parallel continuum manipulators (PCMs) to meet the requirements of such a device, and we refine the design of a PCM for providing fingertip tactile cues. We construct a manually actuated prototype to validate the concept and then develop a motorized version, named the Fingertip Puppeteer, or Fuppeteer for short. Various error reduction techniques are presented, and the resulting device is evaluated by analyzing system responses to step inputs, measuring forces rendered to a biomimetic finger sensor, and comparing intended sensations to perceived sensations of twenty-four participants in a human-subject study. Once the functionality of the Fuppeteer is validated, we explore how the device can be used to broaden our understanding of higher-dimensional tactile feedback. One such application is using the 6-DOF device to simulate different lower-dimensional devices. We evaluate 1-, 3-, and 6-DOF tactile feedback during shape discrimination and mass discrimination in a virtual environment, also comparing to interactions with real objects. Results from 20 naive study participants show that higher-dimensional tactile feedback may indeed allow completion of a wider range of virtual tasks, but that feedback dimensionality surprisingly does not greatly affect the exploratory techniques employed by the user. To address alternative approaches to improving tactile rendering in scenarios where low-dimensional tactile feedback is appropriate, we then explore the idea of personalizing feedback for a particular user. We present two generalizable software-based approaches to personalize an existing data-driven haptic rendering algorithm for fingertips of different sizes. We evaluate our algorithms in the rendering of pre-recorded tactile sensations onto rubber casts of six different fingertips as well as onto the real fingertips of 13 human participants, all via a 3-DOF wearable device. Results show that both personalization approaches significantly reduced force error magnitudes and improved realism ratings.
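
    The personalization idea in this abstract lends itself to a small illustration. The following Python sketch (an assumption-laden toy, not the algorithm evaluated in the thesis) rescales a force-displacement curve recorded on one reference fingertip by the ratio of fingertip widths before inverting it into displacement commands; every number and name in it is hypothetical.

        import numpy as np

        # A rough sketch of the personalization idea (not the thesis's actual
        # algorithm): rescale a force-displacement curve recorded on a reference
        # fingertip by the ratio of fingertip widths, then invert it to obtain
        # displacement commands for the current user's fingertip. All values,
        # names, and the linear-scaling assumption are illustrative.

        REFERENCE_WIDTH_MM = 16.0   # fingertip the data was recorded on (assumed)
        USER_WIDTH_MM = 13.5        # measured width of the current user (assumed)

        # Recorded interaction data: platform displacement (mm) vs. normal force (N)
        displacement_mm = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
        force_n = np.array([0.0, 0.3, 0.8, 1.6, 2.7, 4.1])

        def personalized_displacement(target_force_n):
            """Displacement command expected to produce target_force_n on the
            current fingertip, assuming force scales with fingertip width."""
            scaled_force = force_n * (USER_WIDTH_MM / REFERENCE_WIDTH_MM)
            # Invert the rescaled curve by interpolation: force -> displacement.
            return float(np.interp(target_force_n, scaled_force, displacement_mm))

        for f in (0.5, 1.0, 2.0):
            print(f"{f:.1f} N -> command {personalized_displacement(f):.2f} mm")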

    Sensory Communication

    Contains table of contents for Section 2 and reports on five research projects.

    National Institutes of Health Contract 2 R01 DC00117
    National Institutes of Health Contract 1 R01 DC02032
    National Institutes of Health Contract 2 P01 DC00361
    National Institutes of Health Contract N01 DC22402
    National Institutes of Health Grant R01-DC001001
    National Institutes of Health Grant R01-DC00270
    National Institutes of Health Grant 5 R01 DC00126
    National Institutes of Health Grant R29-DC00625
    U.S. Navy - Office of Naval Research Grant N00014-88-K-0604
    U.S. Navy - Office of Naval Research Grant N00014-91-J-1454
    U.S. Navy - Office of Naval Research Grant N00014-92-J-1814
    U.S. Navy - Naval Air Warfare Center Training Systems Division Contract N61339-94-C-0087
    U.S. Navy - Naval Air Warfare Center Training System Division Contract N61339-93-C-0055
    U.S. Navy - Office of Naval Research Grant N00014-93-1-1198
    National Aeronautics and Space Administration/Ames Research Center Grant NCC 2-77

    On neuromechanical approaches for the study of biological and robotic grasp and manipulation

    Biological and robotic grasp and manipulation are undeniably similar at the level of mechanical task performance. However, their underlying fundamental biological vs. engineering mechanisms are, by definition, dramatically different and can even be antithetical. Even our approach to each is diametrically opposite: inductive science for the study of biological systems vs. engineering synthesis for the design and construction of robotic systems. The past 20 years have seen several conceptual advances in both fields and the quest to unify them. Chief among them is the reluctant recognition that their underlying fundamental mechanisms may actually share limited common ground, while exhibiting many fundamental differences. This recognition is particularly liberating because it allows us to resolve and move beyond multiple paradoxes and contradictions that arose from the initial reasonable assumption of a large common ground. Here, we begin by introducing the perspective of neuromechanics, which emphasizes that real-world behavior emerges from the intimate interactions among the physical structure of the system, the mechanical requirements of a task, the feasible neural control actions to produce it, and the ability of the neuromuscular system to adapt through interactions with the environment. This allows us to articulate a succinct overview of a few salient conceptual paradoxes and contradictions regarding under-determined vs. over-determined mechanics, under- vs. over-actuated control, prescribed vs. emergent function, learning vs. implementation vs. adaptation, prescriptive vs. descriptive synergies, and optimal vs. habitual performance. We conclude by presenting open questions and suggesting directions for future research. We hope this frank and open-minded assessment of the state-of-the-art will encourage and guide these communities to continue to interact and make progress in these important areas at the interface of neuromechanics, neuroscience, rehabilitation, and robotics. The electronic version of this article is the complete one and can be found online at: https://jneuroengrehab.biomedcentral.com/articles/10.1186/s12984-017-0305-
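
    The under- vs. over-determined distinction above can be made concrete with a few lines of NumPy. In this toy example (moment arms invented for illustration, not taken from the article), a planar two-joint finger driven by four tendons has a two-dimensional null space of tendon tensions: infinitely many tension patterns, including co-contraction, produce exactly the same joint torques.

        import numpy as np

        # Toy example: a planar 2-joint finger driven by 4 tendons. R maps
        # tendon tensions f (>= 0 in reality) to joint torques: tau = R @ f.
        # The moment-arm values below are invented for illustration.
        R = np.array([[5.0, -4.0,  3.0, -2.5],   # moment arms at joint 1 (mm)
                      [2.0,  3.5, -3.0, -2.0]])  # moment arms at joint 2 (mm)

        tau_desired = np.array([4.0, 1.0])       # desired joint torques (N*mm)

        # Minimum-norm particular solution (ignores the tension-positivity
        # constraint, which is part of what makes the real problem harder).
        f_particular = np.linalg.pinv(R) @ tau_desired

        # The null space of R: tension patterns that change nothing at the
        # joints. Any f_particular + null_basis @ w gives the same tau, so the
        # inverse problem has no unique answer -- the under-determined side of
        # the paradox the authors describe.
        _, s, vt = np.linalg.svd(R)
        null_basis = vt[len(s):].T               # 4x2 basis of the null space

        print("particular tensions:", np.round(f_particular, 3))
        print("null-space dimension:", null_basis.shape[1])
        print("torque check:", np.round(R @ f_particular, 3))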

    Haptics: Science, Technology, Applications

    This open access book constitutes the proceedings of the 12th International Conference on Human Haptic Sensing and Touch Enabled Computer Applications, EuroHaptics 2020, held in Leiden, The Netherlands, in September 2020. The 60 papers presented in this volume were carefully reviewed and selected from 111 submissions. They were organized in topical sections on haptic science, haptic technology, and haptic applications. This year's focus is on accessibility.

    Development and evaluation of mixed reality-enhanced robotic systems for intuitive tele-manipulation and telemanufacturing tasks in hazardous conditions

    In recent years, with the rapid development of space exploration, deep-sea discovery, nuclear rehabilitation and management, and robot-assisted medical devices, there is an urgent need for humans to interactively control robotic systems to perform increasingly precise remote operations. The value of medical telerobotic applications during the recent coronavirus pandemic has also been demonstrated and will grow in the future. This thesis investigates novel approaches to the development and evaluation of a mixed reality-enhanced telerobotic platform for intuitive remote teleoperation applications in dangerous and difficult working conditions, such as contaminated sites and undersea or extreme welding scenarios. This research aims to remove human workers from harmful working environments by equipping complex robotic systems with human intelligence and command/control via intuitive and natural human-robot interaction, including the implementation of MR techniques to improve the user's situational awareness, depth perception, and spatial cognition, which are fundamental to effective and efficient teleoperation. The proposed robotic mobile manipulation platform consists of a UR5 industrial manipulator, a 3D-printed parallel gripper, and a customized mobile base, and is envisaged to be controlled by non-skilled operators who are physically separated from the robot working space through an MR-based vision/motion mapping approach. The platform development process involved CAD/CAE/CAM and rapid prototyping techniques, such as 3D printing and laser cutting. Robot Operating System (ROS) and Unity 3D are employed in the development process to enable the embedded system to intuitively control the robotic system and to ensure immersive and natural human-robot interactive teleoperation. This research presents an integrated motion/vision retargeting scheme based on a mixed reality subspace approach for intuitive and immersive telemanipulation. An imitation-based, velocity-centric motion mapping is implemented via the MR subspace to accurately track operator hand movements for robot motion control and enables spatial velocity-based control of the robot tool center point (TCP). The proposed system allows precise manipulation of end-effector position and orientation while readily adjusting the corresponding maneuvering velocity. A mixed reality-based multi-view merging framework for immersive and intuitive telemanipulation of a complex mobile manipulator with integrated 3D/2D vision is presented. The proposed 3D immersive telerobotic schemes provide users with depth perception through the merging of multiple 3D/2D views of the remote environment via the MR subspace. The mobile manipulator platform can be effectively controlled by non-skilled operators who are physically separated from the robot working space through a velocity-based imitative motion mapping approach. Finally, this thesis presents an integrated mixed reality and haptic feedback scheme for intuitive and immersive teleoperation of robotic welding systems. By incorporating MR technology, the user is fully immersed in a virtual operating space augmented by real-time visual feedback from the robot working space. The proposed mixed reality virtual fixture integration approach implements hybrid haptic constraints to guide the operator's hand movements along a conical guidance region, effectively aligning the welding torch for welding and constraining the welding operation to a collision-free area.
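
    As a concrete sketch of the velocity-centric motion mapping and the conical virtual fixture described above (illustrative gains, limits, and geometry only; this is not the thesis implementation), the operator's hand velocity is scaled and clamped, and any commanded direction that leaves the guidance cone is projected back onto its surface:

        import numpy as np

        # Illustrative parameters; a real system would tune these and run at
        # the robot controller's rate.
        SCALE = 0.5                              # hand-to-TCP velocity scaling
        V_MAX = 0.10                             # TCP speed limit (m/s)
        CONE_AXIS = np.array([0.0, 0.0, -1.0])   # desired torch approach axis
        CONE_HALF_ANGLE = np.deg2rad(15.0)       # guidance cone half-angle

        def map_hand_velocity(v_hand):
            """Imitation-based velocity mapping with a speed clamp."""
            v = SCALE * np.asarray(v_hand, dtype=float)
            speed = np.linalg.norm(v)
            return v if speed <= V_MAX else v * (V_MAX / speed)

        def apply_conical_fixture(v_cmd):
            """Keep the commanded velocity direction inside the guidance cone."""
            speed = np.linalg.norm(v_cmd)
            if speed < 1e-9:
                return v_cmd
            direction = v_cmd / speed
            cos_angle = float(direction @ CONE_AXIS)
            if cos_angle >= np.cos(CONE_HALF_ANGLE):
                return v_cmd                     # already inside the cone
            # Rotate the direction toward the axis until it lies on the rim.
            ortho = direction - cos_angle * CONE_AXIS
            norm = np.linalg.norm(ortho)
            if norm < 1e-9:
                return speed * CONE_AXIS         # pointing straight away
            rim = (np.cos(CONE_HALF_ANGLE) * CONE_AXIS
                   + np.sin(CONE_HALF_ANGLE) * ortho / norm)
            return speed * rim

        v_hand = np.array([0.12, 0.05, -0.20])   # operator hand velocity (m/s)
        print(apply_conical_fixture(map_hand_velocity(v_hand)))
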
Overall, this thesis presents a complete telerobotic application that uses mixed reality and immersive elements to effectively translate the operator into the robot's space in an intuitive and natural manner. The results are thus a step forward in cost-effective and computationally efficient human-robot interaction research and technologies. The system presented is readily extensible to a range of potential applications beyond the robotic tele-welding and tele-manipulation tasks used to demonstrate, optimise, and prove the concepts.

    Haptics Rendering and Applications

    There has been significant progress in haptic technologies, but the incorporation of haptics into virtual environments is still in its infancy. A wide range of human activities, including communication, education, art, entertainment, commerce, and science, would change forever if we learned how to capture, manipulate, and reproduce haptic sensory stimuli that are nearly indistinguishable from reality. For the field to move forward, many commercial and technological barriers need to be overcome. By rendering how objects feel through haptic technology, we communicate information that might reflect a desire to speak a physically based language that has never been explored before. Due to constant improvement in haptics technology and increasing levels of research into and development of haptics-related algorithms, protocols, and devices, there is a belief that haptics technology has a promising future.
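
    One small example of what "rendering how objects feel" involves in practice: the textbook penalty-based approach reads the device position every servo tick and commands a restoring force proportional to penetration into a virtual surface. The sketch below is a generic illustration (assumed stiffness, one axis, no real device API), not code from the book:

        # Penalty-based rendering of a virtual wall at z = 0 (a generic
        # textbook sketch with an assumed stiffness; real devices run this
        # loop at roughly 1 kHz against a hardware API).
        STIFFNESS = 800.0   # N/m (assumed virtual-wall stiffness)

        def wall_force(z_m):
            """Restoring force (N) for a probe at height z_m (m)."""
            penetration = -z_m   # below z = 0 means inside the wall
            return STIFFNESS * penetration if penetration > 0.0 else 0.0

        # One simulated servo tick at several probe heights:
        for z in (0.004, 0.001, -0.002):
            print(f"z = {z:+.3f} m -> force {wall_force(z):.2f} N")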

    Grasping for the Task: Human Principles for Robot Hands

    Significant advances in the design and construction of anthropomorphic robot hands endow them with prehensile abilities approaching those of humans. However, using these powerful hands with the same level of expertise that humans display remains a major challenge for robots. Traditional approaches use fingertip (precision) or enveloping (power) methods to generate the best force-closure grasps. However, this ignores the variety of prehensile postures available to the hand and also the larger context of arm action. This thesis explores a paradigm for grasp formation based on generating oppositional pressure within the hand, which has been proposed as a functional basis for grasping in humans (MacKenzie and Iberall, 1994). A set of opposition primitives encapsulates the hand's ability to generate oppositional forces. The oppositional intention encoded in a primitive serves as a guide to match the hand to the object, quantify its functional ability, and relate this to the arm. In this thesis we leverage the properties of opposition primitives both to interpret grasps formed by humans and to construct grasps for a robot considering the larger context of arm action. In the first part of the thesis we examine the hypothesis that hand representation schemes based on opposition are correlated with hand function. We propose hand parameters describing oppositional intention and compare these with commonly used methods such as joint angles, joint synergies, and shape features. We expect that opposition-based parameterizations, which take an interaction-based perspective of a grasp, are able to discriminate between grasps that are similar in shape but different in functional intent. We test this hypothesis using qualitative assessments of the precision and power capabilities found in existing grasp taxonomies. The next part of the thesis presents a general method to recognize oppositional intention manifested in human grasp demonstrations. A data glove instrumented with tactile sensors provides the raw information regarding hand configuration and interaction force. For a grasp combining several cooperating oppositional intentions, hand surfaces can be simultaneously involved in multiple oppositional roles. We characterize the low-level interactions between different surfaces of the hand based on captured interaction forces and reconstructed hand surface geometry. This is subsequently used to separate out and prioritize multiple, possibly overlapping oppositional intentions present in the demonstrated grasp. We evaluate our method on several human subjects across a wide range of hand functions. The last part of the thesis applies the properties encoded in opposition primitives to optimize task performance of the arm, for tasks where the arm assumes the dominant role. For these tasks, choosing the strongest power grasp available (in a force-closure sense) may constrain the arm to a sub-optimal configuration. Weaker grasp components impose fewer constraints on the hand and can therefore explore a wider region of the object-relative pose space. We take advantage of this to find good arm configurations from a task perspective. The final hand-arm configuration is obtained by trading off overall robustness of the grasp against the ability of the arm to perform the task. We validate our approach, using the tasks of cutting, hammering, screw-driving, and opening a bottle cap, for both human and robotic hand-arm systems.
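
    To make the notion of an opposition primitive more tangible, here is a simplified sketch (assumed conventions and invented data, not the method developed in the thesis): a two-patch primitive is scored by projecting each patch's measured contact force onto the axis joining the patches, with the weaker side limiting the squeeze the primitive can generate.

        from dataclasses import dataclass

        import numpy as np

        # Simplified illustration of scoring one opposition primitive from
        # glove-style data; positions and forces below are invented.

        @dataclass
        class Patch:
            center: np.ndarray   # patch position on the hand surface (m)
            force: np.ndarray    # contact force measured at the patch (N)

        def opposition_strength(a, b):
            """Force (N) the two patches can jointly exert along their axis."""
            axis = b.center - a.center
            axis = axis / np.linalg.norm(axis)
            push_a = float(a.force @ axis)       # patch a pushing toward b
            push_b = float(b.force @ -axis)      # patch b pushing toward a
            # The weaker side limits the oppositional squeeze.
            return max(0.0, min(push_a, push_b))

        thumb = Patch(np.array([0.00, 0.0, 0.0]), np.array([2.0, 0.1, 0.0]))
        index = Patch(np.array([0.05, 0.0, 0.0]), np.array([-1.6, 0.0, 0.2]))
        print(f"pad opposition strength: {opposition_strength(thumb, index):.2f} N")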

    Application of Augmented Reality in Manual Assembly Design and Planning

    Ph.D. (Doctor of Philosophy)