14 research outputs found

    Doctor of Philosophy

    Virtual environments provide a consistent and relatively inexpensive method of training individuals. They often include haptic feedback in the form of forces applied to a manipulandum or thimble to provide a more immersive and educational experience. However, the limited haptic feedback provided in these systems tends to be restrictive and frustrating to use. Providing tactile feedback in addition to this kinesthetic feedback can enhance the user's ability to manipulate and interact with virtual objects while providing a greater level of immersion. This dissertation advances the state of the art by providing a better understanding of tactile feedback and advancing combined tactile-kinesthetic systems. The tactile feedback described within this dissertation is provided by a finger-mounted device called the contact location display (CLD). Rather than displaying the entire contact surface, the device feeds back information only about the center of contact between the user's finger and a virtual surface. In prior work, the CLD used specialized two-dimensional environments to provide smooth tactile feedback. Using polygonal environments would greatly enhance the device's usefulness; however, the surface discontinuities created by the facets on these models are rendered through the CLD even when traditional force-shading algorithms are applied. To address this issue, a haptic shading algorithm was developed to provide smooth tactile and kinesthetic interaction with general polygonal models. Two experiments were used to evaluate the shading algorithm. To better understand the design requirements of tactile devices, three separate experiments were run to evaluate the perception thresholds for cue localization, backlash, and system delay. These experiments establish quantitative design criteria for tactile devices and can serve as the maximum (i.e., most demanding) device specifications for tactile-kinesthetic haptic systems in which the user experiences tactile feedback as a function of his or her limb motions. Lastly, a revision of the CLD was constructed and evaluated. By taking the newly established design criteria into account, the CLD became smaller and lighter while providing a full two-degree-of-freedom workspace that covers the bottom hemisphere of the finger. Two simple manipulation experiments were used to evaluate the new CLD device.
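
    As a rough illustration of the shading problem described above, the sketch below shows the classic force-shading idea of blending per-vertex normals across a facet with barycentric weights and rendering the penalty force along the smoothed normal; the dissertation's actual algorithm, which also smooths the tactile contact point fed to the CLD, is not reproduced here, and all names and values are illustrative.

```python
import numpy as np

def barycentric_weights(p, a, b, c):
    """Barycentric coordinates of point p with respect to triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return np.array([1.0 - v - w, v, w])

def shaded_contact_force(p, tri_verts, vert_normals, penetration, k=300.0):
    """Penalty force along a smoothly interpolated (force-shaded) surface normal.

    p            : contact point on the facet (3-vector)
    tri_verts    : (3, 3) triangle vertex positions
    vert_normals : (3, 3) per-vertex normals (averaged over adjacent facets)
    penetration  : penetration depth of the haptic proxy in metres
    k            : virtual surface stiffness in N/m (illustrative value)
    """
    w = barycentric_weights(p, *tri_verts)
    n = (w[:, None] * vert_normals).sum(axis=0)   # blend the vertex normals
    n /= np.linalg.norm(n)                        # re-normalize the blended normal
    return k * penetration * n                    # force pushes the finger outward

# Example: contact at the centroid of a single facet.
tri = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=float)
normals = np.array([[0, 0, 1], [0.3, 0, 0.95], [0, 0.3, 0.95]], dtype=float)
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
print(shaded_contact_force(tri.mean(axis=0), tri, normals, penetration=0.002))
```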

    Analysis of postures for handwriting on touch screens without using tools

    The act of handwriting affected the evolutionary development of humans and still impacts the motor cognition of individuals. However, the ubiquitous use of digital technologies has drastically decreased the number of times we really need to pick up a pen and write on paper. Nonetheless, the positive cognitive impact of handwriting is widely recognized, and a possible way to merge the benefits of handwriting and digital writing is to use suitable tools to write on touchscreens or graphics tablets. In this manuscript, we focus on the possibility of using the hand itself as a writing tool. A novel hand posture named FingerPen is introduced, which can be seen as a grasp performed by the hand on the index finger. A comparison with the most common posture that people tend to assume (i.e., using the index finger alone) is carried out by means of a biomechanical model. A user study shows that the FingerPen is appreciated by users and leads to accurate writing strokes.
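
    The abstract does not detail the biomechanical model, but comparisons of writing postures typically reduce to mapping the pen-tip contact force to finger joint torques via the Jacobian transpose (tau = J^T f). The sketch below shows that mapping for a planar serial finger; the joint angles, link lengths, and posture labels are hypothetical.

```python
import numpy as np

def planar_finger_jacobian(q, link_lengths):
    """Fingertip Jacobian of a planar serial finger with joint angles q (radians)."""
    q = np.asarray(q, dtype=float)
    l = np.asarray(link_lengths, dtype=float)
    cum = np.cumsum(q)                       # absolute orientation of each link
    J = np.zeros((2, len(q)))
    for i in range(len(q)):
        # Joint i moves every link from i onward.
        J[0, i] = -(l[i:] * np.sin(cum[i:])).sum()
        J[1, i] = (l[i:] * np.cos(cum[i:])).sum()
    return J

def joint_torques(q, link_lengths, fingertip_force):
    """Static joint torques balancing a fingertip contact force: tau = J^T f."""
    return planar_finger_jacobian(q, link_lengths).T @ np.asarray(fingertip_force)

# Hypothetical comparison: two postures under the same 1 N pen-tip load.
load = np.array([0.0, -1.0])                              # N, pressing onto the screen
links = [0.045, 0.025, 0.020]                             # m, rough phalanx lengths
print(joint_torques([0.4, 0.6, 0.3], links, load))        # FingerPen-like posture
print(joint_torques([0.9, 0.8, 0.5], links, load))        # index-finger-only posture
```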

    A Fabric-based Approach for Softness Rendering

    In this chapter we describe a softness display based on the contact area spread rate (CASR) paradigm. This device uses a stretchable fabric as a substrate that can be touched by users, while the contact area is directly measured via an optical system. By varying the stretching state of the fabric, different stiffness values can be conveyed to users. We describe a first technological implementation of the display and compare its performance in rendering various levels of stiffness with that of a pneumatic CASR-based device. Psychophysical experiments are reported and discussed. Afterwards, we present a new technological implementation of the fabric-based display, with reduced dimensions and faster actuation, which enables rapid changes in the fabric's stretching state. These rapid changes are necessary to properly track typical force/area curves of real materials. The system reproduces force/area curves obtained from real objects with a high degree of reliability and elicits clearly discriminable levels of softness.
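
    A minimal sketch of the tracking idea described above, assuming the controller's job is to drive the measured contact area toward the area a reference force/area curve prescribes at the currently measured force; the curve samples, the proportional law, and its gain are assumptions, not the chapter's actual control scheme.

```python
import numpy as np

# Reference force/area curve of a real material (illustrative sample points):
# contact areas (mm^2) measured at increasing indentation forces (N).
ref_force = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
ref_area = np.array([0.0, 30.0, 55.0, 90.0, 130.0])

def target_area(force):
    """Contact area the reference material would exhibit at the given force."""
    return np.interp(force, ref_force, ref_area)

def stretch_command(measured_force, measured_area, gain=0.01):
    """Proportional update of the fabric-stretching actuator.

    A positive error (area too small, i.e. the fabric feels too stiff) loosens
    the fabric; a negative error tightens it.  Gain and sign convention are
    assumptions.
    """
    error = target_area(measured_force) - measured_area
    return gain * error

# One control step with hypothetical sensor readings.
print(stretch_command(measured_force=1.5, measured_area=60.0))
```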

    Memory influences haptic perception of softness

    The memory of an object's property (e.g. its typical colour) can affect its visual perception. We investigated whether memory of the softness of everyday objects influences their haptic perception. We produced bipartite silicone rubber stimuli: one half of each stimulus was covered with a layer of an object (sponge, wood, tennis ball, foam ball); the other half was uncovered silicone. Participants were not aware of the partition. They first used their bare finger to stroke laterally over the covering layer to recognize the well-known object and then indented the other half of the stimulus with a probe to compare its softness to that of an uncovered silicone stimulus. Across four experiments with different methods we showed that silicone stimuli covered with a layer of rather hard objects (tennis ball and wood) were perceived as harder than the same silicone stimuli covered with a layer of rather soft objects (sponge and foam ball), indicating that haptic perception of softness is affected by memory.
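
    The analysis is not given in the abstract, but softness comparisons of this kind are commonly summarized by fitting a psychometric function and reading off the point of subjective equality (PSE); a shift of the PSE between the hard-layer and soft-layer conditions would quantify the memory bias. The sketch below fits a cumulative Gaussian to hypothetical response data.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(stiffness, pse, jnd):
    """Probability of judging the comparison stimulus as harder than the standard."""
    return norm.cdf(stiffness, loc=pse, scale=jnd)

# Hypothetical data: comparison stiffness (arbitrary units) vs. proportion of
# "comparison felt harder" responses, collected for one covering-layer condition.
stiffness = np.array([20.0, 30.0, 40.0, 50.0, 60.0, 70.0])
p_harder = np.array([0.05, 0.15, 0.40, 0.65, 0.90, 0.97])

(pse, jnd), _ = curve_fit(psychometric, stiffness, p_harder, p0=[45.0, 10.0])
print(f"PSE = {pse:.1f}, JND = {jnd:.1f}")  # a PSE shift between conditions = memory bias
```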

    Visuohaptic Simulation of a Borescope for Aircraft Engine Inspection

    Consisting of a long fiber-optic probe containing a small CCD camera controlled by a hand-held articulation interface, a video borescope is used for remote visual inspection of hard-to-reach components in an aircraft. The knowledge and psychomotor skills, specifically the hand-eye coordination, required for effective inspection are hard to acquire through limited exposure to the borescope in aviation maintenance schools. Inexperienced aircraft maintenance technicians gain proficiency through repeated hands-on learning in the workplace along a steep learning curve while transitioning from the classroom to the workforce. Using an iterative process combined with focused user evaluations, this dissertation details the design, implementation, and evaluation of a novel visuohaptic simulator for training novice aircraft maintenance technicians in the task of engine inspection using a borescope. First, we describe the development of the visual components of the simulator, along with the acquisition and modeling of a representative model of a PT-6 aircraft engine. Subjective assessments with both expert and novice aircraft maintenance engineers evaluated the visual realism and the control interfaces of the simulator. In addition to visual feedback, probe contact feedback is provided through a specially designed custom haptic interface that simulates tip contact forces as the virtual probe intersects with the 3D model surfaces of the engine. Compared to other haptic interfaces, the custom design is unique in that it is inexpensive and uses a real borescope probe to simulate camera insertion and withdrawal. User evaluation of this simulator with probe tip feedback suggested a trend of improved performance with haptic feedback. Next, we describe the development of a physically based camera model for improved behavioral realism of the simulator. Unlike a point-based camera, the enhanced camera model simulates the interaction of the borescope probe, including multiple points of contact along the length of the probe. We present visual comparisons of a real probe's motion with the simulated probe model and develop a simple algorithm for computing the resultant contact forces. User evaluation comparing our custom haptic device with two commonly available haptic devices, the Phantom Omni and the Novint Falcon, suggests that the improved camera model as well as probe contact feedback with the 3D engine model plays a significant role in the overall engine inspection process. Finally, we present results from a skill transfer study comparing classroom-only instruction with both simulator and hands-on training. Students trained using the simulator and the video borescope completed engine inspection with the real video borescope significantly faster than students who received classroom-only training. The speed improvements can be attributed to reduced borescope probe maneuvering time within the engine and improved psychomotor skills due to training. Given the usual constraints of limited time and resources, simulator training may provide beneficial skills needed by novice aircraft maintenance technicians to augment classroom instruction, resulting in a faster transition into the aviation maintenance workforce.
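
    A minimal sketch of the resultant-force idea mentioned above: sample points along the virtual probe, apply a penalty force at every point that penetrates the engine model, and sum the contributions into the single force sent to the haptic device. The signed-distance query, the stiffness value, and the toy geometry are assumptions, not the dissertation's implementation.

```python
import numpy as np

def resultant_probe_force(probe_points, signed_distance, surface_normal, k=200.0):
    """Sum penalty forces over every probe point that penetrates the engine model.

    probe_points    : (N, 3) points sampled along the virtual borescope probe
    signed_distance : callable, negative inside the engine surface (hypothetical query)
    surface_normal  : callable, outward unit normal at the closest surface point
    k               : penalty stiffness in N/m (illustrative value)
    """
    total = np.zeros(3)
    for p in probe_points:
        d = signed_distance(p)
        if d < 0.0:                            # this point has penetrated the model
            total += k * (-d) * surface_normal(p)
    return total

# Toy example: the "engine wall" is the plane z = 0 and the probe dips below it.
sdf = lambda p: p[2]                           # hypothetical signed distance to the plane
normal = lambda p: np.array([0.0, 0.0, 1.0])   # outward normal of the plane
probe = np.array([[0.0, 0.0, 0.010], [0.0, 0.0, -0.002], [0.0, 0.0, -0.004]])
print(resultant_probe_force(probe, sdf, normal))
```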

    Design and Construction of an Immersive Tactile Display for Multi-Finger Stimulation (Progettazione e realizzazione di un display tattile immersivo per stimolazione multi-dito)

    The objective of this study is the design and construction of a haptic device capable of providing multi-contact interaction with objects in a virtual environment.

    Increasing Transparency and Presence of Teleoperation Systems Through Human-Centered Design

    Teleoperation allows a human to control a robot to perform dexterous tasks in remote, dangerous, or unreachable environments. A perfect teleoperation system would enable the operator to complete such tasks at least as easily as if he or she were to complete them by hand. This ideal teleoperator must be perceptually transparent, meaning that the interface appears to be nearly nonexistent to the operator, allowing him or her to focus solely on the task environment rather than on the teleoperation system itself. Furthermore, the ideal teleoperation system must give the operator a high sense of presence, meaning that the operator feels as though he or she is physically immersed in the remote task environment. This dissertation seeks to improve the transparency and presence of robot-arm-based teleoperation systems through a human-centered design approach, specifically by leveraging scientific knowledge about the human motor and sensory systems. First, this dissertation aims to improve the forward (efferent) teleoperation control channel, which carries information from the human operator to the robot. The traditional method of calculating the desired position of the robot's hand simply scales the measured position of the human's hand. This commonly used motion mapping erroneously assumes that the human's produced motion identically matches his or her intended movement. Given that humans make systematic directional errors when moving the hand under conditions similar to those imposed by teleoperation, I propose a new paradigm of data-driven human-robot motion mappings for teleoperation. The mappings are determined by having the human operator mimic the target robot as it autonomously moves its arm through a variety of trajectories in the horizontal plane. Three data-driven motion mapping models are described and evaluated for their ability to correct for the systematic motion errors made in the mimicking task. Individually-fit and population-fit versions of the most promising motion mapping model are then tested in a teleoperation system that allows the operator to control a virtual robot. Results of a user study involving nine subjects indicate that the newly developed motion mapping model significantly increases the transparency of the teleoperation system. Second, this dissertation seeks to improve the feedback (afferent) teleoperation control channel, which carries information from the robot to the human operator. We aim to improve the teleoperation system by providing the operator with multiple novel modalities of haptic (touch-based) feedback. We describe the design and control of a wearable haptic device that provides kinesthetic grip-force feedback through a geared DC motor, as well as tactile fingertip-contact-and-pressure and high-frequency acceleration feedback through a pair of voice-coil actuators mounted at the tips of the thumb and index finger. Each included haptic feedback modality is known to be fundamental to direct task completion and can be implemented without great cost or complexity. A user study involving thirty subjects investigated how these three modalities of haptic feedback affect an operator's ability to control a real remote robot in a teleoperated pick-and-place task. This study's results strongly support the utility of grip-force and high-frequency acceleration feedback in teleoperation systems and show more mixed effects of fingertip-contact-and-pressure feedback.
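
    To make the contrast concrete, the sketch below implements the traditional scaled mapping alongside a simple data-driven alternative: an affine map fit by least squares to positions recorded while the operator mimics the robot. The dissertation's three mapping models are not specified in the abstract, so this affine fit and the sample data are assumptions.

```python
import numpy as np

def scaled_mapping(hand_xy, scale=1.0):
    """Traditional mapping: the robot hand target is a scaled copy of the hand position."""
    return scale * np.asarray(hand_xy)

def fit_affine_mapping(hand_xy, robot_xy):
    """Least-squares affine map from mimicked hand positions to intended robot positions.

    hand_xy, robot_xy : (N, 2) planar positions recorded while the operator mimics
    the robot's autonomous trajectories.  A single affine correction is a
    simplification of the dissertation's models, which the abstract does not specify.
    """
    X = np.hstack([hand_xy, np.ones((len(hand_xy), 1))])    # homogeneous coordinates
    A, *_ = np.linalg.lstsq(X, robot_xy, rcond=None)        # (3, 2) affine parameters
    return lambda xy: np.hstack([xy, np.ones((len(xy), 1))]) @ A

# Hypothetical mimicking data with a small systematic directional error.
robot = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1]])
hand = robot @ np.array([[0.98, 0.05], [-0.05, 0.97]]) + 0.002
mapping = fit_affine_mapping(hand, robot)
print(scaled_mapping(hand))    # leaves the systematic error in place
print(mapping(hand))           # corrected targets, close to the intended robot positions
```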

    Haptics: Science, Technology, Applications

    This open access book constitutes the proceedings of the 12th International Conference on Human Haptic Sensing and Touch Enabled Computer Applications, EuroHaptics 2020, held in Leiden, The Netherlands, in September 2020. The 60 papers presented in this volume were carefully reviewed and selected from 111 submissions. They were organized in topical sections on haptic science, haptic technology, and haptic applications. This year's focus is on accessibility.

    Grasping for the Task: Human Principles for Robot Hands

    The significant advances made in the design and construction of anthropomorphic robot hands endow them with prehensile abilities approaching those of humans. However, using these powerful hands with the same level of expertise that humans display is a big challenge for robots. Traditional approaches use fingertip (precision) or enveloping (power) methods to generate the best force-closure grasps. However, this ignores the variety of prehensile postures available to the hand and also the larger context of arm action. This thesis explores a paradigm for grasp formation based on generating oppositional pressure within the hand, which has been proposed as a functional basis for grasping in humans (MacKenzie and Iberall, 1994). A set of opposition primitives encapsulates the hand's ability to generate oppositional forces. The oppositional intention encoded in a primitive serves as a guide to match the hand to the object, quantify its functional ability, and relate this to the arm. In this thesis we leverage the properties of opposition primitives both to interpret grasps formed by humans and to construct grasps for a robot considering the larger context of arm action. In the first part of the thesis we examine the hypothesis that hand representation schemes based on opposition are correlated with hand function. We propose hand parameters describing oppositional intention and compare these with commonly used methods such as joint angles, joint synergies, and shape features. We expect that opposition-based parameterizations, which take an interaction-based perspective of a grasp, are able to discriminate between grasps that are similar in shape but different in functional intent. We test this hypothesis using qualitative assessment of precision and power capabilities found in existing grasp taxonomies. The next part of the thesis presents a general method to recognize oppositional intention manifested in human grasp demonstrations. A data glove instrumented with tactile sensors is used to provide the raw information regarding hand configuration and interaction force. For a grasp combining several cooperating oppositional intentions, hand surfaces can be simultaneously involved in multiple oppositional roles. We characterize the low-level interactions between different surfaces of the hand based on the captured interaction force and the reconstructed hand surface geometry. This is subsequently used to separate out and prioritize multiple, possibly overlapping, oppositional intentions present in the demonstrated grasp. We evaluate our method on several human subjects across a wide range of hand functions. The last part of the thesis applies the properties encoded in opposition primitives to optimize task performance of the arm, for tasks where the arm assumes the dominant role. For these tasks, choosing the strongest power grasp available (in a force-closure sense) may constrain the arm to a sub-optimal configuration. Weaker grasp components impose fewer constraints on the hand and can therefore explore a wider region of the object-relative pose space. We take advantage of this to find arm configurations that are good from a task perspective. The final hand-arm configuration is obtained by trading off overall robustness in the grasp against the ability of the arm to perform the task. We validate our approach, using the tasks of cutting, hammering, screw-driving, and opening a bottle cap, for both human and robotic hand-arm systems.
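
    A minimal sketch of the final trade-off described above, assuming each candidate hand-arm configuration has been given a normalized grasp-quality score and an arm task score; the weighted-sum selection rule and the example numbers are illustrative, not the thesis's actual formulation.

```python
import numpy as np

def select_grasp(candidates, weight=0.5):
    """Pick the hand-arm configuration trading off grasp robustness and arm suitability.

    candidates : list of dicts with 'grasp_quality' (e.g. a force-closure measure)
                 and 'arm_task_score' (e.g. manipulability along the task direction),
                 both normalized to [0, 1].  The weighted sum is an illustration,
                 not the thesis's actual formulation.
    """
    scores = [weight * c["grasp_quality"] + (1.0 - weight) * c["arm_task_score"]
              for c in candidates]
    return candidates[int(np.argmax(scores))]

# Hypothetical candidates for a hammering task: the strongest power grasp loses
# because it forces the arm into a poor striking configuration.
candidates = [
    {"name": "full power grasp", "grasp_quality": 0.9, "arm_task_score": 0.3},
    {"name": "relaxed pad opposition", "grasp_quality": 0.6, "arm_task_score": 0.8},
]
print(select_grasp(candidates, weight=0.4)["name"])
```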