
    Bimanual Motor Strategies and Handedness Role During Human-Exoskeleton Haptic Interaction

    Bimanual object manipulation involves multiple visuo-haptic sensory feedback signals arising from interaction with the environment, which are managed by the central nervous system and translated into motor commands. The kinematic strategies that occur during bimanually coupled tasks remain a matter of scientific debate despite modern advances in haptics and robotics. Current technologies have the potential to provide realistic scenarios involving the entire upper limbs during multi-joint movements, but they are not yet exploited to their full potential. The present study explores how the hands dynamically interact when manipulating a shared object, using two impedance-controlled exoskeletons programmed to simulate bimanually coupled manipulation of virtual objects. We enrolled twenty-six participants (two groups: right-handed and left-handed) who were asked to use both hands to grab simulated objects across the robot workspace and place them in specific locations. The virtual objects were rendered with different dynamic properties and textures, influencing the manipulation strategies needed to complete the tasks. Results revealed that the roles of the hands are related to the movement direction, the haptic features, and handedness. The outcomes suggest that haptic feedback affects bimanual strategies depending on the movement direction. However, left-handers showed better control of the force applied between the two hands, probably due to environmental pressure toward right-handed manipulation.
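As a rough illustration of the impedance-controlled coupling described above, the sketch below renders a shared virtual object as a point mass coupled to each hand by a spring-damper. The stiffness, damping and mass values are illustrative assumptions, not the study's parameters:

```python
import numpy as np

def coupling_force(x_hand, v_hand, x_obj, v_obj, k=500.0, d=20.0):
    """Spring-damper force (N) the exoskeleton renders on one hand
    when coupled to a shared virtual object. Stiffness k (N/m) and
    damping d (Ns/m) are illustrative values."""
    return -k * (x_hand - x_obj) - d * (v_hand - v_obj)

def step_object(x_obj, v_obj, f_left, f_right, m=1.0, dt=0.001):
    """Advance the virtual object one timestep under the reactions
    of the two hand forces (semi-implicit Euler integration)."""
    a = (-f_left - f_right) / m  # reaction of the forces rendered on the hands
    v_obj += a * dt
    x_obj += v_obj * dt
    return x_obj, v_obj
```

With both hands gripping symmetrically, the rendered forces are equal and opposite about the object; asymmetric grips, as the study found for left- and right-handers, show up directly as an imbalance between the two `coupling_force` values.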

    A Sensorized Instrument for Minimally Invasive Surgery for the Measurement of Forces during Training and Surgery: Development and Applications

    The reduced access conditions present in Minimally Invasive Surgery (MIS) affect the feel of interaction forces between the instruments and the tissue being treated. This loss of haptic information compromises the safety of the procedure and must be overcome through training. Haptics in MIS is the subject of extensive research, focused on establishing force feedback mechanisms and developing appropriate sensors. This latter task is complicated by the need to place the sensors as close as possible to the instrument tip, as the measurement of forces outside of the patient's body does not represent the true tool-tissue interaction. Many force sensors have been proposed, but none are yet available for surgery. The objectives of this thesis were to develop a set of instruments capable of measuring tool-tissue force information in MIS, and to evaluate the usefulness of force information during surgery and for training and skills assessment. To address these objectives, a set of laparoscopic instruments was developed that can measure instrument position and tool-tissue interaction forces in multiple degrees of freedom. Different design iterations and the work performed towards the development of a sterilizable instrument are presented. Several experiments were performed using these instruments to establish the usefulness of force information in surgery and training. The results showed that the combination of force and position information can be used in the development of realistic tissue models or haptic interfaces specifically designed for MIS. This information is also valuable for creating tactile maps to assist in the identification of areas of different stiffness. The real-time measurement of forces allows visual force feedback to be presented to the surgeon. When applied to training scenarios, the results show that experience level correlates better with force-based metrics than with those currently used in training simulators. The proposed metrics can be automatically computed, are completely objective, and measure important aspects of performance. The primary contribution of this thesis is the design and development of highly versatile instruments capable of measuring force and position during surgery. A second contribution establishes the importance and usefulness of force data during skills assessment, training and surgery.
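Force-based performance metrics of the kind the thesis argues for can be computed directly from a sampled tool-tissue force signal. The sketch below shows a few generic examples (peak, mean, impulse, smoothness); these are illustrative metrics, not necessarily the exact ones the thesis proposes:

```python
import numpy as np

def force_metrics(f, dt):
    """Simple force-based performance metrics from a sampled
    tool-tissue force magnitude signal f (N) at timestep dt (s)."""
    f = np.asarray(f, dtype=float)
    peak = f.max()                                  # maximum applied force
    mean = f.mean()                                 # average applied force
    impulse = dt * (f[:-1] + f[1:]).sum() / 2.0     # trapezoidal integral of force
    smoothness = np.abs(np.diff(f) / dt).mean()     # mean absolute force rate
    return {"peak": peak, "mean": mean,
            "impulse": impulse, "smoothness": smoothness}
```

Metrics like these are automatically computable and objective, which is what allows them to be correlated with surgeon experience level as described above.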

    Programming by Demonstration on Riemannian Manifolds

    This thesis presents a Riemannian approach to Programming by Demonstration (PbD). It generalizes an existing PbD method from Euclidean manifolds to Riemannian manifolds. In this abstract, we review the objectives, methods and contributions of the presented approach.
    OBJECTIVES PbD aims at providing a user-friendly method for skill transfer between human and robot. It enables a user to teach a robot new tasks using few demonstrations. In order to surpass simple record-and-replay, methods for PbD need to 'understand' what to imitate; they need to extract the functional goals of a task from the demonstration data. This is typically achieved through the application of statistical methods. The variety of data encountered in robotics is large. Typical manipulation tasks involve position, orientation, stiffness, force and torque data. These data are not solely Euclidean. Instead, they originate from a variety of manifolds, curved spaces that are only locally Euclidean. Elementary operations, such as summation, are not defined on manifolds. Consequently, standard statistical methods are not well suited to analyze demonstration data that originate from non-Euclidean manifolds. In order to effectively extract what-to-imitate, methods for PbD should take into account the underlying geometry of the demonstration manifold; they should be geometry-aware. Successful task execution does not solely depend on the control of individual task variables. By controlling variables individually, a task might fail when one is perturbed and the others do not respond. Task execution also relies on couplings among task variables. These couplings describe functional relations which are often called synergies. In order to understand what-to-imitate, PbD methods should be able to extract and encode synergies; they should be synergetic. In unstructured environments, it is unlikely that tasks are found in the same scenario twice. The circumstances under which a task is executed (the task context) are more likely to differ each time it is executed. Task context does not only vary during task execution; it also varies while learning and recognizing tasks. To be effective, a robot should be able to learn, recognize and synthesize skills in a variety of familiar and unfamiliar contexts; this can be achieved when its skill representation is context-adaptive.
    THE RIEMANNIAN APPROACH In this thesis, we present a skill representation that is geometry-aware, synergetic and context-adaptive. The presented method is probabilistic; it assumes that demonstrations are samples from an unknown probability distribution. This distribution is approximated using a Riemannian Gaussian Mixture Model (GMM). Instead of using the 'standard' Euclidean Gaussian, we rely on the Riemannian Gaussian: a distribution akin to the Gaussian, but defined on a Riemannian manifold. A Riemannian manifold is a manifold (a curved space which is locally Euclidean) that provides a notion of distance. This notion is essential for statistical methods, as such methods rely on a distance measure. Examples of Riemannian manifolds in robotics are: the Euclidean space, which is used for spatial data, forces or torques; the spherical manifolds, which can be used for orientation data defined as unit quaternions; and Symmetric Positive Definite (SPD) manifolds, which can be used to represent stiffness and manipulability. The Riemannian Gaussian is intrinsically geometry-aware. Its definition is based on the geometry of the manifold, and therefore takes into account the manifold curvature. In robotics, the manifold structure is often known beforehand. In the case of PbD, it follows from the structure of the demonstration data. Like the Gaussian distribution, the Riemannian Gaussian is defined by a mean and covariance. The covariance describes the variance and correlation among the state variables. These can be interpreted as local functional couplings among state variables: synergies. This makes the Riemannian Gaussian synergetic. Furthermore, information encoded in multiple Riemannian Gaussians can be fused using the Riemannian product of Gaussians. This feature allows us to construct a probabilistic context-adaptive task representation.
    CONTRIBUTIONS In particular, this thesis presents a generalization of existing methods of PbD, namely GMM-GMR and TP-GMM. This generalization involves the definition of the Maximum Likelihood Estimate (MLE), Gaussian conditioning and the Gaussian product for the Riemannian Gaussian, and the definition of Expectation-Maximization (EM) and Gaussian Mixture Regression (GMR) for the Riemannian GMM. In this generalization, we contributed by proposing to use parallel transport for Gaussian conditioning. Furthermore, we presented a unified approach to solve the aforementioned operations using a Gauss-Newton algorithm. We demonstrated how synergies, encoded in a Riemannian Gaussian, can be transformed into synergetic control policies using standard methods for the Linear Quadratic Regulator (LQR). This is achieved by formulating the LQR problem in a (Euclidean) tangent space of the Riemannian manifold. Finally, we demonstrated how the context-adaptive Task-Parameterized Gaussian Mixture Model (TP-GMM) can be used for context inference: the ability to extract context from demonstration data of known tasks. Our approach is the first attempt at context inference in the light of TP-GMM. Although effective, we showed that it requires further improvements in terms of speed and reliability. The efficacy of the Riemannian approach is demonstrated in a variety of scenarios. In shared control, the Riemannian Gaussian is used to represent control intentions of a human operator and an assistive system. In doing so, the properties of the Gaussian can be employed to mix their control intentions. This yields shared-control systems that continuously re-evaluate and assign control authority based on input confidence. The context-adaptive TP-GMM is demonstrated in a Pick & Place task with changing pick and place locations, a box-taping task with changing box sizes, and a trajectory tracking task typically found in industry.
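The geometric machinery behind this approach can be sketched for the simplest non-Euclidean case mentioned above, the unit sphere: exponential and logarithmic maps move between the manifold and a tangent space, and a fixed-point iteration of the Gauss-Newton kind computes the Riemannian (Fréchet) mean. This is a minimal stand-in for the thesis's more general MLE machinery, not its actual implementation:

```python
import numpy as np

def exp_map(base, v):
    """Exponential map on the unit sphere: walk from `base` along
    tangent vector v, staying on the sphere."""
    n = np.linalg.norm(v)
    if n < 1e-12:
        return base
    return np.cos(n) * base + np.sin(n) * (v / n)

def log_map(base, p):
    """Logarithmic map on the unit sphere: tangent vector at `base`
    pointing toward p, with length equal to the geodesic distance."""
    d = p - base * np.dot(base, p)          # project p onto tangent space
    dn = np.linalg.norm(d)
    if dn < 1e-12:
        return np.zeros_like(base)
    theta = np.arccos(np.clip(np.dot(base, p), -1.0, 1.0))
    return theta * d / dn

def riemannian_mean(points, iters=20):
    """Fréchet mean by fixed-point iteration: average the log-mapped
    points in the tangent space, then step along the exp map."""
    mu = points[0] / np.linalg.norm(points[0])
    for _ in range(iters):
        v = np.mean([log_map(mu, p) for p in points], axis=0)
        mu = exp_map(mu, v)
    return mu
```

Replacing the Euclidean average inside an EM update with this log-average-exp loop is, in essence, what makes a GMM geometry-aware.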

    Towards observable haptics: Novel sensors for capturing tactile interaction patterns

    Kõiva R. Towards observable haptics: Novel sensors for capturing tactile interaction patterns. Bielefeld: Bielefeld University; 2014. Touch is one of the primary senses humans use when performing coordinated interaction, but the lack of a sense of touch in the majority of contemporary interactive technical systems, such as robots operating in non-deterministic environments, results in interactions that can at best be described as clumsy. Observing human haptics and extracting the salient information from the gathered data is not only relevant if we are to understand the underlying cognitive processes involved, but should also provide significant clues for the design of future intelligent interactive systems. Such systems could one day help take the burden of tedious tasks off our hands, in a similar fashion to how industrial robots revolutionized manufacturing. The aim of the work in this thesis was to provide significant advancements in tactile sensing technology, and thus move us a step closer to realizing this goal. The contributions contained herein can be broken into two major parts. The first part investigates capturing interaction patterns in humans, with the goals of better understanding manual intelligence and improving the lives of hand amputees, while the second part is focused on augmenting technical systems with a sense of touch. tacTiles, a wireless tactile-sensitive surface element attached to a deformable textile, was developed to capture human full-body interactions with the large surfaces we come into contact with in our daily lives, such as floors, chairs, sofas or other furniture. The Tactile Dataglove, iObject and the Tactile Pen were developed especially to observe human manual intelligence. Whereas iObject allows motion sensing and a higher-definition tactile signal to be captured than the Tactile Dataglove (220 tactile cells in the first iObject prototype versus 54 cells in the glove), the wearable glove makes haptic interactions with arbitrary objects observable. The Tactile Pen was designed to measure grip force during handwriting in order to better facilitate therapeutic treatment assessments. These sensors have already been used extensively by various research groups, including our own, to gain a better understanding of human manual intelligence. The Finger-Force-Linear-Sensor and the Tactile Bracelet are two novel sensors that were developed to facilitate more natural control of dexterous multi-Degree-of-Freedom (DOF) hand prostheses. The Finger-Force-Linear-Sensor is a highly accurate bidirectional single-finger force ground-truth measurement device, designed to enable the testing and development of algorithms that map muscle activations to single-finger forces. The Tactile Bracelet was designed to provide a more robust and intuitive means of control for multi-DOF hand prostheses by measuring the bulging of the remnant muscles in the forearms of lower-arm amputees. It is currently in development and will eventually cover the complete forearm circumference with high-spatial-resolution tactile-sensitive surfaces. An experiment involving a large number of lower-arm amputees has already been planned. The Modular flat tactile sensor system, the Fabric-based touch-sensitive artificial skin and the 3D-shaped tactile sensor were developed to cover the surfaces of technical systems and add touch-sensing capabilities to them. Rapid augmentation of systems with a sense of touch was the main goal of the modular flat tactile sensor system; the developed sensor modules can be used alone or in an array to form larger tactile-sensitive surfaces, such as tactile-sensitive tabletops. As many robots have curved surfaces, using flat rigid modules severely limits the areas that can be covered with tactile sensors. The Fabric-based tactile sensor, originally developed to form a tactile dataglove for human hands, can with minor modifications also function as an artificial skin for technical systems. Finally, the 3D-shaped tactile sensor based on Laser-Direct-Structuring technology is a novel tactile sensor that has a true 3D shape and provides high sensitivity and high spatial resolution. These sensors take us further along the path towards creating general-purpose technical systems that in time can be of great help to us in our daily lives. The desired tactile sensor characteristics differ significantly according to which haptic interaction patterns we wish to measure. Large tactile sensor arrays that are used to capture full-body haptic interactions with floors and upholstered furniture, or that are designed to cover large areas of technical system surfaces, need to be scalable, have low power consumption and should ideally have a low material cost; two examples of such sensors are tacTiles and the Fabric-based sensor for curved surfaces. At the other end of the tactile sensor development spectrum, if we want to observe manual interactions, high spatial and temporal resolution are crucial to enable the measurement of fine grasping and manipulation actions. Our fingertips contain the highest-density area of mechanoreceptors, the organs that sense mechanical pressure and distortion. Thus, to construct biologically inspired anthropomorphic robotic hands, the artificial tactile sensors for the fingertips must be similarly high-fidelity, with surfaces that curve under small bending radii in two dimensions and high spatial density, while simultaneously providing high sensitivity. With the fingertip tactile sensor, designed to fit the Shadow Robot Hands' fingers, I show in the 3D-shaped high-spatial-resolution tactile sensor section of my thesis that such sensors can indeed be constructed. With my work I have made a significant contribution towards making haptics more observable. I achieved this by developing a number of novel tactile sensors that are usable, give a deeper insight into human haptic interactions, have great potential to help amputees, and make technical systems, such as robots, more capable.
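As a small example of extracting salient information from a tactile array frame of the kind these sensors produce, the sketch below computes the contact centroid (center of pressure) as the pressure-weighted mean of the responding taxel coordinates. This is generic post-processing, not code from the thesis:

```python
import numpy as np

def contact_centroid(taxel_grid, threshold=0.0):
    """Center of pressure of a 2D tactile array frame: the
    pressure-weighted mean of the responding taxel (row, col)
    coordinates. Returns None when no taxel exceeds the threshold."""
    p = np.asarray(taxel_grid, dtype=float)
    p = np.where(p > threshold, p, 0.0)   # suppress sub-threshold noise
    total = p.sum()
    if total == 0:
        return None                        # no contact detected
    rows, cols = np.indices(p.shape)
    return (rows * p).sum() / total, (cols * p).sum() / total
```

Tracking this centroid over successive frames is one simple way to turn raw taxel data into an observable interaction pattern, such as a finger sliding across a tactile tabletop.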

    DEVELOPMENT AND ASSESSMENT OF ADVANCED ASSISTIVE ROBOTIC MANIPULATORS USER INTERFACES

    Assistive Robotic Manipulators (ARMs) have been shown to improve self-care and increase independence among people with severe upper extremity disabilities. Mounted on the side of an electric powered wheelchair, an ARM may provide manipulation assistance, such as picking up objects, eating, drinking, dressing, reaching out, or opening doors. However, existing assessment tools are inconsistent between studies, time consuming, and of unclear clinical effectiveness. Therefore, in this research, we have developed an ADL task board evaluation tool that provides a standardized, efficient, and reliable assessment of ARM performance. Among powered wheelchair users and able-bodied controls using two commercial ARM user interfaces, a joystick and a keypad, we found statistical differences between the performance of the two interfaces, but no statistical difference in cognitive load. The ADL task board demonstrated performance highly correlated with an existing functional assessment tool, the Wolf Motor Function Test. Through this study, we also identified barriers and limits in current commercial user interfaces, and developed smartphone and assistive sliding-autonomy user interfaces that yield improved performance. Testing results from our smartphone manual interface revealed statistically faster performance, and the assistive sliding-autonomy interface helped seamlessly correct the errors seen with autonomous functions. The ADL task performance evaluation tool may help clinicians and researchers better assess ARM user interfaces and evaluate the efficacy of customized user interfaces for improving performance. The smartphone manual interface demonstrated improved performance, and the sliding-autonomy framework showed enhanced success with tasks without recalculating path planning and recognition.
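The reported correlation between task-board performance and the Wolf Motor Function Test can be quantified with an ordinary correlation coefficient. The sketch below is a minimal Pearson implementation over two score vectors; the thesis's actual statistical procedure may differ:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired score
    vectors, e.g. ADL task-board times and WMFT scores."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()   # center both vectors
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))
```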

    Mechanisms of motor learning: by humans, for robots

    Whenever we perform a movement and interact with objects in our environment, our central nervous system (CNS) adapts and controls the redundant system of muscles actuating our limbs to produce forces and impedance suitable for the interaction. As modern robots are increasingly used to interact with objects, humans and other robots, they too need to continuously adapt their interaction forces and impedance to the situation. This thesis investigated these motor mechanisms in humans through a series of technical developments and experiments, and used the results to implement biomimetic motor behaviours on a robot. Original tools were first developed, which enabled two novel motor imaging experiments using functional magnetic resonance imaging (fMRI). The first experiment investigated the neural correlates of force and impedance control to understand the control structure employed by the human brain. The second experiment developed a regressor-free technique to detect dynamic changes in brain activations during learning, and applied this technique to investigate changes in neural activity during adaptation to force fields and visuomotor rotations. In parallel, a psychophysical experiment investigated motor optimization in humans in a task characterized by multiple error-effort optima. Finally, a computational model derived from some of these results was implemented to exhibit human-like control and adaptation of force, impedance and movement trajectory in a robot.
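Trial-by-trial adaptation of the kind studied here is commonly modelled as an error-driven update of a feedforward command. The sketch below implements that generic rule for a constant force field; the field strength and learning rate are illustrative assumptions, not the thesis's model:

```python
def adapt_feedforward(trials=30, alpha=0.5, field=5.0):
    """Generic error-driven adaptation of a feedforward force:
    on each trial, the feedforward command is updated by a
    fraction (alpha) of the residual error caused by a constant
    force field. Illustrates trial-by-trial motor adaptation."""
    u = 0.0          # feedforward compensation force (N)
    errors = []
    for _ in range(trials):
        e = field - u        # residual movement error this trial
        errors.append(e)
        u += alpha * e       # update toward full compensation
    return u, errors
```

Under this rule the error decays geometrically by a factor (1 - alpha) per trial, the exponential learning curve typically observed in force-field adaptation experiments.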