410,988 research outputs found

    A Framework of Hybrid Force/Motion Skills Learning for Robots

    Human factors and human-centred design philosophy are highly desired in today’s robotics applications such as human-robot interaction (HRI). Several studies have shown that endowing robots with human-like interaction skills can not only make them more likeable but also improve their performance. In particular, skill transfer by imitation learning can increase the usability and acceptability of robots for users without computer-programming skills. In fact, besides positional information, the muscle stiffness of the human arm and the contact force with the environment also play important roles in understanding and generating human-like manipulation behaviours for robots, e.g., in physical HRI and tele-operation. To this end, we present a novel robot learning framework based on Dynamic Movement Primitives (DMPs) that takes into consideration both the positional and the contact-force profiles for human-robot skill transfer. Distinguished from conventional methods involving only motion information, the proposed framework combines two sets of DMPs, which are built to model the motion trajectory and the force variation of the robot manipulator, respectively. A hybrid force/motion control approach is thus taken to ensure accurate tracking and reproduction of the desired positional and force motor skills. Meanwhile, to simplify the control system, a momentum-based force observer is applied to estimate the contact force instead of employing force sensors. To deploy the learned motion-force manipulation skills in a broader variety of tasks, the generalization of these DMP models to new situations is also considered. Comparative experiments have been conducted using a Baxter robot to verify the effectiveness of the proposed learning framework in real-world scenarios such as cleaning a table.
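The core mechanism this abstract relies on, a discrete Dynamic Movement Primitive, is a damped spring pulling toward a goal plus a learned nonlinear forcing term, and the framework trains one set of DMPs on the position profile and another on the contact-force profile. The sketch below is a textbook one-degree-of-freedom DMP, not the authors' implementation; the gains (`a`, `b`, `ax`), the basis count, and the toy demonstrations are all assumptions.

```python
import numpy as np

class DMP:
    """One-DoF discrete DMP: tau*dz = a*(b*(g - y) - z) + f(x), tau*dy = z,
    with canonical system tau*dx = -ax*x driving Gaussian basis functions."""

    def __init__(self, n_basis=20, a=25.0, b=6.25, ax=2.0, tau=1.0):
        self.a, self.b, self.ax, self.tau = a, b, ax, tau
        self.c = np.exp(-ax * np.linspace(0, 1, n_basis))  # basis centers in x
        self.h = 1.0 / np.gradient(self.c) ** 2            # basis widths
        self.w = np.zeros(n_basis)

    def _forcing(self, x):
        psi = np.exp(-self.h * (x - self.c) ** 2)
        return (psi @ self.w) * x / (psi.sum() + 1e-10)

    def fit(self, y, dt):
        """Least-squares fit of the forcing term to one demonstration y(t)."""
        T = len(y)
        self.y0, self.g = y[0], y[-1]
        dy = np.gradient(y, dt)
        ddy = np.gradient(dy, dt)
        x = np.exp(-self.ax * np.arange(T) * dt / self.tau)
        # forcing target implied by the demonstration
        f_t = self.tau**2 * ddy - self.a * (self.b * (self.g - y) - self.tau * dy)
        psi = np.exp(-self.h * (x[:, None] - self.c) ** 2)
        X = psi * x[:, None] / (psi.sum(axis=1, keepdims=True) + 1e-10)
        self.w = np.linalg.lstsq(X, f_t, rcond=None)[0]

    def rollout(self, T, dt):
        """Integrate the learned system forward (explicit Euler)."""
        y, z, x, out = self.y0, 0.0, 1.0, []
        for _ in range(T):
            z += dt * (self.a * (self.b * (self.g - y) - z) + self._forcing(x)) / self.tau
            y += dt * z / self.tau
            x += dt * (-self.ax * x) / self.tau
            out.append(y)
        return np.array(out)

# one DMP set for the motion trajectory, one for the contact-force profile
t = np.linspace(0, 1, 200)
pos_demo = 0.5 * (1 - np.cos(np.pi * t))    # toy rest-to-rest reach, 0 -> 1
force_demo = 5.0 * np.sin(np.pi * t) ** 2   # toy press-and-release contact force
motion_dmp, force_dmp = DMP(), DMP()
motion_dmp.fit(pos_demo, dt=1 / 200)
force_dmp.fit(force_demo, dt=1 / 200)
pos_repro = motion_dmp.rollout(200, dt=1 / 200)
force_repro = force_dmp.rollout(200, dt=1 / 200)
```

At run time the two reproduced profiles would feed a hybrid force/motion controller; changing the goal `g` before `rollout` adapts the trajectory to a new target, which is the kind of generalization the abstract refers to.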

    Design, implementation, control, and user evaluations of assiston-arm self-aligning upper-extremity exoskeleton

    Physical rehabilitation therapy is indispensable for treating neurological disabilities. The use of robotic devices for rehabilitation holds high promise, since these devices can bear the physical burden of rehabilitation exercises during intense therapy sessions, while therapists are employed as decision makers. Robot-assisted rehabilitation devices are advantageous as they can be applied to patients with all levels of impairment, allow for easy tuning of the duration and intensity of therapies, and enable customized, interactive treatment protocols. Moreover, since robotic devices are particularly good at repetitive tasks, rehabilitation robots can decrease the physical burden on therapists and enable a single therapist to supervise multiple patients simultaneously, and hence help lower the cost of therapies. While the intensity and quality of manually delivered therapies depend on the skill and fatigue level of therapists, high-intensity robotic therapies can always be delivered with high accuracy. Thanks to their integrated sensors, robotic devices can gather measurements throughout therapies, enabling quantitative tracking of patient progress and the development of evidence-based personalized rehabilitation programs. In this dissertation, we present the design, control, characterization, and user evaluations of AssistOn-Arm, a powered, self-aligning exoskeleton for robot-assisted upper-extremity rehabilitation. AssistOn-Arm is designed as a passively back-driveable impedance-type robot, such that patients/therapists can move the device transparently, without significant interference from the device dynamics on natural movements. Thanks to its novel kinematics and mechanically transparent design, AssistOn-Arm can passively self-align its joint axes to provide an ideal match between human joint axes and the exoskeleton axes, guaranteeing ergonomic movements and comfort throughout physical therapies.
The self-aligning property of AssistOn-Arm not only increases the usable range of motion for robot-assisted upper-extremity exercises to cover almost the whole human arm workspace, but also enables the delivery of glenohumeral mobilization (scapular elevation/depression and protraction/retraction) and scapular stabilization exercises, extending the type of therapies that can be administered using upper-extremity exoskeletons. Furthermore, the self-alignment property of AssistOn-Arm significantly shortens the setup time required to attach a patient to the exoskeleton. As an impedance-type device with high passive back-driveability, AssistOn-Arm can be force controlled without the need for force sensors; hence, high-fidelity interaction control performance can be achieved with open-loop impedance control. This control architecture not only simplifies implementation, but also enhances safety (coupled stability robustness), since open-loop force control does not suffer from the fundamental bandwidth and stability limitations of force feedback. Experimental characterizations and user studies with healthy volunteers confirm the transparency, range of motion, and control performance of AssistOn-Arm.
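The open-loop impedance control mentioned above has a very small core: command joint torques from a virtual spring-damper between the desired and measured joint states, plus a gravity model, with no force sensor or force-feedback loop. A minimal joint-space sketch, with made-up gains and an assumed (here trivial) gravity model:

```python
import numpy as np

def impedance_torque(q, dq, q_des, dq_des, K, D, gravity):
    """Open-loop joint-space impedance law:
    tau = K (q_des - q) + D (dq_des - dq) + g(q).
    With a highly back-driveable mechanism, no force feedback is required."""
    return K @ (q_des - q) + D @ (dq_des - dq) + gravity(q)

# toy 2-joint example; the gains and the zero gravity model are assumptions
K = np.diag([30.0, 20.0])        # rendered stiffness [N*m/rad]
D = np.diag([3.0, 2.0])          # rendered damping   [N*m*s/rad]
gravity = lambda q: np.zeros(2)  # e.g. a statically balanced design
tau = impedance_torque(q=np.array([0.1, 0.0]), dq=np.zeros(2),
                       q_des=np.array([0.2, 0.1]), dq_des=np.zeros(2),
                       K=K, D=D, gravity=gravity)
```

Lower values of `K` and `D` make the device more transparent to the wearer; because no force sensor closes a feedback loop, the coupled system avoids the bandwidth and stability limitations of force feedback that the abstract points to.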

    Grip force as a functional window to somatosensory cognition

    Analysis of grip force signals, tailored to hand and finger movement evolution and changes in grip force control during task execution, provides unprecedented functional insight into somatosensory cognition. Somatosensory cognition is a basis of our ability to manipulate, move, and transform objects of the physical world around us, to recognize them on the basis of touch alone, and to grasp them with the right amount of force for lifting and manipulating them. Recent technology has permitted the wireless monitoring of grip force signals recorded from biosensors in the palm of the human hand to track and trace human grip forces deployed in cognitive tasks executed under conditions of variable sensory (visual, auditory) input. Non-invasive multi-finger grip force sensor technology can be exploited to explore functional interactions between somatosensory brain mechanisms and motor control, in particular during the learning of a cognitive task where the planning and strategic execution of hand movements is essential. Sensorial and cognitive processes underlying manual skills and/or hand-specific (dominant versus non-dominant hand) behaviors can be studied in a variety of contexts by probing selected measurement loci in the fingers and palm of the human hand. Thousands of sensor data recorded from multiple spatial locations can be approached statistically to breathe functional sense into the forces measured under specific task constraints. Grip force patterns in individual performance profiling may reveal the evolution of grip force control as a direct result of cognitive changes during task learning. Grip forces can be functionally mapped to from-global-to-local coding principles in brain networks governing somatosensory processes for motor control in cognitive tasks leading to a specific task expertise or skill.
In light of a comprehensive overview of recent discoveries about the functional significance of human grip force variations, perspectives for future studies in cognition, in particular the cognitive control of strategic and task-relevant hand movements in complex real-world precision tasks, are pointed out.

    Dance Teaching by a Robot: Combining Cognitive and Physical Human-Robot Interaction for Supporting the Skill Learning Process

    This letter presents a physical human-robot interaction scenario in which a robot guides and performs the role of a teacher within a defined dance-training framework. Combined cognitive and physical feedback of performance is proposed for assisting the skill-learning process. Direct-contact cooperation has been designed through an adaptive impedance-based controller that adjusts according to the partner's performance in the task. To measure performance, a scoring system has been designed using the concept of progressive teaching (PT); the system adjusts the difficulty based on the user's number of practices and performance history. Comparative experiments using the proposed method and a baseline constant controller have shown that PT yields better performance in the initial stage of skill learning. An analysis of the subjects' perception of comfort, peace of mind, and robot performance showed a significant difference at the p < .01 level, favoring the PT algorithm.
    Comment: Presented at IEEE International Conference on Robotics and Automation ICRA-201
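The progressive-teaching idea described in this abstract, adjusting task difficulty from the user's practice count and performance history, can be illustrated with a simple rule. The window size, thresholds, level bounds, and the 0-1 score scale below are illustrative assumptions, not taken from the letter:

```python
def next_difficulty(level, scores, window=3, up=0.8, down=0.4,
                    min_level=1, max_level=10):
    """Raise or lower the dance-task difficulty from recent performance.

    `scores` is the user's performance history on an assumed 0-1 scale;
    the level only moves once `window` practices have been logged.
    """
    recent = scores[-window:]
    if len(recent) < window:              # too few practices: keep the level
        return level
    mean = sum(recent) / window
    if mean >= up:                        # consistently good: harder task
        return min(level + 1, max_level)
    if mean <= down:                      # consistently poor: easier task
        return max(level - 1, min_level)
    return level

# three consistently high scores raise the difficulty one step
new_level = next_difficulty(3, [0.9, 0.85, 0.95])
```

The adaptive impedance-based controller would then map the current level to, for example, how much guidance stiffness the robot renders during the dance.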

    Learning Dynamic Robot-to-Human Object Handover from Human Feedback

    Object handover is a basic but essential capability for robots interacting with humans in many applications, e.g., caring for the elderly and assisting workers in manufacturing workshops. It appears deceptively simple, as humans perform object handover almost flawlessly. The success of humans, however, belies the complexity of object handover as a collaborative physical interaction between two agents with limited communication. This paper presents a learning algorithm for dynamic object handover, for example, when a robot hands over water bottles to marathon runners passing by the water station. We formulate the problem as contextual policy search, in which the robot learns object handover by interacting with the human. A key challenge here is to learn the latent reward of the handover task under noisy human feedback. Preliminary experiments show that the robot learns to hand over a water bottle naturally and that it adapts to the dynamics of human motion. One challenge for the future is to combine the model-free learning algorithm with a model-based planning approach and enable the robot to adapt to human preferences and object characteristics, such as shape, weight, and surface texture.
    Comment: Appears in the Proceedings of the International Symposium on Robotics Research (ISRR) 201
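Contextual policy search, as named in this abstract, learns a mapping from a task context (e.g. the runner's approach speed) to policy parameters by trial and reward. The sketch below is a generic toy version using a linear Gaussian policy improved by reward-weighted regression; the update rule, the reward function, and all names are illustrative assumptions, not the paper's algorithm (which additionally learns a latent reward from noisy human feedback):

```python
import numpy as np

rng = np.random.default_rng(0)

def contextual_policy_search(reward, context_dim=1, param_dim=1,
                             iters=50, pop=20, lr=0.2, sigma=0.5):
    """Learn W so that theta = W @ s + noise earns high reward in context s.
    (Scalar toy case; the weighted-regression step below is written for 1x1 W.)"""
    W = np.zeros((param_dim, context_dim))
    for _ in range(iters):
        s = rng.uniform(0.5, 1.5, (pop, context_dim))    # sampled contexts
        eps = rng.normal(0.0, sigma, (pop, param_dim))   # exploration noise
        theta = s @ W.T + eps                            # exploratory parameters
        r = np.array([reward(si, ti) for si, ti in zip(s, theta)])
        w = np.exp(r - r.max())                          # soft reward weights
        # reward-weighted regression toward the better samples
        W_new = (theta * w[:, None]).T @ s / ((s * w[:, None]).T @ s + 1e-9)
        W = (1 - lr) * W + lr * W_new
    return W

# toy task: the best handover parameter scales with runner speed, and the
# human feedback about it is noisy
true_gain = 0.8
noisy_reward = lambda s, th: -abs(th[0] - true_gain * s[0]) + rng.normal(0, 0.05)
W = contextual_policy_search(noisy_reward)
```

Despite the noise on every reward, averaging over the sampled population lets the learned gain settle near the underlying optimum, which mirrors the paper's point about learning under noisy human feedback.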

    Task analysis of discrete and continuous skills: a dual methodology approach to human skills capture for automation

    There is a growing requirement within the field of intelligent automation for a formal methodology to capture and classify the explicit and tacit skills deployed by operators during complex task performance. This paper describes the development of a dual-methodology approach that recognises the inherent differences between continuous tasks and discrete tasks and proposes separate methodologies for each. Both methodologies emphasise capturing operators’ physical, perceptual, and cognitive skills; however, they fundamentally differ in their approach. The continuous task analysis recognises the non-arbitrary nature of operation ordering and that identifying suitable cues for each subtask is a vital component of the skill. The discrete task analysis is a more traditional, chronologically ordered methodology, intended to increase the resolution of skill classification and to be practical for assessing complex tasks involving multiple unique subtasks, through the use of a taxonomy of generic physical, perceptual, and cognitive actions.