1,052 research outputs found

    Human-centred design methods: developing scenarios for robot assisted play informed by user panels and field trials

    Original article can be found at: http://www.sciencedirect.com/ (Copyright Elsevier). This article describes the user-centred development of play scenarios for robot assisted play, as part of the multidisciplinary IROMEC project that develops a novel robotic toy for children with special needs. The project investigates how robotic toys can become social mediators, encouraging children with special needs to discover a range of play styles, from solitary to collaborative play (with peers, carers/teachers, parents, etc.). This article explains the developmental process of constructing relevant play scenarios for children with different special needs. Results are presented from consultation with a panel of experts (therapists, teachers, parents) who advised on the play needs of the various target user groups and who helped investigate how robotic toys could be used as a play tool to assist in the children's development. Examples from experimental investigations are provided which have informed the development of scenarios throughout the design process. We conclude by pointing out the potential benefit of this work to a variety of research projects and applications involving human–robot interactions.

    Design and Control of Lower Limb Assistive Exoskeleton for Hemiplegia Mobility


    Enabling Human-Robot Collaboration via Holistic Human Perception and Partner-Aware Control

    As robotic technology advances, the barriers to the coexistence of humans and robots are slowly coming down. Application domains such as elderly care, collaborative manufacturing, and collaborative manipulation are seen as pressing needs, and progress in robotics holds the potential to address many societal challenges. Future socio-technical systems will consist of a blended workforce in which human and robot partners work collaboratively in a symbiotic relationship. This thesis attempts to address some of the research challenges in enabling human-robot collaboration. In particular, holistic perception of a human partner, so that the partner's intentions and needs are continuously communicated to the robot in real time, is crucial for the successful realization of a collaborative task. Towards that end, we present a holistic human perception framework for real-time monitoring of whole-body human motion and dynamics. Conversely, leveraging assistance from a human partner can lead to improved human-robot collaboration. In this direction, we methodically define what constitutes assistance from a human partner and propose partner-aware robot control strategies to endow robots with the capacity to meaningfully engage in a collaborative task.
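
    Nothing in the abstract fixes an implementation, but as a purely illustrative sketch of what a partner-aware control law can look like, the Python snippet below modulates the stiffness of a single-axis impedance controller using an estimated human interaction force, so that the robot yields when the partner takes the lead. The function name and every parameter value are assumptions, not taken from the thesis.

        # Hypothetical sketch, not the thesis implementation: a partner-aware
        # impedance law that softens the robot's stiffness when the estimated
        # human interaction force indicates the partner is leading the motion.
        import numpy as np

        def partner_aware_impedance(x, x_des, x_dot, f_human,
                                    k_max=800.0, k_min=100.0, d=40.0, f_scale=20.0):
            """Commanded end-effector force along one Cartesian axis.

            x, x_des, x_dot : current position, desired position, velocity [m, m, m/s]
            f_human         : estimated force applied by the human partner [N]
            """
            # Blend stiffness: larger human effort -> lower stiffness (the robot yields).
            alpha = np.tanh(abs(f_human) / f_scale)      # 0 (no effort) .. 1 (high effort)
            k = (1.0 - alpha) * k_max + alpha * k_min    # interpolated stiffness
            return k * (x_des - x) - d * x_dot           # spring-damper force command

        # Example: the human pushes with 15 N while the robot tries to hold position.
        print(partner_aware_impedance(x=0.02, x_des=0.0, x_dot=0.0, f_human=15.0))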

    Learning Control Policies for Fall Prevention and Safety in Bipedal Locomotion

    The ability to recover from an unexpected external perturbation is a fundamental motor skill in bipedal locomotion. An effective response includes the ability not just to recover balance and maintain stability but also to fall in a safe manner when balance recovery is physically infeasible. For robots associated with bipedal locomotion, such as humanoid robots and assistive robotic devices that aid humans in walking, designing controllers which provide this stability and safety can prevent damage to the robots and avoid injury-related medical costs. This is a challenging task because it involves generating highly dynamic motion for a high-dimensional, non-linear and under-actuated system with contacts. Despite prior advances in model-based and optimization methods, challenges such as the requirement for extensive domain knowledge, relatively long computation times and limited robustness to changes in dynamics still make this an open problem. In this thesis, we address these issues by developing learning-based algorithms capable of synthesizing push-recovery control policies for two kinds of robots: humanoid robots and assistive robotic devices that aid bipedal locomotion. Our work branches into two closely related directions: 1) learning safe falling and fall-prevention strategies for humanoid robots, and 2) learning fall-prevention strategies for humans using robotic assistive devices. To achieve this, we introduce a set of Deep Reinforcement Learning (DRL) algorithms to learn control policies that improve safety while using these robots. To enable efficient learning, we present techniques to incorporate abstract dynamical models, curriculum learning and a novel method of building a graph of policies into the learning framework. We also propose an approach to create virtual human walking agents which exhibit gait characteristics similar to those of real-world human subjects, and we use these agents to learn an assistive-device controller that helps the virtual human return to steady-state walking after an external push. Finally, we extend our work on assistive devices and address the challenge of transferring a push-recovery policy to different individuals. As walking and recovery characteristics differ significantly between individuals, exoskeleton policies have to be fine-tuned for each person, which is a tedious, time-consuming and potentially unsafe process. We propose to solve this by posing it as a transfer learning problem, where a policy trained for one individual can adapt to another without fine-tuning.
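
    As a minimal, self-contained illustration of the push-recovery policy learning described above (not the thesis's DRL pipeline), the sketch below uses random-search hill climbing to find feedback gains that keep a toy planar inverted pendulum upright after a set of test pushes. The dynamics, reward terms and "fall" threshold are assumed stand-ins for a full humanoid simulation.

        # Toy sketch: learn push-recovery feedback gains for an inverted pendulum
        # by random-search hill climbing (a stand-in for the DRL algorithms above).
        import numpy as np

        g, l, dt, horizon = 9.81, 1.0, 0.02, 200
        rng = np.random.default_rng(0)

        def rollout(w, push):
            """Simulate the pendulum with feedback torque u = -w . [theta, theta_dot]."""
            theta, theta_dot, ret = 0.0, push, 0.0       # push = initial angular velocity
            for _ in range(horizon):
                u = -float(np.dot(w, [theta, theta_dot]))
                theta_dot += (g / l * np.sin(theta) + u) * dt
                theta += theta_dot * dt
                ret -= theta ** 2 + 0.01 * u ** 2        # penalize tilt and control effort
                if abs(theta) > 0.5:                     # treat a large tilt as a fall
                    ret -= 100.0
                    break
            return ret

        def score(w):
            pushes = [-1.0, -0.5, 0.5, 1.0]              # fixed set of test perturbations
            return sum(rollout(w, p) for p in pushes)

        w, best = np.zeros(2), score(np.zeros(2))
        for _ in range(300):                             # simple hill-climbing search
            cand = w + rng.normal(scale=0.5, size=2)
            cand_score = score(cand)
            if cand_score > best:
                w, best = cand, cand_score
        print("learned gains:", w, "score:", best)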

    Human-Mechanical system interaction in Virtual Reality

    The present work aims to show the great potential of Virtual Reality (VR) technologies in the field of Human-Robot Interaction (HRI). Indeed, it is foreseeable that in the not-too-distant future cooperating robots will be increasingly present in human environments. Many authors believe that after the current information revolution we will witness the so-called "robotics revolution", with the spread of increasingly intelligent and autonomous robots capable of moving into our own environments. Since these machines must be able to interact with human beings in a safe way, new design tools for the study of HRI are needed. The author believes that VR is an ideal design tool for the study of the interaction between humans and automatic machines, since it allows designers to interact in real time with virtual robotic systems and to evaluate different control algorithms without the need for physical prototypes. This also shields the user from any risk related to physical experimentation. However, VR technologies also have a more immediate application in the field of HRI: the study of the usability of interfaces for real-time controlled robots. In fact, such robots, for example robots for microsurgery or teleoperated robots working in hostile environments, are already quite common. VR allows designers to evaluate the usability of such interfaces by relating their physical input with a virtual output. In particular, the author has developed a new software application aimed at simulating automatic robots and, more generally, mechanical systems in a virtual environment. The user can interact with one or more virtual manipulators and also control them in real time by means of several input devices. Finally, an innovative approach to the modeling and control of a humanoid robot with a high degree of redundancy is discussed. A VR implementation of a virtual humanoid is useful for the study of both humanoid robots and human beings.
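
    For illustration only (this is not the author's software): the sketch below shows the core loop such a simulator might run, where a tracked input device streams a Cartesian target every frame and a virtual two-link planar manipulator follows it through resolved-rate control. read_input_device() is a hypothetical placeholder for a VR controller interface, and the link parameters are assumed.

        # Illustrative real-time loop for a virtual 2-link manipulator driven by a
        # (placeholder) VR input device; in a real application q would be rendered
        # in the virtual scene every frame.
        import numpy as np

        L1, L2, dt = 0.5, 0.4, 1.0 / 60.0                # link lengths [m], 60 Hz frame time

        def forward_kin(q):
            x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
            y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
            return np.array([x, y])

        def jacobian(q):
            s1, c1 = np.sin(q[0]), np.cos(q[0])
            s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
            return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                             [ L1 * c1 + L2 * c12,  L2 * c12]])

        def read_input_device(t):
            # Placeholder for a tracked VR controller: a slow circular target.
            return np.array([0.6 + 0.1 * np.cos(t), 0.1 * np.sin(t)])

        q = np.array([0.3, 0.5])
        for frame in range(600):                         # ten seconds of simulated frames
            target = read_input_device(frame * dt)
            err = target - forward_kin(q)
            q_dot = np.linalg.pinv(jacobian(q)) @ (5.0 * err)   # resolved-rate command
            q = q + q_dot * dt                           # integrate joint angles
        print("final joint angles:", q)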

    Integral admittance shaping: A unified framework for active exoskeleton control

    © 2015 Elsevier B.V. Current strategies for lower-limb exoskeleton control include motion intent estimation, which is subject to inaccuracies in muscle torque estimation as well as modeling error. Approaches that rely on the phases of a uniform gait cycle have proven effective, but lack flexibility to aid other kinds of movement. This research aims to develop a more versatile controller that can assist the lower limbs independently of the movement attempted. Our control strategy is based on modifying the dynamic response of the human limbs, specifically their mechanical admittance. Increasing the admittance makes the lower limbs more responsive to any muscle torque generated by the human user. We present Integral Admittance Shaping, a unified mathematical framework for: (a) determining the desired dynamic response of the coupled system formed by the human limb and the exoskeleton, and (b) synthesizing an exoskeleton controller capable of achieving said response. The present control formulation focuses on single degree-of-freedom exoskeleton devices providing performance augmentation. The algorithm generates a desired shape for the frequency response magnitude of the integral admittance (torque-to-angle relationship) of the coupled system. Simultaneously, it generates an optimal feedback controller capable of achieving the desired response while guaranteeing coupled stability and passivity. The potential effects of the exoskeleton's assistance are motion amplification for the same joint torque, and torque reduction for the same joint motion. The robustness of the derived exoskeleton controllers to parameter uncertainties is analyzed and discussed. Results from initial trials using the controller on an experimental exoskeleton are presented as well.
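
    To make the torque-to-angle idea concrete, here is a small numerical sketch with assumed single-degree-of-freedom limb and exoskeleton parameters (not values from the paper). It compares the integral admittance magnitude of the limb alone with that of the limb coupled to an exoskeleton whose controller feeds back an amplified estimate of the human torque; the increased low-frequency magnitude is the "motion amplification for the same joint torque" mentioned above.

        # Assumed-parameter sketch: integral admittance |theta(jw)/tau(jw)| of a
        # 1-DOF limb, alone and coupled to a torque-amplifying exoskeleton.
        import numpy as np

        I_h, b_h, k_h = 0.5, 2.0, 10.0     # limb inertia, damping, stiffness (assumed)
        I_e, b_e = 0.2, 1.0                # exoskeleton inertia and damping (assumed)
        alpha = 1.0                        # torque amplification gain of the controller

        w = np.logspace(-1, 2, 200)        # frequency axis [rad/s]
        s = 1j * w

        Y_limb = 1.0 / (I_h * s**2 + b_h * s + k_h)                 # limb alone
        Y_coupled = (1.0 + alpha) / ((I_h + I_e) * s**2             # limb + exoskeleton,
                                     + (b_h + b_e) * s + k_h)       # human torque amplified

        gain_db = 20.0 * np.log10(np.abs(Y_coupled) / np.abs(Y_limb))
        print("low-frequency assistance gain: %.1f dB" % gain_db[0])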

    System Identification of Bipedal Locomotion in Robots and Humans

    The ability to perform a healthy walking gait can be impaired in numerous cases by pathologies related to gait disorders. These can lead to partial or complete mobility loss, which affects the patients' quality of life. Wearable exoskeletons and active prostheses have been considered key components to remedy this mobility loss. The control of such devices poses numerous challenges that are yet to be addressed. As opposed to fixed-trajectory control, real-time adaptive reference generation is likely to provide the wearer with more intent-driven control over the powered device. We propose a novel gait pattern generator for the control of such devices that takes advantage of the inter-joint coordination in human gait. Our proposed method puts the user in the control loop, as it maps the motion of the healthy limbs to that of the affected one. To design such a control strategy, it is critical to understand the dynamics behind bipedal walking. We begin by studying the simple compass gait walker. We examine the well-known Virtual Constraints method of controlling bipedal robots, applied to the compass gait. In addition, we provide both the mechanical and the control design of an affordable research platform for bipedal dynamic walking. We then extend the concept of virtual constraints to human locomotion, where we investigate the accuracy of predicting the angular positions and velocities of the lower-limb joints from the motion of the other limbs. Data from nine healthy subjects performing specific locomotion tasks were collected and are made available online. A successful prediction of the hip, knee, and ankle joints was achieved in different scenarios. It was also found that the motion of the cane alone carries sufficient information to predict good trajectories for the lower limb in stair ascent. Better estimates were obtained using additional information from the arm joints. We also explored the prediction of knee and ankle trajectories from the motion of the hip joints.
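
    As a toy illustration of such an inter-joint coordination map, the sketch below fits a ridge regression from the sound-side hip and knee angles to the contralateral knee angle, the kind of mapping that could drive reference generation for a powered device. The sinusoidal "gait" data are synthetic stand-ins for the recorded human trajectories used in the thesis.

        # Synthetic-data sketch: ridge regression predicting an affected-side knee
        # angle from the motion of the healthy limbs (inter-joint coordination).
        import numpy as np

        rng = np.random.default_rng(1)
        t = np.linspace(0.0, 10.0, 1000)                 # ~10 s of "gait" at ~100 Hz
        hip = 20.0 * np.sin(2 * np.pi * t)               # degrees, ~1 Hz stride (synthetic)
        knee_sound = 30.0 * np.sin(2 * np.pi * t - 0.6) + 30.0
        knee_affected_true = 30.0 * np.sin(2 * np.pi * t - 0.6 + np.pi) + 30.0

        X = np.column_stack([hip, knee_sound, np.ones_like(t)])
        y = knee_affected_true + rng.normal(0.0, 1.0, t.size)   # noisy "measurements"

        lam = 1e-2                                       # ridge regularization weight
        W = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

        pred = X @ W
        rmse = np.sqrt(np.mean((pred - knee_affected_true) ** 2))
        print("RMSE of predicted affected-knee angle: %.2f deg" % rmse)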