
    User evaluation of an interactive learning framework for single-arm and dual-arm robots

    Social robots are expected to adapt to their users and, like their human counterparts, learn from the interaction. In our previous work, we proposed an interactive learning framework that enables a user to intervene and modify a segment of the robot arm trajectory. The framework uses gesture teleoperation and reinforcement learning to learn new motions. In the current work, we compared the user experience with the proposed framework implemented on the single-arm and dual-arm Barrett 7-DOF WAM robots, equipped with a Microsoft Kinect camera for user tracking and gesture recognition. User performance and workload were measured in a series of trials with two groups of 6 participants, who used the two robot settings in different orders for counterbalancing. The experimental results showed that, for the same task, users required less time and produced shorter robot trajectories with the single-arm robot than with the dual-arm robot. The results also showed that users who performed the task with the single-arm robot first experienced considerably less workload when performing the task with the dual-arm robot, while achieving a higher task success rate in a shorter time.
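As a rough illustration of the segment-level learning idea in this framework (the abstract gives no implementation details, so every name below is hypothetical, and a generic reward-weighted average over candidate trajectories stands in for whatever reinforcement-learning update the authors actually use):

```python
import numpy as np

def splice_segment(trajectory, start, end, user_segment):
    """Replace trajectory[start:end] with a user-demonstrated segment,
    resampled so the overall waypoint count is preserved.
    trajectory: (T, n_dof) array; user_segment: (m, n_dof) array."""
    t_old = np.linspace(0.0, 1.0, len(user_segment))
    t_new = np.linspace(0.0, 1.0, end - start)
    resampled = np.stack(
        [np.interp(t_new, t_old, user_segment[:, d])
         for d in range(user_segment.shape[1])], axis=1)
    modified = trajectory.copy()
    modified[start:end] = resampled
    return modified

def reward_weighted_update(candidates, rewards):
    """One policy-improvement step in trajectory space: average the
    candidate trajectories, weighting each by its non-negative reward."""
    w = np.asarray(rewards, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, np.stack(candidates), axes=1)
```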

    Adaptive physical human-robot interaction (PHRI) with a robotic nursing assistant.

    Recently, more and more robots are being investigated for future applications in healthcare. In nursing assistance, for instance, seamless Human-Robot Interaction (HRI) is very important for sharing workspaces and workloads between medical staff, patients, and robots. In this thesis we introduce a novel robot, the Adaptive Robot Nursing Assistant (ARNA), and its underlying components. ARNA has been designed specifically to assist nurses with day-to-day tasks such as walking patients, pick-and-place item retrieval, and routine patient health monitoring. Adaptive HRI in nursing applications creates a positive user experience and increases nurse productivity and task completion rates, as reported by experimentation with human subjects. ARNA has been designed to include interface devices such as tablets, force sensors, pressure-sensitive robot skins, LIDAR, and an RGBD camera. These interfaces are combined with adaptive controllers and estimators within a proposed framework that contains multiple innovations. A research study was conducted on methods of deploying an ideal Human-Machine Interface (HMI), in this case a tablet-based interface. The initial study points to the fact that a traded-control level of autonomy is ideal for teleoperation of ARNA by a patient. The proposed method of using the HMI devices makes the performance of the robot similar for both skilled and unskilled workers. A neuro-adaptive controller (NAC), which contains several neural networks to estimate and compensate for system non-linearities, was implemented on the ARNA robot. By linearizing the system, a cross-over usability condition is met through which humans find it more intuitive to learn to use the robot in any location of its workspace. A novel Base-Sensor Assisted Physical Interaction (BAPI) controller is introduced in this thesis, which utilizes a force-torque sensor at the base of the ARNA robot manipulator to detect full-body collisions and make interaction safer. Finally, a human-intent estimator (HIE) is proposed to estimate human intent while the robot and user are physically collaborating during certain tasks such as adaptive walking. The NAC with the HIE module was validated on a PR2 robot through user studies. Its implementation on the ARNA robot platform can be easily accomplished, as the controller is model-free and can learn robot dynamics online. A new framework, Directive Observer and Lead Assistant (DOLA), is proposed for ARNA, which enables the user to interact with the robot in two modes: physically, by direct push-guiding, and remotely, through a tablet interface. In both cases, the human is "observed" by the robot, then guided and/or advised during the interaction. If the user has trouble completing the given tasks, the robot adapts its repertoire to lead the user toward completing goals. The proposed framework incorporates interface devices as well as adaptive control systems in order to facilitate a higher-performance interaction between the user and the robot than was previously possible. The ARNA robot was deployed and tested in a hospital environment at the School of Nursing of the University of Louisville. User-experience tests were conducted with the help of healthcare professionals, and several metrics, including completion time, completion rate, and level of user satisfaction, were collected to shed light on the performance of the various components of the proposed framework. The results indicate an overall positive response towards the use of such an assistive robot in the healthcare environment.
The analysis of the gathered data is included in this document. To summarize, this research study makes the following contributions: (i) conducting user experience studies with the ARNA robot in patient-sitter and walker scenarios to evaluate both physical and non-physical human-machine interfaces; (ii) evaluating and validating the Human Intent Estimator (HIE) and Neuro-Adaptive Controller (NAC); (iii) proposing the novel Base-Sensor Assisted Physical Interaction (BAPI) controller; (iv) building simulation models for packaged tactile sensors and validating the models with experimental data; and (v) describing the Directive Observer and Lead Assistant (DOLA) framework for ARNA using adaptive interfaces.
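The thesis text above does not include code; the following is a minimal sketch of the weight-tuning idea behind a neuro-adaptive controller of the kind described, assuming an RBF network, a filtered tracking error, and a sigma-modification leakage term for robustness (all gains and names are illustrative, not ARNA's actual controller):

```python
import numpy as np

class NeuroAdaptiveController:
    """Sketch of a NAC: an RBF network estimates unknown dynamics online;
    its output cancels non-linearities while a PD-like term drives the
    tracking error to zero."""

    def __init__(self, n_joints, n_basis, gain, learning_rate, sigma=0.01):
        self.centers = np.random.uniform(-1, 1, (n_basis, 2 * n_joints))
        self.W = np.zeros((n_basis, n_joints))  # output weights, tuned online
        self.Kv = gain * np.eye(n_joints)
        self.gamma = learning_rate
        self.sigma = sigma                      # leakage term, prevents drift

    def features(self, q, dq):
        x = np.concatenate([q, dq])
        d2 = ((self.centers - x) ** 2).sum(axis=1)
        return np.exp(-d2)                      # RBF activations

    def torque(self, q, dq, q_des, dq_des, dt):
        r = (dq_des - dq) + 1.0 * (q_des - q)   # filtered tracking error
        phi = self.features(q, dq)
        tau = self.W.T @ phi + self.Kv @ r      # learned feedforward + feedback
        # online weight update law with sigma-modification
        self.W += dt * (self.gamma * np.outer(phi, r) - self.sigma * self.W)
        return tau
```

In the standard Lyapunov analysis of such controllers, the update law is chosen so that a quadratic function of the tracking and weight errors decreases along trajectories, which is the source of the stability guarantees this class of controller offers.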

    A Hierarchical Architecture for Flexible Human-Robot Collaboration

    This thesis is devoted to the design of a software architecture for Human-Robot Collaboration (HRC), to enhance robots' abilities to work alongside humans. We propose FlexHRC, a hierarchical and flexible human-robot cooperation architecture specifically designed to provide collaborative robots with an extended degree of autonomy when supporting human operators in tasks with high variability. Along with FlexHRC, we introduce novel techniques appropriate for three interleaved levels, namely perception, representation, and action, each aimed at addressing specific traits of human-robot cooperation tasks. The Industry 4.0 paradigm emphasizes the crucial benefits that collaborative robots could bring to the whole production process. In this context, a yet unreached enabling technology is the design of robots able to deal at all levels with humans' intrinsic variability, which is not only a necessary element for a comfortable working experience for humans but also a precious capability for efficiently dealing with unexpected events. Moreover, flexible assembly of semi-finished products is one of the expected features of next-generation shop-floor lines. Currently, such flexibility is placed on the shoulders of human operators, who are responsible for product variability and are therefore subject to potentially high stress levels and cognitive load when dealing with complex operations. At the same time, operations on the shop floor are still very structured and well-defined. Collaborative robots have been designed to allow a transition of this burden from human operators to robots that are flexible enough to support them in high-variability tasks as they unfold. As mentioned before, the FlexHRC architecture encompasses three levels: perception, action, and representation. The perception level relies on wearable sensors for human action recognition and on point cloud data for perceiving the objects in the scene. The action level embraces four components: a robot execution manager for decoupling action planning from robot motion planning and mapping symbolic actions to the robot controller command interface, a task-priority framework to control the robot, a differential equation solver to simulate and evaluate the robot behaviour on the fly, and a sampling-based method for robot path planning. The representation level depends on AND/OR graphs for representing and reasoning upon human-robot cooperation models online, a task manager to plan, adapt, and make decisions about the robot's behaviour, and a knowledge base that stores cooperation and workspace information. We evaluated the FlexHRC functionalities against the desired objectives of the application. This evaluation is accompanied by several experiments, namely a collaborative screwing task, coordinated transportation of objects in a cluttered environment, a collaborative table assembly task, and object positioning tasks.
The main contributions of this work are: (i) the design and implementation of FlexHRC, which provides the functional requirements necessary for shop-floor assembly applications, such as task- and team-level flexibility, scalability, adaptability, and safety, to name just a few; (ii) the development of the task representation, which integrates a hierarchical AND/OR graph whose online behaviour is formally specified using First Order Logic; (iii) an in-the-loop simulation-based decision-making process for the operations of collaborative robots coping with the variability of human operator actions; (iv) the robot's adaptation to the human's on-the-fly decisions and actions via human action recognition; and (v) robot behaviour that is predictable to the human user thanks to the task-priority-based control framework, the introduced path planner, and the natural and intuitive communication of the robot with the human.
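Since the abstract centers on the hierarchical AND/OR graph, a compact sketch of that representation may help. This is a generic AND/OR graph with bottom-up feasibility propagation, not the FlexHRC source; class and method names are invented:

```python
class AndOrNode:
    """A task state. Each hyperarc is an AND-set of child nodes; the node
    is solved when every child of at least one hyperarc is solved
    (OR over hyperarcs, AND within a hyperarc)."""

    def __init__(self, name):
        self.name = name
        self.hyperarcs = []   # list of lists of AndOrNode
        self.solved = False

    def add_hyperarc(self, children):
        self.hyperarcs.append(children)

    def update(self):
        """Propagate solved status bottom-up through the graph."""
        if not self.hyperarcs:  # leaf: status set by action execution
            return self.solved
        self.solved = any(all(child.update() for child in arc)
                          for arc in self.hyperarcs)
        return self.solved

# Illustrative use, loosely modeled on the table assembly experiment:
table = AndOrNode("table_assembled")
legs, top = AndOrNode("legs_attached"), AndOrNode("top_placed")
table.add_hyperarc([legs, top])    # AND: both subtasks are required
legs.solved = top.solved = True    # marked as human/robot actions finish
assert table.update()
```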

    Fostering Resilient Aging with a Self-efficacy and Independence Enabling Robot (FRASIER)

    With the percentage of the elderly population rapidly increasing as the Baby Boomer generation reaches retirement, the demand for assistive care will soon outstrip the supply of available caregivers. Additionally, as individuals age, the number of age-related limitations preventing them from completing everyday tasks independently tends to increase. Through FRASIER (Fostering Resilient Aging with a Self-efficacy and Independence Enabling Robot), the project team developed an assistive robot with the goal of providing a solution to this challenge.

    A Robust Wheel Interface With A Novel Adaptive Controller For Computer/Robot-Assisted Motivating Rehabilitation

    TheraDrive is a low-cost robotic system for post-stroke upper-extremity rehabilitation. This system uses off-the-shelf computer gaming wheels with force feedback to help reduce motor impairment and improve function in the arms of stroke survivors. Preliminary results show that the TheraDrive system lacks a robust mechanical linkage that can withstand the forces exerted by patients, lacks a patient-specific adaptive controller to deliver personalized therapy, and is not capable of delivering effective therapy to severely low-functioning patients. A new low-cost, high-force haptic robot with a single degree of freedom has been developed to address these concerns. The resulting TheraDrive consists of an actuated hand crank with a compliant transmission. Actuation is provided by a brushed DC motor, geared to output up to 50 lbf (223 N) at the end effector. To enable safe human-machine interaction, a special compliant element was developed that also functions as a failsafe torque limiter. A load cell is used to determine the human-machine interaction forces for use by the robot's impedance controller. The impedance controller renders a virtual spring that attracts or repels the end effector from a moving target that the human must track during therapy exercises. As exercises are performed, an adaptive controller monitors patient performance and adjusts the spring stiffness to ensure that exercises are difficult but doable, which is important for maintaining patient motivation. Experiments with a computer model of a human and the robot show the adaptive controller's ability to maintain the difficulty of exercises after a period of initial calibration. Seven human subjects (3 normal, 4 stroke-impaired) tested this system alongside the original TheraDrive system in order to compare the two. The data showed that the new system produced a larger change in normalized trajectory-tracking error when assistance/resistance was added to exercises, compared with the original TheraDrive. The data also showed that adaptive control brought subject performance closer to the desired level. Motivation surveys showed no significant difference in subject motivation between the two systems. When asked to choose a preferred system, the stroke subjects unanimously chose the new robot.
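A minimal sketch of the virtual-spring impedance rendering and performance-based stiffness adaptation described above (parameter values and function names are illustrative, not the TheraDrive implementation). A positive stiffness attracts the end effector to the target (assistance); a negative stiffness repels it (resistance):

```python
import numpy as np

def virtual_spring_force(x, dx, x_target, stiffness, damping):
    """Impedance rendering: a spring-damper acts between the end
    effector position x (velocity dx) and the moving target."""
    return stiffness * (x_target - x) - damping * dx

def adapt_stiffness(stiffness, tracking_error, target_error,
                    rate=0.1, k_min=-50.0, k_max=200.0):
    """Performance-based adaptation: raise assistance when tracking
    error exceeds the desired band, lower it when exercises become too
    easy, keeping the task 'difficult but doable'."""
    stiffness += rate * (tracking_error - target_error)
    return float(np.clip(stiffness, k_min, k_max))
```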

    Collaborative human-machine interfaces for mobile manipulators.

    The use of mobile manipulators in service industries, both as agents in physical Human-Robot Interaction (pHRI) and for social interactions, has been on the increase in recent times, driven by necessities such as compensating for workforce shortages and enabling safer and more efficient operations. Collaborative robots, or co-bots, are robots developed for use with human interaction through direct contact or close proximity in a shared space with the human users. The work presented in this dissertation focuses on the design, implementation, and analysis of components for the next-generation collaborative human-machine interfaces (CHMIs) needed for mobile-manipulator co-bots that can be used in various service industries. The particular components of these CHMIs considered in this dissertation are: robot control, with a Neuroadaptive Controller (NAC)-based admittance control strategy for pHRI applications with a co-bot; robot state estimation, with a novel methodology and placement strategy for using arrays of IMUs that can be embedded in robot skin for pose estimation in complex robot mechanisms; and user perception of co-bot CHMIs, with an evaluation of human perceptions of the usefulness and ease of use of a mobile-manipulator co-bot in a nursing-assistant application scenario. To facilitate advanced control for the Adaptive Robotic Nursing Assistant (ARNA) mobile-manipulator co-bot that was designed and developed in our lab, we describe and evaluate an admittance control strategy that features a Neuroadaptive Controller (NAC). The NAC has been specifically formulated for pHRI applications such as patient walking. The controller continuously tunes the weights of a neural network to cancel robot non-linearities, including drive-train backlash, kinematic or dynamic coupling, variable patient pushing effort, and sloped surfaces with unknown inclines. The advantages of our control strategy are Lyapunov stability guarantees during interaction, less need for parameter tuning, and better performance across a variety of users and operating conditions. We conduct simulations and experiments with 10 users to confirm that the NAC outperforms a classic Proportional-Derivative (PD) joint controller in terms of the resulting interaction jerk, user effort, and trajectory-tracking error during patient walking. To tackle the complex mechanisms of these next-generation robots, in which the use of encoders or other classic pose-measuring devices is not feasible, we present a study of the effects of design parameters on methods that use data from Inertial Measurement Units (IMUs) in robot skins to provide robot state estimates. These parameters include the number of sensors, their placement on the robot, and their noise properties, and we assess their effects on the quality of robot pose estimation and its signal-to-noise ratio (SNR). The results from that study facilitate the creation of robot skins, and to enable their use in complex robots, we propose a novel pose estimation method, the Generalized Common Mode Rejection (GCMR) algorithm, for the estimation of joint angles in robot chains containing composite joints. The placement study and GCMR are demonstrated using both Gazebo simulation and experiments with a 3-DoF robotic arm containing 2 non-zero link lengths, 1 revolute joint, and a 2-DoF composite joint. In addition to yielding insights into the predicted usage of co-bots, the design of the control and sensing mechanisms in their CHMIs benefits from evaluating the perceptions of the eventual users of these robots.
As co-bots are increasingly developed and used, there is a need for studies of these user perceptions using existing models that have been used to predict the usage of comparable technology. To this end, we use the Technology Acceptance Model (TAM) to evaluate the CHMI of the ARNA robot in a nursing-assistant scenario via analysis of quantitative and questionnaire data collected during experiments with eventual users. The results of the work conducted in this dissertation constitute insightful contributions to the realization of the control and sensing systems that are part of CHMIs for next-generation co-bots.
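A minimal sketch of the admittance idea underlying the patient-walking controller described above, in which the measured interaction force is filtered through a virtual mass-damper to produce a velocity command (the planar simplification and all names are assumptions; the dissertation's NAC additionally learns to cancel the robot's non-linearities online):

```python
import numpy as np

class AdmittanceController:
    """Admittance sketch: the measured user force f drives a virtual
    mass-damper, M*dv/dt + D*v = f, whose state v is sent to the base
    as a velocity command, so the robot 'gives way' in proportion to
    how hard the user pushes."""

    def __init__(self, virtual_mass, virtual_damping):
        self.M = virtual_mass
        self.D = virtual_damping
        self.v = np.zeros(2)          # planar base velocity (vx, vy)

    def step(self, f_measured, dt):
        dv = (f_measured - self.D * self.v) / self.M
        self.v = self.v + dv * dt
        return self.v                 # commanded base velocity
```

Larger virtual mass and damping make the base feel heavier and slower to the user; tuning this trade-off per user is exactly what the adaptive scheme is meant to avoid.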

    Design and Development of a Dual-Arm Robot Clothing-Assistance System Using Imitation Learning

    The recent demographic trend across developed nations shows a dramatic increase in the aging population and falling fertility rates. With the aging population, the number of elderly people who need support in their Activities of Daily Living (ADL), such as dressing, is growing. Reliance on caregivers is universal for the dressing task due to the unavailability of any effective assistive technology. Unfortunately, many nations across the globe are suffering from a severe shortage of caregivers. Hence, the demand for service robots to assist with the dressing task is increasing rapidly. Robotic Clothing Assistance is a challenging task. The robot has to deal with two complex tasks simultaneously: (a) manipulation of non-rigid and highly flexible cloth, and (b) safe human-robot interaction while assisting a human whose posture may vary during the task. Humans, on the other hand, can deal with these tasks rather easily. In this thesis, a framework for Robotic Clothing Assistance by imitation learning from a human demonstration to a compliant dual-arm robot is proposed. In this framework, the dressing task is divided into three phases: (a) the reaching phase, (b) the arm dressing phase, and (c) the body dressing phase. The arm dressing phase is treated as a global trajectory modification and implemented by applying Dynamic Movement Primitives (DMP), as sketched below. The body dressing phase is represented as a local trajectory modification and executed by employing the Bayesian Gaussian Process Latent Variable Model (BGPLVM). It is demonstrated that the proposed framework, developed towards assisting the elderly, generalizes to various people and successfully performs a sleeveless T-shirt dressing task. Furthermore, this thesis discusses various limitations of the framework and improvements to it. These improvements include (a) evaluation of Robotic Clothing Assistance, (b) automated wheelchair movement, and (c) incremental learning to perform Robotic Clothing Assistance. Evaluation is necessary for our framework: to make it accessible in care facilities, systematic assessment of its performance and of the devices' effects on care receivers and caregivers is required. Therefore, a robotic simulator that mimics human postures is used as a subject to evaluate the dressing task. The proposed framework involves manually coordinated movement of a wheeled chair, which is difficult for an elderly person to perform, as it requires pushing the chair by themselves. To this end, an approach for wheelchair-robot collaboration using an electric wheelchair is presented. Finally, to accommodate different human body dimensions, Robotic Clothing Assistance is formulated as an incremental imitation learning problem. The proposed formulation enables learning and adjusting the behavior incrementally whenever a new demonstration is performed. When found inappropriate, the planned trajectory is modified through physical Human-Robot Interaction (HRI) during execution. This research work has been exhibited to the public at various events, such as the International Robot Exhibition (iREX) 2017 in Tokyo (Japan), the West Japan General Exhibition Center Annex 2018 in Kokura (Japan), and iREX 2019 in Tokyo (Japan). Doctoral dissertation, Kyushu Institute of Technology (degree conferred September 25, 2020).
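A minimal one-DoF sketch of the Dynamic Movement Primitive used for the arm dressing phase (these are the standard DMP equations; the constants and class structure are illustrative, not the thesis code):

```python
import numpy as np

class DiscreteDMP:
    """Minimal one-DoF discrete DMP: a stable spring-damper pulls the
    state toward the goal while a learned forcing term, gated by a
    decaying canonical phase x, reshapes the transient."""

    def __init__(self, n_basis=20, alpha=25.0, beta=6.25, alpha_x=8.0, tau=1.0):
        self.alpha, self.beta, self.alpha_x, self.tau = alpha, beta, alpha_x, tau
        self.c = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))  # centers
        self.h = 1.0 / np.gradient(self.c) ** 2                     # widths
        self.w = np.zeros(n_basis)  # would be fit to a human demonstration,
                                    # e.g. by locally weighted regression

    def forcing(self, x, g, y0):
        psi = np.exp(-self.h * (x - self.c) ** 2)
        return (psi @ self.w) / (psi.sum() + 1e-10) * x * (g - y0)

    def rollout(self, y0, g, dt=0.01, T=1.0):
        y, dy, x, path = y0, 0.0, 1.0, []
        for _ in range(int(T / dt)):
            ddy = (self.alpha * (self.beta * (g - y) - self.tau * dy)
                   + self.forcing(x, g, y0)) / self.tau ** 2
            dy += ddy * dt
            y += dy * dt
            x += -self.alpha_x * x * dt / self.tau  # canonical phase decay
            path.append(y)
        return np.asarray(path)
```

Changing the goal g re-targets the whole motion while preserving its learned shape, which is what makes DMPs a natural fit for the global trajectory modification role the abstract assigns them.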

    Becoming Human with Humanoid

    Nowadays, our expectations of robots have increased significantly. The robot, which initially only did simple jobs, is now expected to be smarter and more dynamic. People want a robot that resembles a human (a humanoid) and has the emotional intelligence to perform action-reaction interactions. This book consists of two sections. The first section focuses on emotional intelligence, while the second section discusses the control of robots. The contents of the book present the outcomes of research conducted by scholars in the robotics field to accommodate the needs of society and industry.

    Perception and manipulation for robot-assisted dressing

    Assistive robots have the potential to provide tremendous support for disabled and elderly people in their daily dressing activities. This thesis presents a series of perception and manipulation algorithms for robot-assisted dressing, including: garment perception and grasping prior to robot-assisted dressing, real-time user posture tracking during robot-assisted dressing for (simulated) impaired users with limited upper-body movement capability, and finally a pipeline for robot-assisted dressing for (simulated) paralyzed users who have lost the ability to move their limbs. First, the thesis explores learning suitable grasping points on a garment prior to robot-assisted dressing. Robots should be endowed with the ability to autonomously recognize the garment state, grasp and hand the garment to the user, and subsequently complete the dressing process. This is addressed by introducing a supervised deep neural network to locate grasping points. To reduce the amount of real data required, which is costly to collect, the power of simulation is leveraged to produce large amounts of labeled data. Unexpected user movements should be taken into account when planning robot dressing trajectories. Tracking such user movements with vision sensors is challenging due to severe visual occlusions created by the robot and the clothes. A probabilistic real-time tracking method is proposed using Bayesian networks in latent spaces, which fuses multi-modal sensor information. The latent spaces are created before dressing by modeling the user's movements, taking the user's movement limitations and preferences into account. The tracking method is then combined with hierarchical multi-task control to minimize the force between the user and the robot. The proposed method enables the Baxter robot to provide personalized dressing assistance for users with (simulated) upper-body impairments. Finally, a pipeline for dressing (simulated) paralyzed patients using a mobile dual-armed robot is presented. The robot grasps a hospital gown naturally hung on a rail and moves around the bed to finish the upper-body dressing of a hospital training manikin. To further improve simulations for garment grasping, this thesis proposes to update the simulated garment with more realistic physical property values. This is achieved by measuring physical similarity in the latent space using a contrastive loss, which maps physically similar examples to nearby points, as sketched below.
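A minimal sketch of the pairwise contrastive loss referenced above (the classic margin-based form; variable names are assumptions, and the thesis's network architecture is not reproduced here). It pulls latent codes of physically similar garment examples together and pushes dissimilar ones at least a margin apart:

```python
import numpy as np

def contrastive_loss(z_a, z_b, similar, margin=1.0):
    """z_a, z_b: latent embeddings of two garment examples.
    similar=True  -> penalize their distance (pull together);
    similar=False -> penalize closeness within the margin (push apart)."""
    d = float(np.linalg.norm(z_a - z_b))
    if similar:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, margin - d) ** 2
```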