
    Admittance-based controller design for physical human-robot interaction in the constrained task space

    In this article, an admittance-based controller for physical human-robot interaction (pHRI) is presented to perform coordinated operation in a constrained task space. An admittance model and a soft saturation function are employed to generate a differentiable reference trajectory, ensuring that the end-effector motion of the manipulator complies with the human operation and avoids collisions with the surroundings. An adaptive neural network (NN) controller involving an integral barrier Lyapunov function (IBLF) is then designed to address the tracking problem while guaranteeing that the end-effector of the manipulator remains within the constrained task space. A learning method based on the radial basis function NN (RBFNN) is incorporated in the controller design to compensate for dynamic uncertainties and improve tracking performance, and the IBLF method prevents violations of the constrained task space. We prove that all states of the closed-loop system are semiglobally uniformly ultimately bounded (SGUUB) using Lyapunov stability principles. Finally, the effectiveness of the proposed algorithm is verified on a Baxter robot experimental platform.

    Note to Practitioners: This work is motivated by the neglect of safety in existing controller designs for physical human-robot interaction (pHRI), which arises in industry and services such as assembly and medical care, where controllers that rigorously handle constraints are strongly needed. Therefore, in this article, we propose a novel admittance-based human-robot interaction controller with the following functionalities: 1) keeping the reference trajectory within the constrained task space: a differentiable reference trajectory is shaped by the desired admittance model and a soft saturation function; 2) handling uncertainties in the robot dynamics: a learning approach based on a radial basis function neural network (RBFNN) is incorporated in the controller design; and 3) keeping the end-effector of the manipulator within the constrained task space: unlike other barrier Lyapunov functions (BLFs), the integral BLF (IBLF) is proposed to constrain the system output directly rather than the tracking error, which may be more convenient for controller designers. The controller can potentially be applied in many areas. First, it can be used in rehabilitation robots to avoid injuring the patient by limiting the motion. Second, it can keep the end-effector of an industrial manipulator within a prescribed task region; in some industrial tasks, dangerous or fragile tools are mounted on the end-effector, and humans may be hurt and the robot damaged if the end-effector leaves the prescribed task region. Third, it may offer a new idea for controller design to avoid collisions in pHRI when the prescribed end-effector trajectory would lead to a collision.
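    The following is a minimal, illustrative 1-DoF sketch of how an admittance model combined with a soft saturation function can generate a smooth, bounded reference trajectory, in the spirit of the approach described above. The mass-damper-spring parameters, the tanh-shaped saturation, and the forcing profile are assumptions for illustration, not the functions used in the paper.

```python
# Minimal 1-DoF sketch: an admittance model plus a soft saturation producing
# a smooth, bounded reference trajectory. All parameter values and the
# tanh-shaped saturation are illustrative assumptions.
import numpy as np

M, D, K = 1.0, 8.0, 20.0        # virtual inertia, damping, stiffness
x_lim = 0.30                     # half-width of the constrained task space [m]
dt = 0.001                       # integration step [s]

def soft_saturation(x, limit):
    """Differentiable squashing that keeps the reference inside (-limit, limit)."""
    return limit * np.tanh(x / limit)

def admittance_reference(f_ext, T=2.0):
    """Integrate M*x'' + D*x' + K*x = f_h and squash the result."""
    x, xd = 0.0, 0.0
    traj = []
    for k in range(int(T / dt)):
        f_h = f_ext(k * dt)                      # human interaction force
        xdd = (f_h - D * xd - K * x) / M
        xd += xdd * dt
        x += xd * dt
        traj.append(soft_saturation(x, x_lim))   # bounded reference sample
    return np.array(traj)

# Example: a 10 N push applied for the first half second.
ref = admittance_reference(lambda t: 10.0 if t < 0.5 else 0.0)
print(ref.max())   # stays strictly below x_lim
```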

    Neural-learning-based force sensorless admittance control for robots with input deadzone

    This paper presents a neural-network-based admittance control scheme for robotic manipulators interacting with an unknown environment in the presence of actuator deadzone, without requiring force sensing. Compliant behaviour of the manipulator in response to external torques from the unknown environment is achieved by admittance control. Inspired by the broad learning system (BLS), a flattened neural network structure using radial basis functions (RBFs) with an incremental learning algorithm is proposed to estimate the external torque, which avoids a full retraining process when the system is insufficiently modelled. To deal with uncertainties in the robot system, an adaptive neural controller with a dynamic learning framework is developed to ensure tracking performance. Experiments on the Baxter robot have been carried out to verify the effectiveness of the proposed method.
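    As a rough illustration of the estimation idea described above, the sketch below fits an RBF network to map joint states to an external torque and then appends extra RBF nodes while re-solving only the output weights, so the existing features need not be retrained. The synthetic data, centre placement, and least-squares fit are assumptions for the example; they are not the paper's incremental learning rules.

```python
# Illustrative RBF torque estimator with a simple incremental-node step.
import numpy as np

rng = np.random.default_rng(0)

def rbf_features(X, centres, width=1.0):
    """Gaussian RBF activations for each input row."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

# Fake training data: joint position/velocity -> external torque.
X = rng.uniform(-1, 1, size=(200, 2))
tau_ext = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.normal(size=200)

centres = rng.uniform(-1, 1, size=(20, 2))
Phi = rbf_features(X, centres)
W, *_ = np.linalg.lstsq(Phi, tau_ext, rcond=None)   # output weights

# Incremental step: append extra RBF nodes and re-solve only the output layer
# instead of retraining the whole network from scratch.
extra = rng.uniform(-1, 1, size=(10, 2))
Phi_aug = np.hstack([Phi, rbf_features(X, extra)])
W_aug, *_ = np.linalg.lstsq(Phi_aug, tau_ext, rcond=None)

print(np.abs(Phi_aug @ W_aug - tau_ext).mean())     # mean estimation error
```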

    Biologically-inspired motion modeling and neural control for robot learning from demonstrations


    Trust-Based Control of Robotic Manipulators in Collaborative Assembly in Manufacturing

    Human-robot interaction (HRI) is widely addressed in the field of automation and manufacturing. Most of the HRI literature in manufacturing has explored physical human-robot interaction (pHRI) and focused on ensuring safety and optimizing effort sharing among a team of humans and robots. The recent emergence of safe, lightweight, and human-friendly robots has opened a new realm for human-robot collaboration (HRC) in collaborative manufacturing. For such robots with new HRI functionalities to interact closely and effectively with a human coworker, new human-centered controllers that integrate both physical and social interaction are needed. Social human-robot interaction (sHRI) has been demonstrated in robots with affective abilities in education, social services, health care, and entertainment. Nonetheless, sHRI should not be limited to those areas. In particular, we focus on human trust in the robot as a basis of social interaction. Human trust in the robot and the robot's anthropomorphic features have a high impact on sHRI. Trust is one of the key factors in sHRI and a prerequisite for effective HRC; it characterizes a human's reliance on and willingness to use robots. Factors within the robotic system (e.g., performance, reliability, or attributes), the task, and the surrounding environment can all affect trust dynamically. Over-reliance or under-reliance might occur due to improper trust, resulting in poor team collaboration and hence a higher task load and lower overall task performance. The goal of this dissertation is to develop intelligent control algorithms for manipulator robots that integrate both physical and social HRI factors in collaborative manufacturing. First, a model of the evolution of human trust in a collaborative robot is identified and verified through a series of human-in-the-loop experiments. This model serves as a computational trust model that estimates an objective criterion for the evolution of human trust in the robot rather than an individual's actual level of trust. Second, an HRI-based framework is developed for controlling the speed of a robot performing pick-and-place tasks. The impact of considering different levels of interaction in the robot controller on overall efficiency and on HRI criteria such as perceived human workload, trust, and robot usability is studied using a series of human-in-the-loop experiments. Third, an HRI-based framework is developed for planning and controlling the robot motion in hand-over tasks to the human; again, a series of human-in-the-loop studies is conducted to evaluate the impact of the framework on overall efficiency and on HRI criteria such as human workload, trust, and robot usability. Finally, another framework is proposed for the cooperative manipulation of a common object by a human-robot team. This framework proposes a trust-based role allocation strategy for adjusting the proactive behavior of the robot performing a cooperative manipulation task in HRC scenarios. For all of these frameworks, the experimental results show that integrating HRI in the robot controller leads to a lower human workload while maintaining a threshold level of human trust in the robot and without degrading robot usability and efficiency.
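    As a purely illustrative sketch of how a computational trust model and a trust-based role allocation might be wired together, the snippet below uses a generic discrete-time linear trust update and a fixed trust threshold. Both the update law and the numerical values are assumptions; they do not reproduce the model identified in the dissertation.

```python
# Generic trust-dynamics update and trust-based role allocation (illustrative).
import numpy as np

def update_trust(trust, performance, faults, a=0.9, b=0.15, c=0.3):
    """Trust grows with observed robot performance and decays with faults."""
    trust = a * trust + b * performance - c * faults
    return float(np.clip(trust, 0.0, 1.0))

def allocate_role(trust, lead_threshold=0.7):
    """Give the robot the proactive (leader) role only when trust is high."""
    return "robot_lead" if trust >= lead_threshold else "human_lead"

trust = 0.5
for perf, fault in [(0.8, 0.0), (0.9, 0.0), (0.4, 1.0), (0.9, 0.0)]:
    trust = update_trust(trust, perf, fault)
    print(round(trust, 3), allocate_role(trust))
```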

    Adaptive physical human-robot interaction (PHRI) with a robotic nursing assistant.

    Recently, more and more robots are being investigated for future applications in healthcare. For instance, in nursing assistance, seamless human-robot interaction (HRI) is very important for sharing workspaces and workloads between medical staff, patients, and robots. In this thesis we introduce a novel robot, the Adaptive Robot Nursing Assistant (ARNA), and its underlying components. ARNA has been designed specifically to assist nurses with day-to-day tasks such as walking patients, pick-and-place item retrieval, and routine patient health monitoring. Adaptive HRI in nursing applications creates a positive user experience and increases nurse productivity and task completion rates, as reported by experimentation with human subjects. ARNA has been designed to include interface devices such as tablets, force sensors, pressure-sensitive robot skins, LIDAR, and an RGB-D camera. These interfaces are combined with adaptive controllers and estimators within a proposed framework that contains multiple innovations. A research study was conducted on methods of deploying an ideal Human-Machine Interface (HMI), in this case a tablet-based interface. The initial study indicates that a traded-control level of autonomy is ideal for teleoperation of ARNA by a patient. The proposed method of using the HMI devices makes the performance of the robot similar for both skilled and unskilled workers. A neuro-adaptive controller (NAC), which contains several neural networks to estimate and compensate for system nonlinearities, was implemented on the ARNA robot. By linearizing the system, a cross-over usability condition is met through which humans find it more intuitive to learn to use the robot in any location of its workspace. A novel Base-Sensor Assisted Physical Interaction (BAPI) controller is introduced in this thesis; it utilizes a force-torque sensor at the base of the ARNA manipulator to detect full-body collisions and make interaction safer. Finally, a human-intent estimator (HIE) is proposed to estimate human intent while the robot and user are physically collaborating during certain tasks such as adaptive walking. The NAC with the HIE module was validated on a PR2 robot through user studies; its implementation on the ARNA robot platform can be easily accomplished because the controller is model-free and learns the robot dynamics online. A new framework, Directive Observer and Lead Assistant (DOLA), is proposed for ARNA, which enables the user to interact with the robot in two modes: physically, by direct push-guiding, and remotely, through a tablet interface. In both cases, the human is "observed" by the robot and then guided and/or advised during the interaction. If the user has trouble completing the given tasks, the robot adapts its repertoire to lead the user toward completing the goals. The proposed framework incorporates interface devices as well as adaptive control systems in order to facilitate a higher-performance interaction between the user and the robot than was previously possible. The ARNA robot was deployed and tested in a hospital environment at the School of Nursing of the University of Louisville. The user-experience tests were conducted with the help of healthcare professionals, and several metrics, including completion time, completion rate, and level of user satisfaction, were collected to shed light on the performance of the various components of the proposed framework. The results indicate an overall positive response towards the use of such an assistive robot in the healthcare environment. The analysis of the gathered data is included in this document. To summarize, this research study makes the following contributions: 1) conducting user-experience studies with the ARNA robot in patient-sitter and walker scenarios to evaluate both physical and non-physical human-machine interfaces; 2) evaluating and validating the human-intent estimator (HIE) and the neuro-adaptive controller (NAC); 3) proposing the novel Base-Sensor Assisted Physical Interaction (BAPI) controller; 4) building simulation models for packaged tactile sensors and validating the models with experimental data; and 5) describing the Directive Observer and Lead Assistant (DOLA) framework for ARNA using adaptive interfaces.
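    To illustrate the residual-threshold idea that a base force-torque sensor makes possible, the sketch below compares a measured base wrench with the wrench expected from a nominal model and flags a collision when the difference is large. The expected-wrench values and the threshold are placeholders for the example and are not taken from the ARNA implementation.

```python
# Hedged sketch of a base-sensor collision check: flag a collision when the
# residual between measured and model-predicted base wrench is too large.
import numpy as np

WRENCH_THRESHOLD = 15.0   # residual norm that triggers a safety reaction (placeholder)

def collision_detected(measured_wrench, expected_wrench,
                       threshold=WRENCH_THRESHOLD):
    """Return True when the base force/torque residual is abnormally large."""
    residual = np.asarray(measured_wrench) - np.asarray(expected_wrench)
    return np.linalg.norm(residual) > threshold

# Example: expected wrench from a nominal model vs. a measured spike.
expected = [2.0, 0.5, 40.0, 0.1, 0.2, 0.0]     # Fx, Fy, Fz, Mx, My, Mz
measured = [2.5, 0.4, 62.0, 0.1, 0.3, 0.0]
print(collision_detected(measured, expected))  # True -> stop or comply
```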

    Becoming Human with Humanoid

    Nowadays, our expectations of robots have increased significantly. Robots, which initially only performed simple jobs, are now expected to be smarter and more dynamic. People want a robot that resembles a human (a humanoid) and has the emotional intelligence to perform action-reaction interactions. This book consists of two sections. The first section focuses on emotional intelligence, while the second section discusses the control of robots. The contents of the book present the outcomes of research conducted by scholars in the robotics field to accommodate the needs of society and industry.

    Neural Control of Bimanual Robots With Guaranteed Global Stability and Motion Precision

    Robots with coordinated dual arms are able to perform more complicated tasks than a single manipulator could achieve. However, more rigorous motion precision is required to guarantee effective cooperation between the dual arms, especially when they grasp a common object; in this case, the internal forces applied on the object must be considered in addition to the external forces. Therefore, a prescribed tracking performance at both the transient and steady states is first specified, and then a controller is synthesized to rigorously guarantee the specified motion performance. In the presence of unknown dynamics of both the robot arms and the manipulated object, the neural network approximation technique is employed to compensate for uncertainties. In order to extend the semiglobal stability achieved by conventional neural control to global stability, a switching mechanism is integrated into the control design. The effectiveness of the proposed control design has been demonstrated through experiments carried out on the Baxter robot.
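    As a small, textbook-style illustration of the prescribed-performance idea mentioned above, the sketch below evaluates an exponentially shrinking performance funnel and the usual logarithmic error transformation that maps a funnel-respecting tracking error into an unconstrained variable. The funnel parameters and the example error signal are assumptions; they are not the specification used in the paper.

```python
# Prescribed-performance funnel and error transformation (illustrative values).
import numpy as np

def funnel(t, rho0=0.5, rho_inf=0.02, decay=2.0):
    """Prescribed performance bound rho(t) shrinking from rho0 to rho_inf."""
    return (rho0 - rho_inf) * np.exp(-decay * t) + rho_inf

def transformed_error(e, rho):
    """Map an error satisfying |e| < rho into an unconstrained variable."""
    z = np.clip(e / rho, -0.999, 0.999)
    return 0.5 * np.log((1 + z) / (1 - z))

t = np.linspace(0.0, 3.0, 7)
e = 0.3 * np.exp(-1.5 * t)                 # an example decaying tracking error
for ti, ei in zip(t, e):
    print(f"t={ti:.1f}  |e|={ei:.3f}  bound={funnel(ti):.3f}  "
          f"xi={transformed_error(ei, funnel(ti)):.3f}")
```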

    User Experience Enhanced Interface and Controller Design for Human-Robot Interaction

    Robotic technologies have recently been well developed in various fields, such as medical services, industrial manufacturing and aerospace. Despite this rapid development, how to deal effectively with uncertain environments during human-robot interaction remains unresolved. Current artificial intelligence (AI) technology does not enable robots to fulfil complex tasks without human guidance. Thus, teleoperation, meaning the remote control of a robot by a human operator, is indispensable in many scenarios and is an important and useful tool in research. This thesis focuses on the design of a user experience (UX) enhanced robot controller and of human-robot interaction interfaces that aim to give human operators an immersive perception of teleoperation. Several works have been carried out to achieve this goal. First, to control a telerobot smoothly, a customised variable-gain control method is proposed in which the stiffness of the telerobot varies with the muscle activation level extracted from signals collected by surface electromyography (sEMG) devices. Second, two main works are conducted to improve the user-friendliness of the interaction interfaces. One is that force feedback is incorporated into the framework, providing operators with haptic feedback for remotely manipulating target objects; given the high cost of force sensors, a haptic force estimation algorithm is proposed in this part of the work so that a force sensor is no longer needed. The other is the development of a visual servo control system, in which a stereo camera mounted on the head of a dual-arm robot offers operators real-time views of the working situation. In order to compensate for internal and external uncertainties and accurately track the stereo camera's view angles along planned trajectories, a deterministic learning technique is utilised, which enables reuse of the learnt knowledge before the current dynamics change and thus increases learning efficiency. Third, instead of sending commands to the telerobots by joysticks, keyboards or demonstrations, the telerobots in this thesis are controlled directly by the upper-limb motion of the human operator. An algorithm is designed that utilises motion signals from an inertial measurement unit (IMU) sensor to capture the human's upper-limb motion. The skeleton of the operator is detected by a Kinect V2 and then transformed and mapped into the joint positions of the controlled robot arm; in this way, the operator's upper-limb motion signals act as reference trajectories for the telerobots. A neural network (NN) based trajectory controller is also designed to track the generated reference trajectory. Fourth, to further enhance the operator's sense of immersion in teleoperation, virtual reality (VR) techniques are incorporated so that the operator can interact with and adjust the robots more easily and more accurately from the robot's perspective. Comparative experiments have been performed to demonstrate the effectiveness of the proposed design scheme, and tests with human subjects were also carried out to evaluate the interface design.
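    The sketch below illustrates one plausible form of the variable-gain idea described above: a normalised muscle-activation level is extracted from raw sEMG by rectification and smoothing, and the commanded stiffness is scaled linearly with it. The envelope extraction, the linear mapping, and the stiffness range are assumptions for the example, not the thesis's actual method.

```python
# sEMG-driven variable stiffness (illustrative envelope and gain mapping).
import numpy as np

K_MIN, K_MAX = 50.0, 400.0      # stiffness range [N/m], placeholder values

def muscle_activation(semg, window=50):
    """Crude activation estimate: rectify and smooth the raw sEMG signal."""
    rectified = np.abs(semg - semg.mean())
    kernel = np.ones(window) / window
    envelope = np.convolve(rectified, kernel, mode="same")
    return envelope / (envelope.max() + 1e-9)   # normalise to [0, 1]

def variable_stiffness(activation):
    """Map normalised activation to a commanded stiffness gain."""
    return K_MIN + (K_MAX - K_MIN) * activation

semg = np.random.default_rng(1).normal(size=2000) * np.linspace(0.2, 1.0, 2000)
a = muscle_activation(semg)
print(variable_stiffness(a[-1]))   # stiffer command when the operator tenses up
```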