    On Sensorless Collision Detection and Measurement of External Forces in Presence of Modeling Inaccuracies

    The field of human-robot interaction has garnered significant interest in the last decade, and every form of human-robot coexistence must guarantee the safety of the user. Safety in human-robot interaction is being vigorously studied in areas such as collision avoidance, soft actuators, lightweight robots, computer vision techniques, soft tissue modeling, and collision detection. Despite these safety provisions, unwanted collisions can occur in the case of system faults. Before post-collision strategies are triggered, such collisions must first be detected effectively. Tactile sensors, vision systems, sonar, lidar, and similar hardware allow collisions to be detected, but their cost motivates more practical approaches; a general goal remains the fast detection of external contacts using minimal sensory information. The availability of position data and commanded torques in manipulators permits the development of observer-based techniques for estimating external forces/torques, but disturbances and inaccuracies in the robot model limit the efficacy of such observers for collision detection. The purpose of this thesis is to develop methods that reduce the effects of modeling inaccuracies on external force/torque estimation and increase the efficacy of collision detection. It comprises four parts:
    1. The KUKA Light-Weight Robot IV+ is commonly employed for research purposes. Its regressor matrix, minimal inertial parameters, and friction model are identified and presented in detail; relative weight analysis is employed for the identification.
    2. Modeling inaccuracies and robot state approximation errors are considered simultaneously to develop model-based time-varying thresholds for collision detection. A metric is formulated to compare trajectories realizing the same task in terms of their collision detection and external force/torque estimation capabilities, and a method for determining optimal trajectories with regard to accurate external force/torque estimation is developed.
    3. The effects of velocity on external force/torque estimation errors are studied with and without joint force/torque sensors. Velocity-based thresholds are developed and implemented to improve collision detection, and the results are compared with the collision detection module integrated in the KUKA Light-Weight Robot IV+.
    4. An alternative joint-by-joint heuristic method is proposed to identify the effects of modeling inaccuracies on external force/torque estimation. Time-varying collision detection thresholds associated with this heuristic are developed and compared with constant thresholds.
    The experimental results are obtained on the KUKA Light-Weight Robot IV+, controlled via the Fast Research Interface and Visual C++ 2008, and confirm the efficacy of the proposed methodologies.
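
    The thesis builds on observer-based estimation of external forces/torques from position data and commanded torques. A common realization of this idea is the generalized-momentum residual observer; the Python sketch below pairs such an observer with a velocity-dependent detection threshold. The function names, the gain `K_o`, and the threshold form are illustrative assumptions, not the implementation used in the thesis.

```python
import numpy as np

def momentum_observer_step(r, p_hat, q_dot, tau_cmd, M, C, g, K_o, dt):
    """One discrete step of a generalized-momentum residual observer.

    r       : current residual (estimate of the external joint torque)
    p_hat   : current estimate of the generalized momentum M(q) @ q_dot
    q_dot   : measured joint velocities
    tau_cmd : commanded joint torques
    M, C, g : inertia matrix, Coriolis matrix and gravity vector from the
              identified dynamic model, evaluated at the current state
    K_o     : diagonal observer gain matrix
    dt      : sampling time
    """
    # Momentum dynamics: p_dot = tau_cmd + C(q, q_dot)^T q_dot - g(q) + tau_ext,
    # with the residual r standing in for the unknown tau_ext.
    p_hat_dot = tau_cmd + C.T @ q_dot - g + r
    p_hat_new = p_hat + p_hat_dot * dt
    # The residual tracks the external torque for sufficiently large gains.
    r_new = K_o @ (M @ q_dot - p_hat_new)
    return r_new, p_hat_new

def collision_detected(r, q_dot, base_threshold, velocity_gain):
    """Velocity-dependent threshold: modeling errors such as unmodeled
    friction grow with speed, so the detection band widens with |q_dot|."""
    threshold = base_threshold + velocity_gain * np.abs(q_dot)
    return bool(np.any(np.abs(r) > threshold))
```

    A constant threshold corresponds to `velocity_gain = 0`; the time-varying thresholds developed in the thesis replace this simple band with model-based bounds on the estimation error.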

    Evolution of Prehension Ability in an Anthropomorphic Neurorobotic Arm

    In this paper we show how a simulated anthropomorphic robotic arm controlled by an artificial neural network can develop effective reaching and grasping behaviour through a trial-and-error process: the free parameters of the network encode the control rules that regulate the fine-grained interaction between the robot and the environment, and variations of these parameters are retained or discarded on the basis of their effects on the global behaviour exhibited by the robot situated in the environment. The results demonstrate how the proposed methodology allows the robot to produce effective behaviours by exploiting the morphological properties of its body (i.e. its anthropomorphic shape, the elastic properties of its muscle-like actuators, and the compliance of its actuated joints) and the properties that arise from the physical interaction between the robot and the environment, mediated by appropriate control rules.
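
    The trial-and-error process described above, in which variations of the controller's free parameters are retained or discarded according to their effect on the robot's overall behaviour, can be illustrated with a minimal (1+1)-style loop over the network weights. In the sketch below, `evaluate_fitness` is a hypothetical placeholder for a full simulated reach-and-grasp rollout; none of this is the authors' code.

```python
import numpy as np

def evaluate_fitness(weights):
    """Placeholder: run the simulated arm with a neural controller
    parameterized by `weights` and return a scalar score for the resulting
    reaching/grasping behaviour (hypothetical, not the paper's setup)."""
    raise NotImplementedError

def evolve_controller(n_weights, generations=1000, sigma=0.05, seed=0):
    """Minimal (1+1) evolutionary loop: perturb the free parameters and
    keep a variation only if the global behaviour it produces improves."""
    rng = np.random.default_rng(seed)
    best_w = rng.normal(0.0, 0.1, size=n_weights)  # initial random weights
    best_fit = evaluate_fitness(best_w)
    for _ in range(generations):
        candidate = best_w + rng.normal(0.0, sigma, size=n_weights)
        fit = evaluate_fitness(candidate)
        if fit >= best_fit:      # retain the variation ...
            best_w, best_fit = candidate, fit
        # ... otherwise discard it and keep the previous parameters
    return best_w, best_fit
```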

    Reaching the limit in autonomous racing: Optimal control versus reinforcement learning

    A central question in robotics is how to design a control system for an agile mobile robot. This paper studies this question systematically, focusing on a challenging setting: autonomous drone racing. We show that a neural network controller trained with reinforcement learning (RL) outperformed optimal control (OC) methods in this setting. We then investigated which fundamental factors have contributed to the success of RL or have limited OC. Our study indicates that the fundamental advantage of RL over OC is not that it optimizes its objective better but that it optimizes a better objective. OC decomposes the problem into planning and control with an explicit intermediate representation, such as a trajectory, that serves as an interface. This decomposition limits the range of behaviors that can be expressed by the controller, leading to inferior control performance when facing unmodeled effects. In contrast, RL can directly optimize a task-level objective and can leverage domain randomization to cope with model uncertainty, allowing the discovery of more robust control responses. Our findings allowed us to push an agile drone to its maximum performance, achieving a peak acceleration greater than 12 times the gravitational acceleration and a peak velocity of 108 kilometers per hour. Our policy achieved superhuman control within minutes of training on a standard workstation. This work presents a milestone in agile robotics and sheds light on the role of RL and OC in robot control.
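
    Two ingredients the abstract credits for RL's advantage are the ability to optimize a task-level objective directly and the use of domain randomization to cope with model uncertainty. The Python sketch below illustrates both in schematic form; the simulator class, the randomized parameter ranges, and the reward shaping are illustrative assumptions and do not reproduce the authors' training setup.

```python
import numpy as np

class RandomizedDroneSim:
    """Toy stand-in for a drone simulator whose physical parameters are
    re-sampled at every episode reset (domain randomization)."""

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)

    def reset(self):
        # Randomize model parameters the learned controller must be robust to.
        self.mass = self.rng.uniform(0.7, 1.0)            # kg
        self.drag_coeff = self.rng.uniform(0.2, 0.5)
        self.motor_delay = self.rng.uniform(0.01, 0.04)   # s
        return np.zeros(13)                               # pos, vel, quat, omega

    def step(self, state, action):
        # Propagation of the randomized dynamics is omitted in this sketch.
        return state

def task_level_reward(prev_gate_dist, gate_dist, collided):
    """Task-level objective: reward progress toward the next race gate
    instead of tracking a precomputed reference trajectory, with a large
    penalty on collision."""
    progress = prev_gate_dist - gate_dist
    return progress - (10.0 if collided else 0.0)
```

    An optimal-control pipeline would instead plan a time-optimal trajectory and track it; the reward above removes that intermediate representation, which is the distinction the paper draws between the OC and RL objectives.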

    Adaptive and intelligent navigation of autonomous planetary rovers - A survey

    The application of robotics and autonomous systems in space has increased dramatically. The ongoing Mars rover mission involving the Curiosity rover, along with the success of its predecessors, is a key milestone that showcases the existing capabilities of robotic technology. Nevertheless, these systems still rely heavily on human teleoperators for driving. Reducing the reliance on human experts for navigational tasks on Mars remains a major challenge due to the harsh and complex nature of Martian terrain. Developing a truly autonomous rover capable of navigating effectively in such environments requires intelligent and adaptive methods suited to a system with limited resources. This paper surveys a representative selection of work applicable to autonomous planetary rover navigation, discussing ongoing challenges and promising future research directions from the perspectives of the authors.