
    Robotic Trajectory Tracking: Position- and Force-Control

    This thesis employs a bottom-up approach to develop robust and adaptive learning algorithms for trajectory tracking: position and torque control. In a first phase, the focus is on following a freeform surface in a discontinuous manner. Besides the resulting switching constraints, disturbances, and uncertainties, the case of unknown robot models is addressed. In a second phase, once contact between surface and end effector has been established and the freeform path is followed, a desired force is applied. To react to changing circumstances, the manipulator needs to exhibit the features of an intelligent agent, i.e. it must learn and adapt its behaviour based on a combination of constant interaction with its environment and preprogrammed goals or preferences. The robotic manipulator mimics human behaviour through bio-inspired algorithms; in this way, the know-how and experience of human operators is exploited, as their knowledge is translated into robot skills. A selection of promising concepts is explored, developed, and combined to extend the application areas of robotic manipulators from monotonous, basic tasks in stiff environments to complex constrained processes. Conventional concepts (Sliding Mode Control, PID) are combined with bio-inspired learning (BELBIC, reinforcement-based learning) for robust and adaptive control. Independence from robot parameters is guaranteed by approximating the robot functions with a neural network using online update laws, and through model-free algorithms. The performance of the concepts is evaluated through simulations and experiments. In complex freeform trajectory tracking applications, excellent absolute mean position errors (<0.3 rad) are achieved. Position and torque control are combined in a parallel concept with minimized absolute mean torque errors (<0.1 Nm).
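    As an illustration of the parallel position/torque concept described in the abstract, the sketch below combines a PID position branch with a proportional torque branch for a single joint. It is a minimal, hypothetical Python example: the class, gains, and signal names are assumptions for illustration, not taken from the thesis.

        # Minimal sketch of a parallel position/torque control law for one
        # joint: a PID term drives the position error while a proportional
        # term drives the measured contact torque toward a desired value.
        # All gains and names are illustrative, not from the thesis.
        class ParallelPositionTorqueController:
            def __init__(self, kp=20.0, ki=2.0, kd=1.0, kf=5.0, dt=0.001):
                self.kp, self.ki, self.kd, self.kf, self.dt = kp, ki, kd, kf, dt
                self.err_integral = 0.0
                self.prev_err = 0.0

            def step(self, q, q_des, tau_meas, tau_des):
                # Position branch: PID on the joint-angle error.
                err = q_des - q
                self.err_integral += err * self.dt
                d_err = (err - self.prev_err) / self.dt
                self.prev_err = err
                u_pos = self.kp * err + self.ki * self.err_integral + self.kd * d_err
                # Torque branch: proportional correction of the torque error.
                u_force = self.kf * (tau_des - tau_meas)
                # Parallel composition: both branches add to the joint command.
                return u_pos + u_force

    In the thesis, the conventional branch is further combined with bio-inspired learning (BELBIC, reinforcement-based learning) and neural-network approximation of the robot functions; the sketch shows only the parallel composition itself.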

    From learning to new goal generation in a bioinspired robotic setup

    In the field of cognitive bioinspired robotics, we focus on autonomous development and propose a possible model to explain how humans generate and pursue new goals that are not strictly dictated by survival. Autonomous lifelong learning is an important ability for robots to acquire new skills, and autonomous goal generation is a basic mechanism for it. The Intentional Distributed Robotic Architecture (IDRA) presented here is intended to allow the autonomous development of new goals in situated agents, starting from a few simple hard-coded instincts. It achieves this capability by imitating neural plasticity, the property of the cerebral cortex that supports learning. Three main brain areas are involved in goal generation: the cerebral cortex, the thalamus, and the amygdala. These are mimicked at a functional level by the modules of our computational model, namely the Deliberative, Working-Memory, Goal-Generator, and Instincts Modules, all connected in a network. IDRA has been designed to be robot independent; we have used it in simulation and on the real Aldebaran NAO humanoid robot. The reported experiments explore how basic capabilities, such as active sensing, are obtained by the architecture.
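    To make the module network concrete, the following Python sketch wires hypothetical Instincts, Goal-Generator, and Deliberative modules into a simple perception-action loop in the spirit of IDRA (the Working-Memory module is omitted for brevity). All interfaces, thresholds, and state fields are invented for illustration and do not reproduce the published architecture.

        from collections import Counter

        class InstinctsModule:
            def score(self, state):
                # Hard-coded drive: prefer states with high sensory novelty.
                return state.get("novelty", 0.0)

        class GoalGeneratorModule:
            def __init__(self, threshold=0.8, repeats=3):
                self.threshold = threshold  # instinct score needed to count a state
                self.repeats = repeats      # recurrences needed to promote a goal
                self.counts = Counter()
                self.goals = set()

            def update(self, state, score):
                # Promote repeatedly high-scoring states to autonomous goals.
                if score >= self.threshold:
                    self.counts[state["label"]] += 1
                    if self.counts[state["label"]] >= self.repeats:
                        self.goals.add(state["label"])

        class DeliberativeModule:
            def act(self, state, goals):
                # Pursue any active goal; otherwise keep exploring.
                return ("pursue:" + next(iter(goals))) if goals else "explore"

        # Wire the modules into a perception-action loop.
        instincts = InstinctsModule()
        goal_gen = GoalGeneratorModule()
        deliberative = DeliberativeModule()
        for state in [{"label": "bright-light", "novelty": 0.9}] * 3:
            goal_gen.update(state, instincts.score(state))
            print(deliberative.act(state, goal_gen.goals))
        # Prints "explore" twice, then "pursue:bright-light" once the
        # recurring high-novelty state has been promoted to a goal.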