19,573 research outputs found

    Active Inference for Integrated State-Estimation, Control, and Learning

    Full text link
    This work presents an approach for control, state estimation, and learning of model (hyper)parameters for robotic manipulators. It is based on the active inference framework, prominent in computational neuroscience as a theory of the brain in which behaviour arises from minimizing variational free energy. The resulting controller shows adaptive and robust behaviour compared with state-of-the-art methods. Additionally, we derive the exact relationship to classic methods such as PID control. Finally, we show that by learning a temporal parameter and the model variances, the approach can cope with unmodelled dynamics, damp oscillations, and remain robust against disturbances and poor initial parameters. The approach is validated on the "Franka Emika Panda" 7 DoF manipulator. Comment: 7 pages, 6 figures, accepted for presentation at the International Conference on Robotics and Automation (ICRA) 202
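    The core mechanism can be illustrated in a few lines. Below is a minimal sketch of an active-inference loop for a single velocity-controlled joint: perception performs gradient descent on a free energy built from precision-weighted prediction errors, while action adjusts the command to suppress the sensory error. The first-order plant, gains, and precisions here are illustrative assumptions, not the paper's richer formulation.

```python
import numpy as np

# Toy active-inference loop for one velocity-controlled joint (dq/dt = u).
# All names, gains, and precisions are illustrative assumptions.

def simulate(mu_d=1.0, p_y=1.0, p_mu=1.0, k_mu=1.0, k_a=1.0,
             dt=0.01, steps=5000, noise=0.01, seed=0):
    rng = np.random.default_rng(seed)
    mu = 0.0   # belief about the joint position
    q = 0.0    # true joint position
    u = 0.0    # control action (velocity command)
    for _ in range(steps):
        y = q + noise * rng.standard_normal()  # noisy position sensor
        eps_y = y - mu       # sensory prediction error
        eps_mu = mu - mu_d   # prior prediction error (attractor to target)
        # Perception: gradient descent on the free energy
        #   F = 0.5 * (p_y * eps_y**2 + p_mu * eps_mu**2)
        mu += dt * k_mu * (p_y * eps_y - p_mu * eps_mu)
        # Action: adjust the command to suppress the sensory error.
        u += dt * (-k_a * p_y * eps_y)
        q += dt * u          # plant integration (Euler)
    return q

print(simulate())  # settles near mu_d = 1.0 for these toy gains
```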

    Push recovery with stepping strategy based on time-projection control

    Get PDF
    In this paper, we present a simple control framework for on-line push recovery with dynamic stepping properties. Because our robot has relatively heavy legs, we must take swing dynamics into account, and we therefore use a linear model called 3LP, composed of three linked pendulums, to simulate swing and torso dynamics. Based on the 3LP equations, we formulate discrete LQR controllers and use a time-projection method to continuously adjust the next footstep location on-line during the motion. This adjustment, computed from both pelvis and swing-foot tracking errors, naturally takes the swing dynamics into account. The suggested adjustments are added to the Cartesian 3LP gaits and converted to joint-space trajectories through inverse kinematics. Fixed and adaptive foot-lift strategies also ensure enough ground clearance in perturbed walking conditions. The proposed structure is robust yet uses very simple state estimation and basic position tracking. We rely on the physical series elastic actuators to absorb impacts, while introducing simple laws to compensate for their tracking bias. Extensive experiments demonstrate the functionality of the different control blocks and prove the effectiveness of time-projection in extreme push-recovery scenarios. We also show self-produced, emergent walking gaits when the robot is subject to continuous dragging forces. These gaits remain dynamically robust thanks to the relatively soft springs in the ankles and the absence of any Zero Moment Point (ZMP) control in our proposed architecture. Comment: 20 pages, journal paper
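    As a rough illustration of the discrete-LQR footstep adjustment, the sketch below replaces the 3LP model with a 1-D linear inverted pendulum: a per-step LQR gain is computed by backward Riccati iteration, and a simplified stand-in for time-projection propagates the current mid-step tracking error to the step boundary before applying that gain. All matrices, weights, and physical parameters are illustrative, not the paper's.

```python
import numpy as np

# Discrete LQR footstep adjustment on a 1-D linear inverted pendulum (LIP),
# standing in for the paper's 3LP model. State x = [CoM position, velocity]
# relative to the stance foot; input u = offset added to the next footstep.

g, z0, T = 9.81, 0.8, 0.5            # gravity, CoM height, step duration
w = np.sqrt(g / z0)                  # LIP natural frequency

c, s = np.cosh(w * T), np.sinh(w * T)
A = np.array([[c, s / w],            # closed-form LIP transition over T
              [w * s, c]])
B = np.array([[-c],                  # stepping by u shifts the stance-
              [-w * s]])             # relative CoM position by -u

Q = np.diag([1.0, 0.1])              # penalize CoM tracking error
R = np.array([[0.01]])               # penalize footstep adjustment

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via backward Riccati iteration."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
    return K

K = dlqr(A, B, Q, R)

def footstep_adjustment(err, t_remaining):
    """Project the mid-step tracking error to the step boundary, then
    apply the per-step LQR gain (a simplified time-projection stand-in)."""
    cr, sr = np.cosh(w * t_remaining), np.sinh(w * t_remaining)
    A_rem = np.array([[cr, sr / w], [w * sr, cr]])
    return (-K @ (A_rem @ err)).item()

err = np.array([0.05, 0.2])            # push-induced CoM tracking error
print(footstep_adjustment(err, 0.3))   # suggested footstep offset [m]
```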

    Adaptive cancelation of self-generated sensory signals in a whisking robot

    Get PDF
    Sensory signals are often caused by one's own active movements. This raises the problem of discriminating between self-generated sensory signals and signals generated by the external world. Such discrimination is of general importance for robotic systems, whose operational robustness depends on the correct interpretation of sensory signals. Here, we investigate this problem in the context of a whiskered robot. The whisker sensory signal comprises two components: one due to contact with an object (externally generated) and another due to active movement of the whisker (self-generated). We propose a solution to this discrimination problem based on adaptive noise cancelation, in which the robot learns to predict the sensory consequences of its own movements using an adaptive filter. The filter inputs (a copy of the motor commands) are transformed by Laguerre functions instead of the often-used tapped delay line, which reduces the model order and, therefore, the computational complexity. Results from a contact-detection task demonstrate that false positives are significantly reduced using the proposed scheme.
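    The scheme lends itself to a compact sketch: a bank of discrete Laguerre filters expands the motor-command copy into a few basis signals, and an LMS rule adapts weights so that the weighted sum predicts the self-generated component; what remains of the sensor signal is attributed to contact. The filter order, pole, and step size below are illustrative choices, not the paper's.

```python
import numpy as np

# Adaptive cancelation of self-generated signals: Laguerre-filtered copy
# of the motor command plus LMS weight adaptation (illustrative settings).

class LaguerreLMS:
    def __init__(self, order=4, a=0.8, mu=0.01):
        self.a = a                       # Laguerre pole (0 < a < 1)
        self.gain = np.sqrt(1 - a * a)   # gain of the first stage
        self.l = np.zeros(order)         # Laguerre filter states
        self.w = np.zeros(order)         # adaptive weights
        self.mu = mu                     # LMS step size

    def step(self, motor, sensor):
        a, prev = self.a, self.l
        new = np.empty_like(prev)
        new[0] = a * prev[0] + self.gain * motor
        for i in range(1, len(new)):
            # Cascade of all-pass sections:
            #   L_i(z) = L_{i-1}(z) * (z^-1 - a) / (1 - a z^-1)
            new[i] = a * prev[i] + prev[i - 1] - a * new[i - 1]
        self.l = new
        pred = self.w @ new              # predicted self-generated component
        resid = sensor - pred            # attributed to external contact
        self.w += self.mu * resid * new  # LMS update
        return resid

# Toy usage: whisker signal = lagged echo of the whisking command,
# plus a brief externally generated contact event.
motor = np.sin(0.1 * np.arange(2000))
self_gen = 0.5 * np.r_[0.0, motor[:-1]]
contact = np.zeros(2000)
contact[1500:1520] = 1.0
f = LaguerreLMS()
resid = np.array([f.step(m, s) for m, s in zip(motor, self_gen + contact)])
# After adaptation, |resid| is near zero except around the contact event.
```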

    Reinforcement Learning: A Survey

    Full text link
    This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word "reinforcement." The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning. Comment: See http://www.jair.org/ for any accompanying file
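    As a concrete instance of the trial-and-error setting the survey covers, the sketch below runs tabular Q-learning with epsilon-greedy exploration on a toy five-state chain MDP. The environment and hyperparameters are illustrative, not taken from the survey.

```python
import numpy as np

# Tabular Q-learning with epsilon-greedy exploration on a toy 5-state
# chain MDP (actions: 0 = left, 1 = right; reward 1 at the right end).

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
Q = np.ones((n_states, n_actions))   # optimistic init encourages exploration
alpha, gamma, eps = 0.1, 0.95, 0.1   # learning rate, discount, exploration

def env_step(s, a):
    """Deterministic chain; reaching the last state pays reward 1."""
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

for episode in range(500):
    s = 0
    for _ in range(50):
        # Exploration/exploitation trade-off: act randomly with prob. eps.
        if rng.random() < eps:
            a = int(rng.integers(n_actions))
        else:
            a = int(np.argmax(Q[s]))
        s2, r = env_step(s, a)
        done = (s2 == n_states - 1)
        # Q-learning update: bootstrap from the greedy successor value,
        # except at the terminal state.
        target = r if done else r + gamma * np.max(Q[s2])
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2
        if done:
            break

print(np.argmax(Q, axis=1)[:-1])  # greedy policy in states 0-3: go right
```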

    Complex evolutionary systems in behavioral finance

    Get PDF
    Traditional finance is built on the rationality paradigm. This chapter discusses simple models from an alternative approach in which financial markets are viewed as complex evolutionary systems. Agents are boundedly rational and base their investment decisions on market-forecasting heuristics. Prices and beliefs about future prices co-evolve over time with mutual feedback. Strategy choice is driven by evolutionary selection, so that agents tend to adopt strategies that were successful in the past. The chapter also discusses the calibration of such "simple complexity models" with heterogeneous expectations to real financial market data, as well as laboratory experiments with human subjects.
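    A minimal sketch of this class of models, in the spirit of Brock and Hommes: fundamentalists forecast reversion to the fundamental price, trend followers extrapolate the last deviation, and agents switch between the two rules through a discrete-choice (logit) rule driven by recent forecast accuracy. All parameters are illustrative and deliberately chosen in a stable regime, not calibrated to data.

```python
import numpy as np

# Heterogeneous-expectations market sketch: prices and strategy fractions
# co-evolve with mutual feedback. Parameters are illustrative only.

rng = np.random.default_rng(0)
R = 1.01          # gross risk-free return (discounts the average forecast)
g = 0.95          # trend-extrapolation coefficient (g < R: stable regime)
beta = 100.0      # intensity of choice: how sharply agents switch rules
T = 300

x = np.zeros(T)              # price deviation from fundamental value
n_trend = np.full(T, 0.5)    # fraction of agents using the trend rule
x[0], x[1] = 0.1, 0.12

for t in range(2, T):
    # Forecasts of x[t], formed at time t-1.
    f_fund, f_trend = 0.0, g * x[t - 1]
    # Temporary equilibrium price: discounted belief-weighted forecast,
    # plus small exogenous noise (news shocks).
    avg_forecast = n_trend[t - 1] * f_trend + (1 - n_trend[t - 1]) * f_fund
    x[t] = avg_forecast / R + 0.01 * rng.standard_normal()
    # Fitness: negative squared error of each rule's forecast of x[t-1].
    u_trend = -(g * x[t - 2] - x[t - 1]) ** 2
    u_fund = -(0.0 - x[t - 1]) ** 2
    # Evolutionary selection: logit switching toward the better rule.
    z = beta * np.array([u_trend, u_fund])
    e = np.exp(z - z.max())          # numerically stable softmax
    n_trend[t] = e[0] / e.sum()

print(f"mean |x| = {np.abs(x).mean():.3f}, "
      f"mean trend share = {n_trend.mean():.2f}")
```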