2,552 research outputs found
Design of an adaptive neural predictive nonlinear controller for nonholonomic mobile robot system based on posture identifier in the presence of disturbance
This paper proposes an adaptive neural predictive nonlinear controller to guide a nonholonomic wheeled mobile robot during trajectory tracking along continuous and non-continuous gradient trajectories. The controller consists of two models that describe the kinematics and dynamics of the mobile robot system, together with a feedforward neural controller. The models are a modified Elman neural network and a feedforward multi-layer perceptron, respectively. The modified Elman neural network model is trained in both off-line and on-line stages to guarantee that its outputs accurately represent the actual outputs of the mobile robot system; the trained neural model acts as the position and orientation identifier. The feedforward neural controller is trained off-line, and its weights are adapted on-line to find the reference torques that control the steady-state outputs of the mobile robot system. The feedback neural controller combines the posture neural identifier with a quadratic-performance-index optimization algorithm to find the optimal torque action in the transient state for N-step-ahead prediction. A general backpropagation algorithm is used to train both the feedforward neural controller and the posture neural identifier. Simulation results show the effectiveness of the proposed adaptive neural predictive control algorithm, demonstrated by the minimised tracking error and the smoothness of the torque control signal obtained under bounded external disturbances.
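The posture identifier described above can be illustrated with a minimal sketch of a modified Elman network, assuming a single hidden layer whose context units carry a decaying self-feedback trace (the usual "modification"). The layer sizes, the feedback gain `alpha`, and the choice of inputs and outputs are illustrative assumptions, not values from the paper.

```python
import numpy as np

class ModifiedElmanIdentifier:
    """Sketch of a modified Elman network as a posture identifier."""

    def __init__(self, n_in, n_hidden, n_out, alpha=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.alpha = alpha                       # self-feedback gain of context units
        self.W_in = rng.normal(0, 0.1, (n_hidden, n_in))
        self.W_ctx = rng.normal(0, 0.1, (n_hidden, n_hidden))
        self.W_out = rng.normal(0, 0.1, (n_out, n_hidden))
        self.context = np.zeros(n_hidden)        # trace of past hidden activations

    def step(self, u):
        """One prediction step: control inputs in, posture (x, y, theta) out."""
        h = np.tanh(self.W_in @ u + self.W_ctx @ self.context)
        # Modified Elman: context keeps a decaying copy of its own past value
        self.context = self.alpha * self.context + h
        return self.W_out @ h

net = ModifiedElmanIdentifier(n_in=2, n_hidden=6, n_out=3)
posture = net.step(np.array([0.1, -0.2]))  # predicted (x, y, theta)
```

In the paper the identifier's weights would be trained by backpropagation against the measured robot posture; the recurrence above is only the forward pass.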
Active SLAM: A Review On Last Decade
This article presents a comprehensive review of the Active Simultaneous
Localization and Mapping (A-SLAM) research conducted over the past decade. It
explores the formulation, applications, and methodologies employed in A-SLAM,
particularly in trajectory generation and control-action selection, drawing on
concepts from Information Theory (IT) and the Theory of Optimal Experimental
Design (TOED). This review includes both qualitative and quantitative analyses
of various approaches, deployment scenarios, configurations, path-planning
methods, and utility functions within A-SLAM research. Furthermore, this
article introduces a novel analysis of Active Collaborative SLAM (AC-SLAM),
focusing on collaborative aspects within SLAM systems. It includes a thorough
examination of collaborative parameters and approaches, supported by both
qualitative and statistical assessments. This study also identifies limitations
in the existing literature and suggests potential avenues for future research.
This survey serves as a valuable resource for researchers seeking insights into
A-SLAM methods and techniques, offering a current overview of A-SLAM
formulation.
Comment: 34 pages, 8 figures, 6 tables
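Among the utility functions such surveys cover, an information-theoretic criterion over the occupancy map is one of the most common: candidate actions that are expected to reduce map entropy the most score highest. A minimal sketch, with made-up grid values:

```python
import numpy as np

def grid_entropy(p):
    """Shannon entropy (in bits) of an occupancy grid with cell probabilities p."""
    p = np.clip(p, 1e-9, 1 - 1e-9)
    cell = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    return cell.sum()

known = np.full((4, 4), 0.01)    # confidently free cells: low entropy
unknown = np.full((4, 4), 0.5)   # unobserved cells: maximal entropy (1 bit each)
assert grid_entropy(unknown) > grid_entropy(known)
```

A TOED-style utility would instead score the covariance of the SLAM estimate (e.g. D-optimality); the entropy form above is just the simplest representative.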
Learning High-Level Policies for Model Predictive Control
The combination of policy search and deep neural networks holds the promise
of automating a variety of decision-making tasks. Model Predictive
Control~(MPC) provides robust solutions to robot control tasks by making use of
a dynamical model of the system and solving an optimization problem online over
a short planning horizon. In this work, we combine probabilistic
decision-making approaches and the generalization capability of artificial
neural networks with this powerful online optimization by learning a deep
high-level policy for the MPC (High-MPC). Conditioned on the robot's local
observations, the trained neural network policy adaptively selects high-level
decision variables for the low-level MPC controller, which then generates
optimal control commands for the robot. First, we formulate the search for
high-level decision variables for MPC as a policy search problem, specifically
a probabilistic inference problem, which can be solved in closed form. Second,
we propose a self-supervised learning algorithm for learning a neural network
high-level policy, which is useful for online hyperparameter adaptation in
highly dynamic environments. We demonstrate the importance of incorporating
online adaptation into autonomous robots by using the proposed method to solve
a challenging control problem in which a simulated quadrotor must fly through
a swinging gate. We show that our approach can handle situations that are
difficult for standard MPC.
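The closed-form policy-search step can be illustrated with a toy version: a Gaussian search distribution over a single scalar decision variable, updated by exponentiated-cost weighted maximum likelihood. The quadratic `rollout_cost` stands in for running the actual low-level MPC; all constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def rollout_cost(z):
    # Placeholder for the cost of an MPC rollout with decision variable z;
    # the toy optimum is at z* = 2.0.
    return (z - 2.0) ** 2

mu, sigma = 0.0, 2.0                      # Gaussian search distribution over z
for _ in range(30):
    z = rng.normal(mu, sigma, size=64)    # sample candidate decision variables
    w = np.exp(-rollout_cost(z))          # exponentiated utility as weights
    w /= w.sum()
    mu = np.sum(w * z)                    # closed-form weighted-ML update
    sigma = np.sqrt(np.sum(w * (z - mu) ** 2)) + 1e-3

# mu converges near the toy cost minimum z* = 2.0
```

The paper's self-supervised step would then fit a neural network mapping observations to such optimized decision variables; here only the inference-style update itself is shown.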
Chance-Constrained Trajectory Optimization for Safe Exploration and Learning of Nonlinear Systems
Learning-based control algorithms require data collection with abundant
supervision for training. Safe exploration algorithms ensure the safety of this
data collection process even when only partial knowledge is available. We
present a new approach for optimal motion planning with safe exploration that
integrates chance-constrained stochastic optimal control with dynamics learning
and feedback control. We derive an iterative convex optimization algorithm that
solves an Information-cost Stochastic Nonlinear Optimal Control problem
(Info-SNOC). The optimization objective encodes both optimal performance and
exploration for learning, and safety is incorporated as distributionally
robust chance constraints. The dynamics are predicted from a robust regression
model that is learned from data. The Info-SNOC algorithm is used to compute a
sub-optimal pool of safe motion plans that aid in exploration for learning
unknown residual dynamics under safety constraints. A stable feedback
controller is used to execute the motion plan and collect data for model
learning. We prove the safety of rollouts from our exploration method and the
reduction in uncertainty over epochs, thereby guaranteeing the consistency of
our learning method. We validate the effectiveness of Info-SNOC by designing
and implementing a pool of safe trajectories for a planar robot. We demonstrate
that our approach has a higher success rate in ensuring safety than a
deterministic trajectory optimization approach.
Comment: Submitted to RA-L 2020, review-
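A standard way to make chance constraints of this kind tractable is to replace Pr[x <= b] >= 1 - eps with a deterministically tightened bound under a Gaussian assumption on x. A minimal sketch, with illustrative numbers (the paper uses the stronger distributionally robust form, not the Gaussian case shown here):

```python
from statistics import NormalDist

def tightened_bound(b, std, eps):
    """Bound such that (mean <= result) implies Pr[x <= b] >= 1 - eps for Gaussian x."""
    z = NormalDist().inv_cdf(1 - eps)   # one-sided quantile, e.g. ~1.645 for eps=0.05
    return b - z * std

b, std, eps = 5.0, 0.5, 0.05
mean_limit = tightened_bound(b, std, eps)
assert mean_limit < b  # the deterministic surrogate is strictly tighter
```

The convexified problem then carries `mean_limit` as an ordinary deterministic constraint on the state mean.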
One-Shot Learning of Manipulation Skills with Online Dynamics Adaptation and Neural Network Priors
One of the key challenges in applying reinforcement learning to complex
robotic control tasks is the need to gather large amounts of experience in
order to find an effective policy for the task at hand. Model-based
reinforcement learning can achieve good sample efficiency, but requires the
ability to learn a model of the dynamics that is good enough to learn an
effective policy. In this work, we develop a model-based reinforcement learning
algorithm that combines prior knowledge from previous tasks with online
adaptation of the dynamics model. These two ingredients enable highly
sample-efficient learning even in regimes where estimating the true dynamics is
very difficult, since the online model adaptation allows the method to locally
compensate for unmodeled variation in the dynamics. We encode the prior
experience into a neural network dynamics model, adapt it online by
progressively refitting a local linear model of the dynamics, and use model
predictive control to plan under these dynamics. Our experimental results show
that this approach can be used to solve a variety of complex robotic
manipulation tasks in just a single attempt, using prior data from other
manipulation behaviors.
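The online-adaptation ingredient can be sketched as a least-squares refit of a local linear model x' ≈ A x + B u over a window of recent transitions. The neural-network prior from the paper is omitted here, so the fit below comes from data alone; the toy system and all constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# A simple "true" double-integrator-like system to generate transitions from
A_true = np.array([[1.0, 0.1], [0.0, 1.0]])
B_true = np.array([[0.0], [0.1]])

# Collect a window of (x, u, x') transitions
X, U, Xn = [], [], []
x = np.zeros(2)
for _ in range(50):
    u = rng.normal(size=1)
    xn = A_true @ x + B_true @ u
    X.append(x); U.append(u); Xn.append(xn)
    x = xn

# Least-squares refit of the local linear model: x' ≈ [A B] [x; u]
Z = np.hstack([np.array(X), np.array(U)])     # regressors [x, u], shape (50, 3)
theta, *_ = np.linalg.lstsq(Z, np.array(Xn), rcond=None)
A_hat, B_hat = theta[:2].T, theta[2:].T       # recovered local dynamics
```

A model-predictive controller would then plan against `A_hat`, `B_hat`, refitting them as new transitions arrive so that unmodeled variation is compensated locally.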