Design of an adaptive neural predictive nonlinear controller for nonholonomic mobile robot system based on posture identifier in the presence of disturbance
This paper proposes an adaptive neural predictive nonlinear controller to guide a nonholonomic wheeled mobile robot when tracking trajectories with continuous and non-continuous gradients. The controller consists of two models that describe the kinematics and dynamics of the mobile robot system, together with a feedforward neural controller. The models are a modified Elman neural network and a feedforward multi-layer perceptron, respectively. The modified Elman neural network model is trained in off-line and on-line stages to guarantee that its outputs accurately represent the actual outputs of the mobile robot system; the trained neural model acts as the position and orientation (posture) identifier. The feedforward neural controller is trained off-line, and its weights are adapted on-line, to find the reference torques that control the steady-state outputs of the mobile robot system. The feedback neural controller is based on the posture neural identifier and a quadratic-performance-index optimization algorithm, and finds the optimal torque action in the transient state via N-step-ahead prediction. A general back-propagation algorithm is used to train both the feedforward neural controller and the posture neural identifier. Simulation results show the effectiveness of the proposed adaptive neural predictive control algorithm, demonstrated by the minimised tracking error and the smoothness of the torque control signal obtained under bounded external disturbances.
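The kinematic model the posture identifier must learn is not spelled out in the abstract; a minimal sketch, assuming the standard nonholonomic unicycle kinematics (x' = v cos θ, y' = v sin θ, θ' = ω) that such wheeled-robot papers typically use, with Euler integration and illustrative parameter values:

```python
import numpy as np

def unicycle_step(pose, v, omega, dt=0.01):
    """Euler-integrate standard nonholonomic (unicycle) kinematics:
    x' = v*cos(theta), y' = v*sin(theta), theta' = omega."""
    x, y, theta = pose
    return np.array([x + v * np.cos(theta) * dt,
                     y + v * np.sin(theta) * dt,
                     theta + omega * dt])

# drive straight along +x for 1 s at 1 m/s
pose = np.array([0.0, 0.0, 0.0])
for _ in range(100):
    pose = unicycle_step(pose, v=1.0, omega=0.0)
print(pose)  # → [≈1.0, 0.0, 0.0]
```

The nonholonomic constraint is visible in the structure: the robot cannot move sideways, since y changes only through the heading θ.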
Preconditioned Stochastic Gradient Langevin Dynamics for Deep Neural Networks
Effective training of deep neural networks suffers from two main issues. The
first is that the parameter spaces of these models exhibit pathological
curvature. Recent methods address this problem by using adaptive
preconditioning for Stochastic Gradient Descent (SGD). These methods improve
convergence by adapting to the local geometry of parameter space. A second
issue is overfitting, which is typically addressed by early stopping. However,
recent work has demonstrated that Bayesian model averaging mitigates this
problem. The posterior can be sampled by using Stochastic Gradient Langevin
Dynamics (SGLD). However, the rapidly changing curvature renders default SGLD
methods inefficient. Here, we propose combining adaptive preconditioners with
SGLD. In support of this idea, we give theoretical properties on asymptotic
convergence and predictive risk. We also provide empirical results for Logistic
Regression, Feedforward Neural Nets, and Convolutional Neural Nets,
demonstrating that our preconditioned SGLD method gives state-of-the-art
performance on these models.

Comment: AAAI 201
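The update described in the abstract can be sketched as follows: an RMSProp-style diagonal preconditioner G(θ) scales both the gradient drift and the injected Langevin noise. This is a minimal illustration on a toy 2-D Gaussian posterior; the step size, decay rate, and regulariser values are illustrative, and the correction term involving the derivative of the preconditioner is dropped for simplicity:

```python
import numpy as np

rng = np.random.default_rng(0)

def psgld_step(theta, grad_logpost, V, eps=1e-2, alpha=0.99, lam=1e-5):
    """One preconditioned SGLD step with an RMSProp-style diagonal preconditioner."""
    g = grad_logpost(theta)
    V = alpha * V + (1 - alpha) * g * g           # running 2nd-moment estimate
    G = 1.0 / (lam + np.sqrt(V))                  # diagonal preconditioner
    noise = rng.normal(size=theta.shape) * np.sqrt(eps * G)
    theta = theta + 0.5 * eps * G * g + noise     # preconditioned drift + injected noise
    return theta, V

# toy example: sample from a 2-D Gaussian posterior N(mu, I)
mu = np.array([1.0, -2.0])
grad = lambda th: -(th - mu)                      # gradient of log N(mu, I)
theta, V = np.zeros(2), np.zeros(2)
samples = []
for t in range(20000):
    theta, V = psgld_step(theta, grad, V)
    if t > 5000:                                  # discard burn-in
        samples.append(theta.copy())
print(np.mean(samples, axis=0))                   # ≈ mu
```

The preconditioner adapts the effective step size per coordinate, which is what lets the sampler cope with the pathological curvature the abstract mentions; plain SGLD corresponds to G = I.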