
    Learning robot inverse dynamics using sparse online Gaussian process with forgetting mechanism

    Online Gaussian processes (GPs), typically used for learning models from time-series data, are more flexible and robust than offline GPs. Both local and sparse approximations of GPs can efficiently learn complex models online. Yet these approaches assume that all signals are reasonably accurate and that the data contain no misleading samples. Moreover, the online learning capacity of GPs is limited in practice for high-dimensional problems and long-term tasks. This paper proposes a sparse online GP (SOGP) with a forgetting mechanism that discards distant model information at a specified rate. The proposed approach combines two general data-deletion schemes for the basis vector set of the SOGP: the position-information-based scheme and the oldest-points-based scheme. We apply the approach to learn the inverse dynamics of a 7-degree-of-freedom collaborative robot on a two-segment trajectory tracking problem with task switching. Both simulations and experiments show that the proposed approach achieves better tracking accuracy and predictive smoothness than either data-deletion scheme alone.
    Comment: Submitted to the 2022 IEEE/ASME International Conference on Advanced Intelligent Mechatronics.
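
    The combination of the two basis-vector deletion schemes can be pictured with a short sketch. The following is a minimal illustration, not the authors' implementation; the class and parameter names (BasisSet, max_size) and the fallback rule combining the two schemes are assumptions made for clarity.

```python
# Minimal sketch of the two basis-vector deletion schemes for a sparse online GP:
# delete by position information (drop the stored point farthest from the current
# input) or delete the oldest stored point. The combined fallback rule below is an
# assumption for illustration, not the paper's exact algorithm.
import numpy as np

class BasisSet:
    def __init__(self, max_size):
        self.max_size = max_size
        self.points = []   # stored basis-vector inputs
        self.ages = []     # insertion times, used by the oldest-points-based scheme

    def add(self, x, t, current_x):
        """Insert a new point and, if the budget is exceeded, delete one."""
        self.points.append(np.asarray(x, dtype=float))
        self.ages.append(t)
        if len(self.points) > self.max_size:
            self._delete(np.asarray(current_x, dtype=float))

    def _oldest_index(self):
        return int(np.argmin(self.ages))                 # oldest-points-based scheme

    def _farthest_index(self, current_x):
        d = [np.linalg.norm(p - current_x) for p in self.points]
        return int(np.argmax(d))                         # position-information-based scheme

    def _delete(self, current_x):
        # Assumed combination: prefer the position-based choice, but fall back to
        # the oldest point if the farthest point happens to be the newest one.
        i = self._farthest_index(current_x)
        if self.ages[i] == max(self.ages):
            i = self._oldest_index()
        self.points.pop(i)
        self.ages.pop(i)
```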

    Composite learning adaptive backstepping control using neural networks with compact supports

    © 2019 John Wiley & Sons, Ltd. The ability to learn is crucial for neural network (NN) control because it enhances the overall stability and robustness of control systems. In this study, a composite learning control strategy is proposed for a class of strict-feedback nonlinear systems with mismatched uncertainties, where raised-cosine radial basis function NNs with compact supports are applied to approximate system uncertainties. Both online historical data and instantaneous data are used to update the NN weights. Practical exponential stability of the closed-loop system is established under a weak excitation condition termed interval excitation. The proposed approach ensures fast parameter convergence, and hence exact estimation of plant uncertainties, without requiring the trajectory of the NN inputs to be recurrent or the time differentiation of plant states. The raised-cosine radial basis function NNs not only reduce computational cost but also enable exact determination of the subregressor activated along any trajectory of the NN inputs, so that the interval excitation condition is verifiable. Numerical results verify the validity and superiority of the proposed approach.
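
    As a rough illustration of the ingredients named above, the sketch below shows a raised-cosine radial basis function with compact support and a composite weight update driven by the instantaneous tracking error together with prediction errors computed on stored (historical) data. It is a simplified scalar-input sketch under assumed gains (gamma, kappa) and an assumed Euler discretization, not the paper's exact update law.

```python
# Sketch of a compact-support raised-cosine RBF and a composite NN weight update
# that uses both instantaneous data (tracking error) and stored historical data
# (prediction errors). Gains and the Euler discretization are assumptions.
import numpy as np

def raised_cosine_basis(x, centers, width):
    """Raised-cosine RBFs: each basis is nonzero only within `width` of its center."""
    r = np.abs(x - centers)
    return np.where(r < width, 0.5 * (1.0 + np.cos(np.pi * r / width)), 0.0)

def composite_update(W, phi_now, e_track, history, gamma=5.0, kappa=1.0, dt=1e-3):
    """One Euler step of dW = gamma * (phi_now * e_track + kappa * sum_j phi_j * eps_j)."""
    pred_term = np.zeros_like(W)
    for phi_j, y_j in history:           # stored regressor/output pairs
        eps_j = y_j - W @ phi_j          # prediction error on historical data
        pred_term += phi_j * eps_j
    return W + dt * gamma * (phi_now * e_track + kappa * pred_term)

# Example usage with hypothetical numbers.
centers = np.linspace(-1.0, 1.0, 5)
W = np.zeros(5)
phi = raised_cosine_basis(0.2, centers, width=0.6)
history = [(raised_cosine_basis(xj, centers, 0.6), 0.1 * xj) for xj in (-0.5, 0.0, 0.5)]
W = composite_update(W, phi, e_track=0.05, history=history)
```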

    Composite adaptive locally weighted learning control for multi-constraint nonlinear systems

    In this paper, a composite adaptive locally weighted learning (LWL) control approach is proposed for a class of uncertain nonlinear systems with constraints, including state constraints and asymmetric control saturation. The constraints are handled by treating the control input as an extended state variable and introducing barrier Lyapunov functions (BLFs) into the backstepping procedure. The system uncertainty is approximated by composite adaptive LWL neural networks (NNs), where a prediction error is constructed via a series-parallel identification model and the NN weights are updated by both the tracking error and the prediction error. The update law with composite error feedback improves both uncertainty approximation accuracy and trajectory tracking accuracy. The feasibility and effectiveness of the proposed approach are demonstrated by formal proofs and simulation results.
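
    For concreteness, a minimal sketch of two ingredients mentioned above follows: a log-type barrier Lyapunov function that keeps an error inside a prescribed bound, and a prediction error generated by a series-parallel identification model and fed, together with the tracking error, into the weight update. The simplified scalar dynamics and all symbols (k_b, a, gamma, kappa) are assumptions for illustration only.

```python
# Illustrative sketch: log-type barrier Lyapunov function, series-parallel
# identification model, and a composite weight update driven by both the
# tracking error and the prediction error. Not the paper's exact design.
import numpy as np

def barrier_lyapunov(z, k_b):
    """V(z) = 0.5 * ln(k_b^2 / (k_b^2 - z^2)); grows without bound as |z| -> k_b."""
    assert abs(z) < k_b, "constraint violated"
    return 0.5 * np.log(k_b**2 / (k_b**2 - z**2))

def series_parallel_step(x_hat, x, u, W, phi, a=10.0, dt=1e-3):
    """One Euler step of x_hat_dot = -a*(x_hat - x) + W^T phi + u; e_pred = x - x_hat."""
    x_hat_dot = -a * (x_hat - x) + W @ phi + u
    return x_hat + dt * x_hat_dot

def composite_weight_update(W, phi, e_track, e_pred, gamma=2.0, kappa=1.0, dt=1e-3):
    """Both the tracking error and the prediction error drive the NN weights."""
    return W + dt * gamma * phi * (e_track + kappa * e_pred)
```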

    Real-Time Progressive Learning: Mutually Reinforcing Learning and Control with Neural-Network-Based Selective Memory

    Memory, as the basis of learning, determines how knowledge is stored, updated, and forgotten, and thereby determines the efficiency of learning. Built around a memory mechanism, a radial basis function neural network (RBFNN) based learning control scheme named real-time progressive learning (RTPL) is proposed to learn the unknown dynamics of a system with guaranteed stability and closed-loop performance. Instead of the stochastic gradient descent (SGD) update law of adaptive neural control (ANC), RTPL adopts the selective memory recursive least squares (SMRLS) algorithm to update the weights of the RBFNN. Through SMRLS, the approximation capability of the RBFNN is distributed uniformly over the feature space, and the passive knowledge forgetting of the SGD method is thus suppressed. As a result, RTPL offers the following merits over classical ANC: 1) guaranteed learning capability under low-level persistent excitation (PE), 2) improved learning performance (learning speed, accuracy, and generalization capability), and 3) a low gain requirement that ensures robustness in practical applications. Moreover, RTPL-based learning and control gradually reinforce each other during task execution, making the scheme appropriate for long-term learning control tasks. As an example, RTPL is used to address the tracking control problem of a class of nonlinear systems with the RBFNN acting as an adaptive feedforward controller. Theoretical analysis and simulation studies demonstrate the effectiveness of RTPL.
    Comment: 16 pages, 15 figures.
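
    The idea of replacing the SGD update with a recursive least squares update that memorizes samples selectively can be sketched as follows. This is a naive illustration under assumed details (a novelty test based on feature-space distance, a forgetting factor lam); it is not the SMRLS algorithm from the paper.

```python
# Sketch: RBFNN weight update by recursive least squares, where only samples that
# are sufficiently novel in the feature space are memorized and used. The novelty
# rule and all parameters are assumptions, not the paper's SMRLS algorithm.
import numpy as np

class SelectiveRLS:
    def __init__(self, n_features, novelty_thresh=0.2, lam=1.0):
        self.W = np.zeros(n_features)          # RBFNN output weights
        self.P = 1e3 * np.eye(n_features)      # RLS covariance matrix
        self.memory = []                       # memorized feature vectors
        self.novelty_thresh = novelty_thresh
        self.lam = lam                         # forgetting factor (1.0 = none)

    def _is_novel(self, phi):
        return all(np.linalg.norm(phi - m) > self.novelty_thresh for m in self.memory)

    def update(self, phi, y):
        if not self._is_novel(phi):
            return self.W                      # skip data already covered in feature space
        self.memory.append(phi.copy())
        Pphi = self.P @ phi                    # standard RLS step below
        k = Pphi / (self.lam + phi @ Pphi)
        self.W = self.W + k * (y - self.W @ phi)
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.W
```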

    Dynamic Structural Neural Network

    Artificial neural networks (ANNs) have been widely applied in pattern recognition, classification, and machine learning thanks to their high performance. Most ANNs have a static structure whose weights are trained by supervised or unsupervised methods. These training methods require a set of initial weight values, normally generated at random, and different initial weights lead to different converged ANNs for the same training set. To address these drawbacks, dynamic ANNs have attracted attention in recent years. However, existing dynamic ANNs are either too complex or far from practical applications such as pathology prediction in binary multi-input multi-output (MIMO) problems: when each symptom is treated as an agent, the pathology predictor's outcome is formed by the actions of the active agents, while the activities of the other agents seem to be ignored or to have only mirror effects. In this paper, we propose a new dynamic structural ANN for MIMO problems based on a dependency graph, which gives clear cause-and-effect relationships between inputs and outputs. The new ANN has a dynamic hidden-layer structure in the form of a directed graph that captures the relations between input, hidden, and output nodes. The properties of the new dynamic structural ANN are evaluated on a pathology problem, and the performances of its learning methods are compared on a real, well-known dataset. The results show that both structural learning approaches improve the quality of the ANN over the learning iterations.
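
    One way to picture a dependency-graph-constrained hidden layer is as a masked feedforward pass in which only the input-hidden and hidden-output connections present in the graph carry weights. The sketch below is an assumed illustration; the masking scheme, layer sizes, and activation functions are not taken from the paper.

```python
# Sketch: a forward pass whose connectivity is restricted by dependency-graph
# masks, so absent input-hidden and hidden-output edges carry no weight.
# The masks, sizes, and activations here are illustrative assumptions.
import numpy as np

def masked_forward(x, W_in, W_out, mask_in, mask_out):
    """Masked forward pass: zeroed mask entries remove the corresponding edges."""
    h = np.tanh((W_in * mask_in) @ x)                     # input -> hidden
    return 1.0 / (1.0 + np.exp(-(W_out * mask_out) @ h))  # hidden -> output

# Example: 3 symptoms (inputs), 2 hidden nodes, 2 pathologies (outputs).
rng = np.random.default_rng(0)
mask_in = np.array([[1, 1, 0],    # hidden node 0 depends on symptoms 0 and 1
                    [0, 1, 1]])   # hidden node 1 depends on symptoms 1 and 2
mask_out = np.array([[1, 0],      # pathology 0 depends on hidden node 0
                     [1, 1]])     # pathology 1 depends on both hidden nodes
W_in = rng.normal(size=mask_in.shape)
W_out = rng.normal(size=mask_out.shape)
print(masked_forward(np.array([1.0, 0.0, 1.0]), W_in, W_out, mask_in, mask_out))
```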