
    Lyapunov-Based Dropout Deep Neural Network (Lb-DDNN) Controller

    Deep neural network (DNN)-based adaptive controllers can compensate for unstructured uncertainties in nonlinear dynamic systems. However, DNNs are susceptible to overfitting and co-adaptation. Dropout regularization alleviates these issues by randomly dropping nodes during training. In this paper, a dropout DNN-based adaptive controller is developed in which stochastically selected weights are deactivated within each individual layer of the DNN. Simultaneously, a Lyapunov-based real-time weight adaptation law updates the weights of all layers of the DNN for online unsupervised learning. A nonsmooth Lyapunov-based stability analysis ensures asymptotic convergence of the tracking error. Simulation results for the developed dropout DNN-based adaptive controller indicate a 38.32% improvement in the tracking error, a 53.67% improvement in the function approximation error, and 50.44% lower control effort when compared to a baseline adaptive DNN-based controller without dropout regularization.
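The combination of per-layer stochastic weight deactivation and an online tracking-error-driven weight update described above can be sketched as follows. This is a minimal illustration, not the paper's controller: the network sizes, gains, and the placeholder state and tracking-error signals are all assumptions, and the update rule is a simplified gradient-like stand-in for the Lyapunov-based adaptation law.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer DNN; sizes, dropout rate, and gain are illustrative.
n_in, n_hidden, n_out = 2, 8, 2
W1 = rng.standard_normal((n_hidden, n_in)) * 0.1
W2 = rng.standard_normal((n_out, n_hidden)) * 0.1
p_drop = 0.5   # per-layer dropout probability
gamma = 0.05   # adaptation gain (a stand-in for the paper's gain matrix)

def forward(x, masks):
    # Dropout deactivates stochastically selected weights in each layer.
    h = np.tanh((W1 * masks[0]) @ x)
    return (W2 * masks[1]) @ h, h

for step in range(100):
    x = rng.standard_normal(n_in)          # state (placeholder signal)
    e = rng.standard_normal(n_out) * 0.1   # tracking error (placeholder signal)
    masks = [rng.random(W1.shape) > p_drop,
             rng.random(W2.shape) > p_drop]
    y, h = forward(x, masks)
    # Gradient-like real-time update, applied only to the active weights:
    W2 += gamma * np.outer(e, h) * masks[1]
    W1 += gamma * np.outer((W2 * masks[1]).T @ e * (1 - h**2), x) * masks[0]
```

Resampling the masks at every step means each weight is updated only while it is active, mirroring the layer-wise dropout scheme described in the abstract.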

    Composite Adaptive Lyapunov-Based Deep Neural Network (Lb-DNN) Controller

    Recent advancements in adaptive control have equipped deep neural network (DNN)-based controllers with Lyapunov-based adaptation laws that work across a range of DNN architectures to uniquely enable online learning. However, these adaptation laws are driven by the tracking error and provide convergence guarantees only for the tracking error, with no conclusions about parameter estimation performance. Motivated to provide guarantees on DNN parameter estimation, this paper presents the first composite adaptation result for adaptive Lyapunov-based DNN controllers, which uses the Jacobian of the DNN together with a prediction error of the dynamics computed via a novel method involving an observer of the dynamics. A Lyapunov-based stability analysis guarantees that the tracking, observer, and parameter estimation errors are uniformly ultimately bounded (UUB), with stronger performance guarantees when the DNN's Jacobian satisfies the persistence of excitation (PE) condition. Comparative simulation results demonstrate a significant performance improvement of the developed composite adaptive Lb-DNN controller over the tracking error-based Lb-DNN controller.
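The role of the prediction error in composite adaptation can be illustrated with a model that is linear in its parameters. This is a simplified sketch under stated assumptions: the hand-chosen regressor stands in for the DNN Jacobian, the observer-based prediction-error computation from the paper is reduced to a directly measured prediction error, and all gains, signals, and parameter values are illustrative.

```python
import numpy as np

# Unknown parameters of a model y(t) = Y(t) @ theta (placeholder values).
theta_true = np.array([1.5, -0.7])
theta_hat = np.zeros(2)   # online parameter estimate
Gamma = 2.0               # adaptation gain
dt = 0.01                 # Euler integration step

for step in range(5000):
    t = step * dt
    # Regressor standing in for the DNN Jacobian; sin/cos terms make it
    # persistently exciting (PE) along this trajectory.
    Y = np.array([np.sin(t), np.cos(2 * t)])
    eps = Y @ theta_true - Y @ theta_hat   # prediction error of the dynamics
    theta_hat += dt * Gamma * Y * eps      # prediction-error-driven update
```

Because the regressor is persistently exciting, the prediction-error term drives the parameter estimate toward the true parameters, which is the kind of parameter-estimation guarantee the tracking-error-only adaptation law cannot provide.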