
    Adaptive Predictive Control Using Neural Network for a Class of Pure-feedback Systems in Discrete-time

    IEEE Transactions on Neural Networks, vol. 19, no. 9, pp. 1599-1614. DOI: 10.1109/TNN.2008.2000446

    Output feedback NN control for two classes of discrete-time systems with unknown control directions in a unified approach

    IEEE Transactions on Neural Networks, vol. 19, no. 11, pp. 1873-1886. DOI: 10.1109/TNN.2008.2003290

    Global parameter identification and control of nonlinearly parameterized systems

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2002. Includes bibliographical references (leaves 109-114). Nonlinearly parameterized (NLP) systems are ubiquitous in nature and in many fields of science and engineering. Despite the wide and diverse range of applications, relatively few results in the control systems literature exploit the structure of the nonlinear parameterization. The vast majority of presently applicable global control design approaches to systems with NLP make use of either feedback linearization or assume linear parameterization, and ignore the specific structure of the nonlinear parameterization. While this type of approach may guarantee stability, it introduces three major drawbacks. First, it produces no additional information about the nonlinear parameters. Second, it may require large control authority and actuator bandwidth, which makes it unsuitable for some applications. Third, it may simply result in unacceptably poor performance. All of these inadequacies are amplified further when parametric uncertainties are present. What is needed is a systematic adaptive approach to identification and control of such systems that explicitly accommodates the presence of nonlinear parameters that may not be known precisely. This thesis presents results in both adaptive identification and control of NLP systems. An adaptive controller is presented for NLP systems with a triangular structure. The presence of the triangular structure together with nonlinear parameterization makes standard methods such as back-stepping and variable structure control inapplicable. A concept of bounding functions is combined with min-max adaptation strategies and a recursive error formulation to yield a globally stabilizing controller. A large class of nonlinear systems, including cascaded LNL (linear-nonlinear-linear) systems, is shown to be controllable using this approach.
In the context of parameter identification, results are derived for two classes of NLP systems. The first concerns systems with convex/concave parameterization, where min-max algorithms are essential for global stability. Stronger conditions of persistent excitation are shown to be necessary to overcome the presence of multiple equilibrium points introduced by the stabilization aspects of the min-max algorithms. These conditions imply that the min-max estimator must periodically employ local gradient information in order to guarantee parameter convergence. The second class of NLP systems considered in this thesis concerns monotonically parameterized systems, of which neural networks are a specific example. It is shown that a simple algorithm based on local gradient information suffices for parameter identification. Conditions on the external input under which the parameter estimates converge to the desired set, starting from arbitrary initial values, are derived. The proof makes direct use of the monotonicity in the parameters, which allows local gradients to be self-similar and thereby introduces a desirable invariance property. By suitably exploiting this invariance property and defining a sequence of distance metrics, global convergence is proved. Such a proof of global convergence is in contrast to most other existing results on nonlinear parameterization in general and neural networks in particular. By Aleksandar M. Kojić, Ph.D.
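The second identification result above — a simple local-gradient algorithm sufficing for a monotonically parameterized model — can be illustrated with a toy sketch. The scalar model y = tanh(theta * x), the input distribution, and the step size below are illustrative assumptions, not details taken from the thesis:

```python
import numpy as np

# Toy sketch: gradient-based identification of a monotonically
# parameterized model y = tanh(theta * x). Model, input distribution,
# and step size are illustrative assumptions.

def identify(theta_true=1.5, steps=2000, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    theta_hat = 0.0                      # arbitrary initial estimate
    for _ in range(steps):
        x = rng.uniform(-2.0, 2.0)       # a persistently exciting input
        y = np.tanh(theta_true * x)      # measured plant output
        y_hat = np.tanh(theta_hat * x)   # model prediction
        # local gradient of 0.5 * (y_hat - y)**2 w.r.t. theta_hat
        grad = (y_hat - y) * (1.0 - y_hat**2) * x
        theta_hat -= lr * grad
    return theta_hat
```

Because the model output is monotone in the parameter, the prediction-error surface has no spurious minima here, and the estimate approaches the true value from an arbitrary starting point, in line with the global-convergence claim of the abstract.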

    Variable neural networks for adaptive control of nonlinear systems

    This paper is concerned with the adaptive control of continuous-time nonlinear dynamical systems using neural networks. A novel neural network architecture, referred to as a variable neural network, is proposed and shown to be useful in approximating the unknown nonlinearities of dynamical systems. In the variable neural networks, the number of basis functions can be either increased or decreased with time, according to specified design strategies, so that the network will not overfit or underfit the data set. Based on the Gaussian radial basis function (GRBF) variable neural network, an adaptive control scheme is presented. The location of the centers and the determination of the widths of the GRBFs in the variable neural network are analyzed to make a compromise between orthogonality and smoothness. The weight-adaptive laws developed using the Lyapunov synthesis approach guarantee the stability of the overall control scheme, even in the presence of modeling errors. The tracking errors converge to the required accuracy through the adaptive control algorithm derived by combining the variable neural network and Lyapunov synthesis techniques. The operation of an adaptive control scheme using the variable neural network is demonstrated using two simulated examples.
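The certainty-equivalence core of such a scheme can be sketched for a scalar plant. Everything below — the plant nonlinearity, the fixed GRBF grid, and the gains — is an illustrative assumption; the paper's variable network additionally adds or removes basis functions online:

```python
import numpy as np

# Minimal sketch: GRBF-network adaptive tracking control for a scalar
# plant x_dot = f(x) + u with f unknown, reference x_d(t) = sin(t).
# Plant, basis grid, and gains are illustrative assumptions.

def rbf(x, centers, width):
    # Gaussian radial basis functions evaluated at scalar x
    return np.exp(-((x - centers) ** 2) / (2.0 * width**2))

def simulate(T=20.0, dt=0.001, k=5.0, gamma=50.0):
    centers = np.linspace(-2.0, 2.0, 11)       # assumed GRBF centers
    width = 0.4                                # assumed GRBF width
    W = np.zeros_like(centers)                 # adaptive output weights
    f = lambda x: -x + 0.5 * np.sin(2.0 * x)   # "unknown" plant nonlinearity
    x, errs = 0.5, []
    for i in range(int(T / dt)):
        t = i * dt
        xd, xd_dot = np.sin(t), np.cos(t)      # reference and its derivative
        e = x - xd                             # tracking error
        phi = rbf(x, centers, width)
        u = -k * e - W @ phi + xd_dot          # certainty-equivalence control
        W = W + dt * gamma * e * phi           # Lyapunov-based weight-adaptive law
        x = x + dt * (f(x) + u)                # Euler step of the plant
        errs.append(abs(e))
    return errs
```

With V = e**2/2 + (W - W*) @ (W - W*)/(2*gamma), the weight law above cancels the cross term and leaves V_dot = -k*e**2 up to the network's approximation error, so the tracking error should decay toward a small residual set as the weights adapt.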

    A new class of wavelet networks for nonlinear system identification

    A new class of wavelet networks (WNs) is proposed for nonlinear system identification. In the new networks, the model structure for a high-dimensional system is chosen to be a superimposition of a number of functions with fewer variables. By expanding each function using truncated wavelet decompositions, the multivariate nonlinear networks can be converted into linear-in-the-parameters regressions, which can be solved using least-squares type methods. An efficient model term selection approach based upon a forward orthogonal least squares (OLS) algorithm and the error reduction ratio (ERR) is applied to solve the linear-in-the-parameters problem in the present study. The main advantage of the new WN is that it exploits the attractive features of multiscale wavelet decompositions and the capability of traditional neural networks. By adopting the analysis of variance (ANOVA) expansion, WNs can now handle nonlinear identification problems in high dimensions.
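The forward OLS/ERR selection step that the abstract relies on can be sketched on a toy linear-in-the-parameters problem. The candidate terms, data, and stopping threshold below are illustrative assumptions (polynomial regressors stand in for the wavelet dictionary):

```python
import numpy as np

# Sketch: forward orthogonal least squares (OLS) with the error
# reduction ratio (ERR) for term selection on a toy
# linear-in-the-parameters regression. Candidates, data, and the
# stopping threshold are illustrative assumptions.

def forward_ols(P, y, err_tol=1e-3):
    """Greedily select columns of P by largest ERR, orthogonalizing
    the remaining candidates against each selected term."""
    P = P.astype(float).copy()
    r = y.astype(float).copy()           # current residual of y
    yy = float(y @ y)
    selected = []
    for _ in range(P.shape[1]):
        num = (P.T @ r) ** 2             # squared projections onto candidates
        den = np.einsum('ij,ij->j', P, P) * yy
        err = np.divide(num, den, out=np.zeros_like(num), where=den > 1e-12)
        j = int(np.argmax(err))
        if err[j] < err_tol:             # no candidate reduces error enough
            break
        selected.append(j)
        q = P[:, j] / np.linalg.norm(P[:, j])
        r = r - (q @ r) * q              # deflate the residual
        P = P - np.outer(q, q @ P)       # orthogonalize remaining candidates
        P[:, j] = 0.0                    # exclude the chosen column
    return selected
```

On data generated as y = 2*x + x**3 with candidates [x, x**2, x**3, noise], the procedure should pick out the two true terms and reject the spurious ones, which is the behavior the abstract's term-selection stage depends on.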