2,123 research outputs found

    On the validity of memristor modeling in the neural network literature

    An analysis of the literature shows that two types of non-memristive models have been widely used in the modeling of so-called "memristive" neural networks. Here, we demonstrate that such models have nothing in common with the concept of a memristive element: they describe either non-linear resistors or certain bi-state systems, all of which are devices without memory. Therefore, the results presented in a significant number of publications are at least questionable, if not completely irrelevant to the actual field of memristive neural networks.
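
    For context, a memristive system in the sense of Chua and Kang couples a state-dependent Ohm's law to an internal state equation; it is this internal state that supplies the memory the authors argue is absent from the criticized models. A standard current-controlled formulation (stated here only for orientation, not taken from the paper) is

        \[ v(t) = R\big(x(t), i(t)\big)\, i(t), \qquad \frac{dx}{dt} = f\big(x(t), i(t)\big), \]

    where x(t) is the internal state. A plain non-linear resistor, by contrast, obeys v = R(i)\, i with no state variable, so its response depends only on the instantaneous input.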

    Recent Advances and Applications of Fractional-Order Neural Networks

    This paper focuses on the growth, development, and future of various forms of fractional-order neural networks. Multiple advances in structure, learning algorithms, and methods have been critically investigated and summarized. This also includes the recent trends in the dynamics of various fractional-order neural networks. The multiple forms of fractional-order neural networks considered in this study are Hopfield, cellular, memristive, complex, and quaternion-valued based networks. Further, the applications of fractional-order neural networks in various computational fields, such as system identification, control, optimization, and stability analysis, have been critically analyzed and discussed.
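
    As a point of reference (standard notation, not drawn from the survey itself), the fractional-order Hopfield-type networks in this literature are usually written with a Caputo derivative of order 0 < \alpha < 1:

        \[ {}^{C}\!D^{\alpha}_{t}\, x_i(t) = -c_i x_i(t) + \sum_{j=1}^{n} a_{ij} f_j\big(x_j(t)\big) + I_i, \qquad
           {}^{C}\!D^{\alpha}_{t}\, x(t) = \frac{1}{\Gamma(1-\alpha)} \int_{0}^{t} (t-s)^{-\alpha}\, x'(s)\, ds. \]

    Setting \alpha = 1 recovers the classical integer-order model; the memristive, complex-valued, and quaternion-valued variants replace the weights, states, and activations accordingly.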

    Finite-time lag projective synchronization of delayed fractional-order quaternion-valued neural networks with parameter uncertainties

    This paper addresses the finite-time lag projective synchronization (FTLPS) of a class of delayed fractional-order quaternion-valued neural networks (FOQVNNs) with parameter uncertainties, which is solved by a non-decomposition method. Firstly, a new delayed FOQVNN model with uncertain parameters is designed. Secondly, two types of controllers, a feedback controller and an adaptive controller without sign functions, are designed in the quaternion domain. Based on the Lyapunov analysis method, the non-decomposition approach replaces the decomposition method, which requires complex calculations, and is combined with several quaternion inequality techniques to accurately estimate the settling time of FTLPS. Finally, the correctness of the obtained theoretical results is verified by a numerical simulation example.
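
    For readers new to the terminology, lag projective synchronization is commonly defined through an error of the following general form (illustrative notation, not necessarily the paper's):

        \[ e(t) = y(t) - \lambda\, x(t - \sigma), \]

    where x and y are the master and slave states, \lambda is the projective coefficient, and \sigma is the lag. Finite-time lag projective synchronization requires \|e(t)\| = 0 for all t \ge T, where the settling time T depends on the initial error and the controller parameters; here this T is estimated directly in the quaternion domain rather than after decomposing the network into real or complex subsystems.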

    Finite-time stabilization for fractional-order inertial neural networks with time varying delays

    This paper deals with the finite-time stabilization of fractional-order inertial neural networks with time-varying delays (FOINNs). Firstly, through a properly chosen variable substitution, the system is transformed into a first-order fractional differential equation. Secondly, by constructing Lyapunov functionals and using analytical techniques, together with new control schemes (including delay-dependent and delay-free controllers), novel and effective criteria are established to attain finite-time stabilization of the addressed system. Finally, two examples are used to illustrate the effectiveness and feasibility of the obtained results.
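
    To illustrate the kind of variable substitution referred to above (a generic sketch with delay terms omitted, not the paper's exact construction): the inertial term means that the model contains a nested fractional derivative, e.g.

        \[ {}^{C}\!D^{\alpha}\big({}^{C}\!D^{\alpha} x_i(t)\big) = -a_i\, {}^{C}\!D^{\alpha} x_i(t) - b_i x_i(t) + \sum_{j} c_{ij} f_j\big(x_j(t)\big) + I_i, \]

    and introducing the auxiliary variable y_i(t) = {}^{C}\!D^{\alpha} x_i(t) + \xi_i x_i(t), with a tunable constant \xi_i > 0, removes the nesting:

        \[ {}^{C}\!D^{\alpha} x_i = -\xi_i x_i + y_i, \qquad
           {}^{C}\!D^{\alpha} y_i = -(a_i - \xi_i)\, y_i + \big(\xi_i(a_i - \xi_i) - b_i\big) x_i + \sum_{j} c_{ij} f_j(x_j) + I_i. \]

    The resulting first-order system is then amenable to standard Lyapunov arguments.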

    Finite-time adaptive synchronization of fractional-order delayed quaternion-valued fuzzy neural networks

    Based on the direct quaternion method, this paper explores the finite-time adaptive synchronization (FAS) of fractional-order delayed quaternion-valued fuzzy neural networks (FODQVFNNs). Firstly, a useful fractional differential inequality is established, which offers an effective way to investigate FAS. Then two novel quaternion-valued adaptive control strategies are designed. By means of the newly proposed inequality, basic properties of fractional calculus, proof by contradiction, and several inequality techniques for quaternions and fuzzy logic, several sufficient FAS criteria are derived for FODQVFNNs. Moreover, the settling time of FAS is estimated; it depends on the order and initial values of the considered systems as well as on the controller parameters. Finally, the validity of the obtained FAS criteria is corroborated by numerical simulations.
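
    The dependence of the settling time on the fractional order and the initial values is typical of finite-time results for Caputo systems. A frequently used lemma of this flavor (given only for orientation, not as the paper's new inequality) states that if a Lyapunov function satisfies

        \[ {}^{C}\!D^{\alpha} V(t) \le -c \quad \text{whenever } V(t) > 0, \qquad c > 0, \]

    then V reaches zero no later than

        \[ T \le \left( \frac{\Gamma(\alpha + 1)\, V(0)}{c} \right)^{1/\alpha}, \]

    which makes the roles of the order \alpha, the initial condition V(0), and the control-dependent constant c explicit.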

    Synchronization of a class of fractional-order neural networks with multiple time delays by comparison principles

    This paper studies the synchronization of fractional-order neural networks with multiple time delays. Based on a fractional-order inequality and comparison principles for linear fractional equations with multiple time delays, some sufficient conditions for the synchronization of master-slave systems are obtained. An example and related simulations are given to demonstrate the feasibility of the theoretical results.
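
    As a purely illustrative companion (the network, parameter values, and controller below are assumptions, not the paper's example or method), a master-slave pair of delayed fractional-order Hopfield-type networks can be simulated with the explicit Grunwald-Letnikov scheme; with a feedback gain large enough to dominate the connection weights, the synchronization error decays toward zero:

        # Minimal numerical sketch: master-slave synchronization of a two-neuron
        # delayed fractional-order Hopfield-type network, integrated with the
        # explicit Grunwald-Letnikov scheme. All parameters are assumed values.
        import numpy as np

        alpha, h, T = 0.95, 0.01, 10.0              # fractional order, step size, horizon
        tau = 0.5                                   # single discrete delay (assumed)
        n, d = int(T / h), int(round(tau / h))

        C = np.diag([1.0, 1.0])                     # self-feedback rates
        A = np.array([[ 2.0, -0.1],
                      [-5.0,  3.0]])                # instantaneous connection weights
        B = np.array([[-1.5, -0.1],
                      [-0.2, -2.5]])                # delayed connection weights
        k = 12.0                                    # feedback gain, chosen to dominate the couplings
        f = np.tanh                                 # activation function

        def rhs(state, delayed):
            """Right-hand side of the delayed fractional-order network."""
            return -C @ state + A @ f(state) + B @ f(delayed)

        # Grunwald-Letnikov coefficients: c_0 = 1, c_j = (1 - (alpha + 1)/j) * c_{j-1}
        c = np.ones(n + 1)
        for j in range(1, n + 1):
            c[j] = (1.0 - (1.0 + alpha) / j) * c[j - 1]

        x = np.zeros((n + 1, 2)); x[0] = [0.4, 0.6]     # master trajectory
        y = np.zeros((n + 1, 2)); y[0] = [-0.8, 0.5]    # slave trajectory

        for i in range(1, n + 1):
            xd, yd = x[max(i - 1 - d, 0)], y[max(i - 1 - d, 0)]   # delayed states (constant history)
            u = -k * (y[i - 1] - x[i - 1])                        # error-feedback controller on the slave
            # GL step: x_i = h**alpha * rhs(x_{i-1}) - sum_{j=1}^{i} c_j * x_{i-j}
            x[i] = h**alpha * rhs(x[i - 1], xd) - c[1:i + 1] @ x[i - 1::-1]
            y[i] = h**alpha * (rhs(y[i - 1], yd) + u) - c[1:i + 1] @ y[i - 1::-1]

        for i in range(0, n + 1, n // 5):                         # sample the synchronization error
            print(f"t = {i * h:5.2f}   ||y - x|| = {np.linalg.norm(y[i] - x[i]):.6f}")

    The memory sums above keep the full history required by the fractional derivative; for long horizons a truncated short-memory window is commonly used to bound the per-step cost.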

    Finite-time projective synchronization of fractional-order delayed quaternion-valued fuzzy memristive neural networks

    In this paper, the finite-time projective synchronization (FTPS) problem of fractional-order quaternion-valued fuzzy memristive neural networks (FOQVFMNNs) is studied. By designing a feedback controller with sign functions and an adaptive controller, sufficient conditions for FTPS of FOQVFMNNs are obtained. Furthermore, the settling time for synchronization is calculated. Finally, the practicability of the conclusions is verified by numerical simulations.
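
    For orientation (generic forms, not the paper's exact controllers): projective synchronization uses the error e(t) = y(t) - \beta x(t) with a projective coefficient \beta, and a finite-time feedback controller built on sign functions typically has the shape

        \[ u(t) = -k_1\, e(t) - k_2\, \mathrm{sgn}\big(e(t)\big), \qquad k_1, k_2 > 0, \]

    where the discontinuous sign term is what drives the error to exactly zero in finite time rather than only asymptotically; an adaptive variant replaces the fixed gains with gains updated online from the error.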

    Fundamentals of Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) Network

    Because of their effectiveness in broad practical applications, LSTM networks have received a wealth of coverage in scientific journals, technical blogs, and implementation guides. However, in most articles, the inference formulas for the LSTM network and its parent, RNN, are stated axiomatically, while the training formulas are omitted altogether. In addition, the technique of "unrolling" an RNN is routinely presented without justification throughout the literature. The goal of this paper is to explain the essential RNN and LSTM fundamentals in a single document. Drawing from concepts in signal processing, we formally derive the canonical RNN formulation from differential equations. We then propose and prove a precise statement, which yields the RNN unrolling technique. We also review the difficulties with training the standard RNN and address them by transforming the RNN into the "Vanilla LSTM" network through a series of logical arguments. We provide all equations pertaining to the LSTM system together with detailed descriptions of its constituent entities. Albeit unconventional, our choice of notation and the method for presenting the LSTM system emphasizes ease of understanding. As part of the analysis, we identify new opportunities to enrich the LSTM system and incorporate these extensions into the Vanilla LSTM network, producing the most general LSTM variant to date. The target reader has already been exposed to RNNs and LSTM networks through numerous available resources and is open to an alternative pedagogical approach. A Machine Learning practitioner seeking guidance for implementing our new augmented LSTM model in software for experimentation and research will find the insights and derivations in this tutorial valuable as well.
    Comment: 43 pages, 10 figures, 78 references
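
    For reference, the "Vanilla LSTM" cell in the commonly used notation (which differs from the paper's own) computes, at step t,

        \[ \begin{aligned}
           f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f), & i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i), \\
           o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o), & \tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c), \\
           c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t, & h_t &= o_t \odot \tanh(c_t),
           \end{aligned} \]

    where \sigma is the logistic sigmoid and \odot the element-wise product; the extensions proposed in the paper build on this core recurrence.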