44 research outputs found

    Global exponential stability of impulsive dynamical systems with distributed delays

    In this paper, the global exponential stability of dynamical systems with distributed delays and impulsive effects is investigated. By establishing an impulsive integro-differential inequality, we obtain sufficient conditions ensuring the global exponential stability of the dynamical system. Three examples are given to illustrate the effectiveness of the theoretical results.
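The class of systems this abstract studies can be illustrated numerically. The sketch below is my own toy example, not the paper's system or its inequality: a scalar equation with a distributed delay and impulsive jumps, integrated with forward Euler. All parameter values are illustrative; the trajectory decays when the decay rate dominates the delayed feedback and the jump factor has magnitude at most one.

```python
import numpy as np

# Toy impulsive system with a distributed delay (illustrative parameters):
#   x'(t) = -a*x(t) + b * integral_{t-tau}^{t} x(s) ds,
# with impulsive jumps x(t_k+) = c * x(t_k-) at t_k = 1, 2, 3, ...
a, b, c, tau = 2.0, 0.5, 0.8, 1.0
dt = 0.01
steps = int(10.0 / dt)
delay_steps = int(tau / dt)

x = np.empty(steps + 1)
x[0] = 1.0
buf = [1.0] * delay_steps          # constant initial function on [-tau, 0]

for k in range(steps):
    integral = sum(buf) * dt       # crude quadrature of the distributed-delay term
    x[k + 1] = x[k] + dt * (-a * x[k] + b * integral)
    t_next = (k + 1) * dt
    if abs(t_next - round(t_next)) < dt / 2 and round(t_next) >= 1:
        x[k + 1] *= c              # impulsive jump at integer times
    buf.append(x[k + 1]); buf.pop(0)

print(x[-1])                       # close to zero: exponential decay survives the impulses
```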

    Asymptotic Stability and Exponential Stability of Impulsive Delayed Hopfield Neural Networks

    A criterion for the uniform asymptotic stability of the equilibrium point of impulsive delayed Hopfield neural networks is presented by using Lyapunov functions and a linear matrix inequality (LMI) approach. The criterion is a less restrictive version of a recent result. By constructing an extended impulsive Halanay inequality, we also analyze the exponential stability of impulsive delayed Hopfield neural networks and obtain new sufficient conditions ensuring exponential stability of the equilibrium point. An example showing the effectiveness of the present criterion is given.
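To make the role of a Halanay-type inequality concrete: in its classical (non-impulsive) form, if D+ v(t) <= -a*v(t) + b*sup over [t-tau, t] of v(s) with a > b > 0, then v decays like exp(-lam*t), where lam > 0 is the unique root of lam = a - b*exp(lam*tau). This is the classical inequality, not the extended impulsive version the paper constructs; the sketch below just computes that decay rate by bisection for illustrative parameter values.

```python
import math

# Decay rate from the classical Halanay inequality: the unique positive root of
#   f(lam) = lam - a + b*exp(lam*tau) = 0,  valid when a > b > 0.
a, b, tau = 3.0, 1.0, 0.5

def decay_rate(a, b, tau, iters=200):
    """Bisection on [0, a-b]: f(0) = b - a < 0 and f(a-b) >= 0, so a root exists."""
    lo, hi = 0.0, a - b
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mid - a + b * math.exp(mid * tau) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam = decay_rate(a, b, tau)
print(lam)   # exponential decay rate of the bound v(t) <= v_max(0) * exp(-lam*t)
```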

    THERAPEUTIC VIDEO GAMES AND THE SIMULATION OF EXECUTIVE FUNCTION DEFICITS IN ADHD

    Attention Deficit Hyperactivity Disorder (ADHD) is a neurodevelopmental disorder characterized by difficulty paying attention, impulsivity, and hyperactivity. Diagnosis of ADHD rose 42% from 2003–2004 to 2011–2012, and in 2011, 3.5 million children were treated with drugs. Optimizing therapy can take a year and may not be completely effective. A clinical trial of a drug/device combination using the computer game Minecraft is currently being conducted to determine how certain activities affect executive function, working memory, and restraint in patients diagnosed with ADHD. The human subjects' responses are being modeled using artificial neural networks (ANNs), an artificial intelligence method that can be used to interpret highly complex data. We propose using ANNs to optimize drug and Minecraft therapy for individual patients based on their initial NICHQ Vanderbilt assessment scores. We are applying ANNs in the development of computational models of executive function deficiencies in ADHD. These models will then be used to develop a therapeutic video game as a drug/device combination with stimulants for the treatment of ADHD symptoms in Fragile X Syndrome. As a first step towards the design of virtual subjects with executive function deficits, computational models of the core executive functions working memory and fluid intelligence were constructed. These models were combined to create healthy-control and executive function-deficient virtual subjects, who performed a Time Management task simulation that required their executive functions to complete. The preliminary working memory model used a convolutional neural network to identify handwritten digits from the MNIST dataset, and the fluid intelligence model used a basic recurrent neural network to produce sequences of integers in the range 1–9 that can be multiplied together to produce the number 12. A simplified Impulsivity function was also included in the virtual subject as a first step towards the future inclusion of the core executive function inhibition.
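The fluid intelligence model's target behavior described above has a simple ground truth that can serve as a generator or checker for training and evaluation. The brute-force enumeration below (my own sketch, not the paper's recurrent network) lists every fixed-length sequence over 1–9 whose entries multiply to 12.

```python
from itertools import product as cartesian
from math import prod

def sequences_with_product(target=12, length=3):
    """All length-`length` tuples over 1..9 whose entries multiply to `target`."""
    return [seq for seq in cartesian(range(1, 10), repeat=length)
            if prod(seq) == target]

# For length 3 these are the orderings of (1,2,6), (1,3,4), and (2,2,3).
print(sequences_with_product())
```

An enumeration like this could label outputs of the recurrent model as correct or incorrect during evaluation.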

    A switching control for finite-time synchronization of memristor-based BAM neural networks with stochastic disturbances

    This paper deals with the finite-time stochastic synchronization of a class of memristor-based bidirectional associative memory neural networks (MBAMNNs) with time-varying delays and stochastic disturbances. Firstly, based on the physical properties of the memristor and the circuit of MBAMNNs, an MBAMNNs model with more reasonable switching conditions is established. Then, based on the theory of Filippov solutions, using Lyapunov–Krasovskii functionals and stochastic analysis techniques, a sufficient condition is given to ensure the finite-time stochastic synchronization of MBAMNNs under a certain controller. Next, through a further discussion, an error-dependent switching controller is given to shorten the stochastic settling time. Finally, numerical simulations are carried out to illustrate the effectiveness of the theoretical results.
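The finite-time mechanism behind such controllers can be seen in a much simpler deterministic, scalar setting (my own sketch, far simpler than the paper's stochastic MBAMNN error dynamics): a fractional-power feedback term drives the error to zero in finite time, with a settling time bounded by |e0|^(1-alpha) / (k2*(1-alpha)) for the fractional term alone.

```python
import math

# Scalar error dynamics under a finite-time feedback law (illustrative values):
#   e'(t) = -k1*e - k2*sign(e)*|e|**alpha,   0 < alpha < 1.
k1, k2, alpha = 1.0, 2.0, 0.5
dt, e, t = 1e-4, 1.0, 0.0

# Integrate until the error is (numerically) extinguished.
while abs(e) > 1e-6 and t < 5.0:
    e += dt * (-k1 * e - k2 * math.copysign(abs(e) ** alpha, e))
    t += dt

# Settling-time bound from the fractional term alone (k1 only speeds things up).
bound = abs(1.0) ** (1 - alpha) / (k2 * (1 - alpha))   # = 1.0 here
print(t, bound)
```

A plain linear feedback (k2 = 0) would only reach zero asymptotically; the |e|^alpha term is what makes the settling time finite, which is the property the paper's switching controller then shortens.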

    Associative neural networks: properties, learning, and applications.

    by Chi-sing Leung. Thesis (Ph.D.)--Chinese University of Hong Kong, 1994. Includes bibliographical references (leaves 236-244).
    Chapter 1  Introduction
        1.1  Background of Associative Neural Networks
        1.2  A Distributed Encoding Model: Bidirectional Associative Memory
        1.3  A Direct Encoding Model: Kohonen Map
        1.4  Scope and Organization
        1.5  Summary of Publications
    Part I  Bidirectional Associative Memory: Statistical Properties and Learning
    Chapter 2  Introduction to Bidirectional Associative Memory
        2.1  Bidirectional Associative Memory and its Encoding Method
        2.2  Recall Process of BAM
        2.3  Stability of BAM
        2.4  Memory Capacity of BAM
        2.5  Error Correction Capability of BAM
        2.6  Chapter Summary
    Chapter 3  Memory Capacity and Statistical Dynamics of First Order BAM
        3.1  Introduction
        3.2  Existence of Energy Barrier
        3.3  Memory Capacity from Energy Barrier
        3.4  Confidence Dynamics
        3.5  Numerical Results from the Dynamics
        3.6  Chapter Summary
    Chapter 4  Stability and Statistical Dynamics of Second Order BAM
        4.1  Introduction
        4.2  Second Order BAM and its Stability
        4.3  Confidence Dynamics of Second Order BAM
        4.4  Numerical Results
        4.5  Extension to Higher Order BAM
        4.6  Verification of the Conditions of Newman's Lemma
        4.7  Chapter Summary
    Chapter 5  Enhancement of BAM
        5.1  Background
        5.2  Review on Modifications of BAM
            5.2.1  Change of the Encoding Method
            5.2.2  Change of the Topology
        5.3  Householder Encoding Algorithm
            5.3.1  Construction from Householder Transforms
            5.3.2  Construction from Iterative Method
            5.3.3  Remarks on HCA
        5.4  Enhanced Householder Encoding Algorithm
            5.4.1  Construction of EHCA
            5.4.2  Remarks on EHCA
        5.5  Bidirectional Learning
            5.5.1  Construction of BL
            5.5.2  The Convergence of BL and the Memory Capacity of BL
            5.5.3  Remarks on BL
        5.6  Adaptive Ho-Kashyap Bidirectional Learning
            5.6.1  Construction of AHKBL
            5.6.2  Convergent Conditions for AHKBL
            5.6.3  Remarks on AHKBL
        5.7  Computer Simulations
            5.7.1  Memory Capacity
            5.7.2  Error Correction Capability
            5.7.3  Learning Speed
        5.8  Chapter Summary
    Chapter 6  BAM under Forgetting Learning
        6.1  Introduction
        6.2  Properties of Forgetting Learning
        6.3  Computer Simulations
        6.4  Chapter Summary
    Part II  Kohonen Map: Applications in Data Compression and Communications
    Chapter 7  Introduction to Vector Quantization and Kohonen Map
        7.1  Background on Vector Quantization
        7.2  Introduction to LBG Algorithm
        7.3  Introduction to Kohonen Map
        7.4  Chapter Summary
    Chapter 8  Applications of Kohonen Map in Data Compression and Communications
        8.1  Use Kohonen Map to Design Trellis Coded Vector Quantizer
            8.1.1  Trellis Coded Vector Quantizer
            8.1.2  Trellis Coded Kohonen Map
            8.1.3  Computer Simulations
        8.2  Kohonen Map: Combined Vector Quantization and Modulation
            8.2.1  Impulsive Noise in the Received Data
            8.2.2  Combined Kohonen Map and Modulation
            8.2.3  Computer Simulations
        8.3  Error Control Scheme for the Transmission of Vector Quantized Data
            8.3.1  Motivation and Background
            8.3.2  Trellis Coded Modulation
            8.3.3  Combined Vector Quantization, Error Control, and Modulation
            8.3.4  Computer Simulations
        8.4  Chapter Summary
    Chapter 9  Conclusion
    Bibliography

    Complete lattice projection autoassociative memories

    Advisor: Marcos Eduardo Ribeiro do Valle Mesquita. Thesis (doctorate) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica.
    Abstract: The human brain's ability to store and recall information by association has inspired the development of various mathematical models referred to in the literature as associative memories. Firstly, this thesis presents a set of autoassociative memories (AMs) that belong to the broad class of autoassociative morphological memories (AMMs). Specifically, the max-plus and min-plus projection autoassociative morphological memories (max-plus and min-plus PAMMs), as well as their compositions, are introduced. These models can be viewed as non-distributed versions of the AMMs proposed by Ritter and Sussner. Briefly, the max-plus PAMM yields the largest max-plus combination of the stored vectors that is less than or equal to the input pattern. Dually, the min-plus PAMM projects the input pattern onto the set of all min-plus combinations. Secondly, in the context of fuzzy set theory, this thesis proposes new fuzzy autoassociative memories, referred to as the class of max-C and min-D FPAMMs. An FPAMM represents a fuzzy morphological neural network with a hidden layer of neurons, designed for the storage and retrieval of fuzzy sets or vectors on a hypercube. Computational experiments concerning pattern classification and face recognition indicate possible applications of the aforementioned new AM models. Doctorate in Applied Mathematics.
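The max-plus projection described in the abstract admits a compact closed form. The sketch below uses my own notation, not the thesis': with stored patterns as columns of X and input z, taking alpha_xi = min over j of (z_j - x^xi_j) and then y_j = max over xi of (alpha_xi + x^xi_j) yields the largest max-plus combination of the columns that is componentwise less than or equal to z; in particular, a stored pattern is recalled exactly.

```python
import numpy as np

def max_plus_pamm(X, z):
    """Max-plus projection of z onto the max-plus span of the columns of X."""
    alphas = np.min(z[:, None] - X, axis=0)      # alpha_xi = min_j (z_j - x^xi_j)
    return np.max(X + alphas[None, :], axis=1)   # y_j = max_xi (alpha_xi + x^xi_j)

X = np.array([[1.0, 4.0],
              [2.0, 0.0],
              [3.0, 5.0]])          # two stored patterns, one per column
y = max_plus_pamm(X, X[:, 0])       # a stored pattern is a fixed point
print(y)                            # -> [1. 2. 3.]
```

Each alpha_xi is the largest shift that keeps the shifted pattern below the input, so the componentwise maximum of the shifted patterns is by construction the largest such combination.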

    Convolutional neural network denoising autoencoders for intelligent aircraft engine gas path health signal noise filtering

    Removing noise from health signals is critical in gas path diagnostics of aircraft engines. An efficient noise filtering/denoising method should remove noise without using future data points, preserve important changes, and promote accurate diagnostics without time delay. Machine Learning (ML)-based methods are promising for high fidelity, accuracy, and computational efficiency under the motivation of Intelligent Engines. However, previous ML-based denoising methods are rarely applied in actual engineering practice because they cannot accommodate time series, cannot effectively capture important changes, or are limited by the time-delay problem. This paper proposes a Convolutional Neural Network Denoising Autoencoder (CNN-DAE) method to build a denoising autoencoder structure. In this structure, a convolutional operation is used to accommodate time series, and causal convolution is introduced to avoid using future data points. The proposed denoising method is evaluated against NASA's Propulsion Diagnostic Method Evaluation Strategy (ProDiMES) software. The results show that the proposed method can accommodate time series, remove noise for improved denoising accuracy, and preserve important changes for enhanced diagnostic information. NASA's blind test case results show that the kappa coefficient of a common diagnostic method using the processed data is 0.731, at least 0.046 higher than the other diagnostic methods in the open literature. Processing health signals using the proposed method would significantly promote accurate diagnostics without time delay, and could support intelligent condition monitoring systems by exploiting historical information for improved denoising and diagnostic performance.
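The causal-convolution idea the abstract leans on can be shown in isolation (my own minimal sketch, not the paper's CNN-DAE): padding only on the left makes the output at time t depend solely on inputs at times up to t, so no future data points are used when smoothing a signal.

```python
import numpy as np

def causal_conv1d(x, kernel):
    """1-D convolution with left-only padding: output[t] uses x[0..t] only."""
    k = len(kernel)
    padded = np.concatenate([np.full(k - 1, x[0]), x])   # replicate first sample on the left
    return np.array([padded[i:i + k] @ kernel[::-1] for i in range(len(x))])

# Moving-average smoothing of a step change: the filtered value before the
# step is untouched by it, because the filter never looks ahead.
signal = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
kernel = np.ones(3) / 3.0
print(causal_conv1d(signal, kernel))
```

A centered (non-causal) filter of the same width would start rising one sample before the step, which is exactly the look-ahead behavior a real-time diagnostic pipeline cannot afford.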