10 research outputs found

    Improved Stability Criteria of Static Recurrent Neural Networks with a Time-Varying Delay

    This paper investigates the stability of static recurrent neural networks (SRNNs) with a time-varying delay. Based on a complete delay-decomposing approach and the quadratic separation framework, a novel Lyapunov-Krasovskii functional is constructed. By employing a reciprocally convex technique to account for the relationship between the time-varying delay and its variation interval, improved delay-dependent stability conditions are presented in terms of linear matrix inequalities (LMIs). Finally, a numerical example is provided to show the merits and effectiveness of the proposed methods.
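    Criteria of this kind are ultimately checked numerically as feasibility problems in the matrix variables of the Lyapunov-Krasovskii functional. A minimal sketch of the underlying machinery, using the classical delay-free Lyapunov equation rather than the paper's full delay-dependent LMIs (the system matrix below is illustrative, not taken from the paper):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative Hurwitz system matrix (not from the paper)
A = np.array([[-2.0, 0.5],
              [0.3, -1.5]])
Q = np.eye(2)  # any positive definite choice

# Solve the Lyapunov equation A^T P + P A = -Q.
# If P is positive definite, V(x) = x^T P x certifies asymptotic stability.
P = solve_continuous_lyapunov(A.T, -Q)
print("P > 0:", bool(np.all(np.linalg.eigvalsh(P) > 0)))
```

    The delay-dependent conditions in the paper play the same role: feasibility of a (much larger) block LMI in the functional's matrix variables certifies stability for all delays in the admissible interval.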

    Piecewise Convex Technique for the Stability Analysis of Delayed Neural Network

    Using the fact that the neuron activation function is sector bounded, this paper transforms the delayed neural network under study into a linear uncertain system. Combined with a delay-partitioning technique, and using a convex combination of the decomposed time delays and positive matrices, a novel Lyapunov functional is constructed to derive new, less conservative stability criteria. The benefit of this method is that it utilizes more information on the slope of the activations and on the time delays. To illustrate the effectiveness of the newly established stability criteria, one numerical example and an application example are provided for comparison with some recent results.

    Dissipativity Analysis and Synthesis for a Class of Nonlinear Stochastic Impulsive Systems

    The dissipativity analysis and control problems for a class of nonlinear stochastic impulsive systems (NSISs) are studied. The systems are subject to nonlinear disturbances, stochastic disturbances, and impulsive effects, which often exist in a wide variety of industrial processes and are sources of instability. Our aim is to analyse dissipativity and to design a state-feedback controller and an impulsive controller, based on dissipativity, such that the nonlinear stochastic impulsive systems are stochastically stable and strictly (Q,S,R)-dissipative. Sufficient conditions are obtained in terms of linear matrix inequalities (LMIs), and a numerical example with simulation is given to show the correctness of the derived results and the effectiveness of the proposed method.

    Delay-Dependent Stability Analysis for Recurrent Neural Networks with Time-Varying Delays

    This paper concerns delay-dependent stability criteria for recurrent neural networks with time-varying delays. By taking more information on the states and activation functions into augmented vectors, a new class of Lyapunov functionals is proposed. Then, less conservative stability criteria are obtained in terms of linear matrix inequalities (LMIs). Finally, two numerical examples are given to illustrate the effectiveness of the proposed method.

    Delay-Dependent Robust Exponential Stability and H∞ Performance Analysis

    This paper deals with the problem of robust exponential stability and H∞ performance analysis for a class of uncertain Markovian jumping systems with multiple delays. Based on the reciprocally convex approach, some novel delay-dependent stability criteria for the addressed system are derived. Finally, numerical examples are presented to show the effectiveness of the proposed results.

    On Less Conservative Stability Criteria for Neural Networks with Time-Varying Delays Utilizing Wirtinger-Based Integral Inequality

    This paper investigates the problem of stability analysis for neural networks with time-varying delays. By utilizing the Wirtinger-based integral inequality and constructing a suitable augmented Lyapunov-Krasovskii functional, two less conservative delay-dependent criteria guaranteeing the asymptotic stability of the concerned networks are derived in terms of linear matrix inequalities (LMIs). Three numerical examples are included to demonstrate the superiority of the proposed methods by comparing maximum delay bounds with recent results published in the literature.
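    The Wirtinger-based integral inequality tightens Jensen's inequality by an extra correction term, which is where the reduced conservatism comes from. A quick numerical sanity check of the scalar form of the inequality (the test function below is arbitrary, not from the paper):

```python
import numpy as np

def integrate(f, ds):
    """Composite trapezoidal rule on uniformly spaced samples."""
    return ds * (f[0] / 2 + f[1:-1].sum() + f[-1] / 2)

# Scalar Wirtinger-based inequality:
#   int_a^b wdot^2 ds >= v1^2/(b-a) + 3*v2^2/(b-a),
# where v1 = w(b) - w(a) and v2 = w(b) + w(a) - (2/(b-a)) int_a^b w ds.
a, b = 0.0, 2.0
s = np.linspace(a, b, 200001)
ds = s[1] - s[0]
w = np.sin(3 * s) + 0.5 * s**2        # arbitrary smooth test function
wdot = 3 * np.cos(3 * s) + s          # its derivative

lhs = integrate(wdot**2, ds)
v1 = w[-1] - w[0]
v2 = w[-1] + w[0] - (2 / (b - a)) * integrate(w, ds)
rhs = v1**2 / (b - a) + 3 * v2**2 / (b - a)
print(lhs >= rhs)  # True; the v2 term strictly tightens Jensen's bound
```

    Jensen's inequality gives only the first term of the right-hand side; the additional 3·v2²/(b−a) term is what lets Wirtinger-based criteria certify larger delay bounds.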

    Subspace projection autoassociative memories based on robust estimators

    Advisor: Marcos Eduardo Ribeiro do Valle Mesquita. Master's dissertation, Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica.
    An autoassociative memory (AM) is an input-output system able to store a finite set of data. An AM should also be able to retrieve a stored item when presented with a partial or corrupted version of it. An AM that projects an input pattern onto a linear subspace is referred to as a subspace projection autoassociative memory (SPAM). The recall phase of a SPAM is equivalent to a multilinear regression problem. In addition to the SPAM based on the least squares method, this dissertation presents models based on the robust estimators M-estimate, S-estimate, MM-estimate, and ε-support vector regression. In contrast to many other AM models, a SPAM based on a robust regression estimator represents a neural network in which the synaptic weights are iteratively adjusted during the recall phase. Computational experiments consider the recognition of faces using grayscale images.
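    The least-squares recall phase of a SPAM can be sketched in a few lines: recall projects the input onto the subspace spanned by the stored patterns. The patterns and dimensions below are synthetic; the robust variants discussed in the dissertation replace the least-squares fit with an M-, S-, MM-, or ε-SVR estimate.

```python
import numpy as np

# Synthetic stored patterns as columns of U (illustrative, not real data)
rng = np.random.default_rng(0)
U = rng.standard_normal((50, 3))       # 50-dim patterns, 3 stored items

def spam_recall(U, x):
    """Least-squares SPAM recall: project input x onto span(U)."""
    alpha, *_ = np.linalg.lstsq(U, x, rcond=None)  # regression coefficients
    return U @ alpha                               # recalled pattern

# A stored-subspace pattern, then a noisy probe of it
x_clean = U @ np.array([1.0, -0.5, 2.0])
x_noisy = x_clean + 0.1 * rng.standard_normal(50)
y = spam_recall(U, x_noisy)
print(np.linalg.norm(y - x_clean))  # small: recall suppresses off-subspace noise
```

    The projection removes the noise component orthogonal to the stored subspace, which is exactly why the recall step is a regression problem and why robust estimators help when the probe is heavily corrupted.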

    Delay-slope-dependent stability results of recurrent neural networks

    By using the fact that the neuron activation functions are sector bounded and nondecreasing, this brief presents a new method, named the delay-slope-dependent method, for the stability analysis of a class of recurrent neural networks with time-varying delays. This method incorporates more information on the slope of the neuron activation functions while using fewer matrix variables in the constructed Lyapunov-Krasovskii functional. Some improved delay-dependent stability criteria with less computational burden and conservatism are then obtained. Numerical examples are given to illustrate the effectiveness and benefits of the proposed method.
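    The sector-bounded, nondecreasing assumption can be illustrated numerically: for tanh, a typical activation, every chord slope lies in the sector [0, 1]. This is a sanity check of the assumption, not the paper's method:

```python
import numpy as np

# Check that tanh is nondecreasing with chord slopes in the sector [0, 1]:
#   0 <= (f(a) - f(b)) / (a - b) <= 1  for all a != b.
rng = np.random.default_rng(1)
a = rng.uniform(-5, 5, 10000)
b = rng.uniform(-5, 5, 10000)
mask = a != b
slope = (np.tanh(a[mask]) - np.tanh(b[mask])) / (a[mask] - b[mask])
print(slope.min() >= 0.0 and slope.max() <= 1.0)  # True
```

    Slope-dependent methods exploit exactly this bound: the tighter the sector on each activation, the less conservative the resulting LMI conditions.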