
    Active disturbance cancellation in nonlinear dynamical systems using neural networks

    A proposal for the use of a time-delay CMAC neural network for disturbance cancellation in nonlinear dynamical systems is presented. Appropriate modifications to the CMAC training algorithm are derived which allow convergent adaptation for a variety of secondary signal paths. Analytical bounds on the maximum learning gain are presented which guarantee convergence of the algorithm and provide insight into the necessary reduction in learning gain as a function of the system parameters. The effectiveness of the algorithm is evaluated through mathematical analysis, simulation studies, and experimental application of the technique on an acoustic duct laboratory model.
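    The combination of an adaptive learning rule, a secondary signal path, and a learning-gain bound described in this abstract is closely related to the classic filtered-reference (filtered-x) LMS canceller. The sketch below illustrates that generic scheme only; the paper's CMAC network, its training modifications, and its analytical gain bounds are not reproduced here, and the filter lengths and gain are invented for illustration.

```python
import numpy as np

def fx_lms(d, x, s, n_taps=16, mu=0.01):
    """Adaptive disturbance cancellation with a filtered-reference LMS update.

    d: disturbance measured at the error sensor
    x: reference signal correlated with the disturbance
    s: FIR impulse response of the secondary path (assumed known here)
    mu: learning gain -- choosing it too large destroys convergence,
        mirroring the maximum-learning-gain bounds the abstract mentions
    """
    xf = np.convolve(x, s)[:len(x)]   # reference filtered by the secondary path
    w = np.zeros(n_taps)              # adaptive canceller weights
    fbuf = np.zeros(n_taps)           # recent filtered-reference samples
    e = np.zeros(len(x))
    for n in range(len(x)):
        fbuf = np.roll(fbuf, 1)
        fbuf[0] = xf[n]
        e[n] = d[n] - w @ fbuf        # residual after cancellation
        w += mu * e[n] * fbuf         # LMS gradient step on the filtered reference
    return e
```

Driving this with a disturbance that is a filtered copy of the reference, the residual error shrinks toward zero as the weights converge, while raising mu past the stability bound makes the weights diverge.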

    Improved Stability Criteria of Static Recurrent Neural Networks with a Time-Varying Delay

    This paper investigates the stability of static recurrent neural networks (SRNNs) with a time-varying delay. Based on the complete delay-decomposing approach and the quadratic separation framework, a novel Lyapunov-Krasovskii functional is constructed. By employing a reciprocally convex technique to capture the relationship between the time-varying delay and its varying interval, some improved delay-dependent stability conditions are presented in terms of linear matrix inequalities (LMIs). Finally, a numerical example is provided to demonstrate the merits and effectiveness of the proposed methods.
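    The delay-dependent LMI conditions of the paper are not reproduced here, but a much coarser, delay-independent norm test conveys the flavor of such stability certificates. A toy sketch, with the decay matrix A, connection matrix W, and Lipschitz constant L all invented for illustration:

```python
import numpy as np

# Toy delayed static RNN:  x'(t) = -A x(t) + W tanh(x(t - h))
A = np.diag([2.0, 1.5])
W = np.array([[0.3, -0.2], [0.1, 0.4]])
L = 1.0                                     # Lipschitz constant of tanh

# Coarse delay-independent sufficient condition: stable for ANY delay
# if min_i a_i > L * ||W||_2  (far weaker than delay-dependent LMIs).
margin = np.min(np.diag(A)) - L * np.linalg.norm(W, 2)
print(margin > 0)                           # True: stability certified

# Quick Euler check with delay h = 0.5: the state should decay.
dt, h = 1e-3, 0.5
lag = int(h / dt)
hist = [np.array([1.0, -1.0])] * (lag + 1)  # constant initial history
for _ in range(20000):
    x, xd = hist[-1], hist[0]               # current and delayed state
    hist.append(x + dt * (-A @ x + W @ np.tanh(xd)))
    hist.pop(0)
print(np.linalg.norm(hist[-1]) < 1e-3)      # trajectory has collapsed
```

Delay-dependent LMI criteria of the kind the paper derives exist precisely to certify stability in cases where such a blunt norm test fails.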

    Improved Results on H∞ State Estimation of Static Neural Networks with Time Delay

    This paper studies the problem of H∞ state estimation for a class of delayed static neural networks. The goal is to design a delay-dependent state estimator such that the dynamics of the error system are globally exponentially stable and a prescribed H∞ performance is guaranteed. Some improved delay-dependent conditions are established by constructing augmented Lyapunov-Krasovskii functionals (LKFs). The desired estimator gain matrix can be characterized in terms of the solution to linear matrix inequalities (LMIs). Numerical examples are provided to illustrate the effectiveness of the proposed method compared with some existing results.
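    The paper obtains its estimator gain from delay-dependent H∞ LMIs; as a much simpler point of reference, the sketch below builds a plain Luenberger observer for a delay-free linear surrogate, choosing the gain by pole placement so the error dynamics are exponentially stable. The matrices and pole locations are invented for illustration.

```python
import numpy as np
from scipy.signal import place_poles

# Delay-free linear surrogate of the estimation problem:
#   plant     x' = A x,   measurement  y = C x
#   observer  xh' = A xh + K (y - C xh)
#   error     e' = (A - K C) e
A = np.array([[-1.0, 0.5],
              [ 0.2, -2.0]])
C = np.array([[1.0, 0.0]])

# Observer gain by duality: place the eigenvalues of (A - K C).
K = place_poles(A.T, C.T, [-3.0, -4.0]).gain_matrix.T
err_poles = np.linalg.eigvals(A - K @ C)
print(np.all(err_poles.real < 0))   # True: error dynamics exponentially stable
```

The LMI formulation in the paper plays the same role as `place_poles` here, but additionally handles the time delay and enforces the prescribed H∞ disturbance-attenuation level.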

    Tensegrity and Recurrent Neural Networks: Towards an Ecological Model of Postural Coordination

    Tensegrity systems have been proposed as both the medium of haptic perception and the functional architecture of motor coordination in animals. However, a full working model integrating those two aspects with some form of neural implementation is still lacking. A basic two-dimensional cross-tensegrity plant is designed and its mechanics simulated. The plant is coupled to a Recurrent Neural Network (RNN). The model’s task is to maintain postural balance against gravity despite the intrinsically unstable configuration of the plant. The RNN takes only proprioceptive input about the springs’ lengths and rate of length change and outputs minimum lengths for each spring which modulates their interaction with the plant’s inertial kinetics. Four artificial agents are evolved to coordinate the patterns of spring contractions in order to maintain dynamic equilibrium. A first study assesses quiet standing performance and reveals coordinative patterns between the tensegrity rods akin to humans’ strategy of anti-phase hip-ankle relative phase. The agents show a mixture of periodic and aperiodic trajectories of their Center of Mass. Moreover, the agents seem to tune to the anticipatory “time-to-balance” quantity in order to maintain their movements within a region of reversibility. A second study perturbs the systems with mechanical platform shifts and sensorimotor degradation. The agents’ response to the mechanical perturbation is robust. Dimensionality analysis of the RNNs’ unit activations reveals a pattern of degree of freedom recruitment after perturbation. In the degradation sub-study, different levels of noise are added to the RNN inputs and different levels of weakening gain are applied to the forces generated by the springs to mimic haptic degradation and muscular weakening in elderly humans. As expected, the systems perform less well, falling earlier than without the insults. 
However, the same systems, re-evolved under the degraded conditions, show significant functional recovery. Overall, the dissertation supports the plausibility of RNN-cum-tensegrity models of haptics-guided postural coordination in humans.

    Deep Learning for Distant Speech Recognition

    Deep learning is an emerging technology that is considered one of the most promising directions for reaching higher levels of artificial intelligence. Among other achievements, building computers that understand speech represents a crucial leap towards intelligent machines. Despite the great efforts of the past decades, however, natural and robust human-machine speech interaction still appears to be out of reach, especially when users interact with a distant microphone in noisy and reverberant environments. These disturbances severely hamper the intelligibility of a speech signal, making Distant Speech Recognition (DSR) one of the major open challenges in the field. This thesis addresses the latter scenario and proposes some novel techniques, architectures, and algorithms to improve the robustness of distant-talking acoustic models. We first elaborate on methodologies for realistic data contamination, with a particular emphasis on DNN training with simulated data. We then investigate approaches for better exploiting speech contexts, proposing some original methodologies for both feed-forward and recurrent neural networks. Lastly, inspired by the idea that cooperation across different DNNs could be the key to counteracting the harmful effects of noise and reverberation, we propose a novel deep learning paradigm called a network of deep neural networks. The analysis of the original concepts was based on extensive experimental validations conducted on both real and simulated data, considering different corpora, microphone configurations, environments, noisy conditions, and ASR tasks. Comment: PhD Thesis Unitn, 201

    Robustness analysis of Cohen-Grossberg neural network with piecewise constant argument and stochastic disturbances

    Robustness of neural networks has been a hot topic in recent years. This paper studies the robustness of the global exponential stability of Cohen-Grossberg neural networks with a piecewise constant argument and stochastic disturbances, and discusses whether such networks can maintain global exponential stability under perturbations from the piecewise constant argument and the stochastic disturbances. By using stochastic analysis theory and inequality techniques, the admissible interval length of the piecewise constant argument and the upper bound of the noise intensity are derived by solving transcendental equations. Finally, several examples are offered to illustrate the efficacy of the findings.
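    A noise-intensity bound of the kind the abstract describes can be probed empirically with an Euler-Maruyama simulation: below some intensity the state still decays, above it stability can be lost. The sketch below uses a toy Cohen-Grossberg-type SDE, not the paper's exact model; the amplification function, interconnection matrix, behavior function, and noise intensity are all invented for illustration.

```python
import numpy as np

def simulate(sigma, T=20.0, dt=1e-3, seed=0):
    """Euler-Maruyama run of a toy Cohen-Grossberg-type stochastic network:
        dx = -a(x) [b(x) - W f(x)] dt + sigma * x dB_t
    Returns the terminal state norm (near 0 if stability survives the noise)."""
    rng = np.random.default_rng(seed)
    W = np.array([[0.2, -0.1],
                  [0.05, 0.1]])
    x = np.array([1.0, -0.8])
    for _ in range(int(T / dt)):
        amp = 1.0 + 0.5 / (1.0 + x**2)          # amplification a(x) > 0
        drift = -amp * (2.0 * x - W @ np.tanh(x))   # b(x) = 2x, f = tanh
        noise = sigma * x * np.sqrt(dt) * rng.standard_normal(2)
        x = x + drift * dt + noise
    return np.linalg.norm(x)

print(simulate(sigma=0.1))   # small noise intensity: state decays toward the origin
```

Sweeping `sigma` upward in such a simulation gives a rough empirical counterpart to the analytical noise-intensity upper bound the paper derives from transcendental equations.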