
    Stability Analysis of Stochastic Markovian Jump Neural Networks with Different Time Scales and Randomly Occurred Nonlinearities Based on Delay-Partitioning Projection Approach

    In this paper, the mean square asymptotic stability of stochastic Markovian jump neural networks with different time scales and randomly occurring nonlinearities is investigated. Using the linear matrix inequality (LMI) approach and a delay-partitioning projection technique, delay-dependent stability criteria are derived via new Lyapunov-Krasovskii functionals, both when the delay rates are known and when they are not. We also show that the finer the delay partitioning, the less conservative the resulting criteria. An example with simulation results demonstrates the effectiveness of the proposed approach.
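
    To make the verification step concrete, the following is a minimal sketch of how an LMI-based stability test of this kind is checked numerically with cvxpy. It certifies a classical delay-independent Lyapunov-Krasovskii condition for a small linear delayed system; the matrices and tolerance are assumptions, and it is not the paper's delay-partitioning projection criterion.

```python
# A minimal sketch, assuming made-up system matrices, of how an LMI-based
# stability test is checked numerically with cvxpy. It certifies a classical
# delay-independent Lyapunov-Krasovskii condition for
# x'(t) = A x(t) + Ad x(t - tau), not the paper's delay-partitioning
# projection criterion.
import cvxpy as cp
import numpy as np

A = np.array([[-2.0, 0.5], [0.3, -1.5]])   # assumed system matrix
Ad = np.array([[0.2, 0.1], [0.0, 0.3]])    # assumed delayed-state matrix
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)    # Lyapunov matrix
Q = cp.Variable((n, n), symmetric=True)    # Krasovskii (delay) term

# Feasibility of this block LMI certifies asymptotic stability for any
# constant delay: find P, Q > 0 with M < 0.
top_left = A.T @ P + P @ A + Q
top_right = P @ Ad
M = cp.bmat([[top_left, top_right],
             [top_right.T, -Q]])

eps = 1e-6
constraints = [P >> eps * np.eye(n),
               Q >> eps * np.eye(n),
               M << -eps * np.eye(2 * n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("LMI feasible, stability certified:", prob.status == cp.OPTIMAL)
```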

    New Stability Criterion for Takagi-Sugeno Fuzzy Cohen-Grossberg Neural Networks with Probabilistic Time-Varying Delays

    A new global asymptotic stability criterion for Takagi-Sugeno fuzzy Cohen-Grossberg neural networks with probabilistic time-varying delays is derived, in which the diffusion term plays an explicit role. The main result is novel partly because the boundedness conditions on the amplification functions are dropped. A further methodological novelty is the Lyapunov-Krasovskii functional, which takes a positive definite form in p-th powers, unlike those in the existing literature. A numerical example illustrates the effectiveness of the proposed methods.
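
    As a concrete illustration of the model class, the following is a toy forward-Euler simulation of a Cohen-Grossberg network with a time-varying delay. The functions, weights, and delay profile are assumed, and the Takagi-Sugeno fuzzy blending and probabilistic-delay machinery of the paper are omitted.

```python
# A toy sketch, not the paper's model: a Cohen-Grossberg network
# x'(t) = -a(x) * (b(x) - T f(x(t - tau(t)))) integrated by forward Euler
# with a ring buffer holding delayed states. The functions a, b, f, the
# weights T, and the delay profile are illustrative assumptions.
import numpy as np

n, dt, steps = 3, 0.001, 20000
tau_max = 0.5                                  # assumed delay bound (s)
buf = int(tau_max / dt) + 1                    # ring-buffer length

rng = np.random.default_rng(0)
T = 0.4 * rng.standard_normal((n, n))          # assumed connection weights
a = lambda x: 1.0 + 0.5 * np.tanh(x) ** 2      # amplification functions
b = lambda x: 2.0 * x                          # behaved functions
f = np.tanh                                    # activation functions

x = np.array([0.5, -0.3, 0.2])                 # assumed initial state
hist = np.tile(x, (buf, 1))                    # constant initial history

for k in range(steps):
    tau = 0.25 + 0.2 * np.sin(0.01 * k * dt)   # time-varying delay in [0.05, 0.45]
    x_del = hist[(k - int(tau / dt)) % buf]    # delayed state from the buffer
    x = x + dt * (-a(x) * (b(x) - T @ f(x_del)))
    hist[(k + 1) % buf] = x

print("state after 20 s:", x)                  # settles near an equilibrium
```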

    Oscillatory dynamics as a mechanism of integration in complex networks of neurons

    The large-scale integrative mechanisms of the brain, the means by which the activity of functionally segregated neuronal regions is combined, are not well understood. There is growing agreement that a flexible mechanism of integration must be present in order to support the myriad changing cognitive demands under which we are placed. Neuronal communication through phase-coherent oscillation stands as the prominent theory of cognitive integration. The work presented in this thesis explores the role of oscillation and synchronisation in the transfer and integration of information in the brain. It is first shown that complex metastable dynamics suitable for modelling phase-coherent neuronal synchronisation emerge from modularity in networks of delay- and pulse-coupled oscillators. Within a restricted parameter regime these networks display a constantly changing set of partially synchronised states in which some modules remain highly synchronised while others desynchronise. An examination of network phase dynamics shows increasing coherence with increasing connectivity between modules. The metastable chimera states that emerge from the activity of modular oscillator networks are demonstrated to be synchronous with a constant phase relationship, as would be required of a mechanism of large-scale neural integration. A specific example of functional phase-coherent synchronisation within a spiking neural system is then developed. Competitive stimulus selection between converging population-encoded stimuli is demonstrated through entrainment of oscillation in receiving neurons. The behaviour of the model is shown to be analogous to well-known competitive processes of stimulus selection, such as binocular rivalry, matching key experimentally observed properties for the distribution and correlation of periods of entrainment under differing stimulus strengths. Finally, two new measures of network centrality, knotty-centrality and set betweenness centrality, are developed and applied to empirically derived human structural brain connectivity data. It is shown that human brain organisation exhibits a topologically central core network within a modular structure, consistent with the generation of synchronous oscillation with functional phase dynamics.
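
    The following is a minimal sketch of the kind of modular oscillator network the thesis studies: Kuramoto phase oscillators with strong intra-module and weak inter-module coupling, reporting per-module phase coherence. All parameters are illustrative, and the delay and pulse coupling of the thesis models are omitted.

```python
# A minimal sketch of a modular oscillator network: Kuramoto phase
# oscillators with strong intra-module and weak inter-module coupling.
# Per-module phase coherence is the quantity whose fluctuation signals
# metastable, partially synchronised states. All parameters are assumed.
import numpy as np

rng = np.random.default_rng(1)
modules, per_mod = 4, 16
n = modules * per_mod
labels = np.repeat(np.arange(modules), per_mod)

# Assumed coupling: strong within a module, weak between modules.
K = np.where(labels[:, None] == labels[None, :], 0.6, 0.05)

omega = rng.normal(0.0, 0.5, n)                # natural frequencies
theta = rng.uniform(0.0, 2 * np.pi, n)
dt = 0.01

for _ in range(5000):
    coupling = (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1) / n
    theta += dt * (omega + coupling)

# Kuramoto order parameter per module: R = |mean_j exp(i * theta_j)|.
for m in range(modules):
    R = np.abs(np.exp(1j * theta[labels == m]).mean())
    print(f"module {m}: coherence R = {R:.2f}")
```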

    Emergence of Modular Structure in a Large-Scale Brain Network with Interactions between Dynamics and Connectivity

    A network of 32 or 64 connected neural masses, each representing a large population of interacting excitatory and inhibitory neurons and generating an electroencephalography/magnetoencephalography-like output signal, was used to demonstrate how an interaction between dynamics and connectivity might explain the emergence of complex network features, in particular modularity. Network evolution was modeled by two processes: (i) synchronization dependent plasticity (SDP) and (ii) growth dependent plasticity (GDP). In the case of SDP, connections between neural masses were strengthened when they were strongly synchronized and weakened when they were not. GDP was modeled as a homeostatic process with random, distance dependent outgrowth of new connections between neural masses. GDP alone resulted in stable networks with distance dependent connection strengths and typical small-world features, but no degree correlations and only weak modularity. SDP applied to random networks induced clustering, but no clear modules. Stronger modularity evolved only through an interaction of SDP and GDP, with the number and size of the modules depending on the relative strength of both processes, as well as on the size of the network. Lesioning part of the network after a stable state was achieved resulted in a temporary disruption of the network structure. The model gives a possible scenario to explain how modularity can arise in developing brain networks, and makes predictions about the time course of network changes during development and following acute lesions.
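
    The following is a toy sketch of how the two plasticity rules can be combined in code; the phase surrogate and all rates are assumptions rather than the neural-mass dynamics used in the paper.

```python
# A toy sketch of the two plasticity processes acting on a weight matrix W
# over spatially embedded nodes. SDP nudges each weight toward the pairwise
# synchrony of a crude phase surrogate; GDP adds random, distance-dependent
# connections. The phase dynamics and all rates are assumptions.
import numpy as np

rng = np.random.default_rng(2)
N = 32
pos = rng.uniform(0.0, 1.0, (N, 2))            # node positions in the plane
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
W = 0.1 * rng.uniform(0.0, 1.0, (N, N))
np.fill_diagonal(W, 0.0)

phase = rng.uniform(0.0, 2 * np.pi, N)
for step in range(200):
    # Crude dynamics: nodes are pulled toward strongly connected neighbours.
    phase += 0.1 * (W * np.sin(phase[None, :] - phase[:, None])).sum(axis=1)

    sync = np.cos(phase[None, :] - phase[:, None])        # pairwise synchrony
    W += 0.01 * sync                                      # SDP: follow synchrony
    W += 0.01 * (rng.uniform(0.0, 1.0, (N, N)) < np.exp(-5.0 * dist))  # GDP growth

    np.clip(W, 0.0, 1.0, out=W)
    np.fill_diagonal(W, 0.0)

print("mean connection weight:", round(W.mean(), 3))
```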

    Optimized state feedback regulation of 3DOF helicopter system via extremum seeking

    In this paper, an optimized state feedback regulator for a 3-degree-of-freedom (3DOF) helicopter is designed via the extremum seeking (ES) technique. Multi-parameter ES is applied to optimize the tracking performance by tuning State Vector Feedback with Integration of the Control Error (SVFBICE). A discrete multivariable version of ES is developed to minimize a cost function that measures the performance of the controller, defined on the error between the actual and desired axis positions. The controller parameters are updated online as the optimization takes place, which significantly decreases the time required to obtain optimal parameters. Simulations were conducted for the online optimization under both fixed and varying operating conditions. The results demonstrate the effectiveness of ES in preserving the maximum attainable performance.
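
    The following is a minimal sketch of discrete multi-parameter extremum seeking on a made-up quadratic cost standing in for the helicopter tracking cost; the cost, gains, and dither settings are assumptions rather than the paper's SVFBICE setup.

```python
# A minimal sketch of discrete multi-parameter extremum seeking, tuning two
# gains of a made-up quadratic cost rather than the 3DOF helicopter /
# SVFBICE loop of the paper. Each gain carries a sinusoidal dither at its
# own frequency; demodulating the measured cost by the dither gives a
# gradient estimate that a small integrator descends.
import numpy as np

def J(k):                                      # assumed performance cost
    k_opt = np.array([2.0, -1.0])              # unknown optimal gains
    return float(np.sum((k - k_opt) ** 2)) + 0.5

k_hat = np.zeros(2)                            # gain estimates
a = np.array([0.1, 0.1])                       # dither amplitudes
omega = np.array([0.9, 1.3])                   # distinct dither frequencies
gamma, dt = 2.0, 0.01                          # adaptation gain, step size

for t in range(20000):
    y = J(k_hat + a * np.sin(omega * t))       # one cost measurement per step
    k_hat -= gamma * dt * y * np.sin(omega * t)  # demodulate and integrate

print("estimated optimal gains:", np.round(k_hat, 1))  # approx [ 2. -1.]
```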

    Local2Global: a distributed approach for scaling representation learning on graphs

    We propose a decentralised “local2global” approach to graph representation learning that can be used a priori to scale any embedding technique. Our local2global approach proceeds by first dividing the input graph into overlapping subgraphs (or “patches”) and training local representations for each patch independently. In a second step, we combine the local representations into a globally consistent representation by estimating, via group synchronization, the set of rigid motions that best align the local representations using information from the patch overlaps. A key distinguishing feature of local2global relative to existing work is that patches are trained independently, without the often costly parameter synchronization of distributed training. This allows local2global to scale to large-scale industrial applications, where the input graph may not even fit into memory and may be stored in a distributed manner. We apply local2global to data sets of different sizes and show that our approach achieves a good trade-off between scale and accuracy on edge reconstruction and semi-supervised classification. We also consider the downstream task of anomaly detection and show how local2global can highlight anomalies in cybersecurity networks.
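
    The following is a minimal sketch of the alignment step on two patches using orthogonal Procrustes over their shared nodes; the data, dimensions, and noise level are illustrative, and the joint group synchronization over many patches is not shown.

```python
# A minimal sketch of the patch-alignment idea behind local2global: two
# independently trained patch embeddings are stitched together by
# estimating the rigid motion (orthogonal transform plus translation) that
# best aligns them on their overlapping nodes, via orthogonal Procrustes.
# All data here are made up.
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(3)
d, n_overlap = 8, 50

X = rng.standard_normal((n_overlap, d))        # patch-1 coords of shared nodes

# Patch 2 sees the same nodes rotated, shifted, and noisy, mimicking the
# arbitrary coordinate frame of an independent training run.
Q_true, _ = np.linalg.qr(rng.standard_normal((d, d)))
t_true = rng.standard_normal(d)
Y = X @ Q_true + t_true + 0.01 * rng.standard_normal((n_overlap, d))

# Recover the motion from the overlap: centre both sets, solve Procrustes.
Xc, Yc = X - X.mean(0), Y - Y.mean(0)
Q, _ = orthogonal_procrustes(Yc, Xc)           # maps patch-2 frame -> patch-1
Y_aligned = Yc @ Q + X.mean(0)

print("alignment RMSE:", np.sqrt(((Y_aligned - X) ** 2).mean()))
```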