
    A Novel Neuroglial Architecture for Modelling Singular Perturbation System

    This work develops a new modular architecture that emulates a recently discovered biological paradigm. It originates from the human brain, where information flows along two different pathways and is processed on two time scales: a fast neural network (NN) and a slower glial network (GN). It has been found that the neural network is powered and controlled by the glial network. Based on this biological knowledge of glial cells and the powerful concept of modularity, a novel approach called the artificial neuroglial network (ANGN) was designed, together with an algorithm built on different notions of modularity. The implementation rests on the theory of multi-time-scale systems. Validation is performed on an asynchronous machine (ASM) modeled in the standard singularly perturbed form. We apply a geometrical approach based on Gerschgorin's circle theorem (GCT) to separate the fast and slow variables, and the singular perturbation method (SPM) to determine the reduced models. This new architecture makes it possible to obtain smaller networks with lower complexity and better performance.
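
    As a rough illustration of the fast/slow separation step (a minimal sketch, not the paper's algorithm), each Gerschgorin disc of a linear state matrix is centred at a diagonal entry with a radius equal to the sum of the off-diagonal magnitudes in its row, and states whose discs lie far from the origin can be treated as fast. The example matrix, the threshold value, and the function name below are illustrative assumptions.

```python
import numpy as np

def gerschgorin_split(A, threshold):
    """Classify state variables as slow or fast from the Gerschgorin discs
    of the state matrix A: disc i is centred at A[i, i] with radius equal to
    the sum of the off-diagonal magnitudes in row i."""
    A = np.asarray(A, dtype=float)
    centres = np.diag(A)
    radii = np.sum(np.abs(A), axis=1) - np.abs(centres)
    # Crude separation rule: the entire disc lies beyond the threshold.
    fast = np.where(np.abs(centres) - radii > threshold)[0]
    slow = np.setdiff1d(np.arange(A.shape[0]), fast)
    return slow, fast

# Illustrative two-time-scale state matrix (values invented for the example).
A = np.array([[-1.0,  0.2,   0.1],
              [ 0.3, -2.0,   0.2],
              [ 1.0,  0.5, -50.0]])
slow, fast = gerschgorin_split(A, threshold=10.0)
print("slow states:", slow, "fast states:", fast)
```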

    Robust Detection of Dynamic Community Structure in Networks

    We describe techniques for the robust detection of community structure in some classes of time-dependent networks. Specifically, we consider the use of statistical null models for facilitating the principled identification of structural modules in semi-decomposable systems. Null models play an important role both in the optimization of quality functions such as modularity and in the subsequent assessment of the statistical validity of identified community structure. We examine the sensitivity of such methods to model parameters and show how comparisons to null models can help identify system scales. By considering a large number of optimizations, we quantify the variance of network diagnostics over optimizations ('optimization variance') and over randomizations of network structure ('randomization variance'). Because the modularity quality function typically has a large number of nearly degenerate local optima for networks constructed from real data, we develop a method for constructing representative partitions that uses a null model to correct for statistical noise in sets of partitions. To illustrate our results, we employ ensembles of time-dependent networks extracted from both nonlinear oscillators and empirical neuroscience data.
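
    As a rough single-layer illustration of these diagnostics (a sketch, not the authors' multilayer framework), the code below uses NetworkX's Louvain heuristic (networkx >= 2.8) as a stand-in optimizer: the spread of modularity over repeated optimizations of one network approximates the 'optimization variance', and the spread over degree-preserving rewirings approximates the 'randomization variance'. The example graph, run counts, and swap numbers are illustrative assumptions.

```python
import numpy as np
import networkx as nx
from networkx.algorithms import community

def q_samples(G, n_runs=50):
    """Modularity values from repeated Louvain optimizations of one network."""
    return np.array([
        community.modularity(G, community.louvain_communities(G, seed=s))
        for s in range(n_runs)
    ])

G = nx.karate_club_graph()  # stand-in for a single time-window network

# 'Optimization variance': spread of Q over restarts on the observed network.
q_obs = q_samples(G)

# 'Randomization variance': spread of Q over degree-preserving rewirings,
# i.e. samples from a configuration-model-like null.
q_null = []
for s in range(50):
    R = G.copy()
    nx.double_edge_swap(R, nswap=10 * R.number_of_edges(),
                        max_tries=100 * R.number_of_edges(), seed=s)
    q_null.append(community.modularity(R, community.louvain_communities(R, seed=s)))
q_null = np.array(q_null)

print(f"observed Q = {q_obs.mean():.3f} +/- {q_obs.std():.3f}")
print(f"null     Q = {q_null.mean():.3f} +/- {q_null.std():.3f}")
```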

    Dynamic reconfiguration of human brain networks during learning

    Human learning is a complex phenomenon requiring flexibility to adapt existing brain function and precision in selecting new neurophysiological activities to drive desired behavior. These two attributes -- flexibility and selection -- must operate over multiple temporal scales as performance of a skill changes from being slow and challenging to being fast and automatic. Such selective adaptability is naturally provided by modular structure, which plays a critical role in evolution, development, and optimal network function. Using functional connectivity measurements of brain activity acquired from initial training through mastery of a simple motor skill, we explore the role of modularity in human learning by identifying dynamic changes in modular organization spanning multiple temporal scales. Our results indicate that flexibility, which we measure by the allegiance of nodes to modules, in one experimental session predicts the relative amount of learning in a future session. We also develop a general statistical framework for the identification of modular architectures in evolving systems, which is broadly applicable to disciplines where network adaptability is crucial to the understanding of system performance.
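
    The flexibility statistic itself is simple to compute once module labels are available; the sketch below assumes the labels have already been produced by some dynamic community detection step (not shown) and uses a toy label array. Flexibility is taken here as the fraction of consecutive time windows in which a node changes its module assignment.

```python
import numpy as np

def flexibility(module_labels):
    """Node flexibility from a (T, N) array of module labels: the fraction of
    consecutive time windows in which a node's module assignment changes.
    0 means the node never switches modules; 1 means it switches every step."""
    labels = np.asarray(module_labels)
    changes = labels[1:] != labels[:-1]   # (T-1, N) boolean array
    return changes.mean(axis=0)

# Toy example: 4 time windows, 5 nodes (labels are arbitrary module ids).
labels = np.array([[0, 0, 1, 1, 2],
                   [0, 1, 1, 1, 2],
                   [0, 1, 0, 1, 2],
                   [0, 0, 0, 1, 2]])
print(flexibility(labels))  # nodes 0, 3 and 4 never switch; nodes 1 and 2 do
```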

    Machine Learning for Fluid Mechanics

    The field of fluid mechanics is rapidly advancing, driven by unprecedented volumes of data from field measurements, experiments, and large-scale simulations at multiple spatiotemporal scales. Machine learning offers a wealth of techniques for extracting information from data that can be translated into knowledge about the underlying fluid mechanics. Moreover, machine learning algorithms can augment domain knowledge and automate tasks related to flow control and optimization. This article presents an overview of the history, current developments, and emerging opportunities of machine learning for fluid mechanics. It outlines fundamental machine learning methodologies and discusses their uses for understanding, modeling, optimizing, and controlling fluid flows. The strengths and limitations of these methods are addressed from the perspective of scientific inquiry that considers data as an inherent part of modeling, experimentation, and simulation. Machine learning provides a powerful information-processing framework that can enrich, and possibly even transform, current lines of fluid mechanics research and industrial applications.

    Identification and Control of Nonlinear Singularly Perturbed Systems Using Multi-time-scale Neural Networks

    Many industrial systems are nonlinear with "slow" and "fast" dynamics because of the presence of "parasitic" parameters such as small time constants, resistances, inductances, capacitances, masses and moments of inertia. Such systems are usually labelled "singularly perturbed" or "multi-time-scale" systems. Singular perturbation theory has proved to be a useful tool for analyzing and controlling singularly perturbed systems when full knowledge of the system model parameters is available. However, accurate and faithful mathematical models of these systems are usually difficult to obtain because of uncertainties and nonlinearities. To obtain accurate system models, this research first proposes a new identification scheme for discrete-time nonlinear singularly perturbed systems using a multi-time-scale neural network and the optimal bounded ellipsoid method. Compared with other gradient-descent-based identification schemes, the proposed method achieves faster convergence and higher accuracy thanks to its adaptively adjusted learning gain. The optimal-bounded-ellipsoid-based identification method for discrete-time systems is then extended to the identification of continuous-time singularly perturbed systems. Subsequently, by adding two terms to the weight-updating laws, a modified identification scheme is proposed to guarantee the effectiveness of the identification algorithm throughout the identification process. Lastly, by introducing filtered variables, a robust neural network training algorithm is proposed for system identification subject to measurement noise. Based on the identification results, singular perturbation theory is used to decompose a high-order multi-time-scale system into two low-order subsystems -- the reduced slow subsystem and the reduced fast subsystem -- and two controllers are then designed for the two subsystems separately. Using singular perturbation theory, an adaptive controller for a regulation problem is designed first; because the system order is reduced, this controller has a simpler structure and requires far fewer computational resources than conventional controllers. An indirect adaptive controller is then proposed for the trajectory-tracking problem. The stability of both the identification and control schemes is analyzed through the Lyapunov approach, and the effectiveness of the identification and control algorithms is demonstrated through simulations and experiments.
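
    The slow/fast decomposition referred to above can be illustrated for the standard linear singularly perturbed form; this is the textbook quasi-steady-state reduction, a sketch rather than the thesis's neural-network identifier or controllers, and the matrices below are invented for the example.

```python
import numpy as np

def reduced_models(A11, A12, A21, A22, B1, B2):
    """Quasi-steady-state reduction of the linear singularly perturbed system
        dx/dt     = A11 x + A12 z + B1 u   (slow states x)
        eps dz/dt = A21 x + A22 z + B2 u   (fast states z)
    Setting eps = 0 gives z ~ -inv(A22) (A21 x + B2 u), which yields the
    reduced slow model; freezing x gives the boundary-layer (fast) model
    in the stretched time tau = t / eps."""
    A22_inv = np.linalg.inv(A22)  # A22 is assumed nonsingular
    A_slow = A11 - A12 @ A22_inv @ A21
    B_slow = B1 - A12 @ A22_inv @ B2
    return (A_slow, B_slow), (A22, B2)

# Illustrative numbers (not from the thesis): one slow and one fast state.
A11, A12 = np.array([[-1.0]]), np.array([[0.5]])
A21, A22 = np.array([[1.0]]),  np.array([[-20.0]])
B1,  B2  = np.array([[1.0]]),  np.array([[2.0]])
slow, fast = reduced_models(A11, A12, A21, A22, B1, B2)
print("reduced slow model (A, B):", slow)
print("boundary-layer fast model (A, B):", fast)
```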