
    Synthetic associative learning in engineered multicellular consortia

    Associative learning is one of the key mechanisms living organisms use to adapt to their changing environments. It was recognized early on as a general trait of complex multicellular organisms, but it has also been found in "simpler" ones. It has likewise been explored within synthetic biology, using molecular circuits directly inspired by neural network models of conditioning. These designs involve complex wiring diagrams to be implemented within a single cell, and the need for diverse molecular wires becomes a challenge that might be very difficult to overcome. Here we present three alternative circuit designs based on two-cell microbial consortia that properly display associative learning responses to two classes of stimuli and exhibit both long- and short-term memory (i.e. the association can be lost with time). These designs might be a helpful approach for engineering the human gut microbiome or even synthetic organoids, defining a new class of decision-making biological circuits capable of memory and adaptation to changing conditions. The potential implications and extensions are outlined.

    Mammalian Brain As a Network of Networks

    Acknowledgements: AZ, SG and AL acknowledge support from the Russian Science Foundation (16-12-00077). The authors thank T. Kuznetsova for Fig. 6.

    Neural network based architectures for aerospace applications

    A brief history of neural network research is given and some simple concepts are described. In addition, some neural network based avionics research and development programs are reviewed. The need for the United States Air Force and NASA to assume a leadership role in supporting this technology is stressed.

    Pattern memory analysis based on stability theory of cellular neural networks

    In this paper, several sufficient conditions are obtained to guarantee that an n-dimensional cellular neural network can have multiple (as many as 2^n) memory patterns. In addition, estimates of the attractive domains of such stable memory patterns are obtained. These conditions, which can be derived directly from the parameters of the neural network, are easily verified. A new design procedure for cellular neural networks is developed based on stability theory (rather than the well-known perceptron training algorithm), and convergence under the new design procedure is guaranteed by the local stability theorems obtained. Finally, the validity and performance of the results are illustrated by two examples.
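    The idea of designing weights directly from the desired memory patterns, and then verifying stability rather than running perceptron-style training, can be sketched as follows. This is a minimal illustration only: the ±1 patterns, the Hebbian-style weight rule, and the sign-activation update are hypothetical stand-ins, not the paper's actual design procedure.

```python
import numpy as np

# Hypothetical +/-1 memory patterns to store (rows)
patterns = np.array([[1, -1, 1, -1],
                     [1, 1, -1, -1]])
n = patterns.shape[1]

# Weights computed in closed form from the patterns (no iterative training)
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0)  # remove self-connections

def is_fixed_point(p):
    # A stored pattern is a memory if one update step leaves it unchanged
    return np.array_equal(np.sign(W @ p), p)

print([is_fixed_point(p) for p in patterns])  # both patterns are stable memories
```

    Because the two example patterns are orthogonal, each survives an update step unchanged, i.e. both are stable equilibria of the network, which is the kind of condition the stability theorems check directly from the network parameters.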

    A Broad Class of Discrete-Time Hypercomplex-Valued Hopfield Neural Networks

    In this paper, we address the stability of a broad class of discrete-time hypercomplex-valued Hopfield-type neural networks. To ensure that the neural networks in this class always settle down at a stationary state, we introduce novel hypercomplex number systems referred to as real-part associative hypercomplex number systems. Real-part associative hypercomplex number systems generalize the well-known Cayley-Dickson algebras and real Clifford algebras, and include the systems of real numbers, complex numbers, dual numbers, hyperbolic numbers, quaternions, tessarines, and octonions as particular instances. Apart from the novel hypercomplex number systems, we introduce a family of hypercomplex-valued activation functions called B-projection functions. Broadly speaking, a B-projection function projects the activation potential onto the set of all possible states of a hypercomplex-valued neuron. Using the theory presented in this paper, we confirm the stability analysis of several discrete-time hypercomplex-valued Hopfield-type neural networks from the literature. Moreover, we introduce and provide the stability analysis of a general class of Hopfield-type neural networks on Cayley-Dickson algebras.
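    The projection activation described above can be sketched for the complex-valued special case (complex numbers are one of the particular instances listed). This is a hedged illustration, not the paper's construction: the network size, the choice of 8th roots of unity as the neuron's state set, and the Hebbian-style Hermitian weights are all hypothetical.

```python
import numpy as np

def projection(u, states):
    # Projection-style activation: map the activation potential u to the
    # closest admissible neuron state (here, a root of unity).
    return states[np.argmin(np.abs(states - u))]

K = 8
states = np.exp(2j * np.pi * np.arange(K) / K)  # neuron states: 8th roots of unity

rng = np.random.default_rng(0)
xi = states[rng.integers(0, K, size=6)]   # one stored phasor pattern (hypothetical)

# Hermitian weight matrix with zero diagonal, built from the stored pattern
W = np.outer(xi, np.conj(xi))
np.fill_diagonal(W, 0)

# Asynchronous updates from a random initial state until a stationary state
x = states[rng.integers(0, K, size=6)]
for _ in range(50):
    x_new = x.copy()
    for i in range(len(x)):
        x_new[i] = projection(W[i] @ x_new, states)
    if np.allclose(x_new, x):   # settled down at a stationary state
        break
    x = x_new
```

    With these weights the stored pattern xi is itself a stationary state: updating any neuron of xi returns the same state, which is the settling behavior the stability theory guarantees for the broader hypercomplex class.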

    The hippocampus and cerebellum in adaptively timed learning, recognition, and movement

    The concepts of declarative memory and procedural memory have been used to distinguish two basic types of learning. A neural network model suggests how such memory processes work together as recognition learning, reinforcement learning, and sensory-motor learning take place during adaptive behaviors. To coordinate these processes, the hippocampal formation and cerebellum each contain circuits that learn to adaptively time their outputs. Within the model, hippocampal timing helps to maintain attention on motivationally salient goal objects during variable task-related delays, and cerebellar timing controls the release of conditioned responses. This property is part of the model's description of how cognitive-emotional interactions focus attention on motivationally valued cues, and how this process breaks down after hippocampal ablation. The model suggests that the hippocampal mechanisms that help to rapidly draw attention to salient cues could prematurely release motor commands were the release of these commands not adaptively timed by the cerebellum. The model hippocampal system modulates cortical recognition learning without actually encoding the representational information that the cortex encodes. These properties avoid the difficulties faced by several models that propose a direct hippocampal role in recognition learning. Learning within the model hippocampal system controls adaptive timing and spatial orientation. Model properties hereby clarify how hippocampal ablations cause amnesic symptoms and difficulties with tasks that combine task delays, novelty detection, and attention toward goal objects amid distractions.
When these model recognition, reinforcement, sensory-motor, and timing processes work together, they suggest how the brain can accomplish conditioning of multiple sensory events to delayed rewards, as during serial compound conditioning.

    Air Force Office of Scientific Research (F49620-92-J-0225, F49620-86-C-0037, 90-0128); Advanced Research Projects Agency (ONR N00014-92-J-4015); Office of Naval Research (N00014-91-J-4100, N00014-92-J-1309, N00014-92-J-1904); National Institute of Mental Health (MH-42900)