Neural coding strategies and mechanisms of competition
A long-running debate concerns whether neural representations are encoded
using a distributed or a local coding scheme. In both schemes, individual
neurons respond to specific patterns of presynaptic activity. Hence, rather
than being dichotomous, both coding
schemes are based on the same representational mechanism. We argue that a
population of neurons needs to be capable of learning both local and distributed
representations, as appropriate to the task, and should be capable of generating
both local and distributed codes in response to different stimuli. Many neural
network algorithms, which are often employed as models of cognitive processes,
fail to meet all these requirements. In contrast, we present a neural network
architecture that enables a single algorithm to efficiently learn, and respond
using, both types of coding scheme.
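The local/distributed distinction can be made concrete with a population sparseness measure (an illustrative sketch using the standard Treves-Rolls index, not the paper's algorithm):

```python
import numpy as np

def treves_rolls_sparseness(rates):
    """Treves-Rolls population sparseness: ~1/N for a local (one-hot)
    code, approaching 1 for a fully distributed code."""
    rates = np.asarray(rates, dtype=float)
    return float((rates.mean() ** 2) / (rates ** 2).mean())

local = np.zeros(100)
local[7] = 1.0                      # a single "grandmother cell" responds
distributed = np.ones(100)          # the whole population responds equally

print(round(treves_rolls_sparseness(local), 3))        # 0.01 -> local code
print(round(treves_rolls_sparseness(distributed), 3))  # 1.0  -> distributed code
```

A population that can produce both kinds of response to different stimuli would sweep this index between the two extremes.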
Gridbot: An autonomous robot controlled by a Spiking Neural Network mimicking the brain's navigational system
It is true that the "best" neural network is not necessarily the one with the
most "brain-like" behavior. Understanding biological intelligence, however, is
a fundamental goal for several distinct disciplines. Translating our
understanding of intelligence to machines is a fundamental problem in robotics.
Propelled by new advancements in Neuroscience, we developed a spiking neural
network (SNN) that draws from mounting experimental evidence that a number of
individual neurons are associated with spatial navigation. By following the
brain's structure, our model assumes no initial all-to-all connectivity, which
could inhibit its translation to neuromorphic hardware, and learns an
uncharted territory by mapping its identified components into a limited number
of neural representations, through spike-timing dependent plasticity (STDP). In
our ongoing effort to employ a bioinspired SNN-controlled robot to real-world
spatial mapping applications, we demonstrate here how an SNN may robustly
control an autonomous robot in mapping and exploring an unknown environment,
while compensating for its own intrinsic hardware imperfections, such as
partial or total loss of visual input.
Comment: 8 pages, 3 figures, International Conference on Neuromorphic Systems
(ICONS 2018).
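The STDP learning rule named above can be sketched in its generic pair-based textbook form (parameter values here are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: potentiate when the pre-synaptic spike precedes the
    post-synaptic spike (delta_t = t_post - t_pre > 0, in ms), else depress."""
    if delta_t > 0:
        return a_plus * np.exp(-delta_t / tau)    # LTP branch
    return -a_minus * np.exp(delta_t / tau)       # LTD branch

# A synapse drifts up or down with the relative timing of spike pairs.
w = 0.5
for dt in (5.0, 15.0, -5.0):     # pre-before-post twice, post-before-pre once
    w = float(np.clip(w + stdp_dw(dt), 0.0, 1.0))
print(round(w, 4))               # slightly above 0.5: net potentiation
```

Clipping to [0, 1] is one common way to keep weights bounded; without all-to-all connectivity, such updates only run over the sparse synapses that actually exist.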
Consciousness operates beyond the timescale for discerning time intervals: implications for Q-mind theories and analysis of quantum decoherence in brain
This paper presents in detail how subjective time is constructed by the brain cortex by reading packets of information called "time labels", produced by the right basal ganglia, which act as the brain's timekeeper. Psychophysiological experiments have measured the subjective "time quanta" to be 40 ms and show that consciousness operates beyond that scale - an important result with profound implications for the Q-mind theory. Although in most current mainstream biophysics research on cognitive processes the brain is modelled as a neural network obeying classical physics, Penrose (1989, 1997) and others have argued that quantum mechanics may play an essential role, and that successful brain simulations can only be performed with a quantum computer. Tegmark (2000) showed that the make-or-break issue for quantum models of mind is whether the relevant degrees of freedom of the brain can be sufficiently isolated to retain their quantum coherence, and tried to settle the issue with detailed calculations of the relevant decoherence rates. He concluded that the mind is a classical rather than a quantum system; however, his reasoning rests on biologically inconsistent assumptions. Here we present a detailed exposition of molecular neurobiology and define the dynamical timescale of cognitive processes linked to consciousness to be 10-15 ps, showing that macroscopic quantum coherent phenomena in the brain are not ruled out and may even provide insight into understanding life, information and consciousness.
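The quantitative crux of this argument can be written as a timescale comparison (notation ours, not the paper's): macroscopic quantum coherence can matter for cognition only if the decoherence time of the relevant degrees of freedom is not vastly shorter than the dynamical timescale of the processes linked to consciousness,

```latex
\tau_{\mathrm{dec}} \;\gtrsim\; \tau_{\mathrm{dyn}} \approx 10\text{--}15~\mathrm{ps}.
```

Lowering \tau_{\mathrm{dyn}} from the millisecond scale of classical neural-network models down to the picosecond scale of molecular dynamics is what, on the authors' account, keeps quantum coherent phenomena from being ruled out by decoherence estimates.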
Session 5: Development, Neuroscience and Evolutionary Psychology
Proceedings of the Pittsburgh Workshop in History and Philosophy of Biology, Center for Philosophy of Science, University of Pittsburgh, March 23-24, 2001. Session 5: Development, Neuroscience and Evolutionary Psychology.
Synaptic potentiation facilitates memory-like attractor dynamics in cultured in vitro hippocampal networks
Collective rhythmic dynamics from neurons is vital for cognitive functions
such as memory formation but how neurons self-organize to produce such activity
is not well understood. Attractor-based models have been successfully
implemented as a theoretical framework for memory storage in networks of
neurons. Activity-dependent modification of synaptic transmission is thought to
be the physiological basis of learning and memory. The goal of this study is to
demonstrate that a pharmacological perturbation shown to increase synaptic
strength, applied to in vitro networks of hippocampal neurons, produces
dynamics that follow the postulates theorized by attractor models. We use a grid of
extracellular electrodes to study changes in network activity after this
perturbation and show that there is a persistent increase in overall spiking
and bursting activity after treatment. This increase in activity appears to
recruit more "errant" spikes into bursts. Lastly, phase plots indicate a
conserved activity pattern, suggesting that the network is operating in a stable
dynamical state.
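The attractor postulates being tested can be illustrated with the classic Hopfield model (a hedged sketch of the theoretical framework, not of the cultured networks themselves): stored patterns become fixed points, and a corrupted cue relaxes back toward the nearest memory.

```python
import numpy as np

# Minimal Hopfield network: Hebbian weights make stored patterns
# fixed-point attractors, so a noisy cue is pulled back to the memory.
rng = np.random.default_rng(0)
n = 64
patterns = rng.choice([-1, 1], size=(3, n))       # three stored memories

W = sum(np.outer(p, p) for p in patterns) / n     # Hebbian outer-product rule
np.fill_diagonal(W, 0)                            # no self-connections

cue = patterns[0].copy()
flip = rng.choice(n, size=8, replace=False)       # corrupt 8 of 64 bits
cue[flip] *= -1

state = cue
for _ in range(10):                               # synchronous updates
    state = np.sign(W @ state)
    state[state == 0] = 1

print(int(np.sum(state == patterns[0])))          # bits matching the stored memory
```

With only three patterns in 64 units the network is far below capacity, so the corrupted cue is recovered essentially perfectly.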
Self-Organized Criticality in Developing Neuronal Networks
Recently, evidence has accumulated that many neural networks exhibit self-organized criticality. In this state, activity is similar across temporal scales, which is beneficial for information flow. If subcritical, activity can die out; if supercritical, epileptiform patterns may occur. Little is known about how developing networks reach and stabilize criticality. Here we monitor the development, between 13 and 95 days in vitro (DIV), of cortical cell cultures (n = 20) and find four different phases related to their morphological maturation: an initial low-activity state (≈19 DIV) is followed by a supercritical (≈20 DIV) and then a subcritical one (≈36 DIV), until the network finally reaches stable criticality (≈58 DIV). Using network modeling and mathematical analysis, we describe the dynamics of the emergent connectivity in such developing systems. Based on physiological observations, synaptic development in the model is driven by the neurons' need to adjust their connectivity to reach, on average, firing-rate homeostasis. We predict a specific time course for the maturation of inhibition, with strong onset and delayed pruning, and that total synaptic connectivity should be strongly linked to the relative levels of excitation and inhibition. These results demonstrate that the interplay between activity and connectivity guides developing networks into criticality, suggesting that this may be a generic and stable state of many networks in vivo and in vitro.
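Criticality in such recordings is commonly quantified by the branching parameter σ, the average number of descendant spikes per ancestor spike: σ ≈ 1 is critical, σ < 1 subcritical, σ > 1 supercritical. A naive estimator on a toy branching process (our sketch, not the paper's analysis pipeline):

```python
import numpy as np

def branching_ratio(counts):
    """Naive branching-parameter estimate: mean ratio of population
    activity in bin t+1 to activity in bin t (over nonzero bins)."""
    counts = np.asarray(counts, dtype=float)
    prev, nxt = counts[:-1], counts[1:]
    mask = prev > 0
    return float(np.mean(nxt[mask] / prev[mask]))

# Toy branching process: each spike triggers Poisson(sigma) spikes in the
# next time bin; started large so the estimate is taken before extinction.
rng = np.random.default_rng(1)

def simulate(sigma, a0=1000, steps=50):
    a, out = a0, [a0]
    for _ in range(steps):
        a = rng.poisson(sigma * a)
        out.append(a)
        if a == 0:
            break
    return out

print(round(branching_ratio(simulate(0.9)), 2))   # close to 0.9 (subcritical)
```

Real spike-train analyses must additionally correct for subsampling and external drive, which bias this naive ratio.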
Monitoring and Control Framework for Advanced Power Plant Systems Using Artificial Intelligence Techniques
This dissertation presents the design, development, and simulation testing of a monitoring and control framework for dynamic systems using artificial intelligence techniques. A comprehensive monitoring and control system capable of detecting, identifying, evaluating, and accommodating various subsystem failures and upset conditions is presented. The system is developed by synergistically merging concepts inspired by the biological immune system with evolutionary optimization algorithms and adaptive control techniques.

The proposed methodology provides the tools for addressing the complexity and multi-dimensionality of modern power plants in a comprehensive and integrated manner that classical approaches cannot achieve. Current approaches typically address abnormal condition (AC) detection in isolated subsystems of low complexity, affected by specific ACs involving few features, with limited identification capability. They do not attempt AC evaluation and mostly rely on control system robustness for accommodation. Addressing the problem of power plant monitoring and control under ACs at this level of completeness has not previously been attempted.

Within the proposed framework, a novel algorithm, namely the partition of the universe, was developed for building the artificial immune system self. Compared to the clustering approach, the proposed approach is less computationally intensive and facilitates the use of a full-dimensional self for system AC detection, identification, and evaluation. The approach is implemented in conjunction with a modified and improved dendritic cell algorithm. It allows the failed subsystems to be identified without previous training and is extended to address AC evaluation using a novel approach.

The adaptive control laws are designed to augment the performance and robustness of baseline control laws under normal and abnormal operating conditions. Artificial neural network-based and artificial immune system-based approaches are developed and investigated for an advanced power plant through numerical simulation.

This dissertation also presents the development of an interactive computational environment for the optimization of power plant control systems using evolutionary techniques with immunity-inspired enhancements. Several algorithms mimicking mechanisms of the immune system of higher organisms, such as cloning, affinity-based selection, seeding, and vaccination, are used. These algorithms are expected to enhance computational effectiveness, improve convergence, and handle multiple local extrema more efficiently, through an adequate balance between exploration and exploitation.

The monitoring and control framework formulated in this dissertation applies to a wide range of technical problems. The proposed methodology is demonstrated, with promising results, using a high-validity Dynsim model of the acid gas removal unit that is part of the integrated gasification combined cycle power plant available at the West Virginia University AVESTAR Center. The obtained results show that the proposed system is an efficient and valuable technique for real-world applications. The implementation of this methodology can potentially have significant impacts on the operational safety of many complex systems.
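The artificial-immune-system "self" idea can be illustrated with a minimal sketch (ours; the dissertation's partition-of-the-universe and dendritic cell algorithms are not reproduced here): characterize normal operating data as the self, then flag samples falling outside it as abnormal conditions.

```python
import numpy as np

# Sketch of immune-inspired anomaly detection: the "self" is a cloud of
# feature vectors recorded under normal operation; a new sample is an
# abnormal condition (AC) if it lies outside the self region.
rng = np.random.default_rng(2)
self_samples = rng.normal(0.0, 0.1, size=(200, 2))   # normal-operation features
radius = 0.3                                          # tolerance around self samples

def abnormal(x):
    """A sample is abnormal if it is farther than `radius`
    from every known self sample."""
    d = np.linalg.norm(self_samples - np.asarray(x, dtype=float), axis=1)
    return bool(np.min(d) > radius)

print(abnormal([0.02, -0.01]))   # inside the self region -> False
print(abnormal([0.9, 0.9]))      # far outside the self region -> True
```

Partitioning the feature space instead of storing raw samples, as the dissertation proposes, trades this per-sample distance computation for a cheaper cell-membership lookup.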
Cartesian Abstraction Can Yield ‘Cognitive Maps’
It has long been debated how the so-called cognitive map, the set of place cells, develops in the rat hippocampus. The function of this structure is of high relevance, since the hippocampus is the key component of the medial temporal lobe memory system, responsible for forming episodic memory and declarative memory, the memory for facts and rules that serves cognition in humans. Here, a general mechanism is put forth: we introduce the novel concept of Cartesian factors. We show a non-linear projection of observations to a discretized representation of a Cartesian factor in the presence of a representation of a complementing one. The computational model is demonstrated for place cells, which we produce from egocentric observations and head-direction signals. Head-direction signals make up the observed factor, and sparse allothetic signals make up the complementing Cartesian one. We present numerical results, connect the model to the neural substrate, and elaborate on the differences between this model and others, including Slow Feature Analysis [17].
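A toy sketch of the idea (our construction with assumed geometry, not the paper's model): combining an egocentric observation with the head-direction signal recovers the complementing allocentric factor, which can then be discretized into place-like bins.

```python
import numpy as np

# Toy illustration: an egocentric observation plus a head-direction signal
# determines an allocentric position (the complementing Cartesian factor),
# which is discretized into place-cell-like bins.
landmark = np.array([1.0, 2.0])        # assumed allocentric landmark position

def place_bin(ego_offset, head_dir, bins=4, extent=4.0):
    """ego_offset: the landmark seen in body-centred coordinates.
    Rotating by head_dir and subtracting from the landmark recovers
    the allocentric self-position, mapped to a discrete grid bin."""
    c, s = np.cos(head_dir), np.sin(head_dir)
    R = np.array([[c, -s], [s, c]])
    pos = landmark - R @ np.asarray(ego_offset)       # allocentric position
    ij = np.clip((pos + extent / 2) / extent * bins, 0, bins - 1).astype(int)
    return tuple(int(v) for v in ij)

# The same location yields the same bin whatever the head direction:
pos_true = np.array([0.5, -0.5])
for theta in (0.0, np.pi / 3):
    R_inv = np.array([[np.cos(theta), np.sin(theta)],
                      [-np.sin(theta), np.cos(theta)]])
    ego = R_inv @ (landmark - pos_true)               # what the animal "sees"
    print(place_bin(ego, theta))                      # (2, 1) both times
```

The paper learns this factorization from data rather than computing it geometrically; the sketch only shows why head direction plus egocentric input suffices in principle.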
Cellular Automata Applications in Shortest Path Problem
Cellular Automata (CAs) are computational models that can capture the
essential features of systems in which global behavior emerges from the
collective effect of simple components, which interact locally. During the last
decades, CAs have been extensively used to mimic several natural processes and
systems and to find good solutions to many complex, hard-to-solve computer
science and engineering problems. Among them, the shortest path problem is one
of the most prominent and highly studied problems that scientists have been
trying to tackle using a plethora of methodologies and even unconventional
approaches. The proposed solutions are mainly justified by their ability to
provide a correct solution in a better time complexity than the renowned
Dijkstra's algorithm. Although the suggested algorithms vary widely in
algorithmic complexity, spanning from simplistic graph traversal algorithms to
complex nature-inspired and bio-mimicking algorithms, in this chapter we focus
on the successful application of CAs to the shortest path problem as found in
diverse disciplines like computer
science, swarm robotics, computer networks, decision science and biomimicking
of biological organisms' behaviour. In particular, an introduction to the first
CA-based algorithm tackling the shortest path problem is provided in detail.
After a short presentation of shortest path algorithms arising from the
relaxation of the CA principles, the application of the CA-based shortest
path definition to the coordinated motion of swarm robotics is also introduced.
Moreover, the CA-based application of shortest path finding in computer
networks is presented in brief. Finally, a CA that models exactly the behavior
of a biological organism, namely Physarum's behavior, finding the
minimum-length path between two points in a labyrinth, is given.
Comment: To appear in the book: Adamatzky, A. (Ed.), Shortest Path Solvers:
From Software to Wetware. Springer, 201
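In its simplest form, the CA approach to grid shortest paths is a synchronous wavefront: every free cell repeatedly takes the minimum neighbour label plus one until the lattice stabilises. A minimal sketch (ours, with an assumed grid encoding):

```python
# CA-style wavefront for grid shortest paths: all cells update
# simultaneously from their von Neumann neighbours until no label changes.
INF = float("inf")

def ca_shortest_path(grid, src, dst):
    """grid: list of strings, '#' = obstacle, '.' = free cell.
    Returns the shortest path length from src to dst, or None."""
    h, w = len(grid), len(grid[0])
    dist = [[INF] * w for _ in range(h)]
    dist[src[0]][src[1]] = 0
    changed = True
    while changed:                           # one synchronous CA sweep
        changed = False
        new = [row[:] for row in dist]
        for r in range(h):
            for c in range(w):
                if grid[r][c] == '#':
                    continue
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < h and 0 <= cc < w and dist[rr][cc] + 1 < new[r][c]:
                        new[r][c] = dist[rr][cc] + 1
                        changed = True
        dist = new
    d = dist[dst[0]][dst[1]]
    return None if d == INF else d

maze = ["....#",
        ".##.#",
        ".#..#",
        ".#.##",
        "....."]
print(ca_shortest_path(maze, (0, 0), (4, 4)))   # -> 8 (4 down + 4 right)
```

Each sweep advances the wavefront by one cell, so the number of sweeps equals the longest shortest-path distance; following decreasing labels from the destination recovers an actual path.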