Fractals in the Nervous System: Conceptual Implications for Theoretical Neuroscience
This essay is presented with two principal objectives in mind: first, to
document the prevalence of fractals at all levels of the nervous system, giving
credence to the notion of their functional relevance; and second, to draw
attention to the as yet unresolved issues of the detailed relationships
among power-law scaling, self-similarity, and self-organized criticality. As
regards criticality, I will document that it has become a pivotal reference
point in neurodynamics. Furthermore, I will emphasize the not yet fully
appreciated significance of allometric control processes. For dynamic fractals,
I will assemble reasons for attributing to them the capacity to adapt task
execution to contextual changes across a range of scales. The final section
offers general reflections on the implications of the reviewed data and
identifies what appear to be issues of fundamental importance for future
research in the rapidly evolving topic of this review.
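To make the recurring notion of power-law scaling concrete, here is a minimal sketch (an editorial illustration, not material from the essay): synthetic neuronal-avalanche sizes are drawn from an assumed power law P(s) proportional to s^(-alpha), and the exponent is recovered with the standard continuous maximum-likelihood estimator; the synthetic data, the value of alpha, and the choice of estimator are all assumptions made only for illustration.

    # Hedged sketch: estimate a power-law exponent from (synthetic) avalanche sizes.
    import numpy as np

    def estimate_power_law_exponent(sizes, s_min=1.0):
        """Continuous MLE: alpha = 1 + n / sum(ln(s / s_min)), over s >= s_min."""
        s = np.asarray(sizes, dtype=float)
        s = s[s >= s_min]
        return 1.0 + len(s) / np.sum(np.log(s / s_min))

    # Synthetic sizes from P(s) ~ s**(-1.5) via inverse-CDF sampling (s_min = 1).
    rng = np.random.default_rng(0)
    sizes = (1.0 - rng.random(10_000)) ** (-1.0 / 0.5)
    print(estimate_power_law_exponent(sizes))   # recovers roughly 1.5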
AI of Brain and Cognitive Sciences: From the Perspective of First Principles
In recent years, we have witnessed the great success of AI in various applications,
including image classification, game playing, protein structure analysis,
language translation, and content generation. Despite these powerful
applications, there are still many tasks in our daily life that are rather
simple for humans but pose great challenges to AI. These include image and
language understanding, few-shot learning, abstract concepts, and low-energy
computing. Thus, learning from the brain remains a promising way to shed light
on the development of next-generation AI. The brain is arguably the only known
intelligent machine in the universe, the product of evolution for animals
surviving in the natural environment. At the behavioral
level, psychology and cognitive sciences have demonstrated that human and
animal brains can execute very intelligent high-level cognitive functions. At
the structural level, cognitive and computational neurosciences have revealed
that the brain has extremely complicated but elegant network forms to support
its functions. Over the years, researchers have gathered knowledge about the
structure and functions of the brain, and this process has accelerated
recently with the initiation of large-scale brain projects worldwide. Here, we
argue that the general principles of brain function are the most valuable
source of inspiration for the development of AI. These general principles are
the standard rules by which the brain extracts, represents, manipulates, and
retrieves information, and
here we call them the first principles of the brain. This paper collects six
such first principles. They are attractor networks, criticality, random
networks, sparse coding, relational memory, and perceptual learning. On each
topic, we review its biological background, fundamental properties, potential
applications to AI, and future developments.
Comment: 59 pages, 5 figures, review article
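As a concrete illustration of the first principle listed above, the attractor network, here is a minimal Hopfield-style sketch (an editorial illustration; the paper's own formulations are not reproduced). Binary patterns are stored as fixed points of the network dynamics and are recovered from corrupted cues, which is the basic retrieval behavior the principle refers to.

    # Hedged sketch: Hebbian storage and iterative recall in a Hopfield attractor network.
    import numpy as np

    def train_hopfield(patterns):
        """Hebbian weights from +/-1 patterns; self-connections removed."""
        n = patterns.shape[1]
        W = patterns.T @ patterns / n
        np.fill_diagonal(W, 0.0)
        return W

    def recall(W, cue, steps=20):
        s = cue.astype(float).copy()
        for _ in range(steps):
            s = np.sign(W @ s)
            s[s == 0] = 1.0              # break ties deterministically
        return s

    rng = np.random.default_rng(1)
    memories = rng.choice([-1, 1], size=(3, 100))                        # three stored patterns
    noisy = memories[0] * rng.choice([1, -1], size=100, p=[0.9, 0.1])    # flip ~10% of bits
    recovered = recall(train_hopfield(memories), noisy)
    print(np.mean(recovered == memories[0]))                             # close to 1.0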
Dynamics of embodied dissociated cortical cultures for the control of hybrid biological robots.
This thesis presents a new paradigm for studying the importance of interactions between an organism and its environment using a combination of biology and technology: embodying cultured cortical neurons via robotics. From this platform, explanations of the emergent neural network properties leading to cognition are sought through detailed electrical observation of neural activity. By growing networks of neurons and glia over multi-electrode arrays (MEAs), which can be used to both stimulate and record the activity of multiple neurons in parallel over months, long-term, real-time, two-way communication with the neural network becomes possible. A better understanding of the processes leading to biological cognition can, in turn, facilitate progress in understanding neural pathologies, designing neural prosthetics, and creating fundamentally different types of artificial cognition.
Here, methods were first developed to reliably induce and detect neural plasticity using MEAs. This knowledge was then applied to construct sensory-motor mappings and training algorithms that produced adaptive goal-directed behavior. To summarize the results: almost any stimulation could induce neural plasticity, whereas temporal and/or spatial information about neural activity was needed to detect it. Interestingly, plasticity of action potential propagation in axons was also observed, a finding that runs counter to the dominant theories of neural plasticity, which focus on synaptic efficacies, and that suggests a vast and novel computational mechanism for learning and memory in the brain.
Adaptive goal-directed behavior was achieved by using patterned training stimuli, contingent on behavioral performance, to sculpt the network into behaviorally appropriate functional states: network plasticity was not only induced, but could be customized. Clinically, understanding the relationships between electrical stimulation, neural activity, and the functional expression of neural plasticity could assist neuro-rehabilitation and the design of neuroprosthetics. In a broader context, the networks were also embodied with a robotic drawing machine exhibited in galleries throughout the world. This provided a forum to educate the public and to critically discuss neuroscience, robotics, neural interfaces, cybernetics, bio-art, and the ethics of biotechnology.
Ph.D. thesis. Committee Chair: Steve M. Potter; Committee Members: Eric Schumacher, Robert J. Butera, Stephan P. DeWeerth, Thomas D. DeMars
The computational role of short-term plasticity and the balance of excitation and inhibition in neural microcircuits: experimental and theoretical analysis
The computations performed by the brain ultimately rely on the
functional connectivity between neurons embedded in complex networks. It is
well known that the neuronal connections, the synapses, are plastic, i.e. the
contribution of each presynaptic neuron to the firing of a postsynaptic neuron
can be independently adjusted. The modulation of effective synaptic strength
can occur on time scales that range from tens or hundreds of milliseconds, to
tens of minutes or hours, to days, and may involve pre- and/or post-synaptic
modifications. The collection of these mechanisms is generally believed to
underlie learning and memory and, hence, it is fundamental to understand their
consequences for the behavior of neurons. (...)
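Since the abstract is truncated, the sketch below simply illustrates one standard phenomenological account of short-term plasticity, a Tsodyks-Markram-style depression/facilitation scheme, as an example of modulation of effective synaptic strength on the sub-second timescales mentioned above; it is an assumption for illustration and not necessarily the specific model analyzed in the thesis.

    # Hedged sketch: per-spike synaptic efficacy under short-term depression/facilitation.
    import numpy as np

    def short_term_plasticity(spike_times, U=0.2, tau_rec=0.5, tau_facil=0.05):
        """Return the effective efficacy u*x at each presynaptic spike (times in seconds)."""
        x, u, last_t = 1.0, 0.0, None
        efficacies = []
        for t in spike_times:
            if last_t is not None:
                dt = t - last_t
                x = 1.0 - (1.0 - x) * np.exp(-dt / tau_rec)   # resources recover toward 1
                u = u * np.exp(-dt / tau_facil)               # facilitation decays toward 0
            u = u + U * (1.0 - u)        # facilitation increment at the spike
            efficacies.append(u * x)     # fraction of resources released
            x = x * (1.0 - u)            # depletion of available resources
            last_t = t
        return efficacies

    print(short_term_plasticity(np.arange(0.0, 0.5, 0.05)))   # efficacy along a 20 Hz train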
Brain-Inspired Computational Intelligence via Predictive Coding
Artificial intelligence (AI) is rapidly becoming one of the key technologies
of this century. The majority of results in AI thus far have been achieved
using deep neural networks trained with the error backpropagation learning
algorithm. However, the ubiquitous adoption of this approach has highlighted
some important limitations such as substantial computational cost, difficulty
in quantifying uncertainty, lack of robustness, unreliability, and biological
implausibility. It is possible that addressing these limitations may require
schemes that are inspired and guided by neuroscience theories. One such theory,
called predictive coding (PC), has shown promising performance in machine
intelligence tasks, exhibiting exciting properties that make it potentially
valuable for the machine learning community: PC can model information
processing in different brain areas, can be used in cognitive control and
robotics, and has a solid mathematical grounding in variational inference,
offering a powerful inversion scheme for a specific class of continuous-state
generative models. With the hope of foregrounding research in this direction,
we survey the literature that has contributed to this perspective, highlighting
the many ways that PC might play a role in the future of machine learning and
computational intelligence at large.
Comment: 37 pages, 9 figures
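To ground the survey's description, here is a minimal predictive-coding sketch (an editorial toy example, not the paper's reference implementation): a latent state is inferred by iteratively descending the prediction error, and the generative weights are then adjusted using only locally available error signals, which is the local, error-driven flavor of learning the abstract alludes to.

    # Hedged sketch: one-layer predictive coding with iterative inference and local updates.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(0.0, 0.1, size=(10, 4))     # generative weights: latent -> data

    def pc_step(x, W, n_inference=50, lr_z=0.1, lr_w=0.01):
        z = np.zeros(W.shape[1])               # latent estimate, inferred from scratch
        for _ in range(n_inference):
            error = x - W @ z                  # prediction error at the data layer
            z = z + lr_z * (W.T @ error - z)   # error-driven inference (Gaussian prior on z)
        error = x - W @ z
        W = W + lr_w * np.outer(error, z)      # local, Hebbian-like weight update
        return W, z

    x = rng.normal(size=10)                    # a single observed data vector
    _, z0 = pc_step(x, W.copy(), lr_w=0.0)     # inference only, before any learning
    print(np.linalg.norm(x - W @ z0))          # prediction error before training
    for _ in range(200):
        W, z = pc_step(x, W)
    print(np.linalg.norm(x - W @ z))           # noticeably smaller after training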
Neuromorphic Engineering Editors' Pick 2021
This collection showcases well-received spontaneous articles from the past couple of years, which have been specially handpicked by our Chief Editors, Profs. André van Schaik and Bernabé Linares-Barranco. The work presented here highlights the broad diversity of research performed across the section and aims to put a spotlight on the main areas of interest. All research presented here displays strong advances in theory, experiment, and methodology with applications to compelling problems. This collection aims to further support Frontiers’ strong community by recognizing highly deserving authors.
Single Biological Neurons as Temporally Precise Spatio-Temporal Pattern Recognizers
This PhD thesis is focused on the central idea that single neurons in the
brain should be regarded as temporally precise and highly complex
spatio-temporal pattern recognizers. This stands in contrast to the view,
prevalent among most neuroscientists today, of biological neurons as simple
and mainly spatial pattern recognizers. In this thesis, I will attempt to
demonstrate that this
is an important distinction, predominantly because the above-mentioned
computational properties of single neurons have far-reaching implications with
respect to the various brain circuits that neurons compose, and on how
information is encoded by neuronal activity in the brain. Namely, that these
particular "low-level" details at the single neuron level have substantial
system-wide ramifications. In the introduction we will highlight the main
components that comprise a neural microcircuit that can perform useful
computations and illustrate the inter-dependence of these components from a
system perspective. In chapter 1 we discuss the great complexity of the
spatio-temporal input-output relationship of cortical neurons, which results
from the morphological structure and biophysical properties of the neuron. In
chapter 2 we demonstrate that single neurons can generate temporally precise
output patterns in response to specific spatio-temporal input patterns with a
very simple biologically plausible learning rule. In chapter 3, we use the
differentiable deep network analog of a realistic cortical neuron as a tool to
approximate the gradient of the output of the neuron with respect to its input
and use this capability in an attempt to teach the neuron to perform the
nonlinear XOR operation. In chapter 4 we expand on chapter 3 to describe the
extension of our ideas to neuronal networks composed of many realistic
biological spiking neurons that represent either small microcircuits or entire
brain regions
- …
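To illustrate the idea behind chapter 3 in miniature, the sketch below trains a small differentiable surrogate, standing in for the deep-network analog of a cortical neuron, to produce the nonlinear XOR mapping by gradient descent; the architecture, sizes, and learning rate are assumptions made purely for illustration and are not the thesis's actual model.

    # Hedged sketch: gradient-based fitting of XOR with a tiny differentiable surrogate.
    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0.0, 1.0, 1.0, 0.0])            # XOR targets

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(0.0, 1.0, (2, 8)), np.zeros(8)
    W2, b2 = rng.normal(0.0, 1.0, 8), 0.0
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

    for _ in range(10_000):
        h = sigmoid(X @ W1 + b1)                  # "dendritic" nonlinear subunits
        out = sigmoid(h @ W2 + b2)                # "somatic" output
        d_out = (out - y) * out * (1 - out)       # gradient through loss and output nonlinearity
        d_h = np.outer(d_out, W2) * h * (1 - h)   # gradient backpropagated to the subunits
        W2 -= 0.5 * h.T @ d_out
        b2 -= 0.5 * d_out.sum()
        W1 -= 0.5 * X.T @ d_h
        b1 -= 0.5 * d_h.sum(axis=0)

    print(np.round(out, 2))                       # approaches [0, 1, 1, 0]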