The Dynamics of Bimodular Continuous Attractor Neural Networks with Static and Moving Stimuli
The brain achieves multisensory integration by combining the information
received from different sensory inputs to yield inferences with higher speed or
more accuracy. We consider a bimodular neural network in which each module
processes one modality of sensory input and interacts with the other. The dynamics of
excitatory and inhibitory couplings between the two modules are studied with
static and moving stimuli. The modules exhibit non-trivial interactive
behaviors depending on the input strengths, their disparity and speed (for
moving inputs), and the inter-modular couplings. They give rise to a family of
models applicable to causal inference problems in neuroscience. They also
provide a model for the experiment of motion-bounce illusion, yielding
consistent results and predicting their robustness. Comment: 15 pages, 12 figures, journal paper
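As an illustration of the kind of dynamics the abstract describes, a single module can be sketched as a one-dimensional ring continuous attractor network with divisive normalization driven by a static stimulus. This is a minimal sketch; all parameter values (network size, kernel width, normalization constant, stimulus strength) are illustrative assumptions, not values from the paper.

```python
import math

# Minimal 1-D ring continuous attractor network (CANN) with divisive
# normalization. All parameters here are illustrative assumptions.
N = 64
positions = [2 * math.pi * i / N - math.pi for i in range(N)]  # preferred positions

def kernel(d, a=0.5):
    # Gaussian coupling on the ring (wrap the distance into [-pi, pi))
    d = (d + math.pi) % (2 * math.pi) - math.pi
    return math.exp(-d * d / (2 * a * a))

J = [[kernel(positions[i] - positions[j]) for j in range(N)] for i in range(N)]

def step(u, stim_pos, A=1.0, dt=0.1, tau=1.0, k=0.05):
    # Firing rates with divisive (global inhibitory) normalization
    usq = [max(x, 0.0) ** 2 for x in u]
    r = [x / (1.0 + k * sum(usq)) for x in usq]
    new_u = []
    for i in range(N):
        rec = sum(J[i][j] * r[j] for j in range(N)) * (2 * math.pi / N)
        ext = A * kernel(positions[i] - stim_pos)  # static external stimulus
        new_u.append(u[i] + dt / tau * (-u[i] + rec + ext))
    return new_u

u = [0.0] * N
for _ in range(200):
    u = step(u, stim_pos=0.0)

peak = max(range(N), key=lambda i: u[i])  # the activity bump settles on the stimulus
```

A bimodular version would run two such rings and add inter-modular excitatory or inhibitory coupling terms to `rec`; with two static stimuli of differing disparity, the bumps either merge or stay separate, which is the interactive behavior the abstract refers to.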
Cortico-spinal modularity in the parieto-frontal system: a new perspective on action control
Classical neurophysiology suggests that the motor cortex (MI) has a unique role in action control. In contrast, this review presents evidence for multiple parieto-frontal spinal command modules that can bypass MI. Five observations support this modular perspective: (i) the statistics of cortical connectivity demonstrate functionally-related clusters of cortical areas, defining functional modules in the premotor, cingulate, and parietal cortices; (ii) different corticospinal pathways originate from the above areas, each with a distinct range of conduction velocities; (iii) the activation time of each module varies depending on task, and different modules can be activated simultaneously; (iv) a modular architecture with direct motor output is faster and less metabolically expensive than an architecture that relies on MI, given the slow connections between MI and other cortical areas; (v) lesions of the areas composing parieto-frontal modules have different effects from lesions of MI. Here we provide examples of six cortico-spinal modules and the functions they subserve: module 1) arm reaching, tool use and object construction; module 2) spatial navigation and locomotion; module 3) grasping and observation of hand and mouth actions; module 4) action initiation, motor sequences, time encoding; module 5) conditional motor association and learning, action plan switching and action inhibition; module 6) planning defensive actions. These modules can serve as a library of tools to be recombined when faced with novel tasks, and MI might serve as a recombinatory hub. In conclusion, the availability of locally-stored information and multiple outflow paths supports the physiological plausibility of the proposed modular perspective.
Can biological quantum networks solve NP-hard problems?
There is a widespread view that the human brain is so complex that it cannot
be efficiently simulated by universal Turing machines. During the last decades
the question has therefore been raised whether we need to consider quantum
effects to explain the imagined cognitive power of a conscious mind.
This paper presents a personal view of several fields of philosophy and
computational neurobiology in an attempt to suggest a realistic picture of how
the brain might work as a basis for perception, consciousness and cognition.
The purpose is to be able to identify and evaluate instances where quantum
effects might play a significant role in cognitive processes.
Not surprisingly, the conclusion is that quantum-enhanced cognition and
intelligence are very unlikely to be found in biological brains. Quantum
effects may certainly influence the functionality of various components and
signalling pathways at the molecular level in the brain network, like ion
ports, synapses, sensors, and enzymes. This might evidently influence the
functionality of some nodes and perhaps even the overall intelligence of the
brain network, but hardly give it any dramatically enhanced functionality. So,
the conclusion is that biological quantum networks can only approximately solve
small instances of NP-hard problems.
On the other hand, artificial intelligence and machine learning implemented
in complex dynamical systems based on genuine quantum networks can certainly be
expected to show enhanced performance and quantum advantage compared with
classical networks. Nevertheless, even quantum networks can only be expected to
efficiently solve NP-hard problems approximately. In the end it is a question
of precision - Nature is approximate. Comment: 38 pages
Cortical Hubs Form a Module for Multisensory Integration on Top of the Hierarchy of Cortical Networks
Sensory stimuli entering the nervous system follow particular paths of processing, typically separated (segregated) from the paths of other modal information. However, sensory perception, awareness and cognition emerge from the combination of information (integration). The corticocortical networks of cats and macaque monkeys display three prominent characteristics: (i) modular organisation (facilitating the segregation), (ii) abundant alternative processing paths and (iii) the presence of highly connected hubs. Here, we study in detail the organisation and potential function of the cortical hubs by graph analysis and information theoretical methods. We find that the cortical hubs form a spatially delocalised, but topologically central module with the capacity to integrate multisensory information in a collaborative manner. With this, we resolve the underlying anatomical substrate that supports the simultaneous capacity of the cortex to segregate and to integrate multisensory information
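The hub analysis the abstract describes can be illustrated on a toy graph: take the highest-degree nodes as hubs and compare the edge density among them with the density of the whole network. The graph below is a made-up example, not the cat or macaque connectivity data, and degree is only one of several possible hub criteria.

```python
# Toy undirected "cortical" network: nodes 0-2 are intended hubs,
# nodes 3-8 are peripheral areas. This graph is illustrative only.
edges = [(0, 1), (0, 2), (1, 2)]                           # hub-hub couplings
edges += [(h, p) for h in (0, 1, 2) for p in range(3, 9)]  # hub-periphery
edges += [(3, 4), (5, 6), (7, 8)]                          # sparse periphery

nodes = range(9)
degree = {n: sum(n in e for e in edges) for n in nodes}

def density(subset):
    # Fraction of possible edges realised among `subset`
    subset = set(subset)
    inside = sum(e[0] in subset and e[1] in subset for e in edges)
    n = len(subset)
    return inside / (n * (n - 1) / 2)

hubs = sorted(nodes, key=degree.get, reverse=True)[:3]
# A topologically central hub module: denser than the network as a whole
hub_density, overall = density(hubs), density(nodes)
```

When `hub_density` substantially exceeds `overall`, the top-degree nodes form a densely interconnected module on top of the network, which is the topological signature the abstract reports for the cortical hubs.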
Precis of neuroconstructivism: how the brain constructs cognition
Neuroconstructivism: How the Brain Constructs Cognition proposes a unifying framework for the study of cognitive development that brings together (1) constructivism (which views development as the progressive elaboration of increasingly complex structures), (2) cognitive neuroscience (which aims to understand the neural mechanisms underlying behavior), and (3) computational modeling (which proposes formal and explicit specifications of information processing). The guiding principle of our approach is context dependence, within and (in contrast to Marr [1982]) between levels of organization. We propose that three mechanisms guide the emergence of representations: competition, cooperation, and chronotopy, which themselves allow for two central processes: proactivity and progressive specialization. We suggest that the main outcome of development is partial representations, distributed across distinct functional circuits. This framework is derived by examining development at the level of single neurons, brain systems, and whole organisms. We use the terms encellment, embrainment, and embodiment to describe the higher-level contextual influences that act at each of these levels of organization. To illustrate these mechanisms in operation we provide case studies in early visual perception, infant habituation, phonological development, and object representations in infancy. Three further case studies are concerned with interactions between levels of explanation: social development, atypical development and, within that, developmental dyslexia. We conclude that cognitive development arises from a dynamic, contextual change in embodied neural structures leading to partial representations across multiple brain regions and timescales, in response to proactively specified physical and social environments.
Advancing Perception in Artificial Intelligence through Principles of Cognitive Science
Although artificial intelligence (AI) has achieved many feats at a rapid
pace, there still exist open problems and fundamental shortcomings related to
performance and resource efficiency. Since AI researchers benchmark a
significant proportion of performance standards against human intelligence,
cognitive sciences-inspired AI is a promising domain of research. Studying
cognitive science can provide a fresh perspective to building fundamental
blocks in AI research, which can lead to improved performance and efficiency.
In this review paper, we focus on the cognitive function of perception, which
is the process of taking signals from one's surroundings as input and
processing them to understand the environment. In particular, we study and
compare its various processes through the lens of both cognitive sciences and
AI. Through this study, we review all current major theories from various
sub-disciplines of cognitive science (specifically neuroscience, psychology and
linguistics), and draw parallels with theories and techniques from current
practices in AI. We, hence, present a detailed collection of methods in AI for
researchers to build AI systems inspired by cognitive science. Further, through
the process of reviewing the state of cognitive-inspired AI, we point out many
gaps in the current state of AI (with respect to the performance of the human
brain), and hence present potential directions for researchers to develop
better perception systems in AI. Comment: Summary: a detailed review of the
current state of perception models through the lens of cognitive AI
Multisensory wearable interface for immersion and telepresence in robotics
The idea of being present in a remote location has inspired researchers to develop robotic devices that allow humans to experience the feeling of telepresence. These devices need multiple channels of sensory feedback to provide a more realistic telepresence experience. In this work, we develop a wearable interface for immersion and telepresence that provides humans with the capability both to receive multisensory feedback from vision, touch and audio and to remotely control a robot platform. Multimodal feedback from a remote environment is based on the integration of sensor technologies coupled to the sensory system of the robot platform. Remote control of the robot is achieved through a modularised architecture, which allows the user to visually explore the remote environment. We validated our work with multiple experiments in which participants, located at different venues, were able to successfully control the robot platform while visually exploring, touching and listening to a remote environment. In our experiments we used two different robotic platforms: the iCub humanoid robot and the Pioneer LX mobile robot. These experiments show that our wearable interface is comfortable, easy to use and adaptable to different robotic platforms. Furthermore, we observed that our approach allows humans to experience a vivid feeling of being present in a remote environment.