Modeling Brain Circuitry over a Wide Range of Scales
If we are ever to unravel the mysteries of brain function at its most
fundamental level, we will need a precise understanding of how its component
neurons connect to each other. Electron Microscopes (EM) can now provide the
nanometer resolution that is needed to image synapses, and therefore
connections, while Light Microscopes (LM) see at the micrometer resolution
required to model the 3D structure of the dendritic network. Since both the
topology and the connection strength are integral parts of the brain's wiring
diagram, being able to combine these two modalities is critically important.
In fact, these microscopes now routinely produce high-resolution imagery in
such large quantities that the bottleneck becomes automated processing and
interpretation, which is needed for such data to be exploited to its full
potential. In this paper, we briefly review the Computer Vision techniques we
have developed at EPFL to address this need. They include delineating dendritic
arbors from LM imagery, segmenting organelles from EM, and combining the two
into a consistent representation.
Brain Dynamics across levels of Organization
After presenting evidence that the electrical activity recorded from the brain surface can reflect metastable state transitions of neuronal configurations at the mesoscopic level, I will suggest that their patterns may correspond to the distinctive spatio-temporal activity in the Dynamic Core (DC) and the Global Neuronal Workspace (GNW), respectively, in the models of the Edelman group on the one hand, and of Dehaene-Changeux, on the other. In both cases, the recursively reentrant activity flow in intra-cortical and cortical-subcortical neuron loops plays an essential and distinct role. Reasons will be given for viewing the temporal characteristics of this activity flow as a signature of Self-Organized Criticality (SOC), notably in reference to the dynamics of neuronal avalanches. This point of view enables the use of statistical physics approaches for exploring phase transitions, scaling and universality properties of DC and GNW, with relevance to the macroscopic electrical activity in EEG and EMG.
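The SOC signature invoked here is usually diagnosed by power-law statistics of neuronal avalanche sizes. As an illustration of how such an exponent is estimated in practice, the sketch below samples synthetic avalanche sizes from a power law and recovers the exponent with the standard maximum-likelihood (Clauset-Shalizi-Newman) estimator; the exponent value, cutoff `s_min`, and sample count are illustrative assumptions, not values from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample synthetic "avalanche sizes" from a power law p(s) ~ s^-alpha via
# inverse-transform sampling (alpha_true and s_min are illustrative choices).
alpha_true, s_min, n = 1.5, 1.0, 50_000
u = rng.random(n)
sizes = s_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))

# Maximum-likelihood estimate of the exponent for a continuous power law
# with known lower cutoff s_min.
alpha_hat = 1.0 + n / np.log(sizes / s_min).sum()
print(f"estimated exponent: {alpha_hat:.3f}")  # recovers a value near alpha_true
```

In real data the cutoff `s_min` is unknown and must itself be fitted, and a goodness-of-fit test is needed before claiming criticality; this sketch only shows the exponent-estimation step.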
Analog VLSI-Based Modeling of the Primate Oculomotor System
One way to understand a neurobiological system is by building a simulacrum that replicates its behavior in real time using similar constraints. Analog very large-scale integrated (VLSI) electronic circuit technology provides such an enabling technology. Here we describe a neuromorphic system that is part of a long-term effort to understand the primate oculomotor system, which requires both fast sensory processing and fast motor control to interact with the world. A one-dimensional hardware model of the primate eye has been built that simulates the physical dynamics of the biological system. It is driven by two different analog VLSI chips, one mimicking cortical visual processing for target selection and tracking and another modeling brain stem circuits that drive the eye muscles. Our oculomotor plant demonstrates both smooth pursuit movements, driven by a retinal velocity error signal, and saccadic eye movements, controlled by retinal position error, and can reproduce several behavioral, stimulation, lesion, and adaptation experiments performed on primates.
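The abstract's split between pursuit (driven by retinal velocity error) and saccades (driven by retinal position error) can be captured in a toy one-dimensional control loop. The gains, threshold, and time step below are purely illustrative assumptions, not parameters of the VLSI chips described.

```python
# Toy 1-D oculomotor loop: smooth pursuit continuously corrects retinal
# velocity error, while a saccade fires when retinal position error exceeds
# a threshold. All constants are illustrative, not chip parameters.
dt, k_pursuit, saccade_thresh = 0.01, 0.9, 2.0

def track(target_pos, target_vel, steps=500):
    """Return the final retinal position error after tracking a target
    moving at constant velocity."""
    eye_pos, eye_vel = 0.0, 0.0
    for _ in range(steps):
        pos_err = target_pos - eye_pos        # retinal position error
        vel_err = target_vel - eye_vel        # retinal velocity error
        if abs(pos_err) > saccade_thresh:
            eye_pos += pos_err                # ballistic saccade (idealized)
        eye_vel += k_pursuit * vel_err        # smooth-pursuit velocity update
        eye_pos += eye_vel * dt
        target_pos += target_vel * dt
    return target_pos - eye_pos

print(abs(track(10.0, 3.0)) < saccade_thresh)
```

The loop first saccades to cancel the large initial position error, then pursuit brings the eye velocity up to the target velocity so the residual error stays below the saccade threshold.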
Adaptive Scales of Spatial Integration and Response Latencies in a Critically-Balanced Model of the Primary Visual Cortex
The brain processes visual inputs having structure over a large range of
spatial scales. The precise mechanisms or algorithms used by the brain to
achieve this feat are largely unknown and an open problem in visual
neuroscience. In particular, the spatial extent in visual space over which
primary visual cortex (V1) performs evidence integration has been shown to
change as a function of contrast and other visual parameters, thus adapting
scale in visual space in an input-dependent manner. We demonstrate that a
simple dynamical mechanism---dynamical criticality---can simultaneously account
for the well-documented input-dependence characteristics of three properties of
V1: scales of integration in visuotopic space, extents of lateral integration
on the cortical surface, and response latencies.
Spiking Neural Networks for Inference and Learning: A Memristor-based Design Perspective
On metrics of density and power efficiency, neuromorphic technologies have
the potential to surpass mainstream computing technologies in tasks where
real-time functionality, adaptability, and autonomy are essential. While
algorithmic advances in neuromorphic computing are proceeding successfully, the
potential of memristors to improve neuromorphic computing has not yet borne
fruit, primarily because they are often used as a drop-in replacement for
conventional memory. However, interdisciplinary approaches anchored in machine
learning theory suggest that multifactor plasticity rules matching neural and
synaptic dynamics to the device capabilities can take better advantage of
memristor dynamics and their stochasticity. Furthermore, such plasticity rules
generally show much higher performance than that of classical Spike Time
Dependent Plasticity (STDP) rules. This chapter reviews recent developments
in learning with spiking neural network models and their possible
implementation with memristor-based hardware.
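For contrast with the multifactor rules this abstract advocates, the classical pair-based STDP window it mentions can be sketched as a single function of the pre/post spike-time difference. The amplitudes and time constants below are textbook-style illustrations, not values from the chapter.

```python
import math

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight update (classic exponential window).

    delta_t = t_post - t_pre in milliseconds. Pre-before-post (delta_t > 0)
    potentiates the synapse; post-before-pre (delta_t < 0) depresses it.
    Amplitudes and time constants are illustrative defaults.
    """
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau_plus)   # potentiation branch
    if delta_t < 0:
        return -a_minus * math.exp(delta_t / tau_minus)  # depression branch
    return 0.0

print(stdp_dw(10.0) > 0, stdp_dw(-10.0) < 0)
```

Multifactor rules of the kind the chapter reviews would add further terms (e.g. a third modulatory factor, or device-dependent update granularity) on top of this two-factor timing dependence.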