A neural circuit for navigation inspired by C. elegans Chemotaxis
We develop an artificial neural circuit for contour tracking and navigation
inspired by the chemotaxis of the nematode Caenorhabditis elegans. In order to
harness the computational advantages spiking neural networks promise over their
non-spiking counterparts, we develop a network of seven spiking neurons with
non-plastic synapses that we show is extremely robust in tracking a range
of concentrations. Our worm uses information regarding local temporal gradients
in sodium chloride concentration to decide the instantaneous path for foraging,
exploration, and tracking. A key pair in the C. elegans chemotaxis network,
the ASEL and ASER neurons, captures the gradient of concentration sensed by
the worm in their graded membrane potentials. The
primary sensory neurons for our network are a pair of artificial spiking
neurons that function as gradient detectors whose design is adapted from a
computational model of the ASE neuron pair in C. elegans. Simulations show that
our worm is able to detect the set-point with approximately four times higher
probability than the optimal memoryless Lévy foraging model. We also show that
our spiking neural network is much more efficient and noise-resilient while
navigating and tracking a contour, as compared to an equivalent non-spiking
network. We demonstrate that our model is extremely robust to noise and, with
slight modifications, can be used for other practical applications such as
obstacle avoidance. Our network model could also be extended for use in
three-dimensional contour tracking or obstacle avoidance.
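The steering rule the abstract describes, following the temporal gradient of concentration rather than a spatial one, can be illustrated with a minimal non-spiking sketch. This is a hypothetical run-and-tumble caricature, not the paper's seven-neuron spiking network: the concentration field, step size, and reorientation rule are all assumptions for illustration.

```python
import math
import random

def concentration(x, y):
    # Hypothetical radially symmetric NaCl concentration peaking at the origin.
    return math.exp(-(x * x + y * y))

def step(x, y, heading, prev_c, rng, dt=0.1, speed=1.0):
    # Klinotaxis-style rule: keep the current heading while the sensed
    # concentration rises (positive temporal gradient), and make a
    # pirouette-like random reorientation when it falls.
    c = concentration(x, y)
    if c < prev_c:  # the gradient detector reports "downhill"
        heading += rng.uniform(-math.pi, math.pi)
    x += speed * dt * math.cos(heading)
    y += speed * dt * math.sin(heading)
    return x, y, heading, c

# Usage: let the simulated worm forage from a low-concentration start point.
x, y, heading, c = 2.0, 0.0, 0.0, 0.0
rng = random.Random(0)
for _ in range(500):
    x, y, heading, c = step(x, y, heading, c, rng)
```

The key point carried over from the abstract is that only local temporal information (the sign of the change in sensed concentration) drives the instantaneous path decision.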
Directional Tuning Curves, Elementary Movement Detectors, and the Estimation of the Direction of Visual Movement
Both the insect brain and the vertebrate retina detect visual movement with neurons having broad, cosine-shaped directional tuning curves oriented in either of two perpendicular directions. This article shows that this arrangement can lead to isotropic estimates of the direction of movement: for any direction the estimate is unbiased (no systematic errors) and equally accurate (constant random errors). A simple and robust computational scheme is presented that accounts for the directional tuning curves as measured in movement-sensitive neurons in the blowfly. The scheme includes movement detectors of various spans, and predicts several phenomena of movement perception in man.
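The unbiased-estimate claim can be sketched in a few lines: with two noise-free cosine-tuned detectors at perpendicular preferred directions, the movement direction is recovered exactly by a vector-sum readout. This toy decoder is an illustration of the geometric point, not the article's full scheme with detectors of various spans.

```python
import math

def detector_responses(theta):
    # Two movement detectors with cosine-shaped tuning curves whose
    # preferred directions are perpendicular (0 and 90 degrees).
    return math.cos(theta), math.sin(theta)

def estimate_direction(r_x, r_y):
    # Vector-sum readout: the pair of perpendicular cosine responses
    # determines the direction with no systematic error.
    return math.atan2(r_y, r_x)

theta = 0.7
rx, ry = detector_responses(theta)
print(estimate_direction(rx, ry))  # recovers theta exactly in (-pi, pi]
```

Because cos and sin together parameterize the unit circle, the readout is isotropic: no direction is systematically favored, matching the "unbiased for any direction" property stated above.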
Convergent input from brainstem coincidence detectors onto delay-sensitive neurons in the inferior colliculus.
Responses of low-frequency neurons in the inferior colliculus (IC) of anesthetized guinea pigs were studied with binaural beats to assess their mean best interaural phase (BP) over a range of stimulating frequencies. Phase plots (stimulating frequency vs. BP) were produced, from which measures of characteristic delay (CD) and characteristic phase (CP) for each neuron were obtained. The CD provides an estimate of the difference in travel time from each ear to coincidence-detector neurons in the brainstem. The CP indicates the mechanism underpinning the coincidence-detector responses. A linear phase plot indicates a single, constant delay between the coincidence-detector inputs from the two ears. In more than half (54 of 90) of the neurons, the phase plot was not linear. We hypothesized that neurons with nonlinear phase plots received convergent input from brainstem coincidence detectors with different CDs.
Presentation of a second tone with a fixed, unfavorable delay suppressed the response of one input, linearizing the phase plot and revealing other inputs to be relatively simple coincidence detectors. For some neurons with highly complex phase plots, the suppressor tone altered BP values, but did not resolve the nature of the inputs. For neurons with linear phase plots, the suppressor tone either completely abolished their responses or reduced their discharge rate with no change in BP.
By selectively suppressing inputs with a second tone, we are able to reveal the nature of underlying binaural inputs to IC neurons, confirming the hypothesis that the complex phase plots of many IC neurons are a result of convergence from simple brainstem coincidence detectors.
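For a linear phase plot, the relationship between best interaural phase and stimulating frequency is BP(f) = CD·f + CP, so CD and CP fall out of a least-squares line fit. The sketch below assumes BP is measured in cycles and frequency in Hz; the fitting method is a standard choice for illustration, not necessarily the one used in the study.

```python
def characteristic_delay_phase(freqs_hz, best_phase_cycles):
    # Fit the linear phase plot BP = CD * f + CP by least squares:
    # the slope is the characteristic delay (seconds) and the
    # intercept is the characteristic phase (cycles).
    n = len(freqs_hz)
    mf = sum(freqs_hz) / n
    mp = sum(best_phase_cycles) / n
    sxx = sum((f - mf) ** 2 for f in freqs_hz)
    sxy = sum((f - mf) * (p - mp)
              for f, p in zip(freqs_hz, best_phase_cycles))
    cd = sxy / sxx
    cp = mp - cd * mf
    return cd, cp

# A hypothetical neuron acting as a pure 200-microsecond delay line
# (CP = 0) yields a phase plot through the origin:
freqs = [100.0, 200.0, 400.0, 800.0]
phases = [2e-4 * f for f in freqs]
cd, cp = characteristic_delay_phase(freqs, phases)
print(cd, cp)
```

A neuron receiving convergent inputs with different CDs would deviate from this single line, which is exactly the nonlinearity the suppressor-tone experiments above are designed to unmask.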
Second order isomorphism: A reinterpretation and its implications in brain and cognitive sciences
Shepard and Chipman's second order isomorphism describes how the brain may
represent the relations in the world. However, a common interpretation of the
theory can cause difficulties. The problem originates from the static nature
of representations. In an alternative interpretation, I propose that we assign
an active role to the internal representations and relations. It turns out
that a collection of such active units can perform analogical tasks. The new
interpretation is supported by the existence of neural circuits that may
implement such a function. Within this framework, perception, cognition, and
motor function can be understood under a unifying principle of analogy.
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have a large potential for robotics
and computer vision in scenarios that are challenging for traditional cameras,
such as those requiring low latency, high speed, or high dynamic range.
However, novel methods are required to process the unconventional output of
these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the actual sensors that are
available and the tasks that they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
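The output format described above, a stream of events encoding the time, location, and sign of per-pixel brightness changes, can be emulated from ordinary log-intensity frames. The contrast threshold value and the frame-based input are assumptions of this sketch; a real event camera fires asynchronously in continuous time rather than between discrete frames.

```python
CONTRAST_THRESHOLD = 0.2  # assumed per-pixel log-intensity threshold

def generate_events(log_frames, timestamps):
    # Emulate an event camera: a pixel emits an event each time its log
    # brightness moves a full threshold away from its last reference level.
    # Each event is a tuple (t, x, y, polarity) with polarity +1 or -1.
    events = []
    ref = [row[:] for row in log_frames[0]]  # per-pixel reference levels
    for t, frame in zip(timestamps[1:], log_frames[1:]):
        for y, row in enumerate(frame):
            for x, val in enumerate(row):
                delta = val - ref[y][x]
                while abs(delta) >= CONTRAST_THRESHOLD:
                    pol = 1 if delta > 0 else -1
                    events.append((t, x, y, pol))
                    ref[y][x] += pol * CONTRAST_THRESHOLD
                    delta = val - ref[y][x]
    return events

# A single pixel brightening by 0.45 log units crosses the 0.2 threshold
# twice, producing two positive-polarity events:
evs = generate_events([[[0.0]], [[0.45]]], [0.0, 1.0])
print(evs)  # [(1.0, 0, 0, 1), (1.0, 0, 0, 1)]
```

Static pixels generate no events at all, which is the source of the low power consumption and sparse output highlighted in the survey.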