
    Cortical topography of intracortical inhibition influences the speed of decision making

    The neocortex contains orderly topographic maps; however, their functional role remains controversial. Theoretical studies have suggested a role in minimizing computational costs, whereas empirical studies have focused on spatial localization. Using a tactile multiple-choice reaction time (RT) task before and after the induction of perceptual learning through repetitive sensory stimulation, we extend the framework of cortical topographies by demonstrating that the topographic arrangement of intracortical inhibition contributes to the speed of human perceptual decision making. RTs differ among fingers, displaying an inverted U-shaped function. Simulations using neural fields show that the inverted U-shaped RT distribution emerges as a consequence of lateral inhibition. Weakening inhibition through learning shortens RTs, which is modeled as a topographic reorganization of inhibition. Whereas changes in decision making are often attributed to higher cortical areas, our data show that the spatial layout of interaction processes within representational maps contributes to selection and decision-making processes.
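    The lateral-inhibition account lends itself to a small simulation. The sketch below is not the authors' neural-field code: five topographically arranged "finger" units race to a decision threshold while inhibiting their immediate neighbours, so units in the middle of the map receive inhibition from both sides and reach threshold later, and weakening the inhibition shortens the times to threshold. All parameter values are illustrative assumptions.

```python
# Minimal sketch (illustrative parameters, not the authors' model): five
# "finger" units with nearest-neighbour lateral inhibition race to a
# decision threshold; middle units collect inhibition from both sides.
import numpy as np

n_fingers = 5
dt, tau = 1.0, 50.0         # time step and time constant (ms)
threshold = 1.0             # decision threshold on the stimulated unit
baseline, drive = 0.3, 1.5  # background input to all units, extra drive to the target

def simulate_rt(stimulated, w_inhib=0.3, t_max=1000.0):
    """Time (ms) for the stimulated finger unit to reach the decision threshold."""
    r = np.zeros(n_fingers)
    t = 0.0
    while t < t_max:
        inhib = np.zeros(n_fingers)
        inhib[1:] += w_inhib * r[:-1]   # inhibition from the left neighbour
        inhib[:-1] += w_inhib * r[1:]   # inhibition from the right neighbour
        inp = np.full(n_fingers, baseline)
        inp[stimulated] += drive
        r = np.clip(r + dt / tau * (-r + inp - inhib), 0.0, None)
        t += dt
        if r[stimulated] >= threshold:
            return t
    return float("nan")

for w, label in [(0.3, "baseline inhibition"), (0.15, "weakened inhibition (post-learning)")]:
    rts = [simulate_rt(f, w_inhib=w) for f in range(n_fingers)]
    print(label, [round(t, 1) for t in rts])
```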

    Data-driven modeling of the olfactory neural codes and their dynamics in the insect antennal lobe

    Recordings from neurons in the insect's primary olfactory processing center, the antennal lobe (AL), reveal that the AL processes the input from chemical receptors into distinct neural activity patterns, called olfactory neural codes. These exciting results show the importance of neural codes and their relation to perception. The next challenge is to model the dynamics of neural codes. In our study, we perform multichannel recordings from projection neurons in the AL driven by different odorants. We then derive a neural network from the electrophysiological data. The network consists of lateral-inhibitory neurons and excitatory neurons, and is capable of producing unique olfactory neural codes for the tested odorants. Specifically, we (i) design a projection, an odor space, for the neural recordings from the AL, which discriminates between the trajectories of distinct odorants; (ii) characterize scent recognition, i.e., decision making based on olfactory signals; and (iii) infer the wiring of the neural circuit, the connectome of the AL. We show that the constructed model is consistent with biological observations, such as contrast enhancement and robustness to noise. The study answers a key biological question by identifying how lateral-inhibitory neurons can be wired to excitatory neurons to permit robust activity patterns.
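    As a rough illustration of the "odor space" idea, the sketch below projects multichannel projection-neuron trajectories into a low-dimensional space and classifies a trial by its nearest mean trajectory. It is not the authors' pipeline; the array shapes, the use of PCA, and the synthetic stand-in data are assumptions.

```python
# Illustrative sketch, not the authors' analysis: project population
# firing-rate trajectories (trials x time bins x channels) into a
# low-dimensional "odor space" and classify by nearest mean trajectory.
import numpy as np
from sklearn.decomposition import PCA

def odor_space(trajectories, n_components=3):
    """Project PN population trajectories into a low-dimensional odor space."""
    n_trials, n_time, n_ch = trajectories.shape
    pca = PCA(n_components=n_components)
    low = pca.fit_transform(trajectories.reshape(-1, n_ch))
    return low.reshape(n_trials, n_time, n_components), pca

def classify_by_trajectory(test_traj, templates):
    """Assign a trial to the odorant whose mean trajectory it is closest to."""
    dists = {odor: np.linalg.norm(test_traj - tmpl) for odor, tmpl in templates.items()}
    return min(dists, key=dists.get)

# usage with synthetic stand-in data: 20 trials, 50 time bins, 30 projection neurons
rng = np.random.default_rng(0)
recordings = rng.poisson(5.0, size=(20, 50, 30)).astype(float)
low_d, _ = odor_space(recordings)
templates = {"odor_A": low_d[:10].mean(axis=0), "odor_B": low_d[10:].mean(axis=0)}
print(classify_by_trajectory(low_d[0], templates))
```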

    Functional roles of synaptic inhibition in auditory temporal processing


    Redundant neural vision systems: competing for collision recognition roles

    The ability to detect collisions is vital for future robots that interact with humans in complex visual environments. Lobula giant movement detectors (LGMD) and directionally selective neurons (DSNs) are two types of identified neurons found in the visual pathways of insects such as locusts. Recent modelling studies showed that the LGMD or grouped DSNs could each be tuned for collision recognition. In both biological and artificial vision systems, however, it is not clear which of the two should play the collision recognition role or how the two types of specialized visual neurons could function together. In this modelling study, we compared the competence of the LGMD and the DSNs, and also investigated the cooperation of the two neural vision systems for collision recognition via artificial evolution. We implemented three types of collision recognition neural subsystems in each individual agent: the LGMD, the DSNs, and a hybrid system that combines the LGMD and DSNs subsystems. A switch gene determines which of the three redundant neural subsystems plays the collision recognition role. We found that, in both robotic and driving environments, the LGMD was able to build up its ability for collision recognition quickly and robustly, thereby reducing the chance for other types of neural networks to play the same role. The results suggest that the LGMD neural network could be the ideal model to be realized in hardware for collision recognition.
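    The switch-gene mechanism can be illustrated with a toy genetic algorithm. In the sketch below, each agent's genome carries a gene selecting one of the three redundant subsystems plus a few subsystem parameters; the fitness function is a labelled stand-in, not the paper's robotic or driving evaluation.

```python
# Toy sketch of the switch-gene idea, not the authors' evolutionary setup:
# a genome holds a switch selecting the active collision-recognition
# subsystem plus its parameters; selection and mutation are standard.
import random

SUBSYSTEMS = ["LGMD", "DSNs", "hybrid"]

def make_agent():
    return {"switch": random.choice(SUBSYSTEMS),
            "params": [random.uniform(0, 1) for _ in range(4)]}

def fitness(agent):
    # Stand-in for collision-avoidance performance in robotic/driving trials;
    # the small LGMD head start is an assumption that mimics the reported result.
    head_start = {"LGMD": 0.2, "DSNs": 0.0, "hybrid": 0.1}[agent["switch"]]
    return head_start + 0.8 * sum(agent["params"]) / 4 + random.gauss(0, 0.05)

def evolve(pop_size=40, generations=30, mut_rate=0.1):
    pop = [make_agent() for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        pop = []
        for p in parents:                          # two mutated offspring per parent
            for _ in range(2):
                child = {"switch": p["switch"], "params": list(p["params"])}
                if random.random() < mut_rate:     # occasionally flip the switch gene
                    child["switch"] = random.choice(SUBSYSTEMS)
                i = random.randrange(len(child["params"]))
                child["params"][i] = min(1.0, max(0.0, child["params"][i] + random.gauss(0, 0.1)))
                pop.append(child)
    return pop

final_pop = evolve()
print("dominant subsystem:",
      max(SUBSYSTEMS, key=lambda s: sum(a["switch"] == s for a in final_pop)))
```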

    A modified model for the Lobula Giant Movement Detector and its FPGA implementation

    The Lobula Giant Movement Detector (LGMD) is a wide-field visual neuron located in the lobula layer of the locust nervous system. The LGMD increases its firing rate in response to both the velocity of an approaching object and its proximity. It can respond to looming stimuli very quickly and trigger avoidance reactions, and it has been successfully applied in visual collision avoidance systems for vehicles and robots. This paper introduces a modified neural model for the LGMD that provides additional depth-direction information for the movement. The proposed model retains the simplicity of the previous model, adding only a few new cells. It has been simplified and implemented on a Field Programmable Gate Array (FPGA), taking advantage of the inherent parallelism exhibited by the LGMD, and tested on real-time video streams. Experimental results demonstrate its effectiveness as a fast motion detector.
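    For readers unfamiliar with the LGMD model family, the sketch below shows a frame-based looming detector in its spirit: luminance change excites, delayed and spatially spread activity from the previous frame inhibits, and the summed residual is compared with a threshold. It is not the paper's modified model or its FPGA implementation, and the filter size, weights, and threshold are assumptions.

```python
# Illustrative frame-based LGMD-style looming detector (assumed parameters,
# not the paper's modified model): excitation = frame difference, inhibition
# = delayed, spatially spread excitation, output = mean of the rectified sum.
import numpy as np
from scipy.ndimage import uniform_filter

class LGMD:
    def __init__(self, w_inhib=0.7, threshold=0.02):
        self.prev_frame = None
        self.prev_excitation = None
        self.w_inhib = w_inhib        # strength of the delayed lateral inhibition
        self.threshold = threshold    # membrane-potential level that flags a collision

    def step(self, frame):
        """frame: 2-D grayscale array in [0, 1]; returns (potential, collision_flag)."""
        if self.prev_frame is None:
            self.prev_frame = frame
            self.prev_excitation = np.zeros_like(frame)
            return 0.0, False
        excitation = np.abs(frame - self.prev_frame)                # P layer: luminance change
        inhibition = uniform_filter(self.prev_excitation, size=3)   # I layer: delayed and spread
        s_layer = np.clip(excitation - self.w_inhib * inhibition, 0.0, None)
        self.prev_frame, self.prev_excitation = frame, excitation
        potential = s_layer.mean()                                  # summed S layer, normalised
        return potential, potential > self.threshold

# usage: a synthetic looming stimulus (an expanding dark square)
det, flagged_at = LGMD(), None
for width in range(2, 40, 2):
    img = np.ones((64, 64))
    half = width // 2
    img[32 - half:32 + half, 32 - half:32 + half] = 0.0
    _, hit = det.step(img)
    if hit and flagged_at is None:
        flagged_at = width
print("collision flagged when the square reached width:", flagged_at)
```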

    Development of a bio-inspired vision system for mobile micro-robots

    In this paper, we present a new bio-inspired vision system for mobile micro-robots. The processing method takes inspiration from the locust's visual system for detecting fast-approaching objects. Research suggests that locusts use a wide-field visual neuron, called the lobula giant movement detector, to respond to imminent collisions. We applied this vision mechanism to the motion control of a mobile robot. The selected image processing method is implemented on a newly developed extension module using a low-cost and fast ARM processor. The vision module is placed on top of a micro-robot to control its trajectory and to avoid obstacles. Results from several experiments demonstrate that the developed extension module and the bio-inspired vision system are feasible as a vision module for obstacle avoidance and motion control.
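    One way to turn such a detector's output into a motion command is sketched below: the visual field is split into left and right halves, and the robot steers away from the side with the stronger looming excitation. The split and the motor interface are hypothetical and not the module's actual firmware.

```python
# Hypothetical sketch of mapping left/right looming excitation to
# differential wheel speeds; not the firmware described in the paper.
def avoidance_command(loom_left, loom_right, base_speed=0.3, gain=2.0):
    """Map left/right looming excitation to differential wheel speeds."""
    turn = gain * (loom_left - loom_right)
    # looming on the right makes `turn` negative: the right wheel speeds up,
    # the left wheel slows down, and the robot veers left, away from the obstacle
    return base_speed + turn, base_speed - turn

# obstacle looming on the right -> right wheel speeds up, robot veers left
print(avoidance_command(0.02, 0.20))
```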

    Computational Screening of Tip and Stalk Cell Behavior Proposes a Role for Apelin Signaling in Sprout Progression

    Angiogenesis involves the formation of new blood vessels by sprouting or splitting of existing blood vessels. During sprouting, a highly motile type of endothelial cell, called the tip cell, migrates from the blood vessel, followed by stalk cells, an endothelial cell type that forms the body of the sprout. To gain more insight into how tip cells contribute to angiogenesis, we extended an existing computational model of vascular network formation, based on the cellular Potts model, with tip and stalk differentiation, without making a priori assumptions about the differences between tip cells and stalk cells. To predict potential differences, we looked for parameter values that make tip cells (a) move to the sprout tip and (b) change the morphology of the angiogenic networks. The screening predicted that if tip cells respond less effectively to an endothelial chemoattractant than stalk cells, they move to the tips of the sprouts, which impacts the morphology of the networks. A comparison of this model prediction with genes expressed differentially in tip and stalk cells revealed that the endothelial chemoattractant Apelin and its receptor APJ may match the model prediction. To test the prediction, we inhibited Apelin signaling in our model and in an in vitro model of angiogenic sprouting, and found that in both cases inhibition of Apelin or of its receptor APJ reduces sprouting. Based on the prediction of the computational model, we propose that the differential expression of Apelin and APJ yields a "self-generated" gradient mechanism that accelerates the extension of the sprout.
    Comment: 48 pages, 10 figures, 8 supplementary figures. Accepted for publication in PLoS ONE.
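    The differential chemoattractant sensitivity maps onto a small fragment of a cellular Potts simulation. The sketch below shows the chemotaxis term of a single copy attempt, with tip cells given a lower sensitivity than stalk cells, as the screening predicts; the lambda values, the other energy terms, and the Metropolis temperature are illustrative assumptions, not the authors' code.

```python
# Illustrative CPM fragment (assumed parameters, not the authors' model):
# the chemotaxis energy term of one copy attempt, with tip cells less
# sensitive to the chemoattractant than stalk cells.
import math
import random

LAMBDA_CHEMOTAXIS = {"tip": 200.0, "stalk": 500.0}  # tip cells respond less effectively
TEMPERATURE = 50.0                                  # Metropolis "motility" temperature

def delta_h_chemotaxis(cell_type, conc_source, conc_target):
    """Energy change for copying this cell's pixel up the chemoattractant gradient."""
    return -LAMBDA_CHEMOTAXIS[cell_type] * (conc_target - conc_source)

def accept_copy(delta_h_other_terms, cell_type, conc_source, conc_target):
    """Standard Metropolis acceptance for one copy attempt."""
    dh = delta_h_other_terms + delta_h_chemotaxis(cell_type, conc_source, conc_target)
    return dh < 0 or random.random() < math.exp(-dh / TEMPERATURE)

# a tip and a stalk cell attempting the same move up a shallow gradient
print("tip accepts:  ", accept_copy(10.0, "tip", conc_source=0.40, conc_target=0.42))
print("stalk accepts:", accept_copy(10.0, "stalk", conc_source=0.40, conc_target=0.42))
```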