
    Dynamic Neural Field with Local Inhibition

    A lateral-inhibition-type neural field model with restricted connections is presented here as an experimental extension of the Continuum Neural Field Theory (CNFT) in which the global inhibition is suppressed. A modified CNFT equation is introduced that allows a locally defined inhibition to spread spatially within the network; by virtue of this diffusion of inhibition, global competition extends far beyond the range of the local connections. The resulting model is able to attend to a moving stimulus in the presence of a very high level of noise, several distractors, or a mixture of both.
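    The mechanism described above can be sketched as a one-dimensional rate model in which inhibition is generated locally but spreads by diffusion. The kernel shape, gains, and time constants below are illustrative assumptions, not the paper's actual equation or parameters.

```python
import numpy as np

def simulate_field(stimulus, steps=200, dt=0.1, tau=1.0,
                   w_exc=1.2, w_inh=1.5, d_inh=0.5):
    """Sketch of a CNFT-like field with local excitation and a separate
    inhibitory field that diffuses across the network.  All parameters
    are illustrative, not taken from the paper."""
    n = stimulus.shape[0]
    u = np.zeros(n)   # excitatory membrane potential
    v = np.zeros(n)   # locally generated, diffusing inhibition
    x = np.arange(n)
    # narrow excitatory kernel: connections restricted to a local neighbourhood
    k_exc = np.exp(-((x - n // 2) ** 2) / (2 * 3.0 ** 2))
    k_exc /= k_exc.sum()
    for _ in range(steps):
        f = np.maximum(u, 0.0)                      # rectified firing rate
        exc = np.convolve(f, k_exc, mode="same")
        # inhibition spreads via a discrete Laplacian (periodic boundaries),
        # so competition extends far beyond the local excitatory range
        lap = np.roll(v, 1) + np.roll(v, -1) - 2.0 * v
        v += dt * (d_inh * lap + f - v)
        u += (dt / tau) * (-u + w_exc * exc - w_inh * v + stimulus)
    return u
```

    With two input bumps of unequal strength, the diffusing inhibition lets the stronger bump dominate even though the excitatory connections are purely local.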

    Near range path navigation using LGMD visual neural networks

    In this paper, we propose a method for near-range path navigation for a mobile robot using a pair of biologically inspired visual neural networks, the lobula giant movement detectors (LGMDs). In the proposed binocular-style visual system, each LGMD processes images covering part of the wide field of view and extracts relevant visual cues as its output. The outputs of the two LGMDs are compared and translated into executable motor commands that control the wheels of the robot in real time. A stronger signal from the LGMD on one side pushes the robot away from that side step by step; the robot can therefore navigate a visual environment naturally with the proposed vision system. Our experiments showed that this bio-inspired system worked well in different scenarios.
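    The steering rule in the abstract, where a stronger LGMD signal on one side pushes the robot away from that side, can be sketched as a differential-drive mapping. The function name, gains, and the linear comparison are assumptions for illustration, not the paper's controller.

```python
def steer(lgmd_left, lgmd_right, base_speed=0.3, gain=0.5):
    """Map the two LGMD excitation levels (0..1) to wheel speeds for a
    differential-drive robot.  A stronger looming signal on the left
    speeds up the left wheel, turning the robot to the right, i.e.
    away from the obstacle.  Gains are illustrative."""
    diff = gain * (lgmd_left - lgmd_right)
    left_wheel = base_speed + diff
    right_wheel = base_speed - diff
    return left_wheel, right_wheel
```

    For example, `steer(0.8, 0.2)` yields a faster left wheel, so the robot veers right, away from the side reporting the stronger collision cue.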

    Cortical topography of intracortical inhibition influences the speed of decision making

    The neocortex contains orderly topographic maps; however, their functional role remains controversial. Theoretical studies have suggested a role in minimizing computational costs, whereas empirical studies have focused on spatial localization. Using a tactile multiple-choice reaction time (RT) task before and after the induction of perceptual learning through repetitive sensory stimulation, we extend the framework of cortical topographies by demonstrating that the topographic arrangement of intracortical inhibition contributes to the speed of human perceptual decision-making processes. RTs differ among fingers, displaying an inverted U-shaped function. Simulations using neural fields reproduce the inverted U-shaped RT distribution as an emergent consequence of lateral inhibition. Weakening inhibition through learning shortens RTs, which is modeled through topographically reorganized inhibition. Whereas changes in decision making are often attributed to higher cortical areas, our data show that the spatial layout of interaction processes within representational maps contributes to selection and decision-making processes.
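    How lateral inhibition alone can make RTs differ across fingers can be illustrated with a minimal race model: five units (one per finger) integrate the same drive while inhibiting their nearest neighbours, so central units, which receive inhibition from two sides, reach threshold later than edge units. All parameters are toy values, not the paper's neural field model.

```python
def reaction_times(n=5, drive=1.0, w=0.2, thresh=0.5, dt=0.01, max_steps=10000):
    """Leaky integrators with nearest-neighbour inhibition racing to a
    decision threshold; returns each unit's threshold-crossing time."""
    u = [0.0] * n
    rts = [None] * n
    for step in range(1, max_steps + 1):
        # inhibition received from immediate neighbours (edges have one)
        inh = [(u[i - 1] if i > 0 else 0.0) + (u[i + 1] if i < n - 1 else 0.0)
               for i in range(n)]
        u = [u[i] + dt * (drive - u[i] - w * inh[i]) for i in range(n)]
        for i in range(n):
            if rts[i] is None and u[i] >= thresh:
                rts[i] = step * dt
    return rts
```

    The edge units cross threshold first because they receive inhibition from only one neighbour; the central unit is slowest, mirroring the finding that RT depends on a finger's position within the topographic layout of inhibition.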

    Redundant neural vision systems: competing for collision recognition roles

    The ability to detect collisions is vital for future robots that interact with humans in complex visual environments. Lobula giant movement detectors (LGMD) and directionally selective neurons (DSNs) are two types of identified neurons found in the visual pathways of insects such as locusts. Recent modelling studies showed that the LGMD or grouped DSNs could each be tuned for collision recognition. In both biological and artificial vision systems, however, it is not clear which one should play the collision-recognition role, or how the two types of specialized visual neurons could function together. In this modelling study, we compared the competence of the LGMD and the DSNs, and also investigated the cooperation of the two neural vision systems for collision recognition via artificial evolution. We implemented three types of collision-recognition neural subsystems in each individual agent: the LGMD, the DSNs, and a hybrid system that combines the two. A switch gene determines which of the three redundant neural subsystems plays the collision recognition role. We found that, in both robotic and driving environments, the LGMD was able to build up its ability for collision recognition quickly and robustly, thereby reducing the chance that the other types of neural network would play the same role. The results suggest that the LGMD neural network could be the ideal model to realize in hardware for collision recognition.
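    The switch-gene arrangement described above can be sketched as a genome in which one discrete gene selects the active subsystem and mutation can occasionally flip it. The genome layout, mutation rates, and parameter encoding are illustrative assumptions, not the paper's implementation.

```python
import random

SUBSYSTEMS = ("LGMD", "DSNs", "hybrid")

def make_agent(rng):
    # genome: a discrete switch gene plus per-subsystem parameters
    # (stubbed here as four real-valued genes)
    return {"switch": rng.randrange(len(SUBSYSTEMS)),
            "params": [rng.random() for _ in range(4)]}

def active_subsystem(agent):
    # only the subsystem selected by the switch gene drives behaviour
    return SUBSYSTEMS[agent["switch"]]

def mutate(agent, rng, p_switch=0.05, sigma=0.1):
    # the switch gene occasionally flips, letting evolution hand the
    # collision-recognition role to a different redundant subsystem
    child = {"switch": agent["switch"],
             "params": [w + rng.gauss(0.0, sigma) for w in agent["params"]]}
    if rng.random() < p_switch:
        child["switch"] = rng.randrange(len(SUBSYSTEMS))
    return child
```

    Under selection for collision-avoidance fitness, lineages whose switch gene settles on the most quickly tunable subsystem come to dominate, which is the dynamic the paper reports for the LGMD.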

    Dynamic Adaptive Computation: Tuning network states to task requirements

    Neural circuits are able to perform computations under very diverse conditions and requirements. The required computations impose clear constraints on their fine-tuning: a rapid and maximally informative response to stimuli generally requires decorrelated baseline neural activity. Such network dynamics is known as asynchronous-irregular. In contrast, spatio-temporal integration of information requires the maintenance and transfer of stimulus information over extended time periods. This can be realized at criticality, a phase transition where correlations, sensitivity and integration time diverge. Being able to flexibly switch between, or even combine, the above properties in a task-dependent manner would present a clear functional advantage. We propose that cortex operates in a "reverberating regime" because it is particularly favorable for ready adaptation of computational properties to context and task. This reverberating regime enables cortical networks to interpolate between the asynchronous-irregular and the critical state through small changes in effective synaptic strength or excitation-inhibition ratio. These changes directly adapt computational properties, including sensitivity, amplification, integration time and correlation length within the local network. We review recent converging evidence that cortex in vivo operates in the reverberating regime, and that various cortical areas have adapted their integration times to processing requirements. In addition, we propose that neuromodulation enables fine-tuning of the network, so that local circuits can either decorrelate or integrate, and quench or maintain their input depending on the task. We argue that this task-dependent tuning, which we call "dynamic adaptive computation", represents a central organizing principle of cortical networks, and we discuss first experimental evidence.
    Comment: 6 pages + references, 2 figures
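    The interpolation between the asynchronous-irregular and critical states can be illustrated with a branching-network toy model: a branching ratio m close to but below 1 (the "reverberating" range) yields long but finite integration times, while smaller m decorrelates activity quickly. The Poisson update and parameter values are assumptions for illustration.

```python
import numpy as np

def integration_time(m, dt=1.0):
    """Autocorrelation (integration) time of a subcritical branching
    process, tau = -dt / ln(m); it diverges as m approaches 1."""
    return -dt / np.log(m)

def simulate_branching(m, h=2.0, steps=5000, seed=0):
    """Toy branching network: each active unit triggers on average m
    units in the next time step, plus external input of rate h."""
    rng = np.random.default_rng(seed)
    a = np.empty(steps)
    a[0] = h / (1.0 - m)          # start near the stationary mean
    for t in range(1, steps):
        a[t] = rng.poisson(m * a[t - 1] + h)
    return a
```

    Raising m toward 1 (e.g. via effective synaptic strength) lengthens the integration time and correlations; lowering it pushes the network toward the fast, decorrelated asynchronous-irregular regime.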