
    Interactions of visual odometry and landmark guidance during food search in honeybees

    How do honeybees use visual odometry and goal-defining landmarks to guide food search? In one experiment, bees were trained to forage in an optic-flow-rich tunnel with a landmark positioned directly above the feeder. Subsequent food-search tests indicated that bees searched much more accurately when both odometric and landmark cues were available than when only odometry was available. When the two cue sources were set in conflict, by shifting the position of the landmark in the tunnel during testing, bees overwhelmingly used landmark cues rather than odometry. In another experiment, odometric cues were removed by training and testing in axially striped tunnels. The data show that bees did not weight landmarks as highly as when odometric cues were available, tending to search in the vicinity of the landmark for shorter periods. A third experiment, in which bees were trained with odometry but without a landmark, showed that a novel landmark placed anywhere in the tunnel during testing prevented bees from searching beyond the landmark location. Two further experiments, involving training bees to relatively longer distances with a goal-defining landmark, produced similar results to the initial experiment. One caveat was that, with the removal of the familiar landmark, bees tended to overshoot the training location, relative to the case where bees were trained without a landmark. Taken together, the results suggest that bees assign appropriate significance to odometric and landmark cues in a more flexible and dynamic way than previously envisaged.

    Fast, simple and accurate handwritten digit classification by training shallow neural network classifiers with the 'extreme learning machine' algorithm

    Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the 'Extreme Learning Machine' (ELM) approach, which also enables a very rapid training time (∼ 10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden unit operates only on a randomly sized and positioned patch of each image. This form of random 'receptive field' sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems.
    Mark D. McDonnell, Migel D. Tissera, Tony Vladusich, André van Schaik, Jonathan Tapson
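    The training procedure described above can be sketched in a few lines. The code below is a minimal illustration only, not the authors' implementation: the patch-size range, the ReLU hidden nonlinearity and the ridge term are choices made here for concreteness, and the paper's single-batch backpropagation fine-tuning stage is omitted.

```python
import numpy as np

def random_patch_weights(n_hidden, img_h=28, img_w=28, rng=None):
    """Input weights in which each hidden unit sees only a random image patch.

    Weights outside the patch are zero, so the input weight matrix is sparse,
    as in the 'random receptive field' sampling described in the abstract.
    The minimum patch size of 3 pixels is an arbitrary choice for this sketch."""
    rng = np.random.default_rng() if rng is None else rng
    W = np.zeros((img_h * img_w, n_hidden))
    for j in range(n_hidden):
        ph = rng.integers(3, img_h + 1)              # random patch height
        pw = rng.integers(3, img_w + 1)              # random patch width
        top = rng.integers(0, img_h - ph + 1)        # random patch position
        left = rng.integers(0, img_w - pw + 1)
        w = np.zeros((img_h, img_w))
        w[top:top + ph, left:left + pw] = rng.standard_normal((ph, pw))
        W[:, j] = w.ravel()
    return W

def train_elm(X, T, W, ridge=1e-3):
    """ELM training: the random input weights W stay fixed; only the output
    weights are learned, in closed form via regularised least squares."""
    H = np.maximum(X @ W, 0.0)                       # hidden-layer activations
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(H.shape[1]), H.T @ T)
    return beta

def predict_elm(X, W, beta):
    """Predicted class labels, assuming T held one-hot targets during training."""
    return np.argmax(np.maximum(X @ W, 0.0) @ beta, axis=1)
```

    With MNIST images flattened into rows of X and one-hot labels in T, the basic pipeline would be W = random_patch_weights(n_hidden), beta = train_elm(X, T, W), then predict_elm(X_test, W, beta); because the only learning step is a single linear solve, training remains fast even for large hidden layers.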

    An Animal Model of Emotional Blunting in Schizophrenia

    Schizophrenia is often associated with emotional blunting, a diminished ability to respond to emotionally salient stimuli, particularly stimuli representative of negative emotional states such as fear. This disturbance may stem from dysfunction of the amygdala, a brain region involved in fear processing. The present article describes a novel animal model of emotional blunting in schizophrenia. This model involves interfering with normal fear processing (classical conditioning) in rats by means of acute ketamine administration. We confirm, in a series of experiments comprising cFos staining, behavioral analysis and neurochemical determinations, that ketamine interferes with the behavioral expression of fear and with normal fear processing in the amygdala and related brain regions. We further show that the atypical antipsychotic drug clozapine, but not the typical antipsychotic haloperidol or an experimental glutamate receptor 2/3 agonist, inhibits ketamine's effects and preserves normal fear processing in the amygdala at a neurochemical level, despite the observation that fear-related behavior is still inhibited following ketamine administration. Our results suggest that the relative resistance of emotional blunting to drug treatment may be partially due to an inability of conventional therapies to target the multiple anatomical and functional brain systems involved in emotional processing. A conceptual model reconciling our findings in terms of neurochemistry and behavior is postulated and discussed.

    Color constancy and the functional significance of McCollough effects

    A central problem in visual perception concerns how humans perceive stable and uniform object colors despite variable lighting conditions (i.e. color constancy). One solution is to 'discount' variations in lighting across object surfaces by encoding color contrasts, and utilize this information to 'fill in' properties of the entire object surface. Implicit in this solution is the caveat that the color contrasts defining object boundaries must be distinguished from the spurious color fringes that occur naturally along luminance-defined edges in the retinal image (i.e. optical chromatic aberration). In the present paper, we propose that the neural machinery underlying color constancy is complemented by an 'error-correction' procedure which compensates for chromatic aberration, and suggest that error-correction may be linked functionally to the experimentally induced illusory colored aftereffects known as McCollough effects (MEs). To test these proposals, we develop a neural network model which incorporates many of the receptive-field (RF) profiles of neurons in primate color vision. The model is composed of two parallel processing streams which encode complementary sets of stimulus features: one stream encodes color contrasts to facilitate filling-in and color constancy; the other stream selectively encodes (spurious) color fringes at luminance boundaries, and learns to inhibit the filling-in of these colors within the first stream. Computer simulations of the model illustrate how complementary color-spatial interactions between error-correction and filling-in operations (a) facilitate color constancy, (b) reveal functional links between color constancy and the ME, and (c) reconcile previously reported anomalies in the local (edge) and global (spreading) properties of the ME. We discuss the broader implications of these findings by considering the complementary functional roles performed by RFs mediating color-spatial interactions in the primate visual system.
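    The two-stream architecture can be caricatured computationally as follows. This is a minimal sketch of the general idea only, not the published model: the opponent-channel and edge definitions are crude finite-difference stand-ins for the receptive-field profiles used in the paper, and the learned inhibition between streams is replaced here by a fixed gating constant k.

```python
import numpy as np

def two_stream_sketch(rgb, k=4.0):
    """Toy illustration of the two-stream idea on an RGB image (H x W x 3).

    Stream 1 encodes chromatic (red-green) contrast, the signal that would
    drive filling-in; stream 2 flags locations with strong luminance edges,
    where spurious colour fringes from chromatic aberration are expected,
    and gates stream 1 down at those locations."""
    lum = rgb.mean(axis=2)                    # crude luminance channel
    rg = rgb[..., 0] - rgb[..., 1]            # crude red-green opponent channel

    def grad_mag(x):                          # finite-difference edge strength
        gy, gx = np.gradient(x)
        return np.hypot(gx, gy)

    chrom_contrast = grad_mag(rg)             # stream 1: colour contrast signal
    lum_edges = grad_mag(lum)                 # stream 2: luminance-edge (fringe) signal
    gate = 1.0 / (1.0 + k * lum_edges)        # suppress colour contrast at luminance edges
    corrected = chrom_contrast * gate         # contrast signal passed on for filling-in
    return corrected, chrom_contrast, lum_edges
```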

    Do cortical neurons process luminance or contrast to encode surface properties?

    On the one hand, contrast signals provide information about surface properties, such as reflectance, and patchy illumination conditions, such as shadows. On the other hand, processing of luminance signals may provide information about global light levels, such as the difference between sunny and cloudy days. We devised models of contrast and luminance processing, using principles of logarithmic signal coding and half-wave rectification. We fit each model to individual response profiles obtained from 67 surface-responsive macaque V1 neurons in a center-surround paradigm similar to those used in human psychophysical studies. The most general forms of the luminance and contrast models explained, on average, 73 and 87% of the response variance over the sample population, respectively. We used a statistical technique, known as Akaike's information criterion, to quantify goodness of fit relative to number of model parameters, giving the relative probability of each model being correct. Luminance models, having fewer parameters than contrast models, performed substantially better in the vast majority of neurons, whereas contrast models performed similarly well in only a small minority of neurons. These results suggest that the processing of local and mean scene luminance predominates over contrast integration in surface-responsive neurons of the primary visual cortex. The sluggish dynamics of luminance-related cortical activity may provide a neural basis for the recent psychophysical demonstration that luminance information dominates brightness perception at low temporal frequencies.
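    The model comparison step lends itself to a short worked example. The sketch below assumes the standard least-squares form of AIC and the usual Akaike-weight conversion to relative probabilities; the residual sums of squares, observation counts and parameter counts are invented numbers for illustration, not values from the study.

```python
import numpy as np

def aic_least_squares(rss, n_obs, n_params):
    """AIC for a least-squares fit with Gaussian errors: n*ln(RSS/n) + 2k."""
    return n_obs * np.log(rss / n_obs) + 2 * n_params

def akaike_weights(aic_values):
    """Convert a set of AIC values into relative probabilities that each
    candidate model is the best of the set (Akaike weights)."""
    delta = np.asarray(aic_values, dtype=float)
    delta -= delta.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical single neuron: the contrast model fits slightly better (lower
# RSS) but pays for its extra parameters, so the simpler luminance model can
# still come out ahead once the parameter penalty is applied.
aic_lum = aic_least_squares(rss=12.0, n_obs=40, n_params=3)
aic_con = aic_least_squares(rss=10.5, n_obs=40, n_params=6)
print(akaike_weights([aic_lum, aic_con]))   # approximately [0.58, 0.42]
```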

    Honeybee odometry and scent guidance

    We report on a striking asymmetry in search behaviour observed in honeybees trained to forage alternately at one of two feeder sites in a narrow tunnel. Bees were trained by periodically switching the position of a sucrose reward between relatively short and long distances in the tunnel. Search behaviour was examined in the training tunnel itself and in a fresh tunnel devoid of scent cues deposited by bees during training. Bees tested in the fresh tunnel exhibited a bias towards the shorter site, while bees tested in the training tunnel searched closer to the longer site. In additional experiments, we manipulated the position of scent cues, relative to the training location, in the testing tunnel. Bees generally searched at the site to which they were trained rather than at the position of the scent. Our data argue strongly against the hypothesis that bees rely exclusively on deposited scent to accurately localise a food source in natural foraging environments. We instead conclude that odometry and scent guidance contribute to honeybee food search in a manner reflecting the significance and relative reliability of sensory information.

    No functional magnetic resonance imaging evidence for brightness and color filling-in in early human visual cortex

    The brightness and color of a surface depend on its contrast with nearby surfaces. For example, a gray surface can appear very light when surrounded by a black surface or dark when surrounded by a white surface. Some theories suggest that perceived surface brightness and color are represented explicitly by neural signals in cortical visual field maps; these neural signals are not initiated by the stimulus itself but rather by the contrast signals at the borders. Here, we use functional magnetic resonance imaging (fMRI) to search for such neural "filling-in" signals. Although we find the usual strong relationship between local contrast and fMRI response, when perceived brightness or color changes are induced by modulating a surrounding field, rather than the surface itself, we find there is no corresponding local modulation in primary visual cortex or other nearby retinotopic maps. Moreover, when we model the obtained fMRI responses, we find strong evidence for contributions of both local and long-range edge responses. We argue that such extended edge responses may be caused by neurons previously identified in neurophysiological studies as being brightness responsive, a characterization that may therefore need to be revised. We conclude that the visual field maps of human V1 and V2 do not contain filled-in, topographical representations of surface brightness and color.
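    The modelling result mentioned near the end can be illustrated with a toy regression. Everything below is assumed for illustration: the predictor definitions, the ordinary-least-squares fit and the invented response values sketch the kind of comparison described, not the authors' actual analysis.

```python
import numpy as np

# Invented per-condition fMRI response amplitudes, with two candidate
# predictors: local contrast within the surface region, and a long-range
# response driven by the edges of the modulated surround.
local_contrast = np.array([0.0, 0.0, 0.4, 0.4, 0.8, 0.8])
edge_response  = np.array([0.2, 0.6, 0.2, 0.6, 0.2, 0.6])
bold_amplitude = np.array([0.15, 0.55, 0.50, 0.95, 0.90, 1.30])

def fit_ols(predictors, y):
    """Ordinary least squares with an intercept; returns weights and the
    residual sum of squares."""
    X = np.column_stack([np.ones_like(y)] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return beta, rss

# A local-only model versus a model that also includes the edge predictor:
# a large drop in residual error when the edge term is added is the kind of
# evidence for long-range edge contributions referred to in the abstract.
_, rss_local = fit_ols([local_contrast], bold_amplitude)
_, rss_both = fit_ols([local_contrast, edge_response], bold_amplitude)
print(rss_local, rss_both)
```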

    Decoding suprathreshold stochastic resonance with optimal weights

    Abstract not available.
    Liyan Xu, Tony Vladusich, Fabing Duan, Lachlan J. Gunn, Derek Abbott, Mark D. McDonnell