14 research outputs found

    Encoding of naturalistic optic flow by motion sensitive neurons of nucleus rotundus in the zebra finch (Taeniopygia guttata)

    Get PDF
    Eckmeier D, Kern R, Egelhaaf M, Bischof H-J. Encoding of naturalistic optic flow by motion sensitive neurons of nucleus rotundus in the zebra finch (Taeniopygia guttata). Frontiers in Integrative Neuroscience. 2013;7:68.
    The retinal image changes that occur during locomotion, the optic flow, carry information about self-motion and the three-dimensional structure of the environment. Fast-moving animals with little binocular vision especially depend on these depth cues for maneuvering. They actively control their gaze to facilitate perception of depth based on cues in the optic flow. In the visual system of birds, nucleus rotundus neurons were originally found to respond to object motion but not to background motion. However, when background and object were both moving, responses increased the more the direction and velocity of object and background motion on the retina differed. These properties may play a role in representing depth cues in the optic flow. We therefore investigated how neurons in nucleus rotundus respond to optic flow that contains depth cues. We presented simplified and naturalistic optic flow on a panoramic LED display while recording from single neurons in nucleus rotundus of anaesthetized zebra finches. Unlike most studies on motion vision in birds, our stimuli included depth information. We found extensive responses of motion-selective neurons in nucleus rotundus to optic flow stimuli. Simplified stimuli revealed preferences for optic flow reflecting translational or rotational self-motion. Naturalistic optic flow stimuli elicited complex response modulations, but the presence of objects was signaled by only a few neurons. The neurons that did respond to objects in the optic flow, however, showed interesting properties.
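
    The depth argument in this abstract rests on a standard geometric fact: for a pinhole observer, image motion caused by self-translation scales with the inverse distance to each scene point, whereas image motion caused by self-rotation is independent of distance. The sketch below is not from the paper; the viewing geometry, velocities, and function names are made up and only illustrate the usual pinhole optic-flow equations.

```python
# Minimal sketch (not from the paper): image velocity of a scene point for a
# pinhole observer, split into a translational term that depends on depth Z
# and a rotational term that does not (Longuet-Higgins & Prazdny equations).
# All viewing parameters below are invented.
import numpy as np

def optic_flow(x, y, Z, T, omega):
    """Image velocity (u, v) at image location (x, y) of a point at depth Z,
    for observer translation T = (Tx, Ty, Tz) and rotation omega = (wx, wy, wz)."""
    Tx, Ty, Tz = T
    wx, wy, wz = omega
    u = (x * Tz - Tx) / Z + wx * x * y - wy * (1 + x**2) + wz * y
    v = (y * Tz - Ty) / Z + wx * (1 + y**2) - wy * x * y - wz * x
    return np.array([u, v])

x, y = 0.2, 0.1                       # one image location
for Z in (1.0, 10.0):                 # a near and a far point
    trans = optic_flow(x, y, Z, T=(0.5, 0.0, 0.0), omega=(0.0, 0.0, 0.0))
    rot   = optic_flow(x, y, Z, T=(0.0, 0.0, 0.0), omega=(0.0, 0.1, 0.0))
    print(f"Z = {Z:4.1f}  |translation flow| = {np.linalg.norm(trans):.3f}  "
          f"|rotation flow| = {np.linalg.norm(rot):.3f}")
# Only the translational flow changes with depth, which is why optic flow
# generated during translation carries the depth cues discussed above.
```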

    Computational algorithms and neural circuitry for compressed sensing in the mammalian main olfactory bulb

    Get PDF
    A major challenge for many sensory systems is the representation of stimuli that vary along many dimensions. This problem is particularly acute for chemosensory systems because they require sensitivity to a large number of molecular features. Here we use a combination of computational modeling and in vivo electrophysiological data to propose a solution to this problem in the circuitry of the mammalian main olfactory bulb. We model the input to the olfactory bulb as an array of chemical features that, due to the vast size of chemical feature space, is sparsely occupied. We propose that this sparseness enables compression of the chemical feature array by broadly tuned odorant receptors. Reconstruction of stimuli is then achieved by a supernumerary network of inhibitory granule cells. The main olfactory bulb may therefore implement a compressed sensing algorithm that presents several advantages. First, we demonstrate that a model of synaptic interactions between the granule cells and the mitral cells that constitute the output of the olfactory bulb can store a highly efficient representation of odors by competitively selecting a sparse basis set of “expert” granule cells. Second, we further show that this model network can simultaneously learn separable representations of each component of an odor mixture without exposure to those components in isolation. Third, our model is capable of independent and odor-specific adaptation, which could be used by the olfactory system to perform background subtraction or sensitively compare a sample odor with an internal expectation. This model makes specific predictions about the dynamics of granule cell activity during learning. Using in vivo electrophysiological recordings, we corroborate these predictions in an experimental paradigm that stimulates memorization of odorants.
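
    A generic toy version of the compressed-sensing idea the abstract describes, not the authors' model: a sparse chemical-feature vector is compressed by a small set of broadly tuned "receptors" (a random matrix) and recovered by iterative soft-thresholding, with the sparsity-enforcing step standing in for granule-cell inhibition. All dimensions and parameters are arbitrary.

```python
# Toy compressed-sensing sketch of the idea described above, not the authors'
# model: a sparse chemical-feature vector is measured by broadly tuned
# "receptors" (a random matrix) and recovered by iterative soft-thresholding
# (ISTA), whose thresholding step plays the role of sparsifying inhibition.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_receptors, n_active = 400, 80, 5      # arbitrary sizes

# Sparse stimulus: only a few chemical features are present.
x_true = np.zeros(n_features)
x_true[rng.choice(n_features, n_active, replace=False)] = rng.uniform(0.5, 1.0, n_active)

# Broadly tuned receptors: each responds weakly to every feature.
A = rng.normal(size=(n_receptors, n_features)) / np.sqrt(n_receptors)
y = A @ x_true                                      # receptor activations

# ISTA: gradient step on the reconstruction error, then soft-thresholding.
lam = 0.02
step = 1.0 / np.linalg.norm(A, 2) ** 2
x_hat = np.zeros(n_features)
for _ in range(500):
    x_hat = x_hat + step * (A.T @ (y - A @ x_hat))
    x_hat = np.sign(x_hat) * np.maximum(np.abs(x_hat) - lam * step, 0.0)

print("true active features:     ", sorted(np.flatnonzero(x_true)))
print("recovered active features:", sorted(np.flatnonzero(x_hat > 0.1)))
```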

    Multiple Visual Field Representations in the Visual Wulst of a Laterally Eyed Bird, the Zebra Finch (Taeniopygia guttata)

    Get PDF
    Bischof H-J, Eckmeier D, Keary N, Löwel S, Mayer U, Michael N. Multiple Visual Field Representations in the Visual Wulst of a Laterally Eyed Bird, the Zebra Finch (Taeniopygia guttata). PLOS ONE. 2016;11(5):e0154927.
    The visual wulst is the telencephalic target of the avian thalamofugal visual system. It contains several retinotopically organised representations of the contralateral visual field. We used optical imaging of intrinsic signals, electrophysiological recordings, and retrograde tracing with two fluorescent tracers to evaluate properties of these representations in the zebra finch, a songbird with laterally placed eyes. Our experiments revealed some variability of the neuronal maps between individuals, including in the number of detectable maps. It was nonetheless possible to identify three different maps, a posterolateral, a posteromedial, and an anterior one, which were quite constant in their relation to each other. In contrast to the other two, the posterolateral map was visible in every successful experiment. The topography of the other two maps was mirrored relative to that map. Electrophysiological recordings in the anterior and the posterolateral map revealed that all units responded to flashes and to moving bars. Mean directional preferences as well as latencies differed between neurons of the two maps. Tracing experiments confirmed previous reports on the thalamo-wulst connections and showed that the anterior and the posterolateral map receive projections from separate clusters within the thalamic nuclei. Maps are connected to each other by wulst-intrinsic projections. Our experiments confirm that the avian visual wulst contains several separate retinotopic maps with both different physiological properties and different thalamo-wulst afferents. This confirms that the functional organization of the visual wulst is very similar to that of its mammalian equivalent, the visual cortex.

    Gaze Strategy in the Free Flying Zebra Finch (Taeniopygia guttata)

    Get PDF
    Fast-moving animals depend on cues derived from the optic flow on their retina. Optic flow from translational locomotion includes information about the three-dimensional composition of the environment, while optic flow experienced during rotational self-motion does not. Thus, a saccadic gaze strategy that segregates rotations from translational movements during locomotion will facilitate extraction of spatial information from the visual input. We analysed whether birds use such a strategy by high-speed video recording zebra finches from two directions during an obstacle avoidance task. Each frame of the recording was examined to derive the position and orientation of the beak in three-dimensional space. The data show that in all flights the head orientation was shifted in a saccadic fashion and was kept straight between saccades. Therefore, birds use a gaze strategy that actively stabilizes their gaze during translation to simplify optic-flow-based navigation. This is the first evidence of birds actively optimizing optic flow during flight.
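
    The analysis the abstract describes amounts to segmenting head-orientation traces into saccades and gaze-stable intervals. The sketch below is a generic illustration of that kind of segmentation, not the authors' pipeline; the frame rate, threshold, and synthetic yaw trace are invented.

```python
# Generic sketch (not the authors' pipeline) of segmenting a head-orientation
# trace into saccades and gaze-stable intervals by thresholding angular
# velocity.  Frame rate, threshold, and the synthetic yaw trace are invented.
import numpy as np

fps = 500.0                                   # hypothetical high-speed frame rate
t = np.arange(0, 1.0, 1.0 / fps)              # 1 s of flight

# Synthetic yaw trace: stable orientation with two step-like saccades.
yaw_deg = np.zeros_like(t)
yaw_deg[t > 0.3] += 40.0
yaw_deg[t > 0.7] += 35.0
yaw_deg += np.random.default_rng(1).normal(0.0, 0.2, t.size)

ang_vel = np.gradient(yaw_deg, t)             # angular velocity in deg/s
is_saccade = np.abs(ang_vel) > 1000.0         # hypothetical saccade threshold

print(f"saccadic frames: {int(is_saccade.sum())} of {t.size}")
print(f"fraction of time with stabilized gaze: {100.0 * (~is_saccade).mean():.1f} %")
```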

    Optic flow in behavior and brain function of the Zebra Finch

    No full text
    Eckmeier D. Optic flow in behavior and brain function of the Zebra Finch. Bielefeld; 2010

    Four examples of neurons from the posterolateral wulst area with on responses (A-D).

    No full text
    Left two panels: Raster plots (40 sweeps) and PSTHs. Grey shaded areas: Time of stimulus transition across the monitor. Arrows indicate the direction of stimulus movement. Middle panel: Directional responses of the same neurons as circular diagrams; the line originating in the center is the main vector, and its length is a measure of the directionality of the neuron. Right panel: Estimation of the receptive fields by combining the responses to all four directions. Light colors indicate strong activation.
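
    The "main vector" mentioned in the caption corresponds to the mean resultant vector of circular statistics: the vector sum of responses across stimulus directions, whose angle gives the preferred direction and whose normalised length quantifies directionality. A minimal sketch with illustrative spike counts (not data from the figure):

```python
# Sketch of how a "main vector" is typically computed from directional tuning:
# the vector sum of responses across stimulus directions, normalised so that
# its angle gives the preferred direction and its length (0..1) measures
# directionality.  The spike counts below are illustrative, not figure data.
import numpy as np

directions_deg = np.array([0, 90, 180, 270])       # the four stimulus directions
spike_counts = np.array([42, 10, 5, 12])           # hypothetical responses

theta = np.deg2rad(directions_deg)
main_vector = np.sum(spike_counts * np.exp(1j * theta)) / spike_counts.sum()

print(f"preferred direction: {np.degrees(np.angle(main_vector)) % 360:.1f} deg")
print(f"vector length (directionality): {np.abs(main_vector):.2f}")
```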

    Distribution of directional responses within the anteromedial (a-m) and the posterolateral (p-l) wulst field.

    No full text
    A: Positions of the recording sites on the surface of the visual wulst of the left hemisphere. The x-axis depicts the distance from the midline of the brain, the y-axis the distance from the Y-point (see methods: http://www.plosone.org/article/info:doi/10.1371/journal.pone.0154927#sec002). The compass rose at the top left indicates the colour coding used to mark the directions at the recording sites. B: Directional response distribution of the anteromedial map; 0 degrees: downwards, 180 degrees: upwards, 90 degrees: forwards (rostral), 270 degrees: backwards (caudal). C: Posterolateral map; explanations as in B. Red colour indicates on responses, blue colour indicates off responses.

    Neurons from the posterolateral and anteromedial wulst areas with on (red) and off (blue) responses.

    No full text
    Note that the off responses are often maximal for a stimulus direction opposite to that of the on response. Other features as in Fig 5 (http://www.plosone.org/article/info:doi/10.1371/journal.pone.0154927#pone.0154927.g005).

    Optically imaged retinotopic maps in the zebra finch visual wulst are variable.

    No full text
    Activation in the left visual wulst was induced by visual stimulation of the right eye with moving vertical and horizontal bars (top). Colour-coded polar maps of azimuth and elevation (upper row) and grey-scale coded activity maps (lower row) from three zebra finches (ZF 2, ZF 77 and ZF 587), showing at least three separate representations of the visual field. The magnitude of stimulus-driven neuronal activation is given as a number in the upper right corner of the activity maps. A/B, E/F, I/J: azimuth and activity maps from three different birds, showing interindividual differences in both retinotopic details and activation strength. C/D, G/H, K/L: elevation and activity maps from the same birds as above. Note that, e.g., the anterior retinotopic maps in C are mirror images of the posterior retinotopic map. Scale bar 500 μm.
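
    Polar maps of azimuth and elevation like these are commonly obtained by Fourier analysis of each pixel's intrinsic-signal time course at the bar repetition frequency, with the phase encoding visual-field position and the magnitude encoding activation strength. Whether exactly this procedure was used for the figure is not stated in the caption; the sketch below is a generic illustration on synthetic data.

```python
# Generic illustration (synthetic data) of Fourier-based retinotopic mapping:
# the Fourier component of each pixel's time course at the bar repetition
# frequency yields a phase (visual-field position) and a magnitude
# (activation strength).  Not necessarily the procedure behind the figure.
import numpy as np

fps, n_frames, stim_freq = 10.0, 600, 0.1          # assumed imaging rate / bar frequency
t = np.arange(n_frames) / fps

# Synthetic movie: response phase varies smoothly along one image axis.
h, w = 32, 32
true_phase = np.tile(np.linspace(0.0, 2.0 * np.pi, w), (h, 1))
movie = 1.0 + 0.05 * np.cos(2.0 * np.pi * stim_freq * t[:, None, None] - true_phase)
movie += np.random.default_rng(2).normal(0.0, 0.01, movie.shape)

# Fourier component at the stimulus frequency for every pixel.
carrier = np.exp(-2j * np.pi * stim_freq * t)
component = np.tensordot(carrier, movie, axes=(0, 0)) / n_frames

phase_map = np.angle(component)                    # maps to azimuth or elevation
magnitude_map = np.abs(component)                  # "activation strength"
print(f"mean response magnitude: {magnitude_map.mean():.4f}")
```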