8 research outputs found

    tuning

    No full text
    A direct, interval-based method fo

    estimation strongly depends on the analysis method

    No full text
    An important issue in the neurosciences is a quantitative description of the relation between sensory stimuli presented to an animal and their representations in the nervous system. A standard technique is the construction of a neural tuning curve, that is, a neuron’s average firing rate as a function of some parameter characterizing a family of stimuli. It is unavoidable that some of the response data are erroneously attributed to a cell, e.g., during spike sorting. However, the widely used method of statistical analysis based on the sample mean and least-squares approximation for the spike count can perform extremely badly if the noise distribution is not exactly normal, which is almost never the case in applications. Here, we present a method for constructing neural tuning curves that is especially suited for cases of high noise and the presence of outliers. Since it is usually not decidable whether an outlier is faulty or not, we limit the influence of far outlying points rather than try to identify and discard them. In contrast to traditional methods employing a point-by-point estimation of a tuning curve, we use all measured data from all different stimulus conditions at once in the construction. Given the measured data at only a finite number of stimulus conditions, a robust tuning curve is obtained that approximates the cell’s ideal tuning curve optimally in all stimulus conditions with respect to a given distance measure. A measure that assesses the quality of this fitting method with respect to the traditional least-squares fitting method and to a median-based fitting method is introduced. The reliability of inference with respect to the encoding accuracy that can be achieved by a population of neurons is demonstrated in both artificially generated and experimentally recorded data from rat primary visual cortex. While the data shown in this paper are responses to orientation stimuli, the method of tuning curve construction is also viable and maintains it
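
    The abstract describes the approach only in general terms. As a minimal illustration of the two ideas it names, limiting the influence of far outliers instead of discarding them and fitting one curve jointly to the data from all stimulus conditions, the sketch below fits a parametric orientation tuning curve to spike counts with a Huber-type loss. The circular-Gaussian form of the curve, the Huber loss, and all parameter values are assumptions made for this example, not the estimator or distance measure of the paper.

    import numpy as np
    from scipy.optimize import minimize

    def tuning_curve(theta, params):
        """Circular tuning curve: baseline + gain * exp(kappa * (cos(2*(theta - mu)) - 1))."""
        base, gain, kappa, mu = params
        return base + gain * np.exp(kappa * (np.cos(2.0 * (theta - mu)) - 1.0))

    def huber(r, delta):
        """Huber loss: quadratic for small residuals, only linear for far outliers."""
        a = np.abs(r)
        return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

    def fit_robust(theta, counts, delta=2.0):
        """Fit one curve jointly to the (orientation, spike count) pairs of all conditions."""
        def objective(params):
            return huber(counts - tuning_curve(theta, params), delta).sum()
        x0 = np.array([counts.min(), counts.max() - counts.min(), 1.0, theta[np.argmax(counts)]])
        return minimize(objective, x0, method="Nelder-Mead").x

    # Synthetic check: 8 orientations, 10 trials each, plus a few gross outliers
    # standing in for spikes erroneously attributed to the cell.
    rng = np.random.default_rng(0)
    theta = np.repeat(np.linspace(0.0, np.pi, 8, endpoint=False), 10)
    counts = rng.poisson(tuning_curve(theta, (2.0, 10.0, 2.0, np.pi / 3))).astype(float)
    counts[rng.choice(counts.size, 5, replace=False)] += 40.0
    print(fit_robust(theta, counts))   # estimated (baseline, gain, kappa, preferred orientation)

    For small residuals the Huber loss behaves like least squares, while the injected outliers contribute only linearly, which is the qualitative behavior of bounding the influence of far outlying points that the abstract describes.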

    Local interactions in neural networks explain global effects in Gestalt processing and masking

    No full text
    One of the fundamental and puzzling questions in vision research is how objects are segmented from their backgrounds and how object formation evolves in time. The recently discovered shine-through effect allows one to study object segmentation and object formation of a masked target depending on the spatiotemporal Gestalt of the masking stimulus (Herzog & Koch, 2001). In the shine-through effect, a vernier (two abutting lines) precedes a grating for a very short time. For small gratings, the vernier remains invisible while it regains visibility as a shine-through element for extended and homogeneous gratings. However, even subtle deviations from the homogeneity of the grating diminish or even abolish shine-through. At first glance, these results suggest that explanations of these effects have to rely on high-level Gestalt terminology such as homogeneity rather than on low-level properties such as luminance (Herzog, Fahle, & Koch, 2001). Here, we show that a simple neural network model of the Wilson-Cowan type qualitatively and quantitatively explains the basic effects in the shine-through paradigm, although the model does not contain any explicit, global Gestalt processing. Visibility of the target vernier corresponds to transient activation of neural populations resulting from the dynamics of local lateral interactions of excitatory and inhibitory layers of neural populations.
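
    To make the model class concrete, the sketch below simulates a one-dimensional rate model of the Wilson-Cowan type: one excitatory and one inhibitory layer coupled by local Gaussian lateral interactions, driven first by a brief central 'vernier' and then by an extended 'grating'. All parameter values, kernel widths, and the stimulus layout are assumptions chosen for illustration, not the fitted model or stimuli of the paper; the transient central excitation is only a stand-in for the visibility measure described in the abstract.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    N, dt, T = 200, 0.5, 300.0           # spatial positions, time step (ms), duration (ms)
    tau_e, tau_i = 10.0, 20.0            # time constants of the excitatory and inhibitory layers (ms)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-(x - 2.0)))

    def stimulus(t):
        """Brief central 'vernier' (first 20 ms) followed by an extended 'grating' (20-50 ms)."""
        s = np.zeros(N)
        if t < 20.0:
            s[N // 2 - 2:N // 2 + 2] = 5.0
        elif t < 50.0:
            s[N // 2 - 40:N // 2 + 40:8] = 5.0
        return s

    E, I, trace = np.zeros(N), np.zeros(N), []
    for step in range(int(T / dt)):
        drive = stimulus(step * dt)
        # Local lateral interactions: narrow excitatory and broader inhibitory coupling.
        exc = gaussian_filter1d(E, sigma=3.0, mode="wrap")
        inh = gaussian_filter1d(I, sigma=9.0, mode="wrap")
        E += dt / tau_e * (-E + sigmoid(12.0 * exc - 10.0 * inh + drive))
        I += dt / tau_i * (-I + sigmoid(10.0 * exc - 2.0 * inh + drive))
        trace.append(E[N // 2])          # transient activity at the vernier location

    print(f"peak transient at the vernier location: {max(trace):.3f}")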

    Dynamics of neuronal populations modeled by a Wilson–Cowan system account for the transient visibility of masked stimuli

    No full text

    Stimulus Representation in Rat Primary Visual Cortex: Multi-Electrode Recordings With Micromachined Silicon Probes and Estimation Theory

    No full text
    The study of neural population codes relies on massively parallel recordings in combination with theoretically motivated analysis tools. We applied two multi-site recording techniques to record from cells throughout cortical depth in a minimally invasive way. The feasibility of such experiments in area 17 of the anesthetized rat is demonstrated. Bayesian reconstruction and the interpretative framework of Fisher information are introduced. We demonstrate the applicability and usefulness of Bayesian stimulus reconstruction and show that even small numbers of neurons can yield a high degree of representational accuracy under favorable conditions. Results are discussed and future lines of research are outlined.
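
    As a toy version of the decoding framework named in the abstract, the sketch below reconstructs an orientation from the spike counts of a small model population under a flat prior and independent Poisson noise, and evaluates the Fisher information of the same population. The tuning-curve shape, population size, and noise model are assumptions for this example, not the recorded data or the exact analysis of the paper.

    import numpy as np
    from scipy.stats import poisson

    rng = np.random.default_rng(1)
    candidates = np.linspace(0.0, np.pi, 180, endpoint=False)   # candidate orientations
    preferred = np.linspace(0.0, np.pi, 12, endpoint=False)     # 12 hypothetical neurons

    def rates(theta):
        """Mean spike counts of all neurons for orientation theta (circular tuning)."""
        return 2.0 + 18.0 * np.exp(2.5 * (np.cos(2.0 * (theta - preferred)) - 1.0))

    # Simulate one trial at a known orientation, then decode it.
    theta_true = np.pi / 3
    counts = rng.poisson(rates(theta_true))

    # Posterior over orientation with a flat prior: p(theta | counts) is proportional
    # to the product of the Poisson likelihoods of all neurons.
    log_post = np.array([poisson.logpmf(counts, rates(th)).sum() for th in candidates])
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    theta_map = candidates[np.argmax(post)]

    # Fisher information for independent Poisson neurons:
    # J(theta) = sum_i f_i'(theta)^2 / f_i(theta); 1/sqrt(J) bounds the decoding error.
    d = 1e-4
    fprime = (rates(theta_true + d) - rates(theta_true - d)) / (2.0 * d)
    J = np.sum(fprime**2 / rates(theta_true))

    print(f"true {np.degrees(theta_true):.1f} deg, MAP estimate {np.degrees(theta_map):.1f} deg")
    print(f"Fisher information {J:.1f} per rad^2, Cramer-Rao bound {np.degrees(1.0 / np.sqrt(J)):.2f} deg")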