655 research outputs found

    Measuring the spatial extent of texture pooling using reverse correlation

    The local image representation produced by early stages of visual analysis is uninformative regarding spatially extensive textures and surfaces. We know little about the cortical algorithm used to combine local information over space, and still less about the area over which it can operate. But such operations are vital to support perception of real-world objects and scenes. Here, we deploy a novel reverse-correlation technique to measure the extent of spatial pooling for target regions of different areas placed either in the central visual field or more peripherally. Stimuli were large arrays of micropatterns, with their contrasts perturbed individually on an interval-by-interval basis. By comparing trial-by-trial observer responses with the predictions of computational models, we show that substantial regions (up to 13 carrier cycles) of a stimulus can be monitored in parallel by summing contrast over area. This summing strategy is very different from the more widely assumed signal selection strategy (a MAX operation), and suggests that neural mechanisms representing extensive visual textures can be recruited by attention. We also demonstrate that template resolution is much less precise in the parafovea than in the fovea, consistent with recent accounts of crowding.
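The two pooling rules contrasted above can be sketched in a toy two-interval (2AFC) simulation. This is a hedged illustration only: the function name and all parameter values below are assumptions for demonstration, not the experiment's stimuli or fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n_trials=2000, n_elements=16, pedestal=0.1,
             increment=0.02, noise_sd=0.05):
    """Toy 2AFC simulation of the two pooling rules. The target
    interval carries a small contrast increment on every micropattern;
    every element in both intervals is perturbed by Gaussian noise."""
    target = pedestal + increment + noise_sd * rng.standard_normal((n_trials, n_elements))
    null = pedestal + noise_sd * rng.standard_normal((n_trials, n_elements))
    # SUM rule: choose the interval with the larger pooled contrast
    acc_sum = float(np.mean(target.sum(axis=1) > null.sum(axis=1)))
    # MAX rule: choose the interval with the single most intense element
    acc_max = float(np.mean(target.max(axis=1) > null.max(axis=1)))
    return acc_sum, acc_max
```

When a weak increment is spread across every element, summing over area exploits all of the signal while the MAX rule discards most of it, so the SUM observer typically scores higher; reverse correlation distinguishes the two rules by how each element's noise perturbation correlates with the observer's trial-by-trial responses.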

    Regarding the benefit of zero-dimensional noise

    Baker and Meese (2012) (B&M) provided an empirically driven criticism of the use of two-dimensional (2D) pixel noise in equivalent noise (EN) experiments. Their main objection was that in addition to injecting variability into the contrast detecting mechanisms, 2D noise also invokes gain control processes from a widely tuned contrast gain pool (e.g., Foley, 1994). B&M also developed a zero-dimensional (0D) noise paradigm in which all of the variance is concentrated in the mechanisms involved in the detection process. They showed that this form of noise conformed much more closely to expectations than did a 2D variant.
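The EN logic that this criticism targets can be written in one line: squared contrast threshold grows with the sum of external and equivalent internal noise variances. A minimal sketch, with placeholder parameter names and values (`sigma_eq`, `k` are assumptions):

```python
from math import sqrt

def en_threshold(sigma_ext, sigma_eq=1.0, k=2.0):
    """Standard equivalent-noise prediction: threshold is proportional
    to the root of the summed external and internal (equivalent) noise
    variances. With 0D noise, sigma_ext acts only within the detecting
    mechanism, so no extra term for noise-driven gain control is
    needed; with 2D pixel noise that assumption can fail."""
    return k * sqrt(sigma_ext ** 2 + sigma_eq ** 2)
```

Thresholds are flat while external noise is weak relative to `sigma_eq` and rise in proportion to `sigma_ext` once it dominates; fitting that elbow is how EN experiments estimate internal noise and efficiency.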

    A common rule for integration and suppression of luminance contrast across eyes, space, time, and pattern

    Visual perception begins by dissecting the retinal image into millions of small patches for local analyses by local receptive fields. However, image structures extend well beyond these receptive fields and so further processes must be involved in sewing the image fragments back together to derive representations of higher order (more global) structures. To investigate the integration process, we also need to understand the opposite process of suppression. To investigate both processes together, we measured triplets of dipper functions for targets and pedestals involving interdigitated stimulus pairs (A, B). Previous work has shown that summation and suppression operate over the full contrast range for the domains of ocularity and space. Here, we extend that work to include orientation and time domains. Temporal stimuli were 15-Hz counter-phase sine-wave gratings, where A and B were the positive and negative phases of the oscillation, respectively. For orientation, we used orthogonally oriented contrast patches (A, B) whose sum was an isotropic difference of Gaussians. Results from all four domains could be understood within a common framework in which summation operates separately within the numerator and denominator of a contrast gain control equation. This simple arrangement of summation and counter-suppression achieves integration of various stimulus attributes without distorting the underlying contrast code.
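The arrangement described in the final sentences can be made concrete in a few lines. This is a schematic form only; the exponents `p`, `q` and saturation constant `z` are illustrative assumptions, not fitted values:

```python
def response(c_a, c_b, p=2.4, q=2.0, z=1.0):
    """Schematic gain-control equation in which the two stimulus
    components (A, B) are summed separately within the numerator
    (excitation) and the denominator (suppression)."""
    excitation = (c_a + c_b) ** p
    suppression = z + (c_a + c_b) ** q
    return excitation / suppression
```

Because the components are summed before either nonlinearity, the output depends only on the total contrast `c_a + c_b`: splitting a fixed total between A and B leaves the response unchanged, which is the sense in which integration does not distort the underlying contrast code.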

    Contrast integration over area is extensive: a three-stage model of spatial summation

    Classical studies of area summation measure contrast detection thresholds as a function of grating diameter. Unfortunately, (i) this approach is compromised by retinal inhomogeneity and (ii) it potentially confounds summation of signal with summation of internal noise. The Swiss cheese stimulus of T. S. Meese and R. J. Summers (2007) and the closely related Battenberg stimulus of T. S. Meese (2010) were designed to avoid these problems by keeping target diameter constant and modulating interdigitated checks of first-order carrier contrast within the stimulus region. This approach has revealed a contrast integration process with greater potency than the classical model of spatial probability summation. Here, we used Swiss cheese stimuli to investigate the spatial limits of contrast integration over a range of carrier frequencies (1–16 c/deg) and raised plaid modulator frequencies (0.25–32 cycles/check). Subthreshold summation for interdigitated carrier pairs remained strong (~4 to 6 dB) up to 4 to 8 cycles/check. Our computational analysis of these results implied linear signal combination (following square-law transduction) over either (i) 12 carrier cycles or more or (ii) 1.27 deg or more. Our model has three stages of summation: short-range summation within linear receptive fields, medium-range integration to compute contrast energy for multiple patches of the image, and long-range pooling of the contrast integrators by probability summation. Our analysis legitimizes the inclusion of widespread integration of signal (and noise) within hierarchical image processing models. It also confirms the individual differences in the spatial extent of integration that emerge from our approach.
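The three summation stages can be sketched schematically. Everything here illustrates the shape of the architecture, not the fitted model: stage 1 (linear receptive fields) is assumed to have run already, stage 2 applies square-law transduction and sums linearly within each patch, and stage 3 pools the patch integrators with a high-exponent Minkowski norm, a common stand-in for probability summation.

```python
import numpy as np

def three_stage_response(filter_outputs, patch_ids, m=4.0):
    """Schematic three-stage summation model (illustrative forms):
    stage 1: `filter_outputs` are linear receptive-field responses,
             assumed already computed;
    stage 2: square-law transduction, then linear summation within
             each patch, giving local contrast energy;
    stage 3: long-range pooling of patch energies via a high-exponent
             Minkowski norm (probability-summation stand-in)."""
    filter_outputs = np.asarray(filter_outputs, dtype=float)
    patch_ids = np.asarray(patch_ids)
    energies = np.array([np.sum(filter_outputs[patch_ids == p] ** 2)
                         for p in np.unique(patch_ids)])
    return float(np.sum(energies ** m) ** (1.0 / m))
```

One consequence worth flagging: combining two equal inputs inside one patch (linear summation of energy) raises the response more than spreading them across two patches (probability summation), which is the extra potency of integration that the Swiss cheese experiments detect.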

    Area summation of first- and second-order modulations of luminance

    To extend our understanding of the early visual hierarchy, we investigated the long-range integration of first- and second-order signals in spatial vision. In our first experiment we performed a conventional area summation experiment where we varied the diameter of (a) luminance-modulated (LM) noise and (b) contrast-modulated (CM) noise. Results from the LM condition replicated previous findings with sine-wave gratings in the absence of noise, consistent with long-range integration of signal contrast over space. For CM, the summation function was much shallower than for LM suggesting, at first glance, that the signal integration process was spatially less extensive than for LM. However, an alternative possibility was that the high spatial frequency noise carrier for the CM signal was attenuated by peripheral retina (or cortex), thereby impeding our ability to observe area summation of CM in the conventional way. To test this, we developed the "Swiss cheese" stimulus of Meese and Summers (2007) in which signal area can be varied without changing the stimulus diameter, providing some protection against inhomogeneity of the retinal field. Using this technique and a two-component subthreshold summation paradigm we found that (a) CM is spatially integrated over at least five stimulus cycles (possibly more), (b) spatial integration follows square-law signal transduction for both LM and CM, and (c) the summing device integrates over spatially interdigitated LM and CM signals when they are co-oriented, but not when cross-oriented. The spatial pooling mechanism that we have identified would be a good candidate component for a module involved in representing visual textures, including their spatial extent.

    Paradoxical psychometric functions ("swan functions") are explained by dilution masking in four stimulus dimensions

    The visual system dissects the retinal image into millions of local analyses along numerous visual dimensions. However, our perceptions of the world are not fragmentary, so further processes must be involved in stitching it all back together. Simply summing up the responses would not work because this would convey an increase in image contrast with an increase in the number of mechanisms stimulated. Here, we consider a generic model of signal combination and counter-suppression designed to address this problem. The model is derived and tested for simple stimulus pairings (e.g. A + B), but is readily extended over multiple analysers. The model can account for nonlinear contrast transduction, dilution masking, and signal combination at threshold and above. It also predicts nonmonotonic psychometric functions where sensitivity to signal A in the presence of pedestal B first declines with increasing signal strength (paradoxically dropping below 50% correct in two-interval forced choice), but then rises back up again, producing a contour that follows the wings and neck of a swan. We looked for and found these "swan" functions in four different stimulus dimensions (ocularity, space, orientation, and time), providing some support for our proposal.

    EDISON-WMW: Exact Dynamic Programming Solution of the Wilcoxon-Mann-Whitney Test

    In many research disciplines, hypothesis tests are applied to evaluate whether findings are statistically significant or could be explained by chance. The Wilcoxon–Mann–Whitney (WMW) test is among the most popular hypothesis tests in medicine and life science to analyze whether two groups of samples are equally distributed. This nonparametric statistical homogeneity test is commonly applied in molecular diagnosis. Generally, the exact solution of the WMW test requires a high combinatorial effort for large sample cohorts containing a significant number of ties. Hence, the P value is frequently approximated by a normal distribution. We developed EDISON-WMW, a new approach to calculate the exact permutation P value of the two-tailed unpaired WMW test without any corrections required and allowing for ties. The method relies on dynamic programming to solve the combinatorial problem of the WMW test efficiently. Beyond a straightforward implementation of the algorithm, we presented different optimization strategies and developed a parallel solution. Using our program, the exact P value for large cohorts containing more than 1,000 samples with ties can be calculated within minutes. We demonstrate the performance of this novel approach on randomly generated data, benchmark it against 13 other commonly applied approaches, and evaluate molecular biomarkers for lung carcinoma and chronic obstructive pulmonary disease (COPD). We found that approximated P values were generally higher than the exact solution provided by EDISON-WMW. Importantly, the algorithm can also be applied to high-throughput omics datasets, where hundreds or thousands of features are included. To provide easy access to the multi-threaded version of EDISON-WMW, a web-based solution of our algorithm is freely available at http://www.ccb.uni-saarland.de/software/wtest/
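The dynamic-programming idea can be illustrated for the tie-free case: count, for each possible rank sum, how many ways the first group's ranks could produce it, then compare the observed rank sum against that exact distribution. The sketch below is a simplified teaching version, not the published EDISON-WMW algorithm; in particular it omits tie handling, which is a central feature of EDISON-WMW.

```python
from math import comb

def exact_wmw_p(x, y):
    """Exact two-tailed Wilcoxon-Mann-Whitney p-value for small samples
    via the classic dynamic-programming count of rank-sum
    distributions. Assumes all values are distinct (no ties)."""
    n1, n2 = len(x), len(y)
    n = n1 + n2
    combined = sorted(x + y)
    # observed rank-sum of group x (ranks are 1..n; values are distinct)
    w_obs = sum(combined.index(v) + 1 for v in x)
    # dp[k][s] = number of ways to choose k ranks from 1..n summing to s
    max_sum = n * (n + 1) // 2
    dp = [[0] * (max_sum + 1) for _ in range(n1 + 1)]
    dp[0][0] = 1
    for r in range(1, n + 1):           # add rank r to the pool
        for k in range(min(n1, r), 0, -1):
            for s in range(max_sum, r - 1, -1):
                dp[k][s] += dp[k - 1][s - r]
    # two-tailed: count rank sums at least as far from the mean as w_obs;
    # work with 2*s - n1*(n+1) so the arithmetic stays in integers
    dev = abs(2 * w_obs - n1 * (n + 1))
    count = sum(c for s, c in enumerate(dp[n1])
                if abs(2 * s - n1 * (n + 1)) >= dev)
    return count / comb(n, n1)
```

For `[1, 2]` versus `[3, 4]` this gives p = 1/3: of the six equally likely ways to assign ranks to the first group, two are at least as extreme as the observed split.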

    Contrast and lustre: a model that accounts for eleven different forms of contrast discrimination in binocular vision

    Our goal here is a more complete understanding of how information about luminance contrast is encoded and used by the binocular visual system. In two-interval forced-choice experiments we assessed observers' ability to discriminate changes in contrast that could be an increase or decrease of contrast in one or both eyes, or an increase in one eye coupled with a decrease in the other (termed IncDec). The base or pedestal contrasts were either in-phase or out-of-phase in the two eyes. The opposed changes in the IncDec condition did not cancel each other out, implying that along with binocular summation, information is also available from mechanisms that do not sum the two eyes' inputs. These might be monocular mechanisms. With a binocular pedestal, monocular increments of contrast were much easier to see than monocular decrements. These findings suggest that there are separate binocular (B) and monocular (L,R) channels, but only the largest of the three responses, max(L,B,R), is available to perception and decision. Results from contrast discrimination and contrast matching tasks were described very accurately by this model. Stimuli, data, and model responses can all be visualized in a common binocular contrast space, allowing a more direct comparison between models and data. Some results with out-of-phase pedestals were not accounted for by the max model of contrast coding, but were well explained by an extended model in which gratings of opposite polarity create the sensation of lustre. Observers can discriminate changes in lustre alongside changes in contrast.
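The max(L,B,R) selection rule can be sketched directly. The response forms below are deliberately simplified assumptions (linear responses, hypothetical binocular weight of 0.5); the published model applies gain control at each site.

```python
def perceived_contrast_channel(c_left, c_right):
    """Sketch of the max(L, B, R) rule: only the largest of the two
    monocular responses and the binocular-sum response reaches
    perception and decision."""
    responses = {'L': c_left,
                 'B': 0.5 * (c_left + c_right),  # hypothetical weighting
                 'R': c_right}
    winner = max(responses, key=responses.get)
    return winner, responses[winner]
```

With a binocular pedestal of 0.2, a monocular increment to 0.3 wins the max and is visible, while a monocular decrement to 0.1 is masked because the unchanged eye's response still wins; this mirrors the increment/decrement asymmetry reported above.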

    Object Image Size Is a Fundamental Coding Dimension in Human Vision: New Insights and Model

    In previous psychophysical work we found that luminance contrast is integrated over retinal area subject to contrast gain control. If different mechanisms perform this operation for a range of superimposed retinal regions of different sizes, this could provide the basis for size-coding. To test this idea we included two novel features in a standard adaptation paradigm to discount more pedestrian accounts of repulsive size-aftereffects. First, we used spatially jittering luminance-contrast adaptors to avoid simple contour displacement aftereffects. Second, we decoupled adaptor and target spatial frequency to avoid the well-known spatial frequency shift aftereffect. Empirical results indicated strong evidence of a bidirectional size adaptation aftereffect. We show that the textbook population model is inappropriate for our results, and develop our existing model of contrast perception to include multiple size mechanisms with divisive surround-suppression from the largest mechanism. For a given stimulus patch, this delivers a blurred step-function of responses across the population, with contrast and size encoded by the height and lateral position of the step. Unlike for textbook population coding schemes, our human results (N = 4 male, N = 4 female) displayed two asymmetries: (i) size aftereffects were greatest for targets smaller than the adaptor, and (ii) on that side of the function, results did not return to baseline, even when targets were 25% of adaptor diameter. Our results and emergent model properties provide evidence for a novel dimension of visual coding (size) and a novel strategy for that coding, consistent with previous results on contrast detection and discrimination for various stimulus sizes.
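The blurred-step population code can be caricatured in a few lines. The functional forms and constants here are assumptions chosen to expose the coding principle, not the fitted model:

```python
import numpy as np

def population_response(stim_size, stim_contrast, apertures):
    """Caricature of the size-coding scheme: each mechanism integrates
    contrast over its own aperture; all are divisively suppressed by
    the largest mechanism."""
    apertures = np.asarray(apertures, dtype=float)
    # energy grows with aperture until the aperture covers the whole
    # stimulus, then saturates
    energy = stim_contrast * np.minimum(apertures, stim_size)
    surround = np.max(energy)  # response of the largest mechanism
    return energy / (1.0 + surround)
```

Contrast scales the height of the resulting plateau, while stimulus size sets where the response profile bends, so the two attributes are carried by separable features of the population response.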

    The Effect of Interocular Phase Difference on Perceived Contrast

    Binocular vision is traditionally treated as two processes: the fusion of similar images, and the interocular suppression of dissimilar images (e.g. binocular rivalry). Recent work has demonstrated that interocular suppression is phase-insensitive, whereas binocular summation occurs only when stimuli are in phase. But how do these processes affect our perception of binocular contrast? We measured perceived contrast using a matching paradigm for a wide range of interocular phase offsets (0–180°) and matching contrasts (2–32%). Our results revealed a complex interaction between contrast and interocular phase. At low contrasts, perceived contrast reduced monotonically with increasing phase offset, by up to a factor of 1.6. At higher contrasts the pattern was non-monotonic: perceived contrast was veridical for in-phase and antiphase conditions, and monocular presentation, but increased a little at intermediate phase angles. These findings challenge a recent model in which contrast perception is phase-invariant. The results were predicted by a binocular contrast gain control model. The model involves monocular gain controls with interocular suppression from positive and negative phase channels, followed by summation across eyes and then across space. Importantly, this model, applied to conditions with vertical disparity, has only a single (zero) disparity channel and embodies both fusion and suppression processes within a single framework.
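A generic two-stage binocular gain control of the kind described (monocular gain controls with interocular suppression, followed by summation across eyes) can be sketched as follows. Parameter values are illustrative assumptions, and the separate positive- and negative-phase suppression channels of the full model are collapsed into a single in-phase channel:

```python
def binocular_response(c_left, c_right, m=1.3, p=8.0, q=6.5, s=1.0, z=0.01):
    """Generic two-stage binocular gain control (illustrative
    parameters). Stage 1: each eye's signal is divisively suppressed
    by both eyes. Stage 2: the summed binocular signal passes through
    an output nonlinearity."""
    pool = s + c_left + c_right            # interocular suppression pool
    stage_left = c_left ** m / pool
    stage_right = c_right ** m / pool
    binsum = stage_left + stage_right      # summation across eyes
    return binsum ** p / (z + binsum ** q)
```

In this form the binocular advantage over monocular viewing is large at low contrasts but shrinks toward unity at high contrasts, because the first-stage interocular suppression progressively equalizes the summed signal for one-eye and two-eye presentation; that is the qualitative pattern the matching data require.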