8 research outputs found

    Measuring the Depth Perception Invoked by a Simple, Sustained, Polarity-Reversed Stereogram

    The same-sign hypothesis suggests that only those edges in the two retinal images whose luminance gradients have the same sign can be stereoscopically fused to generate a perception of depth. If true, one would expect that the magnitude of the depth induced by a polarity-reversed stereogram (i.e. one where the corresponding figures in the two stereo half images have opposite luminance polarity) should be determined by the disparity of the same-sign edges. Here we present a simple, sustained, polarity-reversed stereogram which we believe to be the first example of a polarity-reversed stereogram where this prediction is shown to be true. We conclude by discussing possible reasons why this prediction fails for other polarity-reversed stereograms.
    Funding: Defense Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); Office of Naval Research (N00014-95-1-0657); National Science Foundation (SBR-9905194)
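
    As a rough illustration of the same-sign constraint (not the authors' method), the sketch below detects edges in two 1-D luminance profiles, keeps only pairs whose gradients share a sign, and reports the disparity of those pairs. The profiles, gradient threshold, and nearest-neighbour matching rule are illustrative assumptions.

    import numpy as np

    def edges(profile, thresh=0.1):
        """Return (position, gradient sign) where the luminance gradient is strong."""
        g = np.gradient(profile.astype(float))
        idx = np.where(np.abs(g) > thresh)[0]
        return [(i, np.sign(g[i])) for i in idx]

    def same_sign_disparities(left, right, thresh=0.1):
        """Disparities (in pixels) between nearest edge pairs with same-sign gradients."""
        out = []
        right_edges = edges(right, thresh)
        for pos_l, sign_l in edges(left, thresh):
            candidates = [pos_r for pos_r, sign_r in right_edges if sign_r == sign_l]
            if candidates:
                nearest = min(candidates, key=lambda p: abs(p - pos_l))
                out.append(nearest - pos_l)
        return out

    # Toy half-images: a bright bar on a dark background, shifted by 3 pixels
    # in the right eye's view (same polarity, so every edge has a same-sign partner).
    x = np.zeros(64)
    x[20:30] = 1.0          # left half-image
    y = np.roll(x, 3)       # right half-image
    print(same_sign_disparities(x, y))   # -> disparities near 3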

    The Use of the Cancellation Technique to Quantify the Hermann Grid Illusion

    When observers view a grid of mid-gray lines superimposed on a black background, they report seeing illusory dark gray smudges at the grid intersections, an effect known as the Hermann grid illusion. The strength of the illusion is often measured using the cancellation technique: A white disk is placed over one of these intersections and the luminance of the disk is reduced until the disk disappears. Its luminance at this point, i.e., the disk's detection threshold, is taken to be a measure of the strength of the illusion. Our experiments showed that some distortions of the Hermann grid, which were sufficient to completely disrupt the illusion, did not reduce the disk's detection threshold. This showed that the cancellation technique is not a valid method for measuring the strength of the Hermann grid illusion. Those studies that attempted to use this technique inadvertently studied a different effect known as the blanking phenomenon. We conclude by presenting an explanation for the latter effect.
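
    To make the cancellation procedure concrete, here is a minimal sketch of a descending series of the kind used to estimate the disk's detection threshold. The simulated observer, starting luminance, and step size are illustrative assumptions, not details taken from the paper.

    import random

    def simulated_observer(disk_luminance, surround=0.5, noise=0.02):
        """Stand-in for a human observer: reports 'seen' when the disk is
        detectably brighter than the mid-gray lines it sits on."""
        return disk_luminance > surround + random.gauss(0.03, noise)

    def cancellation_threshold(start=1.0, step=0.01, surround=0.5):
        """Descending series: lower the white disk's luminance until it is no
        longer seen; that luminance is taken as the detection threshold."""
        luminance = start
        while luminance > surround and simulated_observer(luminance, surround):
            luminance -= step
        return luminance

    print(cancellation_threshold())   # threshold a little above the 0.5 surround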

    Figure 2

    The experimental results. For each experiment, the plots show the luminance required for the target disk to be detected 80.35% of the time. The error bars represent the standard error of the mean. For some points the errors are so small that the error bars corresponding to those points collapse into horizontal lines. The horizontal dotted line represents the luminance of the gray background.

    Figure 3

    The receptive fields of two cells. For each cell, the inner circle represents the region where stimulation by a light source excites the cell, and the area between the two circles represents the region where such stimulation inhibits the cell. Please see the text for further details.

    Figure 1

    The six displays used in the experiments. Display (a) is the Hermann grid display [1]. Most observers see illusory dark gray smudges at the grid intersections. The illusion is strongest when viewed on a computer monitor. Displays (b) and (c) are based on displays that were presented at the European Conference on Visual Perception (Geier, Sera, Bernath, 2005, Perception 34, supplement 54).

    Insights into accuracy of social scientists' forecasts of societal change

    How well can social scientists predict societal change, and what processes underlie their predictions? To answer these questions, we ran two forecasting tournaments testing the accuracy of predictions of societal change in domains commonly studied in the social sciences: ideological preferences, political polarization, life satisfaction, sentiment on social media, and gender-career and racial bias. Following provision of historical trend data on the domain, social scientists submitted pre-registered monthly forecasts for a year (Tournament 1; N=86 teams/359 forecasts), with an opportunity to update forecasts based on new data six months later (Tournament 2; N=120 teams/546 forecasts). Benchmarking forecasting accuracy revealed that social scientists' forecasts were on average no more accurate than simple statistical models (historical means, random walk, or linear regressions) or the aggregate forecasts of a sample from the general public (N=802). However, scientists were more accurate if they had scientific expertise in a prediction domain, were interdisciplinary, used simpler models, and based predictions on prior data.
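
    For readers unfamiliar with these benchmarks, the sketch below shows what historical-mean, random-walk, and linear-regression baselines look like for a monthly series, scored here with mean absolute error. The toy data and scoring choice are assumptions for illustration, not the tournaments' actual evaluation code.

    import numpy as np

    def baseline_forecasts(history, horizon=12):
        """Three simple statistical baselines for a monthly time series."""
        t = np.arange(len(history))
        future_t = np.arange(len(history), len(history) + horizon)
        historical_mean = np.full(horizon, history.mean())
        random_walk = np.full(horizon, history[-1])        # carry the last observation forward
        slope, intercept = np.polyfit(t, history, 1)       # ordinary least-squares linear trend
        linear_trend = intercept + slope * future_t
        return {"historical mean": historical_mean,
                "random walk": random_walk,
                "linear regression": linear_trend}

    def mean_absolute_error(forecast, actual):
        return float(np.mean(np.abs(forecast - actual)))

    # Toy example: 36 months of history, 12 months of held-out "future" data.
    rng = np.random.default_rng(0)
    series = 50 + 0.2 * np.arange(48) + rng.normal(0, 1.5, 48)
    history, actual = series[:36], series[36:]

    for name, forecast in baseline_forecasts(history).items():
        print(f"{name:18s} MAE = {mean_absolute_error(forecast, actual):.2f}")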