
    Sustained attention and the flash grab effect

    Acknowledgments: The research leading to these results received funding from the European Research Council under the European Union's Seventh Framework Program (FP7/2007–2013)/ERC Grant Agreement No. AG324070 to PC, from NSERC Canada Discovery Grant RGPIN-2019-03989 to PC, and from Leverhulme Early Career Fellowship ECF-2020-488 to NA. Peer reviewed.

    Do Artists See Their Retinas?

    Our perception starts with the image that falls on our retina, and in this retinal image, distant objects are small and shadowed surfaces are dark. But this is not what we see. Visual constancies correct for distance so that, for example, a person approaching us does not appear to become a larger person. Interestingly, an artist, when rendering a scene realistically, must undo all these corrections, making distant objects again small. To determine whether years of art training and practice have conferred any specialized visual expertise, we compared the perceptual abilities of artists to those of non-artists in three tasks. In the first two, we asked them to adjust either the size or the brightness of a target to match it to a standard that was presented on a perspective grid or within a cast shadow. We instructed them to ignore the context, judging size, for example, by imagining the separation between their fingers if they were to pick up the test object from the display screen. In the third task, we tested the speed with which artists access visual representations. Subjects searched for an L-shape in contact with a circle; because of visual completion, the target appeared to be a square occluded behind the circle, camouflaging the L-shape that is explicit in the retinal image. Surprisingly, artists were as affected by context as non-artists in all three tests. Moreover, artists took, on average, significantly more time to make their judgments, implying that they were doing their best to demonstrate the special skills that we, and they, believed they had acquired. Our data therefore support Gombrich's proposal that artists do not have special perceptual expertise to undo the effects of constancies. Instead, once the context is present in their drawing, they need only compare the drawing to the scene to match the effect of constancies in both.

    What Line Drawings Reveal About the Visual Brain

    Scenes in the real world carry large amounts of information about color, texture, shading, illumination, and occlusion, giving rise to our perception of a rich and detailed environment. In contrast, line drawings contain only a sparse subset of scene contours. Nevertheless, they too trigger vivid three-dimensional impressions, despite having no equivalent in the natural world. Here, we ask why line drawings work. We show that they exploit the underlying neural codes of vision, and that artists' intuitions go well beyond the understanding of vision found in current neuroscience and computer vision.

    A shape-contrast effect for briefly presented stimuli.

    When a suprathreshold visual stimulus is flashed for 60–300 ms and masked, though it is no longer visibly degraded, the perceived shape is vulnerable to distortion effects, especially when a second shape is present. Specifically, when preceded by a flashed line, a briefly flashed circle appears to be an ellipse elongated perpendicular to the line. Given an appropriate stimulus onset asynchrony, this distortion is perceived when the 2 stimuli (~4°) are presented as far as 12° apart, but it is not due to perception of apparent motion between the 2 stimuli. Additional pairs of shapes defined by taper and overall curvature also revealed similar nonlocal shape distortion effects. The test shapes always appeared to be more dissimilar to the priming shapes, a distortion termed a shape-contrast effect. Its properties are consistent with the response characteristics of the shape-tuned neurons in the inferotemporal cortex and may reveal the underlying dimensions of early shape encoding. From the instant a stimulus is presented, the visual system accumulates information about the stimulus and begins to generate a subjective impression of its shape and location. For very brief presentations terminated by a mask, stimuli look fuzzy, ill defined, or intertwined with the details of the mask. Several studies have shown that for durations greater than ~50 ms, the stimulus begins to have a relatively sharp, crisp appearance and is seen independently of the mask (e.g.

    BAND SELECTION METHOD APPLIED TO M3 (MOON MINERALOGY MAPPER)

    Poster abstract. Remote sensing optical sensors, such as those on board satellites and planetary probes, are able to detect and measure solar radiation at both improved spectral and spatial resolution. In particular, a hyperspectral dataset often consists of tens to hundreds of specified wavelength bands and contains a vast amount of spectral information for potential processing. One drawback of such a large spectral dataset is information redundancy resulting from high correlation between narrow spectral bands. Reducing the data dimensionality is critical in practical hyperspectral remote sensing applications. Price's method is a band selection approach that uses a small subset of bands to accurately reconstruct the full hyperspectral dataset. The method seeks to represent the dataset by a weighted sum of basis functions. An iterative process is used to successively approximate the full dataset. The process ends when the last basis function no longer provides a significant contribution to the reconstruction of the dataset, i.e. the basis function is dominated by noise. The research presented examines the feasibility of Price's method for extracting an optimal band subset from recently acquired lunar hyperspectral images recorded by the Moon Mineralogy Mapper (M3) instrument on board the Chandrayaan-1 spacecraft. The Apollo 17 landing site was used for testing of the band selection method. Preliminary results indicate that the band selection method is able to successfully reconstruct the original hyperspectral dataset with minimal error. In a recent test case, 15 bands were used to reconstruct the original 74 bands of reflectance data. This represents an accurate reconstruction using only 20% of the original dataset. The results from this study can help to configure spectral channels of future optical instruments for lunar exploration. The channels can be chosen based on the knowledge of which wavelength bands represent the greatest relevant information for characterizing the geology of a particular location.
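
    The stopping rule described above maps onto a simple greedy loop: pick the band whose residual carries the most remaining variance, reconstruct every band from the subset chosen so far by least squares, and stop once the best remaining candidate is dominated by noise. The Python sketch below illustrates that idea; it is a simplified stand-in rather than Price's exact formulation, and the variance-based selection rule, noise threshold, and toy data are all assumptions.

        import numpy as np

        def greedy_band_selection(X, noise_floor=1e-4, max_bands=15):
            """Select a small band subset that reconstructs all bands of X
            (shape: n_pixels x n_bands) as a weighted sum of basis functions.
            A simplified stand-in for Price's method, for illustration only."""
            selected = []
            recon = np.zeros_like(X)
            for _ in range(max_bands):
                residual = X - recon
                variances = residual.var(axis=0)
                band = int(np.argmax(variances))
                # Stop when the best remaining band is dominated by noise.
                if variances[band] < noise_floor:
                    break
                selected.append(band)
                # Basis functions: least-squares weights mapping the chosen
                # bands onto every band of the full dataset.
                S = X[:, selected]
                W, *_ = np.linalg.lstsq(S, X, rcond=None)
                recon = S @ W
            return selected, recon

        # Toy usage: 74 correlated bands reconstructed from at most 15.
        rng = np.random.default_rng(0)
        spectra = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 74))
        spectra += 0.01 * rng.normal(size=spectra.shape)
        bands, recon = greedy_band_selection(spectra)
        print(len(bands), np.abs(spectra - recon).mean())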

    An Unattended Mask makes an Attended Target Disappear

    In pattern masking, the target and mask are presented at the same location and follow one another very closely in time. When the observer attends to the target, he or she must also attend to the mask, as the switching time for attention is quite slow. In a series of experiments, we present mask–target–mask sequences staggered in time and location (Cavanagh, Holcombe, & Chou, 2008) that allow participants to attentively track the target location without attending to the masks. The results show that the strength of masking is, on average, unaffected by the removal of attention from the masks. Moreover, after isolating the target location perceptually with moving attention, it is clear that the target, when at threshold, has not been degraded or integrated with a persisting mask; it has simply vanished. We also show that the strength of masking is unaffected by the lateral spacing between adjacent target and mask sequences until the spacing is so large that the apparent motion driving the attentive tracking breaks down. Finally, we compare the effects of the pre- and postmask and find that the premask is responsible for the larger part of the masking.

    Where Are You Looking? Pseudogaze in Afterimages

    How do we know where we are looking? A frequent assumption is that the subjective experience of our direction of gaze is assigned to the location in the world that falls on our fovea. However, we find that observers can shift their subjective direction of gaze among different nonfoveal points in an afterimage. Observers were asked to look directly at different corners of a diamond-shaped afterimage. When the requested corner was 3.5° in the periphery, the observer often reported that the image moved away in the direction of the attempted gaze shift. However, when the corner was at 1.75° eccentricity, most reported successfully fixating at the point. Eye-tracking data revealed systematic drift during the subjective fixations on peripheral locations. For example, when observers reported looking directly at a point above the fovea, their eyes were often drifting steadily upwards. We then asked observers to make a saccade from a subjectively fixated, nonfoveal point to another point in the afterimage, 7° directly below their fovea. The observers consistently reported making appropriately diagonal saccades, but the eye movement traces only occasionally followed the perceived oblique direction. These results suggest that the perceived direction of gaze can be assigned flexibly to an attended point near the fovea. This may be how the visual world acquires its stability during fixation of an object, despite the drifts and microsaccades that are normal characteristics of visual fixation.

    Perceiving Illumination Inconsistencies in Scenes

    The human visual system is adept at detecting and encoding statistical regularities in its spatio-temporal environment. Here we report an unexpected failure of this ability in the context of perceiving inconsistencies in illumination distributions across a scene. Contrary to predictions from previous studies [Enns and Rensink, 1990; Sun and Perona, 1996a, 1996b, 1997], we find that the visual system displays a remarkable lack of sensitivity to illumination inconsistencies, both in experimental stimuli and in images of real scenes. Our results allow us to draw inferences regarding how the visual system encodes illumination distributions across scenes. Specifically, they suggest that the visual system does not verify the global consistency of locally derived estimates of illumination direction.

    A Bayesian Account of Depth from Shadow

    When an object casts a shadow on a background surface, the offset of the shadow can be a compelling cue to the relative depth between the object and the background (e.g., Kersten et al 1996, Fig. 1). Cavanagh et al (2021) found that, at least for small shadow offsets, perceived depth scales almost linearly with shadow offset. Here we ask whether this finding can be understood quantitatively in terms of Bayesian decision theory. Estimating relative depth from shadow offset is complicated by the fact that the shadow offset is co-determined by the slant of the light source relative to the background. Since this is often difficult or impossible to estimate directly, the observer must employ priors for both the relative depth and the light source slant. To establish an ecological prior for relative depth, we employed the SYNS dataset (Adams et al., 2016) and the methods of Ehinger et al (2017) to measure the distribution of relative depths at depth edges near the horizon (Fig. 2). Lacking comparable empirical statistics for illumination slant, we considered two possible distributions: a zero-parameter uniform distribution and a two-parameter beta distribution. To model the human data, we assumed that the visual system makes use of these priors and the observed shadow offset to minimize expected squared error in perceived relative depth. Fig. 3 shows that while the empirical depth prior brings the model into the range of the human data, a flat illumination prior predicts a more compressive scaling than observed. Fitting a beta distribution to minimize weighted squared deviation between human and optimal depth judgments corrects this deviation, and predicts a broadly peaked distribution over illumination slant, peaking at 37.4 deg away from the surface normal (Fig. 4). We will discuss possible ecological explanations for this illumination prior.
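
    Read as a computation, the model above amounts to taking the posterior mean of relative depth given the observed shadow offset, marginalizing over the unknown illumination slant. The Python sketch below shows one way such an estimator can be evaluated numerically; the offset geometry (offset = depth × tan(slant) plus Gaussian noise), the exponential depth prior standing in for the SYNS statistics, and the beta slant prior are illustrative assumptions, not the paper's fitted parameters.

        import numpy as np
        from scipy import stats

        # Grids over relative depth and illumination slant (radians from the normal).
        d_grid = np.linspace(0.01, 5.0, 400)
        t_grid = np.linspace(0.01, np.pi / 2 - 0.01, 200)

        p_d = stats.expon(scale=1.0).pdf(d_grid)              # hypothetical depth prior
        p_t = stats.beta(2.0, 2.5).pdf(t_grid / (np.pi / 2))  # hypothetical slant prior

        def depth_estimate(offset, sigma=0.05):
            """Posterior-mean depth, the estimate that minimizes expected
            squared error, for an observed shadow offset."""
            # Generative assumption: offset = depth * tan(slant) + Gaussian noise.
            pred = d_grid[:, None] * np.tan(t_grid[None, :])
            like = stats.norm(pred, sigma).pdf(offset)
            # Marginalize out the slant, weight by the depth prior, normalize.
            post = (like * p_t[None, :]).sum(axis=1) * p_d
            post /= post.sum()
            return float((d_grid * post).sum())

        # Perceived depth grows with shadow offset, roughly linearly when small.
        for o in (0.05, 0.1, 0.2, 0.4):
            print(o, round(depth_estimate(o), 3))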