
    Invisibility and interpretation

    Invisibility is often thought to occur because of low-level limitations of the visual system. For example, it is often assumed that backward masking renders a target invisible because the visual system is simply too slow to resolve the target and the mask separately. Here, we propose an alternative explanation in which invisibility is a goal rather than a limitation and occurs naturally when making sense of the plethora of incoming information. For example, we present evidence that the (in)visibility of an element can strongly depend on how it groups with other elements: changing grouping changes visibility. In addition, we show that features often only appear to be invisible but are in fact visible in a way the experimenter is not aware of.

    Feature fusion reveals slow and fast visual memories

    Although the visual system can achieve a coarse classification of its inputs in a relatively short time, the synthesis of qualia-rich and detailed percepts can take substantially more time. If these prolonged computations were to take place in a retinotopic space, moving objects would generate extensive smear. However, under normal viewing conditions, moving objects appear relatively sharp and clear, suggesting that a substantial part of visual short-term memory takes place at a nonretinotopic locus. By using a retinotopic feature fusion and a nonretinotopic feature attribution paradigm, we provide evidence for a relatively fast retinotopic buffer and a substantially slower nonretinotopic memory. We present a simple model that can account for the dynamics of these complementary memory processes. Taken together, our results indicate that the visual system can accomplish temporal integration of information while avoiding smear by splitting sensory memory into fast and slow components that are implemented in retinotopic and nonretinotopic loci, respectively.
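The fast/slow two-component account can be illustrated with a minimal sketch (a toy illustration with hypothetical time constants, not the authors' actual model): two memory traces decaying exponentially at different rates, where only the slower nonretinotopic trace survives long enough to support integration across a moving object's trajectory.

```python
import math

def trace_strength(t_ms, tau_ms):
    """Strength of a decaying memory trace at time t_ms, time constant tau_ms."""
    return math.exp(-t_ms / tau_ms)

# Hypothetical time constants (illustrative, not measured values from the study):
TAU_RETINOTOPIC = 50.0      # fast buffer: supports feature fusion at one location
TAU_NONRETINOTOPIC = 400.0  # slow store: supports attribution across locations

for t in (0, 50, 100, 200, 400):
    print(f"t={t:3d} ms  "
          f"retinotopic={trace_strength(t, TAU_RETINOTOPIC):.2f}  "
          f"nonretinotopic={trace_strength(t, TAU_NONRETINOTOPIC):.2f}")
```

By a few hundred milliseconds the fast trace has effectively vanished while the slow trace remains strong, which is the qualitative pattern the fusion and attribution paradigms are designed to separate.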


    Perceived speed differences explain apparent compression in slit viewing

    When a figure moves behind a stationary narrow slit, observers often report seeing the figure as an integrated whole, a phenomenon known as slit viewing or anorthoscopic perception. Interestingly, in slit viewing, the figure is perceived compressed along the axis of motion, e.g., a circle is perceived as an ellipse. Underestimation of the speed of the moving object was offered as an explanation for this apparent compression. We measured perceived speed and compression in anorthoscopic perception and found results that are inconsistent with this hypothesis. We found evidence for an alternative hypothesis according to which apparent compression results from perceived speed differences between different parts of the figure, viz., the trailing parts are perceived to move faster than the leading parts. These differences in the perceived speeds of the trailing and the leading edges may be due to differences in the visibilities of the leading and trailing parts. We discuss our findings within a non-retinotopic framework of form analysis for moving objects.
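The speed-difference account can be paraphrased arithmetically (a hedged sketch with made-up numbers, not the authors' model or data): if the trailing edge is perceived to move faster than the leading edge, the perceived extent along the motion axis shrinks by the speed difference multiplied by the viewing time.

```python
def perceived_extent(true_extent, v_lead, v_trail, duration):
    """Perceived extent along the motion axis when the trailing edge is
    perceived to close on the leading edge at rate (v_trail - v_lead)."""
    compression = (v_trail - v_lead) * duration
    return max(true_extent - compression, 0.0)

# Hypothetical values: a 4-deg figure, leading edge perceived at 10 deg/s,
# trailing edge at 12 deg/s, viewed for 0.5 s:
print(perceived_extent(4.0, 10.0, 12.0, 0.5))  # -> 3.0 (circle appears elliptical)
```

When the two edges are perceived at equal speeds the sketch predicts no compression, matching the intuition that only a speed *difference*, not a uniform underestimation, distorts the shape.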

    Shape distortions and Gestalt grouping in anorthoscopic perception

    When a figure moves behind a stationary narrow slit, observers often report seeing the figure as a whole, a phenomenon called slit viewing or anorthoscopic perception. Interestingly, in slit viewing, the figure is perceived compressed along the axis of motion. As with other perceptual distortions, it is unclear whether the perceptual space in the vicinity of the slit or the representation of the figure itself undergoes compression. In a psychophysical experiment, we tested these two hypotheses. We found that the percept of a stationary bar, presented within the slit, was not distorted even when, at the same time, a circle underwent compression by moving through the slit. This result suggests that the compression of form results from figural rather than from space compression. In support of this hypothesis, we found that when the bar was perceptually grouped with the circle, the bar appeared compressed. Our results show that, in slit viewing, the distortion occurs at a non-retinotopic level where grouped objects are jointly represented.

    Perceptual grouping induces non-retinotopic feature attribution in human vision

    The human visual system computes features of moving objects with high precision despite the fact that these features can change or blend into each other in the retinotopic image. Very little is known about how the human brain accomplishes this complex feat. Using a Ternus-Pikler display, introduced by Gestalt psychologists about a century ago, we show that human observers can perceive features of moving objects at locations where these features are not physically present. More importantly, our results indicate that these non-retinotopic feature attributions are not errors caused by the limitations of the perceptual system but follow rules of perceptual grouping. From a computational perspective, our data imply sophisticated real-time transformations of retinotopic relations in the visual cortex. Our results suggest that the human motion and form systems interact with each other to remap the retinotopic projection of the physical space in order to maintain the identity of moving objects in the perceptual space.

    The flight path of the phoenix--the visible trace of invisible elements in human vision

    How features are attributed to objects is one of the most puzzling issues in the neurosciences. A deeply entrenched view is that features are perceived at the locations where they are presented. Here, we show that features in motion displays can be systematically attributed from one location to another even though the elements possessing the features are invisible. Furthermore, features can be integrated across locations. Feature mislocalizations are usually treated as errors and limits of the visual system. On the contrary, we show that the nonretinotopic feature attributions reported herein follow rules of grouping precisely, suggesting that they reflect a fundamental computational strategy and not errors of visual processing.

    Motion, not masking, provides the medium for feature attribution

    Understanding the dynamics of how separate features combine to form holistic object representations is a central problem in visual cognition. Feature attribution (also known as feature transposition and feature inheritance) refers to the later of two stimuli expressing the features belonging to the earlier one. Both visual masking and apparent motion are implicated in feature attribution. We found that when apparent motion occurs without masking, it correlates positively with feature attribution. Moreover, when apparent motion occurs with masking, feature attribution remains positively correlated with apparent motion after the contribution of masking is factored out, but does not correlate with masking after the contribution of apparent motion is similarly factored out. Hence, motion processes on their own provide the effective medium for feature attribution. Our results clarify the dynamics of feature binding in the formation of integral and unitary object representations in human vision.
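The "factoring out" step described here corresponds to a first-order partial correlation. A minimal sketch of that computation (the standard textbook formula applied to hypothetical correlation values, not the study's data):

```python
import math

def partial_corr(r_xy, r_xz, r_yz):
    """First-order partial correlation of x and y, controlling for z."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# Hypothetical values: attribution-motion r = 0.8, and each variable
# correlates r = 0.5 with masking. The attribution-motion link remains
# substantial after masking is partialled out:
print(round(partial_corr(0.8, 0.5, 0.5), 3))  # -> 0.733
```

The abstract's logic is the asymmetry of this computation: attribution-motion stays positive controlling for masking, while attribution-masking vanishes controlling for motion.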
