
    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research, 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) the result gives further weight to the argument that objects may be stored in, and retrieved from, a pre-attentional store during this task.
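    To make the reported comparison concrete, the sketch below shows a one-factor, two-level within-subjects test of the kind summarized by the F(1,4) statistic above, using invented accuracy values for five observers (the study's own data and analysis are not reproduced here). With a single two-level factor, the repeated-measures F equals the squared paired t.

        # Sketch of a one-factor, two-level within-subjects comparison
        # (hypothetical accuracy data for five observers; not the study's data).
        import numpy as np
        from scipy import stats

        standard = np.array([0.78, 0.81, 0.74, 0.80, 0.77])   # standard CB task
        shifted  = np.array([0.75, 0.79, 0.72, 0.81, 0.74])   # rectangles shifted +/-1 deg

        t, p = stats.ttest_rel(standard, shifted)
        F = t ** 2                                # with 1 numerator df, F(1, n-1) = t^2
        print(f"F(1,{len(standard) - 1}) = {F:.3f}, p = {p:.3f}")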

    Number is not just an illusion: Discrete numerosity is encoded independently from perceived size

    While seminal theories suggest that nonsymbolic visual numerosity is mainly extracted from segmented items, more recent views advocate that numerosity cannot be processed independently of nonnumeric continuous features confounded with the numerical set (such as density or convex hull). To disentangle these accounts, here we employed two different visual illusions presented in isolation or in a merged condition (i.e., combining the effects of the two illusions). In particular, in a number comparison task, we concurrently manipulated both the perceived object segmentation, by connecting items with Kanizsa-like illusory lines, and the perceived convex hull/density of the set, by embedding the stimuli in a Ponzo illusion context, keeping other low-level features constant. In Experiment 1, the two illusions were manipulated in a compatible direction (i.e., both triggering numerical underestimation), whereas in Experiment 2 they were manipulated in an incompatible direction (i.e., with the Ponzo illusion triggering numerical overestimation and the Kanizsa illusion numerical underestimation). Results from psychometric functions showed that, in the merged condition, the biases of each illusion summated in Experiment 1 (i.e., the largest underestimation compared with the conditions in which the illusions were presented in isolation), while they averaged and competed against each other in Experiment 2. These findings suggest that discrete nonsymbolic numerosity can be extracted independently from continuous magnitudes. They also point to the need for more comprehensive theoretical views accounting for the operations by which both discrete elements and continuous variables are computed and integrated by the visual system.
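    The bias estimates referred to above come from psychometric fits; the sketch below illustrates, with invented data, how a point of subjective equality (PSE) can be estimated from a cumulative-Gaussian fit in a number comparison task. The stimulus values, response proportions, and fitting choices are illustrative assumptions, not the study's.

        # Sketch: estimating the PSE from a cumulative-Gaussian psychometric fit,
        # one fit per illusion condition. Data are invented for illustration.
        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        def psychometric(x, pse, sigma):
            # Probability of judging the test set as more numerous than the reference
            return norm.cdf(x, loc=pse, scale=sigma)

        test_numerosities = np.array([10, 12, 14, 16, 18, 20, 22])
        p_test_more = np.array([0.05, 0.12, 0.30, 0.55, 0.78, 0.92, 0.98])

        (pse, sigma), _ = curve_fit(psychometric, test_numerosities, p_test_more, p0=[16, 2])
        # A PSE shifted above the reference indicates underestimation of the illusory
        # set; comparing PSE shifts across isolated and merged conditions tests
        # whether the two biases summate.
        print(f"PSE = {pse:.2f}, sigma = {sigma:.2f}")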

    The visual representation of texture

    This research is concerned with texture: a source of visual information that has motivated a huge amount of psychophysical and computational research. This thesis questions how useful the accepted view of texture perception is. From a theoretical point of view, work to date has largely avoided two critical aspects of a computational theory of texture perception. Firstly, what is texture? Secondly, what is an appropriate representation for texture? This thesis argues that a task-dependent definition of texture is necessary, and proposes a multi-local, statistical scheme for representing texture orientation. Human performance on a series of psychophysical orientation discrimination tasks is compared to specific predictions from the scheme. The first set of experiments investigates observers' ability to directly derive statistical estimates from texture. An analogy is reported between the way texture statistics are derived and the visual processing of spatio-luminance features. The second set of experiments is concerned with the way texture elements are extracted from images (an example of the generic grouping problem in vision). The use of highly constrained experimental tasks, typically texture orientation discriminations, allows for the formulation of simple statistical criteria for setting critical parameters of the model (such as the spatial scale of analysis). It is shown that schemes based on isotropic filtering and symbolic matching do not suffice for performing this grouping, but that the proposed scheme, based on oriented mechanisms, does. Taken together, these results suggest a view of visual texture processing not as a disparate collection of processes, but as a general strategy for deriving statistical representations of images common to a range of visual tasks.
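    As a rough illustration of what a multi-local, statistical representation of texture orientation might involve (the thesis's specific scheme and parameters are not reproduced here), the sketch below pools per-location orientation estimates from oriented derivative-of-Gaussian filtering into a single circular-mean statistic.

        # Sketch: a multi-local orientation statistic for a texture image.
        # Local orientation is estimated at each pixel from oriented (derivative-of-
        # Gaussian) filter responses and pooled into a contrast-weighted circular mean.
        import numpy as np
        from scipy import ndimage

        def pooled_orientation(image, sigma=2.0):
            gx = ndimage.gaussian_filter(image, sigma, order=(0, 1))  # derivative along x
            gy = ndimage.gaussian_filter(image, sigma, order=(1, 0))  # derivative along y
            theta = np.arctan2(gy, gx)          # local orientation estimate per pixel
            weight = np.hypot(gx, gy)           # weight estimates by local contrast
            # Orientation is 180-degree periodic, so double the angles before
            # taking the weighted circular mean.
            z = np.sum(weight * np.exp(2j * theta)) / np.sum(weight)
            return 0.5 * np.angle(z)            # pooled orientation (radians)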

    On symmetry in visual perception

    This thesis is concerned with the role of symmetry in low-level image segmentation. Early detection of local image properties that could indicate the presence of an object would be useful in segmentation, and it is proposed here that approximate bilateral symmetry, which is common to many natural and man-made objects, is a candidate local property. To be useful in low-level image segmentation, the representation of symmetry must be relatively robust to noise interference, and the symmetry must be detectable without prior knowledge of the location and orientation of the pattern axis. The experiments reported here investigated whether bilateral symmetry can be detected, with and without knowledge of the axis of symmetry, in several different types of pattern. The pattern properties found to aid symmetry detection in random dot patterns were the presence of compound features, formed from locally dense clusters of dots, and contrast uniformity across the axis. In the second group of experiments, stimuli were designed to enhance the features found to be important for global symmetry detection: the pattern elements were enlarged, and grey level was varied between matched pairs, thereby making each pair distinctive. Symmetry detection was found to be robust to variation in the size of matched elements, but was disrupted by contrast variation within pairs. It was concluded that the global pattern structure is contained in the parallelism between extended, cross-axis regions of uniform contrast. In the third group of experiments, detection performance was found to improve when the parallel structure was strengthened by the presence of matched strings, rather than pairs, of elements. It is argued that elongation, parallelism, and approximate alignment between pattern constituents are visual properties that are both presegmentally detectable and sufficient for the representation of global symmetric structure. A simple computational property of these patterns is described.
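    A minimal sketch of one way a bilateral-symmetry signal can be quantified, assuming a known vertical axis: correlate the pattern with its mirror reflection. Scanning such a score over rotated copies of the image would drop the known-axis assumption. This is an illustration only, not the stimuli or analysis used in the experiments.

        # Sketch: a bilateral-symmetry score for a dot or grey-level pattern,
        # computed as the correlation between the image and its reflection about
        # a candidate vertical axis.
        import numpy as np

        def symmetry_score(image):
            mirrored = image[:, ::-1]                 # reflect about the vertical axis
            a = image - image.mean()
            b = mirrored - mirrored.mean()
            return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))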

    Dynamic surface completion: the joint formation of color, texture, and shape

    Dynamic surface completion is a phenomenon of visual filling-in in which a colored pattern perceptually spreads onto an area confined by virtual contours in a multi-aperture motion display. The spreading effect is qualitatively similar to static texture spreading but widely surpasses it in strength, making it particularly suited for quantitative studies of visual interpolation processes. I carried out six experiments to establish, with objective tasks, that homogeneous color spreading, as well as non-uniform texture spreading, is a genuine representation of surface qualities and thus goes beyond mere contour interpolation. The experiments also serve to relate the phenomena to ongoing discussions about the mechanisms potentially responsible for spatiotemporal integration: with a phenomenological method, I examined to what extent simple sensory persistence might be causally involved in the effect under consideration. The findings are partially consistent with the idea of sensory persistence, and indicate that information fragments are integrated over a time window of about 100 to 150 ms to form a complete surface representation.

    Science of Facial Attractiveness

    Varieties of Attractiveness and their Brain Responses

    A Neural Model of Visually Guided Steering, Obstacle Avoidance, and Route Selection

    A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a 3D virtual reality environment to determine the position of objects based on motion discontinuities, and computes heading direction, or the direction of self-motion, from global optic flow. The cortical representation of heading interacts with the representations of a goal and obstacles such that the goal acts as an attractor of heading, while obstacles act as repellers. In addition, the model maintains fixation on the goal object by generating smooth pursuit eye movements. Eye rotations can distort the optic flow field, complicating heading perception, and the model uses extraretinal signals to correct for this distortion and accurately represent heading. The model explains how motion processing mechanisms in cortical areas MT, MST, and posterior parietal cortex can be used to guide steering. The model quantitatively simulates human psychophysical data about visually guided steering, obstacle avoidance, and route selection. Air Force Office of Scientific Research (F4960-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624).
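    The goal-as-attractor, obstacles-as-repellers idea can be written schematically in behavioral-dynamics form, as in the sketch below: heading is pulled toward the goal direction and pushed away from each obstacle direction, with repulsion decaying with angular offset and distance. The gains and decay constants are illustrative assumptions, and the model's actual neural (MT/MST/parietal) implementation is not reproduced here.

        # Sketch of attractor/repeller steering dynamics: the goal attracts heading,
        # obstacles repel it with strength that falls off with angular offset and
        # distance. Parameters are illustrative, not the model's.
        import numpy as np

        def heading_change(heading, goal_dir, obstacles,
                           k_goal=1.0, k_obs=2.0, c1=2.0, c2=0.5):
            """obstacles: iterable of (direction, distance) pairs; angles in radians."""
            d_heading = -k_goal * (heading - goal_dir)           # goal attracts heading
            for obs_dir, obs_dist in obstacles:
                offset = heading - obs_dir
                d_heading += (k_obs * offset
                              * np.exp(-c1 * abs(offset))        # decays with angular offset
                              * np.exp(-c2 * obs_dist))          # decays with distance
            return d_heading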

    Analysis of Complex Motion Patterns by Form/Cue Invariant MSTd Neurons

    Several groups have proposed that area MSTd of the macaque monkey has a role in processing optical flow information used in the analysis of self-motion, based on its neurons' selectivity for large-field motion patterns such as expansion, contraction, and rotation. It has also been suggested that this cortical region may be important in analyzing the complex motions of objects. More generally, MSTd could be involved in the generic function of complex motion pattern representation, with its cells responsible for integrating local motion signals sent forward from area MT into a more unified representation. If MSTd is extracting generic motion pattern signals, it would be important that the preferred tuning of MSTd neurons not depend on the particular features and cues that allow these motions to be represented. To test this idea, we examined the diversity of stimulus features and cues over which MSTd cells can extract information about motion patterns such as expansion, contraction, rotation, and spirals. The different classes of stimuli included coherently moving random dot patterns, solid squares, outlines of squares, a square aperture moving in front of an underlying stationary pattern of random dots, a square composed entirely of flicker, and a square of non-Fourier motion. When a unit was tuned with respect to motion patterns across these stimulus classes, the motion pattern producing the most vigorous response in a neuron was nearly the same for each class. Although preferred tuning was invariant, the magnitude and width of the tuning curves often varied between classes. Thus, MSTd is form/cue invariant for complex motions, making it an appropriate candidate for the analysis of object motion as well as motion introduced by observer translation.
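    One common way to formalize selectivity for expansion, contraction, and rotation is to project a local flow field onto radial and circular templates, as in the sketch below. This is only an illustration of motion-pattern tuning on the expansion-rotation (spiral) continuum; it is not the computation performed by the recorded MSTd neurons.

        # Sketch: project an optic-flow field onto expansion and rotation templates
        # centred on the receptive field. The pair of projections locates the flow
        # on the expansion/contraction-rotation (spiral) continuum.
        import numpy as np

        def pattern_components(x, y, vx, vy):
            """x, y: coordinates relative to the field centre; vx, vy: flow vectors."""
            r = np.hypot(x, y) + 1e-9
            radial = np.stack([x / r, y / r])        # expansion/contraction template
            circular = np.stack([-y / r, x / r])     # rotation template
            flow = np.stack([vx, vy])
            expansion = float(np.sum(flow * radial))
            rotation = float(np.sum(flow * circular))
            return expansion, rotation               # (+,0)=expansion, (0,+)=CCW rotation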