
    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
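    The spoke-shift manipulation is purely geometric. As a rough illustration (not the authors' code), the sketch below assumes rectangle centres are expressed as (x, y) eccentricities in degrees of visual angle relative to central fixation; the 4-degree eccentricity and the function name are illustrative choices only.

```python
import math
import random

def shift_along_spoke(x, y, shift_deg=1.0):
    """Shift a point radially along the imaginary spoke joining it to
    central fixation at (0, 0), by +/- shift_deg degrees of visual angle.
    Coordinates are assumed to be eccentricities in degrees."""
    r = math.hypot(x, y)
    theta = math.atan2(y, x)
    r_new = r + random.choice([-shift_deg, shift_deg])  # move in or out along the spoke
    return r_new * math.cos(theta), r_new * math.sin(theta)

# Hypothetical layout: eight rectangle centres evenly spaced on a 4-degree circle.
positions = [(4 * math.cos(2 * math.pi * i / 8),
              4 * math.sin(2 * math.pi * i / 8)) for i in range(8)]
shifted_positions = [shift_along_spoke(x, y) for x, y in positions]
```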

    GraFIX: a semiautomatic approach for parsing low- and high-quality eye-tracking data

    Fixation durations (FD) have been used widely as a measurement of information processing and attention. However, issues like data quality can seriously influence the accuracy of fixation detection methods and, thus, affect the validity of our results (Holmqvist, Nyström, & Mulvey, 2012). This is crucial when studying special populations such as infants, where common issues with testing (e.g., high degree of movement, unreliable eye detection, low spatial precision) result in highly variable data quality and render existing FD detection approaches highly time-consuming (hand-coding) or imprecise (automatic detection). To address this problem, we present GraFIX, a novel semiautomatic method consisting of a two-step process in which eye-tracking data are initially parsed using velocity-based algorithms whose input parameters are adapted by the user, and are then manipulated using the graphical interface, allowing accurate and rapid adjustments of the algorithms' outcome. The present algorithms (1) smooth the raw data, (2) interpolate missing data points, and (3) apply a number of criteria to automatically evaluate and remove artifactual fixations. The input parameters (e.g., velocity threshold, interpolation latency) can be easily manually adapted to fit each participant. Furthermore, the present application includes visualization tools that facilitate the manual coding of fixations. We assessed this method by performing an intercoder reliability analysis in two groups of infants presenting low- and high-quality data and compared it with previous methods. Results revealed that our two-step approach with adaptable FD detection criteria gives rise to more reliable and stable measures in low- and high-quality data.
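    The abstract does not spell out GraFIX's algorithms in detail; the sketch below only illustrates the general idea of a velocity-threshold fixation parser with user-adjustable input parameters. The threshold and minimum-duration values are placeholders, not the tool's defaults, and the smoothing and interpolation steps are omitted.

```python
import numpy as np

def parse_fixations(x, y, t, velocity_threshold=35.0, min_duration=0.1):
    """Toy velocity-based fixation parser (illustrative, not GraFIX itself).

    x, y : gaze position samples in degrees of visual angle
    t    : sample timestamps in seconds
    velocity_threshold : deg/s below which a sample counts as fixational
    min_duration       : discard candidate fixations shorter than this (s)
    Returns a list of (onset, offset) fixation intervals in seconds.
    """
    x, y, t = map(np.asarray, (x, y, t))
    velocity = np.hypot(np.diff(x), np.diff(y)) / np.diff(t)
    is_fix = velocity < velocity_threshold

    fixations, start = [], None
    for i, fix in enumerate(is_fix):
        if fix and start is None:
            start = t[i]                      # fixation candidate begins
        elif not fix and start is not None:
            if t[i] - start >= min_duration:  # keep only long-enough fixations
                fixations.append((start, t[i]))
            start = None
    if start is not None and t[-1] - start >= min_duration:
        fixations.append((start, t[-1]))
    return fixations
```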

    Facial expression aftereffect revealed by adaptation to emotion-invisible dynamic bubbled faces

    Visual adaptation is a powerful tool to probe the short-term plasticity of the visual system. Adapting to local features such as oriented lines can distort our judgment of subsequently presented lines, the tilt aftereffect. The tilt aftereffect is believed to be processed at a low level of the visual cortex, such as V1. Adaptation to faces, on the other hand, can produce significant aftereffects in high-level traits such as identity, expression, and ethnicity. However, whether face adaptation necessitates awareness of face features is debatable. In the current study, we investigated whether facial expression aftereffects (FEAE) can be generated by partially visible faces. We first generated partially visible faces using the bubbles technique, in which the face was seen through randomly positioned circular apertures, and selected the bubbled faces for which the subjects were unable to identify happy or sad expressions. When the subjects adapted to static displays of these partial faces, no significant FEAE was found. However, when the subjects adapted to a dynamic video display of a series of different partial faces, a significant FEAE was observed. In both conditions, subjects could not identify facial expression in the individual adapting faces. These results suggest that our visual system is able to integrate unrecognizable partial faces over a short period of time and that the integrated percept affects our judgment of subsequently presented faces. We conclude that FEAE can be generated by partial faces with few facial expression cues, implying either that our cognitive system fills in the missing parts during adaptation, or that subcortical structures are activated by the bubbled faces without conscious recognition of emotion during adaptation.
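    The bubble masks used in the study are not specified beyond "randomly positioned circular apertures"; the following sketch shows, under illustrative assumptions (aperture count, radius, grey fill value), how such a partially visible stimulus could be generated.

```python
import numpy as np

def bubble_mask(image, n_bubbles=10, radius=12, rng=None):
    """Reveal an image only through randomly placed circular apertures,
    in the spirit of the bubbles technique; pixels outside the apertures
    are replaced by a uniform grey background (value 128, an assumption)."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros((h, w), dtype=bool)
    for _ in range(n_bubbles):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        mask |= (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    out = np.full_like(image, 128)
    out[mask] = image[mask]
    return out

# A "dynamic" adaptor could then be a sequence of frames, each with fresh bubbles:
# frames = [bubble_mask(face_image, rng=np.random.default_rng(s)) for s in range(60)]
```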

    Contextual cropping and scaling of TV productions

    This is the author's accepted manuscript. The final publication is available at Springer via http://dx.doi.org/10.1007/s11042-011-0804-3. Copyright @ Springer Science+Business Media, LLC 2011. In this paper, an application is presented which automatically adapts SDTV (Standard Definition Television) sports productions to smaller displays through intelligent cropping and scaling. It crops regions of interest of sports productions based on a smart combination of production metadata and systematic video analysis methods. This approach allows a context-based composition of cropped images. It provides a differentiation between the original SD version of the production and the processed one adapted to the requirements for mobile TV. The system has been comprehensively evaluated by comparing the outcome of the proposed method with manually and statically cropped versions, as well as with non-cropped versions. Integration of the tool into post-production and live workflows is envisaged.
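    The production-metadata and video-analysis pipeline is not detailed in the abstract; as a minimal sketch of the final cropping-and-scaling step alone, assuming an OpenCV-style frame array and an externally supplied region-of-interest centre (all sizes here are illustrative, not the system's values):

```python
import cv2

def crop_and_scale(frame, roi_center, crop_size, out_size=(320, 180)):
    """Crop a region of interest from an SD frame and scale it for a small
    display. roi_center is (x, y) in pixels, crop_size is (width, height)."""
    h, w = frame.shape[:2]
    cw, ch = crop_size
    # Clamp the crop window so it stays entirely inside the frame.
    x0 = min(max(roi_center[0] - cw // 2, 0), w - cw)
    y0 = min(max(roi_center[1] - ch // 2, 0), h - ch)
    roi = frame[y0:y0 + ch, x0:x0 + cw]
    return cv2.resize(roi, out_size, interpolation=cv2.INTER_AREA)
```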

    A Laminar Cortical Model for 3D Perception of Slanted and Curved Surfaces and of 2D Images: Development, Attention, and Bistability

    A model of laminar visual cortical dynamics proposes how 3D boundary and surface representations of slanted and curved 3D objects and 2D images arise. The 3D boundary representations emerge from non-classical horizontal receptive field interactions combined with intracortical and intercortical feedback circuits. Such non-classical interactions contextually disambiguate classical receptive field responses to ambiguous visual cues using cells that are sensitive to angles and disparity gradients within cortical areas V1 and V2. These cells are all variants of bipole grouping cells. Model simulations show how horizontal connections can develop selectively to angles, how slanted surfaces can activate 3D boundary representations that are sensitive to angles and disparity gradients, how 3D filling-in occurs across slanted surfaces, how a 2D Necker cube image can be represented in 3D, and how bistable Necker cube percepts occur. The model also explains data about slant aftereffects and 3D neon color spreading. It shows how habituative transmitters that help to control development also help to trigger bistable 3D percepts and slant aftereffects, and how attention can influence which of these percepts is perceived by propagating along some object boundaries. Air Force Office of Scientific Research (F49620-01-1-0397, F49620-98-1-0108); Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409, N00014-01-1-0624, N00014-95-1-0657); National Science Foundation (IIS-97-20333).
