4 research outputs found

    Integration and segregation processes in motion perception

    The perception of motion requires the visual system to integrate inputs from various stages of motion processing. This thesis aims to contribute to our understanding of motion integration through two studies. The first study investigates how different motion perception systems integrate visual features. Our findings reveal that the position-based motion system, which relies on attentive tracking of object positions, binds moving features based on static cues such as proximity and similarity. In contrast, the velocity-based motion system, which relies on direction-selective cells, largely disregards these static cues. These results support previous findings that the two systems can extract different motion information from the same stimulus, and the discovery of distinct integration rules between them is a novel contribution to the literature on motion integration. The second study addresses a limitation of the SSVEP methodology in motion perception research. It shows that large-scale cortical dynamics, by interacting with moving stimuli, can generate SSVEPs that mimic those produced by motion-sensitive neural populations. We propose that randomizing the phase of position modulations across trials overcomes this issue by eliminating the SSVEPs generated by large-scale cortical dynamics. These technical advances have implications for past and future motion integration studies that use SSVEPs.
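
    To make the proposed fix concrete, here is a minimal sketch of per-trial phase randomization for a position-modulated stimulus. It illustrates the general idea rather than the thesis's actual implementation; the function name and all parameter values are assumptions.

        import numpy as np

        def position_modulation(n_trials, duration_s, refresh_hz, mod_freq_hz, amp_px, seed=None):
            """Per-trial position traces with a random modulation phase.

            Drawing a new phase on every trial decouples the stimulus phase
            from trial onset, so responses locked to any one fixed phase
            average out across trials.
            """
            rng = np.random.default_rng(seed)
            t = np.arange(int(duration_s * refresh_hz)) / refresh_hz
            phases = rng.uniform(0.0, 2.0 * np.pi, size=n_trials)  # one phase per trial
            return amp_px * np.sin(2.0 * np.pi * mod_freq_hz * t + phases[:, None])

        # Illustrative values: 40 trials of 2 s at a 60 Hz refresh rate,
        # with stimulus position modulated at 5 Hz over +/- 20 pixels.
        traces = position_modulation(40, 2.0, 60, 5.0, 20.0, seed=1)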

    Value-driven effects on perceptual averaging

    Perceptual averaging refers to a strategy of encoding the statistical properties of entire sets of objects rather than the properties of individual objects, potentially circumventing the visual system’s strict capacity limitations. Prior work has shown that such average representations of set properties, such as mean size, can be modulated by top-down and bottom-up attention. However, it is unclear to what extent attentional biases arising from selection history, in the form of value-driven attentional capture, influence this type of summary statistical representation. To investigate, we conducted two experiments in which participants estimated the mean size of a set of heterogeneously sized circles while a previously rewarded color singleton was part of the set. In Experiment 1, all circles were gray except either the smallest or the largest circle, which was presented in a color previously associated with reward. When the largest circle in the set was associated with the highest value (a proxy for selection history), we observed the largest biases: perceived mean size scaled linearly with the increasing value of the attended color singleton. In Experiment 2, we introduced a dual-task component in the form of an attentional search task to ensure that the observed bias of reward on perceptual averaging was not fully explained by focusing attention solely on the reward-signaling color singleton. Collectively, these findings support the proposal that selection history, like bottom-up and top-down attention, influences perceptual averaging, and that it does so flexibly, in proportion to the extent to which attention is captured.
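
    One simple way to frame such a value-driven bias is a weighted-average model in which the reward-associated singleton is over-weighted in proportion to the attention it captures. The sketch below is an illustration only; the model, function names, and numbers are assumptions, not the paper's analysis.

        import numpy as np

        def perceived_mean_size(sizes, singleton_idx, singleton_weight):
            """Weighted-average model of perceptual averaging.

            Every item contributes equally except the value-associated
            singleton, whose weight grows with captured attention.
            With singleton_weight = 1 this is the true arithmetic mean.
            """
            weights = np.ones(len(sizes))
            weights[singleton_idx] = singleton_weight  # > 1 over-weights the singleton
            return np.average(sizes, weights=weights)

        sizes = [1.0, 1.4, 1.8, 2.2, 2.6, 3.0]  # circle sizes (arbitrary units)
        for w in (1.0, 1.5, 2.0, 3.0):          # weight as a stand-in for reward value
            # With the largest circle (index -1) as the singleton, the
            # modeled percept grows monotonically with its weight.
            print(w, round(perceived_mean_size(sizes, -1, w), 3))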

    The life of Tomris Uyar

    Ankara: İhsan Doğramacı Bilkent University, Faculty of Economics, Administrative and Social Sciences, Department of History, 2016. This work is a student project of the Department of History, Faculty of Economics, Administrative and Social Sciences, İhsan Doğramacı Bilkent University. By Ünsal, Mehmet Süha.

    The Sabancı University Dynamic Face Database (SUDFace): Development and validation of an audiovisual stimulus set of recited and free speeches with neutral facial expressions

    Faces convey a wide range of information, including a person's identity and emotional and mental states. Face perception is a major research topic in many fields, including cognitive science, social psychology, and neuroscience. Stimuli are frequently selected from available face databases; however, even though faces are highly dynamic, most databases consist of static face stimuli. Here, we introduce the Sabancı University Dynamic Face (SUDFace) database. The SUDFace database consists of 150 high-resolution audiovisual videos acquired in a controlled lab environment and stored at a resolution of 1920 × 1080 pixels and a frame rate of 60 Hz. The multimodal database contains three videos of each human model in frontal view, in three conditions: vocalizing two scripted texts (conditions 1 and 2) and one free speech (condition 3). The main focus of the SUDFace database is to provide a large set of dynamic faces with neutral facial expressions and natural speech articulation. Variables such as face orientation, illumination, and accessories (piercings, earrings, facial hair, etc.) were kept constant across all stimuli. We provide detailed stimulus information, including facial features (pixel-wise calculations of face length, eye width, etc.) and speech measures (e.g., duration of speech and repetitions). In two validation experiments, a total of 227 participants rated each video on several psychological dimensions (e.g., neutralness and naturalness of expressions, valence, and the perceived mental states of the models) using Likert scales. The database is freely accessible for research purposes.
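
    For readers who want to sanity-check downloaded stimuli against the stated specifications (1920 × 1080 pixels at 60 Hz), here is a minimal sketch using OpenCV. The file name is hypothetical; SUDFace's actual naming scheme may differ.

        import cv2  # pip install opencv-python

        def check_video_specs(path, expected_size=(1920, 1080), expected_fps=60.0):
            """Return whether a video matches the stated resolution and frame rate."""
            cap = cv2.VideoCapture(path)
            if not cap.isOpened():
                raise IOError(f"cannot open {path}")
            size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
                    int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
            fps = cap.get(cv2.CAP_PROP_FPS)
            cap.release()
            return size == expected_size and abs(fps - expected_fps) < 0.5, size, fps

        # Hypothetical file name, for illustration only.
        ok, size, fps = check_video_specs("sudface_model_001_condition_1.mp4")
        print(ok, size, fps)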