
    Dynamic weighting of feature dimensions in visual search: behavioral and psychophysiological evidence

    Dimension-based accounts of visual search and selection have significantly contributed to the understanding of the cognitive mechanisms of attention. Extensions of the original approach assuming the existence of dimension-based feature contrast saliency signals that govern the allocation of focal attention have recently been employed to explain the spatial and temporal dynamics of the relative strengths of saliency representations. Here we review behavioral and neurophysiological findings providing evidence for the dynamic trial-by-trial weighting of feature dimensions in a variety of visual search tasks. The examination of the effects of feature- and dimension-based inter-trial transitions in feature detection tasks shows that search performance is affected by changes of the target-defining dimension, but not by changes of the target-defining feature. The use of the redundant-signals paradigm shows that feature contrast saliency signals are integrated at a pre-selective processing stage. The comparison of feature detection and compound search tasks suggests that the relative significance of dimension-dependent and dimension-independent saliency representations is task-contingent. Empirical findings that explain reduced dimension-based effects in compound search tasks are discussed. Psychophysiological evidence is presented that confirms the assumption that the locus of the effects of feature dimension changes is perceptual and pre-selective, rather than post-selective and response-based. Behavioral and psychophysiological results are considered within the framework of the dimension-weighting account of selective visual attention.
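
    To make the inter-trial transition logic concrete, the sketch below (plain Python/NumPy; the toy trial sequence, variable names, and effect sizes are hypothetical and not taken from the reviewed studies) classifies consecutive feature-detection trials by whether the target-defining dimension and feature repeat or change, and compares mean reaction times per transition type. On the dimension-weighting account, dimension changes should incur an RT cost, whereas feature changes within the same dimension should not.

        import numpy as np

        # Hypothetical trial sequence: each trial has a target-defining dimension,
        # a target feature, and a reaction time (ms). Toy data for illustration only.
        rng = np.random.default_rng(0)
        dims = rng.choice(["color", "orientation"], size=200)
        feats = np.where(dims == "color",
                         rng.choice(["red", "blue"], size=200),
                         rng.choice(["left", "right"], size=200))
        rts = rng.normal(450, 40, size=200)

        def transition_type(prev_dim, prev_feat, cur_dim, cur_feat):
            """Classify the cross-trial transition of target-defining attributes."""
            if cur_dim != prev_dim:
                return "dimension change"
            return "feature repeat" if cur_feat == prev_feat else "feature change (same dimension)"

        buckets = {}
        for i in range(1, len(rts)):
            label = transition_type(dims[i - 1], feats[i - 1], dims[i], feats[i])
            buckets.setdefault(label, []).append(rts[i])

        # The dimension-weighting account predicts slower RTs after a dimension
        # change than after feature changes/repeats within the same dimension.
        for label, values in buckets.items():
            print(f"{label:35s} mean RT = {np.mean(values):6.1f} ms (n = {len(values)})")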

    Global scene layout modulates contextual learning in change detection

    Change in the visual scene often goes unnoticed – a phenomenon referred to as “change blindness.” This study examined whether the hierarchical structure of a scene, i.e., its global–local layout, can influence performance in a one-shot change detection paradigm. To this end, natural scenes of a laid breakfast table were presented, and observers were asked to locate the onset of a new local object. Importantly, the global structure of the scene was manipulated by varying the relations among objects in the scene layouts. The very same items were either presented as global-congruent (typical) layouts or as global-incongruent (random) arrangements. Change blindness was less severe for congruent than for incongruent displays, and this congruency benefit increased with the duration of the experiment. These findings show that global layouts are learned, supporting detection of local changes with enhanced efficiency. However, performance was not affected by scene congruency in a subsequent control experiment that required observers to localize a static discontinuity (i.e., an object that was missing from the repeated layouts). Our results thus show that learning of the global layout is particularly linked to the local objects. Taken together, our results reveal an effect of “global precedence” in natural scenes. We suggest that relational properties within the hierarchy of a natural scene are governed, in particular, by global image analysis, reducing change blindness for local objects through scene learning.
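
    As a purely illustrative sketch of how such a congruency benefit and its growth over the course of the experiment could be summarized (toy data and an assumed effect pattern, not the study's actual results), the following Python/NumPy snippet tabulates detection rates by layout congruency and experiment half.

        import numpy as np

        # Hypothetical one-shot change-detection data: for each trial we record the
        # scene layout (congruent vs. incongruent), the experiment half, and whether
        # the local onset was detected. Toy data, for illustration only.
        rng = np.random.default_rng(1)
        n = 400
        layout = rng.choice(["congruent", "incongruent"], size=n)
        half = np.repeat(["first half", "second half"], n // 2)
        # Assumed pattern: detection is better for congruent layouts, and the
        # advantage grows in the second half as the global layout is learned.
        p_hit = (0.60
                 + 0.05 * (layout == "congruent")
                 + 0.10 * ((layout == "congruent") & (half == "second half")))
        detected = rng.random(n) < p_hit

        for h in ["first half", "second half"]:
            for lay in ["congruent", "incongruent"]:
                mask = (half == h) & (layout == lay)
                print(f"{h}, {lay:11s}: detection rate = {detected[mask].mean():.2f}")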

    Multisensory perception and action: development, decision-making, and neural mechanisms

    Surrounded by multiple objects and events, receiving multisensory stimulation, our brain must sort through relevant and irrelevant multimodal signals to correctly decode and represent information originating from the same or from different objects and events in the physical world. Over the last two decades, scientific interest in how we integrate multisensory information and how we interact with a multisensory world has increased dramatically, as evidenced by the exponential growth of relevant studies using behavioral and/or neuroscientific approaches. The Special Issue topic of “Multisensory perception and action: psychophysics, neural mechanisms, and applications” emerged from a scientific meeting dedicated to these issues: the Munich Multisensory Perception Symposium held in Holzhausen am Ammersee, Germany (June 24–26, 2011). This volume, which collects research articles contributed by attendees of the symposium as well as the wider community, is organized into three interrelated sections: (I) Development, learning, and decision making in multisensory perception; (II) Multisensory timing and sensorimotor temporal integration; and (III) Electrophysiological and neuro-imaging analyses of multisensory perception.

    What pops out in positional priming of pop-out: insights from event-related EEG lateralizations

    It is well established that, in visual pop-out search, reaction time (RT) performance is influenced by cross-trial repetitions versus changes of target-defining attributes. One instance of this is referred to as “positional priming of pop-out” (pPoP; Maljkovic and Nakayama, 1996). In pPoP paradigms, the processing of the current target is examined depending on whether it occurs at the previous target or a previous distractor location, relative to a previously empty location (“neutral” baseline), permitting target facilitation and distractor inhibition to be dissociated. The present study combined RT measures with specific sensory- and motor-driven event-related lateralizations to track the time course of four distinct processing levels as a function of the target’s position across consecutive trials. The results showed that, relative to targets at previous target and “neutral” locations, the appearance of a target at a previous distractor location was associated with a delayed build-up of the posterior contralateral negativity wave, indicating that distractor positions are suppressed at early stages of visual processing. By contrast, presentation of a target at a previous target location, relative to “neutral” and distractor locations, modulated the elicitation of the subsequent stimulus-locked lateralized readiness potential wave, indicating that post-selective response selection is facilitated if the target occurred at the same position as on the previous trial. Overall, the results of the present study provide electrophysiological evidence for the idea that target location priming (RT benefits) does not originate from enhanced coding of target saliency at repeated (target) locations; instead, these benefits arise (near-)exclusively from processing levels subsequent to focal-attentional target selection.
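
    The sketch below illustrates, in simplified form, how a posterior contralateral negativity (PCN) difference wave of the kind analyzed above can be derived from a lateralized posterior electrode pair. The electrode labels, sampling parameters, and random data are hypothetical placeholders; a real analysis would involve many channels, filtering, baselining, and artifact rejection.

        import numpy as np

        # Hypothetical epoched EEG: trials x time samples for a left/right posterior
        # electrode pair (e.g., PO7/PO8), plus the target's hemifield on each trial.
        rng = np.random.default_rng(2)
        n_trials, n_samples = 120, 300          # e.g., 300 samples = 600 ms at 500 Hz
        po7 = rng.normal(0, 1, (n_trials, n_samples))   # left posterior electrode (toy data)
        po8 = rng.normal(0, 1, (n_trials, n_samples))   # right posterior electrode (toy data)
        target_side = rng.choice(["left", "right"], size=n_trials)

        # Contralateral minus ipsilateral: for left-hemifield targets the
        # contralateral site is the right electrode (PO8), and vice versa.
        contra = np.where(target_side[:, None] == "left", po8, po7)
        ipsi = np.where(target_side[:, None] == "left", po7, po8)
        pcn_wave = (contra - ipsi).mean(axis=0)

        # The PCN onset/peak latency can then be compared across inter-trial
        # conditions (target at previous target, distractor, or neutral location).
        peak_sample = np.argmin(pcn_wave)        # the PCN is a negativity
        print(f"PCN peak at sample {peak_sample}, amplitude {pcn_wave[peak_sample]:.2f} µV (toy data)")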

    Temporal perception of visual-haptic events in multimodal telepresence system

    Book synopsis: Haptic interfaces are divided into two main categories: force feedback and tactile. Force feedback interfaces are used to explore and modify remote/virtual objects in three physical dimensions in applications including computer-aided design, computer-assisted surgery, and computer-aided assembly. Tactile interfaces deal with surface properties such as roughness, smoothness, and temperature. Haptic research is intrinsically multi-disciplinary, incorporating computer science/engineering, control, robotics, psychophysics, and human motor control. By extending the scope of research in haptics, advances can be achieved in existing applications such as computer-aided design (CAD), tele-surgery, rehabilitation, scientific visualization, robot-assisted surgery, authentication, and graphical user interfaces (GUI), to name a few. Advances in Haptics presents a number of recent contributions to the field of haptics. Authors from around the world present the results of their research on a variety of issues in this field.

    Amodal completion in visual working memory

    Amodal completion refers to the perceptual “filling-in” of partly occluded object fragments. Previous work has shown that object completion occurs efficiently, at early perceptual stages of processing. However, despite efficient early completion, at a later stage, the maintenance of complete-object representations in visual working memory (VWM) may be severely restricted due to limited mnemonic resources being available. To test for such a limitation, we used a change detection paradigm to investigate whether the structure of to-be-remembered objects influences what is encoded and maintained in VWM. Participants were presented with a memory display that contained either “composite” objects, that is, notched shapes abutting an occluding square, or equivalent unoccluded, “simple” objects. The results showed overall increased memory performance for simple relative to composite objects. Moreover, evidence for completion in VWM was found for composite objects that were interpreted as globally completed wholes, relative to local completions or an uncompleted mosaic (baseline) condition. This global completion advantage was obtained only when the “context” of simple objects also supported a global object interpretation. Finally, with an increase in memory set size, the global object advantage decreased substantially. These findings indicate that processes of amodal completion influence VWM performance until some overall capacity limitation prevents completion. VWM completion processes do not operate automatically; rather, the representation format is determined top-down, based on the simple-object context provided. Overall, these findings support the notion of VWM as a capacity-limited resource, with storage capacity depending on the structured representation of to-be-remembered objects.
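
    Change detection performance of the kind reported above is often summarized as an estimate of the number of items held in VWM; one common choice is Cowan's K. Whether this particular measure was used in the study is an assumption here, and the numbers in the example below are invented purely for illustration.

        def cowan_k(hit_rate: float, correct_rejection_rate: float, set_size: int) -> float:
            """Cowan's K estimate of the number of items held in visual working memory.

            K = (hit rate + correct-rejection rate - 1) * set size
            This is a common summary measure for single-probe change detection;
            whether it matches the measure used in the study above is an assumption.
            """
            return (hit_rate + correct_rejection_rate - 1.0) * set_size

        # Hypothetical performance for simple vs. composite objects at set size 4:
        print(cowan_k(0.85, 0.90, 4))   # simple objects    -> 3.0 items
        print(cowan_k(0.75, 0.85, 4))   # composite objects -> 2.4 items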

    Failure to pop out: feature singletons do not capture attention under low signal-to-noise ratio conditions

    Pop-out search implies that the target is always the first item selected, no matter how many distractors are presented. However, increasing evidence indicates that search is not entirely independent of display density even for pop-out targets: search is slower with sparse displays (few distractors) than with dense displays (many distractors). Despite its significance, the cause of this anomaly remains unclear. We investigated several mechanisms that could slow down search for pop-out targets. Consistent with the assumption that pop-out targets frequently fail to pop out in sparse displays, we observed greater variability of search durations for sparse relative to dense displays. Computational modeling of the response time distributions also supported the view that pop-out targets fail to pop out in sparse displays. Our findings strongly question the classical assumption that early processing of pop-out targets is independent of the distractors. Rather, the density of distractors critically influences whether or not a stimulus pops out. These results call for new, more reliable measures of pop-out search and potentially a reinterpretation of studies that used relatively sparse displays.
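
    The abstract does not specify which distributional model was fitted; as one hedged illustration, the sketch below fits an ex-Gaussian (a common RT model, here via scipy's exponnorm) to toy sparse- and dense-display RTs and compares their variability and exponential tail, the kind of signature one would expect if pop-out occasionally fails in sparse displays.

        import numpy as np
        from scipy import stats

        # Hypothetical pop-out search RTs (ms) for sparse vs. dense displays.
        # Toy data: sparse displays get a heavier right tail (occasional failures
        # of the target to pop out), dense displays are tight and fast.
        rng = np.random.default_rng(3)
        rt_dense = rng.normal(420, 30, 500) + rng.exponential(20, 500)
        rt_sparse = rng.normal(430, 30, 500) + rng.exponential(90, 500)

        for label, rts in [("dense", rt_dense), ("sparse", rt_sparse)]:
            # Ex-Gaussian fit (scipy's exponnorm): the shape K relates to the tail.
            k, loc, scale = stats.exponnorm.fit(rts)
            tau = k * scale  # mean of the exponential component, in ms
            print(f"{label:6s}: sd = {rts.std():5.1f} ms, ex-Gaussian tau ≈ {tau:5.1f} ms")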

    Medial temporal lobe-dependent repetition suppression and enhancement due to implicit vs. explicit processing of individual repeated search displays

    Using visual search, functional magnetic resonance imaging (fMRI) and patient studies have demonstrated that medial temporal lobe (MTL) structures differentiate repeated from novel displays—even when observers are unaware of display repetitions. This suggests a role for MTL in both explicit and, importantly, implicit learning of repeated sensory information (Greene et al., 2007). However, recent behavioral studies that examined visual search and recognition performance concurrently suggest that observers have explicit knowledge of at least some of the repeated displays (Geyer et al., 2010). The aim of the present fMRI study was thus to contribute new evidence regarding the contribution of MTL structures to explicit vs. implicit learning in visual search. It was found that, relative to baseline displays, MTL activation was increased for explicitly and decreased for implicitly learned repeated displays. These activation differences were most pronounced in left anterior parahippocampal cortex (aPHC), especially when observers were highly trained on the repeated displays. The data are taken to suggest that explicit and implicit memory processes are linked within MTL structures, but expressed via functionally separable mechanisms (repetition enhancement vs. repetition suppression). They further show that repetition effects in visual search need to be investigated at the level of individual displays.
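
    The repetition-enhancement vs. repetition-suppression contrast can be expressed as simple differences of condition-wise signal estimates against the novel-display baseline. The sketch below uses invented ROI beta values and an assumed effect pattern purely to make that contrast explicit; it is not the study's actual analysis pipeline.

        import numpy as np

        # Hypothetical per-display fMRI signal estimates (e.g., beta weights) from a
        # left anterior parahippocampal ROI; condition labels and values are toy data.
        rng = np.random.default_rng(4)
        baseline = rng.normal(0.00, 0.10, 40)   # novel (baseline) displays
        explicit = rng.normal(0.15, 0.10, 40)   # repeated displays observers recognized
        implicit = rng.normal(-0.12, 0.10, 40)  # repeated displays without recognition

        # Repetition enhancement: explicit > baseline; repetition suppression:
        # implicit < baseline.
        print(f"enhancement (explicit - baseline): {explicit.mean() - baseline.mean():+.2f}")
        print(f"suppression (implicit - baseline): {implicit.mean() - baseline.mean():+.2f}")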

    Perception of delay in haptic telepresence systems

    Time delay is recognized as an important issue in haptic telepresence systems, as it is inherent to long-distance data transmission. The factors that influence the perception of delay in a time-delayed haptic environment are, however, largely unknown. In this article, we examine the impact of the frequency and amplitude of a sinusoidal manual exploratory movement, as well as the stiffness of the haptic environment, on the detection threshold for delay in haptic feedback. The results suggest that the detection of delay in force feedback depends on the movement frequency and amplitude, while variation of the absolute feedback force level does not influence the detection threshold. A model based on the exploration movement is proposed, and guidelines for system design with respect to time delay in haptic feedback are provided.
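
    One simplified way to see why frequency and amplitude (but not the absolute force level) could matter is to consider the spatial mismatch that a feedback delay introduces during sinusoidal exploration: for x(t) = A·sin(2πft) delayed by τ, the peak mismatch is 2A·sin(πfτ) ≈ 2πAfτ, which grows with both A and f, whereas scaling the environment stiffness scales the rendered force without changing this relative mismatch. This is only a back-of-the-envelope sketch, not necessarily the model proposed in the article.

        import numpy as np

        def peak_position_mismatch(amplitude_m: float, frequency_hz: float, delay_s: float) -> float:
            """Peak mismatch between current and delayed hand position for a
            sinusoidal exploration x(t) = A*sin(2*pi*f*t) with feedback delay tau:
                max_t |x(t) - x(t - tau)| = 2*A*sin(pi*f*tau)  ≈  2*pi*A*f*tau  (small tau)
            """
            return 2.0 * amplitude_m * abs(np.sin(np.pi * frequency_hz * delay_s))

        # Example (invented numbers): 5 cm amplitude, 1 Hz vs. 2 Hz movement, 30 ms delay.
        for f in (1.0, 2.0):
            mismatch = peak_position_mismatch(0.05, f, 0.030)
            print(f"f = {f:.0f} Hz: peak position mismatch ≈ {mismatch * 1000:.1f} mm")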

    What are task-sets: a single, integrated representation or a collection of multiple control representations?

    Performing two randomly alternating tasks typically results in higher reaction times (RTs) following a task switch, relative to a task repetition. These task switch costs (TSCs) reflect processes of switching between control settings for different tasks. The present study investigated whether task sets operate as a single, integrated representation or as an agglomeration of relatively independent components. In a cued task-switching paradigm, target detection (present/absent) and discrimination (blue/green/right-/left-tilted) tasks alternated randomly across trials. The target was either a color or an orientation singleton among homogeneous distractors. Across two consecutive trials, the task and the target-defining dimension repeated or changed randomly. For task switch trials, agglomerated task sets predict a difference between dimension changes and repetitions: joint task and dimension switches require full task-set reconfiguration, while dimension repetitions permit re-using some control settings from the previous trial. By contrast, integrated task sets always require full switches, predicting dimension repetition effects (DREs) to be absent across task switches. RT analyses showed significant DREs across task switches as well as repetitions, supporting the notion of agglomerated task sets. Additionally, two event-related potential (ERP) components were analyzed: the Posterior Contralateral Negativity (PCN), indexing spatial-selection dynamics, and the Sustained Posterior Contralateral Negativity (SPCN), indexing post-selective perceptual/semantic analysis. Significant DREs across task switches were observed for both the PCN and SPCN components. Together, DREs across task switches for RTs and two functionally distinct ERP components suggest that re-using control settings across different tasks is possible. The results thus support the “agglomerated-task-set” hypothesis and are inconsistent with the notion of fully integrated task sets.
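
    To illustrate the critical comparison, the sketch below cross-tabulates mean RTs by task transition (repeat vs. switch) and dimension transition (repeat vs. switch) on toy data in which a dimension repetition benefit is built in for both task repetitions and task switches, mimicking the pattern that would favor agglomerated over fully integrated task sets. All numbers are invented for illustration.

        import numpy as np

        # Hypothetical cued task-switching data: per trial, whether the task and the
        # target-defining dimension repeated or switched relative to the previous
        # trial, plus the reaction time (ms). Toy data only.
        rng = np.random.default_rng(5)
        n = 800
        task = rng.choice(["repeat", "switch"], size=n)
        dimension = rng.choice(["repeat", "switch"], size=n)
        rt = (500
              + 60 * (task == "switch")            # task switch cost
              + 25 * (dimension == "switch")       # dimension repetition effect (DRE)
              + rng.normal(0, 40, n))

        # Agglomerated task sets predict a DRE on task-switch trials as well;
        # fully integrated task sets predict no DRE when the task switches.
        for t in ("repeat", "switch"):
            dre = (rt[(task == t) & (dimension == "switch")].mean()
                   - rt[(task == t) & (dimension == "repeat")].mean())
            print(f"task {t:6s}: dimension repetition effect ≈ {dre:5.1f} ms")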