49 research outputs found

    Auditory and visual capture during focused visual attention

    It is well known that auditory and visual onsets presented at a particular location can capture a person’s visual attention. However, the question of whether such attentional capture disappears when attention is focused endogenously beforehand has not yet been answered. Moreover, previous studies have not differentiated between capture by onsets presented at a nontarget (invalid) location and possible performance benefits occurring when the target location is (validly) cued. In this study, the authors modulated the degree of attentional focus by presenting endogenous cues with varying reliability and by displaying placeholders indicating the precise areas where the target stimuli could occur. By using not only valid and invalid exogenous cues but also neutral cues that provide temporal but no spatial information, they found performance benefits as well as costs when attention is not strongly focused. The benefits disappear when the attentional focus is increased. These results indicate that there is bottom-up capture of visual attention by irrelevant auditory and visual stimuli that cannot be suppressed by top-down attentional control.

    Priming T2 in a visual and auditory attentional blink task

    Participants performed an attentional blink (AB) task including digits as targets and letters as distractors within the visual and auditory domains. Prior to the rapid serial visual presentation, a visual or auditory prime was presented in the form of a digit that was identical to the second target (T2) on 50% of the trials. In addition to the "classic" AB effect, an overall drop in performance on T2 was observed for the trials on which the stream was preceded by an identical prime from the same modality. No cross-modal priming was evident, suggesting that the observed inhibitory priming effects are modality specific. We argue that the present findings represent a special type of negative priming operating at a low feature level. Copyright 2008 Psychonomic Society, Inc.

    Pip and pop: Nonspatial auditory signals improve spatial visual search

    Searching for an object within a cluttered, continuously changing environment can be a very time-consuming process. The authors show that a simple auditory pip drastically decreases search times for a synchronized visual object that is normally very difficult to find. This effect occurs even though the pip contains no information on the location or identity of the visual object. The experiments also show that the effect is not due to general alerting (because it does not occur with visual cues), nor is it due to top-down cuing of the visual change (because it still occurs when the pip is synchronized with distractors on the majority of trials). Instead, we propose that the temporal information of the auditory signal is integrated with the visual signal, generating a relatively salient emergent feature that automatically draws attention. Phenomenally, the synchronous pip makes the visual object pop out from its complex environment, providing a direct demonstration of spatially nonspecific sounds affecting competition in spatial visual processing. Keywords: attention, visual search, multisensory integration, audition, vision.

    Reducing reversal errors in localizing the source of sound in virtual environment without head tracking

    This paper presents a study about the effect of using additional audio cueing and Head-Related Transfer Functions (HRTF) on human performance in a sound source localization task without using head movement. Existing techniques of sound spatialization generate reversal errors. We intend to reduce these errors by introducing sensory cues based on sound effects. We conducted an experimental study to evaluate the impact of additional cues in a sound source localization task. The results showed the benefit of combining the additional cues and HRTF in terms of localization accuracy and the reduction of reversal errors. This technique allows a significant reduction of reversal errors compared to the use of the HRTF alone. For instance, this technique could be used to improve audio spatial alerting, spatial tracking, and target detection in simulation applications when head movement is not included.
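Front-back reversal errors arise because the interaural cues a static listener receives are inherently ambiguous: a source in front of the listener and its mirror image behind produce nearly identical interaural time differences. A minimal sketch of that ambiguity, using Kuhn's low-frequency spherical-head approximation (the head radius and speed of sound are standard textbook values, not parameters from this paper):

```python
import math

def itd_lowfreq(azimuth_deg, head_radius=0.0875, c=343.0):
    """Interaural time difference (seconds) under Kuhn's low-frequency
    spherical-head approximation: ITD = (3r/c) * sin(azimuth).
    Azimuth is measured from straight ahead; head_radius (8.75 cm) and
    c (343 m/s) are assumed textbook values."""
    return (3.0 * head_radius / c) * math.sin(math.radians(azimuth_deg))

# Because sin(theta) == sin(180 - theta), a front source at 30 degrees
# and its rear mirror at 150 degrees yield the same ITD -- one reason
# reversal errors occur without head movement or extra cues.
front = itd_lowfreq(30)
back = itd_lowfreq(150)
```

In this model `front` and `back` are indistinguishable, which is exactly the ambiguity that head movements or the additional sound-effect cues studied here help resolve; the maximal ITD (source at 90 degrees) comes out near the familiar 700-microsecond range.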

    Shear Localization in Dynamic Deformation: Microstructural Evolution


    Factors Associated with Revision Surgery after Internal Fixation of Hip Fractures

    Background: Femoral neck fractures are associated with high rates of revision surgery after management with internal fixation. Using data from the Fixation using Alternative Implants for the Treatment of Hip fractures (FAITH) trial evaluating methods of internal fixation in patients with femoral neck fractures, we investigated associations between baseline and surgical factors and the need for revision surgery to promote healing, relieve pain, treat infection or improve function over 24 months postsurgery. Additionally, we investigated factors associated with (1) hardware removal and (2) implant exchange from cancellous screws (CS) or sliding hip screw (SHS) to total hip arthroplasty, hemiarthroplasty, or another internal fixation device. Methods: We identified 15 potential factors a priori that may be associated with revision surgery, 7 with hardware removal, and 14 with implant exchange. We used multivariable Cox proportional hazards analyses in our investigation. Results: Factors associated with increased risk of revision surgery included: female sex, [hazard ratio (HR) 1.79, 95% confidence interval (CI) 1.25-2.50; P = 0.001], higher body mass index (fo
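The effect sizes reported here follow the standard Cox-model convention: a hazard ratio is the exponentiated regression coefficient, and its 95% confidence interval comes from exponentiating the coefficient plus or minus 1.96 standard errors. A minimal sketch of that back-transformation; the coefficient and standard error below are reverse-engineered from the reported HR of 1.79 (CI 1.25-2.50) purely for illustration, not the trial's actual analysis:

```python
import math

def hazard_ratio_ci(beta, se, z=1.96):
    """Exponentiate a Cox regression coefficient and its Wald
    confidence limits to obtain the hazard ratio and a 95% CI."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Coefficient implied by the reported HR for female sex (HR = 1.79):
beta = math.log(1.79)
# Standard error implied by the reported CI: log-CI width / (2 * 1.96)
se = (math.log(2.50) - math.log(1.25)) / (2 * 1.96)

hr, lo, hi = hazard_ratio_ci(beta, se)
```

Because the published numbers are rounded, the reconstructed limits land near 1.27 and 2.53 rather than exactly 1.25 and 2.50; the Wald CI is symmetric about the HR on the log scale, which the rounded published interval is not quite.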

    Small RNA-based antiviral defense in insects

    Open Access thesis, Radboud Universiteit Nijmegen, 21 November 2014. Promotor: Galama, J.M.D. Co-promotor: Rij, R.P. va

    The cocktail-party problem revisited: early processing and selection of multi-talker speech

    How do we recognize what one person is saying when others are speaking at the same time? This review summarizes widespread research in psychoacoustics, auditory scene analysis, and attention, all dealing with early processing and selection of speech, which has been stimulated by this question. Important effects occurring at the peripheral and brainstem levels are mutual masking of sounds and “unmasking” resulting from binaural listening. Psychoacoustic models have been developed that can predict these effects accurately, albeit using computational approaches rather than approximations of neural processing. Grouping—the segregation and streaming of sounds—represents a subsequent processing stage that interacts closely with attention. Sounds can be easily grouped—and subsequently selected—using primitive features such as spatial location and fundamental frequency. More complex processing is required when lexical, syntactic, or semantic information is used. Whereas it is now clear that such processing can take place preattentively, there also is evidence that the processing depth depends on the task-relevancy of the sound. This is consistent with the presence of a feedback loop in attentional control, triggering enhancement of to-be-selected input. Despite recent progress, there are still many unresolved issues: there is a need for integrative models that are neurophysiologically plausible, for research into grouping based on other than spatial or voice-related cues, for studies explicitly addressing endogenous and exogenous attention, for an explanation of the remarkable sluggishness of attention focused on dynamically changing sounds, and for research elucidating the distinction between binaural speech perception and sound localization.

    Hoe wij horende ziend zijn (How we see by hearing)


    Attentional Requirements on Feature Search Are Modulated by Stimulus Properties

    We report a series of dual-task experiments, in which a rapid serial visual presentation (RSVP) task was combined with a visual search task. Orientation, motion, and color were used as the defining target features in the search task. The lag between target onsets was manipulated, and interference between the two tasks was quantified by measuring detection scores for the search task as a function of lag. While simultaneous performance of an orientation detection task with an RSVP letter identification task resulted in a performance decrease for lags up to 320 ms, no such decrease was detected for highly salient motion- and color-defined targets. Subsequently, detectability of the motion and color features was matched to that of the orientation feature, resulting in the reintroduction of a (smaller) performance decrease, but only during simultaneous performance (lag 0 ms). The results suggest that there are two causes for the impaired search performance occurring when a feature search task is combined with an RSVP task. The first is short-lasting interference, probably due to attentional competition; the second, which plays a role only when targets for both tasks share features, is interference that may be attributed to a central processing bottleneck. © 2013 Ettwig, Bronkhorst.