
    The Top-Down Influences of Characteristic Sounds on Visual Search Performance in Realistic Scenes

    The purpose of this experiment was to investigate whether meaningful sounds can facilitate visual search performance in realistic scenes, and whether the stimulus onset asynchrony (SOA) between sound and picture is a significant factor in enhancing performance. A 3 × 4 × 2 within-subjects design was used, with the factors sound congruency (congruent, incongruent, and white noise), SOA (-1000, -500, 0, and 300 ms), and target presence (present and absent). Participants were 55 college-aged students (34 female, 21 male) at San Jose State University. On each trial, participants were presented with a word cue indicating the target object; then, depending on the condition, they either (1) heard a sound and saw a picture simultaneously (SOA 0), (2) heard a sound followed by a scene (negative SOA), or (3) viewed a scene followed by a sound (positive SOA). The results indicated a congruency effect only at the negative SOAs, when the sound preceded the picture by 1000 or 500 ms. However, we did not observe a significant advantage of the -1000 ms SOA over the -500 ms SOA. Moreover, performance was significantly degraded at the positive SOA of 300 ms. Overall, these results suggest that congruent characteristic sounds can enhance visual search performance in realistic scenes, provided that they are presented at least 500 ms before the picture.
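    As a rough illustration of the timing scheme described above (not the authors' experiment code; the function and variable names below are hypothetical), the sketch computes sound and picture onsets from a signed SOA, where negative values mean the sound leads the picture:

```python
# Minimal sketch of the SOA timing logic (hypothetical names, not the authors' code).
# Negative SOA: sound onset precedes the picture; positive SOA: picture precedes
# the sound; 0: simultaneous onset.

SOAS_MS = [-1000, -500, 0, 300]

def onset_times(soa_ms, trial_start_ms=0):
    """Return (sound_onset, picture_onset) in ms relative to trial start."""
    if soa_ms < 0:
        # Sound first; the picture follows after |soa_ms| milliseconds.
        return trial_start_ms, trial_start_ms - soa_ms
    # Picture first (or simultaneous); the sound follows after soa_ms milliseconds.
    return trial_start_ms + soa_ms, trial_start_ms

for soa in SOAS_MS:
    sound_t, picture_t = onset_times(soa)
    print(f"SOA {soa:+5d} ms -> sound at {sound_t} ms, picture at {picture_t} ms")
```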

    Development and Evaluation of a Sound-Swapped Video Database for Misophonia.

    Misophonia has been characterized as intense negative reactions to specific trigger sounds (often orofacial sounds such as chewing, sniffling, or slurping). However, recent research suggests that high-level, contextual, and multisensory factors are also involved. We recently demonstrated that neurotypical individuals' negative reactions to aversive sounds (e.g., nails scratching a chalkboard) are attenuated when the sounds are synced with positive attributable video sources (PAVS; e.g., tearing a piece of paper). To assess whether this effect generalizes to misophonic triggers, we developed a Sound-Swapped Video (SSV) database for use in misophonia research. In Study 1, we created a set of 39 video clips depicting common trigger sounds (original video sources, OVS) and a corresponding set of 39 PAVS temporally synchronized with the OVS videos. In Study 2, participants (N = 34) rated the 39 PAVS videos for their audiovisual match and pleasantness. We selected the 20 PAVS videos with the best match scores for use in Study 3. In Study 3, a new group of participants (N = 102) viewed the 20 selected PAVS and the 20 corresponding OVS videos and judged the pleasantness or unpleasantness of the sound accompanying each video in the two contexts. Afterward, participants completed the Misophonia Questionnaire (MQ). The results of Study 3 show a robust attenuating effect of PAVS videos on the reported unpleasantness of trigger sounds: trigger sounds were rated as significantly less unpleasant when paired with PAVS than with OVS. Moreover, this attenuating effect was present in nearly every participant (99 out of 102), regardless of their score on the MQ. In fact, we found a moderate positive correlation between the PAVS-OVS difference and misophonia severity scores. Overall, our results validate the SSV database as a useful stimulus set for studying how misophonic responses can be modulated by visual context. Here, we release the SSV database with the best 18 PAVS and 18 OVS videos used in Study 3, along with aggregate ratings of audio-video match and pleasantness (https://osf.io/3ysfh/). We also provide detailed instructions on how to produce these videos, in the hope that this database will grow and improve through collaborations with the community of misophonia researchers.
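    The released production instructions are not reproduced here, but a minimal sketch of one plausible way to assemble a sound-swapped clip is shown below: the trigger-sound audio track from an original video source (OVS) is muxed onto a positive attributable video source (PAVS) with ffmpeg. The file names are placeholders, and this is an assumption about the general approach rather than the authors' actual pipeline:

```python
# Hypothetical sketch (placeholder file names, not files from the released database):
# keep the video stream of a PAVS clip and replace its audio with the trigger-sound
# track of an OVS clip. Requires ffmpeg on the PATH.
import subprocess

def swap_audio(pavs_video: str, ovs_video: str, output: str) -> None:
    """Keep the video stream of pavs_video and the audio stream of ovs_video."""
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", pavs_video,   # input 0: source of the video stream
            "-i", ovs_video,    # input 1: source of the trigger-sound audio
            "-map", "0:v:0",    # video from input 0
            "-map", "1:a:0",    # audio from input 1
            "-c:v", "copy",     # copy video without re-encoding
            "-c:a", "aac",      # re-encode audio for broad compatibility
            "-shortest",        # end output at the shorter of the two streams
            output,
        ],
        check=True,
    )

swap_audio("pavs_tearing_paper.mp4", "ovs_chewing.mp4", "ssv_example.mp4")
```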