    Telephone conversation impairs sustained visual attention via a central bottleneck

    Recent research has shown that holding telephone conversations disrupts one's driving ability. We asked whether this effect could be attributed to a visual attention impairment. In Experiment 1, participants conversed on a telephone or listened to a narrative while engaged in multiple object tracking (MOT), a task requiring sustained visual attention. We found that MOT was disrupted in the telephone conversation condition, relative to single-task MOT performance, but that listening to a narrative had no effect. In Experiment 2, we asked which component of conversation might be interfering with MOT performance. We replicated the conversation and single-task conditions of Experiment 1 and added two conditions in which participants heard a sequence of words over a telephone. In the shadowing condition, participants simply repeated each word in the sequence. In the generation condition, participants were asked to generate a new word based on each word in the sequence. Word generation interfered with MOT performance, but shadowing did not. The data indicate that telephone conversation disrupts attention at a central stage, the act of generating verbal stimuli, rather than at a peripheral stage, such as listening or speaking.
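
    The dual-task comparison at the heart of both experiments can be made concrete with a minimal sketch. All accuracy values below are hypothetical placeholders, not the study's data; only the condition names follow the abstract.

        # Minimal sketch: dual-task cost on MOT accuracy, i.e., the drop in
        # tracking accuracy in each concurrent-task condition relative to
        # single-task tracking. All numbers are hypothetical.
        single_task = 0.85  # hypothetical proportion of targets tracked correctly

        conditions = {
            "conversation": 0.72,  # hypothetical dual-task accuracies
            "listening": 0.84,
            "shadowing": 0.83,
            "generation": 0.71,
        }

        for name, accuracy in conditions.items():
            cost = single_task - accuracy  # positive cost = tracking impairment
            print(f"{name}: dual-task cost = {cost:.2f}")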

    Shifting attention in viewer- and object-based reference frames after unilateral brain injury

    The aims of the present study were to investigate the respective roles that object- and viewer-based reference frames play in reorienting visual attention, and to assess their influence after unilateral brain injury. To do so, we studied 16 right hemisphere injured (RHI) and 13 left hemisphere injured (LHI) patients. We used a cueing design that manipulates the location of cues and targets relative to a display composed of two rectangles (i.e., objects). Unlike previous studies with patients, we presented all cues at midline rather than in the left or right visual fields. Thus, in the critical conditions in which targets were presented laterally, reorienting of attention was always from a midline cue. Performance was measured for lateralized target detection as a function of viewer-based (contra- and ipsilesional sides) and object-based (requiring reorienting within or between objects) reference frames. As expected, contralesional detection was slower than ipsilesional detection for the patients. More importantly, objects influenced target detection differently in the contralesional and ipsilesional fields. Contralesionally, reorienting to a target within the cued object took longer than reorienting to a target in the same location but in the uncued object. This finding is consistent with object-based neglect. Ipsilesionally, the means were in the opposite direction. Furthermore, no significant difference was found in object-based influences between the patient groups (RHI vs. LHI). These findings are discussed in the context of reference frames used in reorienting attention for target detection.
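
    The object-based effect described above reduces to a difference of mean reaction times, computed separately for each visual field. A minimal sketch with hypothetical RTs (not the patients' data), chosen only to mirror the direction of the reported results:

        # Minimal sketch: object-based effect = mean RT for reorienting within
        # the cued object minus mean RT for reorienting to the uncued object,
        # per field. All RTs (ms) are hypothetical.
        from statistics import mean

        rts = {
            ("contralesional", "within"): [612, 598, 630],
            ("contralesional", "between"): [575, 560, 590],
            ("ipsilesional", "within"): [430, 442, 425],
            ("ipsilesional", "between"): [455, 448, 460],
        }

        for field in ("contralesional", "ipsilesional"):
            effect = mean(rts[(field, "within")]) - mean(rts[(field, "between")])
            print(f"{field}: object-based effect = {effect:.0f} ms")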

    Visual speech differentially modulates beta, theta, and high gamma bands in auditory cortex

    Speech perception is a central component of social communication. While principally an auditory process, accurate speech perception in everyday settings is supported by meaningful information extracted from visual cues (e.g., speech content, timing, and speaker identity). Previous research has shown that visual speech modulates activity in cortical areas subserving auditory speech perception, including the superior temporal gyrus (STG), potentially through feedback connections from the multisensory posterior superior temporal sulcus (pSTS). However, it is unknown whether visual modulation of auditory processing in the STG is a unitary phenomenon or, rather, consists of multiple temporally, spatially, or functionally distinct processes. To explore these questions, we examined neural responses to audiovisual speech measured from intracranially implanted electrodes within the temporal cortex of 21 patients undergoing clinical monitoring for epilepsy. We found that visual speech modulates auditory processes in the STG in multiple ways, eliciting temporally and spatially distinct patterns of activity that differ across theta, beta, and high-gamma frequency bands. Before speech onset, visual information increased high-gamma power in the posterior STG and suppressed beta power in mid-STG regions, suggesting crossmodal prediction of speech signals in these areas. After sound onset, visual speech decreased theta power in the middle and posterior STG, potentially reflecting a decrease in sustained feedforward auditory activity. These results are consistent with models that posit multiple distinct mechanisms supporting audiovisual speech perception and provide a crucial map for subsequent studies to identify the types of visual features that are encoded by these separate mechanisms.

    This study was supported by NIH Grant R00 DC013828; A. Beltz was supported by the Jacobs Foundation. A preprint ("Multiple auditory responses to visual speech") is available at http://deepblue.lib.umich.edu/bitstream/2027.42/167729/1/OriginalManuscript.pdf
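
    Band-limited power of the kind analyzed here is commonly computed by band-pass filtering and taking the Hilbert envelope. The sketch below assumes that generic pipeline (it is not the authors' code), with an assumed 1 kHz sampling rate and simulated data standing in for an electrode recording.

        # Minimal sketch: theta/beta/high-gamma power via band-pass filter +
        # Hilbert envelope. Sampling rate and signal are assumptions.
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        fs = 1000  # sampling rate in Hz (assumed)
        bands = {"theta": (4, 8), "beta": (13, 30), "high_gamma": (70, 150)}

        rng = np.random.default_rng(0)
        signal = rng.standard_normal(10 * fs)  # stand-in for one electrode's trace

        def band_power(x, low, high, fs):
            """Instantaneous power in a frequency band via the Hilbert envelope."""
            b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
            filtered = filtfilt(b, a, x)            # zero-phase band-pass filter
            return np.abs(hilbert(filtered)) ** 2   # squared analytic amplitude

        for name, (low, high) in bands.items():
            power = band_power(signal, low, high, fs)
            print(f"{name}: mean power = {power.mean():.3f}")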

    On the Functional Significance of the P1 and N1 Effects to Illusory Figures in the Notch Mode of Presentation

    The processing of Kanizsa figures has classically been studied by flashing the full “pacmen” inducers at stimulus onset. A recent study, however, has shown that it is advantageous to present illusory figures in the “notch” mode of presentation, that is, by leaving the round inducers on screen at all times and removing the inward-oriented notches delineating the illusory figure at stimulus onset. Indeed, using the notch mode of presentation, novel P1 and N1 effects have been found when comparing visual evoked potentials (VEPs) elicited by an illusory figure with the VEPs to a control figure whose onset corresponds to the removal of outward-oriented notches, which prevents their integration into one delineated form. In Experiment 1, we replicated these findings: the illusory figure evoked a larger P1 and a smaller N1 than its control. In Experiment 2, real grey squares were placed over the notches so that one condition, that with inward-oriented notches, showed a large central grey square and the other condition, that with outward-oriented notches, showed four unconnected smaller grey squares. In response to these “real” figures, no P1 effect was found, but an N1 effect comparable to the one obtained with illusory figures was observed. Taken together, these results suggest that the P1 effect observed with illusory figures is likely specific to the processing of the illusory features of the figures. Conversely, the fact that the N1 effect was also obtained with real figures indicates that this effect may be due to more global processes related to depth segmentation or surface/object perception.
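
    P1 and N1 amplitudes of the kind compared here are conventionally quantified as the most positive or most negative value of the trial-averaged VEP within a component-specific latency window. A minimal sketch, with assumed windows and simulated data in place of a real VEP:

        # Minimal sketch: peak-amplitude measurement of P1 (positive) and N1
        # (negative) components. Windows, sampling rate, and data are assumed.
        import numpy as np

        fs = 500                                # sampling rate in Hz (assumed)
        times = np.arange(-0.1, 0.5, 1 / fs)    # epoch from -100 to 500 ms
        rng = np.random.default_rng(1)
        vep = rng.standard_normal(times.size)   # stand-in for an averaged VEP

        def peak_amplitude(erp, times, t_min, t_max, polarity):
            """Most positive (P1) or most negative (N1) value in a window."""
            window = erp[(times >= t_min) & (times <= t_max)]
            return window.max() if polarity == "positive" else window.min()

        p1 = peak_amplitude(vep, times, 0.080, 0.130, "positive")  # ~80-130 ms
        n1 = peak_amplitude(vep, times, 0.140, 0.200, "negative")  # ~140-200 ms
        print(f"P1 = {p1:.2f} µV, N1 = {n1:.2f} µV")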

    The computation of shape orientation in search for Kanizsa figures

    Previous studies of visual search for illusory figures have provided equivocal results, with efficient search for Kanizsa squares (e.g., Davis and Driver, 1994, Nature 371, 291–293) contrasting with inefficient search for Kanizsa triangles (e.g., Grabowecky and Treisman, 1989, Investigative Ophthalmology and Visual Science 30, 457). Here, we investigated whether shape orientation can explain these differences. The results from three experiments replicated previous findings: Kanizsa squares in Experiment 1 could be detected more efficiently than Kanizsa triangles in Experiment 2. In addition, when controlling for stimulus complexity in Experiment 3, we found that search for Kanizsa diamonds was intermediate in efficiency. Taken together, these results suggest an oblique effect in search for Kanizsa figures, with cardinal shape orientations leading to more efficient performance than oblique shape orientations. Our findings indicate that both shape orientation and stimulus complexity affect search for illusory figures.
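
    Search efficiency in these experiments is conventionally indexed by the slope of reaction time over set size (shallower slopes mean more efficient search). A minimal sketch with hypothetical numbers, chosen only to mirror the ordering reported above:

        # Minimal sketch: search slope (ms/item) from a linear fit of mean RT
        # against display set size. All RTs are hypothetical.
        import numpy as np

        set_sizes = np.array([4, 8, 12, 16])

        mean_rts = {
            "squares": np.array([520, 540, 555, 575]),     # shallow: efficient
            "diamonds": np.array([540, 620, 690, 770]),    # intermediate
            "triangles": np.array([560, 720, 880, 1040]),  # steep: inefficient
        }

        for shape, rts in mean_rts.items():
            slope, intercept = np.polyfit(set_sizes, rts, 1)
            print(f"{shape}: {slope:.1f} ms/item")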

    Local and global level-priming occurs for hierarchical stimuli composed of outlined, but not filled-in, elements


    Demand-based dynamic distribution of attention and monitoring of velocities during multiple-object tracking


    Characteristic sounds make you look at target objects more quickly


    Characteristic sounds facilitate visual search
