54 research outputs found

    Development of multisensory spatial integration and perception in humans

    Previous studies have shown that adults respond faster and more reliably to bimodal than to unimodal localization cues. The current study investigated for the first time the development of audiovisual (A-V) integration in spatial localization behavior in infants between 1 and 10 months of age. We observed infants' head and eye movements in response to auditory, visual, or both kinds of stimuli presented either 25° or 45° to the right or left of midline. Infants under 8 months of age intermittently showed response latencies significantly faster toward audiovisual targets than toward either auditory or visual targets alone. They did so, however, without exhibiting a reliable violation of the Race Model, suggesting that probability summation alone could explain the faster bimodal responses. In contrast, infants between 8 and 10 months of age exhibited bimodal response latencies significantly faster than unimodal latencies for both eccentricity conditions, and their latencies violated the Race Model at 25° eccentricity. In addition to this main finding, we found age-dependent eccentricity and modality effects on response latencies. Together, these findings suggest that audiovisual integration emerges late in the first year of life and are consistent with neurophysiological findings from multisensory sites in the superior colliculus of infant monkeys showing that multisensory enhancement of responsiveness is not present at birth but emerges later in life.
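    The Race Model test mentioned above (Miller's race model inequality) can be sketched in a few lines: under probability summation alone, the cumulative distribution of bimodal reaction times can never exceed the sum of the two unimodal distributions at any time point. The function and the latencies below are hypothetical illustrations, not data or code from the study:

```python
def race_model_violated(rt_av, rt_a, rt_v, t_grid):
    """Return the time points at which Miller's race model inequality
    P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t)
    is violated, i.e. where probability summation alone cannot
    explain the speed-up of bimodal responses."""
    def cdf(rts, t):
        # empirical cumulative probability of having responded by time t
        return sum(rt <= t for rt in rts) / len(rts)
    return [t for t in t_grid
            if cdf(rt_av, t) > cdf(rt_a, t) + cdf(rt_v, t)]

# Hypothetical latencies (ms) where bimodal responses beat both unimodal sets
rt_a = [420, 450, 480, 510, 540]    # auditory-only
rt_v = [400, 430, 470, 500, 530]    # visual-only
rt_av = [300, 310, 320, 330, 340]   # audiovisual

print(race_model_violated(rt_av, rt_a, rt_v, range(250, 600, 50)))
# → [300, 350, 400, 450]: violations at early time points imply true integration
```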

    Multisensory perceptual specialization of speech in infancy

    During the first year of life, infants show a decline in their ability to discriminate speech sounds that are not present in their native language. This phenomenon is known as perceptual narrowing. Speech perception, however, is not exclusively auditory: to perceive language adequately, the infant integrates auditory information with visual information (the articulatory gesture). A recent study demonstrates that perceptual narrowing also occurs at the audiovisual level: a decline is observed in the detection of the correspondence between sound and (facial) articulatory gesture in non-native languages during the first year of life.

    Language familiarity modulates relative attention to the eyes and mouth of a talker

    We investigated whether the audiovisual speech cues available in a talker's mouth elicit greater attention when adults have to process speech in an unfamiliar language vs. a familiar language. Participants performed a speech-encoding task while watching and listening to videos of a talker in a familiar language (English) or an unfamiliar language (Spanish or Icelandic). Attention to the mouth increased in monolingual subjects in response to the unfamiliar-language condition but did not in bilingual subjects when the task required speech processing. In the absence of an explicit speech-processing task, subjects attended equally to the eyes and mouth in response to both familiar and unfamiliar languages. Overall, these results demonstrate that language familiarity modulates selective attention to the redundant audiovisual speech cues in a talker's mouth in adults. When our findings are considered together with similar findings from infants, they suggest that this attentional strategy emerges very early in life.

    Heterochrony and Cross-Species Intersensory Matching by Infant Vervet Monkeys

    Understanding the evolutionary origins of a phenotype requires understanding the relationship between ontogenetic and phylogenetic processes. Human infants have been shown to undergo a process of perceptual narrowing during their first year of life, whereby their intersensory ability to match the faces and voices of another species declines as they get older. We investigated the evolutionary origins of this behavioral phenotype by examining whether or not this developmental process occurs in non-human primates as well. We tested the ability of infant vervet monkeys (Cercopithecus aethiops), ranging in age from 23 to 65 weeks, to match the faces and voices of another non-human primate species (the rhesus monkey, Macaca mulatta). Even though the vervets had no prior exposure to rhesus monkey faces and vocalizations, our findings show that infant vervets can, in fact, recognize the correspondence between rhesus monkey faces and voices (though they indicated this by looking at the non-matching face for a greater proportion of overall looking time), and can do so well beyond the age of perceptual narrowing in human infants. Our results further suggest that the pattern of matching by vervet monkeys is influenced by the emotional saliency of the Face+Voice combination. That is, although they looked at the non-matching screen for Face+Voice combinations, they switched to looking at the matching screen when the Voice was replaced with a complex tone of equal duration. Furthermore, an analysis of pupillary responses revealed that their pupils showed greater dilation when looking at the matching natural face/voice combination versus the face/tone combination. Because the infant vervets in the current study exhibited cross-species intersensory matching far later in development than do human infants, our findings suggest either that intersensory perceptual narrowing does not occur in Old World monkeys or that it occurs later in development. We argue that these findings reflect the faster rate of neural development in monkeys relative to humans and the resulting differential interaction of this factor with the effects of early experience.

    Impact of COVID-19 on cardiovascular testing in the United States versus the rest of the world

    Objectives: This study sought to quantify and compare the decline in volumes of cardiovascular procedures between the United States and non-US institutions during the early phase of the coronavirus disease-2019 (COVID-19) pandemic. Background: The COVID-19 pandemic has disrupted the care of many non-COVID-19 illnesses. Reductions in diagnostic cardiovascular testing around the world have led to concerns over the implications of reduced testing for cardiovascular disease (CVD) morbidity and mortality. Methods: Data were submitted to the INCAPS-COVID (International Atomic Energy Agency Non-Invasive Cardiology Protocols Study of COVID-19), a multinational registry comprising 909 institutions in 108 countries (including 155 facilities in 40 U.S. states), assessing the impact of the COVID-19 pandemic on volumes of diagnostic cardiovascular procedures. Data were obtained for April 2020 and compared with volumes of baseline procedures from March 2019. We compared laboratory characteristics, practices, and procedure volumes between U.S. and non-U.S. facilities and between U.S. geographic regions and identified factors associated with volume reduction in the United States. Results: Reductions in the volumes of procedures in the United States were similar to those in non-U.S. facilities (68% vs. 63%, respectively; p = 0.237), although U.S. facilities reported greater reductions in invasive coronary angiography (69% vs. 53%, respectively; p < 0.001). Significantly more U.S. facilities reported increased use of telehealth and patient screening measures than non-U.S. facilities, such as temperature checks, symptom screenings, and COVID-19 testing. Reductions in volumes of procedures differed between U.S. regions, with larger declines observed in the Northeast (76%) and Midwest (74%) than in the South (62%) and West (44%). Prevalence of COVID-19, staff redeployments, outpatient centers, and urban centers were associated with greater reductions in volume in U.S. 
    facilities in a multivariable analysis. Conclusions: We observed marked reductions in U.S. cardiovascular testing in the early phase of the pandemic and significant variability between U.S. regions. The association between reductions in volumes and COVID-19 prevalence in the United States highlights the need for proactive efforts to maintain access to cardiovascular testing in areas most affected by COVID-19 outbreaks.

    Perception of auditory-visual temporal synchrony in human infants

    Using a habituation/test procedure, the author investigated adults' and infants' perception of auditory-visual temporal synchrony. Participants were familiarized with a bouncing green disk and a sound that occurred each time the disk bounced. Then, they were given a series of asynchrony test trials where the sound occurred either before or after the disk bounced. The magnitude of the auditory-visual temporal asynchrony threshold differed markedly in adults and infants. The threshold for the detection of asynchrony created by a sound preceding a visible event was 65 ms in adults and 350 ms in infants; the threshold for asynchrony created by a sound following a visible event was 112 ms in adults and 450 ms in infants. Also, infants did not respond to asynchronies that exceeded the intervals that yielded reliable discrimination. Infants' perception of auditory-visual temporal unity is guided by a synchrony and an asynchrony window, both of which become narrower in development. Many everyday objects and events are specified by information that is concurrently available to different sensory modalities. In general, multimodal information provides for greater perceptual and response accuracy than does unimodal information (Gibson, 1979). To achieve greater accuracy, however, the observer must be able to detect the various types of relations that often unite concurrently available multimodal inputs. One ubiquitous perceptual attribute that provides an important basis for unifying multimodal sources of information is temporal synchrony. In fact, the ubiquity of intersensory temporal synchrony, and the fact that its detection is likely to require relatively simple mechanisms, makes temporal synchrony ideal as a basis for intersensory integration during the earliest stages of development (Lewkowicz, 1992a, 1994a). Indeed, empirical findings from a number of studies indicate that beginning as early as the second month of life, human infants can integrate concurrent auditory and visual input.
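    The asymmetric thresholds reported above can be restated as a toy classifier. The dictionary below simply encodes the reported figures (65/112 ms for adults, 350/450 ms for infants); the function name and structure are hypothetical, and the sketch ignores the reported upper bound beyond which infants stopped responding to asynchronies:

```python
# Asynchrony-detection thresholds (ms) reported in the abstract above
THRESHOLDS = {
    "adult":  {"sound_first": 65,  "sound_last": 112},
    "infant": {"sound_first": 350, "sound_last": 450},
}

def detects_asynchrony(group, offset_ms):
    """True if an audio-visual offset exceeds the group's threshold.
    offset_ms < 0 means the sound precedes the visible event;
    offset_ms > 0 means the sound follows it."""
    t = THRESHOLDS[group]
    if offset_ms < 0:
        return -offset_ms > t["sound_first"]
    return offset_ms > t["sound_last"]

print(detects_asynchrony("adult", -100))   # True: a 100 ms lead exceeds 65 ms
print(detects_asynchrony("infant", -100))  # False: well inside the 350 ms window
```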

    Developmental changes in infants' visual response to temporal frequency.


    Learning and discrimination of audiovisual events in human infants: the hierarchical relation between intersensory temporal synchrony and rhythmic pattern cues

    This study examined 4- to 10-month-old infants' perception of audio-visual (A-V) temporal synchrony cues in the presence or absence of rhythmic pattern cues. Experiment 1 established that infants of all ages could successfully discriminate between two different audiovisual rhythmic events. Experiment 2 showed that only 10-month-old infants detected a desynchronization of the auditory and visual components of a rhythmical event. Experiment 3 showed that 4- to 8-month-old infants could detect A-V desynchronization, but only when the audiovisual event was nonrhythmic. These results show that initially in development infants attend to the overall temporal structure of rhythmic audiovisual events but that later in development they become capable of perceiving the embedded intersensory temporal synchrony relations as well. A ticking metronome, a tap dancer, and a talking person all illustrate the fact that many events in our everyday world are specified concurrently in multiple sensory modalities. In addition, the multimodal sensory information that specifies them is distributed over time. The specific way the information is distributed (i.e., its temporal structure) determines the perceptual and cognitive meaning of temporally defined events (Baldwin & Baird, 2001; Zacks & Tversky, 2001). The two best examples of the fundamental importance of temporal structure for perception and cognition are, of course, music and language. In each case, the particular temporal organization of a series of elements, be they notes or phonemes, can give rise to very different meanings.