
    Enhanced Syllable Discrimination Thresholds in Musicians

    Speech processing inherently relies on the perception of specific, rapidly changing spectral and temporal acoustic features. Advanced acoustic perception is also integral to musical expertise, and accordingly several studies have demonstrated a significant relationship between musical training and superior processing of various aspects of speech. Speech and music appear to overlap in spectral and temporal features; however, it remains unclear which of these acoustic features, crucial for speech processing, are most closely associated with musical training. The present study examined the perceptual acuity of musicians to the acoustic components of speech necessary for intra-phonemic discrimination of synthetic syllables. We compared musicians and non-musicians on discrimination thresholds of three synthetic speech syllable continua that varied in their spectral and temporal discrimination demands, specifically voice onset time (VOT) and amplitude envelope cues in the temporal domain. Musicians demonstrated superior discrimination only for syllables that required resolution of temporal cues. Furthermore, performance on the temporal syllable continua positively correlated with the length and intensity of musical training. These findings support one potential mechanism by which musical training may selectively enhance speech perception, namely by reinforcing temporal acuity and/or perception of amplitude rise time, and have implications for the translation of musical training to long-term linguistic abilities.
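    The amplitude rise-time cue discussed above can be illustrated with a short sketch. This is not the study's syllable-synthesis procedure; it is a minimal NumPy illustration, with assumed sample rate, carrier frequency, and rise times, of how stimuli differing only in envelope rise time might be constructed:

```python
import numpy as np

FS = 16000          # sample rate in Hz (assumed for illustration)
DUR = 0.3           # stimulus duration in seconds
FREQ = 500.0        # carrier frequency in Hz

def tone_with_rise_time(rise_ms):
    """Return a tone whose amplitude envelope rises linearly over
    `rise_ms` milliseconds, then stays flat for the remainder."""
    t = np.arange(int(FS * DUR)) / FS
    carrier = np.sin(2 * np.pi * FREQ * t)
    rise_samples = int(FS * rise_ms / 1000)
    env = np.ones_like(t)
    env[:rise_samples] = np.linspace(0.0, 1.0, rise_samples)
    return carrier * env

# A continuum of stimuli differing only in envelope rise time (ms):
continuum = [tone_with_rise_time(r) for r in (15, 30, 60, 120, 240)]
```

    A listener's discrimination threshold along such a continuum reflects temporal acuity of the kind the abstract associates with musical training.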

    Auditory cognition and perception of action video game players

    A training method to improve speech hearing in noise has proven elusive, with most methods failing to transfer to untrained tasks. One common approach to identifying potentially viable training paradigms is to use cross-sectional designs. For instance, the consistent finding that people who choose to engage avidly with action video games as part of their normal life also show enhanced performance on non-game visual tasks has been used as a foundation to test the causal impact of such game play via true experiments (e.g., in more translational designs). However, little work has examined the association between action video game play and untrained auditory tasks, which would speak to the possible utility of using such games to improve speech hearing in noise. To examine this possibility, 80 participants with mixed action video game experience were tested on a visual reaction time task that has reliably shown superior performance in action video game players (AVGPs) compared to non-players (≤ 5 h/week across game categories) and multi-genre video game players (> 5 h/week across game categories). Auditory cognition and perception were tested using auditory reaction time and two speech-in-noise tasks. Performance of AVGPs on the visual task replicated previous positive findings. However, no significant benefit of action video game play was found on the auditory tasks. We suggest that, while AVGPs interact meaningfully with a rich visual environment during play, they may not interact with the games’ auditory environment. These results suggest that far-transfer learning during action video game play is modality-specific and that an acoustically relevant auditory environment may be needed to improve auditory probabilistic thinking.
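    For context, speech-in-noise tasks of the kind mentioned above present speech mixed with noise at a controlled signal-to-noise ratio (SNR). The sketch below shows one standard way to set that ratio; it assumes equal-length NumPy arrays and a power-based SNR definition, and is not the specific task implementation used in the study:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`
    (in dB), then return the mixture. Arrays must be the same length."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    target_p_noise = p_speech / (10 ** (snr_db / 10))
    noise_scaled = noise * np.sqrt(target_p_noise / p_noise)
    return speech + noise_scaled
```

    Adaptive speech-in-noise tests typically vary `snr_db` from trial to trial to find the level at which a listener achieves a criterion percentage of correct responses.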

    The Effect of Visual Cues on Auditory Stream Segregation in Musicians and Non-Musicians

    Background: The ability to separate two interleaved melodies is an important factor in music appreciation. This ability is greatly reduced in people with hearing impairment, contributing to difficulties in music appreciation. The aim of this study was to assess whether visual cues, musical training or musical context could affect this ability, and potentially improve music appreciation for the hearing impaired. Methods: Musicians (N = 18) and non-musicians (N = 19) were asked to rate the difficulty of segregating a four-note repeating melody from interleaved random distracter notes. Visual cues were provided on half the blocks, and two musical contexts were tested, with the overlap between melody and distracter notes either gradually increasing or decreasing. Conclusions: Visual cues, musical training, and musical context all affected the difficulty of extracting the melody from a background of interleaved random distracter notes. Visual cues reduced the difficulty of segregating the melody from distracter notes, even in individuals with no musical training. These results are consistent with theories that assign an important role to central (top-down) processes in auditory streaming mechanisms, and suggest that visual cues could help improve music appreciation in listeners with hearing impairment.

    The effect of musical training on auditory grouping

    Background. Auditory streaming is a process highly relevant to analyzing everyday sound environments, particularly with respect to timbre. The phenomenon of auditory streaming has a history of being studied in terms of Gestalt principles (Bregman, 1990), of pitch (van Noorden, 1975), of tempo (Bregman & Campbell, 1971; van Noorden, 1975), of timbre (Bregman & Pinker, 1978; Marozeau et al., 2013), and of attention (Botte et al., 1997; Carlyon & Cusack, 2001). All of these parameters influence the extent of auditory streaming in various ways. An increase in performance in many types of auditory tasks is seen in musicians, including streaming (Zendel & Alain, 2008), presumably a result of training and brain plasticity. Aims. This experiment seeks to corroborate this observed effect of musical training, and to further define the effects of training on specific instruments. Another goal of this experiment is to clearly demonstrate the influence of attention on streaming. Method. In testing both non-musicians and musicians trained on specific instruments in a simple ABA paradigm where timbre is manipulated (similar timbres presumably making streaming more difficult; Singh & Bregman, 1978; Hartmann & Johnson, 1991; Iverson et al., 1995), we can find and analyze the fission and temporal coherence boundaries between groups. Participants will be exposed to trials via Max/MSP, and responses will be collected in the same patch. A manipulation of instructions to participants will evaluate the influence of attention on streaming: they will be instructed to hold on to either the galloping rhythm (integration) or the 2:1 rhythm (streaming). Results. This experiment corroborates the previously observed effect of musical training on perception, demonstrated by different threshold profiles between musicians and non-musicians. It also clearly demonstrates an influence of attention on streaming, while suggesting further effects of training on specific instruments. The manipulation of attention formed two boundaries, identified as the fission boundary and the temporal coherence boundary. These boundaries were significantly different between musicians and non-musicians and were additionally affected by specific timbres.
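    The ABA triplet paradigm described in the Method can be sketched in a few lines. This is an illustrative NumPy reconstruction with assumed tone and gap durations, not the authors' Max/MSP patch:

```python
import numpy as np

FS = 44100          # sample rate in Hz (assumed)
TONE_DUR = 0.1      # each tone lasts 100 ms (assumed)
GAP_DUR = 0.1       # silent slot completing each ABA- triplet

def pure_tone(freq, dur):
    t = np.arange(int(FS * dur)) / FS
    return np.sin(2 * np.pi * freq * t)

def aba_sequence(f_a, semitone_sep, n_triplets):
    """Build an ABA- triplet sequence: A and B tones separated by
    `semitone_sep` semitones, with a silent slot after each triplet."""
    f_b = f_a * 2 ** (semitone_sep / 12)
    a = pure_tone(f_a, TONE_DUR)
    b = pure_tone(f_b, TONE_DUR)
    silence = np.zeros(int(FS * GAP_DUR))
    triplet = np.concatenate([a, b, a, silence])
    return np.tile(triplet, n_triplets)

# Small separations tend to be heard as one integrated "galloping"
# rhythm; large separations tend to split into separate A and B streams.
seq = aba_sequence(f_a=440.0, semitone_sep=7.0, n_triplets=10)
```

    In a timbre-based variant like the one described here, the A and B events would differ in spectral envelope (instrument) rather than, or in addition to, fundamental frequency.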

    Short-Term Visual Deprivation Improves the Perception of Harmonicity

    Neuroimaging studies have shown that the perception of auditory stimuli involves occipital cortical regions traditionally associated with visual processing, even in the absence of any overt visual component to the task. Analogous behavioral evidence of an interaction between visual and auditory processing during purely auditory tasks comes from studies of short-term visual deprivation on the perception of auditory cues; however, the results of such studies remain equivocal. Although some data suggest that visual deprivation significantly improves loudness and pitch discrimination and reduces spatial localization inaccuracies, it is still unclear whether such improvement extends to the perception of spectrally complex cues, such as those involved in speech and music perception. We present data demonstrating that a 90-min period of visual deprivation causes a transient improvement in the perception of harmonicity: a spectrally complex cue that plays a key role in music and speech perception. The results provide clear behavioral evidence supporting a role for the visual system in the processing of complex auditory stimuli, even in the absence of any visual component to the task.
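    Harmonicity stimuli in studies like this are often complex tones whose partials either fall exactly at integer multiples of a fundamental (harmonic) or are frequency-jittered away from them (inharmonic). The sketch below illustrates that general manipulation; the parameters are assumptions for illustration, not the stimuli of this study:

```python
import numpy as np

FS = 44100   # sample rate in Hz (assumed)
DUR = 0.5    # tone duration in seconds

def complex_tone(f0, n_harmonics, jitter=0.0, rng=None):
    """Sum of `n_harmonics` partials at multiples of `f0` (Hz).
    jitter = 0 gives a harmonic complex; jitter > 0 perturbs each
    partial by up to ±jitter * f0, making the tone inharmonic."""
    if rng is None:
        rng = np.random.default_rng()
    t = np.arange(int(FS * DUR)) / FS
    tone = np.zeros_like(t)
    for k in range(1, n_harmonics + 1):
        f = k * f0
        if jitter > 0:
            f += rng.uniform(-jitter, jitter) * f0
        tone += np.sin(2 * np.pi * f * t)
    return tone / n_harmonics   # normalize peak amplitude

harmonic = complex_tone(200.0, 10)
inharmonic = complex_tone(200.0, 10, jitter=0.3)
```

    A harmonicity-discrimination task would then ask listeners to tell such harmonic and jittered tones apart, with the jitter amount controlling difficulty.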

    Neurophysiological Influence of Musical Training on Speech Perception

    Does musical training affect our perception of speech? For example, does learning to play a musical instrument modify the neural circuitry for auditory processing in a way that improves one's ability to perceive speech more clearly in noisy environments? If so, can speech perception in individuals with hearing loss (HL), who struggle in noisy situations, benefit from musical training? While music and speech exhibit some specialization in neural processing, there is evidence suggesting that skills acquired through musical training for specific acoustical processes may transfer to, and thereby improve, speech perception. The neurophysiological mechanisms underlying the influence of musical training on speech processing, and the extent of this influence, remain a rich area to be explored. A prerequisite for such transfer is the facilitation of greater neurophysiological overlap between speech and music processing following musical training. This review first establishes a neurophysiological link between musical training and speech perception, and subsequently offers hypotheses on the neurophysiological implications of musical training for speech perception in adverse acoustical environments and in individuals with HL.

    The impact of making music on aural perception and language skills: A research synthesis

    This paper provides a synthesis of research on the relationship between music and language, drawing on evidence from neuroscience, psychology, sociology and education. It sets out why it has become necessary to justify the role of music in the school curriculum and summarizes the different methodologies adopted by researchers in the field. It considers research exploring the way that music and language are processed, including differences and commonalities; addresses the relative importance of genetics versus the length of time committed to, and spent, making music; discusses theories of modularity and sensitive periods; sets out the OPERA hypothesis; critically evaluates research comparing musicians with non-musicians; and presents detailed accounts of intervention studies with children, including those from deprived backgrounds, taking account of the nature of the musical training. It concludes that making music has a major impact on the development of language skills.

    Effects of noise exposure on young adults with normal audiograms II: Behavioral measures

    An estimate of lifetime noise exposure was used as the primary predictor of performance on a range of behavioral tasks: frequency and intensity difference limens, amplitude modulation detection, interaural phase discrimination, the digit triplet speech test, the coordinate response speech measure, an auditory localization task, a musical consonance task, and a subjective report of hearing ability. One hundred and thirty-eight participants (81 females) aged 18–36 years were tested, with a wide range of self-reported noise exposure. All had normal pure-tone audiograms up to 8 kHz. It was predicted that increased lifetime noise exposure, which we assume to be associated with noise-induced cochlear synaptopathy, would elevate behavioral thresholds, in particular for stimuli with high levels in a high spectral region. However, the results showed little effect of noise exposure on performance. There were a number of weak relations with noise exposure across the test battery, although many of these were in the opposite direction to the predictions, and none were statistically significant after correction for multiple comparisons. There were also no strong correlations between previously published electrophysiological measures of synaptopathy and the behavioral measures reported here. Consistent with our previous electrophysiological results, the present results provide no evidence that noise exposure is related to significant perceptual deficits in young listeners with normal audiometric hearing. It is possible that the effects of noise-induced cochlear synaptopathy are only measurable in humans with extreme noise exposures, and that these effects always co-occur with a loss of audiometric sensitivity.
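    The abstract does not name the multiple-comparisons correction applied. One common family-wise procedure for a multi-test battery like this is Holm's step-down (Holm-Bonferroni) method, sketched here purely as an illustration, with hypothetical p-values:

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Holm's step-down correction: test p-values from smallest to
    largest against alpha / (m - rank); stop rejecting at the first
    failure. Returns a reject/retain flag per original position."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, idx in enumerate(order):
        if p_values[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break
    return reject

# Hypothetical p-values from four tests in a battery:
print(holm_bonferroni([0.001, 0.2, 0.03, 0.04]))
# → [True, False, False, False]
```

    Under such a correction, weak uncorrected effects like those described above would indeed fail to reach significance.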