
    Effects of Near and Distant Phonological Neighbors on Picture Naming

Many studies have examined the effects of co-activation of similar words ("neighbors") during processing, with some reporting facilitative effects and others reporting inhibitory effects. Attractor dynamics has provided a promising integrated account in which distant semantic neighbors (moderately similar words) tend to facilitate processing and near semantic neighbors (highly similar words) tend to inhibit processing. This framework was extended to phonological neighbor effects on the accuracy of word production. For aphasic patients (N=62) and speeded young controls (N=32), picture naming was more accurate for words with many distant phonological neighbors (words with matching onsets) and less accurate for words with a near phonological neighbor (homophones). In addition, the sizes of the facilitative and inhibitory effects were correlated, suggesting that the mechanisms responsible for both effects are functionally integrated. These results extend an attractor dynamics framework that predicts facilitative effects of distant neighbors and inhibitory effects of near neighbors.

    Comparing eye tracking with electrooculography for measuring individual sentence comprehension duration

The aim of this study was to validate a procedure for performing the audio-visual paradigm introduced by Wendt et al. (2015) with reduced practical challenges. The original paradigm records eye fixations using an eye tracker and calculates the duration of sentence comprehension based on a bootstrap procedure. In order to reduce practical challenges, we first reduced the measurement time by evaluating a smaller measurement set with fewer trials. The results of 16 listeners showed effects comparable to those obtained when testing the original full measurement set in a different group of listeners. Secondly, we introduced electrooculography as an alternative technique for recording eye movements. The correlation between the results of the two recording techniques (eye tracker and electrooculography) was r = 0.97, indicating that both methods are suitable for estimating the processing duration of individual participants. Similar changes in processing duration arising from sentence complexity were found using the eye tracker and the electrooculography procedure. Thirdly, the time course of eye fixations was estimated with an alternative procedure, growth curve analysis, which is more commonly used in recent studies analyzing eye tracking data. The results of the growth curve analysis were compared with the results of the bootstrap procedure. Both analysis methods show similar processing durations.
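The agreement between the two recording techniques is summarized by a Pearson correlation over per-participant duration estimates. A minimal sketch of that comparison is shown below; the duration values are hypothetical illustrations, not data from the study:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two paired measurement series."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.corrcoef(x, y)[0, 1])

# Hypothetical per-participant sentence-comprehension durations (ms)
# estimated from each recording technique.
eye_tracker = [812, 950, 1103, 760, 1288, 901]
eog = [799, 972, 1087, 781, 1301, 915]

r = pearson_r(eye_tracker, eog)
print(f"r = {r:.2f}")
```

A high r across participants indicates that the cheaper electrooculography setup ranks individuals' processing durations the same way the eye tracker does, which is what licenses substituting one technique for the other.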

    Reaching Into Response Selection: Stimulus and Response Similarity Influence Central Operations

To behave adaptively in complex and dynamic environments, one must link perception and action to satisfy internal states, a process known as response selection (RS). A largely unexplored topic in the study of RS is how interstimulus and interresponse similarity affect performance. To examine this issue, we manipulated stimulus similarity by using colors that were either similar or dissimilar and manipulated response similarity by having participants move a mouse cursor to locations that were either close together or far apart. Stimulus and response similarity produced an interaction such that the mouse trajectory showed the greatest curvature when both were similar, a result obtained under task conditions emphasizing speed and conditions emphasizing accuracy. These findings are inconsistent with symbolic look-up accounts of RS but are consistent with central codes incorporating metrical properties of both stimuli and responses.
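One common index of mouse-trajectory curvature (offered here as an illustrative sketch, not necessarily the exact measure used in the study) is the maximum perpendicular deviation of the cursor path from the straight line joining its start and end points:

```python
import numpy as np

def max_deviation(traj):
    """Maximum perpendicular distance of a 2-D cursor trajectory
    from the straight start-to-end line: a simple curvature index."""
    traj = np.asarray(traj, dtype=float)
    start, end = traj[0], traj[-1]
    line = end - start
    norm = np.linalg.norm(line)
    if norm == 0:
        return 0.0
    rel = traj - start
    # The 2-D cross product gives each point's signed perpendicular distance.
    dists = np.abs(rel[:, 0] * line[1] - rel[:, 1] * line[0]) / norm
    return float(dists.max())

# Hypothetical trajectory from (0, 0) to (10, 0) that bows upward.
traj = [(0, 0), (2, 1.5), (5, 2.5), (8, 1.5), (10, 0)]
print(max_deviation(traj))  # 2.5
```

Larger values mean the cursor bent further toward the competing response location before settling on the target, which is how a similarity-driven interaction like the one reported would surface in the data.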

    How Hearing Impairment Affects Sentence Comprehension: Using Eye Fixations to Investigate the Duration of Speech Processing

The main objective of this study was to investigate the extent to which hearing impairment influences the duration of sentence processing. An eye-tracking paradigm is introduced that provides an online measure of how hearing impairment prolongs processing of linguistically complex sentences; this measure uses eye fixations recorded while the participant listens to a sentence. Eye fixations toward a target picture (which matches the aurally presented sentence) were measured in the presence of a competitor picture. Based on the recorded eye fixations, the single target detection amplitude, which reflects the tendency of the participant to fixate the target picture, was used as a metric to estimate the duration of sentence processing. The single target detection amplitude was calculated for sentence structures with different levels of linguistic complexity and for different listening conditions: in quiet and in two different noise conditions. Participants with hearing impairment spent more time processing sentences, even at high levels of speech intelligibility. In addition, the relationship between the proposed online measure and listener-specific factors, such as hearing aid use and cognitive abilities, was investigated. Longer processing durations were measured for participants with hearing impairment who were not accustomed to using a hearing aid. Moreover, significant correlations were found between sentence processing duration and individual cognitive abilities (such as working memory capacity or susceptibility to interference). These findings are discussed with respect to audiological applications.

    Neural substrates of subphonemic variation and lexical competition in spoken word recognition

In spoken word recognition, subphonemic variation influences lexical activation, with sounds near a category boundary increasing phonetic competition as well as lexical competition. The current study investigated the interplay of these factors using a visual world task in which participants were instructed to look at a picture of an auditory target (e.g. peacock). Eyetracking data indicated that participants were slowed when a voiced onset competitor (e.g. beaker) was also displayed, and this effect was amplified when acoustic-phonetic competition was increased. Simultaneously collected fMRI data showed that several brain regions were sensitive to the presence of the onset competitor, including the supramarginal, middle temporal, and inferior frontal gyri, and functional connectivity analyses revealed that the coordinated activity of left frontal regions depends on both acoustic-phonetic and lexical factors. Taken together, results suggest a role for frontal brain structures in resolving lexical competition, particularly as atypical acoustic-phonetic information maps onto the lexicon.

    Research was supported by National Institutes of Health (NIH) [grant number R01 DC013064] to EBM and NIH NIDCD [grant number R01 DC006220] to SEB. SG was supported by the Spanish Ministry of Economy and Competitiveness through the Severo Ochoa Programme for Centres/Units of Excellence in R&D [SEV-2015-490]. The contents of this paper reflect the views of the authors and not those of the funding agencies.

    Word stress in speech perception


    Developmental differences in the influence of phonological similarity on spoken word processing in Mandarin Chinese.

The developmental trajectory of spoken word recognition has been well established in Indo-European languages, but to date remains poorly characterized in Mandarin Chinese. In this study, typically developing children (N=17; mean age 10;5) and adults (N=17; mean age 24) performed a picture-word matching task in Mandarin while we recorded ERPs. Mismatches diverged from expectations in different components of the Mandarin syllable, namely word-initial phonemes, word-final phonemes, and tone. By comparing responses to different mismatch types, we uncovered evidence suggesting that both children and adults process words incrementally. However, we also observed key developmental differences in how subjects treated onset and rime mismatches. This was taken as evidence for a stronger influence of top-down processing on spoken word recognition in adults compared to children. This work therefore offers an important developmental component to theories of Mandarin spoken word recognition.

    Differential processing of consonants and vowels in the auditory modality: A cross-linguistic study

Following the proposal by Nespor, Peña, and Mehler (2003) that consonants are more important in constraining lexical access than vowels, New, Araújo, and Nazzi (2008) demonstrated in a visual priming experiment that primes sharing consonants (jalu-JOLI) facilitate lexical access while primes sharing vowels do not (vobi-JOLI). The present study explores whether this asymmetry extends to the auditory modality and whether language input plays a critical role, as developmental studies suggest. Our experiments tested French and English as target languages and showed that consonantal information facilitated lexical decision to a greater extent than vocalic information, suggesting that the consonant advantage is independent of the language's distributional properties. However, vowels are also facilitatory in specific cases, with iambic English CVCV or French CVCV words. This effect is related to the preservation of the rhyme between the prime and the target (here, the final vowel), suggesting that the rhyme, in addition to consonant information and consonant skeleton information, is an important unit in auditory phonological priming and spoken word recognition.