
    War of the Words: Development of Inter-Lexical Inhibition in Typical Children

    Spoken word recognition requires accessing the target word in the mental lexicon. It is now well known that as acoustic information unfolds over time, similar-sounding lexical candidates (e.g., cap and cat) compete until the disambiguating information (i.e., the last sound) is perceived and one word “wins”. As a word is activated, it inhibits similar-sounding competitors. While this inter-lexical inhibition between words has been demonstrated in adults (Dahan, Magnuson, Tanenhaus, & Hogan, 2001; Luce & Pisoni, 1998), it is unclear how it develops. The present study used an eye-tracking paradigm to examine this inhibition in school-aged children. Participants heard words and matched each to its picture on a screen containing four pictures. Words were manipulated with cross-splicing to briefly activate a competitor and observe the resulting interference on the target word. Eye movements to each picture were monitored to measure how strongly words compete during recognition. We found that both 7- to 8-year-old and 12- to 13-year-old children made fewer fixations to the target when the onset of the target word (e.g., cap) came from a competitor word (e.g., ca(t)p) than from a nonword (e.g., ca(ck)t). This suggests that activation of the competitor inhibited the target word, reducing its activation. There were differences in this marker of inhibition across age groups, suggesting that lexical competition undergoes developmental change even in the later years of childhood. Analyses of assessments of language, reading, perceptual reasoning, and general inhibition reveal a potential relationship between inter-lexical inhibition and reading fluency, but none with vocabulary or general inhibition.
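    A minimal sketch of how fixation data from such a four-picture task might be summarized: compute the proportion of target fixations per time bin in each splice condition and compare the curves. The file name, column names, and condition labels are illustrative assumptions, not the authors' materials.

```python
# Sketch: compare target-fixation curves across splice conditions.
# All names below (fixations.csv, column and condition labels) are assumed.
import pandas as pd

fix = pd.read_csv("fixations.csv")  # hypothetical: one row per eye-tracking sample

# Proportion of samples on the target picture, per 50 ms bin and condition
fix["bin"] = (fix["time_ms"] // 50) * 50
curves = (fix.groupby(["condition", "bin"])["fixated_target"]
             .mean()
             .unstack("condition"))

# Fewer target fixations after a competitor splice (ca(t)p) than a nonword
# splice (ca(ck)t) would be the signature of inter-lexical inhibition.
effect = curves["nonword_splice"] - curves["competitor_splice"]
print(effect.describe())
```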

    What information is necessary for speech categorization? Harnessing variability in the speech signal by integrating cues computed relative to expectations

    This is the author's accepted manuscript. This article may not exactly replicate the final version published in the APA journal. It is not the copy of record. The original publication is available at http://psycnet.apa.org/index.cfm?fa=search.displayrecord&uid=2011-05323-001.

    Most theories of categorization emphasize how continuous perceptual information is mapped to categories. However, equally important are the informational assumptions of a model, that is, the type of information subserving this mapping. This is crucial in speech perception, where the signal is variable and context dependent. This study assessed the informational assumptions of several models of speech categorization, in particular, the number of cues that are the basis of categorization and whether these cues represent the input veridically or have undergone compensation. We collected a corpus of 2,880 fricative productions (Jongman, Wayland, & Wong, 2000) spanning many talker and vowel contexts and measured 24 cues for each. A subset was also presented to listeners in an 8AFC phoneme categorization task. We then trained a common classification model based on logistic regression to categorize the fricative from the cue values and manipulated the information in the training set to contrast (a) models based on a small number of invariant cues, (b) models using all cues without compensation, and (c) models in which cues underwent compensation for contextual factors. Compensation was modeled by computing cues relative to expectations (C-CuRE), a new approach to compensation that preserves fine-grained detail in the signal. Only the compensation model achieved accuracy similar to listeners' and showed the same effects of context. Thus, even simple categorization metrics can overcome the variability in speech when sufficient information is available and compensation schemes like C-CuRE are employed.
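    A minimal sketch of the C-CuRE idea as described above: each cue is re-expressed relative to contextual expectations (here, a linear prediction from talker and vowel), and a logistic-regression classifier is trained on the residuals. The file and column names are assumptions; this illustrates the approach, not the published model.

```python
# Sketch of compensation via cues-relative-to-expectations (C-CuRE style).
import pandas as pd
from sklearn.linear_model import LinearRegression, LogisticRegression

data = pd.read_csv("fricative_cues.csv")  # hypothetical: 24 cue_* columns
cue_cols = [c for c in data.columns if c.startswith("cue_")]
context = pd.get_dummies(data[["talker", "vowel"]], drop_first=True)

# Compensation: replace each raw cue with its residual after regressing out
# contextual expectations, preserving fine-grained detail in the signal.
compensated = data[cue_cols].copy()
for cue in cue_cols:
    expected = LinearRegression().fit(context, data[cue]).predict(context)
    compensated[cue] = data[cue] - expected

# Categorize the fricative (8 alternatives) from the compensated cues.
clf = LogisticRegression(max_iter=1000)
clf.fit(compensated, data["fricative"])
print("training accuracy:", clf.score(compensated, data["fricative"]))
```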

    Idiosyncratic use of bottom-up and top-down information leads to differences in speech perception flexibility: Converging evidence from ERPs and eye-tracking

    Available online 8 October 2021.

    Listeners generally categorize speech sounds in a gradient manner. However, recent work using a visual analogue scaling (VAS) task suggests that some listeners show more categorical performance, leading to less flexible cue integration and poorer recovery from misperceptions (Kapnoula et al., 2017, 2021). We asked how individual differences in speech gradiency can be reconciled with the well-established gradiency in the modal listener, showing how VAS performance relates to both Visual World Paradigm and EEG measures of gradiency. We also investigated three potential sources of these individual differences: inhibitory control, lexical inhibition, and early cue encoding. We used the N1 ERP component to track pre-categorical encoding of Voice Onset Time (VOT). The N1 linearly tracked VOT, reflecting fundamentally gradient speech perception; however, for less gradient listeners, this linearity was disrupted near the category boundary. Thus, while all listeners are gradient, they may show idiosyncratic encoding of specific cues, affecting downstream processing.

    This project was supported by NIH Grant DC008089 awarded to BM. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 793919, awarded to EK. This work was partially supported by the Basque Government through the BERC 2018-2021 program and by the Spanish State Research Agency through BCBL Severo Ochoa excellence accreditation SEV-2015-0490.
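    A minimal sketch of one way to test, per listener, whether N1 amplitude tracks VOT linearly, in the spirit of the analysis described above. The file and column names are assumptions for illustration.

```python
# Sketch: per-subject linear fit of N1 amplitude on VOT.
import pandas as pd
import statsmodels.formula.api as smf

n1 = pd.read_csv("n1_amplitudes.csv")  # hypothetical: subject, vot_ms, n1_uv

for subj, d in n1.groupby("subject"):
    fit = smf.ols("n1_uv ~ vot_ms", data=d).fit()
    print(subj, "slope:", round(fit.params["vot_ms"], 3),
          "R2:", round(fit.rsquared, 3))
# A weaker linear fit around the category boundary for some listeners would
# correspond to the disrupted linearity reported in the abstract.
```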

    Gradient Activation of Speech Categories Facilitates Listeners’ Recovery From Lexical Garden Paths, But Not Perception of Speech-in-Noise

    Published April 2021.

    Listeners activate speech-sound categories in a gradient way, and this information is maintained and affects activation of items at higher levels of processing (McMurray et al., 2002; Toscano et al., 2010). Recent findings by Kapnoula et al. (2017) suggest that the degree to which listeners maintain within-category information varies across individuals. Here we assessed the consequences of this gradiency for speech perception. To test this, we collected a measure of gradiency for different listeners using the visual analogue scaling (VAS) task used by Kapnoula et al. (2017). We also collected two independent measures of performance in speech perception: a visual world paradigm (VWP) task measuring participants’ ability to recover from lexical garden paths (McMurray et al., 2009) and a speech-perception task measuring participants’ perception of isolated words in noise. Our results show that categorization gradiency does not predict participants’ performance in the speech-in-noise task. However, higher gradiency predicted a higher likelihood of recovery from temporarily misleading information presented in the VWP task. These results suggest that gradient activation of speech-sound categories is helpful when listeners need to reconsider their initial interpretation of the input, making them more efficient in recovering from errors.

    This project was supported by National Institutes of Health Grant DC008089 awarded to Bob McMurray. This work was partially supported by the Basque Government through the BERC 2018-2021 Program and by the Spanish State Research Agency through BCBL Severo Ochoa excellence accreditation SEV-2015-0490. This project was partially supported by the Spanish Ministry of Economy and Competitiveness (MINECO) through the convocatoria 2016 Subprograma Estatal Ayudas para contratos para la Formación Posdoctoral 2016, Programa Estatal de Promoción del Talento y su Empleabilidad del Plan Estatal de Investigación Científica y Técnica y de Innovación 2013-2016, reference FJCI-2016-28019, awarded to Efthymia C. Kapnoula. This project has received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie Grant 793919, awarded to Efthymia C. Kapnoula.
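    A minimal sketch of one way gradiency could be quantified from VAS responses: fit a logistic function over the continuum and treat a shallower slope as more gradient responding. The fitting details and the example ratings are illustrative assumptions, not the authors' exact procedure or data.

```python
# Sketch: slope of a fitted logistic as a gradiency index for VAS ratings.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Mean rating (0-1) as a function of continuum step."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

steps = np.arange(1, 8)                                   # 7-step continuum
ratings = np.array([.02, .05, .20, .48, .80, .93, .97])   # example means

(x0, k), _ = curve_fit(logistic, steps, ratings, p0=[4.0, 1.0])
print(f"boundary = {x0:.2f}, slope = {k:.2f} (lower slope = more gradient)")
```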

    Too much of a good thing: How novelty biases and vocabulary influence known and novel referent selection in 18-month-old children and associative learning models

    Identifying the referent of novel words is a complex process that young children do with relative ease. When given multiple objects along with a novel word, children select the most novel item, sometimes retaining the word‐referent link. Prior work is inconsistent, however, on the role of object novelty. Two experiments examine 18‐month‐old children's performance on referent selection and retention with novel and known words. The results reveal a pervasive novelty bias on referent selection with both known and novel names and, across individual children, a negative correlation between attention to novelty and retention of new word‐referent links. A computational model examines possible sources of the bias, suggesting novelty supports in‐the‐moment behavior but not retention. Together, the results suggest that when lexical knowledge is weak, attention to novelty drives behavior but alone does not sustain learning. Importantly, the results demonstrate that word learning may be driven, in part, by low‐level perceptual processes.
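    A toy associative-learning sketch of the idea above: referent choice is driven by learned association strength plus a novelty bonus, and only the association (not the bonus) persists to support retention. All parameter values and update rules are illustrative assumptions, not the published model.

```python
# Toy model: novelty supports in-the-moment choice but not retention.
import numpy as np

rng = np.random.default_rng(0)
n_words, n_objects = 5, 5
assoc = np.zeros((n_words, n_objects))   # word-object association weights
familiarity = np.zeros(n_objects)        # grows with exposure

def choose(word, candidates, novelty_weight=1.0):
    novelty = 1.0 / (1.0 + familiarity[candidates])
    drive = assoc[word, candidates] + novelty_weight * novelty
    return candidates[np.argmax(drive)]

# Referent-selection trials: pick among three candidate objects
for trial in range(20):
    word = rng.integers(n_words)
    candidates = rng.choice(n_objects, size=3, replace=False)
    pick = choose(word, candidates)
    assoc[word, pick] += 0.1             # Hebbian-style strengthening
    familiarity[candidates] += 1.0       # exposure increments familiarity

# Retention test: novelty bonus gone; only learned associations remain
print(assoc.round(2))
```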

    Dynamic EEG analysis during language comprehension reveals interactive cascades between perceptual processing and sentential expectations

    Available online 18 October 2020.

    Understanding spoken language requires analysis of the rapidly unfolding speech signal at multiple levels: acoustic, phonological, and semantic. However, there is not yet a comprehensive picture of how these levels relate. We recorded electroencephalography (EEG) while listeners (N = 31) heard sentences in which we manipulated acoustic ambiguity (e.g., a bees/peas continuum) and sentential expectations (e.g., Honey is made by bees). EEG was analyzed with a mixed-effects model over time to quantify how language-processing cascades proceed on a millisecond-by-millisecond basis. Our results indicate that: (1) perceptual processing and memory for fine-grained acoustics are preserved in brain activity for up to 900 msec; (2) contextual analysis begins early and is graded with respect to the acoustic signal; and (3) top-down predictions influence perceptual processing in some cases; however, these predictions are available simultaneously with the veridical signal. These mechanistic insights provide a basis for a better understanding of the cortical language network.

    This work was supported by NIH grant DC008089 awarded to BM. This work was partially supported by the Basque Government through the BERC 2018–2021 program and by the Spanish State Research Agency through BCBL Severo Ochoa excellence accreditation SEV-2015-0490, as well as by a postdoctoral grant from the Spanish Ministry of Economy and Competitiveness (MINECO; reference FJCI-2016-28019), awarded to EK.
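    A minimal sketch (with assumed file and column names) of a millisecond-by-millisecond regression in the spirit of the analysis described: fit a mixed-effects model at each time sample and track the coefficient for the acoustic continuum step.

```python
# Sketch: time-resolved mixed-effects analysis of EEG amplitude.
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical long-format data: subject, time_ms, step, context, amp
eeg = pd.read_csv("eeg_long.csv")

betas = {}
for t, d in eeg.groupby("time_ms"):
    m = smf.mixedlm("amp ~ step * context", d, groups=d["subject"]).fit()
    betas[t] = m.params["step"]  # acoustic (continuum-step) effect at time t

# A sustained step effect over hundreds of ms would indicate preserved
# fine-grained acoustic information in brain activity.
print(pd.Series(betas).head())
```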

    Contingent categorization in speech perception

    This is an Accepted Manuscript of an article published by Taylor & Francis in Language, Cognition and Neuroscience in 2014, available online: http://www.tandfonline.com/10.1080/01690965.2013.824995.

    The speech signal is notoriously variable, with the same phoneme realized differently depending on factors like talker and phonetic context. Variance in the speech signal has led to a proliferation of theories of how listeners recognize speech. A promising approach, supported by computational modeling studies, is contingent categorization, wherein incoming acoustic cues are computed relative to expectations. We tested contingent encoding empirically. Listeners were asked to categorize fricatives in CV syllables constructed by splicing the fricative from one CV syllable with the vowel from another CV syllable. The two spliced syllables always contained the same fricative, providing consistent bottom-up cues; however, on some trials, the vowel and/or talker mismatched between these syllables, giving conflicting contextual information. Listeners were less accurate and slower at identifying the fricatives in mismatching splices. This suggests that listeners rely on contextual information beyond bottom-up acoustic cues during speech perception, providing support for contingent categorization.

    Development of Twitching in Sleeping Infant Mice Depends on Sensory Experience

    Myoclonic twitches are jerky movements that occur exclusively and abundantly during active (or REM) sleep in mammals, especially in early development [1–4]. In rat pups, limb twitches exhibit a complex spatiotemporal structure that changes across early development [5]. However, it is not known whether this developmental change is influenced by sensory experience, which is a prerequisite to the notion that sensory feedback from twitches not only activates sensorimotor circuits but also modifies them [4]. Here, we investigated the contributions of proprioception to twitching in newborn ErbB2 conditional knockout mice that lack muscle spindles and grow up to exhibit dysfunctional proprioception [6–8]. High-speed videography of forelimb twitches unexpectedly revealed a category of reflex-like twitching, comprising an agonist twitch followed immediately by an antagonist twitch, that developed postnatally in wild-types/heterozygotes but not in knockouts. Contrary to evidence from adults that spinal reflexes are inhibited during twitching [9–11], this finding suggests that twitches trigger the monosynaptic stretch reflex and, by doing so, contribute to its activity-dependent development [12–14]. Next, we assessed developmental changes in the frequency and organization (i.e., entropy) of more complex, multi-joint patterns of twitching; again, wild-types/heterozygotes exhibited developmental changes in twitch patterning that were not seen in knockouts. Thus, targeted deletion of a peripheral sensor alters the normal development of local and global features of twitching, demonstrating that twitching is shaped by sensory experience. These results also highlight the potential use of twitching as a uniquely informative diagnostic tool for assessing the functional status of spinal and supraspinal circuits.
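    A minimal sketch of quantifying the organization of multi-joint twitch patterns via Shannon entropy, as the abstract describes. The encoding of a "pattern" as the set of joints twitching within a time window is an assumption for illustration.

```python
# Sketch: Shannon entropy of the distribution of multi-joint twitch patterns.
from collections import Counter
import math

def pattern_entropy(patterns):
    """Shannon entropy (bits) of the observed pattern distribution."""
    counts = Counter(patterns)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Example: each tuple lists the joints that twitched within one time window
observed = [("shoulder",), ("elbow", "wrist"), ("shoulder",),
            ("shoulder", "elbow"), ("elbow", "wrist"), ("shoulder",)]
print(f"entropy = {pattern_entropy(observed):.2f} bits")
```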

    JWST Near-Infrared Detector Degradation: Finding the Problem, Fixing the Problem, and Moving Forward

    The James Webb Space Telescope (JWST) is the successor to the Hubble Space Telescope. JWST will be an infrared-optimized telescope with an approximately 6.5 m diameter primary mirror, located at the Sun-Earth L2 Lagrange point. Three of JWST's four science instruments use Teledyne HgCdTe HAWAII-2RG (H2RG) near-infrared detector arrays. During 2010, the JWST Project noticed that a few of its 5 micron cutoff H2RG detectors were degrading during room-temperature storage, and NASA chartered a "Detector Degradation Failure Review Board" (DD-FRB) to investigate. The DD-FRB determined that the root cause was a design flaw that allowed indium to interdiffuse with the gold contacts and migrate into the HgCdTe detector layer. Fortunately, Teledyne already had an improved design that eliminated this degradation mechanism. During early 2012, the improved H2RG design was qualified for flight, and JWST began making additional H2RGs. In this article, we present the two public DD-FRB "Executive Summaries" that (1) determined the root cause of the detector degradation and (2) defined tests to determine whether the existing detectors are qualified for flight. We supplement these with a brief introduction to H2RG detector arrays and a discussion of how the JWST Project is using cryogenic storage to retard the degradation rate of the existing flight-spare H2RGs.