
    On the Locus of L2 Lexical Fuzziness: Insights From L1 Spoken Word Recognition and Novel Word Learning

    Published: 08 July 2021

    The examination of how words are learned can offer valuable insights into the nature of lexical representations. For example, a common assessment of novel word learning is based on its ability to interfere with other words; given that words are known to compete with each other (Luce and Pisoni, 1998; Dahan et al., 2001), we can use the capacity of a novel word to interfere with the activation of other lexical representations as a measure of the degree to which it is integrated into the mental lexicon (Leach and Samuel, 2007). This measure allows us to assess novel word learning in L1 or L2, but also the degree to which representations from the two lexica interact with each other (Marian and Spivey, 2003). Despite the somewhat independent lines of research on L1 and L2 word learning, common patterns emerge across the two literatures (Lindsay and Gaskell, 2010; Palma and Titone, 2020). In both cases, lexicalization appears to follow a similar trajectory. In L1, newly encoded words often fail at first to engage in competition with known words, but they do so later, after they have been better integrated into the mental lexicon (Gaskell and Dumay, 2003; Dumay and Gaskell, 2012; Bakker et al., 2014). Similarly, L2 words generally have a facilitatory effect, which can, however, become inhibitory in the case of more robust (high-frequency) lexical representations. Despite the similar pattern, L1 lexicalization is described in terms of inter-lexical connections (Leach and Samuel, 2007) leading to more automatic processing (McMurray et al., 2016), whereas in L2 word learning, lack of lexical inhibition is attributed to less robust (i.e., fuzzy) L2 lexical representations. Here, I point to these similarities and use them to argue that a common mechanism may underlie the similar patterns across the two literatures.

    Support for this project was provided by the Spanish Ministry of Economy and Competitiveness, through the Juan de la Cierva-Formación fellowship FJCI-2016-28019, awarded to ECK. This work was partially supported by the Basque Government through the BERC 2018-2021 program, and by the Spanish State Research Agency through BCBL Severo Ochoa excellence accreditation SEV-2015-0490. This project has received funding from the European Union’s Horizon 2020 Research and Innovation Program, under the Marie Skłodowska-Curie grant agreement No 793919, awarded to ECK.

    Reconciling the Contradictory Effects of Production on Word Learning: Production May Help at First, but It Hurts Later

    Published: March 2022

    Does saying a novel word help to recognize it later? Previous research on the effect of production on this aspect of word learning is inconclusive, as both facilitatory and detrimental effects of production have been reported. In a set of three experiments, we sought to reconcile the seemingly contrasting findings by disentangling the production effect from other effects. In Experiment 1, participants learned eight new words and their visual referents. On each trial, participants heard a novel word twice: either (a) by hearing the same speaker produce it twice (Perception-Only condition) or (b) by first hearing the speaker once and then producing it themselves (Production condition). At test, participants saw two pictures while hearing a novel word and were asked to choose its correct referent. Experiment 2 was identical to Experiment 1, except that in the Perception-Only condition each word was spoken by two different speakers (equalizing talker variability between conditions). Experiment 3 was identical to Experiment 2, but at test the words were spoken by a novel speaker to assess the generalizability of the effect. Accuracy, reaction time, and eye movements to the target image were collected. Production had a facilitatory effect during early stages of learning (after short training), but its effect became detrimental after additional training. The results help to reconcile conflicting findings regarding the role of production in word learning. This work is relevant to a wide range of research on human learning in showing that the same factor may play a different role at different stages of learning.

    Support for this project was provided by the Spanish Ministry of Science and Innovation, Grant PSI2017-82563-P, awarded to Arthur G. Samuel, and by the Spanish Ministry of Economy and Competitiveness through the Juan de la Cierva-Formación fellowship FJCI-2016-28019, awarded to Efthymia C. Kapnoula. This work was partially supported by the Basque Government through the BERC 2018-2021 and BERC 2022-2025 programs, and by the Spanish State Research Agency through BCBL Severo Ochoa excellence accreditation SEV-2015-0490 and CEX2020-001010-S. This project has received funding from the European Union’s Horizon 2020 research and innovation program, under the Marie Skłodowska-Curie grant agreement 793919, awarded to Efthymia C. Kapnoula.

    Idiosyncratic use of bottom-up and top-down information leads to differences in speech perception flexibility: Converging evidence from ERPs and eye-tracking

    Available online 8 October 2021.

    Listeners generally categorize speech sounds in a gradient manner. However, recent work using a visual analogue scaling (VAS) task suggests that some listeners show more categorical performance, leading to less flexible cue integration and poorer recovery from misperceptions (Kapnoula et al., 2017, 2021). We asked how individual differences in speech gradiency can be reconciled with the well-established gradiency in the modal listener, showing how VAS performance relates to both visual world paradigm and EEG measures of gradiency. We also investigated three potential sources of these individual differences: inhibitory control, lexical inhibition, and early cue encoding. We used the N1 ERP component to track pre-categorical encoding of Voice Onset Time (VOT). The N1 tracked VOT linearly, reflecting fundamentally gradient speech perception; however, for less gradient listeners, this linearity was disrupted near the category boundary. Thus, while all listeners are gradient, they may show idiosyncratic encoding of specific cues, which affects downstream processing.

    This project was supported by NIH Grant DC008089, awarded to BM. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 793919, awarded to EK. This work was partially supported by the Basque Government through the BERC 2018-2021 program and by the Spanish State Research Agency through BCBL Severo Ochoa excellence accreditation SEV-2015-0490.
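
    To make the linearity analysis concrete, the following is a minimal sketch (in Python) of how a listener's N1 amplitudes could be regressed on VOT: a strong linear fit indicates gradient, pre-categorical encoding, whereas a step-like pattern near the category boundary weakens it. The function name, VOT steps, and amplitude values below are illustrative assumptions, not the study's data or analysis pipeline.

        import numpy as np

        def n1_vot_linearity(vot_ms, n1_amp_uv):
            # Fit N1 amplitude as a linear function of VOT; return the slope and R^2.
            vot = np.asarray(vot_ms, dtype=float)
            amp = np.asarray(n1_amp_uv, dtype=float)
            slope, intercept = np.polyfit(vot, amp, 1)
            pred = slope * vot + intercept
            ss_res = np.sum((amp - pred) ** 2)
            ss_tot = np.sum((amp - amp.mean()) ** 2)
            return slope, 1.0 - ss_res / ss_tot

        # Hypothetical mean N1 amplitudes (microvolts) at each step of a 0-40 ms VOT continuum.
        vot_steps = np.arange(0.0, 45.0, 5.0)
        gradient_listener = -4.0 + 0.05 * vot_steps                    # roughly linear tracking
        categorical_listener = np.where(vot_steps < 20, -4.0, -2.0)    # step near the boundary
        print(n1_vot_linearity(vot_steps, gradient_listener))      # high R^2: linear encoding
        print(n1_vot_linearity(vot_steps, categorical_listener))   # lower R^2: linearity disrupted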

    Voices in the mental lexicon: Words carry indexical information that can affect access to their meaning

    Available online 11 May 2019.

    The speech signal carries both linguistic and non-linguistic information (e.g., a talker’s voice qualities, referred to as indexical information). There is evidence that indexical information can affect some aspects of spoken word recognition, but we still do not know whether and how it can affect access to a word’s meaning. A few studies support a dual-route model, in which inferences about the talker can guide access to meaning via a route external to the mental lexicon. It remains unclear whether indexical information is also encoded within the mental lexicon. The present study tests for indexical effects on spoken word recognition and referent selection within the mental lexicon. In two experiments, we manipulated voice-to-referent co-occurrence while preventing participants from using indexical information in an explicit way. Participants learned novel words (e.g., bifa) and their meanings (e.g., kite), with each talker’s voice linked (via systematic co-occurrence) to a specific referent (e.g., bifa spoken by speaker 1 referred to a specific picture of a kite). At test, the voice-to-referent mapping either matched that of training (congruent) or did not (incongruent). Participants’ looks to the target’s referent were used as an index of lexical activation. Listeners looked at a target’s referent faster on congruent than on incongruent trials. The same pattern of results was observed in a third experiment, in which testing took place 24 hours later. These results show that indexical information can be encoded in lexical representations and affect spoken word recognition and referent selection. Our findings are consistent with episodic and distributed views of the mental lexicon that assume multi-dimensional lexical representations.

    Support for this project was provided by the Spanish Ministry of Science and Innovation, Grants PSI2014-53277 and PSI2017-82563-P, awarded to A.G.S., the Spanish Ministry of Economy and Competitiveness, Juan de la Cierva-Formación fellowship, awarded to E.C.K., and the Spanish Ministry of Economy and Competitiveness “Severo Ochoa” Programme for Centres/Units of Excellence in R&D (SEV-2015-490).

    Gradient Activation of Speech Categories Facilitates Listeners’ Recovery From Lexical Garden Paths, But Not Perception of Speech-in-Noise

    Published: April 2021

    Listeners activate speech-sound categories in a gradient way, and this information is maintained and affects activation of items at higher levels of processing (McMurray et al., 2002; Toscano et al., 2010). Recent findings by Kapnoula et al. (2017) suggest that the degree to which listeners maintain within-category information varies across individuals. Here we assessed the consequences of this gradiency for speech perception. To test this, we collected a measure of gradiency for different listeners using the visual analogue scaling (VAS) task used by Kapnoula et al. (2017). We also collected two independent measures of speech perception performance: a visual world paradigm (VWP) task measuring participants’ ability to recover from lexical garden paths (McMurray et al., 2009) and a speech-perception task measuring participants’ perception of isolated words in noise. Our results show that categorization gradiency does not predict participants’ performance in the speech-in-noise task. However, higher gradiency predicted a higher likelihood of recovery from temporarily misleading information presented in the VWP task. These results suggest that gradient activation of speech-sound categories is helpful when listeners need to reconsider their initial interpretation of the input, making them more efficient in recovering from errors.

    This project was supported by National Institutes of Health Grant DC008089, awarded to Bob McMurray. This work was partially supported by the Basque Government through the BERC 2018-2021 Program and by the Spanish State Research Agency through BCBL Severo Ochoa excellence accreditation SEV-2015-0490. This project was partially supported by the Spanish Ministry of Economy and Competitiveness (MINECO) through the convocatoria 2016 Subprograma Estatal Ayudas para contratos para la Formación Posdoctoral 2016, Programa Estatal de Promoción del Talento y su Empleabilidad del Plan Estatal de Investigación Científica y Técnica y de Innovación 2013-2016, reference FJCI-2016-28019, awarded to Efthymia C. Kapnoula. This project has received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie Grant 793919, awarded to Efthymia C. Kapnoula.
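
    One common way to quantify categorization gradiency from VAS data is the slope of a response function fitted to ratings across the continuum, with shallower slopes indicating more gradient categorization. The sketch below illustrates this idea with a four-parameter logistic fit; the ratings, continuum steps, and fitting function are hypothetical and are not necessarily the exact index used in the study.

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(x, lower, upper, x0, slope):
            # Four-parameter logistic rating function.
            return lower + (upper - lower) / (1.0 + np.exp(-slope * (x - x0)))

        # Hypothetical mean VAS ratings (0 = clearly /b/, 100 = clearly /p/) at each VOT step.
        vot_steps = np.arange(0.0, 45.0, 5.0)
        ratings = np.array([5.0, 8.0, 15.0, 30.0, 50.0, 70.0, 85.0, 92.0, 95.0])

        p0 = [ratings.min(), ratings.max(), vot_steps.mean(), 0.5]
        (lower, upper, x0, slope), _ = curve_fit(logistic, vot_steps, ratings, p0=p0, maxfev=10000)

        # A shallower slope means more gradient (less categorical) ratings.
        print(f"category boundary ~ {x0:.1f} ms VOT, slope = {slope:.2f}")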

    Any leftovers from a discarded prediction? Evidence from eye-movements during sentence comprehension

    Published online: 23 May 2019.

    We investigated how listeners use gender-marked adjectives to adjust lexical predictions during sentence comprehension. Participants listened to sentence fragments in Spanish (e.g., “The witch flew to the village on her…”) that created an expectation for a specific noun (broomstick, fem.) and that were completed by an adjective and a noun. The adjective either agreed (new, fem.), disagreed (new, masc.), or was neutral (big, fem./masc.) with respect to the expected noun’s gender. Using the visual world paradigm, we monitored looks toward images of the expected noun versus an alternative of the opposite gender (helicopter, masc.). While listening to the initial fragment, participants looked more towards the expected noun. Once the adjective was heard, looks shifted toward the noun that matched the adjective’s gender. Finally, upon hearing the noun, looks were affected by both the previous context and the adjective’s gender. We conclude that predictions are updated online based on gender cues, but that sentence context still affects integration of the expected noun.

    This work was partially supported by the Ministerio de Economía, Industria y Competitividad, Gobierno de España, the Agencia Estatal de Investigación and the Fondo Europeo de Desarrollo Regional (grant PSI2015-65694-P, “Severo Ochoa” programme SEV-2015-490 for Centres of Excellence in R&D), and by the Eusko Jaurlaritza (grant PI_2016_1_0014). Further support derived from the AThEME project funded by the European Commission Seventh Framework Programme and from European Research Council grant ERC-2011-ADG-295362. This project was also supported by the Ministerio de Economía, Industria y Competitividad, Gobierno de España through the convocatoria 2016 Subprograma Estatal Ayudas para contratos para la Formación Posdoctoral 2016, Programa Estatal de Promoción del Talento y su Empleabilidad del Plan Estatal de Investigación Científica y Técnica y de Innovación 2013-2016, reference FJCI-2016-28019.

    Dynamic EEG analysis during language comprehension reveals interactive cascades between perceptual processing and sentential expectations

    Available online 18 October 2020.

    Understanding spoken language requires analysis of the rapidly unfolding speech signal at multiple levels: acoustic, phonological, and semantic. However, there is not yet a comprehensive picture of how these levels relate. We recorded electroencephalography (EEG) while listeners (N = 31) heard sentences in which we manipulated acoustic ambiguity (e.g., a bees/peas continuum) and sentential expectations (e.g., Honey is made by bees). EEG was analyzed with a mixed-effects model over time to quantify how language processing cascades proceed on a millisecond-by-millisecond basis. Our results indicate that: (1) perceptual processing and memory for fine-grained acoustics are preserved in brain activity for up to 900 msec; (2) contextual analysis begins early and is graded with respect to the acoustic signal; and (3) top-down predictions influence perceptual processing in some cases; however, these predictions are available simultaneously with the veridical signal. These mechanistic insights provide a basis for a better understanding of the cortical language network.

    This work was supported by NIH grant DC008089, awarded to BM. This work was partially supported by the Basque Government through the BERC 2018-2021 program and by the Spanish State Research Agency through BCBL Severo Ochoa excellence accreditation SEV-2015-0490, as well as by a postdoctoral grant from the Spanish Ministry of Economy and Competitiveness (MINECO; reference FJCI-2016-28019), awarded to EK.
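
    As a minimal sketch of the general analysis idea (not the authors' actual pipeline), one can fit a separate mixed-effects regression of EEG amplitude on the acoustic (continuum step) and contextual predictors at each time bin, with random intercepts by subject, and then track the fixed-effect estimates over time. The column names and the tiny synthetic dataset below are assumptions for illustration only.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        def timecourse_mixed_model(df):
            # At each time bin, fit amplitude ~ vot + context with random intercepts by
            # subject, and collect the fixed-effect estimates across bins.
            rows = []
            for t, chunk in df.groupby("time_ms"):
                m = smf.mixedlm("amplitude ~ vot + context", chunk, groups=chunk["subject"]).fit()
                rows.append({"time_ms": t, "beta_vot": m.params["vot"], "beta_context": m.params["context"]})
            return pd.DataFrame(rows)

        # Tiny synthetic dataset: 8 subjects x 5 continuum steps x 2 contexts x 3 time bins.
        rng = np.random.default_rng(0)
        subj, vot, ctx, t = np.meshgrid(np.arange(8), np.arange(5), [0, 1], [100, 200, 300], indexing="ij")
        df = pd.DataFrame({"subject": subj.ravel(), "vot": vot.ravel(),
                           "context": ctx.ravel(), "time_ms": t.ravel()})
        df["amplitude"] = 0.3 * df["vot"] + 0.5 * df["context"] + rng.normal(0, 1, len(df))
        print(timecourse_mixed_model(df))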

    Effect of deep brain stimulation on vocal motor control mechanisms in Parkinson's disease

    Published online: March 07, 2019

    Objectives: Deep brain stimulation (DBS) of the subthalamic nucleus (STN) is an established treatment for motor symptoms in Parkinson's disease (PD); however, its effect on vocal motor function has yielded conflicting and highly variable results. The present study investigated the effects of STN-DBS on the mechanisms of vocal production and motor control.

    Methods: A total of 10 PD subjects with bilateral STN-DBS implantation were tested with DBS ON and OFF while they performed steady vowel vocalizations and received randomized upward or downward pitch-shift stimuli (±100 cents) in their voice auditory feedback.

    Results: Data showed that the magnitude of vocal compensation responses to pitch-shift stimuli was significantly attenuated during DBS ON vs. OFF (p = 0.012). This effect was direction-specific and was only observed when subjects raised their voice fundamental frequency (F0) in the direction opposite to downward stimuli (p = 0.019). In addition, we found that voice F0 perturbation (i.e., jitter) was significantly reduced during DBS ON vs. OFF (p = 0.022), and this DBS-induced modulation was positively correlated with the attenuation of vocal compensation responses to downward pitch-shift stimuli (r = +0.57, p = 0.028).

    Conclusions: These findings provide the first data supporting the role of the STN in vocal F0 motor control in response to altered auditory feedback. The DBS-induced attenuation of vocal compensation responses may result from increased inhibitory effects of the subcortical hyperdirect (fronto-subthalamic) pathways on the vocal motor cortex, which can help stabilize voice F0 and ameliorate vocal motor symptoms by impeding PD subjects’ abnormal (i.e., overshooting) vocal responses to alterations in the auditory feedback.
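
    The two dependent measures above, vocal compensation magnitude and voice F0 perturbation (jitter), can be made concrete with a small sketch. The sampling rate, baseline and response windows, and function names below are illustrative assumptions, not the study's actual analysis parameters.

        import numpy as np

        def f0_to_cents(f0_hz, ref_hz):
            # Convert an F0 trace to cents relative to a reference frequency.
            return 1200.0 * np.log2(np.asarray(f0_hz, dtype=float) / ref_hz)

        def compensation_magnitude(f0_hz, fs, stim_onset_s, baseline_s=0.2, window=(0.05, 0.4)):
            # Mean F0 change (in cents) after the pitch-shift stimulus, relative to the
            # pre-stimulus baseline; positive values mean the speaker raised F0.
            f0 = np.asarray(f0_hz, dtype=float)
            onset = int(stim_onset_s * fs)
            baseline = f0[onset - int(baseline_s * fs):onset].mean()
            cents = f0_to_cents(f0, baseline)
            lo, hi = (int((stim_onset_s + w) * fs) for w in window)
            return cents[lo:hi].mean()

        def local_jitter(periods_s):
            # Local jitter: mean absolute difference between consecutive glottal periods
            # divided by the mean period, expressed as a percentage.
            p = np.asarray(periods_s, dtype=float)
            return 100.0 * np.mean(np.abs(np.diff(p))) / p.mean()

        # Hypothetical example: F0 sampled at 100 Hz; after a downward shift at t = 1.0 s
        # the speaker produces an opposing (upward) response of about 20 cents.
        fs = 100
        t = np.arange(0.0, 2.0, 1.0 / fs)
        f0 = 120.0 * np.ones_like(t)
        f0[t >= 1.05] *= 2 ** (20 / 1200)
        print(compensation_magnitude(f0, fs, stim_onset_s=1.0))    # ~20 cents
        print(local_jitter([0.00833, 0.00835, 0.00832, 0.00836]))  # ~0.36 %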

    Eye-tracking the time-course of novel word learning and lexical competition in adults and children

    Lexical competition is a hallmark of proficient, automatic word recognition. Previous research suggests that there is a delay before a new spoken word becomes engaged in this process, with sleep playing an important role. However, data from one method, the visual world paradigm, consistently show competition without a delay. We trained 42 adults and 40 children (aged 7-8) on novel word-object pairings and employed this paradigm to measure the time-course of lexical competition. Fixations to novel objects upon hearing existing words (e.g., looks to the novel object biscal upon hearing “click on the biscuit”) were compared to fixations on untrained objects. Novel word-object pairings learned immediately before testing and those learned the previous day exhibited significant competition effects, with stronger competition for the previous-day pairings for children but not adults. Crucially, this competition effect was significantly smaller for novel than for existing competitors (e.g., looks to candy upon hearing “click on the candle”), suggesting that novel items may not compete for recognition like fully fledged lexical items, even after 24 hours. Explicit memory (cued recall) was superior for words learned the day before testing, particularly for children; this effect (but not the lexical competition effects) correlated with sleep-spindle density. Together, the results suggest that different aspects of new word learning follow different time courses: visual world competition effects can emerge swiftly, but they are qualitatively different from those observed with established words and are less reliant upon sleep. Furthermore, the findings fit with the view that word learning earlier in development is boosted by sleep to a greater degree.
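
    As a minimal, hypothetical sketch of how such a competition effect could be computed from long-format fixation data: take the proportion of fixations to the trained novel object minus the proportion to an untrained control object within an analysis window time-locked to the onset of the existing word. The column names, region-of-interest labels, and window below are illustrative, not the study's actual analysis.

        import pandas as pd

        def competition_effect(fixations, window_ms=(300, 1000)):
            # Fixation proportion to the trained novel object minus the proportion to an
            # untrained control object, within the analysis window.
            w = fixations[fixations["time_ms"].between(*window_ms)]
            by_roi = w.groupby("roi")["fixated"].mean()
            return by_roi.get("novel_competitor", 0.0) - by_roi.get("untrained_control", 0.0)

        # Hypothetical long-format sample: one row per trial x time sample x region of interest.
        df = pd.DataFrame({
            "time_ms": [400, 400, 800, 800, 1200, 1200],
            "roi":     ["novel_competitor", "untrained_control"] * 3,
            "fixated": [1, 0, 1, 0, 0, 1],
        })
        print(competition_effect(df))  # 1.0 - 0.0 in this toy example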

    Stroop effects from newly learned color words: effects of memory consolidation and episodic context

    The Stroop task is an excellent tool to test whether reading a word automatically activates its associated meaning, and it has been widely used in mono- and bilingual contexts. Despite its ubiquity, the task has not yet been employed to test the automaticity of recently established word-concept links in novel-word-learning studies under strict experimental control of learning and testing conditions. In three experiments, we therefore paired novel words with native-language (German) color words via lexical association and subsequently tested these words in a manual version of the Stroop task. Two crucial findings emerged: when novel-word Stroop trials appeared intermixed among native-word trials, the novel-word Stroop effect was observed immediately after the learning phase; if no native color words were present in a Stroop block, the novel-word Stroop effect only emerged 24 hours later. These results suggest that the automatic availability of a novel word's meaning depends on supportive context from the learning episode and/or on sufficient time for memory consolidation. We discuss how these results can be reconciled with the complementary learning systems account of word learning.
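
    The novel-word Stroop effect discussed above is simply the difference in mean response time between incongruent and congruent trials, computed separately for each word type and block composition. The sketch below illustrates the computation on hypothetical trial-level data; all column names and values are assumptions.

        import pandas as pd

        def stroop_effect(trials):
            # Stroop effect (incongruent minus congruent mean RT, in ms) for correct trials,
            # split by word type (novel vs. native) and block composition.
            ok = trials[trials["correct"]]
            rt = ok.groupby(["word_type", "block", "congruency"])["rt_ms"].mean().unstack("congruency")
            return rt["incongruent"] - rt["congruent"]

        # Hypothetical trial-level data from a mixed block.
        df = pd.DataFrame({
            "word_type":  ["novel"] * 4 + ["native"] * 4,
            "block":      ["mixed"] * 8,
            "congruency": ["congruent", "incongruent"] * 4,
            "correct":    [True] * 8,
            "rt_ms":      [520, 575, 530, 570, 480, 545, 470, 555],
        })
        print(stroop_effect(df))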