Modeling the Learning of the Person Case Constraint
Many domains of linguistic research posit feature bundles as an explanation for various phenomena. Such hypotheses are often evaluated on their simplicity (or parsimony). We take a complementary approach. Specifically, we evaluate different hypotheses about the representation of person features in syntax on the basis of their implications for learning the Person Case Constraint (PCC). The PCC refers to a phenomenon where certain combinations of clitics (pronominal bound morphemes) are disallowed with ditransitive verbs. We compare a simple theory of the PCC, where person features are represented as atomic units, to a feature-based theory of the PCC, where person features are represented as feature bundles. We use Bayesian modeling to compare these theories, using data based on realistic proportions of clitic combinations from child-directed speech. We find that both theories can learn the target grammar given enough data, but that the feature-based theory requires significantly less data, suggesting that developmental trajectories could provide insight into syntactic representations in this domain.
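As a rough illustration of the kind of Bayesian comparison involved (not the authors' actual model), the sketch below applies a size-principle likelihood to two hypothetical grammars over clitic combinations: a grammar that licenses fewer combinations assigns higher probability to data consistent with it, so its posterior grows faster as input accumulates. The hypothesis inventories, the simulated data, and the uniform prior are all assumptions made for illustration.

```python
from math import log

# Hypothetical clitic combinations, coded as (person of indirect object,
# person of direct object).
ALL_COMBOS = [(io, do) for io in (1, 2, 3) for do in (1, 2, 3)]
STRONG_PCC = [(io, do) for (io, do) in ALL_COMBOS if do == 3]  # direct object must be 3rd person

HYPOTHESES = {
    "unrestricted": ALL_COMBOS,   # grammar licensing every combination
    "strong_pcc":   STRONG_PCC,   # grammar ruling out 1st/2nd person direct objects
}

def log_posterior(observations, licensed, log_prior=log(0.5)):
    """Size-principle score: log prior plus n * log(1/|licensed|),
    or -inf if any observed combination is unlicensed."""
    if any(obs not in licensed for obs in observations):
        return float("-inf")
    return log_prior - len(observations) * log(len(licensed))

# Simulated input containing only PCC-obeying combinations.
data = [(3, 3), (1, 3), (2, 3), (3, 3), (1, 3)]
for name, licensed in HYPOTHESES.items():
    print(name, round(log_posterior(data, licensed), 2))
```

With only five observations the more restrictive grammar already has the higher posterior, and the gap widens with every additional consistent datum; the same logic underlies the comparison between atomic and feature-based representations, which differ in how quickly they converge on the target grammar.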
A role for the developing lexicon in phonetic category acquisition
Infants segment words from fluent speech during the same period when they are learning phonetic categories, yet accounts of phonetic category acquisition typically ignore information about the words in which sounds appear. We use a Bayesian model to illustrate how feedback from segmented words might constrain phonetic category learning by providing information about which sounds occur together in words. Simulations demonstrate that word-level information can successfully disambiguate overlapping English vowel categories. Learning patterns in the model are shown to parallel human behavior from artificial language learning tasks. These findings point to a central role for the developing lexicon in phonetic category acquisition and provide a framework for incorporating top-down constraints into models of category learning.
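A toy illustration (not the model from the paper) of how word-level feedback can disambiguate overlapping categories: a token that is ambiguous on acoustics alone becomes much less ambiguous once the word frame it occurred in, and that word's usual vowel, are taken into account. The category distributions, the two-word lexicon, and the prior strength are invented for the example.

```python
from statistics import NormalDist

# Two heavily overlapping hypothetical vowel categories (one acoustic dimension).
CAT_A = NormalDist(mu=0.0, sigma=1.0)
CAT_B = NormalDist(mu=1.0, sigma=1.0)

# Hypothetical minimal lexicon: each word frame contains one vowel category.
WORD_VOWEL = {"word1": "A", "word2": "B"}

def posterior_A(x):
    """P(category A | acoustics), assuming equal category priors."""
    pa, pb = CAT_A.pdf(x), CAT_B.pdf(x)
    return pa / (pa + pb)

def posterior_A_given_word(x, word, reliability=0.95):
    """P(category A | acoustics, word): the segmented word supplies a strong prior."""
    prior_a = reliability if WORD_VOWEL[word] == "A" else 1 - reliability
    pa, pb = prior_a * CAT_A.pdf(x), (1 - prior_a) * CAT_B.pdf(x)
    return pa / (pa + pb)

token = 0.5  # acoustically ambiguous token near the category boundary
print(round(posterior_A(token), 2))                      # ~0.5: ambiguous
print(round(posterior_A_given_word(token, "word1"), 2))  # ~0.95: disambiguated
```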
The (Un)Surprising Kindergarten Path
During sentence comprehension, listeners form expectations about likely structures before they have reached the end of a sentence. Children are more likely than adults to ignore late-arriving evidence when it contradicts their initial parse. While this difference is often ascribed to developmental changes in executive function, this paper investigates whether statistical properties of child-directed speech could be responsible for children’s failure to revise temporarily ambiguous sentences. We examined well-studied garden-path sentences and calculated surprisal values derived from adult and child-directed corpora at each word. For adult corpora, surprisal was highest where the sentence structure was disambiguated. For child corpora, however, values at the disambiguating region were low relative to other words in the sentence. This suggests that for children, the disambiguating words may be statistically weak cues to ruling out their original parse, and that in principle, the statistics of child-directed speech could contribute to children’s difficulty with garden-path sentences.
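For concreteness, surprisal at a word is the negative log probability of that word given its context, estimated from a corpus. The sketch below computes per-word surprisal with an add-one-smoothed bigram model over a toy corpus; the corpora and language models used in the paper are larger and different, so this only illustrates the quantity being compared between adult and child-directed speech.

```python
from collections import Counter
from math import log2

def train_bigram(sentences):
    """Return a surprisal(prev, word) function from an add-one-smoothed bigram model."""
    unigrams, bigrams = Counter(), Counter()
    for sent in sentences:
        words = ["<s>"] + sent.split()
        unigrams.update(words)
        bigrams.update(zip(words, words[1:]))
    vocab = len(unigrams)
    def surprisal(prev, word):
        p = (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab)
        return -log2(p)
    return surprisal

corpus = ["the horse raced past the barn", "the horse fell"]  # toy data
surprisal = train_bigram(corpus)
sentence = "the horse raced past the barn fell".split()
for prev, word in zip(["<s>"] + sentence, sentence):
    print(f"{word:>6}: {surprisal(prev, word):.2f} bits")
```

The paper's claim can then be read off such profiles: in adult corpora the disambiguating word carries a surprisal peak, whereas in child-directed corpora that peak is comparatively weak.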
Normalization may be ineffective for phonetic category learning
Sound categories often overlap in their acoustics, which can make phonetic learning difficult. Several studies have argued that normalizing acoustics relative to context improves category separation (e.g., Dillon et al., 2013). However, recent work shows that normalization is ineffective for learning Japanese vowel length from spontaneous child-directed speech (Hitczenko et al., 2018). We show that this discrepancy arises from differences between spontaneous and controlled lab speech, and that normalization can increase category overlap when there are regularities in the contexts in which different sounds occur, a hallmark of spontaneous speech. Therefore, normalization is unlikely to help in real, naturalistic phonetic learning situations.
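A toy demonstration of the mechanism described above (not the simulations from either paper): when category membership is correlated with context, as in spontaneous speech, subtracting each context's mean also removes much of the category difference. The duration distributions and context assignments below are invented.

```python
import random
from statistics import mean, pstdev

random.seed(1)
# Hypothetical durations (ms) for short vs. long vowel categories, each
# occurring mostly in its own context (e.g., a particular following consonant).
short = [("ctx1", random.gauss(80, 10)) for _ in range(200)]
long_ = [("ctx2", random.gauss(120, 10)) for _ in range(200)]

def normalize(tokens):
    """Subtract each context's mean duration from its tokens."""
    ctx_means = {c: mean(v for cc, v in tokens if cc == c)
                 for c in {c for c, _ in tokens}}
    return [(c, v - ctx_means[c]) for c, v in tokens]

def separation(a, b):
    """Category mean difference in pooled-SD units (rough overlap measure)."""
    va, vb = [v for _, v in a], [v for _, v in b]
    pooled = (pstdev(va) + pstdev(vb)) / 2
    return abs(mean(va) - mean(vb)) / pooled

print("raw separation:       ", round(separation(short, long_), 2))
both = normalize(short + long_)
print("normalized separation:", round(separation(both[:200], both[200:]), 2))
```

Because each category here occurs almost exclusively in its own context, normalization collapses the raw four-standard-deviation separation to nearly zero, which is the failure mode the abstract describes.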
Do Infants Really Learn Phonetic Categories?
Early changes in infants’ ability to perceive native and nonnative speech sound contrasts are typically attributed to their developing knowledge of phonetic categories. We critically examine this hypothesis and argue that there is little direct evidence of category knowledge in infancy. We then propose an alternative account in which infants’ perception changes because they are learning a perceptual space that is appropriate to represent speech, without yet carving up that space into phonetic categories. If correct, this new account has substantial implications for understanding early language development.
Modeling Substitution Errors in Spanish Morphology Learning
In early stages of language acquisition, children often make inflectional errors on regular verbs, e.g., Spanish-speaking children produce –a (present-tense 3rd person singular) when other inflections are expected. Most previous models of morphology learning have focused on later stages of learning relating to production of irregular verbs. We propose a computational model of Spanish inflection learning to examine the earlier stages of learning and present a novel data set of gold-standard inflectional annotations for Spanish verbs. Our model replicates data from Spanish-learning children, capturing the acquisition order of different inflections and correctly predicting the substitution errors they make. Analyses show that the learning trajectory can be explained as a result of the gradual acquisition of inflection-meaning associations. Ours is the first computational model to provide an explanation for this acquisition trajectory in Spanish, and represents a theoretical advance more generally in explaining substitution errors in early morphology learning.
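The substitution pattern can be sketched schematically (this is not the paper's model): if production reflects gradually accumulating inflection-meaning associations smoothed toward each inflection's overall frequency, then early in learning the frequent 3rd person singular present -a wins even for other intended meanings, and the correct form takes over only once enough evidence has accumulated. The suffix inventory, frequencies, counts, and prior strength below are all invented.

```python
# Hypothetical overall suffix frequencies in child-directed Spanish.
SUFFIX_FREQ = {"-a": 0.60, "-o": 0.25, "-an": 0.15}
ALPHA = 20.0  # strength of the frequency-based prior (an assumption)

def produce(meaning_counts):
    """Pick the suffix maximizing count(suffix, meaning) + ALPHA * P(suffix)."""
    scores = {s: meaning_counts.get(s, 0) + ALPHA * p
              for s, p in SUFFIX_FREQ.items()}
    return max(scores, key=scores.get)

# Co-occurrence counts of suffixes with the meaning "3rd person plural"
# after a little vs. a lot of exposure (illustrative numbers).
early = {"-an": 5}     # few 3pl tokens observed so far
late = {"-an": 60}     # more exposure to 3pl contexts
print(produce(early))  # "-a"  -> substitution error
print(produce(late))   # "-an" -> correct inflection
```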
A phonetic model of non-native spoken word processing
Non-native speakers show difficulties with spoken word processing. Many studies attribute these difficulties to imprecise phonological encoding of words in lexical memory. We test an alternative hypothesis: that some of these difficulties can arise from non-native speakers' phonetic perception. We train a computational model of phonetic learning, which has no access to phonology, on either one or two languages. We first show that the model exhibits predictable behaviors on phone-level and word-level discrimination tasks. We then test the model on a spoken word processing task, showing that phonology may not be necessary to explain some of the word processing effects observed in non-native speakers. We run an additional analysis of the model's lexical representation space, showing that the two training languages are not fully separated in that space, similarly to the languages of a bilingual human speaker.
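As an illustration of the kind of discrimination test mentioned above (the paper's model, input features, and distance metric may differ), the sketch below scores ABX discrimination over representation vectors: a trial counts as correct when X is closer to a same-category token A than to an other-category token B. The embeddings are random stand-ins for actual model representations.

```python
import numpy as np

def cosine_distance(u, v):
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def abx_correct(a, b, x):
    """True if X is closer to the same-category token A than to B."""
    return cosine_distance(a, x) < cosine_distance(b, x)

def abx_accuracy(cat_a_tokens, cat_b_tokens):
    """Accuracy over all (A, B, X) triples with A, X from one category, B from the other."""
    trials = [abx_correct(a, b, x)
              for i, x in enumerate(cat_a_tokens)
              for j, a in enumerate(cat_a_tokens) if i != j
              for b in cat_b_tokens]
    return sum(trials) / len(trials)

# Hypothetical embeddings of tokens from two phone (or word) categories.
rng = np.random.default_rng(0)
cat_a = rng.normal(0.0, 1.0, size=(20, 16)) + 1.0
cat_b = rng.normal(0.0, 1.0, size=(20, 16)) - 1.0
print(round(abx_accuracy(cat_a, cat_b), 2))
```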
Infant phonetic learning as perceptual space learning: A crosslinguistic evaluation of computational models
- …