No frills : Simple regularities in language can go a long way in the development of word knowledge
Recent years have seen a flourishing of Natural Language Processing models that can mimic many aspects of human language fluency. These models harness a simple, decades-old idea: it is possible to learn a lot about word meanings just from exposure to language, because words similar in meaning are used in language in similar ways. The successes of these models raise the intriguing possibility that exposure to word use in language also shapes the word knowledge that children amass during development. However, this possibility is strongly challenged by the fact that the models use language input and learning mechanisms that may be unavailable to children. Across three studies, we found that unrealistically complex input and learning mechanisms are unnecessary. Instead, simple regularities of word use in children's language input, which children have the capacity to learn, can foster knowledge about word meanings. Thus, exposure to language may play a simple but powerful role in children's growing word knowledge. A video abstract of this article can be viewed at https://youtu.be/dT83dmMffnM. RESEARCH HIGHLIGHTS: Natural Language Processing (NLP) models can learn that words are similar in meaning from higher-order statistical regularities of word use. Unlike NLP models, infants and children may primarily learn only simple co-occurrences between words. We show that infants' and children's language input is rich in simple co-occurrences that can support learning similarities in meaning between words. We find that simple co-occurrences can explain infants' and children's knowledge that words are similar in meaning.
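The core idea above, that words similar in meaning occur in similar contexts, can be sketched with a minimal example. The toy sentences and the code are illustrative assumptions of mine, not the study's corpus or models: even raw first-order co-occurrence counts give words that share contexts similar profiles.

```python
from collections import Counter
from itertools import combinations
import math

# Toy child-directed-style sentences (hypothetical, not the study's corpus).
sentences = [
    "the dog ate the food",
    "the cat ate the food",
    "the dog chased the ball",
    "the cat chased the ball",
    "the truck carried the load",
    "the car carried the load",
]

# First-order co-occurrence: count how often each pair of distinct words
# appears together in the same sentence.
cooc = Counter()
vocab = set()
for s in sentences:
    words = set(s.split())
    vocab.update(words)
    for a, b in combinations(sorted(words), 2):
        cooc[(a, b)] += 1
        cooc[(b, a)] += 1

def vector(w):
    # A word's "meaning" as its co-occurrence profile over the vocabulary.
    return [cooc.get((w, c), 0) for c in sorted(vocab)]

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

# "dog" and "cat" never co-occur, but they share contexts (ate, chased,
# food, ball), so their profiles are more similar than dog vs. truck.
print(cosine(vector("dog"), vector("cat")))
print(cosine(vector("dog"), vector("truck")))
```

Note that "dog" and "cat" end up similar without ever appearing in the same sentence, purely because they are used in the same ways; that is the kind of simple regularity the abstract appeals to.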
Is focusing enough in category learning?
We examined whether selective attention, mainly theorized as the ability to focus on the category-relevant dimension, is the sole construct needed to understand category learning. Because the attention literature dissociates selective attention into focusing and filtering, we argue that filtering is another component that should be considered to fully understand category learning. In the study, we provide an experimental paradigm that can dissociate filtering from focusing. Using this paradigm, together with individual measures of attention control, we show that filtering is related to the ability to inhibit irrelevant information. We also show that current computational models that incorporate selective attention only as an ability to focus cannot explain the results of the current study.
Propositional versus Associative Views of Sentence Memory
Propositional accounts assume sentences are encoded in terms of a set of arguments bound to role-fillers in a predicate, but they never specify how the role representations form in the first place. Dennis (2005) shows an alternative way to capture role information based on simple associations derived directly from experience in the Syntagmatic-Paradigmatic (SP) model. We argue that the evidence for the propositional view is not well-founded and explore the possibility of a purely associative encoding of proposition-like information. We differentially manipulate overlap in target and distractor sentences, embedded in narratives, and directly pit the propositional account against the SP view. Our first experiment provides some evidence for an SP account; however, the second experiment supports the propositional view. Our final experiment provides results that are difficult to explain with either account. Overall, our results support the propositional view and show mixed evidence for the SP account.
Constructing Word Meaning without Latent Representations using Spreading Activation
Models of word meaning, like the Topics model (Griffiths et al., 2007) and word2vec (Mikolov et al., 2013), condense word-by-context co-occurrence statistics to induce representations that organize words along semantically relevant dimensions (e.g., synonymy, antonymy, hyponymy). However, their reliance on latent representations leaves them vulnerable to interference and makes them slow learners. We show how it is possible to construct the meaning of words online during retrieval to avoid these limitations.
We implement our spreading activation account of word meaning in an associative net: a one-layer, highly recurrent network of associations called a Dynamic-Eigen-Net, which we developed to address the limitations of earlier associative nets when scaling up to unstructured input domains such as natural language text. After fixing the corpus across models, we show that spreading activation using a Dynamic-Eigen-Net outperforms the Topics model and word2vec in several cases when predicting human free associations and word similarity ratings. We argue in favour of the Dynamic-Eigen-Net as a fast learner that is not subject to catastrophic interference, and present it as an example of delegating the induction of latent relationships to process assumptions rather than assumptions about representation.
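To give a rough sense of what constructing meaning at retrieval via spreading activation can look like, here is a minimal one-layer associative net. The vocabulary, association weights, decay parameter, and update rule are all illustrative assumptions of mine; they are not the Dynamic-Eigen-Net's actual equations.

```python
import numpy as np

# Hypothetical mini-vocabulary with one "medical" cluster and one
# "breakfast" cluster of associations.
words = ["doctor", "nurse", "hospital", "bread", "butter"]
idx = {w: i for i, w in enumerate(words)}

# Symmetric pairwise association strengths (hypothetical values).
A = np.array([
    [0.0, 0.8, 0.7, 0.0, 0.0],  # doctor
    [0.8, 0.0, 0.9, 0.0, 0.0],  # nurse
    [0.7, 0.9, 0.0, 0.0, 0.0],  # hospital
    [0.0, 0.0, 0.0, 0.0, 0.9],  # bread
    [0.0, 0.0, 0.0, 0.9, 0.0],  # butter
])

def spread(cue, steps=3, decay=0.5):
    # Start with all activation on the cue, then let activation flow
    # along associations; decay and renormalization keep it bounded.
    act = np.zeros(len(words))
    act[idx[cue]] = 1.0
    for _ in range(steps):
        act = act + decay * (A @ act)
        act = act / np.linalg.norm(act)
    return act

profile = spread("doctor")
# Activation reaches "nurse" and "hospital" (directly and via each other)
# but never the disconnected "bread"/"butter" cluster.
for w in words:
    print(w, round(float(profile[idx[w]]), 3))
```

The retrieved activation profile plays the role of the cue's meaning, computed on the fly at retrieval rather than stored as a latent vector, which is the process-level move the abstract describes.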
Examining Mechanisms Underlying the Ability to Form Paradigmatic Associations
Paradigmatic associations are second-order associations in which items share a common context rather than being directly associated. Despite the importance of this structure in knowledge representation, the mechanisms underlying the formation of paradigmatic associations are not well studied. In the current study, we examined whether explicit attentional control is critical for forming paradigmatic associations. We used an implicit learning task, which limits the use of explicit attentional control, to test whether the associations can be formed without it. Results showed evidence of learning, implying that explicit attentional control may not be necessary for forming paradigmatic associations. We also used the n-back task to examine whether the ability to maintain information is critical for forming paradigmatic associations. Results did not provide evidence of a relationship between the two. We discuss the results in terms of the core mechanisms that may enable the formation of higher-order associations.
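The first-order versus second-order distinction can be sketched numerically. The mini-vocabulary and counts below are hypothetical, not the study's stimuli: "cup" and "mug" never co-occur directly, but both co-occur with "coffee", so a second-order (paradigmatic) link emerges as a path of length two through the shared context.

```python
import numpy as np

words = ["cup", "mug", "coffee"]

# First-order (syntagmatic) co-occurrence counts (hypothetical).
first = np.array([
    [0, 0, 3],  # cup:    co-occurs only with coffee
    [0, 0, 2],  # mug:    co-occurs only with coffee
    [3, 2, 0],  # coffee
])

# Second-order association: two-step paths through shared contexts,
# given by the matrix product of the co-occurrence matrix with itself.
second = first @ first

print(first[0, 1])   # cup-mug direct co-occurrence: 0
print(second[0, 1])  # cup-mug via coffee: 3 * 2 = 6
```

Nonzero second-order strength despite zero direct co-occurrence is exactly the paradigmatic structure whose learning mechanisms the abstract investigates.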
Associations versus Propositions in Memory for Sentences
Propositional accounts of organization in memory have dominated theory in compositional semantics, but it is an open question whether their adoption has been necessitated by the data. We present data from a narrative comprehension experiment designed to distinguish between a propositional account of semantic representation and an associative account based on the Syntagmatic-Paradigmatic (SP) model (Dennis, 2005). We manipulated expected propositional interference by including distractor sentences that shared a verb with a target sentence. We manipulated paradigmatic interference by including two distractor sentences, one of which contained a name from a target sentence. That is, we increased the second-order co-occurrence between a name in a target sentence and a distractor. Contrary to the propositional assumption, our results show that subjects are sensitive to second-order co-occurrence, favouring the associative account.