
    Exploring Language Mechanisms: The Mass-Count Distinction and The Potts Neural Network

    The aim of this thesis is to explore language mechanisms in two aspects: first, the statistical properties of syntax and semantics; and second, the neural mechanisms that could help us understand how the brain learns those statistical properties. In the first part of the thesis (part A) we focus our attention on a detailed statistical study of the syntax and semantics of the mass-count distinction in nouns. We collected a database of how 1,434 nouns are used with respect to the mass-count distinction in six languages; additional informants characterised the semantics of the underlying concepts. Results indicate only weak correlations between semantics and syntactic usage. Rather than being bimodal, the classification is a graded distribution that is similar across languages, but syntactic classes do not map onto each other, nor do they reflect, beyond weak correlations, semantic attributes of the concepts. These findings are in line with the hypothesis that much of the mass/count syntax emerges from language- and even speaker-specific grammaticalisation. Further, in chapter 3 we test the ability of a simple neural network to learn the syntactic and semantic relations of nouns, in the hope that it may throw some light on the challenges in modelling the acquisition of mass-count syntax. It is shown that even though a simple self-organising neural network is insufficient to learn a mapping implementing a syntactic-semantic link, the network is nonetheless able to extract the concept of 'count', and to some extent that of 'mass' as well, without any explicit definition, from both the syntactic and the semantic data. The second part of the thesis (part B) is dedicated to studying the properties of the Potts neural network. The Potts neural network, with its adaptive dynamics, represents a simplified model of cortical mechanisms. Among other cognitive phenomena, it is intended to model language production by utilising the latching behaviour seen in the network. We expect that a model of language processing should robustly handle various syntactic-semantic correlations amongst the words of a language. With this aim, we test the effect on the storage capacity of the Potts network when the memories stored in it share non-trivial correlations. The increase in interference between stored memories due to correlations is studied, along with modifications to the learning rule that reduce this interference. We find that when strongly correlated memories are incorporated in the definition of storage capacity, the network is able to regain its storage capacity at low sparsity. Strong correlations also affect the latching behaviour of the Potts network, leaving it unable to latch from one memory to another; however, latching is shown to be restored by modifying the learning rule. Lastly, we look at another feature of the Potts neural network: the indication that it may exhibit spin-glass characteristics. The network is consistently shown to exhibit multiple stable degenerate energy states other than those of the pure memories. This is tested for different degrees of correlation in the patterns, low and high connectivity, and different levels of global and local noise. We state some of the implications that the spin-glass nature of the Potts neural network may have for language processing.
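    To make the Potts-network ideas above concrete, the sketch below stores a few random Potts patterns with a Hebbian-style covariance rule and retrieves one of them from a noisy cue. It is a minimal illustration only: the model studied in the thesis additionally has a quiescent state, dilute connectivity, sparse and correlated patterns, and the adaptive thresholds that produce latching, none of which appear here. All parameter values and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

N, S, P = 200, 4, 10                           # Potts units, states per unit, stored patterns
patterns = rng.integers(0, S, size=(P, N))     # each pattern assigns a state to every unit

# Hebbian covariance couplings J[i, j, k, l]; self-connections removed.
onehot = np.eye(S)[patterns]                   # shape (P, N, S)
centered = onehot - 1.0 / S
J = np.einsum('pik,pjl->ijkl', centered, centered) / N
J[np.arange(N), np.arange(N)] = 0.0

def overlap(sigma, xi):
    # Normalised overlap between the current network state and a stored pattern.
    return ((sigma == xi).mean() - 1.0 / S) / (1.0 - 1.0 / S)

# Cue with a corrupted copy of pattern 0, then relax with zero-temperature updates.
sigma = patterns[0].copy()
noisy = rng.random(N) < 0.3                    # corrupt 30% of the units
sigma[noisy] = rng.integers(0, S, size=noisy.sum())

for _ in range(5):                             # a few asynchronous sweeps
    for i in rng.permutation(N):
        h = np.einsum('jkl,jl->k', J[i], np.eye(S)[sigma])   # field on each candidate state of unit i
        sigma[i] = int(np.argmax(h))

print('overlap with cued pattern:', round(overlap(sigma, patterns[0]), 3))
```

    With independent random patterns and this loading the cued memory is recovered almost perfectly; introducing correlations between the stored patterns is what degrades retrieval and latching in the regimes the thesis investigates.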

    Language learning in aphasia: A narrative review and critical analysis of the literature with implications for language therapy

    People with aphasia (PWA) present with language deficits, including word retrieval difficulties, after brain damage. Language learning is an essential life-long human capacity that may support treatment-induced language recovery after brain insult. This prospect has motivated a growing interest in the study of language learning in PWA during the last few decades. Here, we critically review the current literature on language learning ability in aphasia. The existing studies in this area indicate that (i) language learning can remain functional in some PWA, (ii) inter-individual variability in learning performance is large in PWA, (iii) language processing, short-term memory and lesion site are associated with learning ability, and (iv) preliminary evidence suggests a relationship between learning ability and treatment outcomes in this population. Based on the reviewed evidence, we propose a potential account of the interplay between language and memory/learning systems to explain spared/impaired language learning and its relationship to language therapy in PWA. Finally, we indicate potential avenues for future research that may promote more cross-talk between cognitive neuroscience and aphasia rehabilitation.

    A new perspective on word association: how keystroke logging informs strength of word association

    For many years, word association (WA) data has informed theories of the mental lexicon by analyzing the words elicited. However, findings are inconsistent and WA research is still waiting for ‘a breakthrough in methodology which can unlock its undoubted potential’ (Schmitt 2010. Researching vocabulary: A vocabulary research manual. Palgrave Macmillan, 248). In this paper, we offer a new perspective on WA by using keystroke logging (Inputlog, Leijten & Van Waes 2013. Keystroke logging in writing research: Using Inputlog to analyze and visualize writing processes. Written Communication 30(3). 358–92. Online: www.inputlog.net/description.html) to capture the processes of word production. More specifically, we analyze pause behavior during a continued, typed word association task with 30 cue words eliciting 4 responses per cue, to evaluate the strength of links in lexical selection processes. We show a strong positive correlation between pause length and inter-response location, providing empirical evidence that supports the established hypothesis that as more responses are elicited, links between them become weaker. Furthermore, using Fitzpatrick's response classification (2007. Word association patterns: unpacking the assumptions. International Journal of Applied Linguistics 17(3). 319–31), we found that meaning-based responses were the most common in the dataset overall, but they particularly occurred after longer pauses, and exclusively so after the longest pauses. Position- and form-based responses, whilst less frequent overall, typically followed the shortest pauses. In our conclusion we highlight the importance of our methodology in fine-tuning ongoing understanding of how we access the mental lexicon.
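    As a rough illustration of the kind of analysis behind that pause-by-position correlation, the snippet below computes a rank correlation between response position (first to fourth response to a cue) and the pause preceding each response. The data are entirely made up and the column names are not Inputlog's actual export schema; the sketch only shows the shape of the test, not the study's pipeline.

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical extract of a keystroke log: one row per typed response,
# with the pause (in ms) measured before the response was begun.
log = pd.DataFrame({
    'cue':      ['dog', 'dog', 'dog', 'dog', 'salt', 'salt', 'salt', 'salt'],
    'position': [1, 2, 3, 4, 1, 2, 3, 4],          # 1st-4th response to each cue
    'pause_ms': [850, 1400, 2100, 3900, 700, 1900, 2600, 4400],
})

rho, p = spearmanr(log['position'], log['pause_ms'])
print(f'Spearman rho = {rho:.2f}, p = {p:.3f}')
# A positive rho is consistent with later responses following longer pauses,
# i.e. progressively weaker links as more associations are produced.
```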

    Attaining Fluency in English through Collocations


    Word association patterns in a second/foreign language – what do they tell us about the L2 mental lexicon?

    The aim of the article is to review the findings of research into patterns of word associations in both the first and second language and to discuss their relevance for the understanding of L2 lexical processes. Word association studies have been used widely in areas such as psychology and first language acquisition and have resulted in detailed descriptions of the word association behaviour of speakers at different ages and stages of language development. Research into L2 word associations, by contrast, has concentrated predominantly on the differences between native and non-native association patterns, the types of links between words in the L2 mental lexicon, as well as the influence of general language proficiency on word association behaviour.

    Semantic networks

    A semantic network is a graph of the structure of meaning. This article introduces semantic network systems and their importance in Artificial Intelligence, followed by I. the early background; II. a summary of the basic ideas and issues, including link types, frame systems, case relations, link valence, abstraction, inheritance hierarchies and logic extensions; and III. a survey of ‘world-structuring’ systems, including ontologies, causal link models, continuous models, relevance, formal dictionaries, semantic primitives and intersecting inference hierarchies. Speed and practical implementation are briefly discussed. The conclusion argues for a synthesis of relational graph theory, graph-grammar theory and order theory based on semantic primitives and multiple intersecting inference hierarchies.
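    As a toy illustration of the link types and inheritance hierarchies the article surveys, the sketch below builds a tiny semantic network of typed, directed links and looks properties up by walking 'is-a' links. The class, relation names and example nodes are invented for illustration and do not correspond to any particular system discussed in the article.

```python
from collections import defaultdict

class SemanticNetwork:
    """Tiny semantic network: nodes connected by typed, directed links."""

    def __init__(self):
        self.links = defaultdict(list)          # node -> [(relation, target), ...]

    def add(self, source, relation, target):
        self.links[source].append((relation, target))

    def has_property(self, node, prop):
        """Check a property directly, or by inheritance along 'is-a' links."""
        if ('has', prop) in self.links[node]:
            return True
        return any(self.has_property(parent, prop)
                   for relation, parent in self.links[node] if relation == 'is-a')

net = SemanticNetwork()
net.add('canary', 'is-a', 'bird')
net.add('bird', 'is-a', 'animal')
net.add('bird', 'has', 'wings')
net.add('animal', 'has', 'skin')

print(net.has_property('canary', 'wings'))   # True, inherited from 'bird'
print(net.has_property('canary', 'skin'))    # True, inherited via 'bird' -> 'animal'
```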