Neural Mechanisms for Information Compression by Multiple Alignment, Unification and Search
This article describes how an abstract framework for perception and cognition may be realised in terms of neural mechanisms and neural processing.
This framework, called information compression by multiple alignment, unification and search (ICMAUS), has been developed in previous research as a generalized model of any system for processing information, either natural or artificial. It has a range of applications, including the analysis and production of natural language, unsupervised inductive learning, recognition of objects and patterns, and probabilistic reasoning, among others. The proposals in this article may be seen as an extension and development of Hebb's (1949) concept of a "cell assembly".
The article describes how the concept of "pattern" in the ICMAUS framework may be mapped onto a version of the cell assembly concept, and the way in which neural mechanisms may achieve the effect of "multiple alignment" in the ICMAUS framework.
By contrast with the Hebbian concept of a cell assembly, it is proposed here that any one neuron can belong to one assembly and only one assembly. A key feature of the present proposals, which is not part of the Hebbian concept, is that any cell assembly may contain "references" or "codes" that serve to identify one or more other cell assemblies. This mechanism allows information to be stored in a compressed form; it provides a robust means by which assemblies may be connected to form hierarchies and other kinds of structure; it means that assemblies can express abstract concepts; and it provides solutions to some of the other problems associated with cell assemblies.
Drawing on insights derived from the ICMAUS framework, the article also describes how learning may be achieved with neural mechanisms. This concept of learning is significantly different from the Hebbian concept and appears to provide a better account of what we know about human learning.
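As a rough illustration of how "references" or "codes" within assemblies can store information in compressed form, consider the following toy sketch. The names and structures are hypothetical, not the article's own formalism: assemblies are named patterns whose elements are either literal symbols or references to other assemblies, so a shared sub-pattern is stored once and reused wherever it is referenced.

```python
# Toy sketch (not the article's implementation) of assemblies that contain
# "codes" referencing other assemblies. Shared structure is stored once.
ASSEMBLIES = {
    "NP": ["the", "#N"],     # literal symbol plus a reference to assembly N
    "N": ["cat"],
    "S": ["#NP", "sleeps"],  # a higher-level assembly reuses NP by reference
}

def expand(name, assemblies):
    """Recursively replace each reference with the assembly it identifies."""
    out = []
    for element in assemblies[name]:
        if element.startswith("#"):
            out.extend(expand(element[1:], assemblies))
        else:
            out.append(element)
    return out

print(expand("S", ASSEMBLIES))  # ['the', 'cat', 'sleeps']
```

Because "NP" is stored once but can be referenced from many higher-level assemblies, the same mechanism that compresses storage also supports hierarchies and abstract concepts, as the abstract suggests.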
Machine Analysis of Facial Expressions
The Construction of Semantic Memory: Grammar-Based Representations Learned from Relational Episodic Information
After acquisition, memories undergo a process of consolidation that makes them more resistant to interference and brain injury. Memory consolidation involves systems-level interactions, most importantly between the hippocampus and associated structures, which take part in the initial encoding of memories, and the neocortex, which supports long-term storage. This dichotomy parallels the contrast between episodic memory (tied to the hippocampal formation), collecting an autobiographical stream of experiences, and semantic memory, a repertoire of facts and statistical regularities about the world, involving the neocortex at large. Experimental evidence points to a gradual transformation of memories, following encoding, from an episodic to a semantic character. This may require an exchange of information between different memory modules during inactive periods. We propose a theory for such interactions and for the formation of semantic memory, in which episodic memory is encoded as relational data. Semantic memory is modeled as a modified stochastic grammar, which learns to parse episodic configurations expressed as an association matrix. The grammar produces tree-like representations of episodes, describing the relationships between their main constituents at multiple levels of categorization, based on its current knowledge of world regularities. These regularities are learned by the grammar from episodic memory information, through an expectation-maximization procedure analogous to the inside-outside algorithm for stochastic context-free grammars. We propose that a Monte Carlo sampling version of this algorithm can be mapped onto the dynamics of "sleep replay" of previously acquired information in the hippocampus and neocortex. We propose that the model can reproduce several properties of semantic memory, such as decontextualization, top-down processing, and the creation of schemata.
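For background on the algorithm the abstract invokes: the inside-outside procedure for stochastic context-free grammars builds on the "inside" dynamic program, which computes the probability that a nonterminal derives a span. The sketch below shows that standard string version on a hypothetical two-rule grammar in Chomsky normal form; the paper's model parses episodic association matrices rather than word strings, so this is general context, not the authors' algorithm.

```python
# Minimal "inside" pass for a stochastic context-free grammar in Chomsky
# normal form. alpha[(i, j, A)] is the probability that nonterminal A
# derives words[i:j]; the sentence probability is alpha over the full span.
from collections import defaultdict

# Hypothetical toy grammar: rules mapped to probabilities.
lexical = {("N", "dog"): 1.0, ("V", "barks"): 1.0}   # A -> word
binary = {("S", "N", "V"): 1.0}                      # A -> B C

def inside_probability(words, lexical, binary, start="S"):
    n = len(words)
    alpha = defaultdict(float)
    # Base case: lexical rules cover single-word spans.
    for i, w in enumerate(words):
        for (A, word), p in lexical.items():
            if word == w:
                alpha[(i, i + 1, A)] += p
    # Recursive case: combine adjacent spans with binary rules.
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (A, B, C), p in binary.items():
                    alpha[(i, j, A)] += p * alpha[(i, k, B)] * alpha[(k, j, C)]
    return alpha[(0, n, start)]

print(inside_probability(["dog", "barks"], lexical, binary))  # 1.0
```

The outside pass and the expectation-maximization re-estimation of rule probabilities build on exactly these span probabilities; the paper's proposal replaces string spans with episodic configurations.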
Image and interpretation: using artificial intelligence to read ancient Roman texts
The ink and stylus tablets discovered at the Roman Fort of Vindolanda are a unique resource for scholars of ancient history. However, the stylus tablets have proved particularly difficult to read. This paper describes a system that assists expert papyrologists in the interpretation of the Vindolanda writing tablets. A model-based approach is taken that relies on models of the written form of characters, and statistical modelling of language, to produce plausible interpretations of the documents. Fusion of the contributions from the language, character, and image feature models is achieved by utilizing the GRAVA agent architecture, which uses Minimum Description Length as the basis for information fusion across semantic levels. A system is developed that reads in image data and outputs plausible interpretations of the Vindolanda tablets.
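The Minimum Description Length idea behind the fusion can be sketched abstractly: each candidate reading is costed in bits under a language model and a character-evidence model, and the reading with the shortest total code is preferred. Everything below is hypothetical, illustrating only the selection principle, not the GRAVA system's actual models or scores.

```python
# Sketch of MDL-based selection among candidate readings of a damaged line.
# Costs are negative log probabilities (code lengths in bits) from
# hypothetical language and character-evidence models.
import math

def description_length(reading, language_prob, character_prob):
    """Total code length: language-model cost plus character-evidence cost."""
    return -math.log2(language_prob[reading]) - math.log2(character_prob[reading])

# Hypothetical candidate transcriptions with made-up model probabilities.
language_prob = {"claudia severa": 0.02, "clavdia sexera": 0.0001}
character_prob = {"claudia severa": 0.3, "clavdia sexera": 0.4}

best = min(language_prob,
           key=lambda r: description_length(r, language_prob, character_prob))
print(best)  # claudia severa
```

The point of the sketch: even when the character evidence slightly favours an implausible string, the language-model cost can dominate the total code length, which is how fusion across semantic levels yields plausible readings.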
Examining the Neural Correlates of Vocabulary and Grammar Learning Using fNIRS
Adults struggle with learning language components involving categorical relations such as grammar while achieving higher proficiency in vocabulary. The cognitive and neural mechanisms modulating this learning difference remain unclear. The present thesis investigated behavioural and neural differences between vocabulary and grammar processing in adults using functional Near-Infrared Spectroscopy (fNIRS). Participants took part in an artificial language learning paradigm consisting of novel singular and plural words paired with images of common objects. Findings revealed higher accuracy scores and faster response times on semantic vocabulary judgement trials compared to grammar judgement trials. Singular vocabulary judgement was associated with neural activity in part of the pars triangularis of the right inferior frontal gyrus associated with semantic recall. On the other hand, bilateral portions of the dorsolateral prefrontal cortex were more active during grammar judgement tasks. The results are discussed with reference to the roles of memory mechanisms and interference effects in language learning.
A simple, biologically plausible feature detector for language acquisition
Language has a complex grammatical system that we still have to understand computationally and biologically (Hauser et al., 2002; Yang, 2013). However, some evolutionarily ancient mechanisms have been repurposed for grammar (Dehaene & Cohen, 2007; Endress, Cahill, et al., 2009; Endress, Nespor, et al., 2009; Fitch, 2017), so we can draw on insights from other taxa into possible circuit-level mechanisms of grammar. Drawing upon recent evidence for the importance of disinhibitory circuits across taxa and brain regions (Chevalier & Deniau, 1990; Letzkus et al., 2015; Hangya et al., 2014; Xu et al., 2013; Goddard et al., 2014; Mysore & Knudsen, 2012; Koyama et al., 2016; Koyama & Pujala, 2018), I suggest a simple circuit that explains the acquisition of core grammatical rules used in 85% of the world's languages (Rubino, 2013): grammatical rules based on sameness/difference relations. This circuit acts as a sameness-detector. Different items are suppressed through inhibition, but presenting two identical items leads to inhibition of inhibition. The items are thus propagated for further processing. This sameness-detector thus acts as a feature detector for a grammatical rule. I suggest that having a set of feature detectors for elementary grammatical rules might make language acquisition feasible based on relatively simple computational mechanisms.
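The inhibition-of-inhibition mechanism described above can be caricatured with threshold units. The weights and thresholds below are purely illustrative, not fitted to any circuit data or taken from the cited work: an inhibitory unit tonically blocks the output, and coactivation of two identical items drives a disinhibitory unit that silences the inhibitor, letting the signal through.

```python
# Toy threshold-unit sketch of a sameness detector via disinhibition.
# Different items stay suppressed; identical items trigger inhibition of
# inhibition and are propagated for further processing.

def step(x, threshold=0.5):
    """Simple threshold nonlinearity."""
    return 1.0 if x >= threshold else 0.0

def sameness_detector(a, b):
    """a, b: activities (0/1) of two item-coding populations."""
    overlap = a * b                          # coactivation of identical items
    disinhibitory = step(overlap)            # fires only when both items match
    inhibitory = step(1.0 - disinhibitory)   # tonically active unless disinhibited
    drive = step(a + b, threshold=1.0)       # some input is present
    return step(drive - inhibitory)          # propagated only under disinhibition

print(sameness_detector(1, 1))  # 1.0 -> identical items propagate
print(sameness_detector(1, 0))  # 0.0 -> a lone or different item is suppressed
```

In a fuller model the scalar activities would be population vectors and `overlap` a measure of their similarity, but the logic of the disinhibitory gate is the same.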
In Pursuit of the Functional Definition of a Mind: The Pivotal Role of a Discourse
This article describes the results of conceptualizing the idea of mind at the stage of maturity. It delineates how the energy system (the mind) acquires stable morphological characteristics associated with a pivotal formation: the discourse. A qualitative structural and ontological sign of the system's transition to this stage is the transformation of the verbal morphology of the mind into a discursive one. An analysis of the poststructuralist understanding of discourse in the context of the dispersion of meanings (Foucault) makes it possible to conceive of discourse as a meaning constituted by the relation between discursive practice and the worldview, regarded as a meta-discourse or a global discursive formation. As a consequence of this relationship, a discrete and simultaneous scattering of meanings arises, whose procedural side is a concrete discourse and whose productive aspect is linked with the creation of a local discursive formation. On this basis, a logical formula of discourse is proposed that takes into account the entropy of the language and the entropy of the worldview as particular manifestations of the entropy of the mind. Using this formula, and considering the reactive nature of discourse, a classification is developed that includes reactive, suggestive, synthetic, and creative types of discourse. These types are in turn correlated with the specific characteristics of certain activities, understood as a psychological category. The article also considers the translation of the structure of discourse dissipation from the cognitive plane into the affective sphere, through which a hierarchy of significances is formed that performs a sense-forming function.
The inverse influence of the hierarchy of significances on the structure of the dispersion of meanings is analyzed, and to account for it a conditional coefficient of the value deviation of the significance of meanings is introduced. This parameter reflects the correction of sense that occurs as discourse emerges from discursive practice. The discourse is thus presented as a complex dynamic formation of the mind, arising at the maturity stage of the system as the combined effect of the entropic dispersion of meanings and the value deviation of their significances.
Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation
This paper surveys the current state of the art in Natural Language Generation (NLG), defined as the task of generating text or speech from non-linguistic input. A survey of NLG is timely in view of the changes that the field has undergone over the past decade or so, especially in relation to new (usually data-driven) methods, as well as new applications of NLG technology. This survey therefore aims to (a) give an up-to-date synthesis of research on the core tasks in NLG and the architectures adopted in which such tasks are organised; (b) highlight a number of relatively recent research topics that have arisen partly as a result of growing synergies between NLG and other areas of artificial intelligence; and (c) draw attention to the challenges in NLG evaluation, relating them to similar challenges faced in other areas of Natural Language Processing, with an emphasis on different evaluation methods and the relationships between them.
Comment: Published in Journal of AI Research (JAIR), volume 61, pp. 75-170. 118 pages, 8 figures, 1 table
On analogy as the motivation for grammaticalization
The number of phenomena which are gathered together under the term 'grammaticalization' is quite large and in some ways quite diverse. For the different types of grammaticalization similar motivating factors have been suggested, similar principles, clines and hierarchies. Some of Lehmann's (1982[1995], 1985) parameters, which have long been considered to characterize processes of grammaticalization, are now under attack from various quarters, and indeed the phenomenon of grammaticalization itself has been questioned as an independent mechanism in language change. This paper addresses a number of problems connected with the 'apparatus' used in grammaticalization and with the various types of grammaticalization currently distinguished. It will be argued that we get a better grip on what happens in processes of grammaticalization and lexicalization if the process is viewed in terms of an analogical, usage-based grammar, in which a distinction is made between processes taking place on a token-level and those taking place on a type-level. The model involves taking more notice of the form of linguistic signs and of the synchronic grammar system at each stage of the grammaticalization process.