
    Mind: meet network. Emergence of features in conceptual metaphor.

    As a human product, language reflects the psychological experience of man (Radden and Dirven, 2007). One model of language, and of human cognition in general, is connectionism, which many linguists regard as mathematical and, therefore, too reductive. This trend of opinion seems to be reversing, however, as many cognitive researchers begin to appreciate one attribute of network models: feature emergence. In the course of a network simulation, properties emerge that were neither built in nor intended by its creators (Elman, 1998); in other words, the whole becomes more than just the sum of its parts. Insight is drawn not only from the network's output, but also from the means that the network uses to arrive at that output.
    It may seem obvious that the events of life should be meaningful for human beings, yet there is no widely accepted theory as to how we derive that meaning. The most promising hypothesis regarding how the world is meaningful to us is that of embodied cognition (cf. Turner 2009), which postulates that the functions of the brain evolved so as to ‘understand’ the body, thus grounding the mind in an experiential foundation. Yet the relationship between the body and the mind is far from perspicuous, as research insight is still intertwined with metaphors specific to the researcher’s methodology (Eliasmith 2003). The aim of this paper is to investigate conceptual metaphor in a manner that provides some insight into the role that objectification, as defined by Szwedek (2002), plays in human cognition, and to identify one possible consequence of embodied cognition.
    If the mechanism for concept formation, or categorization of the world, resembles a network, it is reasonable to assume that evidence for this is to be sought in language.
    Let us then postulate the existence of a network mechanism for categorization and concept formation, present in the human mind and initially developed to cope with the world directly accessible to the early human (i.e. the tangible world). Such a network would convert external inputs into an internal, multimodal representation of a perceived object in the brain. The sheer amount of available information and the computational restrictions of the brain would force some sort of data compression, or a computational funnel. It has been shown that a visual perception network of this kind can learn to accurately label patterns (Elman, 1998). What is more, the compression of data facilitated the recognition of prototypes of a given pattern category rather than its peripheral representations, an emergent property that supports the prototype theory of the mental lexicon (cf. Radden and Dirven, 2007).
    The present project proposes that, in the domain of cognition, the process of objectification, as defined by Szwedek (2002), would be an emergent property of such a system: if an abstract notion is computed by a neural network designed to cope with tangible concepts, the data compression mechanism would require the notion to be conceptualized as an object to permit further processing. The notion of meaning emerging from the operation of complex systems is recognised as an important process in a number of studies on metaphor comprehension. Feature emergence is said to occur when a feature that is non-salient in both the target and the vehicle becomes highly salient in the metaphor (Utsumi 2005). Thus, for example, should objectification emerge as a feature in the metaphor KNOWLEDGE IS A TREASURE, the metaphor would be characterised as having more features of an object than either the target or the vehicle alone.
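The feature-emergence criterion above lends itself to a simple operationalization. The sketch below is illustrative only: the salience ratings are hypothetical stand-ins for questionnaire data, not results from the study.

```python
# Utsumi-style criterion for feature emergence: a feature counts as emergent
# when its salience in the metaphor exceeds its salience in both the target
# and the vehicle rated in isolation.
def is_emergent(target_rating, vehicle_rating, metaphor_rating):
    return metaphor_rating > max(target_rating, vehicle_rating)

# Hypothetical 0-7 salience ratings for the feature "is an object"
# in KNOWLEDGE IS A TREASURE (invented numbers, for illustration):
knowledge, treasure, metaphor = 2.1, 4.8, 5.9
print(is_emergent(knowledge, treasure, metaphor))   # objecthood emerges
```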
    This paper focuses on providing a theoretical connectionist network, based on the Elman-type network (Elman, 1998), as a model of concept formation in which objectification would be an emergent feature. This is followed by a psychological experiment testing the validity of this assumption through a questionnaire in which two groups of participants are asked to evaluate either metaphors or their components. The model proposes an underlying relation between the mechanism for concept formation and the omnipresence of conceptual metaphors, which are interpreted as resulting from the properties of the proposed network system.
    Thus, an evolutionary neural mechanism is proposed for the categorization of the world, one that is able to cope with both concrete and abstract notions and whose by-products are the abstract language-related phenomena, i.e. metaphors. The model presented in this paper aims to provide a unified account of how the various types of phenomena, objects, feelings etc. are categorized in the human mind, drawing on evidence from language.
    References:
    Szwedek, Aleksander. 2002. Objectification: From Object Perception to Metaphor Creation. In B. Lewandowska-Tomaszczyk and K. Turewicz (eds.), Cognitive Linguistics To-day, 159-175. Frankfurt am Main: Peter Lang.
    Radden, Günter and Dirven, René. 2007. Cognitive English Grammar. Amsterdam/Philadelphia: John Benjamins Publishing Company.
    Eliasmith, Chris. 2003. Moving beyond metaphors: understanding the mind for what it is. Journal of Philosophy C(10): 493-520.
    Elman, J. L. et al. 1998. Rethinking Innateness: A Connectionist Perspective on Development. Cambridge, MA: MIT Press.
    Turner, Mark. 2009. Categorization of Time and Space Through Language. (Paper presented at the FOCUS2009 conference "Categorization of the world through language", Serock, 25-28 February 2009.)
    Utsumi, Akira. 2005. The role of feature emergence in metaphor appreciation. Metaphor and Symbol 20(3): 151-172.
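The "computational funnel" and the emergence of prototype-centred codes can be illustrated in a few lines. This is a minimal sketch, not the network from the paper: it assumes a plain bottleneck autoencoder trained by gradient descent on noisy variants of two invented binary patterns, and shows that the compressed code of a noisy exemplar lands nearer its own prototype's code than the other's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two invented "prototype" patterns; training items are noisy variants.
protos = np.array([[1, 1, 1, 0, 0, 0],
                   [0, 0, 0, 1, 1, 1]], dtype=float)
X = np.repeat(protos, 50, axis=0)
X = X + rng.normal(0, 0.3, X.shape)       # peripheral (noisy) exemplars

n_in, n_hid = X.shape[1], 2               # 2-unit bottleneck = the "funnel"
W1 = rng.normal(0, 0.1, (n_in, n_hid))
W2 = rng.normal(0, 0.1, (n_hid, n_in))

for _ in range(2000):                     # plain gradient descent on MSE
    H = np.tanh(X @ W1)                   # compressed internal code
    Y = H @ W2                            # reconstruction of the input
    err = Y - X
    W2 -= 0.01 * H.T @ err / len(X)
    dH = (err @ W2.T) * (1 - H ** 2)
    W1 -= 0.01 * X.T @ dH / len(X)

code = lambda v: np.tanh(v @ W1)
noisy = protos[0] + rng.normal(0, 0.3, n_in)
d_same = np.linalg.norm(code(noisy) - code(protos[0]))
d_other = np.linalg.norm(code(noisy) - code(protos[1]))
print(d_same < d_other)   # compression pulls exemplars toward their prototype
```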

    Mechanisms for the generation and regulation of sequential behaviour

    A critical aspect of much human behaviour is the generation and regulation of sequential activities. Such behaviour is seen both in naturalistic settings, such as routine action and language production, and in laboratory tasks, such as serial recall and many reaction time experiments. There are a variety of computational mechanisms that may support the generation and regulation of sequential behaviours, ranging from those underlying Turing machines to those employed by recurrent connectionist networks. This paper surveys a range of such mechanisms, together with a range of empirical phenomena related to human sequential behaviour. It is argued that the empirical phenomena pose difficulties for most sequencing mechanisms, but that converging evidence from behavioural flexibility, error data arising when the system is stressed or damaged following brain injury, and between-trial effects in reaction time tasks points to a hybrid symbolic activation-based mechanism for the generation and regulation of sequential behaviour. Some implications of this view for the nature of mental computation are highlighted.
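One concrete activation-based sequencing mechanism of the kind surveyed here is competitive queuing. The sketch below is a generic illustration of that idea, not the hybrid mechanism the paper argues for: items receive a decaying activation gradient, the most active item fires and is then self-inhibited, and injected noise produces the sort of ordering errors seen when such a system is stressed.

```python
import random

def competitive_queue(items, decay=0.8, noise=0.0, seed=0):
    rng = random.Random(seed)
    # Decaying activation gradient: earlier items start more active.
    act = {item: decay ** i for i, item in enumerate(items)}
    produced = []
    for _ in items:
        # The most active item (plus optional noise) wins the competition...
        scores = {it: a + rng.gauss(0, noise) for it, a in act.items()}
        winner = max(scores, key=scores.get)
        produced.append(winner)
        act[winner] = float("-inf")   # ...and is suppressed after firing.
    return produced

routine = ["boil", "pour", "stir", "drink"]
print(competitive_queue(routine))             # intact gradient: correct order
print(competitive_queue(routine, noise=0.5))  # noise can reorder the steps
```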

    Computational and Robotic Models of Early Language Development: A Review

    We review computational and robotic models of early language learning and development. We first explain why and how these models are used to better understand how children learn language. We argue that they provide concrete theories of language learning as a complex dynamic system, complementing traditional methods in psychology and linguistics. We review different modeling formalisms, grounded in techniques from machine learning and artificial intelligence such as Bayesian and neural network approaches. We then discuss their role in understanding several key mechanisms of language development: cross-situational statistical learning, embodiment, situated social interaction, intrinsically motivated learning, and cultural evolution. We conclude by discussing future challenges for research, including the modeling of large-scale empirical data about language acquisition in real-world environments.
    Keywords: early language learning, computational and robotic models, machine learning, development, embodiment, social interaction, intrinsic motivation, self-organization, dynamical systems, complexity.
    Comment: to appear in International Handbook on Language Development, ed. J. Horst and J. von Koss Torkildsen, Routledge.
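Cross-situational statistical learning, the first mechanism listed, reduces to a very small sketch: the learner never sees a labelled word-referent pair, only co-occurrences across ambiguous scenes, yet simple counting is enough to resolve the mappings. The vocabulary and scenes below are invented for illustration.

```python
from collections import Counter
from itertools import product

# Each "situation" pairs heard words with visible referents; any single
# scene is ambiguous, but the ambiguity differs from scene to scene.
situations = [
    ({"ball", "dog"}, {"BALL", "DOG"}),
    ({"ball", "cup"}, {"BALL", "CUP"}),
    ({"dog", "cup"},  {"DOG", "CUP"}),
]

# The learner only accumulates word-referent co-occurrence counts.
counts = Counter()
for words, referents in situations:
    for w, r in product(words, referents):
        counts[(w, r)] += 1

def best_referent(word):
    # Pick the referent most often co-present with the word.
    cands = {r: c for (w, r), c in counts.items() if w == word}
    return max(cands, key=cands.get)

print(best_referent("ball"))   # → BALL
```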

    NASA JSC neural network survey results

    A survey of Artificial Neural Systems in support of NASA Johnson Space Center's Automatic Perception for Mission Planning and Flight Control Research Program was conducted. Several of the world's leading researchers contributed papers containing their most recent results on artificial neural systems. These papers were broken into categories, and descriptive accounts of the results make up a large part of this report. Also included is material on sources of information on artificial neural systems, such as books, technical reports, software tools, etc.

    Principles for Consciousness in Integrated Cognitive Control

    In this article we will argue that, given certain conditions for the evolution of biological controllers, these will necessarily evolve in the direction of incorporating consciousness capabilities. We will also see what the necessary mechanics for the provision of these capabilities are, and extrapolate this vision to the world of artificial systems, postulating seven design principles for conscious systems. This article was published in the Neural Networks special issue on brain and consciousness.

    Connectionist natural language processing: the state of the art


    Encoding of phonology in a recurrent neural model of grounded speech

    We study the representation and encoding of phonemes in a recurrent neural network model of grounded speech. We use a model which processes images and their spoken descriptions, and projects the visual and auditory representations into the same semantic space. We perform a number of analyses of how information about individual phonemes is encoded in the MFCC features extracted from the speech signal and in the activations of the layers of the model. Via experiments with phoneme decoding and phoneme discrimination, we show that phoneme representations are most salient in the lower layers of the model, where low-level signals are processed at a fine-grained level, although a large amount of phonological information is retained at the top recurrent layer. We further find that the attention mechanism following the top recurrent layer significantly attenuates the encoding of phonology and makes the utterance embeddings much more invariant to synonymy. Moreover, a hierarchical clustering of the phoneme representations learned by the network shows an organizational structure of phonemes similar to those proposed in linguistics.
    Comment: accepted at CoNLL 2017.
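The phoneme-decoding analyses mentioned here follow the diagnostic-classifier (probing) methodology: train a simple classifier on a layer's activations and read its accuracy as a measure of how much phoneme information that layer encodes. The sketch below applies a nearest-centroid probe to synthetic activations; the data are invented stand-ins for real MFCCs or RNN states, so only the methodology is illustrated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for layer activations: 3 phoneme classes, 20 frames
# each, in an 8-dimensional activation space.
centers = rng.normal(0, 1, (3, 8))
labels = np.repeat(np.arange(3), 20)
acts = centers[labels] + rng.normal(0, 0.4, (60, 8))

# Nearest-centroid probe: how recoverable is phoneme identity from the
# activations? Split frames into train/test halves, fit class centroids
# on the train half, classify the test half by nearest centroid.
train, test = slice(0, 60, 2), slice(1, 60, 2)
cents = np.stack([acts[train][labels[train] == k].mean(0) for k in range(3)])
pred = np.argmin(((acts[test][:, None] - cents) ** 2).sum(-1), axis=1)
accuracy = (pred == labels[test]).mean()
print(accuracy)   # well above the 1/3 chance level for these easy data
```

In the study itself, comparing such probe accuracies across layers (and against the MFCC input) is what supports the claim that phoneme information peaks in the lower layers.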