
    Mind: meet network. Emergence of features in conceptual metaphor.

    As a human product, language reflects the psychological experience of man (Radden and Dirven, 2007). One model of language, and of human cognition in general, is connectionism, which many linguists regard as mathematical and therefore too reductive. This trend of opinion seems to be reversing, however, as many cognitive researchers have begun to appreciate one attribute of network models: feature emergence. In the course of a network simulation, properties emerge that were neither built in nor intended by its creators (Elman, 1998); in other words, the whole becomes more than the sum of its parts. Insight is drawn not only from the network's output, but also from the means the network uses to arrive at that output.

It may seem obvious that the events of life should be meaningful for human beings, yet there is no widely accepted theory of how we derive that meaning. The most promising hypothesis on how the world becomes meaningful to us is that of embodied cognition (cf. Turner 2009), which postulates that the functions of the brain evolved so as to ‘understand’ the body, thus grounding the mind in an experiential foundation. Yet the relationship between the body and the mind is far from perspicuous, as research insight remains intertwined with metaphors specific to the researcher’s methodology (Eliasmith 2003). The aim of this paper is to investigate the conceptual metaphor in a manner that provides insight into the role that objectification, as defined by Szwedek (2002), plays in human cognition, and to identify one possible consequence of embodied cognition.

If the mechanism for concept formation, or categorization of the world, resembles a network, it is reasonable to assume that evidence for this is to be sought in language.
Let us then postulate the existence of a network mechanism for categorization and concept formation, present in the human mind and initially developed to cope with the world directly accessible to the early human (i.e. the tangible). Such a network would convert external inputs into an internal, multimodal representation of a perceived object in the brain. The sheer amount of available information and the computational restrictions of the brain would force some sort of data compression, or a computational funnel. It has been shown that a visual perception network of this kind can learn to accurately label patterns (Elman, 1998). What is more, the compression of data facilitated the recognition of prototypes of a given pattern category rather than its peripheral representations, an emergent property that supports the prototype theory of the mental lexicon (cf. Radden and Dirven, 2007).

The present project proposes that, in the domain of cognition, the process of objectification, as defined by Szwedek (2002), would be an emergent property of such a system: if an abstract notion is computed by a neural network designed to cope with tangible concepts, the data compression mechanism would require the notion to be conceptualized as an object to permit further processing. The notion of meaning emerging from the operation of complex systems is recognised as an important process in a number of studies on metaphor comprehension. Feature emergence is said to occur when a feature that is non-salient in both the target and the vehicle becomes highly salient in the metaphor (Utsumi 2005). Thus, for example, should objectification emerge as a feature in the metaphor KNOWLEDGE IS A TREASURE, the metaphor would be characterised as having more features of an object than either the target or the vehicle alone.
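The compression-toward-prototype effect described above can be illustrated with a minimal sketch (not the Elman simulation itself): a linear bottleneck, the simplest form of autoencoder, reconstructs noisy exemplars of two feature patterns and, in doing so, pulls them toward their category prototypes. The feature vectors, noise level and bottleneck size below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical "category prototypes" as 8-dimensional feature vectors
proto_a = np.array([1, 1, 1, 0, 0, 0, 1, 0], dtype=float)
proto_b = np.array([0, 0, 1, 1, 1, 1, 0, 0], dtype=float)

# Noisy exemplars: each prototype plus random perturbation
exemplars = np.vstack(
    [proto_a + 0.3 * rng.standard_normal(8) for _ in range(50)]
    + [proto_b + 0.3 * rng.standard_normal(8) for _ in range(50)]
)

# Linear "computational funnel": project onto the top-2 principal
# components and reconstruct (equivalent to a linear autoencoder bottleneck)
mean = exemplars.mean(axis=0)
centered = exemplars - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
basis = vt[:2]                                   # 2-dimensional bottleneck
recon = centered @ basis.T @ basis + mean

# Distance to the category prototype, before vs. after compression
err_before = np.linalg.norm(exemplars[:50] - proto_a, axis=1).mean()
err_after = np.linalg.norm(recon[:50] - proto_a, axis=1).mean()
print(err_before, err_after)
```

Because the bottleneck retains only the directions of greatest shared variance, most exemplar-specific noise is discarded and the reconstructions sit measurably closer to the prototype than the raw inputs do.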
This paper focuses on providing a theoretical connectionist network, based on the Elman-type network (Elman, 1998), as a model of concept formation in which objectification would be an emergent feature. This is followed by a psychological experiment in which the validity of this assumption is tested through a questionnaire: two groups of participants are asked to evaluate either metaphors or their components. The model proposes an underlying relation between the mechanism for concept formation and the omnipresence of conceptual metaphors, which are interpreted as resulting from the properties of the proposed network system.

Thus, an evolutionary neural mechanism is proposed for the categorization of the world, one that is able to cope with both concrete and abstract notions and whose by-products are the abstract language-related phenomena, i.e. metaphors. The model presented in this paper aims at providing a unified account of how the various types of phenomena, objects, feelings etc. are categorized in the human mind, drawing on evidence from language.

References:
Eliasmith, Chris. 2003. Moving beyond metaphors: understanding the mind for what it is. Journal of Philosophy C(10): 493-520.
Elman, J. L. et al. 1998. Rethinking innateness: A connectionist perspective on development. Cambridge, MA: MIT Press.
Radden, Günter and René Dirven. 2007. Cognitive English Grammar. Amsterdam/Philadelphia: John Benjamins Publishing Company.
Szwedek, Aleksander. 2002. Objectification: From object perception to metaphor creation. In B. Lewandowska-Tomaszczyk and K. Turewicz (eds.), Cognitive Linguistics To-day, 159-175. Frankfurt am Main: Peter Lang.
Turner, Mark. 2009. Categorization of Time and Space Through Language. (Paper presented at the FOCUS 2009 conference "Categorization of the world through language", Serock, 25-28 February 2009.)
Utsumi, Akira. 2005. The role of feature emergence in metaphor appreciation. Metaphor and Symbol 20(3): 151-172.

    Meshfree modelling of metal cutting using phenomenological and data-driven material models

    Numerical modelling of chip formation is important for a better understanding, and thus for the improvement, of the high-speed metal cutting process. The objective of this work is to develop a meshfree numerical simulation framework for the chip formation process in which both a phenomenological material model and a data-driven material model can be applied. Firstly, the phenomenological model is applied to capture serrated chip formation, with a recently developed Galerkin-type meshfree scheme, the stabilized Optimal Transportation Meshfree (OTM) method, serving as the numerical solution method. This enables the modelling of material separation and of the serrated morphology generated by the cutting process in a more realistic and convenient way. The shear band formation is described by the thermal softening term in the Johnson-Cook plastic flow stress model. Using this model, it can be demonstrated that thermal softening is the main cause of shear band formation. Additionally, it can be seen that the Johnson-Cook fracture model shows limitations in capturing the fracture on the chip's upper surface; thus, a supplementary condition on the stress triaxiality is applied. This condition allows more accurate measurements of the chip size, i.e. chip spacing, peak and valley. These improvements are demonstrated by comparing the calculated chip morphology, cutting force and chip formation process with experimental results. Subsequently, a data-driven material model is developed to replace the classical material model: a Machine Learning (ML) based model is first trained offline to fit an observed material behaviour and later used in online applications. However, learning and predicting history-dependent material behaviour such as plasticity is still challenging.
In this work, an ML-based material modelling framework is proposed for both elasticity and plasticity: the ML-based hyperelasticity model is developed directly with a Feed-forward Neural Network (FNN), whereas the ML-based plasticity model is developed using two approaches, a Recurrent Neural Network (RNN) and a novel method called the Proper Orthogonal Decomposition Feed-forward Neural Network (PODFNN). In the latter case, the accumulated absolute strain is proposed to distinguish the loading history. Additionally, the strain-stress sequence data for the plasticity model are collected from different loading paths based on the concept of sequences for plasticity. By means of the POD, the multi-dimensional stress sequence is decoupled into independent one-dimensional coefficient sequences. To apply the ML-based material model in finite element analysis, the tangent matrix is derived with the automatic symbolic differentiation tool AceGen. The effectiveness and generalisation of the presented models are investigated in a series of numerical examples using both 2D and 3D finite element analysis. Finally, the ML-based material model is applied to the metal cutting simulation.
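The POD decoupling step described above can be sketched in a few lines: a singular value decomposition splits a multi-component stress history into spatial modes and independent one-dimensional coefficient sequences, each of which could then be learned by its own scalar-output network. The stress history below is synthetic and invented for illustration; this is a sketch of the decomposition only, not of the full PODFNN training pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stress history: T time steps of a 6-component stress
# vector (Voigt notation), synthesised as a low-rank signal plus noise
T = 200
t = np.linspace(0.0, 1.0, T)
modes_true = rng.standard_normal((2, 6))          # two underlying modes
stress = (np.sin(2 * np.pi * t)[:, None] * modes_true[0]
          + t[:, None] * modes_true[1]
          + 0.01 * rng.standard_normal((T, 6)))

# POD via SVD: stress(t) ~ sum_k a_k(t) * phi_k
u, s, vt = np.linalg.svd(stress, full_matrices=False)
k = 2
phi = vt[:k]               # spatial POD modes (6 components each)
coeffs = u[:, :k] * s[:k]  # k independent 1-D coefficient time series

# Recombining the modes reconstructs the multi-dimensional sequence
recon = coeffs @ phi
rel_err = np.linalg.norm(stress - recon) / np.linalg.norm(stress)
print(rel_err)
```

Since the synthetic history is essentially rank two, two coefficient sequences already reconstruct it to within the noise level; in practice k is chosen from the singular value decay.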

    IGMN: An incremental connectionist approach for concept formation, reinforcement learning and robotics

    This paper demonstrates the use of a new connectionist approach, called IGMN (standing for Incremental Gaussian Mixture Network), in some state-of-the-art research problems such as incremental concept formation, reinforcement learning and robotic mapping. IGMN is inspired by recent theories about the brain, especially the Memory-Prediction Framework and Constructivist Artificial Intelligence, which endow it with some special features that are not present in most neural network models such as MLP, RBF and GRNN. Moreover, IGMN is based on strong statistical principles (Gaussian mixture models) and asymptotically converges to the optimal regression surface as more training data arrive. Through several experiments using the proposed model, it is also demonstrated that IGMN learns incrementally from data flows (each datum can be used immediately and then discarded), is not sensitive to initialization conditions, does not require fine-tuning of its configuration parameters and has good computational performance, thus allowing its use in real-time control applications. IGMN is therefore a very useful machine learning tool for concept formation and robotic tasks.

Key words: Artificial neural networks, Bayesian methods, concept formation, incremental learning, Gaussian mixture models, autonomous robots, reinforcement learning.
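The use-once-and-discard learning style described above can be illustrated for a single Gaussian component: a Welford-style running update of the mean and covariance processes each sample exactly once. This is a simplified, hypothetical sketch of the sufficient-statistics update only; it omits the mixture management (component creation, likelihood-based assignment) of the actual IGMN algorithm.

```python
import numpy as np

class IncrementalGaussian:
    """Online estimate of one Gaussian component's mean and covariance,
    updated from a data stream one sample at a time (simplified sketch,
    not the full IGMN algorithm)."""

    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self._m2 = np.zeros((dim, dim))  # accumulated outer-product deviations

    def update(self, x):
        x = np.asarray(x, dtype=float)
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n              # Welford-style running mean
        self._m2 += np.outer(delta, x - self.mean)

    @property
    def cov(self):
        # Unbiased covariance estimate (matches np.cov's default ddof=1)
        return self._m2 / max(self.n - 1, 1)

rng = np.random.default_rng(2)
g = IncrementalGaussian(2)
samples = rng.multivariate_normal([1.0, -1.0],
                                  [[1.0, 0.3], [0.3, 0.5]], size=5000)
for x in samples:          # each sample is consumed once and discarded
    g.update(x)

print(g.mean, g.cov)
```

The streamed estimates agree with the batch statistics computed over the whole sample, which is what makes the "use and discard" property of the data-flow setting possible.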