
    The Latent Relation Mapping Engine: Algorithm and Experiments

    Many AI researchers and cognitive scientists have argued that analogy is the core of cognition. The most influential work on computational modeling of analogy-making is Structure Mapping Theory (SMT) and its implementation in the Structure Mapping Engine (SME). A limitation of SME is the requirement for complex hand-coded representations. We introduce the Latent Relation Mapping Engine (LRME), which combines ideas from SME and Latent Relational Analysis (LRA) in order to remove the requirement for hand-coded representations. LRME builds analogical mappings between lists of words, using a large corpus of raw text to automatically discover the semantic relations among the words. We evaluate LRME on a set of twenty analogical mapping problems, ten based on scientific analogies and ten based on common metaphors. LRME achieves human-level performance on the twenty problems. We compare LRME with a variety of alternative approaches and find that they are not able to reach the same level of performance. (Comment: related work available at http://purl.org/peter.turney)
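    As a rough illustration of the mapping step this abstract describes, the sketch below searches over bijections between a source word list and a target word list and keeps the one with the highest total relational similarity. The `relational_similarity` function is a hypothetical stand-in for the corpus-derived scores that LRA would supply; this is not the paper's actual implementation.

```python
# Minimal sketch, assuming two equal-length word lists and a user-supplied
# pairwise relational-similarity function (hypothetical placeholder for the
# corpus-derived LRA scores described in the abstract).
from itertools import permutations
from typing import Callable, Dict, Sequence, Tuple

PairSim = Callable[[Tuple[str, str], Tuple[str, str]], float]

def best_mapping(source: Sequence[str], target: Sequence[str], relational_similarity: PairSim) -> Dict[str, str]:
    """Brute-force search over all bijections source -> target."""
    best_score, best_map = float("-inf"), {}
    for perm in permutations(target):
        # Score a candidate mapping by summing the similarity between each
        # ordered source word pair and the corresponding mapped target pair.
        score = sum(
            relational_similarity((source[i], source[j]), (perm[i], perm[j]))
            for i in range(len(source))
            for j in range(len(source))
            if i != j
        )
        if score > best_score:
            best_score, best_map = score, dict(zip(source, perm))
    return best_map
```

    Because each problem maps a short list of words, exhaustive search over permutations is tractable here; the engine described in the paper derives the pair similarities from corpus statistics rather than from a hand-supplied function.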

    Learning semantic sentence representations from visually grounded language without lexical knowledge

    Current approaches to learning semantic representations of sentences often use prior word-level knowledge. This study aims to leverage visual information in order to capture sentence level semantics without the need for word embeddings. We use a multimodal sentence encoder trained on a corpus of images with matching text captions to produce visually grounded sentence embeddings. Deep Neural Networks are trained to map the two modalities to a common embedding space such that for an image the corresponding caption can be retrieved and vice versa. We show that our model achieves results comparable to the current state-of-the-art on two popular image-caption retrieval benchmark data sets: MSCOCO and Flickr8k. We evaluate the semantic content of the resulting sentence embeddings using the data from the Semantic Textual Similarity benchmark task and show that the multimodal embeddings correlate well with human semantic similarity judgements. The system achieves state-of-the-art results on several of these benchmarks, which shows that a system trained solely on multimodal data, without assuming any word representations, is able to capture sentence level semantics. Importantly, this result shows that we do not need prior knowledge of lexical level semantics in order to model sentence level semantics. These findings demonstrate the importance of visual information in semantics.
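    As a hedged sketch of the kind of dual-encoder setup the abstract outlines, the code below projects precomputed image and caption features into a shared embedding space and trains with an in-batch margin ranking loss so that matching pairs outrank mismatched ones. The layer sizes, margin, and loss form are illustrative assumptions, not the authors' exact architecture.

```python
# Sketch of a two-branch encoder mapping image and caption features into one
# embedding space, plus a bidirectional hinge ranking loss over in-batch
# negatives. Dimensions and margin are assumed values for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualEncoder(nn.Module):
    def __init__(self, image_dim: int = 2048, text_dim: int = 1024, embed_dim: int = 512):
        super().__init__()
        self.image_proj = nn.Linear(image_dim, embed_dim)  # image branch
        self.text_proj = nn.Linear(text_dim, embed_dim)    # caption branch

    def forward(self, image_feats: torch.Tensor, text_feats: torch.Tensor):
        # L2-normalise so that cosine similarity reduces to a dot product.
        img = F.normalize(self.image_proj(image_feats), dim=-1)
        txt = F.normalize(self.text_proj(text_feats), dim=-1)
        return img, txt

def ranking_loss(img: torch.Tensor, txt: torch.Tensor, margin: float = 0.2) -> torch.Tensor:
    """Hinge loss pushing matched pairs above mismatched ones in both directions."""
    scores = img @ txt.t()                    # (batch, batch) similarity matrix
    positives = scores.diag().unsqueeze(1)    # matched pairs lie on the diagonal
    cost_caption = (margin + scores - positives).clamp(min=0)    # image -> caption
    cost_image = (margin + scores - positives.t()).clamp(min=0)  # caption -> image
    mask = torch.eye(scores.size(0), dtype=torch.bool)
    return cost_caption.masked_fill(mask, 0).mean() + cost_image.masked_fill(mask, 0).mean()
```

    Once trained, retrieval in either direction reduces to ranking candidates by cosine similarity in the shared space.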

    The knowing ear : an Australian test of universal claims about the semantic structure of sensory verbs and their extension into the domain of cognition

    In this paper we test previous claims concerning the universality of patterns of polysemy and semantic change in perception verbs. Implicit in such claims are two elements: firstly, that the sharing of two related senses A and B by a given form is cross-linguistically widespread, and matched by a complementary lack of some rival polysemy, and secondly that the explanation for the ubiquity of a given pattern of polysemy is ultimately rooted in our shared human cognitive make-up. However, in comparison to the vigorous testing of claimed universals that has occurred in phonology, syntax and even basic lexical meaning, there has been little attempt to test proposed universals of semantic extension against a detailed areal study of non-European languages. To address this problem we examine a broad range of Australian languages to evaluate two hypothesized universals: one by Viberg (1984), concerning patterns of semantic extension across sensory modalities within the domain of perception verbs (i.e. intra-field extensions), and the other by Sweetser (1990), concerning the mapping of perception to cognition (i.e. trans-field extensions). Testing against the Australian data allows one claimed universal to survive, but demolishes the other, even though both assign primacy to vision among the senses.

    Neurocognitive Informatics Manifesto.

    Informatics studies all aspects of the structure of natural and artificial information systems. Theoretical and abstract approaches to information have made great advances, but human information processing is still unmatched in many areas, including information management, representation and understanding. Neurocognitive informatics is a new, emerging field that should help to improve the matching of artificial and natural systems, and inspire better computational algorithms to solve problems that are still beyond the reach of machines. This position paper gives examples of neurocognitive inspirations and promising directions in this area.

    Cognitive control and discourse comprehension in schizophrenia.

    Cognitive deficits across a wide range of domains have been consistently observed in schizophrenia and are linked to poor functional outcome (Green, 1996; Carter, 2006). Language abnormalities are among the most salient and include disorganized speech as well as deficits in comprehension. In this review, we aim to evaluate impairments of language processing in schizophrenia in relation to a domain-general control deficit. We first provide an overview of language comprehension in the healthy human brain, stressing the role of cognitive control processes, especially during discourse comprehension. We then discuss cognitive control deficits in schizophrenia, before turning to evidence suggesting that schizophrenia patients are particularly impaired at processing meaningful discourse as a result of deficits in control functions. We conclude that domain-general control mechanisms are impaired in schizophrenia and that during language comprehension this is most likely to result in difficulties during the processing of discourse-level context, which involves integrating and maintaining multiple levels of meaning. Finally, we predict that language comprehension in schizophrenia patients will be most impaired during discourse processing. We further suggest that discourse comprehension problems in schizophrenia might be mitigated when conflicting information is absent and strong relations amongst individual words are present in the discourse context.
    "There is no 'centre of Speech' in the brain any more than there is a faculty of Speech in the mind. The entire brain, more or less, is at work in a man who uses language." (William James, The Principles of Psychology, 1890)
    "The mind in dementia praecox is like an orchestra without a conductor." (Kraepelin, 1919)