
    Learning to Rank based on Analogical Reasoning

    Object ranking or "learning to rank" is an important problem in the realm of preference learning. On the basis of training data in the form of a set of rankings of objects represented as feature vectors, the goal is to learn a ranking function that predicts a linear order of any new set of objects. In this paper, we propose a new approach to object ranking based on principles of analogical reasoning. More specifically, our inference pattern is formalized in terms of so-called analogical proportions and can be summarized as follows: given objects A, B, C, D, if object A is known to be preferred to B, and C relates to D as A relates to B, then C is (supposedly) preferred to D. Our method applies this pattern as a main building block and combines it with ideas and techniques from instance-based learning and rank aggregation. Based on first experimental results for data sets from various domains (sports, education, tourism, etc.), we conclude that our approach is highly competitive. It appears to be specifically interesting in situations in which the objects come from different subdomains and hence require a kind of knowledge transfer.
    Comment: Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), 8 pages
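    The inference pattern described above can be sketched in a few lines of code. This is only an illustrative reading, not the paper's actual method: analogical proportion between feature vectors is approximated here by closeness of difference vectors, and the hypothetical `k`-nearest-proportion vote stands in for the paper's combination with instance-based learning and rank aggregation.

    ```python
    import math

    def proportion_strength(a, b, c, d):
        """How well "a relates to b as c relates to d" holds, measured
        (as one simple choice) by closeness of the difference vectors
        a-b and c-d. Inputs are equal-length lists of floats."""
        return -math.sqrt(sum(((ai - bi) - (ci - di)) ** 2
                              for ai, bi, ci, di in zip(a, b, c, d)))

    def predict_preference(c, d, known_pairs, k=3):
        """Predict whether c is preferred to d by voting over the k
        strongest analogical proportions to known preference pairs.

        known_pairs: list of (a, b) with a known to be preferred to b.
        """
        scores = []
        for a, b in known_pairs:
            # a:b :: c:d supports "c preferred to d";
            # a:b :: d:c supports the opposite conclusion.
            scores.append((proportion_strength(a, b, c, d), +1))
            scores.append((proportion_strength(a, b, d, c), -1))
        scores.sort(key=lambda s: s[0], reverse=True)
        return sum(sign for _, sign in scores[:k]) > 0
    ```
    
    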

    Teaching and Learning by Analogy: Psychological Perspectives on the Parables of Jesus

    Christian teachers are often encouraged to use Jesus’ teaching strategies as models for their own pedagogy. Jesus frequently utilized analogical comparisons, or parables, to help his learners understand elements of his Gospel message. Although teachers can use analogical models to facilitate comprehension, such models also can sow the seeds of confusion and misconception. Recent advances in cognitive psychology have provided new theoretical frameworks to help us understand how instructional analogies function in the teaching-learning process. The goal of this paper is to analyze Jesus’ analogical teaching from these psychological perspectives, with implications for all teachers who utilize instructional analogies. In addition to reviewing basic analogical learning processes, I explore a six-variable model to account systematically for potential analogical misconceptions.

    A connectionist account of the emergence of the literal-metaphorical-anomalous distinction in young children

    We present the first developmental computational model of metaphor comprehension, which seeks to relate the emergence of a distinction between literal and non-literal similarity in young children to the development of semantic representations. The model gradually learns to distinguish literal from metaphorical semantic juxtapositions as it acquires more knowledge about the vehicle domain. In accordance with Keil (1986), the separation of literal from metaphorical comparisons is found to depend on the maturity of the vehicle concept stored within the network. The model generates a number of explicit novel predictions.

    Comparison and Mapping Facilitate Relation Discovery and Predication

    Relational concepts play a central role in human perception and cognition, but little is known about how they are acquired. For example, how do we come to understand that physical force is a higher-order multiplicative relation between mass and acceleration, or that two circles are the same-shape in the same way that two squares are? A recent model of relational learning, DORA (Discovery of Relations by Analogy; Doumas, Hummel & Sandhofer, 2008), predicts that comparison and analogical mapping play a central role in the discovery and predication of novel higher-order relations. We report two experiments testing and confirming this prediction

    The Latent Relation Mapping Engine: Algorithm and Experiments

    Many AI researchers and cognitive scientists have argued that analogy is the core of cognition. The most influential work on computational modeling of analogy-making is Structure Mapping Theory (SMT) and its implementation in the Structure Mapping Engine (SME). A limitation of SME is the requirement for complex hand-coded representations. We introduce the Latent Relation Mapping Engine (LRME), which combines ideas from SME and Latent Relational Analysis (LRA) in order to remove the requirement for hand-coded representations. LRME builds analogical mappings between lists of words, using a large corpus of raw text to automatically discover the semantic relations among the words. We evaluate LRME on a set of twenty analogical mapping problems, ten based on scientific analogies and ten based on common metaphors. LRME achieves human-level performance on the twenty problems. We compare LRME with a variety of alternative approaches and find that they are not able to reach the same level of performance.
    Comment: related work available at http://purl.org/peter.turney
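    The core task LRME solves, mapping one word list onto another so that pairwise relations are preserved, can be sketched as follows. This is a toy stand-in, not the LRME algorithm: relations between words are represented by embedding-vector differences (rather than corpus-derived LRA relation vectors), and the best mapping is found by brute force over permutations, which is only feasible for short lists.

    ```python
    from itertools import permutations

    def relation(u, v):
        # Stand-in for corpus-based relation discovery: represent the
        # relation between two words by their embedding difference.
        return [ui - vi for ui, vi in zip(u, v)]

    def cosine(x, y):
        dot = sum(a * b for a, b in zip(x, y))
        nx = sum(a * a for a in x) ** 0.5
        ny = sum(b * b for b in y) ** 0.5
        return dot / (nx * ny) if nx and ny else 0.0

    def best_mapping(source, target, emb):
        """Brute-force the word-to-word mapping between two equal-length
        word lists that best preserves pairwise relational similarity.

        emb: dict mapping each word to its embedding vector."""
        best, best_score = None, float("-inf")
        for perm in permutations(target):
            score = sum(
                cosine(relation(emb[source[i]], emb[source[j]]),
                       relation(emb[perm[i]], emb[perm[j]]))
                for i in range(len(source))
                for j in range(len(source)) if i != j)
            if score > best_score:
                best, best_score = dict(zip(source, perm)), score
        return best
    ```

    With hypothetical toy embeddings in which sun:planet parallels nucleus:electron, `best_mapping(["sun", "planet"], ["nucleus", "electron"], emb)` maps sun to nucleus and planet to electron.
    
    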