The Latent Relation Mapping Engine: Algorithm and Experiments
Many AI researchers and cognitive scientists have argued that analogy is the
core of cognition. The most influential work on computational modeling of
analogy-making is Structure Mapping Theory (SMT) and its implementation in the
Structure Mapping Engine (SME). A limitation of SME is the requirement for
complex hand-coded representations. We introduce the Latent Relation Mapping
Engine (LRME), which combines ideas from SME and Latent Relational Analysis
(LRA) in order to remove the requirement for hand-coded representations. LRME
builds analogical mappings between lists of words, using a large corpus of raw
text to automatically discover the semantic relations among the words. We
evaluate LRME on a set of twenty analogical mapping problems, ten based on
scientific analogies and ten based on common metaphors. LRME achieves
human-level performance on the twenty problems. We compare LRME with a variety
of alternative approaches and find that they are not able to reach the same
level of performance. (Related work is available at http://purl.org/peter.turney)
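The mapping step described above can be sketched as a brute-force search over bijections between the two word lists, scoring each candidate mapping by the summed relational similarity of corresponding word pairs. This is a toy illustration: the similarity table and all names below are assumptions standing in for the corpus-derived relational statistics LRME actually uses.

```python
from itertools import permutations

def rel_sim(pair_a, pair_b, table):
    # Toy stand-in for corpus-derived relational similarity between two
    # word pairs; a real system would compute this from pattern statistics
    # over a large text corpus.
    return table.get((pair_a, pair_b), 0.0)

def best_mapping(source, target, table):
    """Exhaustively score every bijection from source words to target words,
    summing relational similarity over corresponding ordered word pairs."""
    best, best_score = None, float("-inf")
    for perm in permutations(target):
        mapping = dict(zip(source, perm))
        score = sum(
            rel_sim((a, b), (mapping[a], mapping[b]), table)
            for a in source for b in source if a != b
        )
        if score > best_score:
            best, best_score = mapping, score
    return best, best_score
```

Exhaustive search is exponential in the list length; it is only meant to make the scoring criterion concrete for the small word lists used in such mapping problems.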
Simulating the Effects of Relational Language in the Development of Spatial Mapping Abilities
Young children's performance on certain mapping tasks can be improved by introducing relational language (Gentner, 1998). We show that children's performance on a spatial mapping task can be modeled using the Structure-Mapping Engine (SME) to simulate the comparisons involved. To model the effects of relational language in our simulations, we vary the quantity and nature of the spatial relations and object descriptions represented. The results reproduce the trends observed in the developmental studies of Loewenstein & Gentner (1998; in preparation). The results of these simulations are consistent with the claim that gains in relational representation are a major contributor to the development of spatial mapping ability. We further suggest that relational language can promote relational representation.
Automated Generation of Cross-Domain Analogies via Evolutionary Computation
Analogy plays an important role in creativity, and is extensively used in
science as well as art. In this paper we introduce a technique for the
automated generation of cross-domain analogies based on a novel evolutionary
algorithm (EA). Unlike existing work in computational analogy-making restricted
to creating analogies between two given cases, our approach, for a given case,
is capable of creating an analogy along with the novel analogous case itself.
Our algorithm is based on the concept of "memes", which are units of culture,
or knowledge, undergoing variation and selection under a fitness measure, and
represents evolving pieces of knowledge as semantic networks. Using a fitness
function based on Gentner's structure mapping theory of analogies, we
demonstrate the feasibility of spontaneously generating semantic networks that
are analogous to a given base network. (Conference submission, International Conference on Computational Creativity 2012; 8 pages, 6 figures.)
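A minimal sketch of such an evolutionary loop, assuming a toy encoding of semantic networks as triples, an invented skeleton-overlap fitness standing in for the structure-mapping-based measure, and random mutation of triples (all names here are illustrative, not the paper's):

```python
import random

def skeleton(net):
    """The relational skeleton: which relations appear, ignoring node labels."""
    rels = {}
    for _, rel, _ in net:
        rels[rel] = rels.get(rel, 0) + 1
    return rels

def fitness(candidate, base):
    """Reward matching the base network's relation structure while
    penalising reuse of the base's own entities, since we want an
    analogous network in a *different* domain."""
    cand_sk, base_sk = skeleton(candidate), skeleton(base)
    overlap = sum(min(cand_sk.get(r, 0), n) for r, n in base_sk.items())
    base_nodes = {x for h, _, t in base for x in (h, t)}
    cand_nodes = {x for h, _, t in candidate for x in (h, t)}
    return overlap - len(base_nodes & cand_nodes)

def mutate(net, entities, relations, rng):
    # Replace one randomly chosen triple with a fresh random triple.
    net = list(net)
    i = rng.randrange(len(net))
    net[i] = (rng.choice(entities), rng.choice(relations), rng.choice(entities))
    return tuple(net)

def evolve(base, entities, relations, generations=200, pop_size=20, seed=0):
    rng = random.Random(seed)
    pop = [tuple((rng.choice(entities), rng.choice(relations), rng.choice(entities))
                 for _ in range(len(base))) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda n: fitness(n, base), reverse=True)
        survivors = pop[:pop_size // 2]
        pop = survivors + [mutate(rng.choice(survivors), entities, relations, rng)
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=lambda n: fitness(n, base))
```

For the solar-system base network `(sun attracts planet, planet orbits sun)` and an atom-domain vocabulary, the loop converges on a network with the same relational skeleton over the new entities, which is the spirit (though not the substance) of the memetic algorithm described above.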
Legal analogical reasoning - the interplay between legal theory and artificial intelligence
This thesis examines and critiques attempts by researchers in the field of artificial intelligence and law to simulate legal analogical reasoning. Supported by an analysis of legal theoretical accounts of legal analogising, and an examination of approaches to simulating analogising developed in the field of artificial intelligence, it is argued that simulations of legal analogising fall far short of simulating all that is involved in human analogising. These examinations of legal theory and artificial intelligence inform a detailed critique of simulations of legal analogising, which are shown to be limited in the kind of analogising they can simulate: they cannot reproduce the semantic flexibility that is characteristic of creative analogising. One reason for this limitation, it is argued, is that researchers in artificial intelligence and law have ignored the important role played by legal principles in legal analogising. Improvements in such simulations will come from incorporating the influence of legal principles on legal analogising; until researchers address this semantic flexibility and the role that legal principles play in generating it, simulations of legal analogising will remain restricted and of benefit only for limited uses in restricted areas of the law. Building on the analysis of legal theoretical accounts of legal reasoning and the examination of the processes of analogising, the thesis further argues that legal theoretical accounts of legal analogising are themselves insufficient: legal theorists have ignored important aspects of legal analogising, and their accounts are therefore deficient.
This thesis offers suggestions as to some of the modifications required in legal theory in order to better account for the processes of legal analogising.
Human-Level Performance on Word Analogy Questions by Latent Relational Analysis
This paper introduces Latent Relational Analysis (LRA), a method for measuring relational similarity. LRA has potential applications in many areas, including information extraction, word sense disambiguation, machine translation, and information retrieval. Relational similarity is correspondence between relations, in contrast with attributional similarity, which is correspondence between attributes. When two words have a high degree of attributional similarity, we call them synonyms. When two pairs of words have a high degree of relational similarity, we say that their relations are analogous. For example, the word pair mason/stone is analogous to the pair carpenter/wood; the relations between mason and stone are highly similar to the relations between carpenter and wood. Past work on semantic similarity measures has mainly been concerned with attributional similarity. For instance, Latent Semantic Analysis (LSA) can measure the degree of similarity between two words, but not between two relations. Recently the Vector Space Model (VSM) of information retrieval has been adapted to the task of measuring relational similarity, achieving a score of 47% on a collection of 374 college-level multiple-choice word analogy questions. In the VSM approach, the relation between a pair of words is characterized by a vector of frequencies of predefined patterns in a large corpus. LRA extends the VSM approach in three ways: (1) the patterns are derived automatically from the corpus (they are not predefined), (2) the Singular Value Decomposition (SVD) is used to smooth the frequency data (it is also used this way in LSA), and (3) automatically generated synonyms are used to explore reformulations of the word pairs. LRA achieves 56% on the 374 analogy questions, statistically equivalent to the average human score of 57%. On the related problem of classifying noun-modifier relations, LRA achieves similar gains over the VSM, while using a smaller corpus
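The pattern-vector computation at the heart of this approach can be illustrated with a toy example: rows of a frequency matrix represent word pairs, columns represent joining patterns, SVD truncation smooths the counts, and cosine similarity compares pairs. The matrix values and pattern labels below are invented for illustration; a real system derives them from pattern frequencies in a large corpus.

```python
import numpy as np

def smoothed_rows(freq, k):
    """Truncated SVD: keep the top-k singular components to smooth the
    frequency data, as in LSA/LRA."""
    U, s, Vt = np.linalg.svd(freq, full_matrices=False)
    return U[:, :k] * s[:k]          # pair vectors in the latent space

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy pattern-frequency matrix: rows = word pairs, columns = hypothetical
# joining patterns such as "X works Y", "X shapes Y", "X treats Y".
freq = np.array([
    [8.0, 5.0, 0.0],   # mason:stone
    [7.0, 6.0, 0.0],   # carpenter:wood
    [1.0, 0.0, 9.0],   # doctor:patient
])
vecs = smoothed_rows(freq, k=2)
sim_analogous = cosine(vecs[0], vecs[1])   # mason:stone vs carpenter:wood
sim_unrelated = cosine(vecs[0], vecs[2])   # mason:stone vs doctor:patient
```

With these invented counts, the analogous pairs end up far closer in the latent space than the unrelated ones, which is the effect the relational-similarity measure relies on.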
A Uniform Approach to Analogies, Synonyms, Antonyms, and Associations
Recognizing analogies, synonyms, antonyms, and associations appear to be four distinct tasks, requiring distinct NLP algorithms. In the past, the four tasks have been treated independently, using a wide variety of algorithms. These four semantic classes, however, are a tiny sample of the full range of semantic phenomena, and we cannot afford to create ad hoc algorithms for each semantic phenomenon; we need to seek a unified approach. We propose to subsume a broad range of phenomena under analogies. To limit the scope of this paper, we restrict our attention to the subsumption of synonyms, antonyms, and associations. We introduce a supervised corpus-based machine learning algorithm for classifying analogous word pairs, and we show that it can solve multiple-choice SAT analogy questions, TOEFL synonym questions, ESL synonym-antonym questions, and similar-associated-both questions from cognitive psychology.
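Once a relational-similarity score between word pairs is available, the question types listed above all reduce to the same decision rule: pick the candidate pair most analogous to the stem pair. A minimal sketch, with invented scores standing in for a learned model:

```python
def answer(stem, choices, rel_sim):
    """Answer a multiple-choice question by choosing the candidate pair
    with the highest relational similarity to the stem pair."""
    return max(choices, key=lambda choice: rel_sim(stem, choice))

# Hypothetical similarity scores for the stem pair mason:stone.
scores = {
    ("carpenter", "wood"): 0.83,
    ("doctor", "bicycle"): 0.10,
    ("teacher", "chalk"): 0.45,
}
best = answer(("mason", "stone"), list(scores), lambda stem, c: scores[c])
```

Synonym, antonym, and association questions fit the same template once each is recast as choosing the pair whose relation best matches a reference relation.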
Logic-Based Analogical Reasoning and Learning
Analogy-making is at the core of human intelligence and creativity with
applications to such diverse tasks as commonsense reasoning, learning, language
acquisition, and storytelling. This paper contributes to the foundations of
artificial general intelligence by developing an abstract algebraic framework
for logic-based analogical reasoning and learning in the setting of logic
programming. The main idea is to define analogy in terms of modularity and to
derive abstract forms of concrete programs from a `known' source domain which
can then be instantiated in an `unknown' target domain to obtain analogous
programs. To this end, we introduce algebraic operations for syntactic program
composition and concatenation and illustrate, by giving numerous examples, that
programs have nice decompositions. Moreover, we show how composition gives rise
to a qualitative notion of syntactic program similarity. We then argue that
reasoning and learning by analogy is the task of solving analogical proportions
between logic programs. Interestingly, our work suggests a close relationship
between modularity, generalization, and analogy which we believe should be
explored further in the future. In a broader sense, this paper is a first step
towards an algebraic and mainly syntactic theory of logic-based analogical
reasoning and learning in knowledge representation and reasoning systems, with
potential applications to fundamental AI problems like commonsense reasoning
and computational learning and creativity.
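The abstract-then-instantiate idea can be illustrated in miniature: treat a logic program as a set of clause strings, abstract the 'known' source program by substituting placeholders for its domain symbols, then instantiate the abstraction in the 'unknown' target domain to obtain an analogous program. Naive string substitution here stands in for the paper's algebraic operators, and the even/odd programs and substitution maps are illustrative assumptions.

```python
def abstract(program, mapping):
    """Replace each domain symbol with its placeholder, yielding an
    abstract form of the program."""
    out = set()
    for clause in program:
        for sym, var in mapping.items():
            clause = clause.replace(sym, var)
        out.add(clause)
    return out

def instantiate(abstract_program, mapping):
    """Replace each placeholder with a target-domain symbol, yielding an
    analogous program in the new domain."""
    out = set()
    for clause in abstract_program:
        for var, sym in mapping.items():
            clause = clause.replace(var, sym)
        out.add(clause)
    return out

# 'Known' source program: the usual definition of even numerals.
source = {"even(0).", "even(s(s(X))) :- even(X)."}
abstract_form = abstract(source, {"even": "P", "0": "B"})
# Instantiating the abstraction with a new predicate and base case yields
# the analogous program defining odd numerals.
target = instantiate(abstract_form, {"P": "odd", "B": "s(0)"})
```

The string-level substitution is deliberately crude; the point is only the shape of the workflow, in which a shared abstract form mediates between source and target programs.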