
The Latent Relation Mapping Engine: Algorithm and Experiments

By Peter D. Turney


Many AI researchers and cognitive scientists have argued that analogy is the core of cognition. The most influential work on computational modeling of analogy-making is Structure Mapping Theory (SMT) and its implementation in the Structure Mapping Engine (SME). A limitation of SME is its requirement for complex hand-coded representations. We introduce the Latent Relation Mapping Engine (LRME), which combines ideas from SME and Latent Relational Analysis (LRA) in order to remove the requirement for hand-coded representations. LRME builds analogical mappings between lists of words, using a large corpus of raw text to automatically discover the semantic relations among the words. We evaluate LRME on a set of twenty analogical mapping problems, ten based on scientific analogies and ten based on common metaphors. LRME achieves human-level performance on the twenty problems. We compare LRME with a variety of alternative approaches and find that they are not able to reach the same level of performance.
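The mapping task the abstract describes can be illustrated with a toy sketch: given a source word list and a target word list, search one-to-one mappings and keep the one whose word pairs are connected by the most similar semantic relations. In LRME the pairwise relational similarity comes from corpus statistics (as in LRA); here `relational_sim` and its hand-set `TOY_SIM` table are hypothetical stand-ins for illustration only.

```python
from itertools import permutations

# Hypothetical stand-in for LRA-style relational similarity. In LRME this
# score would be derived from co-occurrence statistics in a large corpus;
# here it is a tiny hand-set table for one classic analogy.
TOY_SIM = {
    (("sun", "planet"), ("nucleus", "electron")): 1.0,
    (("planet", "sun"), ("electron", "nucleus")): 1.0,
}

def relational_sim(pair_a, pair_b):
    """Similarity between the relation in pair_a and the relation in pair_b."""
    return TOY_SIM.get((pair_a, pair_b), 0.0)

def best_mapping(source, target):
    """Brute-force search over one-to-one mappings from source words to
    target words, scoring each candidate by the summed relational
    similarity over all ordered word pairs."""
    best, best_score = None, float("-inf")
    for perm in permutations(target):
        score = sum(
            relational_sim((source[i], source[j]), (perm[i], perm[j]))
            for i in range(len(source))
            for j in range(len(source))
            if i != j
        )
        if score > best_score:
            best, best_score = dict(zip(source, perm)), score
    return best, best_score

mapping, score = best_mapping(("sun", "planet"), ("nucleus", "electron"))
# mapping == {"sun": "nucleus", "planet": "electron"}
```

The brute-force search is only feasible because the mapping problems involve short word lists; with a corpus-derived similarity function in place of the toy table, the same scoring scheme selects the mapping whose relations best align.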

Topics: Language, Computational Linguistics, Semantics, Machine Learning, Artificial Intelligence
Publisher: AI Access Foundation
Year: 2008

