Large Language Model Displays Emergent Ability to Interpret Novel Literary Metaphors
Recent advances in the performance of large language models (LLMs) have
sparked debate over whether, given sufficient training, high-level human
abilities emerge in such generic forms of artificial intelligence (AI). Despite
the exceptional performance of LLMs on a wide range of tasks involving natural
language processing and reasoning, there has been sharp disagreement as to
whether their abilities extend to more creative human abilities. A core example
is the ability to interpret novel metaphors. Given the enormous and non-curated
text corpora used to train LLMs, a serious obstacle to designing tests is the
requirement of finding novel yet high-quality metaphors that are unlikely to
have been included in the training data. Here we assessed the ability of GPT-4,
a state-of-the-art large language model, to provide natural-language
interpretations of novel literary metaphors drawn from Serbian poetry and
translated into English. Despite exhibiting no signs of having been exposed to
these metaphors previously, the AI system consistently produced detailed and
incisive interpretations. Human judges, blind to the fact that an AI model was
involved, rated metaphor interpretations generated by GPT-4 as superior to those
provided by a group of college students. In interpreting reversed metaphors,
GPT-4, as well as humans, exhibited signs of sensitivity to the Gricean
cooperative principle. In addition, for several novel English poems GPT-4
produced interpretations that were rated as excellent or good by a human
literary critic. These results indicate that LLMs such as GPT-4 have acquired an
emergent ability to interpret complex metaphors, including those embedded in
novel poems.
Probabilistic Analogical Mapping with Semantic Relation Networks
The human ability to flexibly reason using analogies with domain-general
content depends on mechanisms for identifying relations between concepts, and
for mapping concepts and their relations across analogs. Building on a recent
model of how semantic relations can be learned from non-relational word
embeddings, we present a new computational model of mapping between two
analogs. The model adopts a Bayesian framework for probabilistic graph
matching, operating on semantic relation networks constructed from distributed
representations of individual concepts and of relations between concepts.
Through comparisons of model predictions with human performance in a novel
mapping task requiring integration of multiple relations, as well as in several
classic studies, we demonstrate that the model accounts for a broad range of
phenomena involving analogical mapping by both adults and children. We also
show the potential for extending the model to deal with analog retrieval. Our
approach demonstrates that human-like analogical mapping can emerge from
comparison mechanisms applied to rich semantic representations of individual
concepts and relations.
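The mapping idea described in this abstract can be illustrated with a toy sketch: represent each ordered pair of concepts by a relation vector, score candidate concept correspondences by the similarity of their outgoing relations, and softly normalize the score matrix toward a one-to-one mapping. Everything here is a simplifying assumption for illustration (random stand-in embeddings, difference-vector relations, Sinkhorn-style normalization); it is not the published PAM model.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Toy "word embeddings" for two analogs of three concepts each.
source = {name: rng.normal(size=dim) for name in ["sun", "planet", "gravity"]}
target = {name: rng.normal(size=dim) for name in ["nucleus", "electron", "charge"]}

def relation_vectors(analog):
    """Represent each ordered pair of concepts by the difference of embeddings
    (an illustrative stand-in for a learned relation representation)."""
    names = list(analog)
    return {(a, b): analog[b] - analog[a] for a in names for b in names if a != b}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def map_analogs(src, tgt, iters=50):
    """Score each concept correspondence by aggregating relation similarities,
    then alternate row/column normalization toward a permutation-like map."""
    s_rel, t_rel = relation_vectors(src), relation_vectors(tgt)
    s_names, t_names = list(src), list(tgt)
    score = np.zeros((len(s_names), len(t_names)))
    for i, a in enumerate(s_names):
        for j, x in enumerate(t_names):
            # Evidence that a maps to x: similarity of relations leaving a and x.
            sims = [cosine(s_rel[(a, b)], t_rel[(x, y)])
                    for b in s_names if b != a
                    for y in t_names if y != x]
            score[i, j] = np.mean(sims)
    p = np.exp(score)
    for _ in range(iters):  # Sinkhorn-style alternating normalization
        p /= p.sum(axis=1, keepdims=True)
        p /= p.sum(axis=0, keepdims=True)
    return s_names, t_names, p

s_names, t_names, p = map_analogs(source, target)
for i, a in enumerate(s_names):
    print(a, "->", t_names[int(np.argmax(p[i]))])
```

With random stand-in embeddings the resulting correspondences are arbitrary; the point is the pipeline: relation vectors, a relation-similarity score matrix, and probabilistic normalization in place of hand-coded symbolic matching.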
Impact of Semantic Representations on Analogical Mapping with Transitive Relations
Analogy problems involving multiple ordered relations of the same type create mapping ambiguity, requiring some mechanism for relational integration to achieve mapping accuracy. We address the question of whether the integration of ordered relations depends on their logical form alone, or on semantic representations that differ across relation types. We developed a triplet mapping task that provides a basic paradigm to investigate analogical reasoning with simple relational structures. Experimental results showed that mapping performance differed across orderings based on category, linear order, and causal relations, providing evidence that each transitive relation has its own semantic representation. Hence, human analogical mapping of ordered relations does not depend solely on their formal property of transitivity. Instead, human ability to solve mapping problems by integrating relations relies on the semantics of relation representations. We also compared human performance to the performance of several vector-based computational models of analogy. These models performed above chance but fell short of human performance for some relations, highlighting the need for further model development.
Predicting Patterns of Similarity Among Abstract Semantic Relations
Although models of word meanings based on distributional semantics have proved effective in predicting human judgments of similarity among individual concepts, it is less clear whether or how such models might be extended to account for judgments of similarity among relations between concepts. Here we combine an individual-differences approach with computational modeling to predict human judgments of similarity among word pairs instantiating a variety of abstract semantic relations (e.g., contrast, cause-effect, part-whole). A measure of cognitive capacity predicted individual differences in the ability to discriminate among distinct relations. The human pattern of relational similarity judgments, both at the group level and for individual participants, was best predicted by a model that takes representations of word meanings based on distributional semantics as its inputs and uses them to learn an explicit representation of relations. These findings indicate that although the meanings of abstract semantic relations are not directly coded in the meanings of individual words, important aspects of relational similarity can be derived from distributional semantics.
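The core idea, that relational similarity can be derived from distributional word representations, can be sketched minimally: represent the relation in a word pair by the difference of its word vectors, then compare pairs by cosine similarity. The three-dimensional vectors below are hand-picked illustrative stand-ins, not real distributional embeddings, and the difference-vector scheme is a deliberate simplification of the learned relation model the abstract describes.

```python
import numpy as np

# Toy stand-in "embeddings" (hypothetical values, for illustration only).
emb = {
    "hot":   np.array([1.0, 0.2, 0.0]),
    "cold":  np.array([-1.0, 0.2, 0.0]),
    "tall":  np.array([0.9, 0.0, 0.3]),
    "short": np.array([-0.9, 0.0, 0.3]),
    "wheel": np.array([0.1, 0.8, 0.5]),
    "car":   np.array([0.2, 0.9, -0.4]),
}

def relation(a, b):
    """Difference-vector stand-in for the relation holding from a to b."""
    return emb[b] - emb[a]

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Two contrast pairs (hot:cold, tall:short) should yield more similar
# relation vectors than a contrast pair and a part-whole pair (wheel:car).
contrast_sim = cosine(relation("hot", "cold"), relation("tall", "short"))
cross_sim = cosine(relation("hot", "cold"), relation("wheel", "car"))
print(contrast_sim > cross_sim)  # relation similarity respects relation type
```

Here two pairs instantiating the same relation type come out more similar than pairs instantiating different types, the pattern of relational similarity the abstract models with a learned, explicit relation representation.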
Asymmetry in similarity and difference judgments results from asymmetry in the complexity of the relations same and different
Explicit similarity judgments tend to emphasize relational information more than do difference judgments. We propose and test the hypothesis that this asymmetry arises because human reasoners represent the relation different as the negation of the relation same, so that processing difference is more cognitively demanding than processing similarity. For both verbal comparisons between word pairs, and visual comparisons between sets of geometric shapes, we asked participants to select which of two options was either more similar to or more different from a standard. On unambiguous trials, one option was unambiguously more similar to the standard; on ambiguous trials, one option was more featurally similar to the standard, whereas the other was more relationally similar. Given the higher cognitive complexity of assessing relational similarity, we predicted that detecting relational difference would be particularly demanding. We found that participants (1) had more difficulty accurately detecting relational difference than they did relational similarity on unambiguous trials, and (2) tended to emphasize relational information more when judging similarity than when judging difference on ambiguous trials. The latter finding was captured by a computational model of comparison that weights relational information more heavily for similarity than for difference judgments. Our results provide convergent evidence for a representational asymmetry between the relations same and different.
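The weighting asymmetry captured by the comparison model can be sketched with hypothetical numbers: score each option as a weighted blend of featural and relational similarity, with the relational weight higher when judging similarity than when judging difference. The specific weights and trial values below are made up for illustration; only the asymmetry itself comes from the abstract.

```python
def comparison_score(feat_sim, rel_sim, judging):
    # Hypothetical weights: relations count more when judging similarity.
    w_rel = 0.7 if judging == "similarity" else 0.4
    return (1 - w_rel) * feat_sim + w_rel * rel_sim

# An ambiguous trial: option A is more featurally similar to the standard,
# option B is more relationally similar (illustrative values).
a = {"feat_sim": 0.9, "rel_sim": 0.2}
b = {"feat_sim": 0.3, "rel_sim": 0.9}

for judging in ("similarity", "difference"):
    score_a = comparison_score(a["feat_sim"], a["rel_sim"], judging)
    score_b = comparison_score(b["feat_sim"], b["rel_sim"], judging)
    choice = "B (relational)" if score_b > score_a else "A (featural)"
    print(judging, "->", choice)
```

With these illustrative numbers the relationally similar option dominates under the similarity weighting, while the featurally similar option dominates under the difference weighting, mirroring the reported tendency to emphasize relations more when judging similarity.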
From Semantic Vectors to Analogical Mapping
Human reasoning goes beyond knowledge about individual entities, extending to inferences based on relations between entities. Here we focus on the use of relations in verbal analogical mapping, sketching a general approach based on assessing similarity between patterns of semantic relations between words. This approach combines research in artificial intelligence with work in psychology and cognitive science, with the aim of minimizing hand coding of text inputs for reasoning tasks. The computational framework takes as inputs vector representations of individual word meanings, coupled with semantic representations of the relations between words, and uses these inputs to form semantic-relation networks for individual analogues. Analogical mapping is operationalized as graph matching under cognitive and computational constraints. The approach highlights the central role of semantics in analogical mapping.