Verbal analogy problem sets: An inventory of testing materials.
Analogical reasoning is an active topic of investigation across education, artificial intelligence (AI), cognitive psychology, and related fields. In all of these fields, explicit analogy problems provide useful tools for investigating the mechanisms underlying analogical reasoning. Such problem sets have been developed by researchers working in educational testing, AI, and cognitive psychology; however, these analogy tests have not been made systematically accessible across all the relevant fields. The present paper aims to remedy this situation by presenting a working inventory of verbal analogy problem sets, intended to capture and organize sets from diverse sources.
How Graphs Mediate Analog and Symbolic Representation
Three experiments are reported that examine the impact of people's goals and conceptual understanding on graph interpretation, in order to determine how people use graphical representations to evaluate functional dependencies between continuous variables. Subjects made inferences about the relative rate of two continuous linear variables (altitude and temperature). We varied the assignments of variables to axes, the perceived cause-effect relation between the variables, and the causal status of the variable being queried. The most striking finding was that accuracy was greater when the Slope-Mapping Constraint was honored, which requires that the variable being queried (usually the effect or dependent variable, but potentially the cause instead) is assigned to the vertical axis, so that steeper lines map to faster changes in the queried variable. This constraint dominates when it conflicts with others, such as preserving the low-level mapping of altitude onto the vertical axis. Our findings emphasize the basic conclusion that graphs are not pictures, but rather symbolic systems for representing higher-order relations. We propose that graphs provide external instantiations of intermediate mental representations, which enable people to move from pictorial representations to abstractions through the use of natural mappings between perceptual properties and conceptual relations.
LISA: A Computational Model of Analogical Inference and Schema Induction
The relationship between analogy and schema induction is widely acknowledged and constitutes an important motivation for developing computational models of analogical mapping. However, most models of analogical mapping provide no clear basis for supporting schema induction. We describe LISA (Hummel & Holyoak, 1996), a recent model of analog retrieval and mapping that is explicitly designed to provide a platform for schema induction and other forms of inference. LISA represents predicates and their arguments (i.e., objects or propositions) as patterns of activation distributed over units representing semantic primitives. These representations are actively (dynamically) bound into propositions by synchronizing oscillations in their activation: Arguments fire in synchrony with the case roles to which they are bound, and out of synchrony with other case roles and arguments. By activating propositions in LTM, these patterns drive analog retrieval and mapping. This approach to analog retrieval and mapping accounts for numerous findings in human analogical reasoning (Hummel & Holyoak, 1996). Augmented with a capacity for intersection discovery and unsupervised learning, the architecture supports analogical inference and schema induction as a natural consequence. We describe LISA's account of schema induction and inference, and present some preliminary simulation results.
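The binding-by-synchrony idea described in this abstract can be illustrated in a few lines. The following is a minimal sketch, not the published LISA implementation: discrete time slots stand in for oscillation phases, and the proposition and semantic features (loves(Jim, Mary), features like "agent") are made-up examples.

```python
# Minimal sketch of role-filler binding by synchrony (hypothetical
# features and propositions; discrete slots approximate phase-locked firing).

def bind(proposition):
    """Assign each role-filler pair its own time slot, so a role and its
    filler fire together (in synchrony) and apart from other pairs."""
    return {slot: pair for slot, pair in enumerate(proposition)}

def coactive_features(binding, semantics):
    """Per time slot, the union of semantic primitives that are active,
    i.e. the distributed pattern a bound role-filler pair would produce."""
    return {slot: semantics[role] | semantics[filler]
            for slot, (role, filler) in binding.items()}

# Illustrative semantic primitives for loves(Jim, Mary).
semantics = {
    "lover":   {"emotion", "agent"},
    "beloved": {"emotion", "patient"},
    "Jim":     {"human", "male"},
    "Mary":    {"human", "female"},
}

prop = [("lover", "Jim"), ("beloved", "Mary")]
binding = bind(prop)
patterns = coactive_features(binding, semantics)
```

Because each pair occupies its own slot, "Jim" co-fires with "lover" but never with "beloved", which is what keeps loves(Jim, Mary) distinct from loves(Mary, Jim).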
Large Language Model Displays Emergent Ability to Interpret Novel Literary Metaphors
Recent advances in the performance of large language models (LLMs) have sparked debate over whether, given sufficient training, high-level human abilities emerge in such generic forms of artificial intelligence (AI). Despite the exceptional performance of LLMs on a wide range of tasks involving natural language processing and reasoning, there has been sharp disagreement as to whether their abilities extend to more creative human abilities. A core example is the ability to interpret novel metaphors. Given the enormous and non-curated text corpora used to train LLMs, a serious obstacle to designing tests is the requirement of finding novel yet high-quality metaphors that are unlikely to have been included in the training data. Here we assessed the ability of GPT-4, a state-of-the-art large language model, to provide natural-language interpretations of novel literary metaphors drawn from Serbian poetry and translated into English. Despite exhibiting no signs of having been exposed to these metaphors previously, the AI system consistently produced detailed and incisive interpretations. Human judges, blind to the fact that an AI model was involved, rated metaphor interpretations generated by GPT-4 as superior to those provided by a group of college students. In interpreting reversed metaphors, GPT-4, as well as humans, exhibited signs of sensitivity to the Gricean cooperative principle. In addition, for several novel English poems GPT-4 produced interpretations that were rated as excellent or good by a human literary critic. These results indicate that LLMs such as GPT-4 have acquired an emergent ability to interpret complex metaphors, including those embedded in novel poems.
The form of analog size information in memory
The information used to choose the larger of two objects from memory was investigated in two experiments that compared the effects of a number of variables on the performance of subjects who either were instructed to use imagery in the comparison task or were not so instructed. Subjects instructed to use imagery could perform the task more quickly if they prepared themselves with an image of one of the objects at its normal size, rather than with an image that was abnormally big or small, or no image at all. Such subjects were also subject to substantial selective interference when asked to simultaneously maintain irrelevant images of digits. In contrast, when subjects were not specifically instructed to use imagery to reach their decisions, an initial image at normal size did not produce significantly faster decisions than no image, or a large or small image congruent with the correct decision. The selective interference created by simultaneously imaging digits was reduced for subjects not told to base their size comparisons on imagery. The difficulty of the size discrimination did not interact significantly with any other variable. The results suggest that subjects, unless specifically instructed to use imagery, can compare the size of objects in memory using information more abstract than visual imagery.
Emergent Analogical Reasoning in Large Language Models
The recent advent of large language models has reinvigorated debate over whether human cognitive capacities might emerge in such generic models given sufficient training data. Of particular interest is the ability of these models to reason about novel problems zero-shot, without any direct training. In human cognition, this capacity is closely tied to an ability to reason by analogy. Here, we performed a direct comparison between human reasoners and a large language model (the text-davinci-003 variant of GPT-3) on a range of analogical tasks, including a novel text-based matrix reasoning task closely modeled on Raven's Progressive Matrices. We found that GPT-3 displayed a surprisingly strong capacity for abstract pattern induction, matching or even surpassing human capabilities in most settings. Our results indicate that large language models such as GPT-3 have acquired an emergent ability to find zero-shot solutions to a broad range of analogy problems.
Probabilistic Analogical Mapping with Semantic Relation Networks
The human ability to flexibly reason using analogies with domain-general content depends on mechanisms for identifying relations between concepts, and for mapping concepts and their relations across analogs. Building on a recent model of how semantic relations can be learned from non-relational word embeddings, we present a new computational model of mapping between two analogs. The model adopts a Bayesian framework for probabilistic graph matching, operating on semantic relation networks constructed from distributed representations of individual concepts and of relations between concepts. Through comparisons of model predictions with human performance in a novel mapping task requiring integration of multiple relations, as well as in several classic studies, we demonstrate that the model accounts for a broad range of phenomena involving analogical mapping by both adults and children. We also show the potential for extending the model to deal with analog retrieval. Our approach demonstrates that human-like analogical mapping can emerge from comparison mechanisms applied to rich semantic representations of individual concepts and relations.
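The core ingredients named in this abstract, relation representations derived from word embeddings and similarity-driven mapping, can be sketched in miniature. This is a toy illustration under loud assumptions: the hand-made 3-d "embeddings" and the difference-vector relation encoding are stand-ins, and a single best-match comparison replaces the model's Bayesian probabilistic graph matching over whole relation networks.

```python
import math

# Hypothetical 3-d "embeddings" -- illustrative values, not the learned
# distributed representations used by the published model.
emb = {
    "king": (0.9, 0.8, 0.1), "queen": (0.9, 0.2, 0.1),
    "man":  (0.1, 0.8, 0.0), "woman": (0.1, 0.2, 0.0),
}

def rel(a, b):
    """Encode the relation a -> b as an embedding difference vector."""
    return tuple(x - y for x, y in zip(emb[a], emb[b]))

def cos(u, v):
    """Cosine similarity between two relation vectors."""
    num = sum(x * y for x, y in zip(u, v))
    den = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(x * x for x in v))
    return num / den

def best_mapping(source_pair, candidate_pairs):
    """Map the source pair onto the candidate whose relation vector is most
    similar -- a one-relation stand-in for probabilistic graph matching."""
    return max(candidate_pairs, key=lambda t: cos(rel(*source_pair), rel(*t)))

mapped = best_mapping(("king", "queen"), [("man", "woman"), ("woman", "man")])
```

Here the king:queen relation vector aligns with man:woman rather than its reversal, showing how relation similarity alone can order a mapping; the full model instead scores entire networks of such relations jointly.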
A positional discriminability model of linear order judgments
The process of judging the relative order of stimuli in a visual array was investigated in three experiments. In the basic paradigm, a linear array of six colored lines was presented briefly, and subjects decided which of two target lines was the leftmost or rightmost (Experiment 1). The target lines appeared in all possible combinations of serial positions, and reaction time (RT) was measured. Distance and semantic congruity effects were obtained, as well as a bowed serial position function. The RT pattern resembled that observed in comparable studies with memorized linear orderings. The serial position function was flattened when the background lines were homogeneously dissimilar to the target lines (Experiment 2). Both a distance effect and bowed serial position functions were obtained when subjects judged which of two target lines was below a black bar cue (Experiment 3). The results favored an analog positional discriminability model over a serial ends-inward scanning model. The positional discriminability model was proposed as a "core model" for the processes involved in judging relative order or magnitude in the domains of memory and perception.