An Associative Theory of Semantic Representation
We present a new version of the Syntagmatic-Paradigmatic model (SP; Dennis, 2005) as a representational substrate for encoding meaning from textual input. We depart from the earlier SP model in three ways. Instead of two multi-trace memory stores, we adopt an auto-associative network. Instead of treating a sentence as the unit of representation, we go down a scale to the level of words. Finally, we specify all stages of processing within a single architecture. Using several question-answering examples, we show how the model is capable of forming representations of words that are independent of surface form. We end with a discussion of how the current model can provide a mechanistic account of elaborative and inferential processes during comprehension.
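To make the architectural shift concrete, here is a minimal sketch, assuming a toy vocabulary, random word codes, and a simple Hebbian storage rule (none of which come from the paper), of how an auto-associative network can store word-level patterns and complete a degraded probe through recurrent feedback:

```python
# Minimal sketch, not the authors' implementation: a Hebbian auto-associative
# network stores word-level patterns and cleans up a noisy probe by settling.
# The vocabulary, dimensionality, and settling rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["mary", "bit", "john", "dog"]
vecs = {w: rng.standard_normal(64) / 8.0 for w in vocab}   # random word codes

W = np.zeros((64, 64))
for w in vocab:                          # auto-associative (outer-product) storage
    v = vecs[w]
    W += np.outer(v, v)

def settle(probe, steps=5):
    """Let a partial or noisy probe settle toward a stored pattern."""
    x = probe.copy()
    for _ in range(steps):
        x = np.tanh(W @ x)               # recurrent feedback with bounded activity
    return x

noisy = vecs["mary"] + rng.standard_normal(64) * 0.05
out = settle(noisy)
print("closest stored word:", max(vocab, key=lambda w: float(vecs[w] @ out)))
```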
Associations versus Propositions in Memory for Sentences
Propositional accounts of organization in memory have dominated theory in compositional semantics, but it is an open question whether their adoption has been necessitated by the data. We present data from a narrative comprehension experiment, designed to distinguish between a propositional account of semantic representation and an associative account based on the Syntagmatic-Paradigmatic (Dennis, 2005; SP) model. We manipulated expected propositional interference by including distractor sentences that shared a verb with a target sentence. We manipulated paradigmatic interference by including two distractor sentences, one of which contained a name from a target sentence. That is, we increased the second-order co-occurrence between a name in a target sentence and a distractor. Contrary to the propositional assumption, our results show that subjects are sensitive to second-order co-occurrence, hence favouring the associative account.
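Since the manipulation hinges on second-order co-occurrence, a small illustrative sketch (with invented sentences, not the experiment's materials) may help: two names that never appear together can still become related because their co-occurrence profiles overlap.

```python
# Illustrative only: first-order co-occurrence counts direct neighbours, while
# second-order co-occurrence compares the context profiles of two words that
# may never co-occur directly. The sentences below are invented for the example.
import numpy as np

sentences = [
    "anna chased the thief",
    "boris chased the thief",
    "anna questioned the witness",
]
vocab = sorted({w for s in sentences for w in s.split()})
idx = {w: i for i, w in enumerate(vocab)}

C = np.zeros((len(vocab), len(vocab)))   # word-by-word co-occurrence counts
for s in sentences:
    ws = s.split()
    for a in ws:
        for b in ws:
            if a != b:
                C[idx[a], idx[b]] += 1

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# "anna" and "boris" never co-occur (first-order = 0) but share contexts,
# so their profiles are similar (second-order co-occurrence is high).
print("first-order :", C[idx["anna"], idx["boris"]])
print("second-order:", round(cosine(C[idx["anna"]], C[idx["boris"]]), 2))
```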
Propositional versus Associative Views of Sentence Memory
Propositional accounts assume sentences are encoded in terms of a set of arguments bound to role-fillers in a predicate, but they never specify how the role representations form in the first place. Dennis (2005) shows an alternative way to capture role information based on simple associations derived directly from experience in the Syntagmatic-Paradigmatic (SP) model. We argue that the evidence for the propositional view is not well-founded and explore the possibility of a purely associative encoding of proposition-like information. We differentially manipulate overlap in target and distractor sentences, embedded in narratives, and directly pit the propositional account against the SP view. Our first experiment provides some evidence for an SP account; however, the second experiment supports the propositional view. Our final experiment provides results that are difficult to explain with either account. Overall, our results support the propositional view and show mixed evidence for the SP account.
Paradigmatic formation through context-mediation
Words that regularly fill the same sentential slots are said to be paradigmatically related. Paradigmatic relations may be retained through a direct association or a latent representation at encoding, or by reinstating context during retrieval. We paired proper names by embedding them into two instances of the same sentence frame, each in a separate list, yielding blocks of two study-cloze sessions. The pairing between proper names was fixed across twelve blocks. In the static condition, the same sentence frames were used across blocks, while in the dynamic condition sentence frames changed for each block. Interference should accrue in both conditions if paradigmatic relations are based on a direct association or on overlap in a latent representation; however, if paradigmatic relations are mediated by retrieved context, then changing the sentence frame should release interference. Our results are consistent with a context-mediation account of paradigmatic relations.
Constructing Word Meaning without Latent Representations using Spreading Activation
Models of word meaning, like the Topics model (Griffiths et al., 2007) and word2vec (Mikolov et al., 2013), condense word-by-context co-occurrence statistics to induce representations that organize words along semantically relevant dimensions (e.g., synonymy, antonymy, hyponymy). However, their reliance on latent representations leaves them vulnerable to interference and makes them slow learners. We show how it is possible to construct the meaning of words online during retrieval to avoid these limitations.
We implement our spreading activation account of word meaning in an associative net, a one-layer, highly recurrent network of associations called a Dynamic-Eigen-Net, which we developed to address the limitations of earlier variants of associative nets when scaling up to deal with unstructured input domains such as natural language text. After fixing the corpus across models, we show that spreading activation using a Dynamic-Eigen-Net outperforms the Topics model and word2vec in several cases when predicting human free associations and word similarity ratings. We argue in favour of the Dynamic-Eigen-Net as a fast learner that is not subject to catastrophic interference, and present it as an example of delegating the induction of latent relationships to process assumptions instead of assumptions about representation.
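As a rough sketch of the general idea, assuming a toy corpus, raw co-occurrence weights, and an ad hoc decay parameter rather than the published Dynamic-Eigen-Net equations, constructing meaning at retrieval can be illustrated as spreading activation through a one-layer recurrent net:

```python
# Hedged sketch of spreading activation (not the published Dynamic-Eigen-Net code):
# a word's meaning is constructed at retrieval by letting activation spread through
# raw co-occurrence associations, rather than read out of a compressed latent space.
import numpy as np

corpus = ["the doctor treated the patient",
          "the nurse treated the patient",
          "the judge questioned the witness"]
vocab = sorted({w for s in corpus for w in s.split()})
idx = {w: i for i, w in enumerate(vocab)}
n = len(vocab)

W = np.zeros((n, n))                      # associative weights = co-occurrence counts
for s in corpus:
    ws = s.split()
    for a in ws:
        for b in ws:
            if a != b:
                W[idx[a], idx[b]] += 1.0
W /= np.linalg.norm(W, 2)                 # keep the spectral radius at or below 1

def retrieve_meaning(word, steps=10, decay=0.5):
    """Spread activation from a cue; the settled activity vector is its 'meaning'."""
    cue = np.zeros(n)
    cue[idx[word]] = 1.0
    x = cue.copy()
    for _ in range(steps):
        x = decay * (W @ x) + cue         # recurrent feedback plus the persistent cue
    return x

m = retrieve_meaning("doctor")
print(sorted(vocab, key=lambda w: -m[idx[w]])[:5])   # words most activated by "doctor"
```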
Generalization at retrieval using associative networks with transient weight changes
Without having seen a bigram like “her buffalo” before, you can easily tell that it is grammatical because “buffalo” can be aligned with more common nouns like “cat” or “dog” that have been seen in contexts like “her cat” or “her dog” -- the novel bigram structurally aligns with representations in memory. We present a new class of associative nets we call Dynamic-Eigen-Nets, and provide simulations that show how they generalize to patterns that are structurally aligned with the training domain. Linear-Associative-Nets respond with the same pattern regardless of input, motivating the introduction of saturation to facilitate other response states. However, models using saturation cannot readily generalize to novel but structurally aligned patterns. Dynamic-Eigen-Nets address this problem by dynamically biasing the eigenspectrum towards external input using temporary weight changes. We demonstrate how a two-slot Dynamic-Eigen-Net trained on a text corpus provides an account of bigram judgement-of-grammaticality and lexical decision tasks. We end with a simulation showing how a Dynamic-Eigen-Net is sensitive to syntactic violations introduced in bigrams, even after the association encoding those bigrams is deleted from memory. We propose Dynamic-Eigen-Nets as associative nets that generalize at retrieval, instead of encoding, through recurrent feedback.
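A rough sketch of the transient-weight mechanism described above, under assumed parameters and with random stand-in patterns rather than the trained text-corpus weights, looks like this:

```python
# Sketch only (not the authors' released code): the probe is temporarily added to
# the weights as a short-term outer-product change, biasing the eigenspectrum
# toward the external input so that novel but structurally aligned patterns can settle.
import numpy as np

rng = np.random.default_rng(1)
n = 100
stored = rng.standard_normal((5, n))
stored /= np.linalg.norm(stored, axis=1, keepdims=True)

W = np.zeros((n, n))
for p in stored:                                  # long-term associative memory
    W += np.outer(p, p)

def respond(probe, stp=0.8, steps=20):
    """Settle the probe using saturation plus a temporary (short-term) weight change."""
    W_t = W + stp * np.outer(probe, probe)        # transient change, discarded afterwards
    x = probe.copy()
    for _ in range(steps):
        x = np.clip(W_t @ x, -1.0, 1.0)           # saturating update
    return x

novel = stored[0] + 0.5 * rng.standard_normal(n) / np.sqrt(n)   # aligned with stored structure
out = respond(novel)
print("alignment with stored pattern:",
      round(float(stored[0] @ out) / np.linalg.norm(out), 2))
```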
Beyond Pattern Completion with Short-Term Plasticity
In a Linear Associative Net (LAN), all input settles to a single pattern; therefore, Anderson, Silverstein, Ritz, and Jones (1977) introduced saturation to force the system to reach other steady-states in the Brain-State-in-a-Box (BSB). Unfortunately, the BSB is limited in its ability to generalize because its responses are restricted to previously stored patterns. We present simulations showing how a Dynamic-Eigen-Net (DEN), a LAN with Short-Term Plasticity (STP), overcomes the single-response limitation. Critically, a DEN also accommodates novel patterns by aligning them with encoded structure. We train a two-slot DEN on a text corpus, and provide an account of lexical decision and judgement-of-grammaticality (JOG) tasks showing how grammatical bi-grams yield stronger responses relative to ungrammatical bi-grams. Finally, we present a simulation showing how a DEN is sensitive to syntactic violations introduced in novel bi-grams. We propose DENs as associative nets with greater promise for generalization than the classic alternatives.
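To make the LAN/BSB contrast concrete, here is a toy illustration (with assumed, simplified parameters, not the paper's trained model) of why a linear net gives essentially the same response to any cue, while saturation lets different cues settle into different stable patterns:

```python
# Toy contrast: a Linear Associative Net drifts toward its dominant eigendirection
# whatever the cue, whereas adding saturation (BSB-style clipping) lets different
# cues settle into different stable patterns. Patterns and sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(2)
a = np.sign(rng.standard_normal(21))              # two stored bipolar patterns
b = np.sign(rng.standard_normal(21))
W = (np.outer(a, a) + np.outer(b, b)) / 21.0

def settle(cue, saturate, steps=50):
    x = cue.astype(float)
    for _ in range(steps):
        x = W @ x
        if saturate:
            x = np.clip(x, -1.0, 1.0)             # BSB saturation
    return x

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

cue_a = a + 0.3 * rng.standard_normal(21)
cue_b = b + 0.3 * rng.standard_normal(21)
for name, sat in [("LAN", False), ("BSB", True)]:
    xa, xb = settle(cue_a, sat), settle(cue_b, sat)
    print(f"{name}: |cos| between responses to the two cues = {abs(cos(xa, xb)):.2f}")
```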
Global semantic similarity effects in recognition memory: Insights from BEAGLE representations and the diffusion decision model
Recognition memory models posit that performance is impaired as the similarity between the probe cue and the contents of memory is increased (global similarity). Global similarity predictions have been commonly tested using category length designs, in which the number of items from a common taxonomic or associative category is manipulated. Prior work has demonstrated that increases in the length of associative categories show clear detriments on performance, but that result is found only inconsistently for taxonomic categories. In this work, we explored global similarity predictions using representations from the BEAGLE model (Jones & Mewhort, 2007). BEAGLE’s two types of word representations, item and order vectors, exhibit similarity relations that resemble relations among associative and taxonomic category members, respectively. Global similarity among item and order vectors was regressed onto drift rates in the diffusion decision model (DDM: Ratcliff, 1978), which simultaneously accounts for both response times and accuracy. We implemented this model in a hierarchical Bayesian framework across seven datasets with lists composed of unrelated words. Results indicated clear deficits due to global similarity among item vectors, suggesting that lists of unrelated words exhibit semantic structure that impairs performance. However, there were relatively small influences of global similarity among the order vectors. These results are consistent with prior work suggesting associative similarity causes stronger performance impairments than taxonomic similarity.
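A schematic sketch of the linking idea, simplified to a single trial with random stand-in vectors and assumed regression weights rather than the paper's hierarchical Bayesian fit, is:

```python
# Schematic only: global similarity between a probe and the studied items' vectors
# is mapped onto a drift rate, and a single diffusion-decision trial is simulated.
# Vectors here are random stand-ins for BEAGLE item vectors; b0, b1 are assumed.
import numpy as np

rng = np.random.default_rng(3)
dim, n_study = 256, 12
study = rng.standard_normal((n_study, dim))            # stand-ins for item vectors
study /= np.linalg.norm(study, axis=1, keepdims=True)

def global_similarity(probe):
    """Summed cosine similarity of the probe to everything on the study list."""
    p = probe / np.linalg.norm(probe)
    return float(np.sum(study @ p))

def ddm_trial(drift, boundary=1.0, dt=0.001, noise=1.0):
    """One diffusion-decision trial: accumulate noisy evidence until a boundary is hit."""
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ("old" if x > 0 else "new"), round(t, 3)

probe = study[0] + 0.5 * rng.standard_normal(dim) / np.sqrt(dim)   # an "old-ish" probe
b0, b1 = 0.2, 1.5                                       # assumed linear link to drift
drift = b0 + b1 * global_similarity(probe)
print(ddm_trial(drift))                                 # (choice, decision time in s)
```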