
    Paradigmatic formation through context-mediation


    Generalization at retrieval using associative networks with transient weight changes

    Without having seen a bigram like “her buffalo” before, you can easily tell that it is grammatical, because “buffalo” can be aligned with more common nouns like “cat” or “dog” that have been seen in contexts like “her cat” or “her dog”: the novel bigram structurally aligns with representations in memory. We present a new class of associative nets we call Dynamic-Eigen-Nets, and provide simulations that show how they generalize to patterns that are structurally aligned with the training domain. Linear-Associative-Nets respond with the same pattern regardless of input, motivating the introduction of saturation to facilitate other response states. However, models using saturation cannot readily generalize to novel, but structurally aligned, patterns. Dynamic-Eigen-Nets address this problem by dynamically biasing the eigenspectrum towards the external input using temporary weight changes. We demonstrate how a two-slot Dynamic-Eigen-Net trained on a text corpus provides an account of bigram judgement-of-grammaticality and lexical decision tasks. We end with a simulation showing how a Dynamic-Eigen-Net is sensitive to syntactic violations introduced in bigrams, even after the association encoding those bigrams is deleted from memory. We propose Dynamic-Eigen-Nets as associative nets that generalize at retrieval, rather than at encoding, through recurrent feedback.
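    To make the retrieval mechanism concrete, here is a minimal sketch (not the authors' released code) of a linear associative memory whose weights are transiently biased toward the current probe, so that recurrent feedback settles on whatever stored structure the probe aligns with, rather than on a single dominant eigenvector. The dimensionality, number of stored items, and the bias strength `lam` are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_items = 64, 5

# Store unit-length random "word" patterns with Hebbian auto-association.
items = rng.standard_normal((n_items, dim))
items /= np.linalg.norm(items, axis=1, keepdims=True)
W = sum(np.outer(p, p) for p in items)        # long-term associative weights

def retrieve(probe, lam=1.0, n_steps=20):
    """Recurrent retrieval with a transient, probe-biased weight change."""
    # Temporary weights: bias the eigenspectrum toward the external input.
    W_t = W + lam * np.outer(probe, probe)
    x = probe / np.linalg.norm(probe)
    for _ in range(n_steps):                  # power-iteration-style feedback
        x = W_t @ x
        x /= np.linalg.norm(x)                # keep the echo bounded
    return x

# A noisy version of a stored pattern is cleaned up toward that pattern.
echo = retrieve(items[0] + 0.1 * rng.standard_normal(dim))
print("similarity to stored item:", round(float(items[0] @ echo), 3))
```

    Without the `lam * np.outer(probe, probe)` term, iterating `W @ x` drifts toward the dominant eigenvector of `W` for almost any probe, which is the pathology the abstract attributes to Linear-Associative-Nets; the temporary weight change makes the probe's own direction dominant during retrieval.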

    Global semantic similarity effects in recognition memory: Insights from BEAGLE representations and the diffusion decision model

    Recognition memory models posit that performance is impaired as the similarity between the probe cue and the contents of memory increases (global similarity). Global similarity predictions have commonly been tested using category-length designs, in which the number of studied items from a common taxonomic or associative category is manipulated. Prior work has demonstrated that increases in the length of associative categories clearly impair performance, but that result is found only inconsistently for taxonomic categories. In this work, we explored global similarity predictions using representations from the BEAGLE model (Jones & Mewhort, 2007). BEAGLE’s two types of word representations, item and order vectors, exhibit similarity relations that resemble the relations among associative and taxonomic category members, respectively. Global similarity among item and order vectors was regressed onto drift rates in the diffusion decision model (DDM; Ratcliff, 1978), which simultaneously accounts for both response times and accuracy. We implemented this model in a hierarchical Bayesian framework across seven datasets with lists composed of unrelated words. Results indicated clear deficits due to global similarity among item vectors, suggesting that even lists of unrelated words exhibit semantic structure that impairs performance. However, there were relatively small influences of global similarity among the order vectors. These results are consistent with prior work suggesting that associative similarity causes stronger performance impairments than taxonomic similarity.
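    As a concrete illustration of the modeling pipeline, the sketch below (not the paper's hierarchical Bayesian implementation) computes global similarity as summed cosine similarity between a probe and the study-list vectors, maps it linearly onto a DDM drift rate, and simulates single trials with Euler-Maruyama steps. The random stand-in vectors and the regression and DDM parameters (`b0`, `b1`, boundary, non-decision time) are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, list_len = 256, 24

# Stand-ins for BEAGLE item vectors of the studied words (unit length).
study_list = rng.standard_normal((list_len, dim))
study_list /= np.linalg.norm(study_list, axis=1, keepdims=True)

def global_similarity(probe):
    """Summed cosine similarity between the probe and every studied item."""
    probe = probe / np.linalg.norm(probe)
    return float((study_list @ probe).sum())

def simulate_ddm(drift, boundary=1.0, ndt=0.3, dt=0.001, noise=1.0):
    """One trial: Euler-Maruyama accumulation until evidence crosses
    +/- boundary. Returns (choice, RT); the upper boundary means 'old'."""
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ("old" if x > 0 else "new"), ndt + t

# Drift toward "old" as a linear function of global similarity: a lure starts
# with negative drift, and higher similarity to the study list pushes evidence
# toward "old", slowing correct rejections and producing false alarms.
b0, b1 = -1.5, 0.25

unrelated_lure = rng.standard_normal(dim)
related_lure = study_list[:4].mean(axis=0) + 0.3 * rng.standard_normal(dim)

for name, lure in [("unrelated", unrelated_lure), ("related", related_lure)]:
    gs = global_similarity(lure)
    choice, rt = simulate_ddm(b0 + b1 * gs)
    print(f"{name} lure: global similarity = {gs:.2f} -> {choice}, RT = {rt:.3f}s")
```

    With these illustrative settings, a lure that overlaps the studied items accrues more "old" evidence than an unrelated lure, which is the qualitative impairment the drift-rate regression is meant to capture.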