11 research outputs found

    The storage of semantic memories in the cortex: a computational study

    The main object of this thesis is the design of structured distributed memories, for the purpose of studying their storage and retrieval properties in large-scale cortical autoassociative networks. To this end, an autoassociative network of Potts units, coupled via tensor connections, has been proposed and analyzed as an effective model of an extensive cortical network with distinct short- and long-range synaptic connections. Recently, we have clarified in what sense it can be regarded as an effective model. While the fully connected (FC) and the very sparsely connected, that is, highly diluted (HD) limits of the model have been thoroughly analyzed, the realistic case of intermediate partial connectivity has simply been assumed to interpolate between the FC and HD cases. In this thesis, we first study the storage capacity of the Potts network with such intermediate connectivity. We corroborate the outcome of the analysis by showing that the resulting mean-field equations are consistent with the FC and HD equations in the appropriate limits. The mean-field equations are derived only for randomly diluted connectivity (RD). Through simulations, we also study symmetric dilution (SD) and state-dependent random dilution (SDRD). We find that the Potts network has a higher capacity for symmetric than for random dilution. We then turn to the core question: how can a model originally conceived for the storage of p unrelated patterns of activity be used to study semantic memory, which is organized in terms of the relations between the facts and attributes of real-world knowledge? To proceed, we first formulate a mathematical model for generating patterns with correlations, as an extension of a hierarchical procedure for generating ultrametrically organized patterns.
The model ascribes the correlations between patterns to the influence of underlying "factors": if many factors act with comparable strength, their influences balance out and correlations are low; whereas if a few factors dominate, which in the model occurs for increasing values of a control parameter ζ, correlations between memory patterns can become much stronger. We show that the extension allows for correlations between patterns that are neither trivial (as in the random case) nor a plain tree (as in the ultrametric case), but that are highly sensitive to the values of the correlation parameters we define. Next, we study the storage capacity of the Potts network when the patterns are correlated by way of our algorithm. We show that fewer correlated patterns can be stored and retrieved than random ones, and that the higher the degree of correlation, the lower the capacity. We find that the mean-field equations yielding the storage capacity differ from those obtained with uncorrelated patterns only through an additional term in the noise, proportional to the number of learned patterns p and to the difference between the average correlation among correlated patterns and among independently generated patterns of the same sparsity. Of particular interest is the role played by the parameter we have introduced, ζ, which controls the strength of the influences of different factors (the "parents") in generating the memory patterns (the "children"). In particular, we find that for high values of ζ, such that only a handful of parents are effective, the network exhibits correlated retrieval: though unable to retrieve the pattern cued, the network settles into a configuration of high overlap with another pattern.
This behavior of the network can be interpreted as reflecting the semantic structure of the correlations: even after capacity collapse, what the network can still do is recognize the strongest features associated with the pattern. This observation is quantified using the mutual information between the pattern cued and the configuration the network settles into after retrieval dynamics. This information is found to jump abruptly from zero to a non-zero value as the parameter ζ is increased, akin to a phase transition. Two alternative phases are then identified: for ζ < ζ_c, retrieval beyond capacity fails completely, while for ζ > ζ_c, memories form clusters, such that while the specifics of the cued pattern cannot be retrieved, some of the structure informing the cluster of memories still can. In a final short chapter, we attempt to understand the implications of having stored correlated memories for latching dynamics, the spontaneous behavior, beyond the simple cued-retrieval paradigm, that has been proposed to be an emergent property of large cortical networks. Progress made in this direction, studying the Potts network, has so far focused on uncorrelated memories. Introducing correlations, we find a rich phase space of behaviors, from sequential retrieval of memories, to parallel retrieval of clusters of highly correlated memories, to oscillations, depending on the various correlation parameters. The parameters of our algorithm may turn out to act as critical control parameters, corresponding to the statistical features of human semantic memory that are most important in determining the dynamics of our trains of thought.
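The factor-based ("parents" and "children") generation procedure described above can be sketched in code. Everything below is an illustrative assumption, not the thesis's exact algorithm: the exponential fall-off of parent strengths with rank, the field rule, and all names are our own choices, meant only to show how a single parameter ζ (here `zeta`) moves generation from many balanced parents to a few dominant ones.

```python
import numpy as np

def generate_correlated_patterns(p, N, S, n_parents, zeta, a=0.25, rng=None):
    """Generate p correlated Potts patterns over N units with S active
    states (0 = quiet) from n_parents random "parent" factors.
    zeta skews parent strengths: zeta = 0 gives balanced parents and
    weak correlations; large zeta lets a few parents dominate."""
    rng = np.random.default_rng(rng)
    # each parent suggests one active state (1..S) for every unit
    parents = rng.integers(1, S + 1, size=(n_parents, N))
    # parent strengths fall off with rank, the more steeply the larger zeta
    strengths = np.exp(-zeta * np.arange(n_parents))
    children = np.zeros((p, N), dtype=int)
    n_active = int(a * N)  # sparsity: fraction a of units active per pattern
    for mu in range(p):
        # each child couples to the parents with random, rank-skewed weights
        w = strengths * rng.random(n_parents)
        fields = np.zeros((N, S))
        for k in range(n_parents):
            fields[np.arange(N), parents[k] - 1] += w[k]
        best = fields.max(axis=1)
        # activate the n_active units receiving the strongest fields
        active = np.argsort(-best)[:n_active]
        children[mu, active] = fields[active].argmax(axis=1) + 1
    return children
```

With zeta = 0 the average overlap between distinct children stays low; raising zeta makes the children cluster around the dominant parents, which is the regime where the thesis finds correlated retrieval.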

    The capacity for correlated semantic memories in the cortex

    A statistical analysis of semantic memory should reflect the complex, multifactorial structure of the relations among its items. Still, a dominant paradigm in the study of semantic memory has been the idea that the mental representation of concepts is structured along a simple branching tree spanned by superordinate and subordinate categories. We propose a generative model of item representation with correlations that overcomes the limitations of a tree structure. The items are generated through "factors" that represent semantic features or real-world attributes. The correlation between items has its source in the extent to which items share such factors and the strength of such factors: if many factors are balanced, correlations are overall low; whereas if a few factors dominate, they become strong. Our model allows for correlations that are neither trivial nor hierarchical, but may reproduce the general spectrum of correlations present in a dataset of nouns. We find that such correlations reduce the storage capacity of a Potts network to a limited extent, so that the number of concepts that can be stored and retrieved in a large, human-scale cortical network may still be of order 10^7, as originally estimated without correlations. When this storage capacity is exceeded, however, retrieval fails completely only for balanced factors; above a critical degree of imbalance, a phase transition leads to a regime where the network still extracts considerable information about the cued item, even if not recovering its detailed representation: partial categorization seems to emerge spontaneously as a consequence of the dominance of particular factors, rather than being imposed ad hoc. We argue this to be a relevant model of semantic memory resilience in Tulving's remember/know paradigms. © 2018 by the authors.
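The information "extracted about the cued item" invoked above can be estimated directly from the joint histogram of unit states in the cued pattern and in the configuration the network reaches. A minimal sketch, assuming patterns are arrays of Potts states 0..S with 0 the quiet state; the function name and the plug-in estimator (no bias correction) are our choices, not the paper's:

```python
import numpy as np

def pattern_mutual_information(cued, retrieved, S):
    """Plug-in estimate (in bits) of the mutual information between the
    unit states of the cued pattern and those of the configuration the
    network settles into, from the empirical joint distribution across
    units. States run over 0..S, with 0 the quiet state."""
    joint = np.zeros((S + 1, S + 1))
    for a, b in zip(cued, retrieved):
        joint[a, b] += 1.0
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)   # marginal of the cued states
    py = joint.sum(axis=0, keepdims=True)   # marginal of the retrieved states
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())
```

Perfect retrieval recovers the full entropy of the cued pattern, while a chance-level configuration gives values near zero, so an abrupt rise of this quantity with the degree of factor imbalance is the phase-transition signature described above.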

    Reducing a cortical network to a Potts model yields storage capacity estimates

    An autoassociative network of Potts units, coupled via tensor connections, has been proposed and analysed as an effective model of an extensive cortical network with distinct short- and long-range synaptic connections, but it has not been clarified in what sense it can be regarded as an effective model. We draw here the correspondence between the two, which indicates the need to introduce a local feedback term in the reduced model, i.e., in the Potts network. An effective model allows the study of phase transitions. As an example, we study the storage capacity of the Potts network with this additional term, the local feedback w, which helps drive the activity of the network towards one of the stored patterns. The storage capacity calculation, performed using replica tools, is limited to fully connected networks, for which a Hamiltonian can be defined. To extend the results to the case of intermediate partial connectivity, we also derive the self-consistent signal-to-noise analysis for the Potts network; finally, we discuss implications for semantic memory in humans.
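A minimal, fully connected sketch can make the role of a local feedback term concrete. We simplify to S Potts states with no quiet state, a standard Hebbian tensor coupling, and zero-temperature asynchronous updates in which w favours the unit's current state; this is a Potts-Hopfield toy under our own simplifying assumptions, not the paper's full model:

```python
import numpy as np

def train_potts(patterns, S):
    """Hebbian tensor couplings J[i, k, j, l] for a fully connected
    Potts network storing the given patterns (ints in 0..S-1)."""
    p, N = patterns.shape
    # V[mu, i, k] = delta(xi_i^mu, k) - 1/S
    V = (patterns[:, :, None] == np.arange(S)).astype(float) - 1.0 / S
    J = np.einsum('mik,mjl->ikjl', V, V) / N
    J[np.arange(N), :, np.arange(N), :] = 0.0   # no self-couplings
    return J

def retrieve(J, sigma, w=0.1, sweeps=5):
    """Zero-temperature asynchronous dynamics; the local feedback w adds
    a bonus to the unit's current state (a sketch of the feedback term,
    not the published formulation)."""
    N, S = J.shape[0], J.shape[1]
    sigma = sigma.copy()
    onehot = np.eye(S)
    for _ in range(sweeps):
        for i in np.random.permutation(N):
            # field on unit i for each of its S states
            h = np.einsum('kjl,jl->k', J[i], onehot[sigma])
            h[sigma[i]] += w                     # local feedback
            sigma[i] = h.argmax()
    return sigma

def overlap(sigma, xi, S):
    """Overlap with a stored pattern, 1 for perfect retrieval, ~0 at chance."""
    return ((sigma == xi).mean() - 1.0 / S) / (1.0 - 1.0 / S)
```

At low load, cueing a corrupted version of a stored pattern drives the network back to an overlap near 1; the feedback w stabilizes the state a unit is already in.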

    Life on the edge: Latching dynamics in a Potts neural network

    We study latching dynamics in the adaptive Potts network, through numerical simulations with randomly and also weakly correlated patterns, and we focus on comparing its slowly and fast adapting regimes. A measure, Q, is used to quantify the quality of latching in the phase space spanned by the number of Potts states S, the number of connections per Potts unit C and the number of stored memory patterns p. We find narrow regions, or bands, in phase space where distinct pattern retrieval and duration of latching combine to yield the highest values of Q. The bands are confined by the storage capacity curve, for large p, and by the onset of finite latching, for low p. Inside the band, in the slowly adapting regime, we observe complex structured dynamics, with transitions at high crossover between correlated memory patterns; away from the band, latching transitions lose complexity in different ways: below, they are clear-cut but last so few steps as to span a transition matrix between states with few asymmetrical entries and limited entropy; above, they tend to become random, with large entropy and bi-directional transition frequencies, indistinguishable from noise. Extrapolating from the simulations, the band appears to scale almost quadratically in the p-S plane, and sublinearly in p-C. In the fast adapting regime, the band scales similarly, and it can be made even wider and more robust, but transitions between anti-correlated patterns dominate latching dynamics. This suggests that slow and fast adaptation have to be integrated in a scenario for viable latching in a cortical system. The results for the slowly adapting regime, obtained with randomly correlated patterns, remain valid also for the case with correlated patterns, with just a simple shift in phase space. © 2017 by the authors.
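The transition-matrix statistics used above to tell structured latching from noise can be illustrated on a toy sequence. This is a sketch with our own simplified definitions (entropy of the raw transition distribution and a normalized asymmetry index), not the paper's exact measures:

```python
import numpy as np

def transition_stats(seq, p):
    """From a latching sequence of retrieved pattern indices (0..p-1),
    build the matrix of transition counts and return (entropy_bits,
    asymmetry). Asymmetry is 0 when every transition occurs equally
    often in both directions and 1 when all are one-directional."""
    T = np.zeros((p, p))
    for a, b in zip(seq[:-1], seq[1:]):
        if a != b:                       # ignore self-persistence
            T[a, b] += 1.0
    total = T.sum()
    entropy = 0.0
    if total:
        probs = T.ravel() / total
        nz = probs[probs > 0]
        entropy = float(-(nz * np.log2(nz)).sum())
    denom = (T + T.T).sum()
    asymmetry = float(np.abs(T - T.T).sum() / denom) if denom else 0.0
    return entropy, asymmetry
```

In these terms, the clear-cut regime below the band shows few, strongly asymmetric entries with limited entropy, while the noisy regime above shows large entropy with asymmetry near zero.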

    In poetry, if meter has to help memory, it takes its time [version 2; peer review: 2 approved, 2 not approved]

    To test the idea that poetic meter emerged as a cognitive schema to aid verbal memory, we focused on classical Italian poetry and on three components of meter: rhyme, accent, and verse length. Meaningless poems were generated by introducing prosody-invariant non-words into passages from Dante’s Divina Commedia and Ariosto’s Orlando Furioso. We then ablated rhymes, modified accent patterns, or altered the number of syllables. The resulting versions of each non-poem were presented to Italian native speakers, who were then asked to retrieve three target non-words. Surprisingly, we found that the integrity of Dante’s meter has no significant effect on memory performance. With Ariosto, instead, removing each component downgrades memory proportionally to its contribution to perceived metric plausibility. Counterintuitively, the fully metric versions required longer reaction times, implying that activating metric schemata involves a cognitive cost. Within schema theories, this finding provides evidence for high-level interactions between procedural and episodic memory

    Latching dynamics as a basis for short-term recall

    No full text
    We discuss simple models for the transient storage in short-term memory of cortical patterns of activity, all based on the notion that their recall exploits the natural tendency of the cortex to hop from state to state: latching dynamics. We show that in one such model, and in simple spatial memory tasks we have given to human subjects, short-term memory can be limited to a similarly low capacity by interference effects, in tasks terminated by errors, and can exhibit similar sublinear scaling, when errors are overlooked. The same mechanism can drive serial recall if combined with weak order-encoding plasticity. Finally, even when storing randomly correlated patterns of activity, the network demonstrates correlation-driven latching waves, which are reflected at the outer extremes of pattern space.

    Disrupting morphosyntactic and lexical semantic processing has opposite effects on the sample entropy of neural signals

    No full text
    Converging evidence in neuroscience suggests that syntax and semantics are dissociable in brain space and time. However, it is possible that partly disjoint cortical networks, operating in successive time frames, still perform similar types of neural computations. To test this alternative hypothesis, we collected EEG data while participants read sentences containing lexical semantic or morphosyntactic anomalies, resulting in N400 and P600 effects, respectively. Next, we reconstructed phase space trajectories from the EEG time series, and we measured the complexity of the resulting dynamical orbits using sample entropy, an index of the rate at which the system generates or loses information over time. Disrupting morphosyntactic or lexical semantic processing had opposite effects on sample entropy: it increased in the N400 window for semantic anomalies, and it decreased in the P600 window for morphosyntactic anomalies. These findings point to a fundamental divergence in the neural computations supporting meaning and grammar in language. © 2015 Elsevier B.V. All rights reserved.
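Sample entropy, the complexity index used here, is a standard algorithm: the negative log of the conditional probability that two subsequences matching for m points (within tolerance r, Chebyshev distance, self-matches excluded) also match for m + 1 points. A minimal direct O(n²) implementation, with the common defaults m = 2 and r = 0.2 times the series' standard deviation:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r) of a 1-D series. Lower values mean more regular,
    more predictable dynamics; higher values mean the orbit generates
    more new information per unit time."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * x.std()
    def matched_pairs(length):
        # embed the series in `length` dimensions; use n - m templates
        # for both lengths so the two counts are comparable
        emb = np.array([x[i:i + length] for i in range(n - m)])
        count = 0
        for i in range(len(emb) - 1):
            d = np.abs(emb[i + 1:] - emb[i]).max(axis=1)  # Chebyshev
            count += int((d <= r).sum())
        return count
    B = matched_pairs(m)        # matches of length m
    A = matched_pairs(m + 1)    # of those, matches extending to m + 1
    return -np.log(A / B) if A and B else float('inf')
```

A perfectly regular series scores near zero, white noise scores high, which is what makes opposite shifts of this index for the N400 and P600 windows interpretable as divergent computations.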