
    From the immune system to neural networks

    Storing memory of molecular encounters is vital for an effective response to recurring external stimuli. Interestingly, memory strategies vary among different biological processes. These strategies range from networks that process input signals and retrieve an associative memory to specialized receptors that bind only to related stimuli. The adaptive immune system uses such a specialized strategy and can provide specific responses against many pathogens. During its response, the immune system retains some cells as memory to act more quickly when reinfections with the same or evolved pathogens occur. However, differentiation of memory cells remains one of the least understood cell fate decisions in immunology. The ability of immune memory to recognize evolved pathogens makes it an ideal starting point to study learning and memory strategies for evolving environments, a topic with applications far beyond immunology. In this thesis, I present three projects that study different aspects of memory strategies for evolving stimuli. We find that specialized memory strategies can follow the evolution of stimuli and reliably recover memory of previous encounters. In contrast, fully connected networks, such as Hopfield networks, fail to reliably recover the memory of evolving stimuli. Thus, pathogen evolution might be the reason that the immune system produces specialized memories. We further find that specialized memory receptors should trade off their maximal binding for cross-reactivity in order to bind to evolved targets. To produce such receptors, the differentiation into memory cells in the immune system should be highly regulated. Finally, we study update strategies of memory repertoires using an energy-based model. We find that repertoires should have a moderate risk tolerance to fluctuations in performance in order to adapt to the evolution of targets. Nevertheless, these systems can be very efficient in distinguishing between evolved versions of stored targets and novel random stimuli.
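
    To make the contrast with associative networks concrete, here is a minimal sketch, not the thesis's model: a standard Hopfield network with Hebbian storage, cued with progressively mutated ("evolved") versions of a stored pattern. All parameters (network size, pattern count, mutation rates) are illustrative assumptions; the point is only that retrieval overlap degrades as the cue drifts away from the stored memory.

```python
import numpy as np

rng = np.random.default_rng(0)

N, P = 200, 10                          # neurons, stored patterns (illustrative sizes)
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian storage: W = (1/N) sum_mu xi^mu (xi^mu)^T, without self-coupling
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0.0)

def retrieve(state, steps=20):
    """Synchronous sign updates until a fixed point (or the step limit)."""
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state

def evolve(pattern, rate):
    """Flip a fraction `rate` of the bits: a crude stand-in for target evolution."""
    flips = rng.random(N) < rate
    return np.where(flips, -pattern, pattern)

for rate in (0.0, 0.1, 0.3, 0.5):
    cue = evolve(patterns[0], rate)
    overlap = retrieve(cue) @ patterns[0] / N   # 1.0 = perfect recovery
    print(f"mutation rate {rate:.1f}: overlap with original memory {overlap:+.2f}")
```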

    The storage of semantic memories in the cortex: a computational study

    The main object of this thesis is the design of structured distributed memories for the purpose of studying their storage and retrieval properties in large-scale cortical auto-associative networks. For this, an autoassociative network of Potts units, coupled via tensor connections, has been proposed and analyzed as an effective model of an extensive cortical network with distinct short- and long-range synaptic connections. Recently, we have clarified in what sense it can be regarded as an effective model. While the fully-connected (FC) and the very sparsely connected, that is, highly diluted (HD) limits of the model have been thoroughly analyzed, the realistic case of intermediate partial connectivity has simply been assumed to interpolate between the FC and HD cases. In this thesis, we first study the storage capacity of the Potts network with such intermediate connectivity. We corroborate the outcome of the analysis by showing that the resulting mean-field equations are consistent with the FC and HD equations in the appropriate limits. The mean-field equations are derived only for randomly diluted connectivity (RD). Through simulations, we also study symmetric dilution (SD) and state-dependent random dilution (SDRD). We find that the Potts network has a higher capacity for symmetric than for random dilution. We then turn to the core question: how to use a model originally conceived for the storage of p unrelated patterns of activity in order to study semantic memory, which is organized in terms of the relations between the facts and attributes of real-world knowledge. To proceed, we first formulate a mathematical model for generating patterns with correlations, as an extension of a hierarchical procedure for generating ultrametrically organized patterns. The model ascribes the correlations between patterns to the influence of underlying "factors": if many factors act with comparable strength, their influences balance out and correlations are low, whereas if a few factors dominate, which in the model occurs for increasing values of a control parameter ζ, correlations between memory patterns can become much stronger. We show that the extension allows for correlations between patterns that are neither trivial (as in the random case) nor a plain tree (as in the ultrametric case), but that are highly sensitive to the values of the correlation parameters we define. Next, we study the storage capacity of the Potts network when the patterns are correlated by way of our algorithm. We show that fewer correlated patterns can be stored and retrieved than random ones, and that the higher the degree of correlation, the lower the capacity. We find that the mean-field equations yielding the storage capacity differ from those obtained with uncorrelated patterns only through an additional term in the noise, proportional to the number of learned patterns p and to the difference between the average correlation among correlated patterns and that among independently generated patterns of the same sparsity. Of particular interest is the role played by the parameter we have introduced, ζ, which controls the strength of the influences of different factors (the "parents") in generating the memory patterns (the "children"). In particular, we find that for high values of ζ, so that only a handful of parents are effective, the network exhibits correlated retrieval, in which the network, though unable to retrieve the cued pattern, settles into a configuration of high overlap with another pattern.
This behavior of the network can be interpreted as reflecting the semantic structure of the correlations: even after capacity collapse, what the network can still do is recognize the strongest features associated with the pattern. This observation is quantified using the mutual information between the cued pattern and the configuration the network settles into after the retrieval dynamics. This information is found to jump abruptly from zero to a non-zero value as the parameter ζ is increased, akin to a phase transition. Two alternative phases are then identified, below and above a critical value ζc: above ζc, memories form clusters, such that while the specifics of the cued pattern cannot be retrieved, some of the structure informing the cluster of memories can still be retrieved. In a final short chapter, we attempt to understand the implications of having stored correlated memories for latching dynamics, the spontaneous behavior which has been proposed to be an emergent property, beyond the simple cued-retrieval paradigm, of large cortical networks. Progress made in this direction, studying the Potts network, has so far focused on uncorrelated memories. Introducing correlations, we find a rich phase space of behaviors, from sequential retrieval of memories, to parallel retrieval of clusters of highly correlated memories, to oscillations, depending on the various correlation parameters. The parameters of our algorithm may be found to emerge as critical control parameters, corresponding to the statistical features of human semantic memory most important in determining the dynamics of our trains of thought.
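
    As a rough illustration of the factor ("parent") mechanism described above, the sketch below generates binary child patterns from weighted parents, with a concentration parameter standing in for ζ. The actual algorithm in the thesis operates on sparse Potts patterns, so everything here (geometric decay of factor strengths, the noise level, binary units) is an assumption made only to show that concentrating the factor strengths raises pairwise correlations among the children.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P, F = 500, 40, 20              # units per pattern, child patterns, parent factors
parents = rng.choice([-1, 1], size=(F, N))

def make_children(zeta):
    # Factor strengths decay geometrically with rank: large zeta -> few dominant parents
    strengths = np.exp(-zeta * np.arange(F))
    strengths /= strengths.sum()
    children = []
    for _ in range(P):
        signs = rng.choice([-1, 1], size=(F, 1))   # each child samples its own alignment
        field = strengths @ (signs * parents)       # weighted vote of the parents
        children.append(np.sign(field + 0.05 * rng.standard_normal(N)))
    return np.array(children)

for zeta in (0.0, 0.5, 2.0):
    children = make_children(zeta)
    C = children @ children.T / N                   # pairwise overlaps
    off = np.abs(C[~np.eye(P, dtype=bool)])
    print(f"zeta={zeta:.1f}: mean |pairwise correlation| = {off.mean():.3f}")
```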

    Investigating the storage capacity of a network with cell assemblies

    Cell assemblies are co-operating groups of neurons believed to exist in the brain. Their existence was proposed by the neuropsychologist D. O. Hebb, who also formulated a mechanism by which they could form, now known as Hebbian learning. Evidence for the existence of Hebbian learning and cell assemblies in the brain is accumulating as investigation tools improve. Researchers have also simulated cell assemblies as neural networks in computers. This thesis describes simulations of networks of cell assemblies. The feasibility of simulated cell assemblies that possess all the predicted properties of biological cell assemblies is established. Cell assemblies can be coupled together with weighted connections to form hierarchies in which a group of basic assemblies, termed primitives, are connected in such a way that they form a compound cell assembly. The component assemblies of these hierarchies can be ignited independently, i.e. they are activated due to signals being passed entirely within the network, but if a sufficient number of them are activated, they co-operate to ignite the remaining primitives in the compound assembly. Various experiments are described in which networks of simulated cell assemblies are subject to external activation, with cells in those assemblies stimulated artificially to a high level. These cells then fire, i.e. produce a spike of activity analogous to the spiking of biological neurons, and in this way pass their activity to other cells. Connections are established, by learning in some experiments and set artificially in others, between cells within the same primitive and in different ones, and these connections allow activity to pass from one primitive to another. In this way, activating one or more primitives may cause others to ignite. Experiments are described in which spontaneous activation of cells aids recruitment of uncommitted cells to a neighbouring assembly. The strong relationship between cell assemblies and Hopfield nets is described. A network of simulated cells can support different numbers of assemblies depending on the complexity of those assemblies. Assemblies are classified in terms of how many primitives are present in each compound assembly and the minimum number needed to complete it: a 2-3 assembly contains 3 primitives, any 2 of which will complete it. A network of N cells can hold on the order of N 2-3 assemblies, and an architecture is proposed that contains O(N²) 3-4 assemblies. Experiments are described showing that the number of connections emanating from each cell must be scaled up linearly as the number of primitives in any network increases, in order to maintain the same mean number of connections between each primitive. Restricting each cell to a maximum number of connections leads to severe loss of performance as the size of the network increases. It is shown that the architecture can be duplicated with Hopfield nets, but that there are severe restrictions on the carrying capacity of either a hierarchy of cell assemblies or a Hopfield net storing 3-4 patterns, and that the promise of N² patterns is largely illusory. When the number of connections from each cell is fixed as the number of primitives is increased, only O(N) cell assemblies can be stored.
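
    The 2-3 completion property lends itself to a toy illustration. The following sketch is not the thesis's simulator: three primitives of binary threshold cells are wired with a hand-set threshold (an assumed parameterization) so that any two ignited primitives drive the third above threshold, while one alone cannot.

```python
# Toy 2-3 compound assembly: 3 primitives of binary threshold cells,
# wired so that any 2 ignited primitives complete the compound assembly,
# but a single primitive cannot. (Illustrative parameters, not the thesis's.)
CELLS = 30                        # cells per primitive
W_BETWEEN = 1.0                   # weight of each between-primitive connection
THETA = 1.5 * CELLS * W_BETWEEN   # threshold: above 1 primitive's input, below 2

def step(active):
    """One update: a primitive ignites if total input from the others exceeds THETA."""
    new = list(active)
    for i in range(3):
        drive = sum(CELLS * W_BETWEEN for j in range(3) if j != i and active[j])
        new[i] = active[i] or (drive > THETA)
    return new

for initially_on in ([0], [0, 1]):
    active = [i in initially_on for i in range(3)]
    active = step(active)
    print(f"externally ignited {initially_on}: compound assembly complete -> {all(active)}")
```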

    An evolutionary basic design tool

    Ph.D. thesis, Bilkent University, Ankara, 2010. As a creative act, design aims at achieving innovative solutions that fulfill the requirements given in the problem definition. In recent years, computational methods have begun to be used not only in design presentation but also in solution generation. This study proposes a design methodology for a particular basic design problem on the concept of emphasis. The methodology generates solution alternatives by carrying out the genetic operations used in evolutionary design. The generated alternatives are evaluated by an objective function comprising an artificial neural network. The creative potential of the methodology is appraised by comparing the outputs of test runs with student works for the same design task, drawing on three groups of students with diverse backgrounds.
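
    A minimal sketch of the kind of loop the abstract describes, genetic operations scored by a neural-network objective, might look as follows. The genome encoding, the selection scheme, and the random stand-in evaluator are all assumptions; in the thesis the evaluator is a trained network and the genome encodes a basic-design composition.

```python
import numpy as np

rng = np.random.default_rng(2)

GENES, POP, GENERATIONS = 16, 30, 40    # all sizes are illustrative

# Stand-in evaluator: a small fixed random network scoring each candidate.
# In the thesis the evaluator is a trained neural network; random weights
# are used here purely so the loop runs end to end.
W1 = rng.standard_normal((GENES, 8))
w2 = rng.standard_normal(8)

def fitness(genome):
    return float(np.tanh(genome @ W1).clip(0) @ w2)

def crossover(a, b):
    cut = rng.integers(1, GENES)        # single-point crossover
    return np.concatenate([a[:cut], b[cut:]])

def mutate(genome, rate=0.1):
    mask = rng.random(GENES) < rate
    return np.where(mask, genome + 0.3 * rng.standard_normal(GENES), genome)

pop = rng.standard_normal((POP, GENES))
for _ in range(GENERATIONS):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-POP // 2:]]     # truncation selection
    children = [mutate(crossover(*parents[rng.integers(len(parents), size=2)]))
                for _ in range(POP - len(parents))]
    pop = np.vstack([parents, *children])

best = max(fitness(ind) for ind in pop)
print(f"best score after {GENERATIONS} generations: {best:.3f}")
```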

    Towards Lifelong Reasoning with Sparse and Compressive Memory Systems

    Humans have a remarkable ability to remember information over long time horizons. When reading a book, we build up a compressed representation of the past narrative, such as the characters and events that have built up the story so far. We can do this even if they are separated from the current text by thousands of words, or by long stretches of time between readings. During our lives, we build up and retain memories that tell us where we live, what we have experienced, and who we are. Adding memory to artificial neural networks has been transformative in machine learning, allowing models to extract structure from temporal data and model the future more accurately. However, the capacity for long-range reasoning in current memory-augmented neural networks is considerably limited in comparison to humans, despite access to powerful modern computers. This thesis explores two prominent approaches towards scaling artificial memories to lifelong capacity: sparse access and compressive memory structures. With sparse access, only a very small subset of pertinent memory is inspected, retrieved, and updated. Sparse memory access is found to be beneficial for learning, allowing for improved data efficiency and generalisation. From a computational perspective, sparsity allows scaling to memories with millions of entities on a simple CPU-based machine. It is shown that memory systems which compress the past into a smaller set of representations reduce redundancy, can speed up the learning of rare classes, and improve upon classical data structures in database systems. Compressive memory architectures are also devised for sequence prediction tasks and are observed to significantly advance the state of the art in modelling natural language.
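
    A minimal sketch of sparse content-based access, under the assumption of a flat slot memory and cosine similarity (not the specific architectures developed in the thesis): each read touches only the top-k most similar slots, so read cost is governed by the lookup rather than by memory size. In practice the brute-force similarity below would be replaced by an approximate nearest-neighbour index.

```python
import numpy as np

rng = np.random.default_rng(3)

SLOTS, DIM, K = 100_000, 64, 8    # large memory, but each read touches only K slots

memory = rng.standard_normal((SLOTS, DIM)).astype(np.float32)
memory /= np.linalg.norm(memory, axis=1, keepdims=True)

def sparse_read(query, k=K):
    """Content-based read restricted to the top-k most similar slots."""
    q = query / np.linalg.norm(query)
    sims = memory @ q                         # brute force here; use an ANN index at scale
    idx = np.argpartition(sims, -k)[-k:]      # indices of the k best matches
    weights = np.exp(sims[idx] - sims[idx].max())
    weights /= weights.sum()
    return idx, weights @ memory[idx]         # touched slots + weighted readout

query = memory[42] + 0.1 * rng.standard_normal(DIM).astype(np.float32)
idx, value = sparse_read(query)
print(42 in idx)    # the noisy cue still lands on the right slot
```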

    Coding and learning of chemosensor array patterns in a neurodynamic model of the olfactory system

    Arrays of broadly selective chemical sensors, also known as electronic noses, have been developed during the past two decades as a low-cost and high-throughput alternative to analytical instruments for the measurement of odorant chemicals. Signal processing in these gas-sensor arrays has traditionally been performed by means of statistical and neural pattern-recognition techniques. The objective of this dissertation is to develop new computational models for processing gas-sensor array signals, inspired by the coding and learning mechanisms of the biological olfactory system. We have used a neurodynamic model of the olfactory system, the KIII, to develop and demonstrate four odor-processing computational functions: robust recovery of overlapping patterns, contrast enhancement, background suppression, and novelty detection. First, a coding mechanism based on the synchrony of neural oscillations is used to extract information from the associative memory of the KIII model. This temporal code allows the KIII to recall overlapping patterns in a robust manner. Second, a new learning rule that combines Hebbian and anti-Hebbian terms is proposed and shown to achieve contrast enhancement on gas-sensor array patterns. Third, a new local learning mechanism based on habituation is proposed to perform odor background suppression. Combining the Hebbian/anti-Hebbian rule and the local habituation mechanism, the KIII is able to suppress the response to continuously presented odors, facilitating the detection of new ones. Finally, a new learning mechanism based on anti-Hebbian learning is proposed to perform novelty detection. This learning mechanism allows the KIII to detect the introduction of new odors even in the presence of strong backgrounds. The four computational models are characterized with synthetic data and validated on gas-sensor array patterns obtained from an e-nose prototype developed for this purpose.
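
    The KIII equations are not reproduced here, but a generic sketch of two of the ingredients named above, a combined Hebbian/anti-Hebbian weight update and a local habituation gain on persistently driven inputs, might look as follows. All rates and the network shape are assumptions; the sketch only shows how sustained background input is progressively suppressed while silent channels keep their gain.

```python
import numpy as np

rng = np.random.default_rng(4)

N_IN, N_OUT = 16, 4
W = 0.1 * rng.standard_normal((N_OUT, N_IN))
gain = np.ones(N_IN)              # per-input habituation gain

ETA, ALPHA = 0.01, 0.005          # Hebbian and anti-Hebbian learning rates
DECAY, RECOVER = 0.5, 0.01        # gain suppression under drive, slow recovery

background = (rng.random(N_IN) < 0.5).astype(float)   # a persistently present odor

for _ in range(200):
    x = gain * background                      # habituated input
    y = np.tanh(W @ x)
    # Hebbian term strengthens input-output co-activation;
    # anti-Hebbian term decorrelates the outputs from one another
    W += ETA * np.outer(y, x) - ALPHA * np.outer(y, y) @ W
    # inputs that stay active are progressively suppressed, then slowly recover
    gain = 0.99 * gain * (1 - DECAY * (background > 0)) + RECOVER

print("gain on background inputs:", gain[background > 0].mean().round(2))
print("gain on silent inputs:   ", gain[background == 0].mean().round(2))
```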