
    Associative memory in artificial immune systems

    The paper concentrates on analyzing the associative properties of Artificial Immune Systems, especially immunological memory, which belongs to a class of sparse and distributed associative memories [18]. This class of memories derives its associative and robust nature from sparsely sampling the input space and distributing the data among many independent agents [16]. Immunological memory is one of the defining characteristics of the adaptive immune system [4]. It can store and recall patterns when required, and can easily categorize new input data [11]. Immunological memory is distributed among the cells of the AIS memory population, and it is robust: when a portion of the memory population is lost, the remaining memory cells continue to produce a response. The major principle behind vaccination procedures in medicine and immunotherapy derives from the associative properties of immunological memory [13]. Associative recall is a general phenomenon of immunological memory [18].
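The sparse, distributed recall this abstract describes can be sketched with a minimal Kanerva-style sparse distributed memory; the dimensions, Hamming radius, and counter update below are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, LOCATIONS, RADIUS = 64, 500, 26  # illustrative sizes

# Random hard locations sparsely sample the binary input space.
addresses = rng.integers(0, 2, size=(LOCATIONS, DIM))
counters = np.zeros((LOCATIONS, DIM))  # data distributed over many agents

def write(pattern):
    # Activate every location within Hamming RADIUS of the pattern.
    active = np.count_nonzero(addresses != pattern, axis=1) <= RADIUS
    counters[active] += 2 * pattern - 1  # store bits as +/-1 increments

def read(cue):
    # Majority vote over the counters of the locations activated by the cue.
    active = np.count_nonzero(addresses != cue, axis=1) <= RADIUS
    return (counters[active].sum(axis=0) > 0).astype(int)

pattern = rng.integers(0, 2, DIM)
write(pattern)
noisy = pattern.copy()
noisy[rng.choice(DIM, 5, replace=False)] ^= 1  # corrupt 5 bits
recalled = read(noisy)                         # associative, noise-tolerant recall
```

Because the stored bits are spread over many independent locations, losing a fraction of them degrades recall gracefully, mirroring the robustness of the memory cell population described above.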

    Population-based incremental learning with associative memory for dynamic environments

    Copyright © 2007 IEEE. Reprinted from IEEE Transactions on Evolutionary Computation.
    In recent years there has been growing interest in studying evolutionary algorithms (EAs) for dynamic optimization problems (DOPs) due to their importance in real-world applications. Several approaches, such as memory and multi-population schemes, have been developed for EAs to address dynamic problems. This paper investigates the application of the memory scheme to population-based incremental learning (PBIL) algorithms, a class of EAs, for DOPs. A PBIL-specific associative memory scheme, which stores the best solutions together with the corresponding environmental information, is investigated to improve adaptability in dynamic environments. The interactions between the memory scheme and the random immigrants, multi-population, and restart schemes for PBILs in dynamic environments are also investigated. To better test the performance of memory schemes for PBILs and other EAs in dynamic environments, the paper further proposes a dynamic environment generator that can systematically generate dynamic environments of different difficulty with respect to memory schemes. Using this generator, a series of dynamic environments are generated and experiments are carried out to compare the performance of the investigated algorithms. The experimental results show that the proposed memory scheme is efficient for PBILs in dynamic environments, and indicate that different interactions exist between the memory scheme and the random immigrants and multi-population schemes in different dynamic environments.
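A minimal sketch of the associative memory scheme the abstract describes, assuming a match-the-target dynamic fitness, a clipped PBIL update, and nearest-neighbour retrieval of stored (environment sample, probability vector) pairs; the paper's generator and parameter settings are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
L, POP, LR = 20, 30, 0.25  # illustrative sizes, not the paper's settings

def pbil_step(p, fitness):
    # Sample a population from the probability vector and learn toward the best.
    pop = (rng.random((POP, L)) < p).astype(int)
    best = pop[np.argmax([fitness(x) for x in pop])]
    p = (1 - LR) * p + LR * best
    return np.clip(p, 0.05, 0.95), best  # clipping keeps some exploration

# Associative memory of (environment sample, probability vector) pairs,
# mirroring the PBIL-specific scheme in the abstract; the nearest-neighbour
# matching rule is an assumption for illustration.
memory = []

def retrieve(env_sample):
    if not memory:
        return np.full(L, 0.5)
    dists = [int(np.sum(env != env_sample)) for env, _ in memory]
    return memory[int(np.argmin(dists))][1].copy()

targets = [rng.integers(0, 2, L) for _ in range(2)]
p = np.full(L, 0.5)
for epoch in range(4):                          # the environment oscillates
    target = targets[epoch % 2]
    fitness = lambda x, t=target: int(np.sum(x == t))
    p = retrieve(target)                        # reuse knowledge on re-entry
    for _ in range(60):
        p, best = pbil_step(p, fitness)
    memory.append((target, p.copy()))           # store solution + environment
```

On re-entering a previously seen environment, retrieval restarts PBIL from the stored probability vector instead of from scratch, which is the adaptability gain the memory scheme is meant to provide.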

    Persons Versus Brains: Biological Intelligence in Human Organisms

    I go deep into the biology of the human organism to argue that the psychological features and functions of persons are realized by cellular and molecular parallel distributed processing networks dispersed throughout the whole body. Persons supervene on the computational processes of nervous, endocrine, immune, and genetic networks. Persons do not go with brains.

    Pavlov's dog associative learning demonstrated on synaptic-like organic transistors

    In this letter, we present an original demonstration of an associative-learning neural network inspired by Pavlov's famous dog experiment. A single nanoparticle organic memory field-effect transistor (NOMFET) is used to implement each synapse. We show how the physical properties of this dynamic memristive device can be used to perform low-power write operations for learning, and to implement short-term association using temporal coding and spike-timing-dependent-plasticity-based learning. An electronic circuit was built to validate the proposed learning scheme with packaged devices, with good reproducibility despite the complex synaptic-like dynamics of the NOMFET in the pulse regime.
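The conditioning protocol can be illustrated at the behavioral level with a toy model in which a single weight stands in for each NOMFET synapse; the pairing rule below is a simplified stand-in for the device's spike-timing-dependent plasticity, not its physics, and all constants are assumptions:

```python
# Unconditioned synapse (food) is strong and fixed; conditioned synapse
# (bell) starts weak and is potentiated by paired presentations.
W_FOOD, THRESHOLD, LR = 1.0, 0.8, 0.3
w_bell = 0.1  # weak conditioned synapse before training

def respond(food, bell, w_bell):
    # Output "salivation" spike if the total synaptic drive crosses threshold.
    return W_FOOD * food + w_bell * bell >= THRESHOLD

def pair(w_bell):
    # Bell spike just before the food-driven output spike -> potentiation,
    # the causal ordering STDP rewards.
    if respond(1, 1, w_bell):
        w_bell = min(1.0, w_bell + LR)
    return w_bell

assert not respond(0, 1, w_bell)   # bell alone: no response before training
for _ in range(3):                 # repeated bell+food pairings
    w_bell = pair(w_bell)
print(respond(0, 1, w_bell))       # bell alone now triggers salivation
```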

    Brain-inspired conscious computing architecture

    What type of artificial systems will claim to be conscious and to experience qualia? The ability to comment upon the physical states of a brain-like dynamical system coupled with its environment seems sufficient to make such claims. The flow of internal states in such a system, guided and limited by associative memory, is similar to the stream of consciousness. Minimal requirements for an artificial system that will claim to be conscious are given in the form of a specific architecture, named the articon. Nonverbal discrimination of the working memory states of the articon gives it the ability to experience different qualities of internal states. Analysis of the inner state flows of such a system during a typical behavioral process shows that qualia are inseparable from perception and action. The role of consciousness in the learning of skills, when conscious information processing is replaced by subconscious processing, is elucidated. Arguments confirming that phenomenal experience is a result of cognitive processes are presented. Possible philosophical objections, based on the Chinese room and other arguments, are discussed, but they are insufficient to refute the articon's claims. Conditions for genuine understanding that go beyond the Turing test are presented. Articons may fulfill such conditions, and in principle the structure of their experiences may be arbitrarily close to human experience.

    NASA JSC neural network survey results

    A survey of artificial neural systems was conducted in support of NASA Johnson Space Center's Automatic Perception for Mission Planning and Flight Control research program. Several of the world's leading researchers contributed papers containing their most recent results on artificial neural systems. These papers were grouped into categories, and descriptive accounts of the results make up a large part of this report. Also included is material on sources of information on artificial neural systems, such as books, technical reports, and software tools.

    Non-Convex Multi-species Hopfield models

    In this work we introduce a multi-species generalization of the Hopfield model for associative memory, where neurons are divided into groups and both inter-group and intra-group pairwise interactions are considered, with different intensities. The system thus contains two of the main ingredients of modern deep neural network architectures: Hebbian interactions to store patterns of information, and multiple layers coding different levels of correlations. The model is completely solvable in the low-load regime with a suitable generalization of the Hamilton-Jacobi technique, even though the Hamiltonian can be a non-definite quadratic form of the magnetizations. The family of multi-species Hopfield models includes, as special cases, the three-layer Restricted Boltzmann Machine (RBM) with a Gaussian hidden layer and the Bidirectional Associative Memory (BAM) model. (This is a preprint of an article published in J. Stat. Phys.)
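As a baseline for the multi-species construction, the classical single-species Hopfield model with Hebbian couplings can be sketched as follows; sizes and noise level are illustrative, and the multi-species interaction structure itself is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 200, 3  # neurons, patterns (low-load regime: P << N)

patterns = rng.choice([-1, 1], size=(P, N))
# Hebbian couplings J_ij = (1/N) sum_mu xi_i^mu xi_j^mu, zero diagonal;
# this is the pattern-storage ingredient the abstract refers to.
J = patterns.T @ patterns / N
np.fill_diagonal(J, 0)

def retrieve(state, steps=10):
    # Synchronous sign dynamics; in the low-load regime the stored
    # patterns are attractors of this update.
    for _ in range(steps):
        state = np.sign(J @ state)
    return state

cue = patterns[0].copy()
cue[rng.choice(N, 20, replace=False)] *= -1   # corrupt 10% of the bits
m = retrieve(cue) @ patterns[0] / N           # overlap (magnetization)
```

The multi-species model of the paper replaces the single coupling matrix with blocks of different intensities between neuron groups, but retrieval is still measured by overlaps like `m` above.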

    Evolution of associative learning in chemical networks

    Organisms that can learn about their environment and modify their behaviour appropriately during their lifetime are more likely to survive and reproduce than organisms that do not. While associative learning – the ability to detect correlated features of the environment – has been studied extensively in nervous systems, where the underlying mechanisms are reasonably well understood, mechanisms within single cells that could allow associative learning have received little attention. Here, using in silico evolution of chemical networks, we show that there exists a diversity of remarkably simple and plausible chemical solutions to the associative learning problem, the simplest of which uses only one core chemical reaction. We then asked to what extent a linear combination of chemical concentrations in the network could approximate the ideal Bayesian posterior of the environment given the stimulus history so far. This Bayesian analysis revealed the 'memory traces' of the chemical network. The implication of this paper is that there is little reason to believe that a lack of suitable phenotypic variation would prevent associative learning from evolving in cell signalling, metabolic, gene regulatory, or mixed networks in cells.
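A toy discrete-time realization of the minimal motif the abstract mentions (a single reaction that writes a memory species when two stimuli co-occur) might look like this; the rate constants and readout threshold are assumptions for illustration, not the evolved networks of the paper:

```python
# A memory species M is produced when conditioned stimulus A and
# unconditioned stimulus B are both present, and slowly decays; M then
# gates the response to A alone. Constants are illustrative.
K_WRITE, DECAY, THETA = 0.5, 0.05, 0.4

def step(m, a, b):
    # d[M]/dt = k*[A]*[B] - d*[M], forward-Euler with unit time step
    return m + K_WRITE * a * b - DECAY * m

def response(m, a, b):
    # US always triggers the response; CS triggers it only once M is high.
    return b == 1 or (a == 1 and m > THETA)

m = 0.0
assert not response(m, 1, 0)   # CS alone before pairing: no response
for _ in range(5):             # paired presentations of CS and US
    m = step(m, 1, 1)
print(response(m, 1, 0))       # CS alone after pairing
```

Here the concentration of M is exactly the kind of 'memory trace' a linear readout of the network state would pick up.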

    Multitasking associative networks

    We introduce a bipartite, diluted and frustrated network as a sparse restricted Boltzmann machine, and show its thermodynamic equivalence to an associative working memory able to retrieve multiple patterns in parallel without falling into the spurious states typical of classical neural networks. We focus on systems processing in parallel a finite (up to logarithmic growth in the volume) number of patterns, mirroring the low storage level of standard Amit-Gutfreund-Sompolinsky theory. Results obtained through statistical mechanics, the signal-to-noise technique, and Monte Carlo simulations are in full agreement and carry interesting biological insights. Indeed, these associative networks open new perspectives on the understanding of the multitasking features expressed by complex systems, e.g. neural and immune networks. (To appear in Phys. Rev. Lett.)
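The role of dilution in parallel retrieval can be illustrated with two patterns whose supports are disjoint: a Hebbian network then holds both retrievals at once instead of collapsing into a spurious mixture. The construction below is a toy equivalent, not the bipartite model of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100
# Two diluted patterns with entries in {-1, 0, +1} and disjoint supports;
# the zero entries (dilution) are what free neurons for other patterns.
xi1 = np.concatenate([rng.choice([-1, 1], N // 2), np.zeros(N // 2)])
xi2 = np.concatenate([np.zeros(N // 2), rng.choice([-1, 1], N // 2)])

J = (np.outer(xi1, xi1) + np.outer(xi2, xi2)) / N  # Hebbian couplings
np.fill_diagonal(J, 0)

state = np.sign(xi1 + xi2)   # configuration encoding both patterns at once
state = np.sign(J @ state)   # one synchronous update: the state is stable
m1 = state @ xi1 / (N // 2)  # overlap with pattern 1 on its support
m2 = state @ xi2 / (N // 2)  # overlap with pattern 2 on its support
```

Both overlaps stay at their maximum after the update, i.e. the network retrieves the two patterns simultaneously, which is the multitasking behavior the abstract describes.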