Population-based incremental learning with associative memory for dynamic environments
Copyright © 2007 IEEE. Reprinted from IEEE Transactions on Evolutionary Computation.
In recent years there has been growing interest in studying evolutionary algorithms (EAs) for dynamic optimization problems (DOPs) due to their importance in real-world applications. Several approaches, such as memory and multi-population schemes, have been developed to help EAs address dynamic problems. This paper investigates the application of the memory scheme to population-based incremental learning (PBIL) algorithms, a class of EAs, for DOPs. A PBIL-specific associative memory scheme, which stores the best solutions together with the corresponding environmental information, is investigated to improve PBIL's adaptability in dynamic environments. The paper also examines the interactions between the memory scheme and the random immigrants, multi-population, and restart schemes for PBILs in dynamic environments. To better test the performance of memory schemes for PBILs and other EAs in dynamic environments, the paper further proposes a dynamic environment generator that can systematically generate dynamic environments of varying difficulty with respect to memory schemes. Using this generator, a series of dynamic environments is generated and experiments are carried out to compare the investigated algorithms. The experimental results show that the proposed memory scheme is efficient for PBILs in dynamic environments, and that different interactions exist between the memory scheme and the random immigrants and multi-population schemes in different dynamic environments.
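The core PBIL loop that the abstract builds on can be sketched in a few lines: a probability vector is sampled to produce a population, then nudged toward the best sample. The sketch below covers only this base algorithm; the onemax fitness and all parameter values are illustrative assumptions, not the paper's benchmark.

```python
import random

def onemax(bits):
    # Illustrative stand-in fitness: count of ones (not the paper's DOP benchmark)
    return sum(bits)

def pbil(n_bits=20, pop_size=50, lr=0.1, generations=100, seed=0):
    rng = random.Random(seed)
    p = [0.5] * n_bits            # probability vector: P(bit i == 1)
    best, best_fit = None, -1
    for _ in range(generations):
        # Sample a population from the current probability vector
        pop = [[1 if rng.random() < pi else 0 for pi in p]
               for _ in range(pop_size)]
        gen_best = max(pop, key=onemax)
        if onemax(gen_best) > best_fit:
            best, best_fit = gen_best, onemax(gen_best)
        # Learn: shift the probability vector toward the generation's best sample
        p = [(1 - lr) * pi + lr * bi for pi, bi in zip(p, gen_best)]
    return best, best_fit
```

An associative memory scheme in this setting would periodically snapshot pairs of the best solution and environmental information and, when the environment changes, re-seed `p` from the stored entry that best matches the new environment.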
Synthetic associative learning in engineered multicellular consortia
Associative learning is one of the key mechanisms that living organisms use to adapt to their changing environments. It was recognized early on as a general trait of complex multicellular organisms, but it is also found in "simpler" ones. It has also been explored within synthetic biology using molecular circuits directly inspired by neural network models of conditioning. These designs involve complex wiring diagrams to be implemented within a single cell, and the need for diverse molecular wires becomes a challenge that may be very difficult to overcome. Here we present three alternative circuit designs based on two-cell microbial consortia that can properly display associative learning responses to two classes of stimuli, exhibiting both long- and short-term memory (i.e. the association can be lost over time). These designs might be a helpful approach for engineering the human gut microbiome or even synthetic organoids, defining a new class of decision-making biological circuits capable of memory and adaptation to changing conditions. The potential implications and extensions are outlined.
Persons Versus Brains: Biological Intelligence in Human Organisms
I go deep into the biology of the human organism to argue that the psychological features and functions of persons are realized by cellular and molecular parallel distributed processing networks dispersed throughout the whole body. Persons supervene on the computational processes of nervous, endocrine, immune, and genetic networks. Persons do not go with brains.
Non-Convex Multi-species Hopfield models
In this work we introduce a multi-species generalization of the Hopfield model for associative memory, in which neurons are divided into groups and both inter-group and intra-group pair-wise interactions are considered, with different intensities. This system thus contains two of the main ingredients of modern deep neural network architectures: Hebbian interactions to store patterns of information, and multiple layers coding different levels of correlations. The model is completely solvable in the low-load regime with a suitable generalization of the Hamilton-Jacobi technique, even though the Hamiltonian can be a non-definite quadratic form of the magnetizations. The family of multi-species Hopfield models includes, as special cases, the three-layer Restricted Boltzmann Machine (RBM) with a Gaussian hidden layer and the Bidirectional Associative Memory (BAM) model. (Pre-print of an article published in J. Stat. Phys.)
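For readers unfamiliar with the single-species baseline that the abstract generalizes, the classical Hopfield model stores patterns via the Hebbian rule and retrieves them with sign-threshold dynamics. A minimal sketch follows; the patterns, sizes, and step count are arbitrary illustrations:

```python
import numpy as np

def train_hebbian(patterns):
    # Hebbian rule: W = (1/n) * sum_mu outer(xi_mu, xi_mu), with zero self-coupling
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=10):
    # Synchronous sign-threshold updates on +/-1 neurons
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = train_hebbian(patterns)
noisy = np.array([1, -1, 1, -1, 1, 1])   # first pattern with its last bit flipped
recovered = recall(W, noisy)
```

The multi-species model of the paper replaces the single group of neurons with several interacting groups coupled by block-structured interactions, with a role loosely analogous to the single coupling matrix `W` here.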
Pavlov's dog associative learning demonstrated on synaptic-like organic transistors
In this letter, we present an original demonstration of an associative learning neural network inspired by Pavlov's famous dog experiments. A single nanoparticle organic memory field-effect transistor (NOMFET) is used to implement each synapse. We show how the physical properties of this dynamic memristive device can be used to perform low-power write operations for learning, and to implement short-term association using temporal coding and spike-timing-dependent-plasticity-based learning. An electronic circuit was built to validate the proposed learning scheme with packaged devices, with good reproducibility despite the complex synaptic-like dynamics of the NOMFET in the pulse regime.
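In software terms, the conditioning experiment the letter realizes in hardware reduces to strengthening a bell-to-response weight whenever bell and response co-occur, with a decay term providing the short-term, fading character of the association. The threshold, rates, and update rule below are illustrative assumptions, not the NOMFET's measured dynamics:

```python
def pavlov_step(w_bell, bell, food, lr=0.3, decay=0.02):
    # Unconditioned stimulus (food) always triggers the response; the bell
    # triggers it only once its learned weight crosses the threshold.
    response = food or (bell and w_bell >= 0.5)
    # Hebbian-style write: bell/response co-occurrence strengthens the link
    if bell and response:
        w_bell = min(1.0, w_bell + lr)
    # Passive decay models the short-term, volatile nature of the association
    w_bell = max(0.0, w_bell - decay)
    return w_bell, response

w = 0.0
for _ in range(3):                                       # conditioning: bell + food
    w, _ = pavlov_step(w, bell=True, food=True)
w, conditioned = pavlov_step(w, bell=True, food=False)   # bell alone now fires
```

Without further paired presentations, the decay term eventually pulls the weight back below threshold, so the bell alone stops eliciting the response, mirroring the short-term association described above.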
StochKit-FF: Efficient Systems Biology on Multicore Architectures
The stochastic modelling of biological systems is an informative and, in some cases, very well-suited technique, but it can be more computationally expensive than other modelling approaches such as differential equations. We present StochKit-FF, a parallel version of StochKit, a reference toolkit for stochastic simulations. StochKit-FF is based on the FastFlow programming toolkit for multicores and exploits the novel concept of selective memory. We evaluate StochKit-FF on a model of HIV infection dynamics, with the aim of extracting information from efficiently run experiments, here in terms of averages and variances and, in the longer term, more structured data.
Parallel processing in immune networks
In this work we adopt a statistical mechanics approach to investigate basic,
systemic features exhibited by adaptive immune systems. The lymphocyte network
made by B-cells and T-cells is modeled by a bipartite spin-glass, where,
following biological prescriptions, links connecting B-cells and T-cells are
sparse. Interestingly, the dilution performed on links is shown to make the
system able to orchestrate parallel strategies to fight several pathogens at
the same time; this multitasking capability constitutes a remarkable, key
property of immune systems as multiple antigens are always present within the
host. We also define the stochastic process ruling the temporal evolution of
lymphocyte activity, and show its relaxation toward an equilibrium measure
allowing statistical mechanics investigations. Analytical results are compared
with Monte Carlo simulations and signal-to-noise outcomes, showing overall excellent agreement. Finally, within our model we obtain a rationale for the experimentally well-evidenced correlation between lymphocytosis and autoimmunity; this sheds further light on the systemic features exhibited by immune networks. (To appear in Phys. Rev.)
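The multitasking effect described above can be illustrated with a toy version of the bipartite construction: sparse B-T links whose marginalization yields an effective Hopfield-like coupling on the T-cell layer, on which two antigen patterns are then retrieved simultaneously. The sizes, the disjoint-support form of the dilution, and the zero-temperature dynamics are simplifying assumptions for illustration, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_clones, n_t = 4, 40
# Sparse bipartite links: each T-cell is wired to exactly one B-cell clone,
# a caricature of the dilution imposed on B-T connections.
xi = np.zeros((n_clones, n_t))
for mu in range(n_clones):
    xi[mu, mu * 10:(mu + 1) * 10] = rng.choice([-1, 1], size=10)

# Marginalizing the B layer of the bipartite spin glass yields an effective
# Hopfield-like coupling among T-cells.
J = xi.T @ xi / n_t
np.fill_diagonal(J, 0.0)

# Present two antigens at once: start aligned with clones 0 and 1 together.
s = np.where(xi[0] + xi[1] + 1e-9 * rng.standard_normal(n_t) >= 0, 1, -1)
for _ in range(10):
    s = np.where(J @ s >= 0, 1, -1)          # zero-temperature dynamics

overlaps = xi @ s / 10                       # retrieval quality per clone
```

Because the links are diluted, the overlaps with clones 0 and 1 both reach their maximum simultaneously; with dense links the network could stably retrieve only one pattern at a time.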
NASA JSC neural network survey results
A survey of Artificial Neural Systems in support of NASA's (Johnson Space Center) Automatic Perception for Mission Planning and Flight Control Research Program was conducted. Several of the world's leading researchers contributed papers containing their most recent results on artificial neural systems. These papers were broken into categories, and descriptive accounts of the results make up a large part of this report. Also included is material on sources of information on artificial neural systems, such as books, technical reports, and software tools.
Evolution of associative learning in chemical networks
Organisms that can learn about their environment and modify their behaviour appropriately during their lifetime are more likely to survive and reproduce than organisms that cannot. While associative learning, the ability to detect correlated features of the environment, has been studied extensively in nervous systems, where the underlying mechanisms are reasonably well understood, mechanisms within single cells that could allow associative learning have received little attention. Here, using in silico evolution of chemical networks, we show that there exists a diversity of remarkably simple and plausible chemical solutions to the associative learning problem, the simplest of which uses only one core chemical reaction. We then asked to what extent a linear combination of chemical concentrations in the network could approximate the ideal Bayesian posterior of an environment given the stimulus history so far. This Bayesian analysis revealed the 'memory traces' of the chemical network. The implication of this paper is that there is little reason to believe that a lack of suitable phenotypic variation would prevent associative learning from evolving in cell signalling, metabolic, gene regulatory, or a mixture of these networks in cells.
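A one-reaction solution of the kind the abstract mentions can be caricatured as a single memory species M produced at a rate proportional to the co-occurrence of stimuli A and B and degraded at a constant rate, so that M accumulates during paired presentations and fades afterwards. This is a hand-written analogue under assumed rate constants, not one of the paper's evolved networks:

```python
def simulate_memory(stimuli, k=1.0, d=0.1, dt=0.1):
    # Euler integration of dM/dt = k*A*B - d*M: production requires both
    # stimuli at once (the association); degradation gives forgetting.
    m, trace = 0.0, []
    for a, b in stimuli:
        m += dt * (k * a * b - d * m)
        trace.append(m)
    return trace

paired = [(1.0, 1.0)] * 50          # conditioning: A and B presented together
alone = [(1.0, 0.0)] * 50           # A alone: no production, M slowly decays
trace = simulate_memory(paired + alone)
```

After conditioning, a response read out as proportional to M times A would fire for A alone, and the association is gradually lost as M degrades, matching the memory-trace picture revealed by the Bayesian analysis.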