
    Associative memory in gene regulation networks

    The pattern of gene expression in the phenotype of an organism is determined in part by the dynamical attractors of the organism’s gene regulation network. Changes to the connections in this network over evolutionary time alter the adult gene expression pattern and hence the fitness of the organism. However, the evolution of structure in gene regulation networks (potentially reflecting past selective environments), and what that structure affords or precludes with respect to evolvability, is poorly understood. In this paper we model the evolution of a gene regulation network in a controlled scenario. We show that selected changes to connections in the regulation network make the currently selected gene expression pattern more robust to environmental variation. Moreover, such changes to connections are necessarily ‘Hebbian’ (‘genes that fire together wire together’): genes whose expression is selected for in the same selective environments become co-regulated. Accordingly, in a manner formally equivalent to well-understood learning behaviour in artificial neural networks, a gene regulation network develops a generalised associative memory of past selected phenotypes. This theoretical framework helps us to better understand the relationship between homeostasis and evolvability (selection to reduce variability facilitates structured variability), and shows that, in principle, a gene regulation network can develop ‘recall’ capabilities normally reserved for cognitive systems.
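    The Hebbian mechanism the abstract describes can be made concrete with a toy simulation. The sketch below is illustrative only, not the paper’s model: a Hopfield-style network stands in for the regulation network, the gene count, pattern count, and noise level are arbitrary, and past selected phenotypes are stored with the ‘fire together, wire together’ rule and then recalled from a corrupted expression state.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 20                                         # number of genes (illustrative)
        phenotypes = rng.choice([-1, 1], size=(3, N))  # past selected expression patterns

        # Hebbian rule: genes whose expression was selected together become
        # co-regulated ('genes that fire together wire together').
        W = np.zeros((N, N))
        for p in phenotypes:
            W += np.outer(p, p) / N
        np.fill_diagonal(W, 0)

        def settle(state, steps=50):
            # Iterate the regulatory dynamics until expression reaches an attractor.
            for _ in range(steps):
                state = np.where(W @ state >= 0, 1, -1)
            return state

        # A corrupted version of a past phenotype is 'recalled' by settling to the
        # attractor, i.e. the network acts as a generalised associative memory.
        noisy = phenotypes[0].copy()
        noisy[rng.random(N) < 0.25] *= -1              # flip ~25% of the genes
        print("recall accuracy:", np.mean(settle(noisy) == phenotypes[0]))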

    Adaptation without natural selection

    The document is itself an extended abstract.

    If you can't be with the one you love, love the one you're with: How individual habituation of agent interactions improves global utility

    Simple distributed strategies that modify the behaviour of selfish individuals in a manner that enhances cooperation or global efficiency have proved difficult to identify. We consider a network of selfish agents who each optimise their individual utilities by coordinating (or anti-coordinating) with their neighbours, to maximise the pay-offs from randomly weighted pair-wise games. In general, agents will opt for the behaviour that is the best compromise (for them) of the many conflicting constraints created by their neighbours, but the attractors of the system as a whole will not maximise total utility. We then consider agents that act as 'creatures of habit' by increasing their preference to coordinate (anti-coordinate) with whichever neighbours they are coordinated (anti-coordinated) with at the present moment. These preferences change slowly while the system is repeatedly perturbed such that it settles to many different local attractors. We find that under these conditions, with each perturbation there is a progressively higher chance of the system settling to a configuration with high total utility. Eventually, only one attractor remains, and that attractor is very likely to maximise (or almost maximise) global utility. This counterintuitive result can be understood using theory from computational neuroscience; we show that this simple form of habituation is equivalent to Hebbian learning, and the improved optimisation of global utility that is observed results from well-known generalisation capabilities of associative memory acting at the network scale. This causes the system of selfish agents, each acting individually but habitually, to collectively identify configurations that maximise total utility.
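    The habituation rule maps naturally onto a small simulation. The following sketch is a minimal illustration, not the paper’s implementation: the agent count, learning rate, and epoch count are assumed, agents settle to a local attractor of a fixed randomly weighted coordination game, and a separate habituated preference matrix drifts Hebbian-style toward whatever (anti-)coordinations currently hold.

        import numpy as np

        rng = np.random.default_rng(1)
        N = 30                                  # number of agents (illustrative)
        W0 = rng.normal(size=(N, N))            # fixed random pair-wise game weights
        W0 = (W0 + W0.T) / 2
        np.fill_diagonal(W0, 0)
        H = np.zeros((N, N))                    # slowly learned habitual preferences

        def settle(weights, s):
            # Each agent flips to its myopic best response until no agent
            # wants to change: a local attractor of the game dynamics.
            changed = True
            while changed:
                changed = False
                for i in rng.permutation(N):
                    best = 1 if weights[i] @ s > 0 else -1
                    if best != s[i]:
                        s[i] = best
                        changed = True
            return s

        for epoch in range(100):
            s = settle(W0 + H, rng.choice([-1, 1], size=N))  # perturb, then relax
            # Habituation: strengthen the preference for the current
            # (anti-)coordinations. This outer-product update is a Hebbian rule.
            H += 0.02 * np.outer(s, s)
            np.fill_diagonal(H, 0)
            if epoch % 10 == 0:
                print(epoch, s @ W0 @ s / 2)    # total utility under the ORIGINAL game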

    The Principles of Social Order. Selected Essays of Lon L. Fuller, edited With an introduction by Kenneth I. Winston

    The electron spins of semiconductor defects can have complex interactions with their host, particularly in polar materials like SiC where electrical and mechanical variables are intertwined. By combining pulsed spin resonance with ab initio simulations, we show that spin-spin interactions in 4H-SiC neutral divacancies give rise to spin states with a strong Stark effect, sub-10⁻⁶ strain sensitivity, and highly spin-dependent photoluminescence with intensity contrasts of 15%-36%. These results establish SiC color centers as compelling systems for sensing nanoscale electric and strain fields.

    Multiplexed gradient descent: Fast online training of modern datasets on hardware neural networks without backpropagation

    We present multiplexed gradient descent (MGD), a gradient descent framework designed to easily train analog or digital neural networks in hardware. MGD utilizes zero-order optimization techniques for online training of hardware neural networks. We demonstrate its ability to train neural networks on modern machine learning datasets, including CIFAR-10 and Fashion-MNIST, and compare its performance to backpropagation. Assuming realistic timescales and hardware parameters, our results indicate that these optimization techniques can train a network on emerging hardware platforms orders of magnitude faster than the wall-clock time of training via backpropagation on a standard GPU, even in the presence of imperfect weight updates or device-to-device variations in the hardware. We additionally describe how MGD can be applied to existing hardware as part of chip-in-the-loop training, or integrated directly at the hardware level. Crucially, the MGD framework is highly flexible, and its gradient descent process can be optimized to compensate for specific hardware limitations such as slow parameter-update speeds or limited input bandwidth.
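    The paper’s multiplexing scheme is not reproduced here, but the underlying zero-order idea can be illustrated with a simultaneous-perturbation sketch (SPSA-style). Everything below is an assumption for illustration: the quadratic cost stands in for a measured hardware cost, and the parameter count, perturbation amplitude, and learning rate are placeholders.

        import numpy as np

        rng = np.random.default_rng(2)

        def cost(theta):
            # Stand-in for a cost measured at the hardware output; here a quadratic.
            return np.sum((theta - 1.0) ** 2)

        theta = rng.normal(size=10)      # 'hardware' weights (illustrative)
        eps, lr = 1e-2, 5e-2             # perturbation amplitude and learning rate

        for _ in range(300):
            delta = rng.choice([-1.0, 1.0], size=theta.shape)  # perturb all weights at once
            # Two cost measurements per step; correlating the cost difference with
            # each weight's perturbation gives an unbiased per-weight gradient estimate.
            g = (cost(theta + eps * delta) - cost(theta - eps * delta)) / (2 * eps) * delta
            theta -= lr * g

        print("final cost:", cost(theta))

    Because every weight is perturbed in the same two measurements, the cost of a step is independent of the number of weights, which is what makes this family of methods attractive for hardware where per-parameter probing is slow.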
