
    Imitative and Direct Learning as Interacting Factors in Life History Evolution

    The idea that lifetime learning can have a significant effect on life history evolution has recently been explored using a series of artificial life simulations. These involved populations of competing individuals evolving by natural selection to learn to perform well on simplified abstract tasks, with the learning consisting of identifying regularities in their environment. In reality, there is more to learning than that type of direct individual experience, because it often includes a substantial degree of social learning that involves various forms of imitation of what other individuals have learned before them. This article rectifies that omission by incorporating memes and imitative learning into revised versions of the previous approach. Doing this reliably requires formulating and testing a general framework for meme-based simulations that will enable more complete investigations of learning as a factor in any life history evolution scenario. It does that by simulating imitative information transfer in terms of memes being passed between individuals, and developing a process for merging that information with the (possibly inconsistent) information acquired by direct experience, leading to a consistent overall body of learning. The proposed framework is tested on a range of learning variations and a representative set of life history factors to confirm the robustness of the approach. The simulations presented illustrate the types of interactions and tradeoffs that can emerge, and indicate the kinds of species-specific models that could be developed with this approach in the future.
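
    A minimal sketch of the kind of meme-merging process described above, assuming a simple confidence-weighted representation of learned regularities; the class, method names and weights here are illustrative, not the paper's actual model:

        # Illustrative only: one way to merge meme-based (imitative) information
        # with directly learned information into a consistent belief set.
        class Individual:
            def __init__(self):
                # regularity -> (estimated value, accumulated confidence)
                self.beliefs = {}

            def learn_directly(self, key, value, confidence=1.0):
                self._merge(key, value, confidence)

            def receive_meme(self, key, value, sender_confidence, imitation_weight=0.5):
                # Imitated information counts for less than first-hand experience.
                self._merge(key, value, sender_confidence * imitation_weight)

            def _merge(self, key, value, confidence):
                if key not in self.beliefs:
                    self.beliefs[key] = (value, confidence)
                    return
                old_value, old_conf = self.beliefs[key]
                total = old_conf + confidence
                # Confidence-weighted average resolves (possibly inconsistent) inputs.
                self.beliefs[key] = ((old_value * old_conf + value * confidence) / total, total)

        learner, teacher = Individual(), Individual()
        teacher.learn_directly("food_site_quality", 0.9)
        learner.learn_directly("food_site_quality", 0.3)
        learner.receive_meme("food_site_quality", *teacher.beliefs["food_site_quality"])
        print(learner.beliefs["food_site_quality"])  # merged, consistent estimate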

    Double dissociation, modularity, and distributed organization

    Muller argues that double dissociations do not imply underlying modularity of the cognitive system, citing neural networks as examples of fully distributed systems that can give rise to double dissociations. We challenge this claim, noting that such double dissociations typically do not “scale up”, and that even some single dissociations can be difficult to account for in a distributed system.

    Artificial Bee Colony training of neural networks: comparison with back-propagation

    The Artificial Bee Colony (ABC) is a swarm intelligence algorithm for optimization that has previously been applied to the training of neural networks. This paper examines more carefully the performance of the ABC algorithm for optimizing the connection weights of feed-forward neural networks for classification tasks, and presents a more rigorous comparison with the traditional Back-Propagation (BP) training algorithm. The empirical results for benchmark problems demonstrate that using the standard “stopping early” approach with optimized learning parameters leads to improved BP performance over the previous comparative study, and that a simple variation of the ABC approach provides improved ABC performance too. With both improvements applied, the ABC approach does perform very well on small problems, but the generalization performances achieved are only significantly better than standard BP on one out of six datasets, and the training times increase rapidly as the size of the problem grows. If different, evolutionarily optimized, BP learning rates are allowed for the two layers of the neural network, BP is significantly better than the ABC on two of the six datasets, and not significantly different on the other four.
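
    A compressed sketch of how ABC can be used to search a network's weight vector, assuming a placeholder loss function in place of the classification error of a feed-forward network; the onlooker-bee phase of the full algorithm is omitted for brevity:

        import numpy as np

        rng = np.random.default_rng(0)

        def loss(w):                       # stand-in for the classification error of a
            return np.sum((w - 1.0) ** 2)  # feed-forward net parameterised by w

        n_sources, dim, limit, cycles = 20, 10, 30, 200
        sources = rng.uniform(-1, 1, (n_sources, dim))   # candidate weight vectors
        trials = np.zeros(n_sources)

        for _ in range(cycles):
            for i in range(n_sources):                   # employed-bee phase
                k = (i + 1 + rng.integers(n_sources - 1)) % n_sources  # partner source, k != i
                j = rng.integers(dim)
                candidate = sources[i].copy()
                candidate[j] += rng.uniform(-1, 1) * (sources[i, j] - sources[k, j])
                if loss(candidate) < loss(sources[i]):   # greedy selection
                    sources[i], trials[i] = candidate, 0
                else:
                    trials[i] += 1
            for i in range(n_sources):                   # scout phase: abandon stale sources
                if trials[i] > limit:
                    sources[i], trials[i] = rng.uniform(-1, 1, dim), 0

        best = sources[np.argmin([loss(s) for s in sources])]
        print(loss(best))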

    Born to learn: The inspiration, progress, and future of evolved plastic artificial neural networks

    Biological plastic neural networks are systems of extraordinary computational capabilities shaped by evolution, development, and lifetime learning. The interplay of these elements leads to the emergence of adaptive behavior and intelligence. Inspired by such intricate natural phenomena, Evolved Plastic Artificial Neural Networks (EPANNs) use simulated evolution in silico to breed plastic neural networks with a large variety of dynamics, architectures, and plasticity rules: these artificial systems are composed of inputs, outputs, and plastic components that change in response to experiences in an environment. These systems may autonomously discover novel adaptive algorithms and lead to hypotheses on the emergence of biological adaptation. EPANNs have seen considerable progress over the last two decades. Current scientific and technological advances in artificial neural networks are now setting the conditions for radically new approaches and results. In particular, the limitations of hand-designed networks could be overcome by more flexible and innovative solutions. This paper brings together a variety of inspiring ideas that define the field of EPANNs. The main methods and results are reviewed. Finally, new opportunities and developments are presented.
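
    A small sketch of the kind of plastic component EPANN work evolves: a layer whose weights change online under a generalised Hebbian rule, with the rule coefficients acting as the evolvable genome; the rule form, parameter values and task here are assumptions for illustration, not a specific model from the paper:

        import numpy as np

        class PlasticLayer:
            def __init__(self, n_in, n_out, genome, rng):
                # Genome = coefficients of the plasticity rule, the part evolution would tune.
                self.A, self.B, self.C, self.D, self.eta = genome
                self.w = rng.normal(0, 0.1, (n_out, n_in))

            def forward(self, x):
                y = np.tanh(self.w @ x)
                # Generalised Hebbian update: correlation, pre-only, post-only, constant terms.
                dw = self.eta * (self.A * np.outer(y, x)
                                 + self.B * x
                                 + self.C * y[:, None]
                                 + self.D)
                self.w += dw
                return y

        rng = np.random.default_rng(1)
        layer = PlasticLayer(4, 2, genome=(1.0, 0.0, 0.0, 0.0, 0.01), rng=rng)
        for _ in range(10):
            layer.forward(rng.normal(size=4))   # weights adapt during the "lifetime"
        print(layer.w)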

    N=4 Twisted Superspace from Dirac-Kahler Twist and Off-shell SUSY Invariant Actions in Four Dimensions

    We propose an N=4 twisted superspace formalism in four dimensions by introducing the Dirac-Kahler twist. In addition to the BRST charge, as the scalar counterpart of the twisted supercharge, we find vector and tensor twisted supercharges. By introducing a twisted chiral superfield we explicitly construct an off-shell twisted N=4 SUSY invariant action. A variety of supergauge invariant actions can be proposed by introducing a twisted vector superfield. We may, however, need to find further constraints to identify the twisted N=4 super Yang-Mills action. We propose a superconnection formalism of twisted superspace in which constraints play a crucial role. It turns out that the N=4 superalgebra of the Dirac-Kahler twist can be decomposed into N=2 sectors. We can then construct twisted N=2 super Yang-Mills actions by the superconnection formalism of twisted superspace in two and four dimensions. Comment: 62 pages
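
    Schematically, the Dirac-Kahler twist expands the sixteen N=4 supercharges in the Clifford-algebra basis, which is where the scalar (BRST), vector and tensor twisted supercharges mentioned above come from; the normalisations and gamma-matrix conventions below are only indicative, not the paper's:

        Q_{\alpha\beta} \;=\; \bigl( \mathbf{1}\, s \,+\, \gamma^{\mu} s_{\mu} \,+\, \tfrac{1}{2}\, \gamma^{\mu\nu} s_{\mu\nu} \,+\, \tilde{\gamma}^{\mu} \tilde{s}_{\mu} \,+\, \gamma_{5}\, \tilde{s} \bigr)_{\alpha\beta},
        \qquad \tilde{\gamma}^{\mu} \equiv \gamma_{5} \gamma^{\mu}.

    Counting components, 1 + 4 + 6 + 4 + 1 = 16 matches the number of supercharges of N=4 supersymmetry in four dimensions.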

    N=2 Supersymmetric Model with Dirac-Kahler Fermions from Generalized Gauge Theory in Two Dimensions

    We investigate the generalized gauge theory proposed previously and show that in two dimensions the instanton gauge fixing of the generalized topological Yang-Mills action leads to a twisted N=2 supersymmetric action. We find that the R-symmetry of N=2 supersymmetry can be identified with the flavour symmetry of the Dirac-Kahler fermion formulation. Thus the twisting procedure allows the topological ghost fields to be interpreted as Dirac-Kahler matter fermions. Comment: 22 pages, LaTeX
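
    For reference, the Dirac-Kahler construction behind the flavour symmetry mentioned above packages the components of an inhomogeneous differential form into a matrix-valued field (two-dimensional case shown schematically; conventions vary):

        \Psi(x) \;=\; \phi(x) \,+\, \phi_{\mu}(x)\, dx^{\mu} \,+\, \tfrac{1}{2}\, \phi_{\mu\nu}(x)\, dx^{\mu} \wedge dx^{\nu},
        \qquad
        \Psi_{\alpha\beta}(x) \;=\; \bigl( \phi\, \mathbf{1} \,+\, \phi_{\mu} \gamma^{\mu} \,+\, \tfrac{1}{2}\, \phi_{\mu\nu} \gamma^{\mu} \gamma^{\nu} \bigr)_{\alpha\beta}.

    The first index of \Psi_{\alpha\beta} is a spinor index while the second behaves as a flavour index, which is the symmetry the twist identifies with the N=2 R-symmetry.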

    Twisted Superspace on a Lattice

    We propose a new formulation which realizes exact twisted supersymmetry for all the supercharges on a lattice by means of the twisted superspace formalism. We show explicit examples of N=2 twisted supersymmetry invariant BF and Wess-Zumino models in two dimensions. We introduce a mild lattice noncommutativity to preserve the Leibniz rule on the lattice. The formulation is based on the twisted superspace formalism for N=D=2 supersymmetry which was proposed recently. From the consistency condition of the noncommutativity of superspace, we find an unexpected three-dimensional lattice structure which may reduce to a two-dimensional lattice in which the superspace describes semilocally scattered fermions and bosons within a double-size square lattice. Comment: 49 pages, 17 figures; v2: Fig. 2 is replaced
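
    The Leibniz-rule issue that the lattice noncommutativity is introduced to address can be seen from the forward difference operator (a standard observation, stated here for orientation rather than taken from the paper):

        \Delta f(x) \;=\; \frac{f(x+a) - f(x)}{a}
        \quad\Longrightarrow\quad
        \Delta (fg)(x) \;=\; \Delta f(x)\, g(x+a) \,+\, f(x)\, \Delta g(x),

    so the naive product rule \Delta(fg) = (\Delta f)\, g + f\, (\Delta g) fails by terms of order a; the mild noncommutativity of the superspace is the paper's way of making such shifted rules consistent.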

    On the importance of sluggish state memory for learning long term dependency

    The vanishing gradients problem inherent in Simple Recurrent Networks (SRN) trained with back-propagation has led to a significant shift towards the use of Long Short-Term Memory (LSTM) and Echo State Networks (ESN), which overcome this problem through second-order error-carousel schemes or different learning algorithms, respectively. This paper re-opens the case for SRN-based approaches by considering a variant, the Multi-recurrent Network (MRN). We show that memory units embedded within its architecture can ameliorate the vanishing gradient problem by providing variable sensitivity to recent and more historic information through layer- and self-recurrent links with varied weights, forming a so-called sluggish state-based memory. We demonstrate that an MRN, optimised with noise injection, is able to learn the long-term dependency within a complex grammar induction task, significantly outperforming the SRN, NARX and ESN. Analysis of the internal representations of the networks reveals that the sluggish state-based representations of the MRN are best able to latch on to critical temporal dependencies spanning variable time delays, maintaining distinct and stable representations of all underlying grammar states. Surprisingly, the ESN was unable to fully learn the dependency problem, suggesting that the major shift towards this class of models may be premature.
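
    A toy sketch of the sluggish-state idea behind the MRN: several context banks copy the hidden layer back with different self-recurrent strengths, so some banks track recent inputs while others change slowly; the sizes, nonlinearity and alpha values below are illustrative assumptions, and no training is shown:

        import numpy as np

        class SluggishMemoryNet:
            def __init__(self, n_in, n_hidden, alphas, rng):
                self.alphas = np.asarray(alphas)             # one "sluggishness" per bank
                n_ctx = len(alphas) * n_hidden
                self.W_in = rng.normal(0, 0.1, (n_hidden, n_in))
                self.W_ctx = rng.normal(0, 0.1, (n_hidden, n_ctx))
                self.banks = np.zeros((len(alphas), n_hidden))

            def step(self, x):
                h = np.tanh(self.W_in @ x + self.W_ctx @ self.banks.ravel())
                # Each bank blends its old state with the new hidden state at its own rate.
                self.banks = (self.alphas[:, None] * self.banks
                              + (1 - self.alphas[:, None]) * h)
                return h

        rng = np.random.default_rng(2)
        net = SluggishMemoryNet(n_in=3, n_hidden=5, alphas=[0.0, 0.5, 0.9], rng=rng)
        for _ in range(4):
            net.step(rng.normal(size=3))
        print(net.banks)   # fast, medium and slow traces of the hidden state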