
    Fixation of genetic variation and optimization of gene expression: The speed of evolution in isolated lizard populations undergoing Reverse Island Syndrome

    The ecological theory of island biogeography suggests that mainland populations should be more genetically divergent from populations on large, distant islands than from those on small, close islets. Some island populations, however, do not diverge in this gradual way: divergence proceeds more rapidly because the populations undergo a suite of phenotypic changes known jointly as the Island Syndrome. A special case is the Reversed Island Syndrome (RIS), in which populations show drastic phenotypic changes in body shape, skin colouration, age of sexual maturity, aggressiveness, and food intake rates. Populations showing the RIS have been observed on recently emerged islets close to the mainland, which makes them useful models for studying rapid evolutionary change. We investigated the timing and mode of evolution of lizard populations adapted through selection on small islets. For our analyses, we used an ad hoc model system of three populations: wild-type lizards from the mainland and insular lizards from a large island (Capri, Italy), both Podarcis siculus siculus unaffected by the syndrome, and a lizard population from an islet (Scopolo) undergoing the RIS (called P. s. coerulea because of its melanism). The split times of the large (Capri) and small (Scopolo) islands were determined from geological events, such as sea-level rises. To infer molecular evolution, we compared five complete mitochondrial genomes from each population, reconstructed the phylogeography, and estimated the divergence time between island and mainland lizards. We found a lower mitochondrial mutation rate in Scopolo lizards despite the phenotypic changes achieved in approximately 8,000 years. Furthermore, transcriptome analyses showed significant differential gene expression between islet and mainland lizard populations, suggesting a key role for plasticity in these unpredictable environments.

    Transformations in the Scale of Behaviour and the Global Optimisation of Constraints in Adaptive Networks

    The natural energy minimisation behaviour of a dynamical system can be interpreted as a simple optimisation process, finding a locally optimal resolution of problem constraints. In human problem solving, high-dimensional problems are often made much easier by inferring a low-dimensional model of the system in which search is more effective. But that approach seems to require top-down domain knowledge, and is not obviously amenable to the spontaneous energy minimisation behaviour of a natural dynamical system. In this paper we investigate the ability of distributed dynamical systems to improve their constraint resolution ability over time through self-organisation. We use a ‘self-modelling’ Hopfield network with a novel type of associative connection to illustrate how slowly changing relationships between system components can transform the system into a new one that is a low-dimensional caricature of the original. The energy minimisation behaviour of this new system is significantly more effective at globally resolving the original system constraints. The model uses only very simple, fully distributed positive-feedback mechanisms that are relevant to other ‘active linking’ and adaptive networks. We discuss how this neural network model helps us to understand transformations and emergent collective behaviour in various non-neural adaptive networks, such as social, genetic and ecological networks.
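    As a rough illustration of the mechanism described above (a minimal sketch under assumptions, not the paper's actual model), the following combines a fixed Hopfield-style constraint network with a second, slowly learned weight matrix that is updated by a Hebbian rule applied to whatever state the combined system relaxes into; the network size, learning rate, and number of relaxations are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50                                        # number of units (illustrative)

# Fixed, symmetric "problem" constraints between system components.
W_problem = rng.standard_normal((N, N))
W_problem = (W_problem + W_problem.T) / 2
np.fill_diagonal(W_problem, 0)

# Slowly acquired associative connections, initially absent.
W_learned = np.zeros((N, N))

def relax(state, weights, steps=200):
    """Asynchronous energy minimisation: each update can only lower energy."""
    s = state.copy()
    for _ in range(steps):
        i = rng.integers(N)
        s[i] = 1 if weights[i] @ s >= 0 else -1
    return s

def energy(s, weights):
    return -0.5 * s @ weights @ s

alpha = 0.001                                 # slow associative learning rate
for restart in range(500):
    s = rng.choice([-1, 1], size=N)           # random initial condition
    s = relax(s, W_problem + W_learned)       # settle under combined weights
    # Hebbian change on the attractor reached: components that end up
    # aligned become more strongly coupled ("slowly changing relationships").
    W_learned += alpha * np.outer(s, s)
    np.fill_diagonal(W_learned, 0)

# The self-modelled system tends to reach lower energy on the ORIGINAL
# constraints than the unmodified network does from a random restart.
s_plain = relax(rng.choice([-1, 1], size=N), W_problem)
s_self = relax(rng.choice([-1, 1], size=N), W_problem + W_learned)
print(energy(s_plain, W_problem), energy(s_self, W_problem))
```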

    Biologically inspired evolutionary temporal neural circuits

    Biological neural networks have long motivated the creation of new artificial neural networks, in this case a new autonomous temporal neural network system. Among the more challenging problems of temporal neural networks are the design and incorporation of short- and long-term memories, as well as the choice of network topology and training mechanism. In general, delayed copies of network signals can form short-term memory (STM), providing a limited temporal history of events similar to FIR filters, whereas the synaptic connection strengths, together with delayed feedback loops (IIR circuits), can constitute longer-term memory (LTM). This dissertation introduces a new general evolutionary temporal neural network framework (GETnet) based on the automatic design of arbitrary neural networks with STM and LTM. GETnet is a step towards the realization of general intelligent systems that need minimal or no human intervention and can be applied to a broad range of problems. GETnet utilizes nonlinear moving-average/autoregressive nodes and sub-circuits that are trained by enhanced gradient descent and by evolutionary search over architecture, synaptic delay, and synaptic weight spaces. The mixture of Lamarckian and Darwinian evolutionary mechanisms facilitates the Baldwin effect and speeds up the hybrid training. The ability to evolve arbitrary adaptive time-delay connections enables GETnet to find novel answers to many classification and system identification tasks expressed in the general form of desired multidimensional input and output signals. Simulations using the Mackey-Glass chaotic time series and fingerprint perspiration-induced temporal variations demonstrate these capabilities of GETnet.
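    As a rough sketch of the kind of node such a framework might build on (an assumption-laden illustration, not GETnet's actual implementation), the unit below combines FIR-style delayed copies of its input (short-term memory) with delayed feedback of its own output (a longer-term, IIR-style memory); the class name, tanh nonlinearity, and initialisation are assumptions.

```python
import numpy as np

class TimeDelayNode:
    """Toy nonlinear moving-average/autoregressive node.

    Delayed input taps give a limited temporal history (STM, like an FIR
    filter); feedback taps on the node's own past outputs act as a delayed
    feedback loop (LTM, like an IIR filter). Both weight vectors and the
    delay depths could be adapted by gradient descent or evolutionary search.
    """

    def __init__(self, input_delays=3, feedback_delays=2, seed=0):
        rng = np.random.default_rng(seed)
        self.w_in = rng.standard_normal(input_delays) * 0.1
        self.w_fb = rng.standard_normal(feedback_delays) * 0.1
        self.x_hist = np.zeros(input_delays)     # delayed input copies (STM)
        self.y_hist = np.zeros(feedback_delays)  # delayed outputs (LTM loop)

    def step(self, x):
        self.x_hist = np.roll(self.x_hist, 1)
        self.x_hist[0] = x
        y = np.tanh(self.w_in @ self.x_hist + self.w_fb @ self.y_hist)
        self.y_hist = np.roll(self.y_hist, 1)
        self.y_hist[0] = y
        return y

node = TimeDelayNode()
signal = np.sin(np.linspace(0, 10, 100))         # stand-in for a time series
outputs = [node.step(x) for x in signal]
```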

    Associative memory in gene regulation networks

    The pattern of gene expression in the phenotype of an organism is determined in part by the dynamical attractors of the organism’s gene regulation network. Changes to the connections in this network over evolutionary time alter the adult gene expression pattern and hence the fitness of the organism. However, the evolution of structure in gene expression networks (potentially reflecting past selective environments), and its affordances and limitations with respect to enhancing evolvability, are poorly understood in general. In this paper we model the evolution of a gene regulation network in a controlled scenario. We show that selected changes to connections in the regulation network make the currently selected gene expression pattern more robust to environmental variation. Moreover, such changes to connections are necessarily ‘Hebbian’ (‘genes that fire together wire together’): genes whose expression is selected for in the same selective environments become co-regulated. Accordingly, in a manner formally equivalent to well-understood learning behaviour in artificial neural networks, a gene expression network will develop a generalised associative memory of past selected phenotypes. This theoretical framework helps us to better understand the relationship between homeostasis and evolvability (i.e. selection to reduce variability facilitates structured variability), and shows that, in principle, a gene regulation network has the potential to develop ‘recall’ capabilities normally reserved for cognitive systems.
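    The formal analogy can be sketched with a toy model (an illustration under stated assumptions, not the paper's simulation): selection for a target expression pattern is abstracted directly into a Hebbian nudge on the regulatory matrix, and the resulting network is then shown to recall a stored pattern from a partial cue via its developmental dynamics; gene counts, rates, and the tanh dynamics are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_genes = 8
B = rng.standard_normal((n_genes, n_genes)) * 0.01   # regulatory interactions

def develop(B, g0, steps=30, tau=0.2):
    """Iterate the regulatory dynamics to an adult expression pattern."""
    g = g0.copy()
    for _ in range(steps):
        g = (1 - tau) * g + tau * np.tanh(B @ g)
    return g

# Two expression patterns favoured in alternating past selective environments.
targets = [rng.choice([-1.0, 1.0], n_genes) for _ in range(2)]

eta = 0.01
for generation in range(2000):
    target = targets[generation % 2]
    # Selection for the current pattern changes connections in a Hebbian
    # direction: genes selected to be co-expressed become co-regulated
    # (outer product of the selected expression levels).
    B += eta * np.outer(target, target) / n_genes

# Associative recall: develop from a degraded cue of the first pattern and
# check whether the attractor matches the stored phenotype.
cue = targets[0] * (rng.random(n_genes) < 0.6)       # part of the pattern zeroed
recalled = develop(B, cue)
print(np.sign(recalled) == np.sign(targets[0]))
```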

    Lamarck's Revenge: Inheritance of Learned Traits Can Make Robot Evolution Better

    Evolutionary robot systems offer two principal advantages: an advanced way of developing robots through evolutionary optimization, and a special research platform for conducting what-if experiments about evolution. Our study sits at the intersection of these. We investigate the question “What if the 18th-century biologist Lamarck was not completely wrong and individual traits learned during a lifetime could be passed on to offspring through inheritance?” We research this issue through simulations with an evolutionary robot framework in which the morphologies (bodies) and controllers (brains) of robots are evolvable and robots can also improve their controllers through learning during their lifetime. Within this framework, we compare a Lamarckian system, where learned bits of the brain are inheritable, with a Darwinian system, where they are not. Analyzing simulations based on these systems, we obtain new insights about Lamarckian evolution dynamics and the interaction between evolution and learning. Specifically, we show that Lamarckism amplifies the emergence of ‘morphological intelligence’, the ability of a given robot body to acquire a good brain by learning, and we identify the source of this success: ‘newborn’ robots have a higher fitness because their inherited brains match their bodies better than those in a Darwinian system.
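    The comparison being made can be sketched schematically (a toy under assumptions, not the authors' robot framework): in both variants a brain is improved by lifetime learning before fitness is evaluated, but only the Lamarckian variant passes the learned brain on to offspring; the placeholder fitness function and hill-climbing learner are assumptions.

```python
import random

def fitness(body, brain):
    # Placeholder task: a brain is good when it "matches" its body.
    return -sum((b - w) ** 2 for b, w in zip(body, brain))

def lifetime_learning(brain, body, steps=20):
    """Illustrative learner: hill-climb the brain for a fixed body."""
    best = brain
    for _ in range(steps):
        candidate = [w + random.gauss(0, 0.1) for w in best]
        if fitness(body, candidate) > fitness(body, best):
            best = candidate
    return best

def evolve(lamarckian, generations=50, pop_size=20, n=5):
    pop = [([random.uniform(-1, 1) for _ in range(n)],   # body genotype
            [random.uniform(-1, 1) for _ in range(n)])   # brain genotype
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = []
        for body, brain in pop:
            learned = lifetime_learning(brain, body)      # lifetime learning
            f = fitness(body, learned)                    # evaluate after learning
            # Lamarckian: the learned brain is inherited.
            # Darwinian: only the unlearned brain genotype is inherited.
            scored.append((f, body, learned if lamarckian else brain))
        scored.sort(key=lambda t: t[0], reverse=True)
        parents = scored[:pop_size // 2]
        pop = [([g + random.gauss(0, 0.05) for g in body],
                [g + random.gauss(0, 0.05) for g in brain])
               for _, body, brain in parents for _ in range(2)]
    return max(fitness(b, lifetime_learning(br, b)) for b, br in pop)

random.seed(0)
print("Lamarckian:", evolve(True), "Darwinian:", evolve(False))
```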

    A Species-Conserving Genetic Algorithm for Multimodal Optimization

    The problem of multimodal function optimization has been addressed by much research, producing many different search techniques. Niche Genetic Algorithms are one line of work that has attempted to solve this problem. Many Niche Genetic Algorithms use some type of radius, and when multiple optima occur within that radius, these algorithms have difficulty locating them; problems with arbitrarily close optima are harder still. This paper presents a new Niche Genetic Algorithm framework called the Dynamic-radius Species-conserving Genetic Algorithm. The framework extends existing Genetic Algorithm research and enhances an existing Niche Genetic Algorithm in two ways. First, as the name implies, the radius varies during execution: a uniform radius can cause problems if it is not set correctly at initialization, and a dynamic radius compensates for this. Second, the framework does not attempt to locate all of the optima in a single pass: it finds some optima and then uses a tabu list to exclude those areas of the domain from subsequent iterations, combining a fitness-sharing approach with a seed-exclusion approach. This framework addresses many areas of difficulty in current multimodal function optimization research. The research used an experimental methodology: a series of classic benchmark function optimization problems was used to compare the framework to other algorithms representing classic and current Niche Genetic Algorithms. Results show that the new framework does very well at locating optima across a variety of benchmark functions. On functions with arbitrarily close optima, the framework outperforms the other algorithms; on optima that are not arbitrarily close, it does as well as other Niche Genetic Algorithms. The results indicate that varying the radius during execution and using a tabu list help solve function optimization problems for continuous functions with arbitrarily close optima.
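    The ingredients described above, species seeds conserved within a radius, a radius that changes during execution, and a tabu list that excludes previously located optima from later passes, can be sketched roughly as follows (an illustrative toy on a one-dimensional benchmark, not the dissertation's algorithm; the function, thresholds, and schedules are assumptions).

```python
import math
import random

def f(x):
    """Illustrative multimodal benchmark: five equal peaks on [0, 1]."""
    return math.sin(5 * math.pi * x) ** 6

def species_seeds(pop, radius):
    """Scan from fittest to least fit; an individual founds a new species
    unless it lies within `radius` of an already chosen, fitter seed."""
    seeds = []
    for x in sorted(pop, key=f, reverse=True):
        if all(abs(x - s) > radius for s in seeds):
            seeds.append(x)
    return seeds

def shared_fitness(x, tabu, radius):
    """Crude stand-in for fitness sharing / seed exclusion: suppress fitness
    near optima already on the tabu list."""
    return 0.0 if any(abs(x - t) < radius for t in tabu) else f(x)

random.seed(0)
tabu = []
radius = 0.2
for sweep in range(4):                            # several passes, not one
    pop = [random.random() for _ in range(100)]
    for _ in range(60):
        new = []
        for _ in range(len(pop)):                 # binary tournament + mutation
            a, b = random.sample(pop, 2)
            w = a if shared_fitness(a, tabu, radius) >= shared_fitness(b, tabu, radius) else b
            new.append(min(1.0, max(0.0, w + random.gauss(0, 0.02))))
        for s in species_seeds(pop, radius):      # conserve species seeds
            new[random.randrange(len(new))] = s
        pop = new
    for s in species_seeds(pop, radius):          # record newly located optima
        if f(s) > 0.9 and all(abs(s - t) > radius for t in tabu):
            tabu.append(s)
    radius *= 0.7                                 # dynamic radius between passes
print(sorted(round(x, 3) for x in tabu))
```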

    2020 Conference Abstracts: Annual Undergraduate Research Conference at the Interface of Biology and Mathematics

    Schedule and abstract book for the Twelfth Annual Undergraduate Research Conference at the Interface of Biology and Mathematics. Date: October 31 - November 1, 2020. Location: the 2020 conference was conducted remotely due to COVID-19 concerns, using the Sococo platform, which allows personal avatars to move between rooms and sessions, interact in small groups, and join Zoom sessions. Keynote Speaker: Gerardo Chowell, Population Health Sciences, Georgia State Univ. School of Public Health, Atlanta. Featured Speaker: Olivia Prosper, Mathematics, Univ. of Tennessee, Knoxville.