48 research outputs found

    Evolvable hardware: An outlook

    In this paper, we explore the potential of Evolvable Hardware (EHW) for online adaptation in real-time applications. We follow a top-down approach here. We first review existing adaptation and learning techniques and examine their suitability for driving hardware evolution. Then we discuss some research problems whose solution will improve the performance of EHW.
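
    As a rough illustration of the kind of adaptation loop that could drive online hardware evolution, the sketch below runs a (1+1) evolution strategy over a device configuration bitstring. The device model, fitness function, and parameter values are placeholders chosen for illustration, not taken from the paper.

```python
# Minimal sketch of an online evolutionary adaptation loop for a
# reconfigurable device. The "device" is simulated: TARGET and evaluate()
# are hypothetical stand-ins for measured hardware behaviour.
import random

CONFIG_BITS = 64                    # assumed size of the configuration bitstring
MUTATION_RATE = 1.0 / CONFIG_BITS   # roughly one bit flip per offspring
TARGET = [random.randint(0, 1) for _ in range(CONFIG_BITS)]  # placeholder goal


def evaluate(config):
    """Hypothetical fitness: agreement between device behaviour and the target."""
    return sum(c == t for c, t in zip(config, TARGET))


def mutate(config):
    """Flip each configuration bit with a small probability."""
    return [1 - b if random.random() < MUTATION_RATE else b for b in config]


def online_adaptation(generations=500):
    """(1+1) evolution strategy: keep the offspring whenever it is at least as fit."""
    parent = [random.randint(0, 1) for _ in range(CONFIG_BITS)]
    parent_fitness = evaluate(parent)
    for _ in range(generations):
        child = mutate(parent)
        child_fitness = evaluate(child)
        if child_fitness >= parent_fitness:
            parent, parent_fitness = child, child_fitness
    return parent, parent_fitness


if __name__ == "__main__":
    _, fitness = online_adaptation()
    print(f"best fitness: {fitness}/{CONFIG_BITS}")
```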

    Honesty and Deception in Populations of Selfish, Adaptive Individuals

    Biologists have mostly studied under what circumstances honest signaling is stable. Stability, however, is not sufficient to explain the emergence of honest signaling. We study the evolution of honest signaling between selfish, adaptive individuals and observe that honest signaling can emerge through learning. More importantly, honest signaling may emerge in cases where it is not evolutionarily stable. In such cases, honesty and dishonesty co-exist. Furthermore, honest signaling does not necessarily emerge in cases where it is evolutionarily stable. We show that the latter is due to the existence of other, more important equilibria and that the importance of equilibria is related to Pareto-optimality.
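
    As a toy illustration of honest signaling emerging through learning, the sketch below runs a Lewis-style sender/receiver game in which both players adapt by simple reinforcement. This common-interest variant only illustrates the general mechanism; the paper's model of selfish individuals with partially conflicting interests, its equilibria, and the Pareto-optimality analysis are not reproduced here.

```python
# Lewis-style signalling game with Roth-Erev-like "urn" reinforcement.
# All numbers (game size, round count, reward size) are illustrative.
import random

N = 2            # assumed number of world states, signals, and receiver acts
ROUNDS = 20000

# Propensity tables: sender maps states to signals, receiver maps signals to acts.
sender = [[1.0] * N for _ in range(N)]
receiver = [[1.0] * N for _ in range(N)]


def choose(weights):
    """Sample an index with probability proportional to its accumulated propensity."""
    r = random.uniform(0, sum(weights))
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1


successes = 0
for _ in range(ROUNDS):
    state = random.randrange(N)
    signal = choose(sender[state])
    act = choose(receiver[signal])
    if act == state:                      # both players are rewarded on coordination
        sender[state][signal] += 1.0
        receiver[signal][act] += 1.0
        successes += 1

print(f"coordination rate over all rounds: {successes / ROUNDS:.2f}")
```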

    Building a genetic programming framework: The added-value of design patterns


    Substitution Matrix Based Kernel Functions for Protein Secondary Structure Prediction

    Different approaches to using substitution matrices in kernel functions for protein secondary structure prediction (PSSP) with support vector machines are investigated. This work introduces a number of kernel functions that calculate inner products between amino acid sequences based on the entries of a substitution matrix (SM), i.e. a matrix that contains evolutionary information about the substitutability of the different amino acids that make up proteins. The starting point is always the same, i.e. a pseudo inner product (PI) between amino acid sequences making use of an SM. It is shown what conditions an SM should satisfy in order for the PI to make sense, and subsequently it is shown how a substitution distance (SD) based on the PI can be defined. Next, different ways of using both the PI and the SD in kernel functions for support vector machine (SVM) learning are discussed. In a series of experiments the different kernel functions are compared with each other and with other kernel functions that do not make use of an SM. The results show that the information contained in an SM can have a positive influence on the PSSP results, provided that it is employed in the correct way.
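
    The sketch below illustrates the main ingredients described in the abstract: a pseudo inner product (PI) over equal-length residue windows built from a substitution matrix, a substitution distance (SD) derived from the PI, and a Gaussian-style kernel on the SD. The tiny score table is a placeholder rather than a real matrix such as BLOSUM62, and the exact kernel forms used in the paper may differ.

```python
import math

# Placeholder symmetric substitution scores over a reduced alphabet; a real
# application would use a full substitution matrix such as BLOSUM62.
SM = {
    ("A", "A"): 4, ("A", "G"): 0, ("A", "L"): -1,
    ("G", "G"): 6, ("G", "L"): -4,
    ("L", "L"): 4,
}


def score(a, b):
    """Symmetric lookup into the substitution score table."""
    return SM[(a, b)] if (a, b) in SM else SM[(b, a)]


def pseudo_inner_product(x, y):
    """PI: sum of per-position substitution scores for equal-length windows."""
    return sum(score(a, b) for a, b in zip(x, y))


def substitution_distance(x, y):
    """SD induced by the PI (assumes the matrix makes the PI behave like an inner product)."""
    d2 = (pseudo_inner_product(x, x)
          - 2 * pseudo_inner_product(x, y)
          + pseudo_inner_product(y, y))
    return math.sqrt(max(d2, 0.0))


def sd_rbf_kernel(x, y, gamma=0.1):
    """Gaussian-style kernel built on the substitution distance."""
    return math.exp(-gamma * substitution_distance(x, y) ** 2)


if __name__ == "__main__":
    w1, w2 = "AGL", "ALL"
    print(pseudo_inner_product(w1, w2), round(sd_rbf_kernel(w1, w2), 3))
```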