    Program schemes with deep pushdown storage.

    Inspired by recent work of Meduna on deep pushdown automata, we consider the computational power of a class of basic program schemes, TeX, based around assignments, while-loops and non-deterministic guessing, but with access to a deep pushdown stack which, apart from the usual push and pop instructions, also has deep-push instructions that allow elements to be pushed to stack locations deep within the stack. We syntactically define sub-classes of TeX by restricting the occurrences of pops, pushes and deep-pushes, and capture the complexity classes NP and PSPACE. Furthermore, we show that all problems accepted by program schemes of TeX are in EXPTIME.
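
    The deep-push instruction can be illustrated with a small stack wrapper. The sketch below is illustrative only and not taken from the paper; the class name DeepPushdown and the convention that depth 1 denotes the top of the stack are assumptions.

        class DeepPushdown:
            """A pushdown store with a deep-push in addition to the usual
            push and pop (depth 1 = top of stack; assumed convention)."""

            def __init__(self):
                self._items = []            # top of the stack is the end of the list

            def push(self, symbol):
                self._items.append(symbol)

            def pop(self):
                return self._items.pop()

            def deep_push(self, depth, symbol):
                # Insert `symbol` so that it becomes the depth-th element
                # from the top, leaving the symbols above it untouched.
                if not 1 <= depth <= len(self._items) + 1:
                    raise IndexError("depth out of range")
                self._items.insert(len(self._items) - depth + 1, symbol)

        s = DeepPushdown()
        s.push('a'); s.push('b'); s.push('c')    # top..bottom: c b a
        s.deep_push(3, 'x')                      # top..bottom: c b x a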

    On-line genetic programming of multi-hop broadcast protocols in ad hoc networks

    Among ad hoc communication protocols, the most challenging are those designed to disseminate messages to all, or most, of the nodes in the system. By their nature, such protocols use significant network resources, as the communication must involve a large fraction of the network nodes. The network load can be reduced by exploiting the available local broadcast medium (the radio channel), but selecting the set of nodes that should participate in the dissemination process is not trivial. Previous attempts delivered algorithms that provide reasonable performance and reliability, but mostly for specific cases of ad hoc networks. In this paper a new way of tackling the broadcast problem is presented that makes no assumptions about the nature of the underlying network. Instead of using hand-optimized protocols, we propose a framework for a self-optimizing and self-managing system inspired by natural selection and evolution. A generic distributed feed-forward performance evaluation criterion based on natural selection is presented, along with an implementation of a virtual machine and a corresponding language for Genetic Programming to be used in tandem with the natural selection process.
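
    An evaluation criterion of this kind typically trades coverage against redundant transmissions. The sketch below is a hypothetical, centralized stand-in for the distributed criterion described in the abstract; the toy gossip protocol, the function names and the weighting beta are assumptions.

        import random

        def simulate_gossip(adjacency, source, p_forward):
            """Toy broadcast protocol: every node that receives the message
            rebroadcasts it with probability p_forward (stand-in for an
            evolved protocol)."""
            received = {source}
            frontier = [source]
            transmissions = 0
            while frontier:
                node = frontier.pop()
                if node == source or random.random() < p_forward:
                    transmissions += 1
                    for neighbour in adjacency[node]:
                        if neighbour not in received:
                            received.add(neighbour)
                            frontier.append(neighbour)
            return received, transmissions

        def fitness(adjacency, p_forward, trials=20, beta=0.05):
            """Reward the fraction of nodes reached, penalize the number of
            transmissions (assumed weighting)."""
            nodes = list(adjacency)
            total = 0.0
            for _ in range(trials):
                reached, tx = simulate_gossip(adjacency, random.choice(nodes), p_forward)
                total += len(reached) / len(nodes) - beta * tx / len(nodes)
            return total / trials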

    Evolving Recursive Programs using Non-recursive Scaffolding

    Genetic programming has proven capable of evolving solutions to a wide variety of problems. However, the successes have largely been with programs without iteration or recursion; evolving recursive programs has turned out to be particularly challenging. The main obstacle seems to be that recursive programs are particularly fragile under the application of search operators: a small change to a correct recursive program generally produces a completely wrong program. In this paper, we present a simple and general method that allows us to pass back and forth between a recursive program and an associated non-recursive program. Finding a recursive program can thus be reduced to evolving non-recursive programs and then converting the best non-recursive program found into the associated recursive program. This avoids the fragility problem above, as evolution never searches the space of recursive programs. We present promising experimental results on a test-bed of recursive problems.
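
    One concrete way to realize such a back-and-forth mapping is to replace the recursive call with a lookup into a table of known input/output pairs (the scaffolding), evolve the non-recursive body, and then substitute a self-call back in. The sketch below illustrates this idea for factorial; it is a hypothetical reconstruction, not the paper's exact construction.

        def scaffolded_factorial(n, scaffold):
            """Non-recursive variant: the 'recursive call' is a table lookup,
            so search operators never have to get a self-call right."""
            if n <= 1:
                return 1
            return n * scaffold[n - 1]

        def recursive_factorial(n):
            """The associated recursive program: the lookup becomes a self-call."""
            if n <= 1:
                return 1
            return n * recursive_factorial(n - 1)

        # The scaffold can be built from training examples, e.g. {0: 1, 1: 1, 2: 2, 3: 6}.
        scaffold = {k: recursive_factorial(k) for k in range(10)}
        assert scaffolded_factorial(5, scaffold) == recursive_factorial(5) == 120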

    MTGP: Combining Metamorphic Testing and Genetic Programming

    Genetic programming is an evolutionary approach known for its performance in program synthesis. However, it is not yet mature enough for practical use in real-world software development, since many training cases are usually required to generate programs that generalize to unseen test cases. As the training cases have to be expensively hand-labeled by the user in practice, we need an approach that checks program behavior with a smaller number of training cases. Metamorphic testing needs no labeled input/output examples. Instead, the program is executed multiple times, first on a given (randomly generated) input and then on related inputs, to check whether certain user-defined relations between the observed outputs hold. In this work, we suggest MTGP, which combines metamorphic testing and genetic programming, and study its performance and the generalizability of the generated programs. Further, we analyze how generalizability depends on the number of given labeled training cases. We find that combining metamorphic testing with labeled training cases leads to a higher generalization rate than using labeled training cases alone in almost all studied configurations. Consequently, we recommend that researchers use metamorphic testing in their systems if labeling the training data is expensive.
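
    The core of metamorphic testing is to check relations between outputs of related runs instead of comparing against labels. Below is a hedged sketch for a sorting synthesis task; the chosen relations (permutation invariance, length preservation) and function names are illustrative assumptions, not MTGP's actual relation set.

        import random

        def metamorphic_score(program, num_inputs=50):
            """Fraction of randomly generated inputs on which user-defined
            metamorphic relations hold (illustrative relations for sorting)."""
            passed = 0
            for _ in range(num_inputs):
                xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
                out = program(xs)
                shuffled = xs[:]
                random.shuffle(shuffled)
                # MR1: permuting the input must not change the output.
                # MR2: the output must have the same length as the input.
                if program(shuffled) == out and len(out) == len(xs):
                    passed += 1
            return passed / num_inputs

        # A combined fitness might blend labeled cases with metamorphic checks, e.g.
        # fitness = w * labeled_accuracy(program, cases) + (1 - w) * metamorphic_score(program)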

    Memory with memory in genetic programming

    We introduce Memory with Memory Genetic Programming (MwM-GP), which uses soft assignments and soft return operations. Instead of having a new value completely overwrite the old value of a register or memory cell, soft assignments combine the two values. Similarly, in soft return operations the value of a function node is a blend of the result of the current calculation and previously returned results. In extensive empirical tests, MwM-GP almost always does as well as traditional GP, while significantly outperforming it in several cases. MwM-GP also tends to be far more consistent than traditional GP. The data suggest that MwM-GP works by successively refining an approximate solution to the target problem and that it is much less likely to contain truly ineffective code. MwM-GP can continue to improve over time, but it is less likely to reach the sort of exact solution that one might find with traditional GP.
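
    The soft assignment at the heart of MwM-GP can be written as a blend of the old and new values. The sketch below is a minimal illustration; the blend parameter name gamma and its value are assumptions.

        def soft_assign(old_value, new_value, gamma=0.1):
            """Soft assignment: instead of overwriting, move the stored value
            a fraction gamma of the way toward the newly computed value."""
            return (1.0 - gamma) * old_value + gamma * new_value

        # Example: a register repeatedly 'assigned' the value 10.0 drifts toward
        # it rather than jumping there, so earlier state is never fully discarded.
        register = 0.0
        for _ in range(5):
            register = soft_assign(register, 10.0)
        print(register)   # about 4.1 after five soft assignments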