
    PonyGE2: Grammatical Evolution in Python

    Grammatical Evolution (GE) is a population-based evolutionary algorithm in which a formal grammar is used in the genotype-to-phenotype mapping process. PonyGE2 is an open-source implementation of GE in Python, developed at UCD's Natural Computing Research and Applications group. It is intended as an advertisement and a starting point for those new to GE, a reference for students and researchers, a rapid-prototyping medium for our own experiments, and a Python workout. As well as providing the characteristic genotype-to-phenotype mapping of GE, a search algorithm engine is also provided. A number of sample problems and tutorials on how to use and adapt PonyGE2 have been developed. Comment: 8 pages, 4 figures, submitted to the 2017 GECCO Workshop on Evolutionary Computation Software Systems (EvoSoft).
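    As a minimal illustration of the genotype-to-phenotype mapping that GE performs, the sketch below expands a toy grammar using integer codons; the grammar, codon values, and function name are assumptions for illustration, not PonyGE2's actual API.

```python
# Minimal sketch of a GE genotype-to-phenotype mapping; the toy grammar and the
# codon values are illustrative assumptions, not PonyGE2's actual implementation.
GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"], ["x"], ["1"]],
    "<op>":   [["+"], ["*"]],
}

def map_genotype(genome, start="<expr>", max_wraps=2):
    """Repeatedly expand the leftmost non-terminal, choosing the production
    rule indexed by the next codon modulo the number of available rules."""
    phenotype, used, wraps = [start], 0, 0
    while any(sym in GRAMMAR for sym in phenotype):
        if used == len(genome):          # ran out of codons: wrap around
            used, wraps = 0, wraps + 1
            if wraps > max_wraps:
                return None              # mapping failed
        i = next(j for j, s in enumerate(phenotype) if s in GRAMMAR)
        rules = GRAMMAR[phenotype[i]]
        phenotype[i:i + 1] = rules[genome[used] % len(rules)]
        used += 1
    return "".join(phenotype)

print(map_genotype([0, 1, 1, 2]))        # -> "x*1"
```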

    A New Design Method Framework for Open Origami Design Problems

    With the development of computer science and manufacturing techniques, modern origami is no longer used only for making artistic shapes, as its traditional counterpart was many centuries ago. Instead, the outstanding light weight and high flexibility of origami structures have expanded their engineering applications in aerospace, medical devices, and architecture. To support the automatic design of more complex modern origami structures, several computational origami design methods have been established. However, these methods still focus on the problem of determining a crease pattern that folds into an exact, pre-determined shape; they apply deductive logic and work for only one type of topological origami structure. To drop the topological constraints on the shapes, this dissertation presents research on the development and application of abductive evolutionary design methods to open origami design problems, which ask for designs that achieve geometric and functional requirements rather than an exact shape. This type of open origami design problem has no formal computational solution yet. Since the open origami design problem requires searching for solutions among arbitrary candidates without fixing a certain topological formation, it is NP-complete in computational complexity. Therefore, this research selects the genetic algorithm (GA) and one of its variations, the computational evolutionary embryogeny (CEE), to solve origami problems. The dissertation makes two major contributions. One is a GA-based abstract design method framework for open origami design problems. The other is a geometric representation of origami designs that directs the definition and mapping of their genetic and physical representations. This research introduces two novel geometric representations: "ice-cracking" and the pixelated multicellular representation (PMR). The proposed design methods and the adapted evolutionary operators have been tested on two open origami design problems: making flat-foldable shapes with a desired profile area and rigid-foldable 3D water containers with a desired volume. The results show that the proposed methods are widely applicable and highly effective in solving open origami design problems.
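    As a rough illustration of the evolutionary-design idea, the sketch below runs a toy GA over a pixelated grid representation, loosely inspired by the PMR, evolving candidates toward a desired profile area; the grid size, fitness function, and operators are illustrative assumptions rather than the dissertation's actual method.

```python
# Toy GA over a pixelated (grid) candidate representation, loosely inspired by PMR;
# grid size, target area, fitness, and operators are illustrative assumptions only.
import random

GRID = 8 * 8          # each candidate is a flat list of 0/1 "cells"
TARGET_AREA = 24      # desired profile area, measured in active cells

def fitness(ind):
    return -abs(sum(ind) - TARGET_AREA)   # closer to the target area is better

def evolve(pop_size=30, generations=50, mut_rate=0.02):
    pop = [[random.randint(0, 1) for _ in range(GRID)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                     # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, GRID)
            child = a[:cut] + b[cut:]                      # one-point crossover
            child = [1 - g if random.random() < mut_rate else g for g in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(sum(best), fitness(best))   # active-cell count vs. the desired profile area
```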

    Questions related to Bitcoin and other Informational Money

    A collection of questions about Bitcoin and its hypothetical relatives Bitguilder and Bitpenny is formulated. These questions concern technical issues about protocols, security, the formalization of informational monies in various contexts, and forms of use and misuse. Some questions are formulated in the more general setting of informational monies and near-monies. We also formulate questions about legal, psychological, and ethical aspects of informational money. Finally, we formulate a number of questions concerning the economic merits of and outlooks for Bitcoin. Comment: 31 pages. In v2 the section on patterns of use and misuse has been improved and expanded with so-called contaminations. Other small improvements were made and 13 additional references have been included.

    Computational intelligence based architecture for cognitive agents

    We discuss some limitations of reflexive agents to motivate the need to develop cognitive agents, and we propose a hierarchical, layered architecture for cognitive agents. Our examples often involve the discussion of cognitive agents in highway traffic models. A cognitive agent is an agent capable of performing cognitive acts, i.e. a sequence of the following activities: "Perceiving" information in the environment and provided by other agents, "Reasoning" about this information using existing knowledge, "Judging" the obtained information using existing knowledge, "Responding" to other cognitive agents or to the external environment as required, and "Learning", i.e. changing (and, hopefully, augmenting) the existing knowledge if the newly acquired information allows it. We describe how computational intelligence techniques (e.g., fuzzy logic, neural networks, genetic algorithms, etc.) make it possible to mimic, to a certain extent, the cognitive acts performed by human beings. The order in which the cognitive actions take place is important, and so is the order in which the various computational intelligence techniques are applied. We believe that a hierarchical, layered model should be defined for generic cognitive agents, in a style akin to the hierarchical OSI 7-layer model used in data communication. We outline such a reference model in broad terms.
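    A minimal sketch of the perceive-reason-judge-respond-learn cycle described above is given below; the class, method names, and toy knowledge representation are hypothetical illustrations, not the paper's architecture.

```python
# Hypothetical sketch of the cognitive-act cycle; names and the knowledge store
# are illustrative assumptions, not the reference model proposed in the paper.
class CognitiveAgent:
    def __init__(self):
        self.knowledge = {}                      # existing knowledge base

    def perceive(self, environment):
        # gather information from the environment (and, conceptually, other agents)
        return environment.get("observations", [])

    def reason(self, observations):
        # relate observations to existing knowledge
        return [(obs, self.knowledge.get(obs, "unknown")) for obs in observations]

    def judge(self, inferences):
        # keep only the information the agent can already interpret
        return [item for item in inferences if item[1] != "unknown"]

    def respond(self, judged):
        # act on the judged information
        return {"actions": ["act_on:" + obs for obs, _ in judged]}

    def learn(self, observations):
        # augment knowledge with newly acquired information
        for obs in observations:
            self.knowledge.setdefault(obs, "seen")

    def cognitive_act(self, environment):
        # the order of steps matters: perceive -> reason -> judge -> respond -> learn
        obs = self.perceive(environment)
        judged = self.judge(self.reason(obs))
        response = self.respond(judged)
        self.learn(obs)
        return response

agent = CognitiveAgent()
print(agent.cognitive_act({"observations": ["slow_traffic_ahead"]}))  # nothing known yet
print(agent.cognitive_act({"observations": ["slow_traffic_ahead"]}))  # now acted upon
```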

    Diversity Control in Evolutionary Computation using Asynchronous Dual-Populations with Search Space Partitioning

    Diversity control is vital for effective global optimization using evolutionary computation (EC) techniques. This paper classifies the various diversity control policies in the EC literature. Many studies have attributed the high risk of premature convergence to sub-optimal solutions to the poor exploration capabilities that result from diversity collapse. Likewise, the excessive cost of converging to the optimal solution has been linked to weak exploitation capabilities, which are necessary to focus the search. To address this exploration-exploitation trade-off, this paper deploys diversity control policies that ensure sustained exploration of the search space without compromising effective exploitation of its promising regions. First, a dual-pool EC algorithm that facilitates a temporal evolution-diversification strategy is proposed, and a quasi-random heuristic initialisation based on search space partitioning (SSP) is introduced to ensure uniform sampling of the initial search space. Second, for diversity measurement, a robust convergence detection mechanism that combines a spatial diversity measure with a population evolvability measure is utilised. It was found that the proposed algorithm needed a pool size of only 50 samples to converge to the optimal solutions of a variety of global optimization benchmarks. Overall, the proposed algorithm yields a 33.34% reduction in the cost incurred by a standard EC algorithm. The outcome demonstrates the efficacy of effective diversity control in solving complex global optimization landscapes.
    Keywords: Diversity, exploration-exploitation trade-off, evolutionary algorithms, heuristic initialisation, taxonomy
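    As an illustration of quasi-random initialisation over a partitioned search space, the sketch below uses a Latin-hypercube-style stratified sampler as a stand-in for the paper's SSP scheme; the bounds and pool size are assumptions for illustration.

```python
# Latin-hypercube-style stand-in for SSP initialisation: each dimension's range is
# split into pool_size equal sub-intervals and every individual draws one coordinate
# from a distinct sub-interval, so the initial pool covers the space uniformly.
# Bounds and pool size are illustrative assumptions, not the paper's settings.
import random

def ssp_initialise(pool_size, bounds):
    dim = len(bounds)
    # for each dimension, a shuffled assignment of individuals to sub-intervals
    strata = [random.sample(range(pool_size), pool_size) for _ in range(dim)]
    pool = []
    for i in range(pool_size):
        individual = []
        for d, (lo, hi) in enumerate(bounds):
            width = (hi - lo) / pool_size
            cell = strata[d][i]
            individual.append(lo + (cell + random.random()) * width)
        pool.append(individual)
    return pool

pool = ssp_initialise(pool_size=50, bounds=[(-5.12, 5.12)] * 2)
print(len(pool), pool[0])
```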

    Cloud-based Bioinformatics Framework for Next-Generation Sequencing Data

    Huang L. Cloud-based Bioinformatics Framework for Next-Generation Sequencing Data. Bielefeld: Universität Bielefeld; 2019. The increasing amount of next-generation sequencing data poses a fundamental challenge for large-scale genomic analytics. Storing and processing large amounts of sequencing data requires considerable hardware resources and efficient software that can fully utilize these resources. Nowadays, both industrial enterprises and non-profit institutes provide robust, easy-to-access cloud services for studies in the life sciences. To facilitate genomic data analyses on such powerful computing resources, distributed bioinformatics tools are needed. However, most existing tools have low scalability on distributed computing clouds. Thus, in this thesis, I developed a cloud-based bioinformatics framework that mainly addresses two computational challenges: (i) the run-time-intensive sequence mapping process and (ii) the memory-intensive de novo genome assembly process. For sequence mapping, I natively implemented an Apache Spark based distributed sequence mapping tool called Sparkhit. It uses the q-gram filter and the pigeonhole principle to accelerate the fragment recruitment and short-read mapping processes. These algorithms are implemented in the Spark extended MapReduce model. Sparkhit runs 92–157 times faster than MetaSpark on metagenomic fragment recruitment and 18–32 times faster than Crossbow on data pre-processing. For de novo genome assembly, I invented a new data structure called the Reflexible Distributed K-mer (RDK) and natively implemented a distributed genome assembler called Reflexiv. Reflexiv is built on top of the Apache Spark platform, uses Spark Resilient Distributed Datasets (RDDs) to distribute large numbers of k-mers across the cluster, and assembles the genome in a recursive way. As a result, Reflexiv runs 8-17 times faster than the Ray assembler and 5-18 times faster than the ABySS assembler on clusters deployed in the de.NBI cloud. In addition, I have incorporated a variety of analytical methods into the framework and developed a tool wrapper to distribute external tools and Docker containers on the Spark cluster. As a large-scale genomic use case, my framework processed 100 terabytes of data across four genomic projects on the Amazon cloud in 21 hours. Furthermore, the application on the entire Human Microbiome Project shotgun sequencing data was completed in 2 hours, demonstrating an approach for easily associating large amounts of public datasets with reference data. Thus, my work contributes to the interdisciplinary research of life science and distributed cloud computing by improving existing methods with a new data structure, new algorithms, and robust distributed implementations.
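    As a toy illustration of distributing k-mers across a Spark cluster with RDDs (not Reflexiv's RDK data structure or its assembly algorithm), a minimal PySpark k-mer count might look like the sketch below; the read data and k value are assumptions.

```python
# Toy PySpark sketch: split reads into k-mers, distribute them as an RDD, and count
# occurrences with MapReduce-style operations. Reads and k are illustrative only.
from pyspark import SparkContext

def kmers(read, k):
    return [read[i:i + k] for i in range(len(read) - k + 1)]

sc = SparkContext(appName="kmer-count-sketch")
reads = sc.parallelize(["ACGTACGTAC", "GTACGTACGA"])          # toy reads
counts = (reads.flatMap(lambda r: kmers(r, k=5))              # emit k-mers
                .map(lambda km: (km, 1))                      # key each k-mer
                .reduceByKey(lambda a, b: a + b))             # sum per k-mer
print(counts.take(5))
sc.stop()
```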