
    Simplification of genetic programs: a literature survey

    Genetic programming (GP), a widely used evolutionary computing technique, suffers from bloat: the problem of excessive growth in individuals’ sizes. As a result, its ability to explore complex search spaces efficiently is reduced, the resulting solutions are less robust and generalisable, and models that contain bloat are difficult to understand and explain. This phenomenon is well researched, primarily from the angle of controlling bloat; our focus in this paper is instead to review the literature from an explainability point of view, looking at how simplification can make GP models more explainable by reducing their size. Simplification is a code-editing technique whose primary purpose is to make GP models more explainable, though it can offer bloat control as an additional benefit when implemented and applied with caution. Researchers have proposed several simplification techniques and adopted various strategies to implement them. We organise the literature along multiple axes to identify the relative strengths and weaknesses of simplification techniques, as well as emerging trends and areas for future exploration. We highlight design and integration challenges and propose several avenues for research. One is to treat simplification as a standalone operator rather than as an extension of the standard crossover or mutation operators; its role is then clearly complementary to the other GP operators, and it can be integrated as an optional feature into an existing GP setup. Another proposed avenue is to explore the under-utilisation of complexity measures in simplification: so far, size is the most discussed measure, with only two pieces of prior work pointing out the benefits of using time as a measure when controlling bloat.
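
    The survey does not prescribe a particular implementation, but the idea of simplification as a standalone operator can be illustrated with a small sketch. The Python sketch below assumes a hypothetical tuple-based tree encoding and applies only constant folding and a few algebraic identities; a real simplification operator would use the host GP system's own representation and a richer rule set.

        # Minimal sketch of a standalone simplification operator for GP trees.
        # Assumption (not from the survey): trees are nested tuples such as
        # ('*', ('+', 'x', 0), ('+', 2, 3)); leaves are variable names or numbers.
        import operator

        OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

        def simplify(tree):
            """Recursively apply constant folding and simple algebraic identities."""
            if not isinstance(tree, tuple):              # leaf: variable or constant
                return tree
            op, *args = tree
            a, b = (simplify(arg) for arg in args)       # simplify children first
            if isinstance(a, (int, float)) and isinstance(b, (int, float)):
                return OPS[op](a, b)                     # constant folding
            if op == '+' and b == 0: return a            # x + 0 -> x
            if op == '+' and a == 0: return b            # 0 + x -> x
            if op == '*' and b == 1: return a            # x * 1 -> x
            if op == '*' and a == 1: return b            # 1 * x -> x
            if op == '*' and (a == 0 or b == 0): return 0  # x * 0 -> 0
            return (op, a, b)

        # ('*', ('+', 'x', 0), ('+', 2, 3)) simplifies to ('*', 'x', 5)
        print(simplify(('*', ('+', 'x', 0), ('+', 2, 3))))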

    On-the-fly simplification of genetic programming models

    The last decade has seen remarkable performance improvements in deep learning. However, the black-box nature of this approach makes it difficult to explain the generated models. In some fields, such as psychology and neuroscience, this limited explainability and interpretability is an important issue. Approaches such as genetic programming are well positioned to take the lead in these fields because of their inherently white-box nature. Genetic programming, inspired by the Darwinian theory of evolution, is a population-based search technique capable of intelligently exploring a high-dimensional search space and discovering multiple solutions. However, it is prone to generating very large solutions, a phenomenon often called “bloat”, and bloated solutions are not easily understandable. In this paper, we propose two techniques for simplifying the generated models. Both techniques are tested by generating models for a well-known psychology experiment, and their validity is further tested by applying them to a symbolic regression problem. Several population dynamics are studied to make sure that these techniques do not compromise diversity, an important factor in finding better solutions. The results indicate that the two techniques can be applied both independently and simultaneously, and that they find solutions on par with those generated by the standard GP algorithm but with significantly reduced program size. There was no loss in diversity and no reduction in overall fitness; in fact, in some experiments, the two techniques even improved fitness.
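
    The paper's two simplification techniques are not reproduced here, but the kind of bookkeeping used to check that on-the-fly simplification preserves diversity and fitness can be sketched as follows; all names are illustrative, and `fitness_fn` and `size_fn` stand for whatever the host GP system provides.

        # Sketch of the statistics used to check that on-the-fly simplification
        # does not hurt diversity or fitness; fitness_fn and size_fn are placeholders.
        import statistics

        def population_stats(population, fitness_fn, size_fn):
            """Mean size, mean fitness and a crude structural-diversity ratio."""
            sizes = [size_fn(ind) for ind in population]
            fits = [fitness_fn(ind) for ind in population]
            distinct = len({repr(ind) for ind in population})
            return {
                'mean_size': statistics.mean(sizes),
                'mean_fitness': statistics.mean(fits),
                'diversity': distinct / len(population),
            }

        # Inside a generational loop one would, for example, simplify offspring and log:
        #   offspring = [maybe_simplify(child) for child in offspring]
        #   history.append(population_stats(offspring, fitness, tree_size))

        # Tiny demonstration with toy individuals (strings) and toy metrics.
        toy_population = ['x+1', 'x+1', '(x*1)+1', 'x*x']
        print(population_stats(toy_population, fitness_fn=len, size_fn=len))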

    Darwinian Data Structure Selection

    Data structure selection and tuning is laborious but can vastly improve an application's performance and memory footprint. Some data structures share a common interface and enjoy multiple implementations. We call them Darwinian Data Structures (DDS), since we can subject their implementations to survival of the fittest. We introduce ARTEMIS, a multi-objective, cloud-based search-based optimisation framework that automatically finds optimal, tuned DDS modulo a test suite, then changes an application to use that DDS. ARTEMIS achieves substantial performance improvements for every project in 5 Java projects from the DaCapo benchmark, 8 popular projects and 30 uniformly sampled projects from GitHub. For execution time, CPU usage, and memory consumption, ARTEMIS finds at least one solution that improves all measures for 86% (37/43) of the projects. The median improvement across the best solutions is 4.8%, 10.1%, and 5.1% for runtime, memory, and CPU usage respectively. These aggregate results understate ARTEMIS's potential impact. Some of the benchmarks it improves are libraries or utility functions; two examples are gson, a ubiquitous Java serialization framework, and xalan, Apache's XML transformation tool. ARTEMIS improves gson by 16.5%, 1%, and 2.2% for memory, runtime, and CPU; ARTEMIS improves xalan's memory consumption by 23.5%. Every client of these projects will benefit from these performance improvements.
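
    ARTEMIS itself targets Java code bases, but the underlying idea, that implementations sharing an interface can be swapped and measured against a workload so the fittest one is kept, can be sketched in a few lines of Python; the candidate containers and the workload below are purely illustrative.

        # Illustrative sketch, not ARTEMIS: time interchangeable container classes
        # on a FIFO-heavy workload and keep the fittest one.
        import time
        from collections import deque

        CANDIDATES = {'list': list, 'deque': deque}

        def fifo_workload(container_cls, n=20_000):
            """Append n items, then repeatedly remove from the front."""
            c = container_cls()
            for i in range(n):
                c.append(i)
            while c:
                c.pop(0) if isinstance(c, list) else c.popleft()

        def fittest(candidates, workload):
            timings = {}
            for name, cls in candidates.items():
                start = time.perf_counter()
                workload(cls)
                timings[name] = time.perf_counter() - start
            return min(timings, key=timings.get), timings

        best, timings = fittest(CANDIDATES, fifo_workload)
        print(best, timings)   # deque is expected to win on this workload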

    GP Representation Space Reduction Using a Tiered Search Scheme

    The size and complexity of a GP representation space is defined by the set of functions and terminals used, the arity of those functions, and the maximal depth of candidate solution trees in the space. Practice has shown that some means of reducing the size of the space or biasing the search must be provided. Adaptable Constrained Genetic Programming (ACGP) can discover beneficial substructures and probabilistically bias the search to promote the use of these substructures. ACGP has two operating modes: a more efficient low-granularity mode (1st-order heuristics) and a less efficient higher-granularity mode (2nd-order heuristics). Both operating modes produce probabilistic models, or heuristics, that bias the search for a solution to the problem at hand. The higher-granularity mode should produce better models and thus improve GP performance, but in practice this does not always happen. This research analyzes the two modes, identifies problems and circumstances where the higher-granularity search should be advantageous but is not, and then proposes a new methodology that divides the ACGP search into two tiers. The first-tier search exploits the computational efficiency of 1st-order ACGP and builds a low-granularity probabilistic model. This initial model is then used to condition the higher-granularity search. The combined search scheme results in better solution fitness scores and lower computational time compared to a standard GP application or either mode of ACGP alone.
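
    The difference between 1st- and 2nd-order heuristics can be illustrated with a small sketch: an unconditional distribution over symbols versus one conditioned on the parent symbol and argument slot, with a fallback from the finer model to the coarser one. The probabilities and the fallback rule below are made up for illustration and are not ACGP's actual heuristic-update machinery.

        # Illustrative 1st- vs 2nd-order probabilistic bias over GP symbols.
        import random

        # 1st-order heuristics: one distribution over symbols, independent of context.
        first_order = {'+': 0.4, '-': 0.1, '*': 0.4, 'sin': 0.1}

        # 2nd-order heuristics: distributions conditioned on (parent symbol, child slot).
        second_order = {
            ('*', 0): {'+': 0.6, '-': 0.1, '*': 0.2, 'sin': 0.1},
            ('*', 1): {'+': 0.1, '-': 0.1, '*': 0.1, 'sin': 0.7},
        }

        def sample(dist):
            symbols, weights = zip(*dist.items())
            return random.choices(symbols, weights=weights)[0]

        def pick_symbol(parent=None, slot=None):
            """Use the finer-grained model when a matching context entry exists."""
            key = (parent, slot)
            if key in second_order:
                return sample(second_order[key])   # 2nd-order bias
            return sample(first_order)             # fall back to 1st-order bias

        print(pick_symbol(), pick_symbol(parent='*', slot=1))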

    A foundation for synthesising programming language semantics

    Programming or scripting languages used in real-world systems are seldom designed with a formal semantics in mind from the outset. Therefore, the first step in developing well-founded analysis tools for these systems is to reverse-engineer a formal semantics, which can take months or years of effort. Could we automate this process, at least partially? Though desirable, automatically reverse-engineering semantics rules from an implementation is very challenging, as found by Krishnamurthi, Lerner and Elberty. They propose automatically learning desugaring translation rules, mapping the language whose semantics we seek to a simplified, core version whose semantics are much easier to write. The present thesis contains an analysis of their challenge, as well as the first steps towards a solution. Scaling such methods with the size of the language is very difficult due to state-space explosion, so this thesis proposes an incremental approach to learning the translation rules. I present a formalisation that both clarifies the informal description of the challenge by Krishnamurthi et al. and re-formulates the problem, shifting the focus to the conditions for incremental learning. The central definition of the new formalisation is the desugaring extension problem, i.e. extending a set of established translation rules by synthesising new ones. In a synthesis algorithm, the choice of search space is important and non-trivial, as it needs to strike a good balance between expressiveness and efficiency. The rest of the thesis focuses on defining search spaces for translation rules via typing rules. Two prerequisites are required for comparing search spaces. The first is a series of benchmarks: a set of source and target languages equipped with intended translation rules between them. The second is an enumerative synthesis algorithm for efficiently enumerating typed programs. I show how algebraic enumeration techniques can be applied to enumerating well-typed translation rules, and discuss the properties expected from a type system if typed programs are to be efficiently enumerable. The thesis presents and empirically evaluates two search spaces. A baseline search space yields the first practical solution to the challenge. The second search space is based on a natural heuristic for translation rules that limits the usage of variables so that each is used exactly once. I present a linear type system designed to efficiently enumerate translation rules in which this heuristic is enforced. Through informal analysis and empirical comparison to the baseline, I then show that using linear types can speed up the synthesis of translation rules by an order of magnitude.
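
    As a rough illustration of the enumerative ingredient only, the sketch below enumerates well-typed terms of a toy typed grammar by size; the constructor signatures are hypothetical stand-ins for the thesis' source and target languages, and no linear-typing optimisation is shown.

        # Toy enumeration of well-typed terms by size (hypothetical grammar).
        from itertools import product

        SIGNATURES = {                      # constructor -> (argument types, result type)
            'zero':   ([], 'Int'),
            'succ':   (['Int'], 'Int'),
            'isZero': (['Int'], 'Bool'),
            'if':     (['Bool', 'Int', 'Int'], 'Int'),
        }

        def compositions(total, parts):
            """All ways of writing `total` as an ordered sum of `parts` positive integers."""
            if parts == 1:
                if total >= 1:
                    yield (total,)
                return
            for first in range(1, total - parts + 2):
                for rest in compositions(total - first, parts - 1):
                    yield (first, *rest)

        def terms(ty, size):
            """Yield every term of type `ty` built from exactly `size` constructors."""
            for name, (arg_tys, res_ty) in SIGNATURES.items():
                if res_ty != ty:
                    continue
                if not arg_tys:
                    if size == 1:
                        yield (name,)
                    continue
                for split in compositions(size - 1, len(arg_tys)):
                    for args in product(*(terms(t, s) for t, s in zip(arg_tys, split))):
                        yield (name, *args)

        print(list(terms('Int', 3)))        # -> [('succ', ('succ', ('zero',)))]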

    Smart Sensing: Selection, Prediction and Monitoring

    A sensor is a device used to detect physical parameters of interest, such as temperature, pressure, or strain, performing the so-called sensing process. Such devices have been widely adopted in fields including aeronautics, automotive, security, logistics, and health care. The essential difference between a smart sensor and a standard sensor is its intelligence: smart sensors can capture and elaborate data from the environment while communicating and interacting with other systems, in order to make predictions and find intelligent solutions based on the application's needs. The first part of this thesis focuses on the problem of sensor selection in the context of virtual sensing of temperature in indoor environments, a topic of paramount importance because it increases the accuracy of the predictive models employed in the following phases by providing more informative data. In particular, virtual sensing refers to the process of estimating or predicting physical parameters without relying on physical sensors, using computational algorithms and predictive models to gather and analyze data for accurate predictions. We analyze the literature and propose and evaluate methodologies and solutions for sensor selection and placement based on machine learning techniques, including evolutionary algorithms. Once the physical sensors to use have been determined, the focus shifts to virtual sensing strategies for the prediction of temperatures, which allow uncovered or unreachable locations to be monitored uniformly, reduce sensor deployment costs and, at the same time, provide a fallback solution in case of sensor failures. For this purpose, we conduct a comprehensive assessment of different virtual sensing strategies, including novel solutions based on recurrent neural networks and graph neural networks that effectively exploit spatio-temporal features. The methodologies considered so far can accurately complete the information coming from real physical sensors, allowing us to carry out monitoring tasks such as anomaly or event detection. The final part of this work therefore looks at sensors from another, more formal, point of view: it is devoted to the study and design of a framework that pairs monitoring and machine learning techniques in order to detect, in a preemptive manner, critical behaviours of a system that could lead to a failure. This is done by extracting interpretable properties, expressed in a given temporal logic formalism, from sensor data. The proposed framework is evaluated through an experimental assessment on benchmark datasets and compared to previous approaches from the literature.
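
    A minimal picture of virtual sensing, leaving aside the recurrent and graph neural models evaluated in the thesis, is to predict the reading at an uninstrumented location from nearby physical sensors; the sketch below does so with plain linear least squares on synthetic data.

        # Minimal virtual-sensing sketch: predict an uninstrumented location from
        # three physical sensors with linear least squares on synthetic data.
        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic history: rows = time steps, columns = 3 physical temperature sensors.
        X = 20 + 2 * rng.standard_normal((200, 3))
        # "Virtual" target location, here a noisy mixture of the physical readings.
        y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] + 0.1 * rng.standard_normal(200)

        # Fit weights on a training window, then predict the held-out time steps.
        train, test = slice(0, 150), slice(150, 200)
        w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        y_hat = X[test] @ w

        print("mean absolute error:", np.mean(np.abs(y_hat - y[test])))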

    Theory grounded design of genetic programming and parallel evolutionary algorithms

    Evolutionary algorithms (EAs) have been successfully applied to many problems and applications. Their success comes from being general purpose, meaning the same EA can be used to solve different problems. Despite that, many factors can affect the behaviour and the performance of an EA, and it has been proven that no single EA can solve every problem efficiently. This raises the question of how different design choices affect the performance of an EA and of how to design and tune one efficiently. This thesis has two main objectives. The first is to advance the theoretical understanding of evolutionary algorithms, focusing in particular on Genetic Programming and Parallel Evolutionary Algorithms; we do so by studying how different design choices affect the performance of the algorithms and by providing rigorously proven bounds on the running time for different designs. This novel knowledge, built upon previous work on the theoretical foundations of EAs, then serves the second objective of the thesis: to provide theory-grounded designs for Parallel Evolutionary Algorithms and Genetic Programming, using insights from the analyses to produce provably good algorithm designs.
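
    A canonical example of the kind of result such runtime analysis produces, not specific to this thesis, is the (1+1) EA on OneMax, whose expected optimisation time is Theta(n log n); the sketch below runs that algorithm once and counts its iterations.

        # The (1+1) EA on OneMax, a standard subject of EA runtime analysis
        # (expected optimisation time Theta(n log n)); not specific to this thesis.
        import random

        def one_plus_one_ea_onemax(n, seed=0):
            """Return how many iterations the (1+1) EA needs to reach the all-ones string."""
            rng = random.Random(seed)
            x = [rng.randint(0, 1) for _ in range(n)]
            iterations = 0
            while sum(x) < n:
                # Standard bit mutation: flip each bit independently with probability 1/n.
                y = [b ^ (rng.random() < 1 / n) for b in x]
                if sum(y) >= sum(x):      # elitist acceptance
                    x = y
                iterations += 1
            return iterations

        print(one_plus_one_ea_onemax(100))   # typically around e*n*ln(n), roughly 1250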