331 research outputs found

    Modelling evolvability in genetic programming

    We develop a tree-based genetic programming system capable of modelling evolvability during evolution through artificial neural networks (ANNs) and exploiting those networks to increase the generational fitness of the system. This thesis is empirically focused: we study the effects of evolvability selection under varying conditions to demonstrate its effectiveness. Evolvability is the capacity of an individual to improve its future fitness. In genetic programming (GP), we typically measure only how well a program performs a given task at its current capacity. We improve upon GP by directly selecting for evolvability. We construct a system, Sample-Evolvability Genetic Programming (SEGP), that estimates the true evolvability of a program by conducting a limited number of evolvability samples. Evolvability is sampled by applying a number of genetic operations to a program and comparing the fitnesses of the resulting programs with the original. SEGP achieves an increase in fitness at the cost of increased computational complexity. We then construct a system that improves upon SEGP, Model-Evolvability Genetic Programming (MEGP), which models the true evolvability of a program by training an ANN to predict it. MEGP reduces the computational cost of sampling evolvability while maintaining the fitness gains. MEGP is empirically shown to improve generational fitness in a streaming domain, in exchange for an upfront increase in computational time.
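
    As a rough illustration of the sampling idea above, the sketch below estimates evolvability by applying repeated genetic operations to a program and measuring how often the offspring improve on the parent. The `fitness` and `mutate` callables, the toy numeric "program", and the improvement-fraction aggregate are assumptions for illustration, not the thesis's actual operators or measure.

```python
import random

def sample_evolvability(program, fitness, mutate, n_samples=10):
    """Estimate a program's evolvability from a limited number of samples.

    Applies `n_samples` genetic operations (here just mutation) to the
    program and returns the fraction of offspring whose fitness improves
    on the parent's -- one plausible aggregate, not necessarily the exact
    measure used in the thesis.
    """
    parent_fitness = fitness(program)
    improved = 0
    for _ in range(n_samples):
        offspring = mutate(program)              # one genetic operation
        if fitness(offspring) > parent_fitness:  # compare child with parent
            improved += 1
    return improved / n_samples

# Toy usage with stand-in operators: a "program" is just a number whose
# fitness is its negated distance from a target value.
target = 42.0
fitness = lambda p: -abs(p - target)
mutate = lambda p: p + random.gauss(0.0, 1.0)
print(sample_evolvability(40.0, fitness, mutate, n_samples=20))
```

    In the same spirit, MEGP would presumably train its ANN on programs paired with their sampled evolvability, so that evolvability can be predicted rather than re-sampled each generation.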

    Fairness-Aware Hyperparameter Optimization

    In recent years, increased usage of machine learning algorithms has been accompanied by several reports of machine bias in areas ranging from recidivism assessment to job-applicant screening tools and estimating mortgage default risk. Additionally, recent advances in machine learning have prominently featured so-called "black-box" models (e.g. neural networks), whose inputs and outputs we can observe but whose decision-making process we have limited capability to inspect. As a result, it is increasingly imperative to monitor and control the fairness of developed models in order to detect discrimination against sub-groups of the population (e.g. based on race, gender, or age). State-of-the-art machine learning algorithms require the definition of a large number of hyperparameters that govern how they learn and generalize to unseen data. Current hyperparameter search algorithms aim to tune these knobs to optimize a global performance metric (e.g. accuracy). Fairness metrics are equally affected by varying hyperparameter values, yet there is comparatively little research on optimizing for multiple objectives. Consequently, we study how to achieve efficient hyperparameter optimization for multi-objective goals, and the corresponding trade-offs. We develop a hyperparameter optimization framework that supports the definition of secondary objectives or constraints, and experiment with multiple fairness metrics (e.g. equality of opportunity). Furthermore, we explore a fraud detection case study and assess the framework's effectiveness in this context.
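
    A minimal sketch of one way such a constrained search could look: random hyperparameter search that keeps only configurations satisfying a fairness constraint, here an equal-opportunity gap between two groups. The `space` and `evaluate` interface, the binary group coding, and the 5% gap threshold are hypothetical; the framework in the abstract may use different search strategies and fairness definitions.

```python
import random

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rates between two groups
    (a common reading of 'equality of opportunity'); groups coded 0/1."""
    def tpr(g):
        preds = [p for t, p, m in zip(y_true, y_pred, group) if m == g and t == 1]
        return sum(preds) / len(preds) if preds else 0.0
    return abs(tpr(0) - tpr(1))

def constrained_random_search(space, evaluate, n_trials=50, max_gap=0.05):
    """Random search that maximizes accuracy subject to a fairness constraint.

    `evaluate(config)` is assumed to train a model and return
    (accuracy, fairness_gap); configurations violating the gap
    threshold are discarded.
    """
    best = None
    for _ in range(n_trials):
        config = {name: random.choice(values) for name, values in space.items()}
        accuracy, gap = evaluate(config)
        if gap <= max_gap and (best is None or accuracy > best[1]):
            best = (config, accuracy, gap)
    return best

# Toy usage: a fake `evaluate` that scores configurations at random.
space = {"learning_rate": [0.01, 0.1, 1.0], "max_depth": [3, 5, 7]}
fake_evaluate = lambda cfg: (random.random(), random.random() * 0.1)
print(constrained_random_search(space, fake_evaluate, n_trials=20))
```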

    Systems for AutoML Research


    Automatic machine learning: methods, systems, challenges

    • …