
    A Fuzzy Rule-Based System to Predict Energy Consumption of Genetic Programming Algorithms

    In recent years, energy awareness has become one of the most interesting areas in our environmentally conscious society. Algorithm designers have been part of this trend, particularly when dealing with networked devices and, mainly, when handheld ones are involved. Although studies in this area have increased, not many of them have focused on Evolutionary Algorithms. To the best of our knowledge, few attempts have been made to model their energy consumption across different execution devices. In this work, we propose a fuzzy rule-based system to predict the energy consumption of a kind of Evolutionary Algorithm, Genetic Programming, given the device on which it will be executed, its main parameters, and a measure of the difficulty of the problem addressed. Experimental results show that the proposed model can predict energy consumption with very low error values. We acknowledge support from the Spanish Ministry of Economy and Competitiveness under projects TIN2014-56494-C4-[1,2,3]-P and TIN2017-85727-C4-[2,4]-P, the Regional Government of Extremadura, Department of Commerce and Economy, conceded by the European Regional Development Fund, a way to build Europe, under project IB16035, and Junta de Extremadura FEDER, projects GR15068 and GR15130.
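    As a rough illustration of the kind of system described, the sketch below shows a tiny zero-order Sugeno-style fuzzy rule base predicting energy from two of the factors mentioned (a GP parameter and problem difficulty). All membership ranges, rules, and output energy levels are invented for illustration, and the execution device is omitted for brevity; this is not the authors' actual rule base.

```python
# Minimal sketch of a fuzzy rule-based energy predictor (illustrative only).
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def predict_energy(pop_size, difficulty):
    # Fuzzify inputs into low/medium/high degrees (ranges are assumptions).
    pop = {"low": tri(pop_size, 0, 50, 200),
           "med": tri(pop_size, 50, 200, 500),
           "high": tri(pop_size, 200, 500, 1000)}
    dif = {"low": tri(difficulty, 0.0, 0.2, 0.5),
           "med": tri(difficulty, 0.2, 0.5, 0.8),
           "high": tri(difficulty, 0.5, 0.8, 1.0)}
    # Zero-order Sugeno rules: each rule fires with strength min(antecedents)
    # and maps to a crisp energy level in joules (made-up values).
    rules = [
        (min(pop["low"], dif["low"]), 10.0),
        (min(pop["med"], dif["med"]), 120.0),
        (min(pop["high"], dif["med"]), 400.0),
        (min(pop["high"], dif["high"]), 900.0),
    ]
    total = sum(w for w, _ in rules)
    # Weighted-average defuzzification; 0 if no rule fires.
    return sum(w * e for w, e in rules) / total if total > 0 else 0.0

print(predict_energy(pop_size=300, difficulty=0.6))
```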

    Efficient computational strategies to learn the structure of probabilistic graphical models of cumulative phenomena

    Structural learning of Bayesian Networks (BNs) is an NP-hard problem, which is further complicated by many theoretical issues, such as the I-equivalence among different structures. In this work, we focus on a specific subclass of BNs, named Suppes-Bayes Causal Networks (SBCNs), which include specific structural constraints based on Suppes' probabilistic causation to efficiently model cumulative phenomena. Here we compare the performance, via extensive simulations, of various state-of-the-art search strategies, such as local search techniques and Genetic Algorithms, as well as of distinct regularization methods. The assessment is performed on a large number of simulated datasets from topologies with distinct levels of complexity, various sample sizes, and different rates of errors in the data. Among the main results, we show that the introduction of Suppes' constraints dramatically improves the inference accuracy, by reducing the solution space and providing a temporal ordering on the variables. We also report on trade-offs among different search techniques that can be efficiently employed in distinct experimental settings. This manuscript is an extended version of the paper "Structural Learning of Probabilistic Graphical Models of Cumulative Phenomena" presented at the 2018 International Conference on Computational Science.
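    For concreteness, the sketch below shows how Suppes' two conditions (temporal priority and probability raising) can prune the edge space before structure search; in the SBCN setting for cumulative phenomena, temporal priority is commonly approximated by marginal frequency. The boolean data layout and the epsilon threshold are assumptions, and the actual pipeline runs a regularized likelihood search on top of this pruning step.

```python
# Minimal sketch of Suppes' constraints as an edge filter (illustrative only).
import numpy as np

def suppes_prior(data, u, v, eps=1e-9):
    """True if event column u is a plausible Suppes cause of column v.

    data: (n_samples, n_events) boolean matrix of observed events.
    Temporal priority: P(u) > P(v) (in a cumulative process, earlier
    events are more frequent). Probability raising: P(v|u) > P(v|~u).
    """
    pu, pv = data[:, u].mean(), data[:, v].mean()
    p_v_given_u = data[data[:, u] == 1, v].mean() if pu > 0 else 0.0
    p_v_given_not_u = data[data[:, u] == 0, v].mean() if pu < 1 else 0.0
    return pu > pv + eps and p_v_given_u > p_v_given_not_u + eps

# Keep only edges that pass the filter before running the search.
rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(500, 4)).astype(bool)  # toy event data
edges = [(u, v) for u in range(4) for v in range(4)
         if u != v and suppes_prior(data, u, v)]
```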

    Prediction in Financial Markets: The Case for Small Disjuncts

    Predictive models in regression and classification problems typically consist of a single model that covers most, if not all, cases in the data. At the opposite end of the spectrum is a collection of models, each of which covers a very small subset of the decision space. These are referred to as "small disjuncts." The tradeoffs between the two types of models have been well documented. Single models, especially linear ones, are easy to interpret and explain. In contrast, small disjuncts do not provide as clean or as simple an interpretation of the data, and have been shown by several researchers to be responsible for a disproportionately large number of errors when applied to out-of-sample data. This research provides a counterpoint, demonstrating that "simple" small disjuncts provide a credible model for financial market prediction, a problem with a high degree of noise. A related novel contribution of this paper is a simple method for measuring the "yield" of a learning system, which is the percentage of in-sample performance that the learned model can be expected to realize on out-of-sample data. Curiously, such a measure is missing from the literature on regression learning algorithms. NYU Stern School of Business
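    The yield measure as described lends itself to a very short computation; the sketch below evaluates it for a plain linear model on synthetic noisy data. The model, metric, and train/test split are placeholders, not the paper's experimental setup.

```python
# Minimal sketch of the "yield" of a learning system: the fraction of
# in-sample performance retained out of sample (illustrative only).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = X @ rng.normal(size=5) + rng.normal(scale=2.0, size=1000)  # noisy target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)

in_sample = r2_score(y_tr, model.predict(X_tr))
out_sample = r2_score(y_te, model.predict(X_te))
yield_ = out_sample / in_sample if in_sample > 0 else float("nan")
print(f"in-sample R2={in_sample:.3f}, out-of-sample R2={out_sample:.3f}, "
      f"yield={yield_:.2%}")
```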

    A Field Guide to Genetic Programming

    xiv, 233 p. : ill. ; 23 cm. Electronic book.
    A Field Guide to Genetic Programming (ISBN 978-1-4092-0073-4) is an introduction to genetic programming (GP). GP is a systematic, domain-independent method for getting computers to solve problems automatically, starting from a high-level statement of what needs to be done. Using ideas from natural evolution, GP starts from an ooze of random computer programs and progressively refines them through processes of mutation and sexual recombination until solutions emerge, all without the user having to know or specify the form or structure of solutions in advance. GP has generated a plethora of human-competitive results and applications, including novel scientific discoveries and patentable inventions.
    Contents: Introduction -- Representation, initialisation and operators in Tree-based GP -- Getting ready to run genetic programming -- Example genetic programming run -- Alternative initialisations and operators in Tree-based GP -- Modular, grammatical and developmental Tree-based GP -- Linear and graph genetic programming -- Probabilistic genetic programming -- Multi-objective genetic programming -- Fast and distributed genetic programming -- GP theory and its applications -- Applications -- Troubleshooting GP -- Conclusions -- Appendices: Resources; TinyGP -- Bibliography -- Index.
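    In the spirit of the book's TinyGP appendix, here is a minimal sketch of a tree-based GP run for symbolic regression of x**2 + x, with tournament selection, subtree crossover, and subtree mutation. The representation, operators, and all parameter values are illustrative assumptions, not the book's code.

```python
# Minimal tree-based GP sketch for symbolic regression (illustrative only).
import operator
import random

FUNCS = [(operator.add, 2), (operator.sub, 2), (operator.mul, 2)]
TERMS = ["x", 1.0]

def rand_tree(depth=3):
    """Grow a random expression tree: [func, child, child] or a terminal."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    func, arity = random.choice(FUNCS)
    return [func] + [rand_tree(depth - 1) for _ in range(arity)]

def evaluate(tree, x):
    if tree == "x":
        return x
    if not isinstance(tree, list):
        return tree                      # numeric constant
    return tree[0](*(evaluate(child, x) for child in tree[1:]))

def fitness(tree):
    """Sum of absolute errors against the target x**2 + x (lower is better)."""
    return sum(abs(evaluate(tree, x) - (x * x + x)) for x in range(-5, 6))

def nodes(tree, path=()):
    """Enumerate (path, subtree) pairs; children sit at indices 1..arity."""
    yield path, tree
    if isinstance(tree, list):
        for i, child in enumerate(tree[1:], start=1):
            yield from nodes(child, path + (i,))

def replace(tree, path, subtree):
    if not path:
        return subtree
    new = list(tree)
    new[path[0]] = replace(tree[path[0]], path[1:], subtree)
    return new

def crossover(a, b):
    """Subtree crossover: graft a random subtree of b into a random point of a."""
    path, _ = random.choice(list(nodes(a)))
    _, sub = random.choice(list(nodes(b)))
    return replace(a, path, sub)

def mutate(tree):
    """Subtree mutation: overwrite a random node with a fresh random subtree."""
    path, _ = random.choice(list(nodes(tree)))
    return replace(tree, path, rand_tree(2))

def tournament(pop, k=5):
    return min(random.sample(pop, k), key=fitness)

random.seed(0)
pop = [rand_tree() for _ in range(200)]
for gen in range(30):
    pop = [crossover(tournament(pop), tournament(pop))
           if random.random() < 0.9 else mutate(tournament(pop))
           for _ in range(len(pop))]
    best = min(pop, key=fitness)
    if fitness(best) == 0:               # exact solution found
        break
print(fitness(best), best)
```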

    Impossibility Results in AI: A Survey

    An impossibility theorem demonstrates that a particular problem or set of problems cannot be solved as described in the claim. Such theorems put limits on what it is possible to do concerning artificial intelligence, especially superintelligent AI. As such, these results serve as guidelines, reminders, and warnings for AI safety, AI policy, and governance researchers. They might also enable solutions to some long-standing questions by formalizing theories in a constraint-satisfaction framework without committing to a single option. In this paper, we categorize impossibility theorems applicable to the domain of AI into five categories: deduction, indistinguishability, induction, tradeoffs, and intractability. We found that certain theorems are too specific or have implicit assumptions that limit their application. We also add a new result (theorem) on the unfairness of explainability, the first explainability-related result in the induction category. We conclude that deductive impossibilities rule out 100% guarantees for security. Finally, we present some ideas that hold potential in explainability, controllability, value alignment, ethics, and group decision-making, and that can be deepened by further investigation.

    Universal psychometrics: measuring cognitive abilities in the machine kingdom

    We present and develop the notion of 'universal psychometrics' as a subject of study, and eventually a discipline, that focusses on the measurement of cognitive abilities for the machine kingdom, which comprises any (cognitive) system, individual or collective, whether artificial, biological, or hybrid. Universal psychometrics can be built, of course, upon the experience, techniques, and methodologies from (human) psychometrics, comparative cognition, and related areas. Conversely, the perspective and techniques being developed in the area of machine intelligence measurement using (algorithmic) information theory can be of much broader applicability and implication outside artificial intelligence. This general approach to universal psychometrics spurs the re-understanding of most (if not all) of the big issues about the measurement of cognitive abilities, and creates a new foundation for (re)defining and mathematically formalising concepts such as cognitive task, evaluable subject, interface, task choice, difficulty, and agent response curves. We introduce the notion of a universal cognitive test and discuss whether (and when) it may be necessary for exploring the machine kingdom. On the issue of intelligence and very general abilities, we also obtain some results and connections with the related notions of no-free-lunch theorems and universal priors. We thank the anonymous reviewers for their comments. This work was supported by the MEC-MINECO projects CONSOLIDER-INGENIO CSD2007-00022 and TIN 2010-21062-C02-02, GVA project PROMETEO/2008/051, and the COST (European Cooperation in the field of Scientific and Technical Research) action IC0801 AT. Hernández Orallo, J.; Dowe, D. L.; Hernández Lloreda, M. V. (2014). Universal psychometrics: measuring cognitive abilities in the machine kingdom. Cognitive Systems Research, 27, 50-74. https://doi.org/10.1016/j.cogsys.2013.06.001

    MONEDA: scalable multi-objective optimization with a neural network-based estimation of distribution algorithm

    The extension of estimation of distribution algorithms (EDAs) to the multi-objective domain has led to multi-objective optimization EDAs (MOEDAs). Most MOEDAs have limited themselves to porting single-objective EDAs to the multi-objective domain. Although MOEDAs have proved to be a valid approach, this limitation is an obstacle to achieving a significant improvement over "standard" multi-objective optimization evolutionary algorithms. Adapting the model-building algorithm is one way to achieve a substantial advance. Most model-building schemes used so far by EDAs employ off-the-shelf machine learning methods. However, the model-building problem has particular requirements that those methods do not meet and even evade. The focus of this paper is on the model-building issue and how it has not been properly understood and addressed by most MOEDAs. We delve into the roots of this matter and hypothesize about its causes. To gain a deeper understanding of the subject, we propose a novel algorithm intended to overcome the drawbacks of current MOEDAs. This new algorithm is the multi-objective neural estimation of distribution algorithm (MONEDA). MONEDA uses a modified growing neural gas network for model-building (MB-GNG). MB-GNG is a custom-made clustering algorithm that meets the above demands. Thanks to its custom-made model-building algorithm, the preservation of elite individuals, and its individual replacement scheme, MONEDA is capable of scalably solving continuous multi-objective optimization problems. It performs better than similar algorithms in terms of a set of quality indicators and computational resource requirements. This work has been funded in part by projects CNPq BJT 407851/2012-7, FAPERJ APQ1 211.451/2015, MINECO TEC2014-57022-C2-2-R and TEC2012-37832-C02-01.
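    As a rough stand-in for the model-building step discussed above (MB-GNG itself is a custom growing neural gas, not reproduced here), the sketch below clusters the elite set with plain k-means and samples offspring from a diagonal Gaussian per cluster. The function names, the cluster count, and the use of k-means instead of a growing neural gas are all illustrative assumptions.

```python
# Minimal sketch of clustering-based model building in a MOEDA loop
# (illustrative only; not MONEDA's MB-GNG).
import numpy as np
from sklearn.cluster import KMeans

def model_build_and_sample(elite, n_offspring, n_clusters=5, rng=None):
    """Fit a simple cluster-wise Gaussian model and sample new solutions."""
    rng = rng or np.random.default_rng()
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(elite)
    offspring = []
    for _ in range(n_offspring):
        c = rng.integers(n_clusters)            # pick a cluster at random
        members = elite[km.labels_ == c]
        mu = members.mean(axis=0)
        sigma = members.std(axis=0) + 1e-6      # avoid a degenerate model
        offspring.append(rng.normal(mu, sigma)) # sample around the cluster
    return np.array(offspring)

elite = np.random.default_rng(0).normal(size=(50, 3))  # stand-in elite set
children = model_build_and_sample(elite, n_offspring=100)
```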

    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers have adopted these different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain a well-generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that have emerged from FNN optimization practices, such as evolving neural networks (NNs), cooperative coevolutionary NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it provides interesting research challenges for future research to cope with the present information processing era.
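    As a minimal illustration of the metaheuristic alternative to backpropagation discussed above, the sketch below uses a simple (mu+lambda)-style evolution strategy to search the weight vector of a tiny 2-2-1 network on XOR. The architecture, population size, and mutation scale are arbitrary assumptions, not drawn from the review.

```python
# Minimal sketch: evolution strategy instead of backpropagation
# for the weights of a tiny feedforward network (illustrative only).
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])  # XOR targets

def forward(w, x):
    """2-2-1 network: w packs W1 (2x2), b1 (2), W2 (2), b2 (scalar)."""
    W1, b1, W2, b2 = w[:4].reshape(2, 2), w[4:6], w[6:8], w[8]
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output

def loss(w):
    return np.mean((forward(w, X) - y) ** 2)

rng = np.random.default_rng(0)
pop = rng.normal(size=(30, 9))                     # 30 candidate weight vectors
for gen in range(500):
    pop = pop[np.argsort([loss(w) for w in pop])]  # rank by fitness
    parents = pop[:10]                             # keep the 10 best
    children = (parents[rng.integers(10, size=20)]
                + rng.normal(scale=0.3, size=(20, 9)))  # mutated copies
    pop = np.vstack([parents, children])
best = min(pop, key=loss)
print(loss(best), forward(best, X).round(2))
```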