3,139 research outputs found

    Meta-learning computational intelligence architectures

    In computational intelligence, the term 'memetic algorithm' has come to be associated with the algorithmic pairing of a global search method with a local search method. In a sociological context, a 'meme' has been loosely defined as a unit of cultural information, the social analog of genes for individuals. Both of these definitions are inadequate: 'memetic algorithm' is too specific, and ultimately a misnomer, while 'meme' is defined too generally to be of scientific use. In this dissertation the notion of memes and meta-learning is extended from a computational viewpoint, and the purpose, definitions, design guidelines, and architecture for effective meta-learning are explored. The background and structure of meta-learning architectures are discussed, incorporating viewpoints from psychology, sociology, computational intelligence, and engineering. The benefits and limitations of meme-based learning are demonstrated through two experimental case studies: Meta-Learning Genetic Programming and Meta-Learning Traveling Salesman Problem Optimization. Additionally, the development and properties of several new algorithms, inspired by the previous case studies, are detailed. With applications ranging from cognitive science to machine learning, meta-learning has the potential to provide much-needed stimulation to the field of computational intelligence by providing a framework for higher-order learning --Abstract, page iii
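    The pairing this abstract describes (a global evolutionary search whose offspring are refined by a local search "meme") can be sketched minimally. The OneMax objective, operators, and all parameters below are illustrative choices for a standard memetic algorithm, not material from the dissertation:

```python
import random

# Toy objective: maximise the number of 1-bits (OneMax). Illustrative only.
def fitness(bits):
    return sum(bits)

def local_search(bits):
    # The "meme": a greedy bit-flip pass that keeps any improving flip.
    bits = bits[:]
    for i in range(len(bits)):
        flipped = bits[:]
        flipped[i] ^= 1
        if fitness(flipped) > fitness(bits):
            bits = flipped
    return bits

def memetic_algorithm(n_bits=20, pop_size=10, generations=30, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # Global search: tournament selection, one-point crossover, mutation.
        parents = [max(rng.sample(pop, 2), key=fitness) for _ in range(pop_size)]
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:
                child[rng.randrange(n_bits)] ^= 1
            # Local refinement of each offspring: the memetic pairing.
            children.append(local_search(child))
        pop = sorted(pop + children, key=fitness, reverse=True)[:pop_size]
    return max(pop, key=fitness)

best = memetic_algorithm()
```

    Removing the `local_search` call leaves a plain genetic algorithm, which makes the global/local division of labour the abstract criticises as "too specific" easy to see in code.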

    Advances in Evolutionary Algorithms

    With the recent trends towards massive data sets and significant computational power, combined with advances in evolutionary algorithms, evolutionary computation is becoming much more relevant to practice. The aim of the book is to present recent improvements, innovative ideas, and concepts from a part of the huge field of evolutionary algorithms

    Interpreting Housing Prices with a Multidisciplinary Approach Based on Nature-Inspired Algorithms and Quantum Computing

    Current technology still does not allow the use of quantum computers for broader and individual uses; however, it is possible to simulate some of their potentialities through quantum computing. Quantum computing can be integrated with nature-inspired algorithms to analyze, in an innovative way, the dynamics of the real estate market or any other economic phenomenon. With this main aim, this study implements a multidisciplinary approach based on the integration of quantum computing and genetic algorithms to interpret housing prices. Starting from the principles of quantum programming, the work applies genetic algorithms to determine the marginal prices of relevant real estate characteristics for a particular segment of Naples’ real estate market. These marginal prices constitute the inputs of the quantum program, which returns the purchase probabilities corresponding to each real estate characteristic considered. The other main outcomes of this study are a comparison between the optimal quantities of each real estate characteristic as determined by the quantum program and the average amounts of the same characteristics in the sampled real estate data, as well as the weights of the characteristics obtained with the genetic algorithms. With respect to the current state of the art, this study is among the first to apply quantum computing to the interpretation of selling prices in local real estate markets
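    The genetic-algorithm step described here, searching for the marginal prices (weights) of real estate characteristics that best explain observed selling prices, can be sketched as follows. The sample data, characteristics, and GA parameters are invented for illustration and are not the Naples data used in the study:

```python
import random

# Hypothetical sample: (surface m2, rooms, floor) -> observed price (EUR thousands).
SAMPLE = [((90, 3, 2), 310), ((120, 4, 1), 400), ((60, 2, 3), 220), ((100, 3, 4), 360)]

def error(weights):
    # Sum of squared residuals of an additive marginal-price model.
    return sum((sum(w * x for w, x in zip(weights, feats)) - price) ** 2
               for feats, price in SAMPLE)

def ga_marginal_prices(pop_size=40, generations=200, seed=1):
    rng = random.Random(seed)
    # Each individual is one candidate vector of marginal prices.
    pop = [[rng.uniform(0, 10) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=error)
        survivors = pop[: pop_size // 2]   # elitist truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            # Averaging crossover plus small Gaussian mutation.
            children.append([(x + y) / 2 + rng.gauss(0, 0.1)
                             for x, y in zip(a, b)])
        pop = survivors + children
    return min(pop, key=error)

weights = ga_marginal_prices()
```

    In the study's pipeline, weights recovered this way would then feed the quantum program as inputs; that stage is not reproduced here.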

    Feature Grouping-based Feature Selection


    Evolutionary multiobjective optimization for automatic agent-based model calibration: A comparative study

    This work was supported by the Spanish Agencia Estatal de Investigacion, the Andalusian Government, the University of Granada, and European Regional Development Funds (ERDF) under Grants EXASOCO (PGC2018-101216-B-I00), SIMARK (P18-TP-4475), and AIMAR (A-TIC-284-UGR18). Manuel Chica was also supported by the Ramon y Cajal program (RYC-2016-19800). The authors would like to thank the "Centro de Servicios de Informática y Redes de Comunicaciones" (CSIRC), University of Granada, for providing the computing resources (Alhambra supercomputer).
    Complex problems can be analyzed using model simulation, but its use is not straightforward, since modelers must carefully calibrate and validate their models before using them. This is especially relevant for models with multiple outputs, as their calibration requires handling different criteria jointly. This can be achieved using automated calibration and evolutionary multiobjective optimization methods, which are the state of the art in multiobjective optimization as they can find a set of representative Pareto solutions under these restrictions in a single run. However, selecting the best algorithm for automated calibration can be overwhelming. We propose to deal with this issue by conducting an exhaustive analysis of the performance of several evolutionary multiobjective optimization algorithms when calibrating several instances of an agent-based model for marketing with multiple outputs. We analyze the calibration results using multiobjective performance indicators and attainment surfaces, including a statistical test for studying the significance of the indicator values, and benchmark the algorithms' performance against a classical mathematical method. The results of our experimentation reflect that the algorithms based on decomposition perform significantly better than the remaining methods in most instances. Besides, we also identify how different properties of the problem instances (i.e., the shape of the feasible region, the shape of the Pareto front, and the increased dimensionality) erode the behavior of the algorithms to different degrees.
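    The core concept behind the "set of representative Pareto solutions" mentioned above is Pareto dominance. A minimal sketch of a dominance filter over candidate calibrations, with invented two-objective error values standing in for the model's multiple outputs:

```python
def dominates(a, b):
    # a dominates b if it is no worse in every objective and strictly better
    # in at least one (minimisation assumed for all objectives).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # Keep every point that no other point dominates.
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical calibration errors on two model outputs, one tuple per candidate.
candidates = [(0.10, 0.80), (0.20, 0.30), (0.40, 0.25), (0.35, 0.90), (0.15, 0.60)]
front = pareto_front(candidates)
```

    Evolutionary multiobjective optimizers maintain and refine such a front across generations; decomposition-based ones instead split the problem into scalar subproblems, which the study finds performs best on these calibration instances.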

    Optimization Algorithms in Project Scheduling

    Scheduling, or planning in a general perspective, is the backbone of project management; thus, the successful implementation of project scheduling is a key factor to projects’ success. Due to its complexity and challenging nature, scheduling has become one of the most famous research topics within the operational research context, and it has been widely researched in practical applications within various industries, especially manufacturing, construction, and computer engineering. Accordingly, the literature is rich with many implementations of different optimization algorithms and their extensions within the project scheduling problem (PSP) analysis field. This study is intended to exhibit the general modelling of the PSP, and to survey the implementations of various optimization algorithms adopted for solving the different types of the PSP
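    A common building block of the optimization algorithms surveyed for the PSP is a schedule-generation scheme that turns a task ordering into start times under precedence and resource constraints. A minimal serial scheme for a single-resource instance is sketched below; the task network and capacity are invented for illustration:

```python
# Tasks: name -> (duration, resource demand, set of predecessors). Illustrative.
# The dict is listed in a precedence-feasible order, which the loop relies on.
TASKS = {
    "A": (3, 2, set()),
    "B": (2, 3, {"A"}),
    "C": (4, 2, {"A"}),
    "D": (1, 4, {"B", "C"}),
}
CAPACITY = 4  # units of the single renewable resource available per period

def serial_sgs(tasks, capacity):
    start, finish, usage = {}, {}, {}
    for name, (dur, demand, preds) in tasks.items():
        # Earliest precedence-feasible start: after all predecessors finish.
        t = max((finish[p] for p in preds), default=0)
        # Delay until the resource profile can accommodate the whole task.
        while any(usage.get(u, 0) + demand > capacity for u in range(t, t + dur)):
            t += 1
        start[name], finish[name] = t, t + dur
        for u in range(t, t + dur):
            usage[u] = usage.get(u, 0) + demand
    return start, finish

start, finish = serial_sgs(TASKS, CAPACITY)
makespan = max(finish.values())
```

    Metaheuristics for the PSP typically evolve the task ordering (or priority values) and use a scheme like this to decode each ordering into a schedule whose makespan is the fitness.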

    A Bayesian Approach for Software Release Planning under Uncertainty

    Release planning, deciding what features to implement in upcoming releases of a software system, is a critical activity in iterative software development. Many release planning methods exist, but most ignore the inevitable uncertainty in future development effort and business value. The thesis investigates how to analyse uncertainty during release planning and whether analysing uncertainty leads to better decisions than ignoring it. The thesis's first contribution is a novel release planning method designed to analyse uncertainty in the context of the Incremental Funding Method, an incremental cost-value-based approach to software development. Our method uses triangular distributions, Monte-Carlo simulation, and multi-objective optimisation to shortlist release plans that maximise expected net present value and minimise investment cost and risk. The second contribution is a new release planning method, called BEARS, designed to analyse uncertainty in the context of fixed-date release processes, which are more common in industry than fixed-scope release processes. BEARS models uncertainty about feature development time and economic value using lognormal distributions. It then uses Monte-Carlo simulation and search-based multi-objective optimisation to shortlist release plans that maximise expected net present value and expected punctuality, helping release planners explore possible tradeoffs between these two objectives. The thesis's third contribution is an experiment to study whether analysing uncertainty using BEARS leads to shortlisting better release plans than ignoring uncertainty or analysing it under a fixed-scope assumption.
    The experiment compares 5 different release planning models on 32 release planning problems. The results show that analysing uncertainty using BEARS leads to shortlisting release plans with higher expected net present value and higher expected punctuality than methods that ignore uncertainty or assume fixed-scope releases. Our experiment therefore shows that analysing uncertainty can lead to better release planning decisions than if uncertainty is ignored
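    The Monte-Carlo step described in this abstract, lognormal effort and value feeding estimates of expected net present value and punctuality for a fixed-date release, can be sketched as follows. The features, day rate, discount rate, and lognormal shape parameter are all assumed for illustration and are not BEARS itself:

```python
import random

random.seed(42)

# Hypothetical feature set: (median effort in days, median value in EUR).
FEATURES = [(10, 50_000), (15, 80_000), (8, 30_000)]
CAPACITY_DAYS = 40   # a fixed release date implies a fixed effort budget
DAY_RATE = 1_000     # assumed development cost per day
DISCOUNT = 0.10      # assumed discount rate applied once at release (simplified)
SIGMA = 0.5          # assumed lognormal shape parameter for both quantities

def simulate(n=10_000):
    # lognormvariate(0, SIGMA) has median 1, so each draw scales the median.
    npvs, on_time = [], 0
    for _ in range(n):
        effort = sum(med * random.lognormvariate(0, SIGMA) for med, _ in FEATURES)
        value = sum(val * random.lognormvariate(0, SIGMA) for _, val in FEATURES)
        npvs.append(value / (1 + DISCOUNT) - DAY_RATE * effort)
        on_time += effort <= CAPACITY_DAYS   # punctual if the plan fits the date
    return sum(npvs) / n, on_time / n

expected_npv, punctuality = simulate()
```

    A multi-objective search over feature subsets would then score each candidate plan with such a pair of estimates and shortlist the non-dominated plans, which is the tradeoff exploration the abstract describes.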