
    Co-optimization: a generalization of coevolution

    Many problems encountered in computer science are best stated in terms of interactions among individuals. For example, many problems are most naturally phrased as finding a candidate solution that performs best against a set of test cases. In such situations, methods are needed to find candidate solutions that are expected to perform best over all test cases. Coevolution holds the promise of addressing such problems by employing principles from biological evolution: populations of candidate solutions and test cases are evolved over time to produce higher-quality solutions. This thesis presents a generalization of coevolution to co-optimization, in which optimization techniques that do not rely on evolutionary principles may be used. Instead of introducing a new addition to coevolution to make it better suited for a particular class of problems, this thesis suggests removing the evolutionary model in favor of a technique better suited for that class of problems. --Abstract, page iii
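    As a toy illustration of this solutions-versus-tests setup (the problem and all names below are invented for illustration, not taken from the thesis), the following sketch coevolves a population of bitstring candidates against a population of single-position test cases; each population's fitness is defined only by its interactions with the other:

```python
import random

# toy "candidates vs. tests" coevolution: a candidate is an N_BITS bitstring,
# a test case is a bit position, and a candidate passes a test iff that bit
# is set (illustrative assumption, not the thesis's actual problem)
random.seed(0)
N_BITS, POP, GENS = 8, 20, 30

def mutate(bits):
    i = random.randrange(N_BITS)                 # flip one random bit
    return bits[:i] + [1 - bits[i]] + bits[i + 1:]

candidates = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
tests = [random.randrange(N_BITS) for _ in range(POP)]

for _ in range(GENS):
    # candidate fitness: fraction of the current test population it passes
    c_fit = [sum(c[t] for t in tests) / len(tests) for c in candidates]
    # test fitness: fraction of the current candidates it defeats
    t_fit = [sum(1 - c[t] for c in candidates) / len(candidates) for t in tests]
    # truncation selection plus mutation, applied to both populations
    survivors = sorted(range(POP), key=lambda i: c_fit[i], reverse=True)[:POP // 2]
    candidates = [candidates[i] for i in survivors]
    candidates += [mutate(random.choice(candidates)) for _ in range(POP - len(candidates))]
    survivors = sorted(range(POP), key=lambda i: t_fit[i], reverse=True)[:POP // 2]
    tests = [tests[i] for i in survivors] + [random.randrange(N_BITS) for _ in range(POP // 2)]

best = max(candidates, key=sum)  # the arms race pushes candidates toward all-ones
```

    The thesis's point is that the outer loop above (evolutionary selection and mutation) is only one choice of optimizer; co-optimization keeps the interaction structure but lets any optimization technique drive either population.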

    Iterative robust search (iRoSe): a framework for coevolutionary hyperparameter optimisation

    Finding an optimal hyperparameter configuration for machine learning algorithms is challenging, both because hyperparameter effects can vary with the algorithm, dataset and distribution, and because the large combinatorial search space of hyperparameter values requires expensive trials. Furthermore, extant optimisation procedures that search out optima randomly, in a manner non-specific to the optimisation problem, could be considered a priori unjustifiable when viewed through the "No Free Lunch" theorems. In seeking a coevolutionary, adaptive strategy that robustifies the search for optimal hyperparameter values, we investigate specifics of the optimisation problem through 'macro-modelling', which abstracts out the complexity of the algorithm in terms of signal, control factors, noise factors and response. We design and run a budgeted number of 'proportionally balanced' trials using a predetermined mix of candidate control factors. Based on the responses from these proportional trials, we conduct a 'main effects analysis' of the algorithm's individual hyperparameters, in terms of the signal-to-noise ratio, to derive hyperparameter configurations that enhance targeted performance characteristics through additivity. We formulate an iterative Robust Search (iRoSe) hyperparameter optimisation framework that leverages these problem-specific insights. Initialised with a valid hyperparameter configuration, iRoSe demonstrates the ability to converge adaptively to a configuration that produces an effective gain in the performance characteristic, through designed search trials that are justifiable under extant theory. We demonstrate the iRoSe optimisation framework on a Deep Neural Network and the CIFAR-10 dataset, comparing it to a Bayesian optimisation procedure to highlight the transformation achieved.
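    A minimal sketch of the 'main effects analysis' step, assuming a Taguchi-style larger-is-better signal-to-noise ratio; the hyperparameter names, levels, and response values below are invented for illustration and do not come from the paper:

```python
import math
from collections import defaultdict

# hypothetical balanced trials over two hyperparameters at two levels each;
# the repeated responses per trial stand in for noise-factor replicates
trials = [
    ({"lr": "low",  "batch": "small"}, [0.81, 0.79]),
    ({"lr": "low",  "batch": "large"}, [0.84, 0.86]),
    ({"lr": "high", "batch": "small"}, [0.70, 0.74]),
    ({"lr": "high", "batch": "large"}, [0.76, 0.75]),
]

def sn_larger_is_better(ys):
    # Taguchi-style larger-is-better signal-to-noise ratio, in decibels
    return -10 * math.log10(sum(1 / y ** 2 for y in ys) / len(ys))

# main effect of a hyperparameter level = mean S/N over the trials that use it
effect = defaultdict(list)
for levels, ys in trials:
    sn = sn_larger_is_better(ys)
    for hp, lv in levels.items():
        effect[(hp, lv)].append(sn)

config, best_sn = {}, {}
for (hp, lv), sns in effect.items():
    mean_sn = sum(sns) / len(sns)
    if hp not in best_sn or mean_sn > best_sn[hp]:
        best_sn[hp], config[hp] = mean_sn, lv
# under the additivity assumption, the per-factor best levels combine into
# the predicted best overall configuration
```

    Selecting each factor's best level independently is what the additivity assumption buys: the number of trials grows with the number of levels per factor rather than with the full combinatorial space.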

    Genetic Transfer or Population Diversification? Deciphering the Secret Ingredients of Evolutionary Multitask Optimization

    Evolutionary multitasking has recently emerged as a novel paradigm that enables the similarities and/or latent complementarities (if present) between distinct optimization tasks to be exploited in an autonomous manner, simply by solving them together with a unified solution representation scheme. An important matter underpinning future algorithmic advancements is to develop a better understanding of the driving force behind successful multitask problem-solving. In this regard, two (seemingly disparate) ideas have been put forward, namely, (a) implicit genetic transfer as the key ingredient facilitating the exchange of high-quality genetic material across tasks, and (b) population diversification resulting in effective global search of the unified search space encompassing all tasks. In this paper, we present some empirical results that provide a clearer picture of the relationship between the two aforementioned propositions. For the numerical experiments we make use of Sudoku puzzles as case studies, mainly because outwardly unlike puzzle statements can often have nearly identical final solutions. The experiments reveal that while on many occasions genetic transfer and population diversity may be viewed as two sides of the same coin, the wider implication of genetic transfer, as shall be shown herein, captures the true essence of evolutionary multitasking to the fullest. Comment: 7 pages, 6 figures
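    A minimal sketch of what a unified solution representation can look like, assuming a random-key encoding in [0,1]^D that each task decodes with its own dimensionality and variable ranges (the two toy tasks here are illustrative assumptions, not the paper's Sudoku case study):

```python
import random

# one chromosome in [0,1]^D serves as a candidate for BOTH tasks at once
random.seed(1)
D = 5  # unified dimensionality = max over the tasks' dimensionalities

def decode_task1(x):   # task 1 searches the box [-5, 5]^3
    return [10 * xi - 5 for xi in x[:3]]

def decode_task2(x):   # task 2 searches the box [-1, 1]^5
    return [2 * xi - 1 for xi in x[:5]]

chromosome = [random.random() for _ in range(D)]
s1, s2 = decode_task1(chromosome), decode_task2(chromosome)
# any crossover acting on chromosomes implicitly transfers genetic material
# between the two tasks, since both decode from the same representation
```

    This is the mechanism behind both propositions in the abstract: a single population in the unified space simultaneously enables cross-task genetic transfer and diversifies search over all tasks.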

    Two enhancements for improving the convergence speed of a robust multi-objective coevolutionary algorithm

    We describe two enhancements that significantly improve the rapid convergence behavior of DECM02 - a previously proposed robust coevolutionary algorithm that integrates three different multi-objective space exploration paradigms: differential evolution, two-tier Pareto-based selection for survival, and decomposition-based evolutionary guidance. The first enhancement is a refined active search adaptation mechanism that relies on run-time sub-population performance indicators to estimate the convergence stage and dynamically adjust and steer certain parts of the coevolutionary process in order to improve its overall efficiency. The second enhancement consists of a directional intensification operator that is applied in the early part of the run during the decomposition-based search phases. This operator creates new random local linear individuals based on the recent historically successful solution candidates of a given directional decomposition vector. As the two efficiency-related enhancements are complementary, our results show that the resulting coevolutionary algorithm is a highly competitive improvement of the baseline strategy when considering a comprehensive test set aggregated from 25 (standard) benchmark multi-objective optimization problems.
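    A hedged sketch of the directional intensification idea as described: new individuals are sampled on line segments between recent successful candidates associated with one decomposition direction. The function name, interface, and details below are assumptions for illustration; the paper's exact operator may differ:

```python
import random

def intensify(recent_elites, n_new):
    # spawn "random local linear" individuals from recent elites of one
    # decomposition direction (assumed interface, not the paper's code)
    offspring = []
    for _ in range(n_new):
        a, b = random.sample(recent_elites, 2)   # two distinct recent elites
        t = random.random()                      # random point on the segment
        offspring.append([ai + t * (bi - ai) for ai, bi in zip(a, b)])
    return offspring
```

    Restricting parents to elites of the same decomposition vector keeps the new individuals local to a promising search direction, which is what makes this an intensification rather than an exploration operator.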

    Impossibility Results in AI: A Survey

    An impossibility theorem demonstrates that a particular problem, or set of problems, cannot be solved as described in the claim. Such theorems put limits on what is possible to do concerning artificial intelligence, especially superintelligent AI. As such, these results serve as guidelines, reminders, and warnings to AI safety, AI policy, and governance researchers. They may also enable solutions to some long-standing questions by formalizing theories in a constraint-satisfaction framework without committing to one option. In this paper, we have categorized impossibility theorems applicable to the domain of AI into five categories: deduction, indistinguishability, induction, tradeoffs, and intractability. We found that certain theorems are too specific or have implicit assumptions that limit their application. We also added a new result (theorem) about the unfairness of explainability, the first explainability-related result in the induction category. We concluded that deductive impossibilities deny 100% guarantees for security. In the end, we give some ideas that hold potential in explainability, controllability, value alignment, ethics, and group decision-making; they can be deepened by further investigation.

    Coevolution of Second-order-mutant

    One of the obstacles that hinders the usage of mutation testing is its impracticality; two main contributors to this are the large number of mutants and the large number of test cases involved in the process. Researchers usually try to address this problem by optimizing the mutants and the test cases separately. In this research, we try to tackle optimizing the mutants and optimizing the test cases simultaneously using a coevolutionary optimization method. The coevolutionary method is chosen for the mutation testing problem because it works by optimizing multiple collections (populations) of solutions. This research found that coevolution is better suited for multi-problem optimization than single-population methods (e.g. a Genetic Algorithm); we also propose a new indicator to determine the optimal coevolution cycle. Experiments were conducted on artificial, laboratory, and real cases.
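    As a toy illustration of mutant order (the function and mutation operators below are invented, not from the paper), a second-order mutant stacks two mutation operators and can survive a test case that kills one of its first-order constituents:

```python
# original program under test
def original(a, b):
    return a + b if a > b else a - b

# first-order mutant: one operator applied, relational '>' mutated to '>='
def first_order(a, b):
    return a + b if a >= b else a - b

# second-order mutant: two operators applied, '>' -> '>=' and '+' -> '-'
def second_order(a, b):
    return a - b if a >= b else a - b
```

    The input (2, 2) kills the first-order mutant (it returns 4 where the original returns 0), yet the second-order mutant survives that same input and is only killed by others such as (3, 1); this interplay is why optimizing mutants and test cases together, rather than separately, is attractive.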

    The Implications of the No-Free-Lunch Theorems for Meta-induction

    The important recent book by G. Schurz appreciates that the no-free-lunch (NFL) theorems have major implications for the problem of (meta-)induction. Here I review the NFL theorems, emphasizing that they do not only concern the case where there is a uniform prior -- they prove that there are "as many priors" (loosely speaking) for which any induction algorithm A out-generalizes some induction algorithm B as vice versa. Importantly though, in addition to the NFL theorems, there are many free lunch theorems. In particular, the NFL theorems can only be used to compare the marginal expected performance of an induction algorithm A with the marginal expected performance of an induction algorithm B. There is a rich set of free lunches which instead concern the statistical correlations among the generalization errors of induction algorithms. As I describe, the meta-induction algorithms that Schurz advocates as a "solution to Hume's problem" are just an example of such a free lunch based on correlations among the generalization errors of induction algorithms. I end by pointing out that the prior that Schurz advocates, which is uniform over bit frequencies rather than bit patterns, is contradicted by thousands of experiments in statistical physics and by the great success of the maximum entropy procedure in inductive inference. Comment: 14 pages
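    The marginal-performance claim can be checked by brute force on a toy search problem: averaged over all functions on a finite domain, any two deterministic non-repeating probe orderings perform identically. The domain, orderings, and performance measure below are illustrative choices, not taken from the paper:

```python
from itertools import product

# enumerate EVERY function f: X -> {0, 1} and compare two fixed search orderings
X = [0, 1, 2, 3]
order_a = [0, 1, 2, 3]   # algorithm A probes left to right
order_b = [3, 2, 1, 0]   # algorithm B probes right to left

def best_after(order, f, m=2):
    return max(f[x] for x in order[:m])   # best value seen in the first m probes

functions = list(product([0, 1], repeat=len(X)))   # all 2**4 = 16 functions
avg_a = sum(best_after(order_a, f) for f in functions) / len(functions)
avg_b = sum(best_after(order_b, f) for f in functions) / len(functions)
# identical marginal expected performance -- the no-free-lunch result
```

    Free lunches of the kind discussed in the abstract show up only in joint statistics, e.g. in correlations between the errors of A and B on the same function, which this marginal average cannot see.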

    Swarm-Based Metaheuristic Algorithms and No-Free-Lunch Theorems
