37 research outputs found

    Genetic Algorithm Based on Schemata Theory


    Coevolutionary GA with schema extraction by machine learning techniques and its application to knapsack problems

    We introduce a novel coevolutionary genetic algorithm (CGA) with schema extraction by machine learning techniques. Our CGA consists of two GA populations: the first GA (H-GA) searches for solutions to the given problem, and the second GA (P-GA) searches for effective schemata of the H-GA. We aim to improve the search ability of our CGA by extracting useful schemata more efficiently from the H-GA population and then incorporating the extracted schemata into the P-GA in a natural manner. Several computational simulations on multidimensional knapsack problems confirm the effectiveness of the proposed method.
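The extraction-and-reinjection loop described above can be sketched minimally. Everything below is illustrative rather than the authors' implementation: the tiny knapsack instance, the population size, and the elite-agreement extraction rule are assumptions. Here a schema fixes the positions where the current elite individuals agree (all other positions become the don't-care symbol `*`), and superposition copies those fixed bits back into an H-GA individual.

```python
import random

random.seed(0)

# Hypothetical small knapsack instance (8 items).
N = 8
weights = [3, 5, 2, 7, 4, 6, 1, 8]
values  = [4, 6, 3, 8, 5, 7, 2, 9]
CAP = 15

def fitness(ind):
    """Total value of selected items; infeasible selections score 0."""
    w = sum(wi for wi, b in zip(weights, ind) if b)
    v = sum(vi for vi, b in zip(values, ind) if b)
    return v if w <= CAP else 0

def extract_schema(pop, k=4):
    """Derive a schema ('0'/'1'/'*') from the top-k individuals:
    positions where all elites agree become fixed, the rest '*'."""
    elites = sorted(pop, key=fitness, reverse=True)[:k]
    schema = []
    for pos in range(N):
        bits = {e[pos] for e in elites}
        schema.append(str(bits.pop()) if len(bits) == 1 else '*')
    return schema

def superpose(schema, ind):
    """Overwrite an H-GA individual with the fixed bits of a schema."""
    return [int(s) if s != '*' else b for s, b in zip(schema, ind)]

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(20)]
schema = extract_schema(pop)
child = superpose(schema, pop[0])
```

In a full CGA the extracted schemata would form the P-GA population and evolve in their own right; this sketch only shows the representation and the two directions of information flow.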

    Coevolutionary genetic algorithm for constraint satisfaction with a genetic repair operator for effective schemata formation

    We discuss a coevolutionary genetic algorithm for constraint satisfaction. Our basic idea is to explore effective genetic information in the population, i.e., schemata, and to exploit that information to guide the population toward better solutions. Our coevolutionary genetic algorithm (CGA) consists of two GA populations: the first GA, called “H-GA”, searches for solutions in a given environment (problem), and the second GA, called “P-GA”, searches for effective genetic information in the H-GA, namely good schemata. Each individual in the P-GA thus consists of alleles from the H-GA or a “don't care” symbol, representing a schema in the H-GA. These populations evolve separately in genetic spaces at different abstraction levels and interact with each other through two genetic operators: “superposition” and “transcription”. We then applied our CGA to constraint satisfaction problems (CSPs), incorporating a new stochastic “repair” operator for the P-GA that raises the consistency of schemata with the (local) constraint conditions of the CSP. We carried out two experiments. First, we examined the performance of the CGA on “general” CSPs generated randomly over a wide range of “density” and “tightness” of constraint conditions, the basic measures characterizing CSPs. Next, we examined “structured” CSPs involving latent “cluster” structures among the variables. In both experiments, computer simulations confirmed the effectiveness of our CGA.
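The three operators named above can be illustrated on a toy binary CSP. This is a sketch under stated assumptions, not the paper's algorithm: the 4-variable inequality CSP, the transcription probability, and the one-step repair rule are all hypothetical. Transcription abstracts an H-GA individual into a schema, repair blanks a fixed position involved in a violated constraint, and superposition instantiates the schema on an H-GA individual.

```python
import random

random.seed(1)

# Hypothetical binary CSP: 4 variables over {0, 1, 2}; connected
# pairs must take different values.
DOMAIN = [0, 1, 2]
CONSTRAINTS = [(0, 1), (1, 2), (2, 3), (0, 3)]

def conflicts(assign):
    """Number of violated constraints in a full assignment."""
    return sum(1 for a, b in CONSTRAINTS if assign[a] == assign[b])

def transcription(ind, p=0.5):
    """P-GA individual from an H-GA one: each gene is kept with
    probability 1 - p, otherwise replaced by the don't-care '*'."""
    return [g if random.random() > p else '*' for g in ind]

def superposition(schema, ind):
    """Instantiate a schema on an H-GA individual: fixed positions
    come from the schema, '*' positions from the individual."""
    return [g if s == '*' else s for s, g in zip(schema, ind)]

def repair(schema):
    """Stochastic repair sketch: blank one fixed position of a
    violated constraint, raising the schema's local consistency."""
    fixed = {i for i, s in enumerate(schema) if s != '*'}
    bad = [(a, b) for a, b in CONSTRAINTS
           if a in fixed and b in fixed and schema[a] == schema[b]]
    if bad:
        schema[random.choice(random.choice(bad))] = '*'
    return schema

ind = [random.choice(DOMAIN) for _ in range(4)]
schema = repair(transcription(ind))
child = superposition(schema, ind)
```

A real run would iterate these operators over two coevolving populations; the sketch only demonstrates how a schema with don't-care symbols mediates between the two genetic spaces.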

    Evolutionary Decomposition of Complex Design Spaces

    This dissertation investigates the support of conceptual engineering design through the decomposition of multi-dimensional search spaces into regions of high performance. Such decomposition helps the designer identify optimal design directions by eliminating infeasible or undesirable regions within the search space. Moreover, high levels of interaction between the designer and the model increase overall domain knowledge and significantly reduce uncertainty relating to the design task at hand. The aim of the research is to develop the archetypal Cluster Oriented Genetic Algorithm (COGA), which achieves search space decomposition by using variable mutation (vmCOGA) to promote diverse search and an Adaptive Filter (AF) to extract solutions of high performance [Parmee 1996a, 1996b]. Since COGAs are primarily used to decompose design domains of unknown nature within a real-time environment, the elimination of a priori knowledge, speed, and robustness are paramount. Furthermore, COGA should promote in-depth exploration of the entire search space, sampling all optima and their surrounding areas. Finally, any proposed system should allow trouble-free integration within a Graphical User Interface environment. The replacement of the variable mutation strategy with a number of algorithms that increase search space sampling is investigated. Utility is then increased by incorporating a control mechanism that maintains optimal performance by adapting each algorithm throughout the search, using a feedback measure based upon population convergence. Robustness is greatly improved by modifying the Adaptive Filter through the introduction of a process that ensures more accurate modelling of the evolving population. The performance of each prospective algorithm is assessed on a suite of two-dimensional test functions using a set of novel performance metrics.
A six-dimensional test function is also developed in which the areas of high performance are explicitly known, allowing evaluation under conditions of increased dimensionality. Further complexity is introduced by two real-world models described by both continuous and discrete parameters, relating to the design of conceptual airframes and of cooling hole geometries within a gas turbine. Results are promising and indicate significant improvement over the vmCOGA in terms of all desired criteria, further supporting the use of COGA as a decision support tool during the conceptual phase of design. British Aerospace plc, Warton and Rolls Royce plc, Filto
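The core idea of filtering an evolving population to expose high-performance regions can be shown with a toy sketch. This is not Parmee's Adaptive Filter: the two-variable performance function, the mean-plus-standard-deviation threshold, and all constants are assumptions made purely for illustration of the thresholding concept.

```python
import random
import statistics

random.seed(2)

# Hypothetical 2-D design space whose high-performance region
# surrounds the point (0.7, 0.3); higher is better.
def performance(x, y):
    return -((x - 0.7) ** 2 + (y - 0.3) ** 2)

def adaptive_filter(pop, k=1.0):
    """Keep only solutions whose fitness exceeds a threshold derived
    from the current population (mean + k * standard deviation)."""
    fits = [performance(*p) for p in pop]
    thr = statistics.mean(fits) + k * statistics.pstdev(fits)
    return [p for p, f in zip(pop, fits) if f > thr]

# Stand-in for an evolved population: uniform samples of the space.
pop = [(random.random(), random.random()) for _ in range(200)]
region = adaptive_filter(pop)
```

Raising `k` tightens the filter; in a COGA-like setting the surviving solutions would be clustered to delineate the high-performance regions presented to the designer.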

    Automatic Test Data Generation Using Constraint Programming and Search Based Software Engineering Techniques

    Proving that a software system corresponds to its specification, or revealing hidden errors in its implementation, is a time-consuming and tedious testing process that can account for more than 50% of total software cost. Test-data generation is one of the most expensive parts of the software testing phase. Therefore, automating this task can significantly reduce software cost, development time, and time to market. Many researchers have proposed automated approaches to generate test data. Among the proposed approaches, the literature shows that Search-Based Software Test-Data Generation (SB-STDG) techniques can generate test data automatically. However, these techniques are very sensitive to their guidance, which impacts the whole test-data generation process. Insufficient information about the test-data generation problem can weaken the SB-STDG guidance and negatively affect its efficiency and effectiveness. In this dissertation, our thesis is that statically analyzing source code to identify and extract relevant information, and exploiting that information in the SB-STDG process, can offer more guidance and thus improve the efficiency and effectiveness of SB-STDG. To extract information relevant for SB-STDG guidance, we statically analyze the internal structure of the source code, focusing on six features: constants, conditional statements, arguments, data members, methods, and relationships.
    Focusing on these features and using different existing static-analysis techniques, i.e., constraint programming (CP), schema theory, and some lightweight static analyses, we propose four approaches: (1) focusing on arguments and conditional statements, we define a hybrid approach that uses CP techniques to guide SB-STDG in reducing its search space; (2) focusing on conditional statements and using CP techniques, we define two new metrics that measure the difficulty of satisfying a branch (i.e., condition), from which we derive two new fitness functions to guide SB-STDG; (3) focusing on conditional statements and using schema theory, we tailor a genetic algorithm to better fit the problem of test-data generation; (4) focusing on arguments, conditional statements, constants, data members, methods, and relationships, and using lightweight static analyses, we define an instance generator that produces relevant test-data candidates, together with a new representation of the object-oriented test-data generation problem that implicitly reduces the SB-STDG search space. We show that using static analyses improves SB-STDG efficiency and effectiveness. The results achieved in this dissertation show important improvements in terms of effectiveness and efficiency. They are promising, and we hope that further research in the field of test-data generation can improve efficiency and effectiveness further.
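The fitness side of SB-STDG can be made concrete with the standard branch-distance construction: for an equality predicate the distance is 0 when the branch is taken and |lhs − rhs| otherwise, and the search minimizes it. The unit under test, the starting point, and the alternating-variable-style hill climb below are all hypothetical, chosen only to demonstrate the mechanism, not the dissertation's approaches.

```python
# Hypothetical unit under test containing the branch to cover:
#     if a * 2 == b + 10: ...
def branch_distance(a, b):
    """Standard branch distance for an equality predicate:
    0 when the branch is taken, |lhs - rhs| otherwise."""
    return abs(a * 2 - (b + 10))

def search_test_data(start=(87, -4)):
    """Alternating-variable-style hill climbing: repeatedly take the
    unit move on one input that most reduces the branch distance.
    Each step strictly decreases the distance, so this terminates."""
    a, b = start
    while branch_distance(a, b) > 0:
        a, b = min([(a + 1, b), (a - 1, b), (a, b + 1), (a, b - 1)],
                   key=lambda t: branch_distance(*t))
    return a, b

a, b = search_test_data()
```

Static analysis enters exactly here: knowing the shape of the predicate (from conditional statements) or bounds on the arguments lets the generator start closer to, or constrain the search to, the satisfying region.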

    Automating the construction process of fuzzy data structures

    Basic notions of fuzzy logic -- Research problem and motivation -- Knowledge-based systems -- Automatic generation of fuzzy knowledge bases -- Overview of genetic algorithms -- Overview of the thermomechanical pulping process -- Proposed research -- Hybrid and binary genetic algorithms for the automatic generation of knowledge bases -- Multi-combinative strategies to avoid premature convergence in genetic algorithms -- Online prediction of the ISO brightness of thermomechanical pulp -- Real/binary-like coded versus binary coded genetic algorithms to automatically generate fuzzy knowledge bases: a comparative study -- Fuzzy decision support system -- Automatic generation of fuzzy knowledge bases using GAs -- Learning process -- Validation results -- Multi-combinative strategy to avoid premature convergence in genetically-generated fuzzy knowledge bases -- Introduction and problem definition -- Real/binary-like coded genetic algorithm -- Performance criteria -- Evolutionary strategy -- Application to experimental data -- Online prediction of pulp brightness using fuzzy logic models -- The chips management system -- Experiment plan for data collection -- Selection of the influencing variables -- Genetic-based learning process -- Performance criterion -- Evolutionary strategy -- Learning the FKBs for brightness prediction -- Learning the FKBs using laboratory variables

    From specialists to generalists: inductive biases of deep learning for higher level cognition

    Current neural networks achieve state-of-the-art results across a range of challenging problem domains. Given enough data and computation, current neural networks can achieve human-level results on almost any task. In this sense, we have been able to train specialists that can perform a particular task very well, whether it is the game of Go, playing Atari games, Rubik's Cube manipulation, image captioning, or drawing images given captions. The next challenge for AI is to devise methods to train generalists that, when exposed to multiple tasks during training, can quickly adapt to new, unknown tasks. Without any assumptions about the data-generating distribution, it may not be possible to achieve better generalization and adaptation to new (unknown) tasks. A fascinating possibility is that human and animal intelligence could be explained by a few principles (rather than an encyclopedia of facts). If that were the case, we could more easily both understand our own intelligence and build intelligent machines. Just as in physics, the principles themselves would not be sufficient to predict the behavior of complex systems like brains, and substantial computation might be needed to simulate human intelligence. In addition, we know that real brains incorporate detailed task-specific a priori knowledge that could not fit in a short list of simple principles. So we think of that short list rather as explaining the ability of brains to learn and adapt efficiently to new environments, which is a great part of what we need for AI. If this simplicity-of-principles hypothesis were correct, it would suggest that studying the kind of inductive biases (another way to think about design principles and priors, in the case of learning systems) that humans and animals exploit could help both clarify these principles and provide inspiration for AI research. Deep learning already exploits several key inductive biases, and my work considers a larger list, focusing on those that mostly concern higher-level cognitive processing. My work focuses on designing such models by incorporating strong but general assumptions (inductive biases) that enable high-level reasoning about the structure of the world. This research program is both ambitious and practical, yielding concrete algorithms as well as a cohesive vision for long-term research towards generalization in a complex and changing world.

    Parameter selection and tuning for the optimization of industrial scheduling software

    The use of scheduling software requires setting up a number of parameters that have a direct influence on schedule quality. Nowadays, this set-up is obtained manually, after an extensive effort during initial software installation. Moreover, the set-up is rarely called into question by users, due to their lack of experience and to the high number of parameters involved. This thesis suggests using metaheuristics to automate this task. Two problems are considered: the selection of relevant parameters and their tuning according to user requirements. We suggest an approach that solves both problems simultaneously, based on combining metaheuristics with parameter selection strategies. An implementation framework has been developed and tested on an industrial scheduler, Ortems®. The first results of using this framework on real industrial databases are described and discussed.
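Mixing parameter selection and parameter tuning in a single metaheuristic search can be sketched as follows. This is a toy illustration, not the thesis's framework: the surrogate quality function, the 0.01 cost per active parameter, the default value for deselected parameters, and all move probabilities are assumptions (the real objective would be the scheduler's output quality).

```python
import random

random.seed(4)

# Hypothetical surrogate for schedule quality (lower is better):
# squared error of the effective parameter vector against an optimum
# unknown to the search, plus a small cost per *active* parameter.
# Deselected parameters fall back to a default of 0.0, so the two
# irrelevant trailing parameters are best left unselected.
TARGET = [0.3, 0.7, 0.5, 0.0, 0.0]

def quality(mask, values):
    eff = [v if m else 0.0 for m, v in zip(mask, values)]
    err = sum((e - t) ** 2 for e, t in zip(eff, TARGET))
    return err + 0.01 * sum(mask)

def select_and_tune(steps=5000):
    """Hill climbing that mixes selection moves (activate/deactivate
    a parameter) with tuning moves (nudge a parameter's value)."""
    mask = [1] * 5
    values = [0.5] * 5
    best = quality(mask, values)
    for _ in range(steps):
        m2, v2 = mask[:], values[:]
        i = random.randrange(5)
        if random.random() < 0.2:
            m2[i] ^= 1                      # selection move
        else:
            v2[i] += random.gauss(0, 0.05)  # tuning move
        q = quality(m2, v2)
        if q < best:
            mask, values, best = m2, v2, q
    return mask, values, best

mask, values, best = select_and_tune()
```

Because every active parameter carries a cost, the search deactivates the parameters whose defaults are already adequate while tuning the remaining ones, which is the simultaneous selection-and-tuning behaviour described above.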