
    Statistical Tree-based Population Seeding for Rolling Horizon EAs in General Video Game Playing

    Multiple Artificial Intelligence (AI) methods have been proposed in recent years to create controllers that can play video games of varying nature and complexity without the specific mechanics of each game being revealed to the AI method. Evolutionary Algorithms (EAs) employing rolling horizon mechanisms have achieved extraordinary results on this type of problem. However, Rolling Horizon EAs have limitations that make them a grand challenge for AI. These include the wasteful mechanism of creating a population and evolving it over a fraction of a second in order to propose a single action for the game agent to execute. Another limitation is the use of a single scalar (fitness) value to direct the evolutionary search, rather than a mechanism that captures how a particular agent behaves during the rolling horizon simulation. In this work, we address both issues. We introduce the use of a statistical tree to tackle the latter limitation. Furthermore, we tackle the former limitation by employing a mechanism that seeds part of the population using Monte Carlo Tree Search, a method that has dominated multiple General Video Game AI competitions. We show that the proposed mechanism, called Statistical Tree-based Population Seeding, achieves better results than vanilla Rolling Horizon EAs on a set of 20 games, comprising 10 stochastic and 10 deterministic games.
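
    As a rough illustration of the seeding idea, the sketch below runs a small rolling horizon EA on a toy one-dimensional game and seeds half of the initial population from a simple statistics tree built by random rollouts (standing in here for full MCTS). The toy forward model, heuristic, parameter values and all names are hypothetical; this is only the general shape of the mechanism, not the method from the paper.

        import random

        ACTIONS = (-1, 0, +1)          # toy action set: move left, stay, move right
        HORIZON = 8                    # length of each action sequence (one individual)
        POP_SIZE = 20
        SEED_FRACTION = 0.5            # fraction of the population seeded from the stats tree
        TARGET = 5                     # toy goal position

        def forward(state, action):
            return state + action      # hypothetical forward model

        def value(state):
            return -abs(state - TARGET)    # heuristic: closer to the target is better

        def evaluate(state, seq):
            for a in seq:
                state = forward(state, a)
            return value(state)

        def build_stats_tree(state, rollouts=200):
            """Record mean rollout value for every action prefix (a crude stand-in for MCTS)."""
            stats = {}                 # prefix (tuple of actions) -> [total value, visits]
            for _ in range(rollouts):
                seq = tuple(random.choice(ACTIONS) for _ in range(HORIZON))
                v = evaluate(state, seq)
                for i in range(1, HORIZON + 1):
                    s = stats.setdefault(seq[:i], [0.0, 0])
                    s[0] += v
                    s[1] += 1
            return stats

        def seed_from_tree(stats):
            """Greedily follow the best-scoring child at each depth to build one individual."""
            prefix = ()
            for _ in range(HORIZON):
                children = [prefix + (a,) for a in ACTIONS if prefix + (a,) in stats]
                if not children:
                    prefix = prefix + (random.choice(ACTIONS),)
                    continue
                prefix = max(children, key=lambda c: stats[c][0] / stats[c][1])
            return list(prefix)

        def rolling_horizon_action(state, generations=10):
            stats = build_stats_tree(state)
            n_seeded = int(POP_SIZE * SEED_FRACTION)
            pop = [seed_from_tree(stats) for _ in range(n_seeded)]
            pop += [[random.choice(ACTIONS) for _ in range(HORIZON)]
                    for _ in range(POP_SIZE - n_seeded)]
            for _ in range(generations):                  # simple mutation-only evolution
                pop.sort(key=lambda ind: evaluate(state, ind), reverse=True)
                parents = pop[:POP_SIZE // 2]
                children = []
                for p in parents:
                    c = p[:]
                    c[random.randrange(HORIZON)] = random.choice(ACTIONS)
                    children.append(c)
                pop = parents + children
            best = max(pop, key=lambda ind: evaluate(state, ind))
            return best[0]                                # execute only the first action

        print(rolling_horizon_action(state=0))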

    Towards the control of cell states in gene regulatory networks by evolving Boolean networks

    Biological cell behaviours emerge from complex patterns of interactions between genes and their products, known as gene regulatory networks (GRNs). More specifically, GRNs are complex dynamical structures that orchestrate the activities of biological cells by governing the expression of mRNA and proteins. Many computational models of these networks have been shown to be able to carry out complex computation in an efficient and robust manner, particularly in the domains of control and signal processing. GRNs play a central role within living organisms, and efficient strategies for controlling their dynamics need to be developed. For instance, the ability to push a cell towards or away from certain behaviours is an important aim in fields such as medicine and synthetic biology, and could, for example, help to find novel approaches in the design of therapeutic drugs. However, current approaches to controlling these networks exhibit poor scalability and limited generality. This thesis proposes an alternative method for performing state-space targeting in GRNs: coupling an artificial GRN to an existing GRN. This idea is tested in simulation by coupling together Boolean networks that represent the controlled and controller systems. Evolutionary algorithms are used to evolve the controller Boolean networks, which are applied to a range of controlled Boolean networks, including Boolean models of actual biological circuits, each with different dynamics. The results show that controller Boolean networks can be optimised to control trajectories in the target networks, and that the approach scales well as the target network size increases. The use of Boolean modelling is potentially advantageous from an implementation perspective, since synthetic biology techniques can be used to refine an optimised controller Boolean network into an in vivo form, which could then control a genetic network directly from within a cell.
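
    A minimal sketch of the coupling idea, assuming synchronous Boolean updates: a small controller network reads the target network's state and overrides a few of its nodes, and a fitness function scores how often the coupled system occupies a desired target state. The networks, wiring, desired state and fitness below are invented for illustration; an evolutionary algorithm would then mutate the controller's wiring and truth tables using this fitness, a loop omitted here.

        import random

        N_TARGET = 6                   # nodes in the controlled (target) Boolean network
        N_CTRL = 4                     # nodes in the evolved controller network
        DESIRED = (1, 1, 0, 0, 1, 0)   # hypothetical cell state we want the target to hold

        def random_bn(n, n_inputs_from):
            """Random Boolean network: each node has K=2 inputs and a 4-entry lookup table."""
            return [([random.randrange(n_inputs_from) for _ in range(2)],
                     [random.randrange(2) for _ in range(4)]) for _ in range(n)]

        def step(state, bn, input_state=None):
            """Synchronous update; nodes read from input_state if given (used for coupling)."""
            src = input_state if input_state is not None else state
            new = []
            for inputs, table in bn:
                idx = 2 * src[inputs[0]] + src[inputs[1]]
                new.append(table[idx])
            return tuple(new)

        def coupled_step(t_state, c_state, target_bn, ctrl_bn, override):
            c_next = step(c_state, ctrl_bn, input_state=t_state)   # controller senses the target
            t_next = list(step(t_state, target_bn))
            for t_node, c_node in override.items():                # controller drives some nodes
                t_next[t_node] = c_next[c_node]
            return tuple(t_next), c_next

        def fitness(ctrl_bn, target_bn, override, steps=20):
            """Fraction of update steps spent in the desired state (to be maximised)."""
            t_state = tuple(random.randrange(2) for _ in range(N_TARGET))
            c_state = tuple(random.randrange(2) for _ in range(N_CTRL))
            hits = 0
            for _ in range(steps):
                t_state, c_state = coupled_step(t_state, c_state, target_bn, ctrl_bn, override)
                hits += (t_state == DESIRED)
            return hits / steps

        target_bn = random_bn(N_TARGET, N_TARGET)
        ctrl_bn = random_bn(N_CTRL, N_TARGET)       # controller inputs read target nodes
        override = {0: 0, 3: 1}                     # controller nodes 0 and 1 drive target nodes 0 and 3
        print(fitness(ctrl_bn, target_bn, override))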

    Numerical Simplification and its Effect on Fragment Distributions in Genetic Programming

    In tree-based genetic programming (GP) there is a tendency for the program trees to increase in size from one generation to the next. If this increase in program size is not accompanied by an improvement in fitness, this unproductive growth is known as bloat. It is standard practice to place some form of control on program size, either by limiting the number of nodes or the depth of the program trees, by adding a component to the fitness function that rewards smaller programs (parsimony pressure), or by simplifying individual programs using algebraic methods. This thesis proposes a novel program simplification method called numerical simplification, which uses only the range of values the nodes take during fitness evaluation. The effect of online program simplification, both algebraic and numerical, on program size and resource usage is examined. This thesis also examines the distribution of program fragments within a genetic programming population and how this is changed by simplification. It is shown that both simplification approaches reduce average program size, memory usage and computation time, and that numerical simplification performs at least as well as algebraic simplification, in some cases outperforming it. This reduction in program size and in the resources required for the GP run comes without any significant reduction in accuracy. It is also shown that, although the two online simplification methods destroy some existing program fragments, they generate new fragments during evolution, which compensates for any negative effects from the disruption of existing fragments. Finally, it is shown that, after the first few generations, the rate at which new fragments are created, the rate at which fragments are lost from the population, and the number of distinct (different) fragments in the population remain within a very narrow range of values for the remainder of the run.
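
    A small sketch of what numerical simplification might look like in practice, assuming programs are nested Python tuples and that every subtree's output range is recorded while the fitness cases are evaluated: any subtree whose observed range is narrower than a tolerance is replaced by a constant. The representation, tolerance and helper names are hypothetical, not taken from the thesis.

        # Programs are nested tuples: ('add', left, right), ('mul', ...), numbers, or 'x'.
        OPS = {'add': lambda a, b: a + b, 'sub': lambda a, b: a - b, 'mul': lambda a, b: a * b}

        def evaluate(node, x, ranges):
            """Evaluate a subtree and record the (min, max) of its output across fitness cases."""
            if isinstance(node, (int, float)):
                val = node
            elif node == 'x':
                val = x
            else:
                op, left, right = node
                val = OPS[op](evaluate(left, x, ranges), evaluate(right, x, ranges))
            lo, hi = ranges.get(id(node), (val, val))
            ranges[id(node)] = (min(lo, val), max(hi, val))
            return val

        def simplify(node, ranges, tol=1e-9):
            """Replace any subtree whose observed output range is (almost) constant."""
            if isinstance(node, tuple) and id(node) in ranges:
                lo, hi = ranges[id(node)]
                if hi - lo <= tol:
                    return (lo + hi) / 2.0             # collapse the subtree to a constant node
                op, left, right = node
                return (op, simplify(left, ranges, tol), simplify(right, ranges, tol))
            return node

        program = ('add', ('mul', 'x', ('sub', 3, 3)), ('mul', 2, 'x'))   # x*(3-3) + 2*x
        ranges = {}
        for case in [-2.0, -1.0, 0.0, 1.0, 2.0]:       # fitness cases
            evaluate(program, case, ranges)
        print(simplify(program, ranges))               # ('add', 0.0, ('mul', 2, 'x'))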

    Bioinformatics Applications Based On Machine Learning

    The great advances in information technology (IT) have implications for many sectors, such as bioinformatics, and have considerably increased their possibilities. This book presents a collection of 11 original research papers, all related to the application of IT techniques within the bioinformatics sector: from new applications created by adapting and applying existing techniques, to new methodologies developed to solve existing problems.

    Systolic genetic search, a parallel metaheuristic for GPUs

    The use of graphics processing units (GPUs) for solving general-purpose problems has grown dramatically in recent years, driven by their wide availability, their low economic cost and their inherently parallel architecture, as well as by the emergence of general-purpose programming languages that have eased the development of applications on these platforms. In this context, the design of new parallel algorithms that can benefit from the use of GPUs is a promising and interesting line of research. Metaheuristics are stochastic algorithms capable of finding very accurate (often optimal) solutions to optimization problems in a reasonable time. However, since many optimization problems involve tasks that demand large computational resources, and/or the instances currently being tackled are becoming very large, even metaheuristics can be computationally very expensive. In this scenario, parallelism emerges as a successful alternative for accelerating the search performed by this kind of algorithm. Besides reducing execution time, parallel metaheuristics are often able to improve the quality of the results obtained by traditional sequential algorithms. Although GPUs have also been an inspiring domain for research in parallel metaheuristics, most previous work has aimed at porting an existing family of algorithms to this new kind of hardware. As a consequence, many publications are devoted to showing the savings in execution time that can be achieved by running the existing parallel variants of metaheuristics on GPUs. In other words, despite a considerable body of work on this topic, few novel ideas have been proposed that seek to design new algorithms and/or parallelism models explicitly exploiting the high degree of parallelism available in GPU architectures. This thesis addresses the design of an innovative parallel optimization algorithm called Systolic Genetic Search (SGS), which combines ideas from the fields of metaheuristics and systolic computing. SGS, like systolic computing, is inspired by the same biological phenomenon: the systolic contraction of the heart that makes blood circulation possible. In SGS, solutions circulate synchronously through a grid of cells. When two solutions meet in a cell, adapted evolutionary operators are applied to generate new solutions that continue moving through the grid. The implementation of this new proposal takes particular advantage of the specific characteristics of GPUs. An extensive experimental analysis, considering several classical benchmark problems and two real-world problems from the Software Engineering field, shows that the proposed algorithm is highly effective, finding optimal or near-optimal solutions in short execution times. Moreover, the numerical results obtained by SGS are competitive with state-of-the-art results for the two real-world problems considered. The parallel GPU implementation of SGS also achieves high performance, obtaining large reductions in execution time with respect to the sequential implementation and scaling adequately when instances of increasing size are considered. A theoretical analysis of the search capabilities of SGS has also been carried out in order to understand how certain design aspects of the algorithm affect its numerical results. This analysis sheds light on aspects of the operation of SGS that can be used to improve the design of the algorithm in future variants.
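
    The sketch below illustrates the described data flow on a toy OneMax problem: two streams of bit-string solutions move synchronously through a small grid, one per row and one per column, and crossover plus mutation are applied wherever two solutions meet in a cell. It is a plain CPU sketch of the movement scheme only; the grid size, operators and problem are placeholders, and none of the GPU-specific details of SGS are reproduced.

        import random

        ROWS, COLS, N_BITS = 4, 4, 20          # toy grid and solution length

        def onemax(sol):                        # toy objective to maximise: number of ones
            return sum(sol)

        def rand_sol():
            return [random.randrange(2) for _ in range(N_BITS)]

        def cross_and_mutate(a, b, p_mut=0.05):
            """One-point crossover of the two solutions meeting in a cell, then bit-flip mutation."""
            cut = random.randrange(1, N_BITS)
            child = a[:cut] + b[cut:]
            return [bit ^ (random.random() < p_mut) for bit in child]

        # One solution travels along each row and one down each column of the grid.
        h_stream = [rand_sol() for _ in range(ROWS)]
        v_stream = [rand_sol() for _ in range(COLS)]
        best = max(h_stream + v_stream, key=onemax)

        for step in range(100):                 # synchronous systolic steps
            for r in range(ROWS):
                for c in range(COLS):           # the two streams meet in cell (r, c)
                    child = cross_and_mutate(h_stream[r], v_stream[c])
                    if onemax(child) >= onemax(h_stream[r]):
                        h_stream[r] = child     # the offspring keeps moving through the grid
                    elif onemax(child) >= onemax(v_stream[c]):
                        v_stream[c] = child
                    if onemax(child) > onemax(best):
                        best = child
            # rotate the streams to emulate the circulation of solutions between cells
            h_stream = h_stream[1:] + h_stream[:1]
            v_stream = v_stream[1:] + v_stream[:1]

        print(onemax(best))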

    Explainable methods for knowledge graph refinement and exploration via symbolic reasoning

    Knowledge Graphs (KGs) have applications in many domains such as Finance, Manufacturing, and Healthcare. While recent efforts have created large KGs, their content is far from complete and sometimes includes invalid statements. Therefore, it is crucial to refine the constructed KGs to enhance their coverage and accuracy via KG completion and KG validation. It is also vital to provide human-comprehensible explanations for such refinements, so that humans can trust the KG quality. Enabling KG exploration, by search and browsing, is also essential for users to understand the KG's value and limitations with respect to downstream applications. However, the large size of KGs makes KG exploration very challenging. While the type taxonomy of KGs is a useful asset along these lines, it remains insufficient for deep exploration. In this dissertation we tackle the aforementioned challenges of KG refinement and KG exploration by combining logical reasoning over the KG with other techniques such as KG embedding models and text mining. Through this combination, we introduce methods that provide human-understandable output. Concretely, we introduce methods to tackle KG incompleteness by learning exception-aware rules over the existing KG; learned rules are then used to infer missing links in the KG accurately. Furthermore, we propose a framework for constructing human-comprehensible explanations for candidate facts from both KG and text; extracted explanations are used to ensure the validity of KG facts. Finally, to facilitate KG exploration, we introduce a method that combines KG embeddings with rule mining to compute informative entity clusters with explanations. In more detail, the dissertation makes the following contributions:
    • For KG completion, we present ExRuL, a method for revising Horn rules by adding exception conditions to the rule bodies. The revised rules can infer new facts and thereby close gaps in the KG. Experiments with large KGs show that this method considerably reduces errors in the inferred facts and yields user-friendly explanations.
    • With RuLES, we present a rule-learning method that relies on probabilistic representations of missing facts. The approach iteratively extends rules induced from a KG by combining neural KG embeddings with information from text corpora, using new rule-quality metrics during rule generation. Experiments show that RuLES substantially improves the quality of the learned rules and of their predictions.
    • To support KG validation, we present ExFaKT, a framework for constructing explanations for candidate facts. The method uses rules to transform a candidate into a set of statements that are easier to find and to validate or refute. The output of ExFaKT is a set of semantic pieces of evidence for the candidate fact, extracted from text corpora and from the KG. Experiments show that these transformations significantly improve the yield and quality of the discovered explanations. The generated explanations support both manual KG validation by curators and automatic validation.
    • To support KG exploration, we present ExCut, a method that produces informative entity clusters with explanations, using KG embeddings and automatically induced rules. A cluster explanation consists of a combination of relations between the entities that identifies the cluster. ExCut improves cluster quality and cluster explainability simultaneously by iteratively interleaving the learning of embeddings and rules. Experiments show that ExCut computes clusters of high quality and that the cluster explanations are informative for users.
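
    As a toy illustration of exception-aware rules (in the spirit of ExRuL, but not its actual algorithm), the sketch below scores a Horn rule over a small set of triples and then checks whether adding a candidate exception (a negated atom) raises the rule's confidence. All facts, relations and the confidence measure are made up for the example.

        # A tiny KG as a set of (subject, relation, object) triples.
        KG = {
            ("anna", "livesIn", "berlin"), ("anna", "marriedTo", "ben"),
            ("ben", "livesIn", "berlin"),
            ("carl", "livesIn", "paris"), ("carl", "marriedTo", "dora"),
            ("dora", "livesIn", "rome"), ("dora", "researcher", "yes"),
        }

        def married_pairs(kg):
            return [(x, y) for (x, r, y) in kg if r == "marriedTo"]

        def confidence(kg, exception_relation=None):
            """Confidence of: marriedTo(X, Y) ^ livesIn(X, Z) [^ not exception(Y)] => livesIn(Y, Z)."""
            support, body = 0, 0
            for x, y in married_pairs(kg):
                for (s, r, z) in kg:
                    if s == x and r == "livesIn":
                        if exception_relation and (y, exception_relation, "yes") in kg:
                            continue                   # exception fires: body not satisfied
                        body += 1
                        support += (y, "livesIn", z) in kg
            return support / body if body else 0.0

        print("plain rule:     ", confidence(KG))                   # counterexample lowers confidence
        print("with exception: ", confidence(KG, "researcher"))     # exception removes the counterexample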

    Genetic Programming for Classification with Unbalanced Data

    In classification, machine learning algorithms can suffer a performance bias when data sets are unbalanced. Binary data sets are unbalanced when one class is represented by only a small number of training examples (the minority class), while the other class makes up the rest (the majority class). In this scenario, the induced classifiers typically have high accuracy on the majority class but poor accuracy on the minority class. As the minority class typically represents the main class-of-interest in many real-world problems, accurately classifying examples from this class can be at least as important as, and in some cases more important than, accurately classifying examples from the majority class. Genetic Programming (GP) is a promising machine learning technique based on the principles of Darwinian evolution that automatically evolves computer programs to solve problems. While GP has shown much success in evolving reliable and accurate classifiers for typical classification tasks with balanced data, GP, like many other learning algorithms, can evolve biased classifiers when data is unbalanced. This is because traditional training criteria, such as the overall success rate used in the GP fitness function, can be dominated by the larger number of examples from the majority class. This thesis proposes a GP approach to classification with unbalanced data. The goal is to develop new internal cost-adjustment techniques in GP to improve classification performance on both the minority and the majority class. By focusing on internal cost-adjustment within GP rather than on traditional data-balancing techniques, the unbalanced data can be used directly, or "as is", in the learning process. This removes any dependence on a sampling algorithm to artificially re-balance the input data prior to learning. The thesis shows that, by developing a number of new methods in GP, genetic program classifiers with good classification ability on both the minority and the majority classes can be evolved. These methods are evaluated on a range of binary benchmark classification tasks with unbalanced data. The thesis demonstrates that, unlike tasks with multiple balanced classes, where some dynamic (non-static) classification strategies perform significantly better than the simple static classification strategy, on these binary tasks static and dynamic strategies show no significant difference in the performance of the evolved GP classifiers; the static classification strategy is therefore used in the rest of the thesis. Several new fitness functions are proposed to perform cost adjustment between the minority and the majority classes, allowing the unbalanced data sets to be used directly in the learning process without sampling. Using the Area Under the Receiver Operating Characteristic (ROC) Curve (AUC) to measure how well a classifier performs on the minority and majority classes, these new fitness functions find genetic program classifiers with high AUC on both classes across the tasks, and with fast GP training times. These GP methods outperform two popular learning algorithms, Naive Bayes and Support Vector Machines, on these tasks, particularly when the level of class imbalance is large and both algorithms show biased classification performance. The thesis also proposes a multi-objective GP (MOGP) approach which treats the accuracies of the minority and majority classes as separate objectives in the learning process. The MOGP approach evolves a good set of trade-off solutions (a Pareto front) in a single run that perform as well as, and in some cases better than, multiple runs of canonical single-objective GP (SGP). In SGP, individual genetic program solutions capture the performance trade-off between the two objectives (minority and majority class accuracy) using an ROC curve, whereas in MOGP this requirement is delegated to multiple genetic program solutions along the Pareto front. The thesis also shows how multiple Pareto-front classifiers can be combined into an ensemble in which individual members vote on the class label. Two ensemble diversity measures are developed for the fitness functions which treat diversity on the minority and the majority classes as equally important; otherwise, these measures risk being biased toward the majority class. The evolved ensembles outperform their individual members on the tasks due to good cooperation between members. Ensemble performance is further improved by a GP approach to ensemble selection, which quickly finds small groups of individuals that cooperate very well together in the ensemble. The pruned ensembles use far fewer individuals to achieve performance that is as good as that of larger (unpruned) ensembles, particularly on tasks with high levels of class imbalance, thereby reducing the total time to evaluate the ensemble.
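
    A minimal sketch of the cost-adjustment idea: rather than overall accuracy, the fitness below averages the per-class accuracies of a program's predictions, so the minority class carries equal weight without resampling the data. The data and the trivial stand-in "program" are placeholders; the thesis's actual fitness functions (including its AUC-based variants) are more elaborate.

        def balanced_fitness(program, examples):
            """Average of minority- and majority-class accuracy (to be maximised by GP)."""
            correct = {0: 0, 1: 0}
            totals = {0: 0, 1: 0}
            for features, label in examples:
                totals[label] += 1
                prediction = 1 if program(features) >= 0.0 else 0   # static classification strategy
                correct[label] += (prediction == label)
            per_class = [correct[c] / totals[c] for c in (0, 1) if totals[c]]
            return sum(per_class) / len(per_class)

        # Hypothetical unbalanced data: class 1 is the minority class-of-interest.
        examples = [((x,), 0) for x in range(-50, 0)] + [((x,), 1) for x in range(1, 6)]
        program = lambda feats: feats[0] - 0.5        # stand-in for an evolved GP tree
        print(balanced_fitness(program, examples))    # 1.0: both classes classified perfectly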

    Typogenetic design - aesthetic decision support for architectural shape generation

    Typogenetic Design is an interactive computational design system combining generative design, evolutionary search and architectural optimisation technology. The tool supports design decisions during architectural shape generation by using an aesthetic system to guide the search process toward preferences expressed interactively by the designer. An image provided as a design reference is integrated by means of shape comparison to give direction to the exploratory search. During the shape generation process, the designer can choose solutions interactively in a graphical user interface; those choices are then used, via online classification, to support the selection process as part of the fitness function. Enhancing human decision-making capabilities in human-in-the-loop design systems addresses the complexity of architecture with respect to aesthetic requirements. Building on machine learning, the performance trade-off handled during multi-criteria optimisation was extended to address aesthetic preferences, so that the tacit knowledge and subjective understanding of designers can be brought into the shape generation process through interactive mechanisms. As a result, an integrated support system for performance-based design was developed and tested. Closing the loop from design to construction through design optimisation of structural nodes in a set of case studies confirmed the need for intuitive design systems, interfaces and mechanisms that make architectural optimisation more accessible and easier to handle. This dissertation investigated Typogenetic Design as a tool for initial morphological search. Novel instruments for human interaction with design systems were developed using mixed-method research. The investigation consists of an in-depth technological enquiry into the use of interactive generative design for exploratory search as an integrated support system for performance-based design. Associated project-based research on the design potential of Typogenetic Design showcases the application of the design system in architecture. Generative design as an expressive tool for producing architectural geometries was investigated with regard to its ability to drive the initial morphological search of complex geometries. The reinterpretation of processes and the boost in productivity offered by artificial intelligence were instrumental in exploring a holistic approach combining quantitative and qualitative criteria in a human-in-the-loop system. The shift in focus from an objective to a subjective understanding of computational design processes indicates a change of perspective from optimisation to learning as a computational paradigm. Integrating learning capabilities into architectural optimisation enhances the ability of architects to explore large design spaces of emergent representations using evolutionary search. The shift from design automation to interactive generative design makes it possible for designers to contribute their knowledge and expertise to the computational system by evaluating shape solutions, while the aesthetic system is trained to adapt to the choices made by the designer. Furthermore, an initial image input allows the designer to add a design reference to the Typogenetic Design process; shape comparison using a similarity measure then provides additional guidance to the architectural shape generation based on grammar evolution. Finally, a software prototype was built and tested by means of user-experience evaluation. These participant experiments led to the specification of custom requirements for the software implementation of a parametric Typogenetic tool. Semi-automated design was explored in application to different design cases using the software prototype of Typogenetic Design. Interactive mass-customisation is a promising application of Typogenetic Design, allowing product structure and component composition to be specified interactively. The semi-automated design paradigm is one step on the way to moderating the balance between automation and control in computational design systems.
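
    A rough sketch of how such an interactive fitness might be composed, assuming a structural performance score, a similarity measure against a reference image, and a preference model updated online from the designer's selections. The weighting, the perceptron-style preference model and the feature representation are all invented for illustration; the thesis's actual aesthetic system is not reproduced here.

        class PreferenceModel:
            """Tiny online classifier (perceptron-style) trained from designer picks."""
            def __init__(self, n_features):
                self.w = [0.0] * n_features

            def score(self, features):
                return sum(w * f for w, f in zip(self.w, features))

            def update(self, features, liked, lr=0.1):
                target = 1.0 if liked else -1.0
                for i, f in enumerate(features):
                    self.w[i] += lr * target * f

        def fitness(shape_features, performance, image_similarity, pref_model,
                    weights=(0.5, 0.3, 0.2)):
            """Weighted blend of performance, reference-image similarity and learned preference."""
            w_perf, w_sim, w_pref = weights
            return (w_perf * performance
                    + w_sim * image_similarity
                    + w_pref * pref_model.score(shape_features))

        # Toy usage: the designer likes the first candidate and rejects the second.
        model = PreferenceModel(n_features=3)
        model.update([0.9, 0.1, 0.4], liked=True)
        model.update([0.2, 0.8, 0.1], liked=False)
        print(fitness([0.8, 0.2, 0.5], performance=0.7, image_similarity=0.6, pref_model=model))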

    Towards adaptive anomaly detection systems using boolean combination of hidden Markov models

    Anomaly detection monitors for significant deviations from normal system behavior. Hidden Markov Models (HMMs) have been successfully applied in many intrusion detection applications, including anomaly detection from sequences of operating system calls. In practice, anomaly detection systems (ADSs) based on HMMs typically generate false alarms because they are designed using limited representative training data and prior knowledge. However, since new data may become available over time, an important feature of an ADS is the ability to accommodate newly acquired data incrementally, after the system has been trained and deployed for operations. Incremental re-estimation of HMM parameters raises several challenges: parameters should be updated from new data without requiring access to the previously learned training data, and without corrupting previously learned models of normal behavior. Standard techniques for training HMM parameters involve iterative batch learning, and hence must observe the entire training data before updating the parameters; given new training data, these techniques must restart the training procedure using all (new and previously accumulated) data. Moreover, a single HMM system for incremental learning may not adequately approximate the underlying data distribution of the normal process, due to the many local maxima in the solution space. Ensemble methods have been shown to alleviate knowledge corruption by combining the outputs of classifiers trained independently on successive blocks of data. This thesis makes contributions at the HMM and decision levels towards improved accuracy, efficiency and adaptability of HMM-based ADSs. It first presents a survey of techniques found in the literature that may be suitable for incremental learning of HMM parameters, and assesses the challenges faced when these techniques are applied to incremental learning scenarios in which the new training data is limited or abundant. Accordingly, an efficient alternative to the Forward-Backward algorithm is first proposed to reduce the memory complexity of estimating HMM parameters from fixed-size abundant data without increasing the computational overhead. Improved techniques for incremental learning of HMM parameters are then proposed to accommodate new data over time while maintaining a high level of performance. However, knowledge corruption caused by a single HMM with a fixed number of states remains an issue. To overcome such limitations, this thesis presents an efficient system that accommodates new data using a learn-and-combine approach at the decision level. When a new block of training data becomes available, a new pool of base HMMs is generated from the data using different numbers of HMM states and random initializations. The responses of the newly trained HMMs are then combined with those of the previously trained HMMs in receiver operating characteristic (ROC) space using novel Boolean combination (BC) techniques. The learn-and-combine approach allows a diversified ensemble of HMMs (EoHMMs) to be selected from the pool, and adapts the Boolean fusion functions and thresholds for improved performance, while pruning redundant base HMMs. The proposed system is capable of changing its desired operating point during operations, and this point can be adjusted to changes in prior probabilities and costs of errors. In simulations conducted for incremental learning from successive data blocks using both synthetic and real-world system call data sets, the proposed learn-and-combine approach has been shown to achieve a higher level of accuracy than all related techniques. In particular, it can sustain a significantly higher level of accuracy than re-estimating the parameters of a single best HMM for each new block of data, using either the reference batch learning or the proposed incremental learning techniques. It also outperforms static fusion techniques, such as majority voting, for combining the responses of new and previously generated pools of HMMs. Ensemble selection techniques have been shown to form compact EoHMMs for operations by selecting diverse and accurate base HMMs from the pool, while maintaining or improving the overall system accuracy. Pruning has been shown to prevent pool sizes from increasing indefinitely with the number of data blocks acquired over time; the storage space for HMM parameters and the computational cost of the selection techniques are therefore reduced without negatively affecting the overall system performance. The proposed techniques are general, in that they can be employed to adapt HMM-based systems to new data within a wide range of application domains. More importantly, the proposed Boolean combination techniques can be employed to combine diverse responses from any set of crisp or soft one- or two-class classifiers trained on different data or features or according to different parameters, or from different detectors trained on the same data. In particular, they can be effectively applied when training data is limited and test data is imbalanced.
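
    The sketch below illustrates the basic Boolean-combination idea on synthetic scores: the soft scores of two hypothetical detectors are thresholded, their binary decisions are fused with a few Boolean functions, and the (FPR, TPR) of each combination is computed so that the best-performing fusion can be kept. The scores, labels, threshold grid and selection criterion are placeholders; the full technique considers more Boolean functions and keeps the combinations on the composite ROC convex hull.

        import random

        random.seed(1)

        # Synthetic anomaly scores from two hypothetical HMM-based detectors.
        labels = [0] * 200 + [1] * 50                      # 1 = anomalous sequence
        score_a = [random.gauss(0.3 + 0.3 * y, 0.15) for y in labels]
        score_b = [random.gauss(0.4 + 0.25 * y, 0.15) for y in labels]

        BOOLEAN_FUNCS = {
            "AND": lambda a, b: a and b,
            "OR": lambda a, b: a or b,
            "A_AND_NOT_B": lambda a, b: a and not b,
        }

        def rates(decisions, labels):
            tp = sum(d and y for d, y in zip(decisions, labels))
            fp = sum(d and not y for d, y in zip(decisions, labels))
            return fp / labels.count(0), tp / labels.count(1)      # (FPR, TPR)

        best = None
        thresholds = [i / 10 for i in range(1, 10)]
        for ta in thresholds:                  # sweep decision thresholds of both detectors
            for tb in thresholds:
                da = [s >= ta for s in score_a]
                db = [s >= tb for s in score_b]
                for name, f in BOOLEAN_FUNCS.items():
                    fused = [f(a, b) for a, b in zip(da, db)]
                    fpr, tpr = rates(fused, labels)
                    quality = tpr - fpr        # toy criterion; the real method keeps the ROC convex hull
                    if best is None or quality > best[0]:
                        best = (quality, name, ta, tb, fpr, tpr)

        print("best fusion:", best[1], "thresholds:", best[2], best[3],
              "FPR/TPR:", round(best[4], 3), round(best[5], 3))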