13 research outputs found

    Using Evolutionary Strategies for the Real-Time Learning of Controllers for Autonomous Agents in Xpilot-AI

    Real-time learning is the process of an artificial intelligence agent learning behaviors at the same pace as it operates in the real world. Video games tend to be an excellent venue for testing real-time learning agents, as the action happens at real speed with a good visual feedback mechanism, coupled with the possibility of comparing human performance to that of the agent. In addition, players want to compete against a consistently challenging opponent. This paper discusses a controller for an agent in the space combat game Xpilot and the evolution of that controller using two different methods. The controller is a multilayer neural network, which controls all facets of the agent's behavior that are not created in the initial set-up. The neural network is evolved using 1-to-1 evolutionary strategies in one method and genetic algorithms in the other. Using three independent trials per methodology, it was shown that evolutionary strategies learned faster, while genetic algorithms learned more consistently. This suggests that genetic algorithms may be superior when there is ample time before use, but evolutionary strategies are better when learning time is short, as in real-time learning.
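The core loop of a 1-to-1 (i.e. (1+1)) evolutionary strategy of the kind the abstract compares against a genetic algorithm can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `fitness` callable standing in for an evaluation run of the Xpilot agent is hypothetical, as are all parameter values.

```python
import random

# Minimal (1+1) evolutionary strategy for tuning a flat vector of
# neural-network weights. `fitness` is a hypothetical stand-in for
# one evaluation run of the agent; higher scores are better.
def one_plus_one_es(fitness, n_weights, sigma=0.1, generations=200, seed=0):
    rng = random.Random(seed)
    parent = [rng.uniform(-1, 1) for _ in range(n_weights)]
    best = fitness(parent)
    for _ in range(generations):
        # Mutate every weight with Gaussian noise of step size sigma.
        child = [w + rng.gauss(0, sigma) for w in parent]
        score = fitness(child)
        if score >= best:  # keep the child only if it is no worse
            parent, best = child, score
    return parent, best
```

In a real-time setting each fitness evaluation corresponds to a stretch of live play, which is why the number of evaluations needed per unit of improvement matters so much for this comparison.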

    A new hybrid evolutionary algorithm for the treatment of equality constrained MOPs

    Multi-objective evolutionary algorithms are widely used by researchers and practitioners to solve multi-objective optimization problems (MOPs), since they require minimal assumptions and are capable of computing a finite-size approximation of the entire solution set in one run of the algorithm. So far, however, the adequate treatment of equality constraints has played a minor role. Equality constraints are particular since they typically reduce the dimension of the search space, which causes problems for stochastic search algorithms such as evolutionary strategies. In this paper, we show that multi-objective evolutionary algorithms hybridized with continuation-like techniques lead to fast and reliable numerical solvers. For this, we first propose three new problems with different characteristics that are indeed hard to solve by evolutionary algorithms. Next, we develop a variant of NSGA-II with a continuation method. We present numerical results on several equality-constrained MOPs to show that the resulting method is highly competitive with state-of-the-art evolutionary algorithms.
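A quick numerical illustration of why equality constraints are hard for stochastic search, as the abstract notes: the feasible set has measure zero, so uniformly sampled points essentially never satisfy the constraint exactly, and even a tolerance-relaxed version admits only a thin slice of the search space. The constraint h(x) = x1 + x2 - 1 = 0 below is an illustrative example, not one of the paper's test problems.

```python
import random

# Fraction of uniform samples in the unit square that land within
# tolerance eps of the equality constraint h(x) = x1 + x2 - 1 = 0.
# The exact constraint (eps = 0) would be hit with probability zero.
def feasible_fraction(n_samples=100_000, eps=1e-3, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x1, x2 = rng.random(), rng.random()
        if abs(x1 + x2 - 1.0) <= eps:
            hits += 1
    return hits / n_samples
```

For eps = 1e-3 the feasible band covers roughly 0.2% of the square, which is why purely stochastic variation struggles and why continuation-like techniques that move along the constraint manifold help.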

    Learning concurrently partition granularities and rule bases of Mamdani fuzzy systems in a multi-objective evolutionary framework

    In this paper we propose a multi-objective evolutionary algorithm to generate Mamdani fuzzy rule-based systems with different good trade-offs between complexity and accuracy. The main novelty of the algorithm is that both the rule base and the granularity of the uniform partitions defined on the input and output variables are learned concurrently. To this aim, we introduce the concepts of virtual and concrete rule bases: the former is defined on linguistic variables, all partitioned with a fixed maximum number of fuzzy sets, while the latter takes into account, for each variable, a number of fuzzy sets determined by the specific partition granularity of that variable. We exploit a chromosome composed of two parts, which codify the partition granularities of the variables and the virtual rule base, respectively. Genetic operators manage virtual rule bases, whereas fitness evaluation relies on an appropriate mapping strategy between virtual and concrete rule bases. The algorithm has been tested on two real-world regression problems, showing very promising results.
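The virtual-to-concrete mapping could look something like the following sketch: each rule antecedent refers to a fuzzy set index in a virtual partition with a fixed maximum number of sets, and the concrete index is obtained by rescaling to the granularity actually encoded for that variable. The rescaling rule and the `G_MAX` value are assumptions for illustration, not the paper's exact mapping strategy.

```python
G_MAX = 7  # assumed maximum (virtual) number of fuzzy sets per variable

def to_concrete(virtual_rule, granularities):
    """Map one virtual rule (tuple of 1-based fuzzy set indices, one per
    variable) onto the concrete partitions given by `granularities`."""
    concrete = []
    for idx, g in zip(virtual_rule, granularities):
        # Rescale a position in [1, G_MAX] to [1, g], rounding to the
        # nearest concrete fuzzy set.
        pos = (idx - 1) / (G_MAX - 1)  # normalized position in [0, 1]
        concrete.append(1 + round(pos * (g - 1)))
    return tuple(concrete)
```

Genetic operators can then act on the fixed-length virtual encoding while fitness is computed on the concrete rule base produced by this mapping.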

    EA/G-GA for Single Machine Scheduling Problems with Earliness/Tardiness Costs

    An Estimation of Distribution Algorithm (EDA), which relies on explicit sampling mechanisms based on probabilistic models built from information extracted from the parental solutions to generate new solutions, has become one of the major research areas in the field of evolutionary computation. The fact that no genetic operators are used in EDAs is a major characteristic differentiating EDAs from other genetic algorithms (GAs). This advantage, however, can lead to premature convergence of EDAs once the probabilistic models no longer generate diversified solutions. In our previous research [1], we presented evidence that EDAs suffer from this drawback of premature convergence, and provided several important guidelines for the design of effective EDAs. In this paper, we validate one guideline: incorporating other meta-heuristics into EDAs. An algorithm named "EA/G-GA" is proposed by selecting a well-known EDA, EA/G, to work with GAs. The proposed algorithm was tested on NP-hard single machine scheduling problems with total weighted earliness/tardiness cost in a just-in-time environment. The experimental results indicate that EA/G-GA outperforms the compared algorithms with statistical significance across different stopping criteria, and demonstrate the robustness of the proposed algorithm. Consequently, this paper is of interest and importance in the field of EDAs.
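The sampling mechanism that distinguishes EDAs from operator-based GAs can be illustrated with a minimal univariate probability model (in the style of PBIL); EA/G itself augments such a model with guided mutation, which is not reproduced here. All names and rates below are illustrative choices, not the paper's settings.

```python
import random

# Minimal univariate EDA: sample a population from a per-bit probability
# model, then shift the model toward the elite solutions. No crossover
# or mutation operators are used, which is the defining EDA trait.
def univariate_eda(objective, n_bits, pop=50, elite=10, lr=0.2,
                   generations=100, seed=1):
    rng = random.Random(seed)
    p = [0.5] * n_bits                  # probability model, one p_i per bit
    best, best_f = None, float("-inf")
    for _ in range(generations):
        sampled = [[1 if rng.random() < pi else 0 for pi in p]
                   for _ in range(pop)]
        sampled.sort(key=objective, reverse=True)
        f = objective(sampled[0])
        if f > best_f:
            best, best_f = sampled[0], f
        for i in range(n_bits):         # move the model toward the elite
            mean_i = sum(x[i] for x in sampled[:elite]) / elite
            p[i] = (1 - lr) * p[i] + lr * mean_i
    return best, best_f
```

The premature-convergence risk the abstract describes is visible here: once every p_i saturates near 0 or 1, the model stops producing diversified solutions, which motivates hybridizing with GA operators.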

    3D fast convex-hull-based evolutionary multiobjective optimization algorithm

    The receiver operating characteristic (ROC) and detection error tradeoff (DET) curves have been widely used in the machine learning community to analyze the performance of classifiers. The area (or volume) under the convex hull has been used as a scalar indicator for the performance of a set of classifiers in ROC and DET space. Recently, the 3D convex-hull-based evolutionary multiobjective optimization algorithm (3DCH-EMOA) was proposed to maximize the volume of the convex hull for binary classification combined with parsimony, and for three-way classification problems. However, 3DCH-EMOA consumes considerable computational resources due to redundant convex hull calculations and frequent execution of nondominated sorting. In this paper, we introduce incremental convex hull calculation and a fast replacement for nondominated sorting. While achieving results of the same high quality, the computational effort of 3DCH-EMOA can be reduced by orders of magnitude: the average time complexity per generation drops from O(n² log n) to O(n log n), where n is the population size. Six test problems are used to evaluate the new method, and the algorithms are compared with several state-of-the-art algorithms, including NSGA-III and RVEA, which had not previously been compared with 3DCH-EMOA. Experimental results show that the new version of the algorithm (3DFCH-EMOA) speeds up 3DCH-EMOA by about 30 times for a typical population size of 300 without reducing the quality of the results. In addition, the proposed algorithm is applied to neural network pruning, with several UCI datasets used to test its performance.
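The scalar indicator underlying this line of work, the area under the convex hull of classifier points in ROC space, can be computed with a standard monotone-chain upper hull, as in the sketch below. This shows only the measure itself; the incremental-update and fast-sorting machinery of 3DFCH-EMOA is not reproduced.

```python
# Area under the upper convex hull of classifier points in ROC space
# (each point is a (false positive rate, true positive rate) pair).
def auc_of_convex_hull(points):
    """Anchors (0,0) and (1,1) are added, the upper hull is computed with
    the monotone-chain method, and the area under it is returned."""
    pts = sorted(set(points) | {(0.0, 0.0), (1.0, 1.0)})

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    hull = []
    for p in pts:  # left-to-right upper hull: keep only clockwise turns
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) >= 0:
            hull.pop()
        hull.append(p)
    # Trapezoidal area under the hull segments.
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(hull, hull[1:]))
```

A classifier below the chord from (0,0) to (1,1) is dropped from the hull and contributes nothing, which is exactly why the hull area rewards only the best tradeoff points of a classifier set.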

    Multiobjective optimization of classifiers by means of 3-D convex hull based evolutionary algorithms

    The receiver operating characteristic (ROC) and detection error tradeoff (DET) curves are frequently used in the machine learning community to analyze the performance of binary classifiers. Recently, the convex-hull-based multiobjective genetic programming algorithm was proposed and successfully applied to maximize the convex hull area for binary classification problems by minimizing the false positive rate and maximizing the true positive rate at the same time using indicator-based evolutionary algorithms. The area under the ROC curve was used for performance assessment and to guide the search. Here we extend this research and propose two major advancements. Firstly, we formulate the algorithm in detection error tradeoff space, minimizing false positives and false negatives, with the advantage that the misclassification cost tradeoff can be assessed directly. Secondly, we add complexity as an objective function, which gives rise to a 3D objective space (as opposed to the previous 2D ROC space). A domain-specific performance indicator for 3D Pareto front approximations, the volume above the DET surface, is introduced and used to guide the indicator-based evolutionary algorithm to find optimal approximation sets. We assess the performance of the new algorithm on designed theoretical problems with different geometries of Pareto fronts and DET surfaces, and on two application-oriented benchmarks: (1) designing spam filters with low numbers of false rejects, false accepts, and low computational cost using rule ensembles, and (2) finding sparse neural networks for binary classification of test data from the UCI machine learning benchmark. The results show a high performance of the new algorithm compared to conventional methods for multicriteria optimization.

    Artificial evolution with Binary Decision Diagrams: a study in evolvability in neutral spaces

    This thesis develops a new approach to evolving Binary Decision Diagrams, and uses it to study evolvability issues. For reasons that are not yet fully understood, current approaches to artificial evolution fail to exhibit the evolvability so readily exhibited in nature. To apply evolvability to artificial evolution, the field must first understand and characterise it; this will then lead to systems which are much more capable than current ones. An experimental approach is taken. Carefully crafted, controlled experiments elucidate the mechanisms and properties that facilitate evolvability, focusing on the roles of, and interplay between, neutrality, modularity, gradualism, robustness and diversity. Evolvability is found to emerge under gradual evolution as a biased distribution of functionality within the genotype-phenotype map, which serves to direct phenotypic variation. Neutrality facilitates fitness-conserving exploration, completely alleviating the problem of local optima. Population diversity, in conjunction with neutrality, is shown to facilitate the evolution of evolvability. The search is robust, scalable, and insensitive to the absence of initial diversity. The thesis concludes that gradual evolution in a search space that is free of local optima by way of neutrality can be a viable alternative to problematic evolution on multi-modal landscapes.

    Improving Intrusion Prevention, Detection and Response

    In the face of a wide range of attacks, Intrusion Detection Systems (IDS) and other Internet security tools represent potentially valuable safeguards to identify and combat the problems facing online systems. However, despite the fact that a variety of commercial and open source solutions are available across a range of operating systems and network platforms, it is notable that the deployment of IDS is often markedly lower than that of other well-known network security countermeasures, and the tools that are deployed may often be used in an ineffective manner. This thesis considers the challenges that users may face while using IDS, assessed by means of a web-based questionnaire. The challenges listed in the questionnaire were gathered from the well-established literature. Participants' responses varied, with some agreeing and some disagreeing that individual items were challenges, but all of the listed challenges were confirmed as genuine problems in the IDS field. The aim of the research is to propose a novel set of Human Computer Interaction-Security (HCI-S) usability criteria based on the findings of the web-based questionnaire. These criteria were also inspired by previous literature in the field of HCI; their novelty lies in their focus on security aspects. The new criteria proved promising when applied to Norton 360, a well-known Internet security suite. Testing the alerts issued by security software was the initial step before testing other security software. Hence, a set of security software was selected and alerts were triggered as a result of a penetration test conducted within a test-bed environment using the network scanner Nmap. The findings reveal that four of the HCI-S usability criteria were not fully addressed by all of these security software packages.
Another aim of this thesis is to consider the development of a prototype to address the HCI-S usability criteria that seem to be overlooked in the existing security solutions. The thesis conducts a practical user trial, the findings of which are promising and point toward a proper solution to this problem. For instance, to take advantage of previous security decisions, it would be desirable for a system to consider the user's previous decisions on similar alerts, and to modify alerts accordingly to account for the user's previous behaviour. Moreover, in order to give users a degree of flexibility, it is important to enable them to make informed decisions, and to be able to recover from them if needed. It is important to address the proposed criteria that enable users to confirm or recover from the impact of their decisions, maintain an awareness of system status at all times, and offer responses that match users' expectations. The outcome of the current study is a proposed set of 16 HCI-S usability criteria that can be used to design and assess security alerts issued by any Internet security suite. These criteria are not equally important, and vary between high, medium and low priority. Sponsored by the Embassy of the Arab Republic of Egypt (Cultural Centre & Educational Bureau) in London.

    Improving evolutionary algorithms by means of an adaptive parameter control approach

    Evolutionary algorithms (EA) constitute a class of optimization methods that is widely used to solve complex scientific problems. However, EA often converge prematurely on suboptimal solutions, the evolution process is computationally expensive, and setting the required EA parameters is quite difficult. We believe that the best way to address these problems is to begin by improving the parameter setting strategy, which will in turn improve the search path of the optimizer and, we hope, ultimately help prevent premature convergence and relieve the computational burden. The strategy that will achieve this outcome, and the one we adopt in this research, is to ensure that the parameter setting approach takes the search path into account and attempts to drive it in the most advantageous direction. Our objective is therefore to develop an adaptive parameter setting approach capable of controlling all the EA parameters at once. To interpret the search path, we propose to incorporate the concept of exploration and exploitation into the feedback indicator. The first step is to review and study the available genotypic diversity measurements used to characterize the exploration of the optimizer over the search space. We do this by implementing a specifically designed benchmark, and propose three diversity requirements for evaluating the meaningfulness of those measures as population diversity estimators. Results show that none of the published formulations is, in fact, a qualified diversity descriptor. To remedy this, we introduce a new genotypic formulation here, the performance analysis of which shows that it produces better results overall, notwithstanding some serious defects. We initiate a similar study aimed at describing the role of exploitation in the search process, which is to indicate promising regions. However, since exploitation is mainly driven by the individuals' fitness, we turn our attention toward phenotypic convergence measures.
Again, the in-depth analysis reveals that none of the published phenotypic descriptors is capable of portraying the fitness distribution of a population. Consequently, a new phenotypic formulation is developed here, which shows perfect agreement with the expected population behavior. On the strength of these achievements, we devise an optimizer diagnostic tool based on the new genotypic and phenotypic formulations, and illustrate its value by comparing the impacts of various EA parameters. Although the main purpose of this development is to explore the relevance of using both a genotypic and a phenotypic measure to characterize the search process, our diagnostic tool proves to be one of the few tools available to practitioners for interpreting and customizing the way in which optimizers work on real-world problems. With the knowledge gained in our research, the objective of this thesis is finally met, with the proposal of a new adaptive parameter control approach. The system is based on a Bayesian network that enables all the EA parameters to be considered at once. To the authors' knowledge, this is the first parameter setting proposal devised to do so. The genotypic and phenotypic measures developed are combined in the form of a credit assignment scheme that rewards parameters by, among other things, promoting the maximization of both exploration and exploitation. The proposed adaptive system is evaluated on a recognized benchmark (CEC'05) using a steady-state genetic algorithm (SSGA), and then compared with seven other approaches, such as FAUC-RMAB and G-CMA-ES, which are state-of-the-art adaptive methods. Overall, the results demonstrate statistically that the new proposal not only performs as well as G-CMA-ES, but outperforms almost all the other adaptive systems. Nonetheless, this investigation revealed that none of the methods tested is able to locate the global optimum on complex multimodal problems.
This led us to conclude that synergy and complementarity among the parameters involved are probably missing. Consequently, more research on these topics is advised, with a view to devising enhanced optimizers. We provide numerous recommendations for such research at the end of this thesis.
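As a concrete example of the kind of genotypic diversity measure the thesis evaluates (and finds wanting), the sketch below shows one commonly used estimator, the mean pairwise distance between individuals. It is given only to make the object of study concrete; it is not the thesis's new formulation.

```python
import math

# Mean pairwise Euclidean distance between individuals: a widely used
# genotypic diversity estimator. Measures of this kind are what the
# thesis tests against its three diversity requirements.
def mean_pairwise_distance(population):
    n = len(population)
    if n < 2:
        return 0.0
    total = sum(
        math.dist(population[i], population[j])
        for i in range(n) for j in range(i + 1, n)
    )
    return total / (n * (n - 1) / 2)
```

A feedback indicator built on such a measure can then reward parameter settings that keep exploration (genotypic spread) high while a phenotypic counterpart tracks exploitation.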