368 research outputs found

    Differential Evolution and Deterministic Chaotic Series: A Detailed Study

    This research offers a detailed insight into the modern and popular hybridization of deterministic chaotic dynamics and evolutionary computation. It is aimed at the influence of chaotic sequences on the performance of four selected Differential Evolution (DE) variants: the original DE/Rand/1/ and DE/Best/1/ mutation schemes, the simple parameter-adaptive jDE, and the recent state-of-the-art version SHADE. The experiments focus on an extensive investigation of different randomization schemes for the selection of individuals in the DE algorithm, driven by nine different two-dimensional discrete deterministic chaotic systems used as chaotic pseudo-random number generators. The performances of the DE variants in their chaotic and non-chaotic versions are recorded in the 10D setting on the 15 test functions from the CEC 2015 benchmark and further statistically analyzed.
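
    As an illustration of the idea of driving DE's selection of individuals with a discrete chaotic system instead of a conventional pseudo-random number generator, the following Python sketch uses the two-dimensional Hénon map as the random source inside a DE/rand/1/bin loop. This is a minimal sketch under assumed parameter values, not the study's implementation; the sphere function merely stands in for a CEC 2015 benchmark function.

```python
import numpy as np

def henon_stream(n, a=1.4, b=0.3, x0=0.1, y0=0.3):
    """Illustrative chaotic PRNG: fold the x-coordinate of the Henon map into [0, 1)."""
    x, y = x0, y0
    out = np.empty(n)
    for i in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        out[i] = ((x + 1.5) / 3.0) % 1.0   # the attractor's x roughly spans [-1.5, 1.5]
    return out

def sphere(v):
    return float(np.sum(v ** 2))

def chaotic_de(f=sphere, dim=10, pop_size=30, gens=200, F=0.5, CR=0.9, lo=-5.0, hi=5.0):
    """DE/rand/1/bin where parent selection and crossover draw from a chaotic stream."""
    stream = henon_stream(300_000)
    k = 0
    def nxt():
        nonlocal k
        v = stream[k % stream.size]   # cycles if the precomputed stream is exhausted
        k += 1
        return v

    pop = lo + (hi - lo) * np.array([[nxt() for _ in range(dim)] for _ in range(pop_size)])
    fit = np.array([f(ind) for ind in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # rank pop_size chaotic draws to pick three distinct parents different from i
            order = [j for j in np.argsort([nxt() for _ in range(pop_size)]) if j != i]
            a1, b1, c1 = pop[order[0]], pop[order[1]], pop[order[2]]
            mutant = a1 + F * (b1 - c1)
            jrand = int(nxt() * dim) % dim
            trial = np.array([mutant[d] if (nxt() < CR or d == jrand) else pop[i][d]
                              for d in range(dim)])
            trial = np.clip(trial, lo, hi)
            f_trial = f(trial)
            if f_trial <= fit[i]:
                pop[i], fit[i] = trial, f_trial
    best = int(np.argmin(fit))
    return pop[best], float(fit[best])

if __name__ == "__main__":
    _, best_value = chaotic_de()
    print("best sphere value found:", best_value)
```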

    A puzzling anomaly in the 4-mer composition of the giant pandoravirus genomes reveals a stringent new evolutionary selection process

    The Pandoraviridae is a rapidly growing family of giant viruses, all of which have been isolated using laboratory strains of Acanthamoeba. The genomes of ten distinct strains have been fully characterized, reaching up to 2.5 Mb in size. These double-stranded DNA genomes encode the largest of all known viral proteomes and are propagated in oblate virions that are among the largest ever described (1.2 μm long and 0.5 μm wide). The evolutionary origin of these atypical viruses is the object of numerous speculations. Applying the Chaos Game Representation to the pandoravirus genome sequences, we discovered that the tetranucleotide (4-mer) "AGCT" is totally absent from the genomes of two strains (P. dulcis and P. quercus) and strongly underrepresented in others. Given the amazingly low probability of such an observation in the corresponding randomized sequences, we investigated its biological significance through a comprehensive study of the 4-mer compositions of all viral genomes. Our results indicate that "AGCT" was specifically eliminated during the evolution of the Pandoraviridae and that none of the previously proposed host-virus antagonistic relationships can explain this phenomenon. Unlike the three other families of giant viruses (Mimiviridae, Pithoviridae, Molliviridae) infecting the same Acanthamoeba host, the pandoraviruses exhibit a puzzling genomic anomaly, suggesting a highly specific DNA editing process in response to a new kind of strong evolutionary pressure. IMPORTANCE: Recent years have seen the discovery of several families of giant DNA viruses, all infecting the ubiquitous amoebozoa of the genus Acanthamoeba. With dsDNA genomes reaching 2.5 Mb in length packaged in oblate particles the size of a bacterium, the pandoraviruses are the most complex and largest viruses known to date. In addition to their spectacular dimensions, the pandoraviruses encode the largest proportion of proteins without homologs in other organisms, which are thought to result from a de novo gene creation process. While using comparative genomics to investigate the evolutionary forces responsible for the emergence of such an unusual giant virus family, we discovered a unique bias in the tetranucleotide composition of the pandoravirus genomes that can only result from a previously undescribed evolutionary process not encountered in any other microorganism.
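
    The reported anomaly rests on counting tetranucleotide (4-mer) frequencies and comparing the observed counts with those of randomized (shuffled) sequences. The Python sketch below is a minimal illustration of that kind of comparison only; it is not the authors' pipeline, and the toy sequence merely stands in for a real genome loaded from FASTA.

```python
import random
from collections import Counter

def kmer_counts(seq, k=4):
    """Count overlapping k-mers in a DNA sequence (upper-case A/C/G/T assumed)."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def agct_depletion(seq, replicates=100, seed=0):
    """Compare the observed 'AGCT' count against mononucleotide-shuffled replicates."""
    rng = random.Random(seed)
    observed = kmer_counts(seq)["AGCT"]
    bases = list(seq)
    shuffled_counts = []
    for _ in range(replicates):
        rng.shuffle(bases)                      # preserves base composition, destroys order
        shuffled_counts.append(kmer_counts("".join(bases))["AGCT"])
    expected = sum(shuffled_counts) / replicates
    return observed, expected

if __name__ == "__main__":
    # Toy sequence; a real analysis would load a pandoravirus genome instead.
    toy_rng = random.Random(1)
    toy = "".join(toy_rng.choice("ACGT") for _ in range(10000))
    obs, exp = agct_depletion(toy)
    print(f"observed AGCT: {obs}, expected under shuffling: {exp:.1f}")
```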

    AGENT-BASED DISCRETE EVENT SIMULATION MODELING AND EVOLUTIONARY REAL-TIME DECISION MAKING FOR LARGE-SCALE SYSTEMS

    Computer simulations are routines programmed to imitate detailed system operations. They are utilized to evaluate system performance and/or predict future behaviors under certain settings. In complex cases where system operations cannot be formulated explicitly by analytical models, simulations become the dominant mode of analysis, as they can model systems without relying on unrealistic or limiting assumptions and represent actual systems more faithfully. Two main streams exist in current simulation research and practice: discrete event simulation and agent-based simulation. This dissertation facilitates the marriage of the two. By integrating agent-based modeling concepts into the discrete event simulation framework, we can take advantage of the strengths of both methods while eliminating their disadvantages. Although simulation can represent complex systems realistically, it is a descriptive tool without the capability of making decisions. It can, however, be complemented by incorporating optimization routines. The most challenging problem is that large-scale simulation models normally take a considerable amount of computer time to execute, so that the number of solution evaluations needed by most optimization algorithms cannot be completed within a reasonable time frame. This research develops a highly efficient evolutionary simulation-based decision-making procedure which can be applied in real-time management situations. It divides the entire process time horizon into a series of small time intervals and runs simulation optimization algorithms for those small intervals separately and iteratively. This method improves computational tractability by decomposing long simulation runs; it also enhances system dynamics by incorporating changing information/data as the event unfolds. With respect to simulation optimization, the procedure solves efficient analytical models which approximate the simulation and guide the search toward near-optimality quickly. The methods of agent-based discrete event simulation modeling and evolutionary simulation-based decision making developed in this dissertation are implemented to solve a set of disaster response planning problems. This research also investigates a unique approach to validating low-probability, high-impact simulation systems based on a concrete example problem. The experimental results demonstrate the feasibility and effectiveness of our model compared to other existing systems.
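
    The decision-making procedure is described as decomposing the time horizon into small intervals and running simulation optimization for each interval separately and iteratively. The heavily simplified Python sketch below illustrates such a rolling-horizon loop; the toy system, `simulate_interval`, and `optimize_interval` are hypothetical placeholders, not the dissertation's models.

```python
import random

def simulate_interval(state, decision, rng):
    """Toy discrete-event stand-in: demand arrives, the decision allocates capacity."""
    demand = rng.randint(5, 15)
    served = min(demand, decision)
    backlog = state["backlog"] + demand - served
    return {"backlog": backlog, "cost": state["cost"] + backlog + 0.5 * decision}

def optimize_interval(state, candidates, rng, replications=20):
    """Pick the candidate decision with the lowest average simulated cost."""
    best, best_cost = None, float("inf")
    for d in candidates:
        avg = sum(simulate_interval(state, d, rng)["cost"]
                  for _ in range(replications)) / replications
        if avg < best_cost:
            best, best_cost = d, avg
    return best

def rolling_horizon(n_intervals=10, seed=0):
    """Decompose the horizon: optimize, then advance one interval at a time."""
    rng = random.Random(seed)
    state = {"backlog": 0, "cost": 0.0}
    for t in range(n_intervals):
        decision = optimize_interval(state, candidates=range(0, 21, 5), rng=rng)
        state = simulate_interval(state, decision, rng)   # the interval actually unfolds
        print(f"interval {t}: decision={decision}, backlog={state['backlog']}")
    return state

rolling_horizon()
```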

    Traveling Salesman Problem

    This book is a collection of current research on the application of evolutionary algorithms and other optimization algorithms to solving the Traveling Salesman Problem (TSP). It brings together researchers with applications in Artificial Immune Systems, Genetic Algorithms, Neural Networks and the Differential Evolution Algorithm. Hybrid systems, such as Fuzzy Maps, Chaotic Maps and Parallelized TSP, are also presented. Most importantly, this book presents both theoretical and practical applications of the TSP, making it a vital tool for researchers and entry-level graduate students in the fields of applied Mathematics, Computing Science and Engineering.

    On-line machine scheduling

    ix + 93 pages; 24 cm.

    Navigational Strategies for Control of Underwater Robot using AI based Algorithms

    Autonomous underwater robots have become indispensable marine tools for performing various tedious and risky oceanic tasks for military, scientific, civil and commercial purposes. To execute hazardous naval tasks successfully, an underwater robot needs an intelligent controller to manoeuvre from one point to another within an unknown or partially known three-dimensional environment. This dissertation has proposed and implemented various AI-based control strategies for underwater robot navigation. Adaptive versions of a neuro-fuzzy network and several stochastic evolutionary algorithms have been employed to avoid obstacles or to escape from dead-end situations while tracing a near-optimal path from the initial point to the destination in an impulsive underwater scenario. A proper balance between path optimization and collision avoidance has been considered the major criterion for evaluating the performance of the proposed navigational strategies. Online sensory information about the position and orientation of both the target and the nearest obstacles with respect to the robot’s current position has been used as the input to the path planners. To validate the feasibility of the proposed control algorithms, numerous simulations have been executed within a MATLAB-based simulation environment in which obstacles of different shapes and sizes are distributed in a chaotic manner. The simulation results have been verified by performing real-time experiments with the robot in an underwater environment. Comparisons with other available underwater navigation approaches have also been carried out for validation purposes. Extensive simulation and experimental studies have confirmed the obstacle avoidance and path optimization abilities of the proposed AI-based navigational strategies during the motion of the underwater robot. Moreover, a comparative study of the navigational performance of the proposed path planning approaches, in terms of path length and travel time, has been performed to identify the most efficient technique for navigation within an impulsive underwater environment.
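
    The described balance between path optimization and collision avoidance is typically encoded in the fitness function that an evolutionary planner minimizes. The following Python sketch is a hypothetical illustration of such a trade-off; the weights, the `path_fitness` function and the toy scenario are assumptions, not the thesis's implementation.

```python
import numpy as np

def path_fitness(waypoints, start, goal, obstacles, safe_dist=1.0, w_len=1.0, w_col=50.0):
    """Score a candidate 3D path: shorter is better, passing near obstacles is penalized."""
    pts = np.vstack([start, waypoints, goal])
    # total path length along the waypoint chain
    length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    # collision penalty: how far inside the safety radius each point intrudes
    penalty = 0.0
    for obs in obstacles:
        d = np.linalg.norm(pts - obs, axis=1)
        penalty += np.sum(np.maximum(0.0, safe_dist - d))
    return w_len * length + w_col * penalty

# toy usage: three intermediate waypoints in 3D and one spherical obstacle
start, goal = np.zeros(3), np.array([10.0, 0.0, 0.0])
obstacles = [np.array([5.0, 0.0, 0.0])]
candidate = np.array([[3.0, 1.5, 0.0], [5.0, 2.0, 0.0], [7.0, 1.5, 0.0]])
print("fitness:", path_fitness(candidate, start, goal, obstacles))
```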

    Learning Curricula in Open-Ended Worlds

    Deep reinforcement learning (RL) provides powerful methods for training optimal sequential decision-making agents. As collecting real-world interactions can entail additional costs and safety risks, the common paradigm of sim2real conducts training in a simulator, followed by real-world deployment. Unfortunately, RL agents easily overfit to the choice of simulated training environments, and worse still, learning ends when the agent masters the specific set of simulated environments. In contrast, the real world is highly open-ended, featuring endlessly evolving environments and challenges, making such RL approaches unsuitable. Simply randomizing across a large space of simulated environments is insufficient, as it requires making arbitrary distributional assumptions, and as the design space grows, it becomes combinatorially less likely to sample specific environment instances that are useful for learning. An ideal learning process should automatically adapt the training environment to maximize the learning potential of the agent over an open-ended task space that matches or surpasses the complexity of the real world. This thesis develops a class of methods called Unsupervised Environment Design (UED), which seeks to enable such an open-ended process via a principled approach for gradually improving the robustness and generality of the learning agent. Given a potentially open-ended environment design space, UED automatically generates an infinite sequence, or curriculum, of training environments at the frontier of the learning agent’s capabilities. Through both extensive empirical studies and theoretical arguments founded on minimax-regret decision theory and game theory, the findings in this thesis show that UED autocurricula can produce RL agents exhibiting significantly improved robustness and generalization to previously unseen environment instances. Such autocurricula are promising paths toward open-ended learning systems that approach general intelligence, a long sought-after ambition of artificial intelligence research, by continually generating and mastering additional challenges of their own design.
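
    The minimax-regret framing mentioned in the abstract is often instantiated by preferring training levels on which a stronger reference (antagonist) policy outperforms the learning (protagonist) policy by the largest margin. The Python sketch below is a toy, hypothetical illustration of that selection rule only; `estimate_return` is a placeholder rollout, not any specific UED algorithm from the thesis.

```python
import random

def estimate_return(policy, level, episodes=5, rng=None):
    """Placeholder: average episodic return of `policy` on `level` (toy stand-in)."""
    rng = rng or random.Random(0)
    # a real implementation would roll the policy out in a simulator here
    return sum(rng.uniform(0, policy["skill"] * level["difficulty"])
               for _ in range(episodes)) / episodes

def regret(level, protagonist, antagonist, rng):
    """Approximate regret: how much better the reference policy does than the learner."""
    return estimate_return(antagonist, level, rng=rng) - estimate_return(protagonist, level, rng=rng)

def next_training_level(candidate_levels, protagonist, antagonist, seed=0):
    """Pick the level at the frontier of the learner's abilities: maximal estimated regret."""
    rng = random.Random(seed)
    return max(candidate_levels, key=lambda lv: regret(lv, protagonist, antagonist, rng))

levels = [{"name": f"level-{i}", "difficulty": d} for i, d in enumerate([0.2, 0.5, 0.9, 1.3])]
protagonist = {"skill": 0.6}
antagonist = {"skill": 1.0}
print("train next on:", next_training_level(levels, protagonist, antagonist)["name"])
```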

    Model based test suite minimization using metaheuristics

    Software testing is one of the most widely used methods for quality assurance and fault detection. However, it is also one of the most expensive, tedious and time-consuming activities in the software development life cycle. Code-based and specification-based testing have been practised for almost four decades. Model-based testing (MBT) is a relatively new approach to software testing in which software models, as opposed to other artifacts (i.e. source code), are used as the primary source of test cases. Models are simplified representations of a software system and are cheaper to execute than the original or deployed system. The main objective of the research presented in this thesis is the development of a framework for improving the efficiency and effectiveness of test suites generated from UML models. It focuses on three activities: transformation of an Activity Diagram (AD) model into a Colored Petri Net (CPN) model, generation and evaluation of an AD-based test suite, and optimization of the AD-based test suite. The Unified Modeling Language (UML) is a de facto standard for software system analysis and design. UML models can be categorized into structural and behavioral models. The AD is a behavioral type of UML model, and since the major revision in UML version 2.x it has a new Petri-net-like semantics. It has a wide application scope, including embedded, workflow and web-service systems. For this reason the thesis concentrates on AD models. The informal semantics of UML in general, and of ADs in particular, is a major challenge in the development of UML-based verification and validation tools. One solution to this challenge is to transform a UML model into an executable formal model. In the thesis, a three-step transformation methodology is proposed for resolving ambiguities in an AD model and then transforming it into a CPN representation, a well-known formal language with extensive tool support. Test case generation is one of the most critical and labor-intensive activities in the testing process. The flow-oriented semantics of ADs suits modeling both sequential and concurrent systems. The thesis presents a novel technique to generate test cases from ADs using a stochastic algorithm. In order to determine whether the generated test suite is adequate, two test suite adequacy analysis techniques, based on structural coverage and on mutation, are proposed. In terms of structural coverage, two separate coverage criteria are proposed to evaluate the adequacy of the test suite from both the sequential and the concurrent perspective. Mutation analysis is a fault-based technique to determine whether the test suite is adequate for detecting particular types of faults. Four categories of mutation operators are defined to seed specific faults into the mutant model. Another focus of the thesis is to improve test suite efficiency without compromising its effectiveness. One way of achieving this is identifying and removing redundant test cases. It is shown that test suite minimization by removing redundant test cases is a combinatorial optimization problem. An evolutionary-computation-based test suite minimization technique is developed to address this problem, and its performance is empirically compared with other well-known heuristic algorithms. Additionally, statistical analysis is performed to characterize the fitness landscape of test suite minimization problems. The proposed test suite minimization solution is extended to include multi-objective minimization. As redundancy is contextual, different criteria and their combinations can significantly change the solution test suite. Therefore, the last part of the thesis describes an investigation into multi-objective test suite minimization and optimization algorithms. The proposed framework is demonstrated and evaluated using prototype tools and case study models. Empirical results show that the techniques developed within the framework are effective in model-based test suite generation and optimization.
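
    Since the abstract characterizes redundancy removal as a combinatorial optimization problem, a small illustration may help. The sketch below is a hypothetical greedy baseline of the kind that evolutionary minimizers are commonly compared against; the `coverage` mapping and test names are invented, and this is not the thesis's algorithm.

```python
def minimize_suite(coverage):
    """Greedy minimization: repeatedly keep the test covering the most uncovered requirements.

    `coverage` maps test-case names to the set of requirements (or mutants) each one covers.
    """
    uncovered = set().union(*coverage.values())
    selected = []
    while uncovered:
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        gained = coverage[best] & uncovered
        if not gained:            # remaining requirements cannot be covered by any test
            break
        selected.append(best)
        uncovered -= gained
    return selected

# toy example: five test cases covering six model-level requirements
coverage = {
    "t1": {"r1", "r2"},
    "t2": {"r2", "r3", "r4"},
    "t3": {"r4"},
    "t4": {"r5", "r6"},
    "t5": {"r1", "r5"},
}
print("minimized suite:", minimize_suite(coverage))   # e.g. ['t2', 't4', 't1'] or similar
```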

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as the final publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)” project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predicting and analysing natural and complex systems in science and engineering. As their level of abstraction rises to give a better discernment of the domain at hand, their representation becomes increasingly demanding in computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.