    Artificial evolution with Binary Decision Diagrams: a study in evolvability in neutral spaces

    This thesis develops a new approach to evolving Binary Decision Diagrams and uses it to study evolvability. For reasons that are not yet fully understood, current approaches to artificial evolution fail to exhibit the evolvability so readily exhibited in nature. Before evolvability can be harnessed in artificial evolution, the field must first understand and characterise it; doing so should lead to systems far more capable than current ones. An experimental approach is taken: carefully crafted, controlled experiments elucidate the mechanisms and properties that facilitate evolvability, focusing on the roles of, and interplay between, neutrality, modularity, gradualism, robustness and diversity. Evolvability is found to emerge under gradual evolution as a biased distribution of functionality within the genotype-phenotype map, which serves to direct phenotypic variation. Neutrality facilitates fitness-conserving exploration, completely alleviating the problem of local optima. Population diversity, in conjunction with neutrality, is shown to facilitate the evolution of evolvability. The search is robust, scalable, and insensitive to the absence of initial diversity. The thesis concludes that gradual evolution in a search space that is free of local optima by way of neutrality can be a viable alternative to problematic evolution on multi-modal landscapes.
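
    As a minimal illustration of the neutrality mechanism this abstract describes, the Python sketch below shows a mutation-based search that accepts equal-fitness moves, letting it drift across neutral networks instead of stalling on local optima. The bitstring genotype, toy plateau fitness, and function names are illustrative assumptions; the thesis's actual BDD genotype-phenotype map is not reproduced here.

    ```python
    import random

    def neutral_hill_climb(fitness, genome_len=32, steps=10_000, seed=0):
        """Evolve a bitstring, accepting any mutant whose fitness is greater
        than OR EQUAL to the parent's: equal-fitness ("neutral") moves let
        the search drift across a neutral network rather than stall."""
        rng = random.Random(seed)
        parent = [rng.randint(0, 1) for _ in range(genome_len)]
        f_parent = fitness(parent)
        for _ in range(steps):
            child = parent[:]
            i = rng.randrange(genome_len)
            child[i] ^= 1                      # single-bit mutation
            f_child = fitness(child)
            if f_child >= f_parent:            # ">" alone would forbid neutral drift
                parent, f_parent = child, f_child
        return parent, f_parent

    # Toy fitness with large neutral plateaus: only every 4th bit matters.
    plateau = lambda g: sum(g[::4])
    best, score = neutral_hill_climb(plateau)
    print(score)
    ```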

    Evolutionary Search Techniques with Strong Heuristics for Multi-Objective Feature Selection in Software Product Lines

    Software design is a process of trading off competing objectives. If the user objective space is rich, then we should use optimizers that can fully exploit that richness. For example, this study configures software product lines (expressed as feature models) using various search-based software engineering methods. Our main result is that as we increase the number of optimization objectives, the methods in widespread use (e.g. NSGA-II, SPEA2) perform much worse than IBEA (Indicator-Based Evolutionary Algorithm). IBEA works best since it makes most use of user preference knowledge. Hence it does better on the standard measures (hypervolume and spread), but it also generates far more products with zero violations of domain constraints. We also present significant improvements to IBEA's performance by employing three strong heuristic techniques that we call PUSH, PULL, and seeding. The PUSH technique forces the evolutionary search to respect certain rules and dependencies defined by the feature models, while the PULL technique gives higher weight to constraint satisfaction as an optimization objective and thus achieves a higher percentage of fully-compliant configurations within shorter runtimes. The seeding technique helps guide very large feature models to correct configurations very early in the optimization process. Our conclusion is that the methods we apply in search-based software engineering need to be carefully chosen, particularly when studying complex decision spaces with many optimization objectives. We also conclude that search methods must be customized to fit the problem at hand; specifically, the evolutionary search must respect domain constraints.
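
    A hedged sketch of the PULL idea described above: constraint violations are counted and entered into the objective vector with a large weight, drawing the multi-objective search toward fully-compliant configurations. The names (violations, pull_objectives) and the toy feature-model rules are illustrative assumptions, not the paper's actual API.

    ```python
    from typing import Callable, List

    # A configuration is a bit-vector over features; constraints are
    # predicates over it.
    Config = List[int]
    Constraint = Callable[[Config], bool]

    def violations(cfg: Config, constraints: List[Constraint]) -> int:
        """Number of feature-model rules the configuration breaks."""
        return sum(0 if c(cfg) else 1 for c in constraints)

    def pull_objectives(cfg, constraints, cost, defects, w=10.0):
        """PULL-style vector: the violation count enters the objective
        list with a large weight w, so minimizing all objectives steers
        the search toward fully-compliant products first."""
        return (w * violations(cfg, constraints), cost(cfg), defects(cfg))

    # Example rules: feature 0 requires feature 1; feature 2 excludes feature 3.
    cons = [lambda c: (not c[0]) or c[1],
            lambda c: not (c[2] and c[3])]
    print(pull_objectives([1, 0, 1, 1], cons,
                          cost=lambda c: sum(c), defects=lambda c: 0))
    ```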

    State-of-the-Art Multiobjective Evolutionary Algorithms: Pareto Ranking, Density Estimation and Dynamic Population

    Electrical Engineering

    JINC - A Multi-Threaded Library for Higher-Order Weighted Decision Diagram Manipulation

    Ordered Binary Decision Diagrams (OBDDs) have proven to be an efficient data structure for symbolic algorithms. The efficiency of symbolic methods depends on the underlying OBDD library; available OBDD libraries are based on the standard concepts and so far differ only in implementation details. This thesis introduces new techniques to increase the run-time and space-efficiency of an OBDD library. It introduces the framework of Higher-Order Weighted Decision Diagrams (HOWDDs) to combine the similarities of different OBDD variants. This framework provides the basis for the new variant Toggling Algebraic Decision Diagrams (TADDs), which has been shown to be a space-efficient HOWDD variant for symbolic matrix representation. The HOWDD concept has been used to implement the OBDD library JINC. The thesis also analyzes the use of multi-threading techniques to speed up OBDD manipulation. A new reordering framework applies the advantages of multi-threading to reordering algorithms; it uses an abstraction layer so that the original reordering algorithms are left untouched. The challenge arising from a straightforward algorithm is that the computed-tables and the garbage collection are not as efficient as in a single-threaded environment. We resolve this problem by developing a new multi-operand APPLY algorithm that eliminates the creation of temporary nodes during computation and thus reduces the need for caching and garbage collection. The HOWDD framework leads to an efficient library design, shown to be more efficient than the established OBDD library CUDD. The HOWDD instance TADD reduces the number of nodes needed by a factor of two compared to ordinary ADDs. The new multi-threading approaches are more efficient than single-threaded ones by several factors; in the case of the new reordering framework, the speed-up almost equals the theoretical optimum. The novel multi-operand APPLY algorithm reduces the memory usage for the n-queens problem by a factor of 50, enabling the calculation of larger problem instances than the traditional APPLY approach. The new approaches improve performance and reduce the memory footprint, leading to the conclusion that applications should be reviewed for whether they could benefit from the new multi-threading, multi-operand approaches introduced and discussed in this thesis.
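
    For readers unfamiliar with the APPLY operation the thesis builds on, here is a minimal single-threaded OBDD sketch in Python: a unique-table gives canonical node sharing and a computed-table memoises APPLY results, the two structures whose multi-threaded behaviour the thesis analyses. This is an illustration of the textbook algorithm under assumed tuple-based node encoding, not JINC's implementation.

    ```python
    # A node is the tuple (var, low, high); the leaves are False/True.
    _unique, _computed = {}, {}

    def mk(var, low, high):
        """Hash-consed constructor enforcing the OBDD reduction rule."""
        if low == high:                          # redundant test: drop the node
            return low
        return _unique.setdefault((var, low, high), (var, low, high))

    def apply_op(op, f, g):
        """Shannon-expansion APPLY over two OBDDs with a shared variable order."""
        if isinstance(f, bool) and isinstance(g, bool):
            return op(f, g)
        key = (op, f, g)
        if key not in _computed:                 # computed-table lookup
            vf = f[0] if not isinstance(f, bool) else float("inf")
            vg = g[0] if not isinstance(g, bool) else float("inf")
            v = min(vf, vg)                      # expand on the topmost variable
            f0, f1 = (f[1], f[2]) if vf == v else (f, f)
            g0, g1 = (g[1], g[2]) if vg == v else (g, g)
            _computed[key] = mk(v, apply_op(op, f0, g0), apply_op(op, f1, g1))
        return _computed[key]

    AND = lambda a, b: a and b
    x0, x1 = mk(0, False, True), mk(1, False, True)
    print(apply_op(AND, x0, x1))                 # -> (0, False, (1, False, True))
    ```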

    Test set generation and optimisation using evolutionary algorithms and cubical calculus.

    As the complexity of modern-day integrated circuits rises, many of the challenges associated with digital testing rise exponentially. VLSI technology continues to advance at a rapid pace, in accordance with Moore's Law, posing ever more complex, NP-complete problems for the test community. The testing of ICs currently accounts for approximately a third of the overall design costs and, according to the Semiconductor Industry Association, the per-transistor test cost will soon exceed the per-transistor production cost. Given the need to test ICs of ever-increasing complexity and to contain the cost of test, the problems of test pattern generation, testability analysis and test set minimisation continue to provide formidable challenges for the research community. This thesis presents original work in these three areas. Firstly, a new method is presented for generating test patterns for multiple-output combinational circuits based on the Boolean difference method and cubical calculus. The Boolean difference method has been largely overlooked in automatic test pattern generation algorithms due to its cumbersome, algebraic nature. It is shown that cubical calculus provides an elegant and economical technique for solving Boolean difference equations. Formal mathematical techniques are presented involving the Boolean difference and cubical calculus, providing a test pattern generation method that dispenses with the need for costly circuit simulations. The methods provide the basis for test generation algorithms which are suitable for computer implementation. Secondly, some of the core test pattern generation computations outlined above also provide the basis of a new method for computing testability measures such as controllability and observability. This method is effectively a very economical spin-off of the test pattern generation process using Boolean differences and cubical calculus. The third and largest part of this thesis introduces a new test set minimisation algorithm, GA-MITS, based on an evolutionary optimization algorithm. This novel approach applies a genetic algorithm to find minimal or near-minimal test sets while maintaining a given fault coverage. The algorithm is designed as a postprocessor to minimise test sets that have been previously generated by an ATPG system and is thus considered a static approach to the test set minimisation problem. It is shown empirically that GA-MITS is remarkably successful in minimizing test sets generated for the ISCAS-85 benchmark circuits and hence potentially capable of reducing the production costs of realistic digital circuits.
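
    The Boolean-difference condition at the heart of the first contribution can be stated compactly: dF/dx_i = F|x_i=0 XOR F|x_i=1, and a pattern tests x_i stuck-at-v exactly when it drives x_i to the opposite of v and makes the difference evaluate to 1. The sketch below checks this exhaustively over a truth table; the thesis instead solves the equations symbolically with cubical calculus, which this toy does not attempt, and the function names are illustrative.

    ```python
    from itertools import product

    def boolean_difference(f, i):
        """dF/dx_i as a function: f with x_i=0 XOR f with x_i=1. A pattern
        propagates a fault on x_i to the output exactly when this is 1."""
        def df(bits):
            b0 = list(bits); b0[i] = 0
            b1 = list(bits); b1[i] = 1
            return f(b0) ^ f(b1)
        return df

    def tests_stuck_at(f, i, n, stuck):
        """Patterns detecting x_i stuck-at-`stuck`: activate the fault
        (drive x_i to the opposite value) and propagate it (dF/dx_i = 1)."""
        df = boolean_difference(f, i)
        return [bits for bits in product([0, 1], repeat=n)
                if bits[i] == 1 - stuck and df(bits)]

    # Example: F = (x0 AND x1) OR x2; tests for x0 stuck-at-0.
    F = lambda b: (b[0] & b[1]) | b[2]
    print(tests_stuck_at(F, 0, 3, stuck=0))      # -> [(1, 1, 0)]
    ```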

    Towards the Exact Minimization of BDDs—An Elitism-Based Distributed Evolutionary Algorithm

    Binary Decision Diagrams (BDDs) are the state-of-the-art data structure for representation and manipulation of Boolean functions. In general, exact BDD minimization is NP-complete. For BDD-based technology, a small improvement in the number of nodes often simplifies the follow-up problem tremendously. This paper proposes an elitism-based evolutionary algorithm (EBEA) for BDD minimization. It can efficiently find the optimal orderings of variables for all LGSynth91 benchmark circuits with a known minimum size. Moreover, we develop a distributed model of EBEA, DEBEA, which obtains the best-ever variable orders for almost all benchmarks in the LGSynth91. Experimental results show that DEBEA is able to achieve super-linear performance compared to EBEA for some hard benchmarks.
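
    A hedged sketch of the elitism-based search this abstract describes: a genetic algorithm over variable orderings (permutations) that copies the best individuals unchanged into the next generation. The bdd_size evaluator is stubbed here as an assumption; in practice it would be a BDD package building the diagram under the candidate order and returning its node count. This is not the authors' EBEA/DEBEA code.

    ```python
    import random

    def evolve_order(bdd_size, n_vars, pop=20, gens=100, elite=2, seed=1):
        """Elitist GA over variable orderings. `bdd_size` maps an
        ordering to the node count of the BDD built under it."""
        rng = random.Random(seed)
        P = [rng.sample(range(n_vars), n_vars) for _ in range(pop)]
        for _ in range(gens):
            P.sort(key=bdd_size)                      # smaller BDD = fitter
            nxt = P[:elite]                           # elitism: keep the best as-is
            while len(nxt) < pop:
                child = P[rng.randrange(pop // 2)][:] # parent from the fitter half
                i, j = rng.sample(range(n_vars), 2)
                child[i], child[j] = child[j], child[i]   # swap mutation
                nxt.append(child)
            P = nxt
        return min(P, key=bdd_size)

    # Stub evaluator: pretend the BDD is smallest when variables are sorted.
    stub_size = lambda order: sum(abs(v - i) for i, v in enumerate(order))
    print(evolve_order(stub_size, n_vars=8))
    ```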

    WCET and Priority Assignment Analysis of Real-Time Systems using Search and Machine Learning

    Real-time systems have become indispensable for human life as they are used in numerous industries, such as vehicles, medical devices, and satellite systems. These systems are very sensitive to violations of their time constraints (deadlines), which can have catastrophic consequences. To verify whether the systems meet their time constraints, engineers perform schedulability analysis from early stages and throughout development. However, there are challenges in obtaining precise results from schedulability analysis due to estimating the worst-case execution times (WCETs) and assigning optimal priorities to tasks. Estimating WCET is an important activity at early design stages of real-time systems. Based on such WCET estimates, engineers make design and implementation decisions to ensure that task executions always complete before their specified deadlines. However, in practice, engineers often cannot provide precise point estimates of WCET and prefer to provide plausible WCET ranges. Task priority assignment is an important decision, as it determines the order of task executions and has a substantial impact on schedulability results. It thus requires finding optimal priority assignments so that tasks not only complete their execution but also maximize the safety margins from their deadlines. Optimal priority values increase the tolerance of real-time systems to unexpected overheads in task executions so that they can still meet their deadlines. However, finding optimal priority assignments is a hard problem because their evaluation relies on uncertain WCET values and complex engineering constraints must be accounted for. This dissertation proposes three approaches to estimate WCET and assign optimal priorities at design stages. Combining a genetic algorithm and logistic regression, we first suggest an automatic approach to infer safe WCET ranges with a probabilistic guarantee based on the worst-case scheduling scenarios. We then introduce an extended approach to account for weakly hard real-time systems with an industrial schedule simulator. We evaluate our approaches by applying them to industrial systems from different domains and to several synthetic systems. The results suggest that the approaches can estimate probabilistic safe WCET ranges efficiently and accurately, so that deadline constraints are likely to be satisfied with a high degree of confidence. Moreover, we propose an automated technique that aims to identify the best possible priority assignments in real-time systems. The approach deals with multiple objectives regarding safety margins and engineering constraints using a coevolutionary algorithm. Evaluation with synthetic and industrial systems shows that the approach significantly outperforms both a baseline approach and solutions defined by practitioners. All the solutions in this dissertation scale to complex industrial systems for offline analysis within an acceptable time, i.e., at most 27 hours.
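
    A minimal sketch of the core idea behind the first approach, combining sampling with logistic regression: label sampled WCET candidates by whether a (stubbed) worst-case schedule still meets its deadline, fit a logistic model, and read off the largest WCET whose predicted miss probability stays below a bound. The schedulable stub, thresholds, and scikit-learn usage are illustrative assumptions, not the dissertation's tooling.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    DEADLINE = 100.0
    # Stub for the schedulability analysis: response time is the task's
    # WCET plus randomly drawn interference from other tasks.
    schedulable = lambda wcet: wcet + rng.normal(20, 5) <= DEADLINE

    # Sample candidate WCETs and label each: 1 = deadline miss.
    wcets = rng.uniform(40, 90, size=400)
    labels = np.array([0 if schedulable(w) else 1 for w in wcets])

    model = LogisticRegression().fit(wcets.reshape(-1, 1), labels)

    # Safe WCET range: values whose predicted miss probability is <= 5%.
    grid = np.linspace(40, 90, 1000).reshape(-1, 1)
    p_miss = model.predict_proba(grid)[:, 1]
    safe = grid[p_miss <= 0.05]
    print(f"safe WCET range: up to {safe.max():.1f}" if safe.size else "none")
    ```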

    Proceedings of The Multi-Agent Logics, Languages, and Organisations Federated Workshops (MALLOW 2010)

    http://ceur-ws.org/Vol-627/allproceedings.pdf
    MALLOW-2010 is the third edition of a series initiated in 2007 in Durham and pursued in 2009 in Turin. The objective, as initially stated, is to "provide a venue where: the cost of participation was minimum; participants were able to attend various workshops, so fostering collaboration and cross-fertilization; there was a friendly atmosphere and plenty of time for networking, by maximizing the time participants spent together".