
    Towards explainable metaheuristics: PCA for trajectory mining in evolutionary algorithms.

    The generation of explanations for decisions made by population-based metaheuristics is often difficult due to the nature of the mechanisms these approaches employ. With the increasing use of these methods for optimisation in industries that require end-user confirmation, the need for explanations has also grown. We present a novel approach to the extraction of features capable of supporting an explanation through the use of trajectory mining, that is, extracting key features from the populations of non-deterministic algorithms (NDAs). We apply Principal Components Analysis techniques to identify new methods of tracking population diversity post-runtime, after projection into a lower-dimensional space. These methods are applied to a set of benchmark problems solved by a Genetic Algorithm and a Univariate Estimation of Distribution Algorithm. We show that the new sub-space derived metrics can capture key learning steps in the algorithm run, and how solution variable patterns that explain the fitness function may be captured in the principal component coefficients.
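
    A minimal sketch of the post-runtime analysis this describes, assuming populations are retained per generation as binary arrays; the data, sizes, and variable names are illustrative stand-ins, not the paper's benchmarks:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Illustrative stand-in for a retained algorithm trajectory:
# one binary population (50 solutions x 20 variables) per generation.
generations = [rng.integers(0, 2, size=(50, 20)) for _ in range(30)]

# Fit PCA on every visited solution, then project each generation
# into the resulting lower-dimensional sub-space.
pca = PCA(n_components=2)
pca.fit(np.vstack(generations))

# Sub-space diversity metric: mean distance to the population
# centroid in the projected space, tracked per generation.
for g, pop in enumerate(generations):
    proj = pca.transform(pop)
    diversity = np.linalg.norm(proj - proj.mean(axis=0), axis=1).mean()
    print(f"generation {g:2d}: sub-space diversity = {diversity:.3f}")

# Large coefficients in a component hint at solution variables that
# drive variation in the population, and hence fitness structure.
print("PC1 coefficients:", np.round(pca.components_[0], 2))
```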

    Non-deterministic solvers and explainable AI through trajectory mining.

    Traditional methods of creating explanations from complex AI systems have produced a wide variety of tools for generating explanations of algorithm and network behaviour. These, however, have traditionally been aimed at systems that mimic the structure of human thought, such as neural networks. The growing adoption of AI systems in industry has led to research and roundtables regarding the ability to extract explanations from other systems, such as non-deterministic algorithms. This family of algorithms can be analysed, but the resulting explanations are often difficult for non-experts to understand. We outline a potential path to the generation of explanations that would not require expert-level knowledge to be correctly understood.

    Partial structure learning by subset Walsh transform.

    Estimation of distribution algorithms (EDAs) use structure learning to build a statistical model of the good solutions discovered so far, in an effort to discover better solutions. The non-zero coefficients of the Walsh transform produce a hypergraph representation of the structure of a binary fitness function; however, computation of all Walsh coefficients requires exhaustive evaluation of the search space. In this paper, we propose a stochastic method of determining Walsh coefficients for hyperedges contained within a selected subset of the variables (complete local structure). This method also detects parts of hyperedges which cut the boundary of the selected variable set (partial structure), which may be used to incrementally build an approximation of the problem hypergraph.
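
    A sketch of how such a sampled estimate might look, on a toy fitness function with a known interaction between variables 0 and 1; the estimator form, sample count, and noise threshold are assumptions for illustration, not the paper's exact method:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

def walsh_estimate(f, n, subset, n_samples=20000):
    """Monte Carlo estimate of one Walsh coefficient:
    w_S = 2^-n * sum_x f(x) * (-1)^(parity of x on S).
    Sampling x uniformly gives an unbiased estimate without
    exhaustively evaluating the search space."""
    xs = rng.integers(0, 2, size=(n_samples, n))
    signs = (-1) ** xs[:, list(subset)].sum(axis=1)
    return np.mean([f(x) * s for x, s in zip(xs, signs)])

# Toy fitness with a known pairwise interaction between variables 0 and 1.
def f(x):
    return 2.0 * x[0] * x[1] + x[2]

n = 6
selected = (0, 1, 2)  # the selected variable subset
for k in range(1, len(selected) + 1):
    for S in itertools.combinations(selected, k):
        w = walsh_estimate(f, n, S)
        if abs(w) > 0.05:  # discard sampling noise
            print(f"hyperedge {S}: estimated coefficient {w:+.3f}")
```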

    Exploring representations for optimising connected autonomous vehicle routes in multi-modal transport networks using evolutionary algorithms.

    The past five years have seen rapid development of plans and test pilots aimed at introducing connected and autonomous vehicles (CAVs) in public transport systems around the world. While self-driving technology is still being perfected, public transport authorities are increasingly interested in the ability to model and optimise the benefits of adding CAVs to existing multi-modal transport systems. Using a real-world scenario from the Leeds Metropolitan Area as a case study, we demonstrate an effective way of combining macro-level mobility simulations based on open data with global optimisation techniques to discover realistic optimal deployment strategies for CAVs. The macro-level mobility simulations are used to assess the quality of a potential multi-route CAV service by quantifying geographic accessibility improvements using an extended version of Dijkstra's algorithm on an abstract multi-modal transport network. The optimisations were carried out using several popular population-based algorithms, combined with routing strategies aimed at constructing the best routes by ordering stops in a realistic sequence.
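
    The core accessibility computation can be pictured with plain Dijkstra on a toy multi-modal graph; the network below is invented for illustration, and the paper's extended version adds mode-change penalties and timetable constraints that are omitted here:

```python
import heapq

# Toy multi-modal network: edges are (neighbour, minutes, mode).
# Adding a CAV service amounts to inserting new 'cav' edges between stops.
network = {
    "home":       [("bus_stop_a", 5, "walk")],
    "bus_stop_a": [("bus_stop_b", 12, "bus"), ("cav_stop_1", 3, "walk")],
    "cav_stop_1": [("cav_stop_2", 6, "cav")],
    "cav_stop_2": [("centre", 2, "walk")],
    "bus_stop_b": [("centre", 8, "walk")],
    "centre":     [],
}

def shortest_times(graph, source):
    """Dijkstra over the multi-modal graph; returns the travel time
    to every reachable node, a simple accessibility proxy."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue
        for nbr, cost, _mode in graph[node]:
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

print(shortest_times(network, "home"))
```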

    Towards explainable metaheuristics: feature extraction from trajectory mining.

    Explaining the decisions made by population-based metaheuristics can often be difficult due to the stochastic nature of the mechanisms employed by these optimisation methods. As industries continue to adopt these methods in areas that increasingly require end-user input and confirmation, the need to explain the internal decisions being made has grown. In this article, we present our approach to the extraction of explanation-supporting features using trajectory mining. This is achieved through the application of principal components analysis techniques to identify new methods of tracking population diversity changes post-runtime. The algorithm search trajectories were generated by solving a set of benchmark problems with a genetic algorithm and a univariate estimation of distribution algorithm, retaining all visited candidate solutions, which were then projected to a lower-dimensional sub-space. We also varied the selection pressure placed on high-fitness solutions by altering the selection operators. Our results show that metrics derived from the projected sub-space algorithm search trajectories are capable of capturing key learning steps, and that solution variable patterns that explain the fitness function may be captured in the principal component coefficients. A comparative study of variable importance rankings derived from a surrogate model built on the same dataset was also performed. The results show that both approaches are capable of identifying key features regarding variable interactions and their influence on fitness in a complementary fashion.
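
    The comparative surrogate study could look roughly like the following; the abstract does not name the surrogate model, so a random forest on synthetic trajectory-like data stands in here purely for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)

# Stand-in for retained candidate solutions and their fitness values;
# in the article these come from GA / UMDA search trajectories.
X = rng.integers(0, 2, size=(2000, 10))
y = 3.0 * X[:, 0] * X[:, 1] + X[:, 2] + 0.1 * rng.normal(size=2000)

# Surrogate fitted on the trajectory data; its impurity-based
# importances give a variable ranking to compare against PCA loadings.
surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
ranking = np.argsort(surrogate.feature_importances_)[::-1]
print("variables ranked by surrogate importance:", ranking)
```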

    Structural coherence of problem and algorithm: an analysis for EDAs on all 2-bit and 3-bit problems.

    Metaheuristics assume some kind of coherence between decision and objective spaces. Estimation of Distribution Algorithms approach this by constructing an explicit probabilistic model of high-fitness solutions, the structure of which is intended to reflect the structure of the problem. In this context, 'structure' means the dependencies or interactions between problem variables in a probabilistic graphical model. There are many approaches to discovering these dependencies, and existing work has already shown that these approaches often discover 'unnecessary' elements of structure, that is, elements which are not needed to correctly rank solutions. This work performs an exhaustive analysis of all 2-bit and 3-bit problems, grouped into classes based on monotonic invariance. It is shown in [1] that each class has a minimal Walsh structure that can be used to solve the problem. We compare the structure discovered by different structure learning approaches to the minimal Walsh structure for each class, with summaries of which interactions are (in)correctly identified. Our analysis reveals a large number of symmetries that may be used to simplify problem solving. We show that negative selection can result in improved coherence between discovered and necessary structure, and conclude with some directions for a general programme of study building on this work.
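
    At 3 bits the full Walsh transform is tractable by enumeration, which is what makes exhaustive comparison against discovered structure possible; a sketch on a hand-built function (the function and zero threshold are illustrative):

```python
import itertools

def walsh_coefficients(f_table, n):
    """Exact Walsh transform of a function given as a truth table of
    length 2^n. Each variable subset S with a non-zero coefficient is
    a hyperedge of the problem structure."""
    coeffs = {}
    subsets = itertools.chain.from_iterable(
        itertools.combinations(range(n), k) for k in range(n + 1))
    for S in subsets:
        total = 0.0
        for x in range(2 ** n):
            bits = [(x >> i) & 1 for i in range(n)]
            sign = (-1) ** sum(bits[i] for i in S)
            total += f_table[x] * sign
        coeffs[S] = total / 2 ** n
    return coeffs

# A 3-bit function with a pairwise interaction between variables 0 and 1.
f_table = [float((x & 1) * ((x >> 1) & 1) + ((x >> 2) & 1)) for x in range(8)]
structure = {S: w for S, w in walsh_coefficients(f_table, 3).items()
             if abs(w) > 1e-9}
print(structure)
```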

    Generating easy and hard problems using the proximate optimality principle.

    We present an approach to generating problems of variable difficulty based on the well-known Proximate Optimality Principle (POP), often paraphrased as 'similar solutions have similar fitness'. We explore definitions of this concept in terms of metrics in objective space and in representation space, and define POP in terms of the coherence of these metrics. We hypothesise that algorithms will perform well when the neighbourhoods they explore in representation space are coherent with the natural metric induced by fitness on objective space. We develop an explicit method of problem generation which creates bit string problems where the natural fitness metric is coherent or anti-coherent with Hamming neighbourhoods. We conduct experiments to show that coherent problems are easy whereas anti-coherent problems are hard for local hill climbers using Hamming neighbourhoods.
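
    An illustrative way to build such a coherent/anti-coherent pair, assuming a hidden optimum for the coherent case and a hash-based pseudo-random landscape as a stand-in for anti-coherence; both constructions are assumptions for demonstration, not the paper's generator:

```python
import random
from itertools import product

random.seed(3)
N = 16
optimum = [random.randint(0, 1) for _ in range(N)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Coherent problem: fitness varies smoothly with Hamming distance to a
# hidden optimum, so Hamming neighbours have similar fitness (POP holds).
def coherent(x):
    return N - hamming(x, optimum)

# Anti-coherent stand-in: a pseudo-random function of the whole string,
# so Hamming neighbours have unrelated fitness (POP broken).
def anti_coherent(x):
    return hash(tuple(x)) % 1000

def hill_climb(fitness, steps=2000):
    x = [random.randint(0, 1) for _ in range(N)]
    for _ in range(steps):
        i = random.randrange(N)
        y = x.copy()
        y[i] ^= 1                      # one-bit Hamming neighbour
        if fitness(y) >= fitness(x):
            x = y
    return fitness(x)

def global_max(fitness):
    return max(fitness(list(x)) for x in product([0, 1], repeat=N))

for name, f in [("coherent", coherent), ("anti-coherent", anti_coherent)]:
    best = max(hill_climb(f) for _ in range(10))
    print(f"{name}: best found {best} of global max {global_max(f)}")
```

    Under this construction the Hamming hill climber reliably reaches the coherent optimum, while on the anti-coherent landscape it tends to stall at local optima below the global maximum.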

    Minimal Walsh structure and ordinal linkage of monotonicity-invariant function classes on bit strings.

    Problem structure, or linkage, refers to the interaction between variables in a black-box fitness function. Discovering structure is a feature of a range of algorithms, including estimation of distribution algorithms (EDAs) and perturbation methods (PMs). The complexity of structure has traditionally been used as a broad measure of problem difficulty, as computational complexity relates directly to the complexity of structure. The EDA literature describes necessary and unnecessary interactions in terms of the relationship between problem structure and the structure of probabilistic graphical models discovered by the EDA. In this paper we introduce a classification of problems based on monotonicity invariance. We observe that the minimal problem structures for these classes often reveal that significant proportions of detected structures are unnecessary. We perform a complete classification of all functions on 3 bits. We consider non-monotonicity linkage discovery using perturbation methods and derive a concept of directed ordinal linkage associated with optimisation schedules. The resulting refined classification, factored out by relabelling, shows a hierarchy of nine directed ordinal linkage classes for all 3-bit functions. We show that this classification allows precise analysis of computational complexity and parallelisability, and conclude with a number of suggestions for future work.
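
    Monotonicity invariance can be made concrete by canonicalising a truth table to its rank pattern: two functions belong to the same class when one is a strictly increasing transform of the other, so they rank all solutions identically. A small sketch, where the dense-ranking canonical form is an illustrative choice:

```python
import math

def ordinal_class(table):
    """Canonical representative under monotonicity invariance: replace
    each fitness value by its dense rank, so any strictly increasing
    transform of the function maps to the same class."""
    ranks = {v: r for r, v in enumerate(sorted(set(table)))}
    return tuple(ranks[v] for v in table)

# Two 2-bit truth tables related by a monotone transform (g = exp(f)).
f = [0.1, 0.5, 0.5, 2.0]
g = [math.exp(v) for v in f]
print(ordinal_class(f) == ordinal_class(g))  # True: same invariance class
```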

    Comparison of simulated annealing and evolution strategies for optimising cyclical rosters with uneven demand and flexible trainee placement.

    Rosters are often used to meet real-world staff scheduling requirements. Multiple design factors, such as demand variability, shift type placement, annual leave requirements, staff well-being, and the placement of trainees, need to be considered when constructing good rosters. In the present work we propose a metaheuristic-based strategy for designing optimal cyclical rosters that can accommodate uneven demand patterns. A key part of our approach relies on integrating an efficient optimal trainee placement module within the metaheuristic-driven search. Results obtained on a real-life problem proposed by the Port of Aberdeen indicate that, by incorporating a demand-informed random rota initialisation procedure, our strategy can generally achieve high-quality end-of-run solutions when using relatively simple base solvers like simulated annealing (SA) and evolution strategies (ES). While ES converges faster, SA achieves better final solution quality, with both approaches improving on the manually constructed baseline.
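
    A highly simplified sketch of the SA solver with a demand-informed initialisation, on a toy coverage-only objective; the demand pattern, cost function, and parameters are invented, and the real problem's shift types, leave, well-being, and trainee placement constraints are omitted:

```python
import math
import random

random.seed(4)

DAYS = 14
STAFF = 6
demand = [3, 3, 4, 5, 5, 2, 2] * 2   # uneven weekly demand, two weeks

def coverage_cost(rota):
    """Penalise the gap between staff on shift and demand each day."""
    return sum(abs(sum(col) - d) for col, d in zip(zip(*rota), demand))

def demand_informed_init():
    """Bias initial shift probability by relative demand (a stand-in
    for the demand-informed random rota initialisation)."""
    peak = max(demand)
    return [[1 if random.random() < demand[d] / peak else 0
             for d in range(DAYS)] for _ in range(STAFF)]

def simulated_annealing(temp=5.0, cooling=0.995, steps=5000):
    rota = demand_informed_init()
    cost = coverage_cost(rota)
    for _ in range(steps):
        s, d = random.randrange(STAFF), random.randrange(DAYS)
        rota[s][d] ^= 1                # flip one shift assignment
        new_cost = coverage_cost(rota)
        if new_cost <= cost or random.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost            # accept the move
        else:
            rota[s][d] ^= 1            # revert the flip
        temp *= cooling
    return rota, cost

rota, cost = simulated_annealing()
print("final coverage cost:", cost)
```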