    Towards explainable metaheuristics: PCA for trajectory mining in evolutionary algorithms.

    The generation of explanations for decisions made by population-based metaheuristics is often difficult due to the nature of the mechanisms these approaches employ. As these methods see increasing use for optimisation in industries that require end-user confirmation, the need for explanations has also grown. We present a novel approach to the extraction of features capable of supporting an explanation through trajectory mining: extracting key features from the populations of non-deterministic algorithms (NDAs). We apply Principal Components Analysis (PCA) techniques to identify new methods of tracking population diversity post-runtime, after projection into a lower-dimensional space. These methods are applied to a set of benchmark problems solved by a Genetic Algorithm and a Univariate Estimation of Distribution Algorithm. We show that the new sub-space-derived metrics can capture key learning steps in the algorithm run, and that solution variable patterns which explain the fitness function may be captured in the principal component coefficients.
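
    A minimal sketch of the trajectory-mining idea described above, assuming the optimiser logs each generation's population of candidate solutions; the function name, the logging format and the use of scikit-learn are illustrative assumptions, not the authors' implementation:

        import numpy as np
        from sklearn.decomposition import PCA

        def subspace_diversity(populations, n_components=2):
            # populations: list of (pop_size, n_vars) arrays, one per
            # generation, i.e. the retained post-runtime search trajectory.
            trajectory = np.vstack(populations)
            pca = PCA(n_components=n_components).fit(trajectory)
            diversity = []
            for generation in populations:
                z = pca.transform(generation)  # generation in the PC sub-space
                diversity.append(np.linalg.norm(z - z.mean(axis=0), axis=1).mean())
            # pca.components_ holds the coefficients in which variable patterns
            # that explain the fitness function may be captured.
            return np.array(diversity), pca.components_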

    Non-deterministic solvers and explainable AI through trajectory mining.

    Traditional methods of creating explanations from complex AI systems have produced a wide variety of tools that users can apply to generate explanations of algorithm and network designs. These tools, however, have traditionally been aimed at systems that mimic the structure of human thought, such as neural networks. The growing adoption of AI systems in industry has led to research and roundtables on the ability to extract explanations from other systems, such as non-deterministic algorithms. This family of algorithms can be analysed, but the explanation of events can often be difficult for non-experts to understand. We outline a potential path to the generation of explanations that would not require expert-level knowledge to be correctly understood.

    Explaining a staff rostering genetic algorithm using sensitivity analysis and trajectory analysis.

    In the field of Explainable AI, population-based search metaheuristics are of growing interest as they become more widely used in critical applications. The ability to relate key information regarding algorithm behaviour and the drivers of solution quality to an end-user is vital. This paper investigates a novel method of explanatory feature extraction based on analysis of the search trajectory and compares the results to those of sensitivity analysis using “Weighted Ranked Biased Overlap”. We apply these techniques to search trajectories generated by a genetic algorithm as it solves a staff rostering problem. We show that there is a significant overlap between these two explainability methods when identifying subsets of rostered workers whose allocations are responsible for large portions of fitness change in an optimization run. Both methods identify similar patterns in sensitivity, but our method also draws out additional information: as the search progresses, the techniques reveal how individual workers increase or decrease in influence on the overall rostering solution’s quality. Our method also helps identify workers with a lower impact on overall solution fitness, and at what stage in the search these individuals can be considered highly flexible in their roster assignment.
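
    As a rough illustration of how two importance rankings can be compared, the following is a minimal sketch of plain truncated rank-biased overlap (Webber et al., 2010); the weighted variant named in the paper and the example worker labels below are not taken from the source:

        def rbo(ranking_a, ranking_b, p=0.9):
            # Truncated rank-biased overlap: agreement at depth d is weighted
            # by p**(d - 1), so agreement near the top of the rankings dominates.
            depth = min(len(ranking_a), len(ranking_b))
            seen_a, seen_b, score = set(), set(), 0.0
            for d in range(1, depth + 1):
                seen_a.add(ranking_a[d - 1])
                seen_b.add(ranking_b[d - 1])
                score += p ** (d - 1) * len(seen_a & seen_b) / d
            return (1 - p) * score

        # e.g. workers ranked by sensitivity analysis vs. by trajectory analysis
        print(rbo(["w3", "w1", "w7", "w2"], ["w3", "w7", "w1", "w5"]))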

    Towards explainable metaheuristics: feature extraction from trajectory mining.

    Explaining the decisions made by population-based metaheuristics can often be difficult due to the stochastic nature of the mechanisms employed by these optimisation methods. As industries continue to adopt these methods in areas that increasingly require end-user input and confirmation, the need to explain the internal decisions being made has grown. In this article, we present our approach to the extraction of explanation-supporting features using trajectory mining. This is achieved through the application of principal components analysis techniques to identify new methods of tracking population diversity changes post-runtime. The algorithm search trajectories were generated by solving a set of benchmark problems with a genetic algorithm and a univariate estimation of distribution algorithm, retaining all visited candidate solutions, which were then projected to a lower-dimensional sub-space. We also varied the selection pressure placed on high-fitness solutions by altering the selection operators. Our results show that metrics derived from the projected sub-space algorithm search trajectories are capable of capturing key learning steps, and that solution variable patterns which explain the fitness function may be captured in the principal component coefficients. A comparative study of variable importance rankings derived from a surrogate model built on the same dataset was also performed. The results show that both approaches are capable of identifying key features regarding variable interactions and their influence on fitness in a complementary fashion.
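
    The abstract does not name the surrogate model, so the following is only a hedged sketch of the comparative step, assuming a random-forest surrogate fitted to the retained (solution, fitness) pairs:

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        def surrogate_importance_ranking(solutions, fitnesses):
            # solutions: (n_visited, n_vars) array of all retained candidates;
            # fitnesses: (n_visited,) array of their evaluated fitness values.
            surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
            surrogate.fit(solutions, fitnesses)
            # Rank variables most-important-first, for comparison against the
            # ranking implied by the principal component coefficient magnitudes.
            return np.argsort(surrogate.feature_importances_)[::-1]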

    On the mechanisms governing gas penetration into a tokamak plasma during a massive gas injection

    A new 1D radial fluid code, IMAGINE, is used to simulate the penetration of gas into a tokamak plasma during a massive gas injection (MGI). The main result is that the gas is in general strongly braked as it reaches the plasma, due to mechanisms related to charge exchange and (to a smaller extent) recombination. As a result, only a fraction of the gas penetrates into the plasma. Also, a shock wave is created in the gas, which propagates away from the plasma, braking and compressing the incoming gas. Simulation results are quantitatively consistent, at least in terms of orders of magnitude, with experimental data for a D2 MGI into a JET Ohmic plasma. Simulations of MGI into the background plasma surrounding a runaway electron beam show that if the background electron density is too high, the gas may not penetrate, suggesting a possible explanation for the recent results of Reux et al in JET (2015 Nucl. Fusion 55 093013).

    Modelling of the effect of ELMs on fuel retention at the bulk W divertor of JET

    The effect of ELMs on fuel retention at the bulk W target of the JET ITER-Like Wall was studied with multi-scale calculations. Plasma input parameters were taken from an ELMy H-mode plasma experiment. The energetic intra-ELM fuel particles get implanted and create near-surface defects up to depths of a few tens of nm, which act as the main fuel trapping sites during ELMs. Clustering of implantation-induced vacancies was found to take place. The incoming flux of inter-ELM plasma particles increases the filling levels of trapped fuel in the defects. The temperature increase of the W target during the pulse increases the fuel detrapping rate. The inter-ELM fuel particle flux refills the partially emptied trapping sites and fills new ones. This leads to competing effects on the retention and release rates of the implanted particles. At high temperatures, most of the retention occurred in larger vacancy clusters due to the increased clustering rate.

    Overview of the JET ITER-like wall divertor

    Power exhaust by SOL and pedestal radiation at ASDEX Upgrade and JET

    Current Research into Applications of Tomography for Fusion Diagnostics

    Retrieving the spatial distribution of plasma emissivity from line-integrated measurements on tokamaks presents a challenging task due to the ill-posedness of the tomography problem and the limited number of lines of sight. Modern methods of plasma tomography therefore incorporate a priori information as well as constraints, in particular some form of penalisation of complexity. In this contribution, the tomography methods currently under development (Tikhonov regularisation, Bayesian methods and neural networks) are briefly explained, taking into account their potential for integration into fusion reactor diagnostics. In particular, the current development of the Minimum Fisher Regularisation method is exemplified with respect to real-time reconstruction capability, combination with spectral unfolding, and other prospective tasks.
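
    As a small illustration of the penalised-inversion idea (a generic Tikhonov scheme, not the Minimum Fisher Regularisation method itself), here is a minimal sketch; the geometry matrix T, the measurement vector f and the penalty weight lam are assumed inputs:

        import numpy as np

        def tikhonov_reconstruct(T, f, lam=1e-2):
            # Solve min_g ||T g - f||^2 + lam * ||D g||^2, where T maps pixel
            # emissivities g to line-integrated measurements f, and D is a
            # first-difference operator penalising rough (complex) solutions.
            n = T.shape[1]
            D = np.eye(n) - np.eye(n, k=1)
            A = T.T @ T + lam * (D.T @ D)  # regularised normal equations
            return np.linalg.solve(A, T.T @ f)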