6 research outputs found

    XMAP: eXplainable mapping analytical process

    No full text
    Abstract As the number of artificial intelligence (AI) applications increases rapidly and more people are affected by AI’s decisions, there is a real need for novel AI systems that can deliver both accuracy and explanations. To address this need, this paper proposes a new approach called eXplainable Mapping Analytical Process (XMAP). Unlike existing works in explainable AI, XMAP is highly modularised, and the interpretability of each step can be easily obtained and visualised. A number of core algorithms are developed in XMAP to capture the distributions and topological structures of data, define contexts that emerge from data, and build effective representations for classification tasks. The experiments show that XMAP can provide useful and interpretable insights across analytical steps. For the binary classification task, its predictive performance is very competitive compared to advanced machine learning algorithms in the literature. On some large datasets, XMAP can even outperform black-box algorithms without losing its interpretability.

    Variable-length particle swarm optimization for feature selection on high-dimensional classification

    No full text
    With a global search mechanism, particle swarm optimization (PSO) has shown promise in feature selection (FS). However, most current PSO-based FS methods use a fixed-length representation, which is inflexible and limits the performance of PSO for FS. When applied to high-dimensional data, these methods not only consume a significant amount of memory but also incur a high computational cost. Overcoming this limitation enables PSO to work on data with much higher dimensionality, which has become increasingly common with the advance of data collection technologies. In this paper, we propose the first variable-length PSO representation for FS, enabling particles to have different and shorter lengths, which defines a smaller search space and therefore improves the performance of PSO. By rearranging features in descending order of their relevance, we enable particles with shorter lengths to achieve better classification performance. Furthermore, using the proposed length-changing mechanism, PSO can jump out of local optima, further narrow the search space and focus its search on a smaller and more fruitful area. These strategies enable PSO to reach better solutions in a shorter time. Results on ten high-dimensional datasets of varying difficulty show that the proposed variable-length PSO can achieve much smaller feature subsets with significantly higher classification performance in a much shorter time than fixed-length PSO methods. The proposed method also outperformed the compared non-PSO FS methods in most cases.
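    The abstract describes three ingredients: a variable-length binary representation, features pre-ranked by descending relevance, and a length-changing mechanism. The toy Python sketch below illustrates how those pieces could fit together; it is an assumption-laden illustration, not the paper's method. The relevance scores, the surrogate fitness (which stands in for real classification accuracy), the shrink-toward-gbest length schedule, and all constants are invented for the example.

    ```python
    import math
    import random

    random.seed(0)

    N_FEATURES = 50
    RELEVANT = set(range(10))  # assumption: the first 10 features carry signal

    def relevance(f):
        # hypothetical relevance score; a real system would use e.g. mutual information
        return (1.0 if f in RELEVANT else 0.1) + random.random() * 0.05

    # rank features in descending order of relevance, as the abstract describes
    ranking = sorted(range(N_FEATURES), key=relevance, reverse=True)

    def fitness(mask, length):
        # toy surrogate for classification accuracy: reward relevant features,
        # lightly penalise subset size (real use would train a classifier here)
        chosen = [ranking[i] for i in range(length) if mask[i]]
        if not chosen:
            return 0.0
        hits = sum(1 for f in chosen if f in RELEVANT)
        return hits / len(RELEVANT) - 0.01 * len(chosen)

    def sigmoid(v):
        return 1.0 / (1.0 + math.exp(-v))

    class Particle:
        def __init__(self, length):
            self.length = length  # each particle has its own, possibly shorter, length
            self.vel = [random.uniform(-1, 1) for _ in range(length)]
            self.pos = [random.random() < 0.5 for _ in range(length)]
            self.best_pos, self.best_fit = list(self.pos), fitness(self.pos, length)

    swarm = [Particle(random.randint(10, N_FEATURES)) for _ in range(20)]
    best = max(swarm, key=lambda p: p.best_fit)
    gbest_pos, gbest_len, gbest_fit = list(best.best_pos), best.length, best.best_fit

    for it in range(50):
        for p in swarm:
            for d in range(p.length):
                # binary-PSO velocity update; gbest may be shorter than this particle
                g = gbest_pos[d] if d < gbest_len else 0
                p.vel[d] += random.random() * (p.best_pos[d] - p.pos[d]) \
                          + random.random() * (g - p.pos[d])
                p.vel[d] = max(-4.0, min(4.0, p.vel[d]))  # clamp velocity
                p.pos[d] = random.random() < sigmoid(p.vel[d])
            f = fitness(p.pos, p.length)
            if f > p.best_fit:
                p.best_fit, p.best_pos = f, list(p.pos)
                if f > gbest_fit:
                    gbest_fit, gbest_pos, gbest_len = f, list(p.pos), p.length
        # simplified length-changing mechanism: periodically shrink particles
        # longer than the global best towards its length, truncating state
        if it % 10 == 9:
            for p in swarm:
                if p.length > gbest_len:
                    p.length = max(gbest_len, p.length // 2 + gbest_len // 2)
                    p.pos, p.vel = p.pos[:p.length], p.vel[:p.length]
                    p.best_pos = p.best_pos[:p.length]
                    p.best_fit = fitness(p.best_pos, p.length)

    selected = [ranking[i] for i in range(gbest_len) if gbest_pos[i]]
    print(len(selected), round(gbest_fit, 3))
    ```

    Because highly relevant features sit at the front of the ranking, even a heavily truncated particle still covers the most promising features, which is the intuition the abstract gives for why shorter lengths need not hurt classification performance.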

    A Comprehensive Review of Digital Twin Technology for Grid-Connected Microgrid Systems: State of the Art, Potential and Challenges Faced

    No full text
    The concept of the digital twin has been adopted as an important aspect of the digital transformation of power systems. Although the notion of the digital twin is not new, its adoption in the energy sector has been recent and has targeted increased operational efficiency. This paper addresses an important gap in the research literature by reviewing the state of the art in the utilisation of digital twin technology in microgrids, an important component of power systems. A microgrid is a local power network that acts as a dependable island within larger regional and national electricity networks, providing power without interruption even when the main grid is down. Microgrids are essential components of smart cities that are both resilient and sustainable, giving smart cities the opportunity to develop sustainable energy delivery systems. Due to the complexity of designing, developing and maintaining a microgrid, an efficient simulation model with the ability to handle this complexity and its spatio-temporal nature is important. Digital twin technologies have the potential to address these requirements by providing an exact virtual model of the physical entity of the power system. The paper reviews the application of digital twins in a microgrid at the electrical points where the microgrid connects to or disconnects from the main distribution grid, that is, the points of common coupling. Furthermore, potential applications of the digital twin in microgrids for better control, security and resilient operation, as well as the challenges faced, are also discussed.

    Fairness optimisation with multi-objective swarms for explainable classifiers on data streams

    No full text
    Recently, advanced AI systems equipped with sophisticated learning algorithms have emerged, enabling the processing of extensive streaming data for online decision-making in diverse domains. However, the widespread deployment of these systems has prompted concerns regarding potential ethical issues, particularly the risk of discrimination that can adversely impact certain community groups. This issue has proven challenging to address in the context of streaming data, where the data distribution can change over time, including changes in the level of discrimination within the data. In addition, transparent models like decision trees are favoured in such applications because they illustrate the decision-making process. However, it is essential to keep the models compact, because the explainability of large models can diminish. Existing methods usually mitigate discrimination at the cost of accuracy. Accuracy and discrimination can therefore be considered conflicting objectives, and current methods are still limited in controlling the trade-off between them. This paper proposes a method that can incrementally learn classification models from streaming data and automatically adjust the learnt models to balance multiple objectives simultaneously. The novelty of this research is a multi-objective algorithm, based on swarm intelligence, that maximises accuracy while simultaneously minimising discrimination and model size. Experimental results on six real-world datasets show that the proposed algorithm can evolve fairer and simpler classifiers while maintaining competitive accuracy compared to existing state-of-the-art methods tailored for streaming data.
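    The swarm algorithm itself is not specified in the abstract, but any multi-objective optimiser of this kind rests on a Pareto dominance test over the stated objectives: accuracy (maximise), discrimination (minimise), and model size (minimise). A minimal sketch of that test, with invented candidate scores for illustration:

    ```python
    from typing import List, Tuple

    # Candidate classifiers scored as (accuracy, discrimination, model_size):
    # accuracy is maximised; discrimination and model size are minimised.
    Candidate = Tuple[float, float, int]

    def dominates(a: Candidate, b: Candidate) -> bool:
        """True if a is at least as good as b on every objective
        and strictly better on at least one."""
        at_least = a[0] >= b[0] and a[1] <= b[1] and a[2] <= b[2]
        strictly = a[0] > b[0] or a[1] < b[1] or a[2] < b[2]
        return at_least and strictly

    def pareto_front(pop: List[Candidate]) -> List[Candidate]:
        # keep every candidate not dominated by any other
        return [c for c in pop if not any(dominates(o, c) for o in pop if o != c)]

    pop = [
        (0.92, 0.15, 40),   # accurate but discriminatory and large
        (0.88, 0.05, 25),   # small accuracy loss, much fairer and smaller
        (0.85, 0.05, 30),   # dominated by the candidate above
        (0.80, 0.02, 10),   # fairest and smallest
    ]
    front = pareto_front(pop)
    print(front)  # the dominated (0.85, 0.05, 30) is filtered out
    ```

    Exposing the whole front, rather than a single model, is what lets such a method control the accuracy–fairness–size trade-off that the abstract says existing approaches handle poorly.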