6 research outputs found

    Identification of significant factors for air pollution levels using a neural network based knowledge discovery system

    Artificial neural networks (ANNs) are a commonly used approach to estimate or forecast air pollution levels, which are usually assessed by the concentrations of air contaminants such as nitrogen dioxide, sulfur dioxide, carbon monoxide, ozone, and suspended particulate matter (PM) in the atmosphere of the areas concerned. Even though ANNs can accurately estimate air pollution levels, they are numerical enigmas, unable to provide explicit knowledge of how air pollution factors (e.g. traffic and meteorological factors) determine those levels. This paper proposes a neural network based knowledge discovery system aimed at overcoming this limitation of ANNs. The system consists of two units: a) an ANN unit, which estimates air pollution levels from relevant air pollution factors; b) a knowledge discovery unit, which extracts explicit knowledge from the ANN unit. To demonstrate the practicability of this system, numerical data on mass concentrations of PM2.5 and PM1.0, together with meteorological and traffic data measured near a busy traffic road in Hangzhou, were used to investigate the air pollution levels and the potential air pollution factors that may affect the concentrations of these PMs. Results suggest that the proposed neural network based knowledge discovery system can accurately estimate air pollution levels and identify the significant factors that affect them.
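
    As a rough illustration of the two-unit architecture described above, the sketch below trains a small neural network to estimate PM2.5 from air pollution factors and then distils its behaviour into explicit if-then rules via an interpretable surrogate. This is a minimal sketch assuming scikit-learn, with synthetic data and illustrative feature names; it is not the authors' implementation or the Hangzhou dataset.

    # ANN unit + knowledge discovery unit, sketched with scikit-learn.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.tree import DecisionTreeRegressor, export_text

    rng = np.random.default_rng(0)
    n = 2000
    X = np.column_stack([
        rng.uniform(0, 3000, n),   # traffic flow (vehicles/h)
        rng.uniform(0, 10, n),     # wind speed (m/s)
        rng.uniform(-5, 35, n),    # temperature (deg C)
        rng.uniform(20, 100, n),   # relative humidity (%)
    ])
    # Synthetic PM2.5 response: rises with traffic, falls with wind.
    y = 0.02 * X[:, 0] - 4.0 * X[:, 1] + 0.3 * X[:, 3] + rng.normal(0, 5, n)

    # ANN unit: estimates pollution levels from the factors.
    ann = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                       random_state=0).fit(X, y)

    # Knowledge discovery unit (pedagogical style): query the trained ANN
    # and distil its input-output behaviour into explicit if-then rules.
    surrogate = DecisionTreeRegressor(max_depth=3, random_state=0)
    surrogate.fit(X, ann.predict(X))
    print(export_text(surrogate, feature_names=[
        "traffic_flow", "wind_speed", "temperature", "humidity"]))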

    Rule-Extraction Methods From Feedforward Neural Networks: A Systematic Literature Review

    Motivated by the interpretability question in ML models as a crucial element for the successful deployment of AI systems, this paper focuses on rule extraction as a means for neural network interpretability. Through a systematic literature review, different approaches for extracting rules from feedforward neural networks, an important building block of deep learning models, are identified and explored. The findings reveal a range of methods developed over more than two decades, mostly suitable for shallow neural networks, with recent developments addressing the challenges of deep learning models. Rules offer a transparent and intuitive means of explaining neural networks, making this study a comprehensive introduction for researchers interested in the field. While the study specifically addresses feedforward networks with supervised learning and crisp rules, future work can extend to other network types, machine learning methods, and fuzzy rule extraction.
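
    As a minimal sketch of one family of methods covered by such reviews, the pedagogical approach below treats a trained feedforward network as an oracle and induces crisp if-then rules from its predictions, then reports fidelity, i.e. how often the rules agree with the network. The dataset, hyperparameters, and scikit-learn stack are illustrative assumptions, not a method prescribed by the paper.

    # Pedagogical (black-box) rule extraction from a feedforward network.
    from sklearn.datasets import load_iris
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_iris(return_X_y=True)
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=3000,
                        random_state=0).fit(X, y)

    # Induce rules from the network's own labels, not the ground truth,
    # so the rules describe what the network has learned.
    oracle_labels = net.predict(X)
    rules = DecisionTreeClassifier(max_depth=3, random_state=0)
    rules.fit(X, oracle_labels)

    # Fidelity: fraction of inputs where the rules agree with the network.
    fidelity = (rules.predict(X) == oracle_labels).mean()
    print(f"fidelity to the network: {fidelity:.2%}")
    print(export_text(rules, feature_names=[
        "sepal_len", "sepal_wid", "petal_len", "petal_wid"]))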

    Accurate and interpretable classification of microspectroscopy pixels using artificial neural networks

    This paper addresses the problem of classifying materials from microspectroscopy at a pixel level. The challenges lie in identifying discriminatory spectral features and obtaining accurate and interpretable models relating spectra and class labels. We approach the problem by designing a supervised classifier from a tandem of Artificial Neural Network (ANN) models that identify relevant features in raw spectra and achieve high classification accuracy. The tandem of ANN models is meshed with classification rule extraction methods to lower the model complexity and to achieve interpretability of the resulting model. The contribution of the work is in designing each ANN model around the microspectroscopy hypothesis that a discriminatory feature of a given target class is composed of a linear combination of spectra. The novelty lies in meshing ANN and decision rule models into a tandem configuration to achieve accurate and interpretable classification results. The proposed method was evaluated on a set of broadband coherent anti-Stokes Raman scattering (BCARS) microscopy cell images (600 000 pixel-level spectra) and a reference four-class rule-based model previously created by biochemical experts. The generated rule-based classification model was on average 85% accurate as measured by the DICE pixel-label similarity metric, and on average 96% similar to the reference rules as measured by the vector cosine metric.
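
    A hedged sketch of the tandem idea under the paper's linear-combination hypothesis: synthetic "spectra" are generated as linear mixtures of class reference spectra, a small neural network learns compact discriminatory features, and a decision tree then yields interpretable rules over those features. Everything below (scikit-learn, layer sizes, the synthetic generator) is an assumption for illustration, not the authors' BCARS pipeline.

    # Tandem of ANN feature learning and decision-rule extraction.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)
    n_per_class, n_channels, n_classes = 300, 64, 4
    # Each class has a reference spectrum; each pixel is a linear
    # combination of that reference and a shared background, plus noise.
    refs = rng.uniform(0, 1, (n_classes, n_channels))
    background = rng.uniform(0, 1, n_channels)
    X, y = [], []
    for c in range(n_classes):
        a = rng.uniform(0.5, 1.5, (n_per_class, 1))
        b = rng.uniform(0.0, 0.5, (n_per_class, 1))
        X.append(a * refs[c] + b * background
                 + rng.normal(0, 0.05, (n_per_class, n_channels)))
        y.append(np.full(n_per_class, c))
    X, y = np.vstack(X), np.concatenate(y)

    # First stage: the ANN compresses raw spectra into a few features.
    net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                        random_state=0).fit(X, y)
    # Recover the first hidden layer's activations (default ReLU).
    hidden = np.maximum(0, X @ net.coefs_[0] + net.intercepts_[0])

    # Second stage: crisp rules over the learned features, fitted to the
    # network's predictions to keep the tandem's behaviour consistent.
    rules = DecisionTreeClassifier(max_depth=3, random_state=0)
    rules.fit(hidden, net.predict(X))
    print(export_text(rules, feature_names=[f"f{i}" for i in range(8)]))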

    Classification of Explainable Artificial Intelligence Methods through Their Output Formats

    Machine and deep learning have proven their utility for generating data-driven models with high accuracy and precision. However, their non-linear, complex structures are often difficult to interpret. Consequently, many scholars have developed a plethora of methods to explain their functioning and the logic of their inferences. This systematic review aimed to organise these methods into a hierarchical classification system that builds upon and extends existing taxonomies by adding a significant dimension: the output format. The reviewed scientific papers were retrieved by conducting an initial search on Google Scholar with the keywords "explainable artificial intelligence", "explainable machine learning", and "interpretable machine learning". A subsequent iterative search was carried out by checking the bibliographies of these articles. The addition of the explanation-format dimension makes the proposed classification system a practical tool for scholars, supporting them in selecting the most suitable type of explanation format for the problem at hand. Given the wide variety of challenges faced by researchers, the existing XAI methods provide several solutions to meet requirements that differ considerably between the users, problems and application fields of artificial intelligence (AI). The task of identifying the most appropriate explanation can be daunting, hence the need for a classification system that helps with the selection of methods. This work concludes by critically identifying the limitations of the explanation formats and by providing recommendations and possible future research directions on how to build a more generally applicable XAI method. Future work should be flexible enough to meet the many requirements posed by the widespread use of AI in several fields, and by new regulations.
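
    To make selection by output format concrete, here is a toy lookup in the spirit of the proposed classification system. The format labels and example methods are illustrative placeholders, not the paper's full taxonomy.

    # Hypothetical lookup table: XAI methods keyed by explanation format.
    XAI_BY_OUTPUT_FORMAT = {
        "numeric": ["feature attributions (e.g. SHAP values)"],
        "rules":   ["rule extraction from neural networks",
                    "decision-tree surrogates"],
        "textual": ["natural-language rationales"],
        "visual":  ["saliency maps", "partial dependence plots"],
        "mixed":   ["combined visual and textual explanations"],
    }

    def suggest_methods(preferred_format: str) -> list[str]:
        """Return candidate explanation methods for a desired output format."""
        return XAI_BY_OUTPUT_FORMAT.get(preferred_format, [])

    print(suggest_methods("rules"))  # e.g. a user who needs crisp rules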

    Interpretation of trained neural networks by rule extraction
