3 research outputs found

    Development of an Explainable Artificial Intelligence Prototype for Interpreting Predictive Models

    Artificial Intelligence (AI) now depends on black-box machine learning (ML) models that lack algorithmic transparency. Some governments are responding through legislation such as the “Right of Explanation” rule in the EU and the “Algorithmic Accountability Act” proposed in the USA in 2019. The attempt to open up the black box and introduce some level of interpretation has given rise to what is today known as Explainable Artificial Intelligence (XAI). The objective of this paper is to provide a design and implementation of an Explainable Artificial Intelligence Prototype (ExplainEx) that interprets predictive models by explaining their confusion matrix, component classes and classification accuracy. This study is limited to four ML algorithms: J48, Random Tree, RepTree and FURIA. At the core of the software is an engine automating a seamless interaction between the ExpliClas Web API and the trained datasets to provide natural-language explanations. The prototype is both a stand-alone and a client-server system capable of providing global explanations for any model built on any of the four ML algorithms. It supports multiple concurrent users in a client-server environment and can apply all four algorithms concurrently to a single dataset, returning both a precision score and an explanation. It is a ready tool for researchers who have datasets and classifiers prepared for explanation. This work bridges the gap between prediction and explanation, thereby allowing researchers to concentrate on data analysis and building state-of-the-art predictive models.
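
    The explanation targets named above (confusion matrix, component classes, classification accuracy) come from a standard model-evaluation step. Below is a minimal sketch of that step using the Weka library, which provides J48, Random Tree and RepTree (FURIA is available as a separate Weka package): it trains a J48 tree, cross-validates it, and prints the statistics ExplainEx would then translate into natural language. The dataset file name and evaluation settings are illustrative assumptions, not details taken from the paper.

    import java.util.Random;

    import weka.classifiers.Evaluation;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class ModelEvaluationSketch {
        public static void main(String[] args) throws Exception {
            // Load a training set (hypothetical file); the last attribute is the class.
            Instances data = new DataSource("dataset.arff").getDataSet();
            data.setClassIndex(data.numAttributes() - 1);

            // J48 is Weka's C4.5 implementation, one of the prototype's four algorithms.
            J48 tree = new J48();
            tree.buildClassifier(data);

            // 10-fold cross-validation yields the statistics the prototype explains.
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(tree, data, 10, new Random(1));

            System.out.println(eval.toMatrixString("Confusion matrix:"));
            System.out.printf("Classification accuracy: %.2f%%%n", eval.pctCorrect());
            for (int c = 0; c < data.numClasses(); c++) {
                System.out.printf("Precision for class %s: %.3f%n",
                        data.classAttribute().value(c), eval.precision(c));
            }
        }
    }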

    Quadratic exponential random early detection: a new enhanced random early detection-oriented congestion control algorithm for routers

    Network congestion remains a problem on the internet. The random early detection (RED) algorithm, the most notable and widely implemented congestion-control algorithm in routers, suffers from queue instability and large delay arising from its single, ineffectual linear packet-dropping function. This research article presents a refinement of RED, named the quadratic exponential random early detection (QERED) algorithm, which exploits the advantages of two drop functions, quadratic and exponential, to enhance the performance of RED. ns-3 simulation studies under various traffic-load conditions, benchmarking QERED against two improved variants of RED, affirmed that QERED offers better performance in terms of average queue size and delay across various network scenarios. Replacing or upgrading the RED implementation in routers with QERED will require minimal effort, since nothing besides the packet-dropping probability profile has to be adjusted.
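
    As a rough illustration of the idea, the sketch below contrasts RED's single linear drop function with a piecewise quadratic-then-exponential profile in the spirit of QERED: a gentle quadratic rise under light congestion and a steeper exponential rise as the average queue approaches the upper threshold. The exact functional forms, the switch point and all parameter values here are assumptions made for illustration; the paper defines the actual QERED profile.

    public class QeredDropProfileSketch {
        // Classic RED parameters (values assumed for illustration).
        static final double MIN_TH = 5.0;   // lower average-queue threshold (packets)
        static final double MAX_TH = 15.0;  // upper average-queue threshold (packets)
        static final double MAX_P  = 0.1;   // drop probability reached at MAX_TH

        // Classic RED: a single linear ramp between the two thresholds.
        static double redDrop(double avgQueue) {
            if (avgQueue < MIN_TH) return 0.0;
            if (avgQueue >= MAX_TH) return 1.0;
            return MAX_P * (avgQueue - MIN_TH) / (MAX_TH - MIN_TH);
        }

        // Assumed QERED-style profile: quadratic in the lower half of the
        // threshold region, exponential in the upper half, chosen so the two
        // branches meet at the midpoint and reach MAX_P at MAX_TH.
        static double qeredDrop(double avgQueue) {
            if (avgQueue < MIN_TH) return 0.0;
            if (avgQueue >= MAX_TH) return 1.0;
            double x = (avgQueue - MIN_TH) / (MAX_TH - MIN_TH); // normalized [0, 1)
            if (x < 0.5) {
                return MAX_P * 2.0 * x * x;  // quadratic region
            }
            double base = MAX_P * 0.5;       // value where the two branches meet
            return base * Math.exp((x - 0.5) / 0.5 * Math.log(MAX_P / base));
        }

        public static void main(String[] args) {
            for (double q = 0.0; q <= 16.0; q += 2.0) {
                System.out.printf("avg=%5.1f  RED=%.4f  QERED=%.4f%n",
                        q, redDrop(q), qeredDrop(q));
            }
        }
    }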

    ExplainEx: An Explainable Artificial Intelligence Framework for Interpreting Predictive Models

    Artificial Intelligence (AI) systems are increasingly dependent on machine learning models which lack interpretability and algorithmic transparency, and hence may not be trusted by their users. The fear of failure in these systems is driving many governments to demand more explanation and accountability. Take, for example, the “Right of Explanation” rule proposed in the European Union in 2019, which gives citizens the right to demand an explanation of AI-based predictions. Explainable Artificial Intelligence (XAI) is an attempt to open up the “black box” and create more explainable systems: predictive models whose results are easily understandable to humans. This paper describes an explanation model called ExplainEx which automatically generates natural-language explanations for predictive models by consuming the REST API provided by the ExpliClas open-source web service. The classification component consists of four main decision-tree algorithms: J48, Random Tree, RepTree and FURIA. The user interface was built on the Microsoft .NET Framework. Behind it is a software engine automating a seamless interaction between the ExpliClas API and the trained datasets to provide natural-language explanations to users. Unlike other studies, our proposed model is both a stand-alone and a client-server system capable of providing global explanations for any decision tree classifier. It supports multiple concurrent users in a client-server environment and can apply all four algorithms concurrently to a single dataset, returning both a precision score and an explanation. It is a ready tool for researchers who have datasets and classifiers prepared for explanation. This work bridges the gap between prediction and explanation, thereby allowing researchers to concentrate on data analysis and building state-of-the-art predictive models.
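
    The following is a hedged sketch of the kind of REST interaction the ExplainEx engine automates against ExpliClas. The endpoint URL, payload fields and response handling are placeholders invented for illustration (the real contract is defined by the ExpliClas API), and Java stands in here for the prototype's actual .NET implementation.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ExplanationClientSketch {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();

            // Hypothetical request: name the dataset and classifier and ask for
            // a global natural-language explanation.
            String payload =
                    "{\"dataset\": \"iris\", \"classifier\": \"J48\", \"scope\": \"global\"}";

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://example.org/expliclas/api/explain")) // placeholder URL
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(payload))
                    .build();

            // The engine would parse the JSON body and surface the explanation
            // alongside the model's precision score; in the concurrent mode, one
            // such request per algorithm could be dispatched against one dataset.
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("HTTP " + response.statusCode());
            System.out.println(response.body());
        }
    }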
