3 research outputs found
Development of an Explainable Artificial Intelligence Prototype for Interpreting Predictive Models
Artificial Intelligence (AI) now depends on black-box machine learning (ML) models which lack algorithmic transparency. Some governments are responding to this through legislation, such as the “Right of Explanation” rule in the EU and the “Algorithmic Accountability Act” proposed in the USA in 2019. The attempt to open up the black box and introduce some level of interpretation has given rise to what is today known as Explainable Artificial Intelligence (XAI). The objective of this paper is to present the design and implementation of an Explainable Artificial Intelligence Prototype (ExplainEx) that interprets predictive models by explaining their confusion matrix, component classes and classification accuracy. This study is limited to four ML algorithms: J48, Random Tree, RepTree and FURIA. At the core of the software is an engine that automates seamless interaction between the ExpliClas Web API and the trained datasets to provide natural language explanations. The prototype is both a stand-alone and a client-server based system capable of providing global explanations for any model built on any of the four ML algorithms. It supports multiple concurrent users in a client-server environment and can apply all four algorithms concurrently to a single dataset, returning both a precision score and an explanation. It is a ready tool for researchers who have datasets and classifiers prepared for explanation. This work bridges the gap between prediction and explanation, thereby allowing researchers to concentrate on data analysis and building state-of-the-art predictive models.
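To make the interaction concrete, the sketch below shows the kind of client call such an engine automates: posting a training dataset and a chosen classifier to an explanation web service and reading back a precision score and a natural language explanation. This is a minimal sketch; the endpoint URL, payload fields and response keys are hypothetical placeholders for illustration, not the documented ExpliClas interface.

```python
# Minimal sketch of a client for a natural-language explanation web service.
# The endpoint URL, payload fields and response keys are hypothetical; consult
# the ExpliClas documentation for the real REST interface.
import requests

EXPLAIN_URL = "https://example.org/expliclas/api/explain"  # hypothetical endpoint

def explain_model(dataset_path: str, algorithm: str) -> str:
    """Upload a training dataset, name a decision tree algorithm
    ("J48", "RandomTree", "RepTree" or "FURIA"), and return the service's
    natural language explanation of the fitted model."""
    with open(dataset_path, "rb") as f:
        response = requests.post(
            EXPLAIN_URL,
            files={"dataset": f},           # e.g. an ARFF training file
            data={"algorithm": algorithm},  # which of the four classifiers to fit
            timeout=60,
        )
    response.raise_for_status()
    body = response.json()
    # Hypothetical response fields: a precision score plus the explanation text.
    return f"precision={body['precision']}: {body['explanation']}"
```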
ExplainEx: An Explainable Artificial Intelligence Framework for Interpreting Predictive Models
Artificial Intelligence (AI) systems are increasingly dependent on machine learning models which lack interpretability and algorithmic transparency, and hence may not be trusted by their users. The fear of failure in these systems is driving many governments to demand more explanation and accountability. Take, for example, the “Right of Explanation” rule proposed in the European Union in 2019, which gives citizens the right to demand an explanation of AI-based predictions. Explainable Artificial Intelligence (XAI) is an attempt to open up the “black box” and build more explainable systems whose predictive models produce results that are easily understandable to humans. This paper describes an explanation model called ExplainEx which automatically generates natural language explanations for predictive models by consuming the REST API provided by the ExpliClas open-source web service. The classification model covers four main decision tree algorithms: J48, Random Tree, RepTree and FURIA. The user interface was built on the Microsoft .NET Framework. In the background, a software engine automates seamless interaction between the ExpliClas API and the trained datasets to provide natural language explanations to users. Unlike other studies, our proposed model is both a stand-alone and a client-server based system capable of providing global explanations for any decision tree classifier. It supports multiple concurrent users in a client-server environment and can apply all four algorithms concurrently to a single dataset, returning both a precision score and an explanation. It is a ready tool for researchers who have datasets and classifiers prepared for explanation. This work bridges the gap between prediction and explanation, thereby allowing researchers to concentrate on data analysis and building state-of-the-art predictive models.
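Building on the hypothetical explain_model client sketched under the first abstract, the concurrency claim above can be pictured as a simple fan-out: one request per algorithm, issued in parallel against the same dataset. This is an illustrative sketch, not the ExplainEx implementation (which the abstract says runs on the .NET Framework); the explain_client module name is assumed.

```python
# Sketch of applying all four decision tree algorithms concurrently to one
# dataset. explain_model() is the hypothetical REST client sketched earlier.
from concurrent.futures import ThreadPoolExecutor

from explain_client import explain_model  # hypothetical module holding that sketch

ALGORITHMS = ["J48", "RandomTree", "RepTree", "FURIA"]

def explain_all(dataset_path: str) -> dict[str, str]:
    """Request an explanation for each algorithm in parallel and collect
    the results keyed by algorithm name."""
    with ThreadPoolExecutor(max_workers=len(ALGORITHMS)) as pool:
        futures = {a: pool.submit(explain_model, dataset_path, a) for a in ALGORITHMS}
        return {a: f.result() for a, f in futures.items()}

if __name__ == "__main__":
    for algo, text in explain_all("iris.arff").items():
        print(f"{algo}: {text}")
```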
The prospects of adopting e-learning in the Nigerian education system: a case study of Covenant University
The conventional method of education has limited adequate information access and acquisition, and this has further widened the educational knowledge gap. This research study examined the prospect of adopting e-learning in the Nigerian educational system. The Unified Theory of Acceptance and Use of Technology (UTAUT) model was utilized in order to properly investigate the adoption of e-learning for an improved educational system in Nigeria. A descriptive survey design was employed, and a quantitative research method was used for data gathering and analysis. A total of 574 responses was obtained from the research study respondents. The analysis showed that the average variance extracted (AVE) for actual use, behavioural intention, experience, effort expectancy, facilitating condition, performance expectancy and social influence was 0.738, 0.790, 0.670, 0.804, 0.749, 0.861 and 0.514 respectively, and the discriminant validity values for actual use, behavioural intention, experience, effort expectancy, facilitating condition, performance expectancy and social influence were 0.859, 0.889, 0.897, 0.819, 0.865, 0.928 and 0.717 respectively. These results suggest that the research model's convergent and discriminant validity were acceptable. Furthermore, approximately 59.7% of the variance of behavioural intention (BI) to adopt e-learning was explained by performance expectancy (PE), effort expectancy (EE) and social influence (SI), where R² = 0.597. Likewise, about 77.5% of the variance of actual adoption (AC) of e-learning was explained by behavioural intention (BI) to adopt e-learning, where R² = 0.775. The results suggest that performance expectancy (PE), effort expectancy (EE) and social influence (SI) have a positive effect on the behavioural intention to adopt e-learning, and that behavioural intention would lead to the actual adoption of e-learning. Additionally, facilitating condition (FC) and experience (E) have a positive effect on the actual adoption of e-learning. The results of the research study suggest that the adoption of e-learning in the Nigerian educational system is influenced by ease of use, performance gain, public sway, adequate support, and proficiency.
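The validity figures above follow standard PLS-SEM conventions: convergent validity is commonly accepted when a construct's AVE exceeds 0.5, and under the Fornell-Larcker criterion the discriminant validity value reported for a construct is the square root of its AVE (for instance, sqrt(0.738) ≈ 0.859 and sqrt(0.790) ≈ 0.889, matching the values reported for actual use and behavioural intention). The sketch below reproduces that check from the AVE values in the abstract; it is an illustration of the standard criteria, not the authors' analysis code.

```python
# Sketch of the convergent/discriminant validity checks implied by the
# reported figures: AVE > 0.5 for convergent validity, and sqrt(AVE) as the
# Fornell-Larcker diagonal used for discriminant validity. AVE values are
# copied from the abstract above.
import math

AVE = {
    "actual use": 0.738,
    "behavioural intention": 0.790,
    "experience": 0.670,
    "effort expectancy": 0.804,
    "facilitating condition": 0.749,
    "performance expectancy": 0.861,
    "social influence": 0.514,
}

for construct, ave in AVE.items():
    print(f"{construct}: AVE={ave:.3f} "
          f"(convergent: {'ok' if ave > 0.5 else 'weak'}), "
          f"sqrt(AVE)={math.sqrt(ave):.3f}")
```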