Artificial intelligence (AI) reasoning and explainable AI (XAI) tasks have
gained popularity recently, enabling users to explain the predictions or
decision processes of AI models. This paper introduces Forest Monkey (FM), a
toolkit designed to reason about the outputs of any AI-based defect detection
and/or classification model through data explainability. Implemented as a Python package,
FM takes input in the form of dataset folder paths (including original images,
ground truth labels, and predicted labels) and provides a set of charts and a
text file to illustrate the reasoning results and suggest possible
improvements. The FM pipeline comprises three stages: feature extraction
from predictions to reasoning targets, feature extraction from images to defect
characteristics, and a decision tree-based AI-Reasoner. Additionally, this
paper investigates the time performance of the FM toolkit when applied to four
AI models with different datasets. Lastly, a tutorial is provided to guide
users in performing reasoning tasks using the FM toolkit.

Comment: 6 pages, 5 figures; accepted at the 2023 IEEE Symposium Series on
Computational Intelligence (SSCI).
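The decision tree-based reasoning step described above can be sketched generically. The snippet below is a minimal illustration, not FM's actual implementation: it uses scikit-learn to fit a shallow decision tree on hypothetical defect-characteristic features (the feature names and synthetic data are assumptions for illustration) against a reasoning target marking whether the model's prediction matched the ground truth, then prints the learned rules in human-readable form.

```python
# Hypothetical sketch of decision tree-based reasoning (not FM's actual API).
# Features describe assumed defect characteristics; the target marks whether
# the detection model's prediction agreed with the ground truth label.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# Illustrative defect characteristics: size, contrast, edge density.
X = rng.random((200, 3))
# Synthetic reasoning target: 1 = prediction correct, 0 = misclassified.
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)

# A shallow tree keeps the extracted rules interpretable.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
rules = export_text(tree, feature_names=["size", "contrast", "edge_density"])
print(rules)  # human-readable rules hinting at where the model fails
```

Inspecting the printed rule paths that lead to the misclassified class is what turns such a tree into a reasoning tool: each path describes a region of defect-characteristic space where the detector tends to go wrong.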