Learning Dynamics, Pattern Recognition Capability and Interpretability of the Tsetlin Machine

Abstract

The inability to trace an AI's reasoning process and to understand why it makes each decision is known as the black box problem. It remains one of the major barriers to the trusted, widespread use of machine learning in many application domains. This paper explores the pattern recognition performance and learning dynamics of the Tsetlin Machine, a new explainable, logic-based machine-learning approach. The Tsetlin Machine uses a collection of finite-state automata with a unique logic-based learning mechanism and offers a promising alternative to artificial neural networks, with several advantages: interpretability, low complexity, suitability for hardware implementation and high performance. This work investigates the Tsetlin Machine's mechanism for constructing conjunctive clauses from data, and their interpretation, for pattern recognition on several datasets. We demonstrate that during training the logical clauses learn persistent sub-patterns within a class. Each clause creates a class template by clustering a number of similar class samples and combining them through literal-wise logical conjunction (i.e., AND-ing). The number of class samples each clause combines depends on the Tsetlin Machine's hyperparameters: the more class samples are combined, the more general the clause becomes. The paper aims to uncover how the Tsetlin Machine's hyperparameters influence the balance between clause generalization and specialization, and how this balance affects pattern recognition accuracy. It also studies the evolution of the machine's internal state, its convergence and training completion.
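The clause mechanism described in the abstract can be illustrated with a minimal sketch. A clause is a conjunction (AND) of literals, where a literal is either an input feature or its negation; clauses of positive polarity vote for a class and clauses of negative polarity vote against it. The clause contents and the example input below are hypothetical, chosen only to show the evaluation logic, not learned values from the paper.

```python
def evaluate_clause(sample, include_pos, include_neg):
    """A clause is the AND of its included literals: features whose index
    is in include_pos must be 1, and features in include_neg must be 0
    (i.e. their negations must hold)."""
    return all(sample[i] == 1 for i in include_pos) and \
           all(sample[i] == 0 for i in include_neg)

def class_vote(sample, positive_clauses, negative_clauses):
    """Positive-polarity clauses add a vote, negative-polarity clauses
    subtract one; the sign of the sum gives the class decision."""
    votes = sum(evaluate_clause(sample, p, n) for p, n in positive_clauses)
    votes -= sum(evaluate_clause(sample, p, n) for p, n in negative_clauses)
    return votes

# Hypothetical clause recognizing the sub-pattern "x0 AND NOT x2"
sample = [1, 0, 0, 1]
print(evaluate_clause(sample, include_pos={0}, include_neg={2}))  # True
print(class_vote(sample, [({0}, {2})], []))                       # 1
```

In a trained machine, which literals each clause includes is decided by the finite-state automata, and the hyperparameters mentioned in the abstract control how many class samples a clause effectively AND-s together, and hence how general or specific it is.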

    This paper was published in Leeds Beckett Repository.

    Licence: http://creativecommons.org/licenses/by/4.0