Neural Networks (NNs) can learn highly accurate solutions to complex problems, but it is rarely clear how. The Layerwise Relevance Propagation (LRP) framework explains how a given NN produces a prediction for given data by assigning a relevance score to each data feature in each data example. This is achieved by propagating each NN layer's output back onto each data feature in its input. Other researchers have shown which hyperparameters and architectural choices lead to these explanations being analytically correct; however, it is not always possible to apply these in practice.
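To make the propagation mechanism concrete, below is a minimal sketch of one common LRP rule (the epsilon rule) applied layer by layer to a toy dense network. The function name, network sizes, and NumPy implementation are illustrative assumptions for this sketch, not the implementation used in the thesis.

```python
import numpy as np

def lrp_epsilon(a, W, b, R_out, eps=1e-6):
    """Redistribute a layer's output relevance R_out onto its inputs
    using the LRP epsilon rule for a dense layer z = W @ a + b."""
    z = W @ a + b                              # forward pre-activations
    z = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabiliser: avoids division by zero
    s = R_out / z                              # relevance per unit of activation
    c = W.T @ s                                # redistribute through the weights
    return a * c                               # relevance of each input feature

# Toy two-layer network; biases are zero so relevance is (approximately) conserved.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

x = rng.normal(size=4)
h = np.maximum(0.0, W1 @ x + b1)   # hidden ReLU activations
y = W2 @ h + b2                    # the prediction to be explained

R_h = lrp_epsilon(h, W2, b2, y)    # relevance of each hidden unit
R_x = lrp_epsilon(x, W1, b1, R_h)  # relevance of each input feature
print(R_x, R_x.sum())              # R_x.sum() is close to y (conservation)
```

Because the biases are zero and epsilon is small, the input relevances sum to approximately the output score, which is the conservation property that underpins LRP.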
The first chapter discusses the problems and solutions explored in this research. The second chapter presents background literature on AI, NNs, model transparency, explainability, and LRP. The third chapter compares explanations extracted by LRP to those extracted from white-box models; these were most comparable when the NN architecture was large and when the data it was fitted on contained many examples, establishing a link between explainability and the predictive accuracy of a NN. The fourth chapter found that explanations generated by LRP can be made correct through hyperparameter optimisation, and that the newly proposed Local LRP (LLRP) framework exceeded the explainability of trained LRP on greyscale and colour images by learning the hyperparameters at each NN layer. The fifth chapter discovered and analysed why the actual and expected negative relevance representations differ, and therefore maximised the sensitivity of positive relevance individually instead of attempting to maximise positive and negative relevance jointly. A final reflection in the sixth chapter shows that this thesis has contributed to NN explainability by improving the relevance produced by LRP. Future research opportunities were highlighted throughout this work.
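As a rough illustration of what per-layer hyperparameters mean in this context, the sketch below applies a distinct epsilon stabiliser at each layer of a toy network. The layer widths and epsilon values are hand-picked assumptions for illustration only; LLRP, as summarised above, would learn such per-layer hyperparameters rather than fixing them.

```python
import numpy as np

def lrp_epsilon(a, W, b, R_out, eps):
    """LRP epsilon rule for one dense layer, as in the earlier sketch."""
    z = W @ a + b
    z = z + eps * np.where(z >= 0, 1.0, -1.0)
    return a * (W.T @ (R_out / z))

rng = np.random.default_rng(1)
sizes = [4, 8, 8, 1]                   # illustrative layer widths
params = [(rng.normal(size=(m, n)), np.zeros(m))
          for n, m in zip(sizes, sizes[1:])]

# Forward pass, storing the input activations of every layer.
a, acts = rng.normal(size=4), []
for i, (W, b) in enumerate(params):
    acts.append(a)
    a = W @ a + b
    if i < len(params) - 1:            # ReLU on hidden layers only
        a = np.maximum(0.0, a)

# Backward relevance pass with one epsilon per layer (hand-picked here;
# the LLRP idea is to learn these per-layer values instead).
layer_eps = [0.25, 0.1, 0.01]
R = a                                  # start from the network's output
for (W, b), a_in, eps in zip(reversed(params), reversed(acts),
                             reversed(layer_eps)):
    R = lrp_epsilon(a_in, W, b, R, eps=eps)
print(R)                               # per-feature relevance of the input
```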