
    Extracting Rules from Neural Networks to Predict High Return Stocks

    Neural networks have been shown to be a powerful classification tool in financial applications. However, neural networks are essentially black boxes that do not explain their classification procedure. The training results of a neural network, a set of connection weights expressed in numeric terms, shed little light on the importance of the input attributes or their relationships for a classification problem. To address this issue, researchers have developed algorithms to extract classification rules from trained neural networks. The purpose of this paper is to validate the predictive power of rules extracted by one such algorithm, GLARE (Generalized Analytic Rule Extraction). The input to the GLARE algorithm is the set of connection weights from a trained neural network, and the output is a set of classification rules that can be used both to predict new cases and to explain the classification procedure. We apply conventional backpropagation and GLARE to a data set from the CompuStat database. The input to the prediction problem is a vector of financial statement variables, and the output is the rate of return on common shareholders' equity. To test the effect of the number of training epochs on rule extraction, we train the networks for 5 and 1000 epochs before extracting rules. To test the statistical significance of performance differences between conventional backpropagation and the extracted rules, we perform a paired t-test for each pair of average returns. The experimental results support the superiority of the extracted rules over conventional backpropagation in selecting high-return stocks.
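    The paired t-test used above to compare average returns between the two methods can be sketched in plain Python. This is a generic illustration of the statistic, not the paper's code, and the sample returns below are hypothetical, not taken from the CompuStat experiments:

    ```python
    import math

    def paired_t_statistic(a, b):
        """Paired t statistic for two matched samples of equal length."""
        assert len(a) == len(b) and len(a) > 1
        diffs = [x - y for x, y in zip(a, b)]
        n = len(diffs)
        mean = sum(diffs) / n
        # Sample variance of the differences (n - 1 denominator).
        var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
        se = math.sqrt(var / n)  # standard error of the mean difference
        # Compare the result against a t distribution with n - 1 degrees of freedom.
        return mean / se

    # Hypothetical average returns for illustration only:
    rule_returns = [0.12, 0.15, 0.09, 0.14, 0.11]  # extracted rules
    bp_returns   = [0.10, 0.11, 0.08, 0.12, 0.10]  # conventional backpropagation
    t = paired_t_statistic(rule_returns, bp_returns)
    ```

    A positive t statistic large enough to exceed the critical value for n - 1 degrees of freedom would indicate that the rule-based portfolio's average return is significantly higher than the backpropagation portfolio's.
    
    
    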

    Rule-Extraction Methods From Feedforward Neural Networks: A Systematic Literature Review

    Motivated by the interpretability question in ML models as a crucial element for the successful deployment of AI systems, this paper focuses on rule extraction as a means of neural network interpretability. Through a systematic literature review, different approaches for extracting rules from feedforward neural networks, an important building block in deep learning models, are identified and explored. The findings reveal a range of methods developed over more than two decades, mostly suited to shallow neural networks, with recent developments addressing the challenges of deep learning models. Rules offer a transparent and intuitive means of explaining neural networks, making this study a comprehensive introduction for researchers interested in the field. While the study specifically addresses feedforward networks with supervised learning and crisp rules, future work can extend to other network types, machine learning methods, and fuzzy rule extraction.