Analyzing Divisia Rules Extracted from a Feedforward Neural Network
This paper introduces a mechanism for generating a series of rules that characterize the money-price relationship, defined as the relationship between the rate of growth of the money supply and inflation. Divisia component data is used to train a selection of candidate feedforward neural networks. The selected network is mined for rules, expressed in human-readable and machine-executable form. The accuracies of the extracted rules and of the network are compared, and expert commentary is made on the readability and reliability of the extracted rule set. The ultimate goal of this research is to produce rules that meaningfully and accurately describe inflation in terms of the Divisia component dataset.
Extraction of similarity based fuzzy rules from artificial neural networks
A method is presented for extracting a fuzzy rule-based system from a trained artificial neural network for classification. The fuzzy system obtained is equivalent to the corresponding neural network. The antecedents of the fuzzy rules use the similarity between the input datum and the weight vectors, which makes the rules highly understandable. Thus, the fuzzy system, together with a simple analysis of the weight vectors, is enough to discern the hidden knowledge learnt by the neural network. Several classification problems are presented to illustrate this method of knowledge discovery using artificial neural networks.
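The abstract's core idea — fuzzy rule antecedents built from the similarity between an input and a hidden unit's weight vector — can be sketched as follows. This is a minimal illustration, not the paper's actual method: the Gaussian similarity measure, the one-rule-per-weight-vector scheme, and all names are assumptions.

```python
import numpy as np

def gaussian_similarity(x, w, sigma=1.0):
    """Fuzzy membership degree: similarity between input x and weight vector w.
    (Gaussian kernel chosen for illustration; the paper's measure may differ.)"""
    return np.exp(-np.sum((x - w) ** 2) / (2 * sigma ** 2))

def fire_rules(x, weight_vectors, class_labels, sigma=1.0):
    """Evaluate one fuzzy rule per weight vector:
    IF x is similar to w_j THEN class is label_j, to degree sim(x, w_j).
    Returns the winning rule's label and its firing degree."""
    degrees = [gaussian_similarity(x, w, sigma) for w in weight_vectors]
    best = int(np.argmax(degrees))
    return class_labels[best], degrees[best]

# Toy example with two prototype weight vectors acting as rule antecedents.
W = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]
labels = ["class A", "class B"]
print(fire_rules(np.array([0.9, 1.1]), W, labels))
```

Because each rule refers directly to a weight vector, inspecting the weight vectors themselves is what makes the resulting rules interpretable.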
Extracting Symbolic Representations Learned by Neural Networks
Understanding what neural networks learn from training data is of great interest in data mining, data analysis, and critical applications, and in evaluating neural network models. Unfortunately, the product of neural network training is typically opaque matrices of floating point numbers that are not obviously understandable. This difficulty has inspired substantial past research on how to extract symbolic, human-readable representations from a trained neural network, but the results obtained so far are very limited (e.g., large rule sets produced). This problem occurs in part due to the distributed hidden layer representation created during learning. Most past symbolic knowledge extraction algorithms have focused on progressively more sophisticated ways to cluster this distributed representation. In contrast, in this dissertation, I take a different approach. I develop ways to alter the error backpropagation neural network training process itself so that it creates a representation of what has been learned in the hidden layer activation space that is more amenable to existing symbolic representation extraction methods.
In this context, this dissertation research makes four main contributions. First, modifications to the backpropagation learning procedure are derived mathematically, and it is shown that these modifications can be accomplished as local computations. Second, the effectiveness of the modified learning procedure for feedforward networks is established by showing that, on a set of benchmark tasks, it produces rule sets that are substantially simpler than those produced by standard backpropagation learning. Third, this approach is extended to simple recurrent networks, and experimental evaluation shows a remarkable reduction in the sizes of the finite state machines extracted from the recurrent networks trained using this approach. Finally, this method is further modified to work on echo state networks, and computational experiments again show significant improvement in finite state machine extraction from these networks. These results clearly establish that principled modification of error backpropagation so that it constructs a better separated hidden layer representation is an effective way to improve contemporary symbolic extraction methods.
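The dissertation derives its own mathematically grounded modifications; as a rough illustration of the kind of local computation involved, one plausible device is an auxiliary penalty that pushes sigmoid hidden activations toward 0 or 1, yielding the better-separated hidden representation that symbolic extraction benefits from. The penalty form, its weight, and the function names below are all assumptions for illustration, not the dissertation's actual derivation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hidden_penalty_grad(h, alpha=0.1):
    """Gradient (w.r.t. h) of a penalty alpha * h * (1 - h), which is minimized
    when a sigmoid activation sits at 0 or 1. Subtracting this gradient during
    descent drives activations away from the ambiguous middle region, so the
    hidden-layer representation forms tighter, more clusterable groups."""
    return alpha * (1.0 - 2.0 * h)

# The term is purely local: during backpropagation it is simply added to the
# usual error signal at each hidden unit before propagating further back.
h = sigmoid(np.array([-2.0, 0.0, 2.0]))
print(hidden_penalty_grad(h))
```

The appeal of a local term like this is that it leaves the backpropagation machinery untouched; each hidden unit adjusts its own activation without any global coordination.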
Graph Neural Networks Meet Neural-Symbolic Computing: A Survey and Perspective
Neural-symbolic computing has now become the subject of interest of both
academic and industry research laboratories. Graph Neural Networks (GNN) have
been widely used in relational and symbolic domains, with widespread
application of GNNs in combinatorial optimization, constraint satisfaction,
relational reasoning and other scientific domains. The need for improved
explainability, interpretability and trust of AI systems in general demands
principled methodologies, as suggested by neural-symbolic computing. In this
paper, we review the state-of-the-art on the use of GNNs as a model of
neural-symbolic computing. This includes the application of GNNs in several
domains as well as its relationship to current developments in neural-symbolic
computing.
Comment: Updated version, draft of accepted IJCAI 2020 Survey Paper.
Evaluating Go Game Records for Prediction of Player Attributes
We propose a way of extracting and aggregating per-move evaluations from sets of Go game records. The evaluations capture different aspects of the games, such as played patterns or statistics of sente/gote sequences. Using machine learning algorithms, the evaluations can be utilized to predict different relevant target variables. We apply this methodology to predict the strength and playing style of the player (e.g. territoriality or aggressivity) with good accuracy. We propose a number of possible applications, including aiding in Go study, seeding real-world ranks of internet players, and tuning of Go-playing programs.
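The pipeline the abstract describes — per-move evaluations aggregated into per-game features, then fed to a learning algorithm to predict player attributes — can be sketched on synthetic data. The feature set, the linear model, and the synthetic data generator below are all illustrative assumptions, not the paper's actual evaluations or learners.

```python
import numpy as np

rng = np.random.default_rng(0)

def game_features(move_evals):
    """Aggregate one game's per-move evaluations into summary statistics
    (mean, std, fraction positive, plus a bias term) -- a hypothetical
    feature set standing in for the paper's richer evaluations."""
    e = np.asarray(move_evals, dtype=float)
    return np.array([e.mean(), e.std(), (e > 0).mean(), 1.0])

# Synthetic stand-in data: stronger players (higher rank) tend to receive
# higher per-move evaluations on average.
ranks = rng.uniform(1, 9, size=50)
X = np.array([game_features(rng.normal(loc=r / 10, scale=1.0, size=200))
              for r in ranks])

# Fit a least-squares linear model mapping game features to player rank.
coef, *_ = np.linalg.lstsq(X, ranks, rcond=None)
pred = X @ coef
print(np.corrcoef(pred, ranks)[0, 1])  # correlation of predicted vs. true rank
```

Any regressor could replace the least-squares step; the essential idea is the two-stage structure of per-move evaluation followed by per-game aggregation.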
Generative Adversarial Network for Market Hourly Discrimination
In this paper, we consider two types of instruments traded on markets: stocks and cryptocurrencies. Stocks are traded in a market subject to opening hours, while cryptocurrencies are traded in a 24-hour market. Using a particular type of generative neural network, we aim to demonstrate that the instruments of the market without opening hours carry a different amount of information and are therefore more suitable for forecasting. In particular, using real data, we demonstrate that there are also stocks subject to the same rules as cryptocurrencies.
- …