
    Hybrid rule-extraction from support vector machines

    Rule-extraction from artificial neural networks (ANNs) and support vector machines (SVMs) provides explanations for the decisions made by these systems. This explanation capability is particularly important in applications such as medical diagnosis. Over the last decade, a multitude of algorithms for rule-extraction from ANNs have been developed; however, rule-extraction from SVMs is not yet widely available. In this paper, a hybrid approach for rule-extraction from SVMs is outlined. The approach has two basic components: (1) data reduction using a logistic regression model and (2) learning-based rule-extraction. The quality of the extracted rules is then evaluated in terms of fidelity, accuracy, consistency and comprehensibility. The rules are also verified against the available knowledge from the problem domain (diabetes) to assure correctness and validity.
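
    A minimal sketch of the two-component idea described above, assuming scikit-learn, a decision tree as the learning-based rule extractor, and a stand-in dataset (the paper uses diabetes data); this is an illustrative reading, not the authors' code:

        # Step 1: data reduction with a logistic regression model;
        # Step 2: learning-based rule extraction from the SVM's own predictions.
        from sklearn.datasets import load_breast_cancer
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC
        from sklearn.tree import DecisionTreeClassifier, export_text

        X, y = load_breast_cancer(return_X_y=True)  # stand-in for the diabetes data
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        # Step 1: keep only instances the logistic model classifies confidently,
        # so the rule learner sees a smaller, cleaner training set.
        logit = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
        confident = logit.predict_proba(X_tr).max(axis=1) > 0.9
        X_red = X_tr[confident]

        # Step 2: relabel the reduced data with the SVM's predictions and fit an
        # interpretable tree that mimics (is "faithful" to) the SVM.
        svm = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
        tree = DecisionTreeClassifier(max_depth=3).fit(X_red, svm.predict(X_red))

        print(export_text(tree))  # human-readable rules
        fidelity = (tree.predict(X_te) == svm.predict(X_te)).mean()
        print("fidelity to SVM:", round(float(fidelity), 3))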

    Extracting Symbolic Representations Learned by Neural Networks

    Understanding what neural networks learn from training data is of great interest in data mining, data analysis, and critical applications, and in evaluating neural network models. Unfortunately, the product of neural network training is typically a set of opaque matrices of floating-point numbers that are not readily understandable. This difficulty has inspired substantial past research on how to extract symbolic, human-readable representations from a trained neural network, but the results obtained so far are very limited (e.g., the large rule sets produced). This problem occurs in part due to the distributed hidden layer representation created during learning. Most past symbolic knowledge extraction algorithms have focused on progressively more sophisticated ways to cluster this distributed representation. In contrast, in this dissertation, I take a different approach. I develop ways to alter the error backpropagation neural network training process itself so that it creates a representation of what has been learned in the hidden layer activation space that is more amenable to existing symbolic representation extraction methods. In this context, this dissertation research makes four main contributions. First, modifications to the backpropagation learning procedure are derived mathematically, and it is shown that these modifications can be accomplished as local computations. Second, the effectiveness of the modified learning procedure for feedforward networks is established by showing that, on a set of benchmark tasks, it produces rule sets that are substantially simpler than those produced by standard backpropagation learning. Third, this approach is extended to simple recurrent networks, and experimental evaluation shows a remarkable reduction in the sizes of the finite state machines extracted from the recurrent networks trained using this approach. Finally, this method is further modified to work on echo state networks, and computational experiments again show significant improvement in finite state machine extraction from these networks. These results clearly establish that principled modification of error backpropagation so that it constructs a better-separated hidden layer representation is an effective way to improve contemporary symbolic extraction methods.
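
    A simplified sketch (in PyTorch) of the general idea of changing training so that hidden activations become nearly binary, and therefore easier to cluster and read rules from. The penalty term and every name here are illustrative stand-ins, not the dissertation's local-update derivation:

        import torch
        import torch.nn as nn

        class Net(nn.Module):
            def __init__(self, n_in, n_hidden, n_out):
                super().__init__()
                self.hidden = nn.Linear(n_in, n_hidden)
                self.out = nn.Linear(n_hidden, n_out)

            def forward(self, x):
                h = torch.sigmoid(self.hidden(x))  # hidden activations in (0, 1)
                return self.out(h), h

        net = Net(8, 6, 2)
        opt = torch.optim.SGD(net.parameters(), lr=0.1)
        x = torch.randn(32, 8)               # toy data for illustration
        y = torch.randint(0, 2, (32,))

        for _ in range(100):
            logits, h = net(x)
            task_loss = nn.functional.cross_entropy(logits, y)
            # h*(1-h) peaks at 0.5 and vanishes at 0 or 1, so penalising it
            # pushes hidden activations toward the extremes (a more "symbolic" code).
            binarisation_penalty = (h * (1 - h)).mean()
            loss = task_loss + 0.5 * binarisation_penalty
            opt.zero_grad()
            loss.backward()
            opt.step()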

    Learning-Based Rule-Extraction From Support Vector Machines: Performance On Benchmark Data Sets

    Over the last decade, techniques for rule-extraction from artificial neural networks (ANNs) have been developed to explain how classification and regression are realised by the ANN. This is not yet the case for support vector machines (SVMs), which likewise cannot explain the process by which a learning result was reached or why a decision is being made. Rule-extraction from SVMs is important, especially for applications such as medical diagnosis. In this paper, an approach for learning-based rule-extraction from support vector machines is outlined, including an evaluation of the quality of the extracted rules in terms of fidelity, accuracy, consistency and comprehensibility. In addition, the rules are verified by use of knowledge from the problem domains as well as other classification techniques to assure correctness and validity.
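
    A minimal sketch of the quality measures named above, assuming 'rules' is an interpretable surrogate (e.g. a fitted scikit-learn decision tree) and 'svm' is the trained SVM it was extracted from; the function and names are illustrative, not the paper's API:

        import numpy as np

        def evaluate_extraction(rules, svm, X_test, y_test):
            rule_pred = rules.predict(X_test)
            svm_pred = svm.predict(X_test)
            return {
                # fidelity: how often the rules agree with the SVM they explain
                "fidelity": float(np.mean(rule_pred == svm_pred)),
                # accuracy: how often the rules agree with the true labels
                "accuracy": float(np.mean(rule_pred == y_test)),
                # comprehensibility proxy: fewer leaves -> fewer, shorter rules
                "n_rules": int(rules.get_n_leaves()),
            }

    Consistency would additionally compare rule sets extracted across repeated runs or resampled training sets, which is omitted here for brevity.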

    Simplification of rules extracted from neural networks

    Artificial neural networks (ANNs) have been proven to be successful general machine learning techniques for, amongst others, pattern recognition and classification. Real-world problems in agriculture (soybean, tea), medicine (cancer, cardiology, mammograms) and finance (credit rating, stock market) have been successfully solved using ANNs.

    ANNs model biological neural systems. A biological neural system consists of neurons interconnected through neural synapses. These neurons serve as information processing units. Synapses carry information to the neurons, which then process or respond to the data by sending a signal to the next level of neurons. Information is strengthened or weakened according to the sign and magnitude of the weight associated with the connection. An ANN consists of cell-like entities called units (also called artificial neurons) and weighted connections between these units, referred to as links. An ANN can be viewed as a directed graph with weighted connections. A unit belongs to one of three groups: input, hidden or output. Input units receive the initial training patterns, which consist of input attributes and the associated target attributes, from the environment. Hidden units do not interact with the environment, whereas output units present the results to the environment. Hidden and output units compute an output a_i, which is a function f of the sum of the unit's input weights w_ij multiplied by the outputs x_j of the units j in the preceding layer, together with a bias term θ_i that acts as a threshold for the unit. The output a_i for unit i with n input units is calculated as a_i = f(Σ_{j=1..n} x_j·w_ij − θ_i). Training of the ANN is done by adapting the weight values for each unit via a gradient search. Given a set of input-target pairs, the ANN learns the functional relationship between the input and the target.

    A serious drawback of the neural network approach is the difficulty of determining why a particular conclusion was reached. This is due to the inherent 'black box' nature of the neural network approach. Neural networks rely on 'raw' training data to learn the relationships between the initial inputs and target outputs. Knowledge is encoded in a set of numeric weights and biases. Although this data-driven aspect of neural networks allows easy adjustment when the environment or events change, numeric weights are difficult to interpret, making the network difficult for humans to understand. Concepts represented by symbolic learning algorithms are intuitive and therefore easily understood by humans [Wnek 1994]. One approach to understanding the representations formed by neural networks is to extract such symbolic rules from the networks. Over the last few years, a number of rule extraction methods have been reported [Craven 1993, Fu 1994].

    There are some general assumptions that these algorithms adhere to. The first assumption that most rule extraction algorithms make is that non-input units are either maximally active (activation near 1) or inactive (activation near 0). This Boolean-valued activation is approximated by using the standard logistic activation function f(z) = 1/(1 + e^(−sz)) and setting s = 5.0. The use of these function parameters guarantees that non-input units always have non-negative activations in the range [0, 1]. The second underlying premise of rule extraction is that each hidden and output unit implements a symbolic rule. The concept associated with each unit is the consequent of the rule, and certain subsets of the input units represent the antecedent of the rule. Rule extraction algorithms search for those combinations of input values to a particular hidden or output unit that result in it having an optimal (near-one) activation. Here, rule extraction methods exploit a very basic principle of biological neural networks: if the sum of its weighted inputs exceeds a certain threshold, then the biological neuron fires [Fu 1994]. This condition is satisfied when the sum of the weighted inputs exceeds the bias, that is, when Σ_{j: x_j = 1} w_ij > θ_i.

    It has been shown that most concepts described by humans can usually be expressed as production rules in disjunctive normal form (DNF) notation. Rules expressed in this notation are therefore highly comprehensible and intuitive. In addition, the number of production rules may be reduced and their structure simplified by using propositional logic. A method that extracts production rules in DNF is presented [Viktor 1995]. The basic idea of the method is the use of equivalence classes: similarly weighted links are grouped into a cluster, the assumption being that individual weights do not have unique importance. Clustering considerably reduces the combinatorics of the method as opposed to previously reported approaches. Since the rules are in a logically manipulable form, significant simplifications in their structure can be obtained, yielding a highly reduced and comprehensible set of rules. Experimental results have shown that the accuracy of the extracted rules compares favourably with the CN2 [Clark 1989] and C4.5 [Quinlan 1993] symbolic rule extraction methods. The extracted rules are highly comprehensible and similar to those extracted by traditional symbolic methods.
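
    A small sketch of the unit computation and the equivalence-class idea described above; the clustering tolerance, example weights and helper names are illustrative assumptions, not the method's published form:

        import numpy as np

        def activation(x, w, theta, s=5.0):
            """a_i = f(sum_j x_j * w_ij - theta_i), with the steep logistic f(z) = 1/(1 + e^(-s*z))."""
            z = np.dot(x, w) - theta
            return 1.0 / (1.0 + np.exp(-s * z))

        def weight_clusters(w, tol=0.25):
            """Group similarly weighted links into equivalence classes (clusters of link indices)."""
            order = np.argsort(w).tolist()
            clusters, current = [], [order[0]]
            for idx in order[1:]:
                if abs(w[idx] - w[current[-1]]) <= tol:
                    current.append(idx)
                else:
                    clusters.append(current)
                    current = [idx]
            clusters.append(current)
            # links inside one cluster are treated as interchangeable in the rule
            # antecedent, which shrinks the combinatorial search space
            return clusters

        w = np.array([2.1, 2.0, -1.5, 0.1, 1.9])
        print(weight_clusters(w))  # [[2], [3], [4, 1, 0]]: the three weights near 2.0 form one class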

    Genetic programming and bacterial algorithm for neural networks and fuzzy systems design

    In the field of control systems it is common to use techniques based on model adaptation to carry out control of plants whose mathematical analysis may be intricate. Interest in biologically inspired learning algorithms for such control techniques, notably Artificial Neural Networks and Fuzzy Systems, is increasing. Along this line, this paper gives a perspective on the quality of results obtained by two different biologically inspired learning algorithms for the design of B-spline neural networks (BNNs) and fuzzy systems (FSs): Genetic Programming (GP) for BNN design and the Bacterial Evolutionary Algorithm (BEA) for fuzzy rule extraction. In addition, the possibility of incorporating a multi-objective approach into the GP algorithm is outlined, enabling the designer to obtain models more adequate for their intended use.
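
    A toy sketch of the bacterial mutation step at the core of the Bacterial Evolutionary Algorithm mentioned above; the encoding, segment length and fitness function are illustrative assumptions, not the paper's setup:

        import random

        def bacterial_mutation(chromosome, fitness, n_clones=4, segment_len=2):
            """Improve one chromosome segment by segment: clone it, randomise the
            segment in each clone, and keep the best variant before moving on."""
            best = list(chromosome)
            for start in range(0, len(best), segment_len):
                clones = [list(best) for _ in range(n_clones)]
                for c in clones:
                    for i in range(start, min(start + segment_len, len(c))):
                        c[i] = random.uniform(-1.0, 1.0)  # re-draw this segment's genes
                best = max(clones + [best], key=fitness)  # transfer the best segment forward
            return best

        # example: maximise the negative squared norm (optimum is the all-zero chromosome)
        result = bacterial_mutation([0.8, -0.3, 0.5, 0.9],
                                    fitness=lambda c: -sum(g * g for g in c))
        print(result)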