Extracting Boolean rules from CA patterns
A multiobjective genetic algorithm (GA) is introduced to identify both the neighborhood and the rule set, in the form of a parsimonious Boolean expression, for both one- and two-dimensional cellular automata (CA). Simulation results illustrate that the new algorithm performs well even when the patterns are corrupted by static and dynamic noise.
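As a toy illustration of the identification task (not the paper's multiobjective GA, which also discovers the neighborhood and handles noise), a brute-force search over the 256 elementary one-dimensional CA rules can recover the rules consistent with a noise-free observed pattern:

```python
def step(row, rule):
    """Apply an elementary (radius-1) CA rule to one row, with wraparound."""
    n = len(row)
    return [
        (rule >> (row[(i - 1) % n] * 4 + row[i] * 2 + row[(i + 1) % n])) & 1
        for i in range(n)
    ]

def identify_rules(pattern):
    """Return every rule number whose evolution reproduces the observed pattern."""
    return [
        rule
        for rule in range(256)
        if all(step(pattern[t], rule) == pattern[t + 1]
               for t in range(len(pattern) - 1))
    ]

# Generate a space-time pattern with rule 110, then recover consistent rules.
rows = [[0, 0, 0, 1, 0, 0, 0, 1]]
for _ in range(8):
    rows.append(step(rows[-1], 110))
print(identify_rules(rows))
```

Short patterns may not exercise all eight neighborhoods, so several rules can remain consistent; the GA's parsimony objective addresses exactly this ambiguity.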
Analysis of Neural Networks in Terms of Domain Functions
Despite their success story, artificial neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more as a mysterious "black box". Although much research has already been done to "open the box," there is a notable hiatus in known publications on the analysis of neural networks. So far, mainly sensitivity analysis and rule extraction methods have been used to analyze neural networks. However, these can only be applied in a limited subset of the problem domains where neural network solutions are encountered. In this paper we propose a more widely applicable method which, for a given problem domain, involves identifying basic functions with which users in that domain are already familiar, and describing trained neural networks, or parts thereof, in terms of those basic functions. This provides a comprehensible description of the neural network's function and, depending on the chosen base functions, it may also provide insight into the neural network's inner "reasoning." It could further be used to optimize neural network systems. An analysis in terms of base functions may even make clear how to (re)construct a superior system using those base functions, thus using the neural network as a construction advisor.
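A minimal sketch of the idea, with a hypothetical hidden unit and an invented library of base functions (both are illustrative, not from the paper): sample the unit's response over the input range and report the base function with the smallest squared error.

```python
import math

# Hypothetical hidden-unit response to be described (illustrative only).
def hidden_unit(x):
    return math.tanh(2.0 * x)

# A small library of base functions domain users might already know.
base_functions = {
    "identity": lambda x: x,
    "step": lambda x: 1.0 if x > 0 else -1.0,
    "tanh(2x)": lambda x: math.tanh(2.0 * x),
}

def best_match(unit, bases, xs):
    """Return the base-function name with the smallest squared error to the unit."""
    def err(f):
        return sum((unit(x) - f(x)) ** 2 for x in xs)
    return min(bases, key=lambda name: err(bases[name]))

xs = [i / 10 for i in range(-20, 21)]
print(best_match(hidden_unit, base_functions, xs))
```

A real application would compare whole sub-networks, not single units, against domain-specific functions, but the matching principle is the same.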
Local Rule-Based Explanations of Black Box Decision Systems
Recent years have witnessed the rise of accurate but obscure decision systems which hide the logic of their internal decision processes from their users. The lack of explanations for the decisions of black box systems is a key ethical issue, and a limitation to the adoption of machine learning components in socially sensitive and safety-critical contexts. In this paper we focus on the problem of black box outcome explanation, i.e., explaining the reasons for the decision taken on a specific instance. We propose LORE, an agnostic method able to provide interpretable and faithful explanations. LORE first learns a local interpretable predictor on a synthetic neighborhood generated by a genetic algorithm. Then it derives from the logic of the local interpretable predictor a meaningful explanation consisting of: a decision rule, which explains the reasons for the decision; and a set of counterfactual rules, suggesting the changes in the instance's features that lead to a different outcome. Extensive experiments show that LORE outperforms existing methods and baselines both in the quality of explanations and in the accuracy of mimicking the black box.
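A rough sketch of the local-surrogate idea, using random Gaussian perturbation in place of LORE's genetic-algorithm neighborhood and a single-feature threshold rule in place of a full decision tree; the black box here is an invented stand-in:

```python
import random

# Toy black box standing in for an opaque classifier (illustrative only).
def black_box(x):
    return int(x[0] + 0.5 * x[1] > 1.0)

def local_rule(instance, predict, n=500, scale=0.5, seed=0):
    """Fit a one-feature threshold rule on a random neighborhood of the instance.

    Random perturbation stands in for LORE's genetic-algorithm neighborhood.
    Returns (feature index, threshold, fidelity of the rule on the neighborhood).
    """
    rng = random.Random(seed)
    points = [[v + rng.gauss(0, scale) for v in instance] for _ in range(n)]
    labels = [predict(p) for p in points]
    best = None
    for f in range(len(instance)):
        for t in sorted(p[f] for p in points):
            acc = sum((p[f] > t) == bool(y) for p, y in zip(points, labels)) / n
            if best is None or acc > best[2]:
                best = (f, t, acc)
    return best

feature, threshold, fidelity = local_rule([0.9, 0.1], black_box)
print(f"if x[{feature}] > {threshold:.2f} then class 1 (fidelity {fidelity:.2f})")
```

The counterfactual side of LORE would then read off the smallest change crossing the learned threshold, e.g. lowering `x[feature]` just below it.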
Exact and Approximate Rule Extraction from Neural Networks with Boolean Features
Rule extraction from classifiers treated as black boxes is an important topic in explainable artificial intelligence (XAI). It is concerned with finding rules that describe classifiers and that are understandable to humans, having the form (If...Then...Else). Neural network classifiers are one type of classifier where it is difficult to know how the inputs map to the decision. This paper presents a technique to extract rules from a neural network where the feature space is Boolean, without looking at the inner structure of the network. For such a network with a small feature space, a Boolean function describing it can be directly calculated, whilst for a network with a larger feature space, a sampling method is described to produce rule-based approximations to the behaviour of the network with varying granularity, leading to XAI. The technique is experimentally assessed on a dataset of cross-site scripting (XSS) attacks, and proves to give very high accuracy and precision, comparable to that given by the neural network being approximated.
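For a small Boolean feature space, the exact approach can be sketched as follows: enumerate all 2^n inputs, query the classifier, and render each positively classified row of the truth table as an If...Then rule (the `net` function below is an invented stand-in for the trained network):

```python
from itertools import product

# Toy "network": any black-box function over Boolean features (illustrative).
def net(x):
    return int((x[0] and not x[2]) or x[1])

def extract_truth_table(predict, n_features):
    """Exactly enumerate the 2^n Boolean inputs and record the classifier's output."""
    return {bits: predict(bits) for bits in product((0, 1), repeat=n_features)}

def positive_rules(table):
    """Render each positively classified input as an If...Then rule (one minterm each)."""
    return [
        "If " + " and ".join(
            f"x{i}" if b else f"not x{i}" for i, b in enumerate(bits)
        ) + " Then 1"
        for bits, out in table.items() if out
    ]

table = extract_truth_table(net, 3)
for rule in positive_rules(table):
    print(rule)
```

This is exact but exponential in the number of features, which is why the paper falls back to sampling for larger feature spaces; the resulting minterms could also be simplified into a parsimonious expression before presentation.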
Dissecting the Biological Motherboard (Systems Biology and Beyond)
Genome-scale molecular networks, including gene pathways, gene regulatory networks and protein interactions, are central to the investigation of the nascent disciplines of systems biology and bio-complexity. Dissecting these genome-scale molecular networks in all their possible manifestations is paramount in our quest for a genotype-input, phenotype-output application which will also take environment-genome interactions into account.

Machine learning approaches are now increasingly being used for reverse engineering such networks. Our work stresses the importance of a systems approach in biological research, and how artificial neural networks are at the forefront of the Artificial Intelligence techniques increasingly being used to construct as well as dissect molecular networks, the building blocks of the living system.

Our paper will show the application of artificial neural networks to reverse engineer a temporal gene pathway. We will also explore the pruning of nodes in these artificial neural networks to simulate gene silencing and thus generate novel biological insight into these molecular networks (the Biological Motherboard).

The research described is novel in that this may be the first application of neural networks to temporal gene expression data. It will be shown that a trained artificial neural network, with pruning, can also be read as a gene network with minimal re-interpretation, where the weights on links between nodes reflect the probability of one gene affecting another gene in time.