355 research outputs found
Proceedings of IJCAI International Workshop on Neural-Symbolic Learning and Reasoning NeSy 2005
Deep Logic Networks: Inserting and Extracting Knowledge from Deep Belief Networks
Developments in deep learning have seen the use of layerwise unsupervised learning combined with supervised learning for fine-tuning. With this layerwise approach, a deep network can be seen as a modular system that lends itself well to learning representations. In this paper, we investigate whether such modularity can be useful for the insertion of background knowledge into deep networks, improving learning performance when such knowledge is available, and for the extraction of knowledge from trained deep networks, offering a better understanding of the representations learned by such networks. To this end, we use a simple symbolic language, a set of logical rules that we call confidence rules, and show that it is suitable for representing quantitative reasoning in deep networks. We show by knowledge extraction that confidence rules offer a low-cost representation of layerwise networks (or restricted Boltzmann machines). We also show that layerwise extraction can improve the accuracy of deep belief networks. Furthermore, the proposed symbolic characterization of deep networks provides a novel method for the insertion of prior knowledge and the training of deep networks. Using this method, a deep neural-symbolic system is proposed and evaluated, with experimental results indicating that modularity through confidence rules and knowledge insertion can be beneficial to network performance.
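The abstract's idea of mapping rules onto restricted Boltzmann machine units can be illustrated with a minimal sketch. Assumptions not stated in the abstract: a CILP-style weight scheme (weight +c for a positive literal, -c for a negated one) and a bias chosen so the hidden unit fires only when every literal in the rule body is satisfied; the paper's actual confidence-rule encoding may differ.

```python
import math

def encode_confidence_rule(literals, c, num_visible):
    """Encode a rule  c : h <-> l1 ^ ... ^ ln  as one RBM hidden unit.
    `literals` is a list of (visible_index, is_positive) pairs; weights
    are +c / -c, and the bias separates satisfied from violated bodies."""
    w = [0.0] * num_visible
    n_pos = 0
    for idx, positive in literals:
        w[idx] = c if positive else -c
        n_pos += int(positive)
    bias = -c * (n_pos - 0.5)  # satisfied body -> net +c/2, violated -> <= -c/2
    return w, bias

def hidden_activation(w, bias, visible):
    """P(h = 1 | v) = sigmoid(w . v + bias) for a single hidden unit."""
    s = sum(wi * vi for wi, vi in zip(w, visible)) + bias
    return 1.0 / (1.0 + math.exp(-s))

# Rule with confidence 5:  h <-> x0 ^ x1 ^ ~x2
w, b = encode_confidence_rule([(0, True), (1, True), (2, False)],
                              c=5.0, num_visible=3)
p_match = hidden_activation(w, b, [1, 1, 0])     # body satisfied
p_mismatch = hidden_activation(w, b, [1, 1, 1])  # violates ~x2
```

The same weight row can be read in the other direction for extraction: large positive weights are reported as positive literals of a rule, large negative weights as negated ones.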
Rule Extraction from Support Vector Machines: A Geometric Approach. Technical Report
This paper presents a new approach to rule extraction from Support Vector Machines. SVMs have been applied successfully in many areas with excellent generalization results; rule extraction can offer explanation capability to SVMs. We propose to approximate the SVM classification boundary through querying, followed by clustering and searching, and then to extract rules by solving an optimization problem. A theoretical proof and experimental results indicate that the extracted rules can be used to validate the SVM's results, since maximum fidelity with high accuracy can be achieved.
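The querying step can be sketched in one dimension: treat the trained SVM as a black box, sample pairs of oppositely labelled points, and bisect between them to locate boundary points, from which a rule threshold is then read off. This is a minimal sketch under assumed simplifications (a 1-D input and a stand-in classifier instead of a trained SVM); the paper's actual method works geometrically in higher dimensions with clustering and optimization.

```python
import random

def query_boundary_points(classify, lo, hi, n=300, iters=30, seed=0):
    """Approximate a black-box classifier's 1-D decision boundary by
    querying: bisect between pairs of oppositely labelled samples."""
    rng = random.Random(seed)
    pts = []
    for _ in range(n):
        a, b = rng.uniform(lo, hi), rng.uniform(lo, hi)
        if classify(a) == classify(b):
            continue  # pair does not straddle the boundary
        for _ in range(iters):
            m = (a + b) / 2
            if classify(m) == classify(a):
                a = m
            else:
                b = m
        pts.append((a + b) / 2)
    return pts

# Stand-in for a trained SVM's decision function (hypothetical example:
# any 0/1 black-box classifier can be queried the same way).
svm_like = lambda x: int(x > 1.7)
boundary = query_boundary_points(svm_like, 0.0, 5.0)
# Extract a simple rule from the queried points: IF x <= t THEN class 0.
threshold = sum(boundary) / len(boundary)
```

After 30 bisections each boundary point is located to within (hi - lo) / 2**30, so the averaged threshold recovers the black box's decision point with high fidelity.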
Neural-Symbolic Monitoring and Adaptation
Runtime monitors check the execution of a system under scrutiny against a set of formal specifications describing a prescribed behaviour. The two core properties for monitoring systems are scalability and adaptability. In this paper we show how RuleRunner, our previous neural-symbolic monitoring system, can exploit learning strategies to integrate desired deviations into the initial set of specifications. The resulting system allows for fast conformance checking and can suggest possible enhanced models when the initial set of specifications has to be adapted to include new patterns.
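The conformance-checking side of this can be sketched symbolically: walk a finite event trace through a specification automaton and report a verdict. This is a plain-automaton sketch with an invented toy specification, not RuleRunner's neural encoding of the rules.

```python
def monitor(trace, spec):
    """Finite-trace runtime monitor: run the trace through a
    specification automaton given as {(state, event): next_state};
    an event with no transition signals a violation at that index."""
    state = spec["init"]
    for i, event in enumerate(trace):
        nxt = spec["delta"].get((state, event))
        if nxt is None:
            return ("violation", i)
        state = nxt
    return ("conform", state) if state in spec["accept"] else ("incomplete", state)

# Toy specification (hypothetical example): every 'open' must be
# matched by a 'close' before the trace ends.
spec = {
    "init": "closed",
    "accept": {"closed"},
    "delta": {
        ("closed", "open"): "open",
        ("open", "close"): "closed",
    },
}
verdict = monitor(["open", "close"], spec)  # -> ('conform', 'closed')
bad = monitor(["open", "open"], spec)       # -> ('violation', 1)
```

Adapting the specification to a desired deviation then amounts to adding transitions to `delta`, which is the step the paper learns rather than hand-codes.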
Applied Temporal Rule Mining to Time Series
Association rule mining from time series has attracted considerable interest in recent years, and various methods have been developed. Temporal rules between discovered episodes provide useful knowledge about the dynamics of the problem domain and the underlying data-generating process. However, temporal rule mining itself has received little attention. In addition, existing methods suffer from two significant drawbacks: first, the rules they produce are not robust to noise; second, the methods are highly dependent on the choice of parameters, since small perturbations of the parameters lead to significantly different results. In this paper we propose a framework to derive temporal rules from time series. Our approach is based on episode rule mining, which discovers temporal rules from time series in the frequency domain using the discrete cosine transform. The rules are then translated into temporal relations between time series patterns of arbitrary length. Experimental results for the proposed framework are presented.
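The frequency-domain step can be illustrated with a small sketch: take the DCT of a window and keep the dominant coefficients as a compact symbolic pattern, which is insensitive to a level shift in the raw series. The textbook O(N²) DCT-II formula and the `dominant_frequencies` helper are assumptions for illustration, not the paper's exact discretization.

```python
import math

def dct2(x):
    """Type-II discrete cosine transform, unnormalized textbook formula:
    X_k = sum_n x_n * cos(pi * k * (2n + 1) / (2N))."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n in range(N)) for k in range(N)]

def dominant_frequencies(x, top=2):
    """Keep the indices of the largest-magnitude DCT coefficients
    (ignoring the DC term) as a compact frequency-domain signature."""
    coeffs = dct2(x)
    order = sorted(range(1, len(coeffs)), key=lambda k: -abs(coeffs[k]))
    return order[:top]

# Two windows with the same periodic shape produce the same signature
# even under a constant level shift (which only changes the DC term).
w1 = [math.sin(2 * math.pi * 3 * n / 32) for n in range(32)]
w2 = [1.0 + math.sin(2 * math.pi * 3 * n / 32) for n in range(32)]
```

Matching signatures across windows is what lets episodes, and then temporal rules between them, be mined robustly in the frequency domain.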
Logic tensor networks for semantic image interpretation
Semantic Image Interpretation (SII) is the task of extracting structured semantic descriptions from images. It is widely agreed that the combined use of visual data and background knowledge is of great importance for SII. Recently, Statistical Relational Learning (SRL) approaches have been developed for reasoning under uncertainty and learning in the presence of data and rich knowledge. Logic Tensor Networks (LTNs) are an SRL framework which integrates neural networks with first-order fuzzy logic to allow (i) efficient learning from noisy data in the presence of logical constraints, and (ii) reasoning with logical formulas describing general properties of the data. In this paper, we develop and apply LTNs to two of the main tasks of SII, namely, the classification of an image's bounding boxes and the detection of the relevant part-of relations between objects. To the best of our knowledge, this is the first successful application of SRL to such SII tasks. The proposed approach is evaluated on a standard image processing benchmark. Experiments show that background knowledge in the form of logical constraints can improve the performance of purely data-driven approaches, including the state-of-the-art Fast Region-based Convolutional Neural Networks (Fast R-CNN). Moreover, we show that the use of logical background knowledge adds robustness to the learning system when errors are present in the labels of the training data.
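The fuzzy-logic grounding at the core of LTNs can be sketched for the part-of task: a first-order constraint is evaluated on real-valued predicate scores using Łukasiewicz connectives. Assumptions for illustration: fixed scores stand in for the learned tensor groundings, and the universal quantifier is aggregated as a minimum, whereas LTN implementations offer several aggregators.

```python
def luk_and(a, b):
    """Lukasiewicz t-norm: conjunction over truth degrees in [0, 1]."""
    return max(0.0, a + b - 1.0)

def luk_implies(a, b):
    """Residuum of the Lukasiewicz t-norm: fuzzy implication."""
    return min(1.0, 1.0 - a + b)

def constraint_truth(part_scores, whole_scores, partof_scores):
    """Truth degree of  forall x,y: partOf(x,y) -> part(x) & whole(y),
    with 'forall' aggregated here as the minimum over all pairs."""
    degrees = [luk_implies(p, luk_and(part_scores[x], whole_scores[y]))
               for (x, y), p in partof_scores.items()]
    return min(degrees)

# Hypothetical scores for one image: a wheel detected as a part of a car.
truth = constraint_truth({"wheel": 0.9}, {"car": 0.8},
                         {("wheel", "car"): 0.85})
```

During training, 1 minus this truth degree acts as a loss term, so the networks producing the scores are pushed toward label assignments that satisfy the background constraints, which is what adds robustness under label noise.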
Efficient Predicate Invention using Shared NeMuS
Amao is a cognitive agent framework that tackles the invention of predicates with a different strategy compared to recent advances in Inductive Logic Programming (ILP) approaches such as the Meta-Interpretive Learning (MIL) technique. It uses a Neural Multi-Space (NeMuS) graph structure to anti-unify atoms from the Herbrand base that pass the inductive momentum check. Inductive Clause Learning (ICL), as it is called, is extended here by using the weights of logical components, already present in NeMuS, to support inductive learning by expanding clause candidates with anti-unified atoms. An efficient invention mechanism is achieved, including the learning of recursive hypotheses, while restricting the shape of the hypothesis by adding bias definitions or idiosyncrasies of the language.
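Anti-unification, the operation the abstract builds on, computes the least general generalization of two atoms: equal arguments are kept, differing ones are replaced by fresh variables. A minimal sketch, assuming atoms represented as (predicate, argument-tuple) pairs; NeMuS's weighted graph structure is not modelled here.

```python
def anti_unify(atom1, atom2):
    """Least general generalization of two atoms with the same predicate:
    keep equal arguments, replace each differing pair with a fresh
    variable (reusing the same variable for a repeated mismatch)."""
    pred1, args1 = atom1
    pred2, args2 = atom2
    if pred1 != pred2 or len(args1) != len(args2):
        return None  # atoms with different predicates do not generalize
    subst, out = {}, []
    for a, b in zip(args1, args2):
        if a == b:
            out.append(a)
        else:
            if (a, b) not in subst:
                subst[(a, b)] = f"X{len(subst)}"
            out.append(subst[(a, b)])
    return (pred1, tuple(out))

# parent(tom, bob) and parent(ann, bob) generalize to parent(X0, bob)
g = anti_unify(("parent", ("tom", "bob")), ("parent", ("ann", "bob")))
# -> ('parent', ('X0', 'bob'))
```

In ICL, generalizations like this become candidate clause literals, and the NeMuS weights decide which candidates are worth expanding.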
Proceedings of ECAI International Workshop on Neural-Symbolic Learning and reasoning NeSy 2006