
    Logic Programming In Radial Basis Function Neural Networks

    In this thesis, I establish new techniques for representing logic programs in radial basis function (RBF) neural networks. Two techniques were developed: the first encodes a logic program in an RBF network, and the second computes the single-step operator of a logic program with an RBF network. To improve network performance, I applied several optimization algorithms and compared three training regimes for improving predictive capability: no training, half training, and full training. I also established a new method for determining the best number of hidden neurons in an RBF network, using the root mean square error and the Schwarz Bayesian criterion as model selection criteria, and evaluated it on real data sets of different sizes. The analysis revealed that the particle swarm optimization and prey-predator algorithms perform best for training the networks. Finally, I developed a technique for extracting logic programs from RBF networks by constructing networks that represent 3-conjunctive-normal-form (3-CNF) logic programs, and applied the results to representing electronic circuits in RBF networks.
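    As a rough illustration of the model-selection step described above, the sketch below fits RBF networks with increasing numbers of Gaussian hidden neurons and picks the size minimizing the Schwarz Bayesian criterion, reporting RMSE alongside. This is a minimal sketch under generic assumptions (random centers, least-squares output weights, a toy regression target), not the thesis's construction.

```python
import numpy as np

def rbf_design(X, centers, width=1.0):
    # Gaussian RBF activations: one column per hidden neuron
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbf(X, y, k, rng):
    # Pick k random centers, solve output weights by least squares
    centers = X[rng.choice(len(X), size=k, replace=False)]
    H = rbf_design(X, centers)
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    return centers, w

def sbc(y, y_hat, k):
    # Schwarz Bayesian criterion: n * log(RSS / n) + k * log(n)
    n = len(y)
    rss = ((y - y_hat) ** 2).sum()
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))          # toy data set
y = np.sin(3 * X[:, 0]) + np.cos(3 * X[:, 1])  # toy target

scores = {}
for k in range(2, 21):                          # candidate hidden-layer sizes
    centers, w = fit_rbf(X, y, k, rng)
    y_hat = rbf_design(X, centers) @ w
    rmse = np.sqrt(((y - y_hat) ** 2).mean())
    scores[k] = (sbc(y, y_hat, k), rmse)

best_k = min(scores, key=lambda k: scores[k][0])
print(f"selected hidden neurons: {best_k}, RMSE: {scores[best_k][1]:.4f}")
```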

    An Architecture-Altering and Training Methodology for Neural Logic Networks: Application in the Banking Sector

    Artificial neural networks have been universally acknowledged for their ability to construct forecasting and classification systems. Among their desirable features has always been the interpretability of their structure, which can provide further knowledge to domain experts, and a number of methodologies have been developed to this end. One such paradigm is the neural logic network. Neural logic networks are designed so that their structure can be interpreted as a set of simple logical rules; they can be seen as a network representation of a logical rule base. Although powerful by definition in this context, neural logic networks have performed poorly when trained from data: standard training methods such as back-propagation alter the network's synapse weights, which destroys the network's interpretability. The methodology in this paper overcomes these problems with an architecture-altering technique that produces highly competitive solutions while preserving all weight-related information. The implementation uses genetic programming with a grammar-guided training approach in order to produce arbitrarily large and connected neural logic networks. The methodology is tested on a problem from the banking sector with encouraging results.
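    To make the architecture-altering idea concrete, here is a deliberately simplified sketch: individuals are Boolean expression trees (a stand-in for logic-network structures), and evolution mutates structure only, never weight-style information. All names are hypothetical, and the paper's actual method uses grammar-guided genetic programming over neural logic networks rather than plain Boolean gates.

```python
import random

# Hypothetical stand-in for a logic-network structure: an expression tree
# over Boolean inputs. Evolution alters the architecture, not weights.
OPS = {"AND": all, "OR": any, "NOT": lambda vs: not vs[0]}

def random_tree(n_inputs, depth=3):
    if depth == 0 or random.random() < 0.3:
        return ("IN", random.randrange(n_inputs))
    op = random.choice(list(OPS))
    arity = 1 if op == "NOT" else 2
    return (op, [random_tree(n_inputs, depth - 1) for _ in range(arity)])

def evaluate(tree, x):
    tag, payload = tree
    if tag == "IN":
        return bool(x[payload])
    return OPS[tag]([evaluate(child, x) for child in payload])

def mutate(tree, n_inputs):
    # Architecture-altering mutation: replace a random subtree wholesale
    if tree[0] == "IN" or random.random() < 0.2:
        return random_tree(n_inputs, depth=2)
    op, children = tree
    i = random.randrange(len(children))
    children = children[:i] + [mutate(children[i], n_inputs)] + children[i + 1:]
    return (op, children)

def fitness(tree, data):
    return sum(evaluate(tree, x) == y for x, y in data) / len(data)

# Tiny XOR-style toy problem, purely illustrative
data = [((0, 0), False), ((0, 1), True), ((1, 0), True), ((1, 1), False)]
pop = [random_tree(2) for _ in range(50)]
for _ in range(30):
    pop.sort(key=lambda t: -fitness(t, data))
    pop = pop[:10] + [mutate(random.choice(pop[:10]), 2) for _ in range(40)]

best = max(pop, key=lambda t: fitness(t, data))
print("best accuracy:", fitness(best, data))
```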

    The Integration of Connectionism and First-Order Knowledge Representation and Reasoning as a Challenge for Artificial Intelligence

    Intelligent systems based on first-order logic on the one hand, and on artificial neural networks (also called connectionist systems) on the other, differ substantially. It would be very desirable to combine the robust learning machinery of neural networks with symbolic knowledge representation and reasoning paradigms such as logic programming in a way that retains the strengths of both paradigms. Current state-of-the-art research, however, falls far short of this ultimate goal. We perceive, as one of the main obstacles to be overcome, the question of how symbolic knowledge can be encoded by means of connectionist systems: satisfactory answers to this will naturally lead the way to knowledge extraction algorithms and to integrated neural-symbolic systems. Comment: In Proceedings of INFORMATION'2004, Tokyo, Japan, to appear. 12 pages.
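    One concrete instance of such an encoding, sketched below under simplifying assumptions, is the classic idea (in the spirit of Hölldobler and Kalinke's core method, not this paper's own construction) of translating a propositional logic program into a two-layer threshold network whose forward pass computes the program's single-step operator T_P; iterating the network from the empty interpretation then reaches the least model of a definite program.

```python
import numpy as np

# Minimal sketch: one hidden threshold unit per clause, one output unit per
# atom. Example definite program, chosen for illustration:
#   b.    a :- b.    c :- a, b.
atoms = ["a", "b", "c"]
clauses = [("b", [], []), ("a", ["b"], []), ("c", ["a", "b"], [])]  # (head, positive body, negative body)

idx = {atom: i for i, atom in enumerate(atoms)}
n = len(atoms)
W_in = np.zeros((len(clauses), n))    # input-to-hidden weights
theta = np.zeros(len(clauses))        # hidden thresholds
W_out = np.zeros((n, len(clauses)))   # hidden-to-output weights (OR over clauses)

for j, (head, pos, neg) in enumerate(clauses):
    for atom in pos:
        W_in[j, idx[atom]] = 1.0
    for atom in neg:
        W_in[j, idx[atom]] = -1.0
    theta[j] = len(pos) - 0.5         # unit fires iff the whole body is satisfied
    W_out[idx[head], j] = 1.0

def t_p(interp):
    # One forward pass = one application of the single-step operator T_P
    hidden = (W_in @ interp > theta).astype(float)
    return (W_out @ hidden > 0.5).astype(float)

I = np.zeros(n)                        # empty interpretation
for _ in range(len(atoms) + 1):        # iterate to the least fixed point
    I = t_p(I)
print(dict(zip(atoms, I)))             # {'a': 1.0, 'b': 1.0, 'c': 1.0}
```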

    DeepProbLog: Neural Probabilistic Logic Programming

    We introduce DeepProbLog, a probabilistic logic programming language that incorporates deep learning by means of neural predicates. We show how existing inference and learning techniques can be adapted for the new language. Our experiments demonstrate that DeepProbLog supports (1) both symbolic and subsymbolic representations and inference, (2) program induction, (3) probabilistic (logic) programming, and (4) (deep) learning from examples. To the best of our knowledge, this work is the first to propose a framework in which general-purpose neural networks and expressive probabilistic-logical modeling and reasoning are integrated in a way that exploits the full expressiveness and strengths of both worlds and can be trained end-to-end from examples. Comment: Accepted for spotlight at NeurIPS 2018.
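    The flavor of a neural predicate can be conveyed with a small sketch: a digit classifier's softmax output is read as a distribution over the groundings of digit(Img, 0..9), and a logical rule for addition marginalizes over it while keeping everything differentiable. This mirrors the paper's MNIST-addition example but is not the DeepProbLog implementation; the class and function names here are hypothetical.

```python
import torch

class DigitNet(torch.nn.Module):
    # Hypothetical classifier whose softmax output is read as the
    # distribution of the neural predicate digit(Img, 0..9)
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Flatten(), torch.nn.Linear(28 * 28, 64),
            torch.nn.ReLU(), torch.nn.Linear(64, 10))

    def forward(self, img):
        return torch.softmax(self.net(img), dim=-1)

def addition_prob(p1, p2, z):
    # P(addition(I1, I2, z)) = sum over d1 + d2 = z of P(d1) * P(d2),
    # mirroring: addition(X, Y, Z) :- digit(X, D1), digit(Y, D2), Z is D1 + D2.
    return sum(p1[d] * p2[z - d] for d in range(10) if 0 <= z - d <= 9)

net = DigitNet()
img1, img2 = torch.randn(1, 1, 28, 28), torch.randn(1, 1, 28, 28)
p = addition_prob(net(img1)[0], net(img2)[0], z=7)

# The query probability is differentiable, so a loss such as -log p trains
# the classifier end-to-end from pairs labeled only with their sum.
loss = -torch.log(p)
loss.backward()
```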