
    The Integration of Connectionism and First-Order Knowledge Representation and Reasoning as a Challenge for Artificial Intelligence

    Intelligent systems based on first-order logic on the one hand, and on artificial neural networks (also called connectionist systems) on the other, differ substantially. It would be very desirable to combine the robust machinery of neural networks with symbolic knowledge representation and reasoning paradigms such as logic programming in such a way that the strengths of both paradigms are retained. Current state-of-the-art research, however, falls far short of this ultimate goal. We perceive the question of how symbolic knowledge can be encoded by means of connectionist systems as one of the main obstacles to be overcome: satisfactory answers to it will naturally lead the way to knowledge extraction algorithms and to integrated neural-symbolic systems. (In Proceedings of INFORMATION'2004, Tokyo, Japan, to appear; 12 pages.)
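    As an illustration of what "encoding symbolic knowledge by means of connectionist systems" can mean in the propositional case, the sketch below translates a small logic program into a network-like structure that computes the immediate-consequence operator T_P and iterates it to a fixed point. This is a minimal, assumed example in the spirit of the core method, not the construction proposed in the paper; the function names are illustrative.

```python
# Illustrative sketch (not from the paper): a propositional logic program
# evaluated in the style of the core method.  Each rule plays the role of a
# hidden unit (active iff its body is satisfied) and each atom the role of an
# output unit (derived iff some rule for it is active).

def encode_program(rules, atoms):
    """rules: list of (head, positive_body, negative_body) triples."""
    def t_p(interpretation):
        # "Hidden layer": collect heads of rules whose bodies are satisfied.
        active = [
            head
            for head, pos, neg in rules
            if all(a in interpretation for a in pos)
            and all(a not in interpretation for a in neg)
        ]
        # "Output layer": an atom is derived iff some rule unit for it fired.
        return {a for a in atoms if a in active}
    return t_p

# Example program:  p :- q.   q :- not r.
atoms = {"p", "q", "r"}
t_p = encode_program([("p", ["q"], []), ("q", [], ["r"])], atoms)

# Iterate T_P from the empty interpretation until a fixed point is reached.
interp = set()
while True:
    nxt = t_p(interp)
    if nxt == interp:
        break
    interp = nxt
print(sorted(interp))   # -> ['p', 'q']
```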

    Maximum 2-satisfiability in radial basis function neural network

    Maximum k-Satisfiability (MAX-kSAT) is the logic of determining the maximum number of satisfiable clauses; as such, it plays a prominent role in numerous applications as a combinatorial optimization formalism. MAX2SAT is the special case of MAX-kSAT written in Conjunctive Normal Form (CNF) with two variables in each clause. This paper presents a new paradigm for MAX2SAT by implementing it in a Radial Basis Function Neural Network (RBFNN); the analysis is therefore restricted to MAX2SAT clauses. Dev C++ is used as the platform for training and testing the proposed algorithm. The effectiveness of RBFNN-MAX2SAT is estimated by evaluating the proposed models on testing data sets, and the results are analysed in terms of the ratio of satisfied clauses (RSC), the root mean square error (RMSE), and CPU time. The simulated results suggest that the proposed algorithm is effective for MAX2SAT logic programming, achieving a lower root mean square error, a higher ratio of satisfied clauses, and less CPU time.
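    A minimal sketch of the evaluation side described above: given a MAX-2SAT instance and a candidate truth assignment, it computes the ratio of satisfied clauses (RSC) and a root mean square error (RMSE) against the all-satisfied target. The metric definitions and names here are assumptions for illustration and may differ from those used in the paper, which works with an RBFNN trained in Dev C++.

```python
# Illustrative sketch (assumed metric definitions, not the paper's exact ones):
# evaluating a candidate truth assignment on a MAX-2SAT instance.

from math import sqrt

# Clauses as pairs of literals; positive int i means variable i, negative -i
# means its negation.  Example: (x1 or not x2), (not x1 or x3), (x2 or x3).
clauses = [(1, -2), (-1, 3), (2, 3)]

def satisfied(clause, assignment):
    """A clause holds if at least one literal is true under the assignment."""
    return any(assignment[abs(l)] == (l > 0) for l in clause)

def rsc(clauses, assignment):
    """Ratio of satisfied clauses: satisfied count over total clause count."""
    return sum(satisfied(c, assignment) for c in clauses) / len(clauses)

def rmse(targets, outputs):
    """Root mean square error between desired and produced clause values."""
    return sqrt(sum((t - o) ** 2 for t, o in zip(targets, outputs)) / len(targets))

assignment = {1: True, 2: False, 3: True}
print(rsc(clauses, assignment))   # 1.0 -> all clauses satisfied
print(rmse([1, 1, 1], [satisfied(c, assignment) for c in clauses]))  # 0.0
```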

    A neural implementation of multi-adjoint logic programs via sf-homogenization

    A generalization of the homogenization process needed for the neural implementation of multi-adjoint logic programming (a unifying theory for dealing with uncertainty, imprecise data or incomplete information) is presented here. The idea is to represent a more general family of adjoint pairs while maintaining the advantages of the existing implementation recently introduced in [6]. The soundness of the transformation is proved and its complexity is analysed. In addition, the corresponding generalization of the neural-like implementation of the fixed-point semantics of multi-adjoint logic programs is presented.
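    The following is a simplified, assumed sketch of the fixed-point semantics being referred to: a weighted logic program is evaluated by iterating an immediate-consequence operator built from a single product t-norm adjoint pair. The actual multi-adjoint framework allows several adjoint pairs and the homogenization step discussed above; the names and the choice of connective below are illustrative only.

```python
# Illustrative sketch (simplified, with an assumed product t-norm adjoint
# pair): fixed-point iteration of the immediate-consequence operator for a
# weighted logic program, a crude stand-in for the multi-adjoint setting.

def product_conj(values):
    """Product t-norm: conjunction of truth degrees in [0, 1]."""
    result = 1.0
    for v in values:
        result *= v
    return result

# Rules: (head, body_atoms, rule_weight); facts have an empty body.
rules = [
    ("wet", ["rain"], 0.9),
    ("rain", [], 0.8),
]
atoms = {"wet", "rain"}

def t_p(interp):
    """One step of the immediate-consequence operator: each head receives the
    best degree obtainable from any of its rules."""
    new = dict(interp)
    for head, body, weight in rules:
        degree = weight * product_conj(interp[a] for a in body)
        new[head] = max(new[head], degree)
    return new

interp = {a: 0.0 for a in atoms}
while (nxt := t_p(interp)) != interp:
    interp = nxt
print(interp)   # e.g. {'rain': 0.8, 'wet': 0.72}
```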

    Dimensions of Neural-symbolic Integration - A Structured Survey

    Research on integrated neural-symbolic systems has made significant progress in the recent past. In particular, the understanding of ways to deal with symbolic knowledge within connectionist systems (also called artificial neural networks) has reached a critical mass which enables the community to strive for applicable implementations and use cases. Recent work has covered a great variety of logics used in artificial intelligence and provides a multitude of techniques for dealing with them within the context of artificial neural networks. We present a comprehensive survey of the field of neural-symbolic integration, including a new classification of systems according to their architectures and abilities. (28 pages.)

    Discrete Hopfield neural network in restricted maximum k-satisfiability logic programming

    Maximum k-Satisfiability (MAX-kSAT) seeks the most consistent interpretation, the one that generates the maximum number of satisfied clauses. MAX-kSAT is an important logical representation in logic programming since not all combinatorial problems are satisfiable in nature. This paper presents a Hopfield Neural Network based on the MAX-kSAT logical rule. Learning in the Hopfield Neural Network is integrated with the Wan Abdullah method and the Sathasivam relaxation method to obtain the correct final states of the neurons. Computer simulation shows that MAX-kSAT can be embedded optimally in the Hopfield Neural Network.
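    As a rough illustration of deriving Hopfield weights from a logical cost function (the general idea behind the Wan Abdullah method), the sketch below expands a per-clause inconsistency cost for a toy 2SAT instance into pairwise weights and biases, then runs asynchronous discrete Hopfield updates. The specific constants and the instance are worked assumptions, not values quoted from the paper, and the Sathasivam relaxation method is not modelled.

```python
# Illustrative sketch (weights obtained by matching an assumed clause cost
# function to the Hopfield energy, in the spirit of the Wan Abdullah method;
# the constants here are a worked assumption, not quoted from the paper).

import random

# 2SAT clauses over variables 1..3; positive int = variable, negative = negation.
clauses = [(1, 2), (-1, 3), (-2, -3)]
variables = sorted({abs(l) for c in clauses for l in c})

# Bipolar neurons: S[v] = +1 means "v is true", -1 means "v is false".
# For a clause (l1 or l2) the cost (1/4)(1 - s1*S1)(1 - s2*S2) equals 1 exactly
# when the clause is unsatisfied; expanding it yields the weights and biases.
W = {(a, b): 0.0 for a in variables for b in variables if a != b}
bias = {v: 0.0 for v in variables}
for l1, l2 in clauses:
    a, sa = abs(l1), (1 if l1 > 0 else -1)
    b, sb = abs(l2), (1 if l2 > 0 else -1)
    W[(a, b)] -= sa * sb / 4
    W[(b, a)] -= sa * sb / 4
    bias[a] += sa / 4
    bias[b] += sb / 4

def unsatisfied(S):
    """Count clauses with both literals false under the bipolar state S."""
    return sum(1 for l1, l2 in clauses
               if S[abs(l1)] != (1 if l1 > 0 else -1)
               and S[abs(l2)] != (1 if l2 > 0 else -1))

# Asynchronous updates: each neuron aligns with its local field until stable.
random.seed(0)
S = {v: random.choice([-1, 1]) for v in variables}
for _ in range(50):
    changed = False
    for v in variables:
        h = sum(W[(v, u)] * S[u] for u in variables if u != v) + bias[v]
        new = 1 if h > 0 else -1 if h < 0 else S[v]
        changed |= new != S[v]
        S[v] = new
    if not changed:
        break

print(S, "unsatisfied clauses:", unsatisfied(S))
```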