First-order logic learning in artificial neural networks
Artificial Neural Networks have previously been applied in neuro-symbolic learning to learn ground logic program rules. However, there are few results on learning relations with neuro-symbolic methods. This paper presents the system PAN, which can learn relations. The inputs to PAN are one or more atoms, representing the conditions of a logic rule, and the output is the conclusion of the rule. The symbolic inputs may include functional terms of arbitrary depth and arity, and the output may include terms constructed from the input functors. Symbolic inputs are encoded as integers using an invertible encoding function, which is applied in reverse to extract the output terms. The main advance of this system is a convention that allows the construction of Artificial Neural Networks able to learn rules with the same expressive power as first-order definite clauses. The system is tested on three examples and the results are discussed.
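The abstract only states that the encoding is invertible, not how it works. A minimal hypothetical sketch, assuming a Cantor-style pairing function (which is a bijection between pairs of naturals and naturals, so decoding is exact): a term such as f(a, b) can be encoded by pairing the functor's code with the recursively paired argument codes, and the output term recovered by inverting.

```python
import math

def pair(x, y):
    # Cantor pairing: a bijection N x N -> N, so the encoding is invertible.
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z):
    # Invert the Cantor pairing to recover the original pair exactly.
    w = (math.isqrt(8 * z + 1) - 1) // 2
    t = w * (w + 1) // 2
    y = z - t
    return w - y, y

# Hypothetical codes: functor f -> 3, constants a -> 1, b -> 2.
# Encode f(a, b) as pair(3, pair(1, 2)); nested terms of arbitrary
# depth encode by recursion on the same scheme.
z = pair(3, pair(1, 2))
f, args = unpair(z)
assert (f, unpair(args)) == (3, (1, 2))
```

The codes and the pairing scheme above are assumptions for illustration; the paper's actual encoding function is not given in the abstract.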
Connectionist modal logic: Representing modalities in neural networks
Modal logics are amongst the most successful applied logical systems. Neural networks have proved to be effective learning systems. In this paper, we propose to combine the strengths of modal logics and neural networks by introducing Connectionist Modal Logics (CML). CML belongs to the domain of neural-symbolic integration, which concerns the application of problem-specific symbolic knowledge within the neurocomputing paradigm. In CML, one may represent, reason about, or learn modal logics using a neural network. This is achieved by a Modalities Algorithm that translates modal logic programs into neural network ensembles. We show that the translation is sound, i.e. the network ensemble computes a fixed-point semantics of the original modal program, acting as a distributed computational model for modal logic. We also show that the fixed-point computation terminates whenever the modal program is well-behaved. Finally, we validate CML as a computational model for integrated knowledge representation and learning by applying it to a well-known testbed for distributed knowledge representation. This paves the way for a range of applications on integrated knowledge representation and learning, from practical reasoning to evolving multi-agent systems.
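The soundness claim is that the network ensemble computes a fixed point of the program. A minimal sketch of the underlying idea, assuming a propositional (non-modal) definite program with hypothetical rules: iterate the immediate-consequence operator T_P from the empty interpretation until it stabilises.

```python
def tp(rules, interpretation):
    """One step of the immediate-consequence operator T_P:
    derive every rule head whose body atoms are all currently true."""
    return interpretation | {head for head, body in rules
                             if all(b in interpretation for b in body)}

def least_fixed_point(rules):
    # Iterate T_P from the empty interpretation until nothing changes;
    # for a definite program this terminates at the least fixed point.
    i = set()
    while True:
        nxt = tp(rules, i)
        if nxt == i:
            return i
        i = nxt

# Hypothetical program:  a.   b :- a.   c :- a, b.
rules = [("a", []), ("b", ["a"]), ("c", ["a", "b"])]
assert least_fixed_point(rules) == {"a", "b", "c"}
```

CML's Modalities Algorithm additionally handles modal operators by distributing the computation across an ensemble of such networks, one per possible world; that machinery is beyond this sketch.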
Learning Distributed Representations for Multiple-Viewpoint Melodic Prediction
The analysis of sequences is important for extracting information from music owing to its fundamentally temporal nature. In this paper, we present a distributed model based on the Restricted Boltzmann Machine (RBM) for learning melodic sequences. The model is similar to a previous successful neural network model for natural language [2]. It is first trained to predict the next pitch in a given pitch sequence, and then extended to also make use of information in sequences of note durations in monophonic melodies on the same task. In doing so, we also propose an efficient way of representing this additional information that takes advantage of the RBM's structure. Results show that this RBM-based prediction model performs at least as well as previously evaluated n-gram models and outperforms them in certain cases. It is able to make use of information present in longer sequences more effectively than n-gram models, while scaling linearly in the number of free parameters required.
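The linear-scaling claim can be made concrete with a back-of-envelope parameter count (the alphabet and layer sizes below are illustrative assumptions, not the paper's values): a full n-gram table grows exponentially in the context length, whereas a distributed model with one-hot context inputs grows linearly.

```python
def ngram_params(vocab, n):
    # A full n-gram table stores one entry per length-n context/symbol tuple.
    return vocab ** n

def distributed_params(vocab, context, hidden):
    # One-hot context symbols feeding a hidden layer, plus hidden-to-output
    # weights: the total is linear in the context length.
    return context * vocab * hidden + hidden * vocab

V = 50  # assumed pitch-alphabet size
assert ngram_params(V, 2) == 2_500
assert ngram_params(V, 5) == 312_500_000          # exponential blow-up
assert distributed_params(V, 5, 100) == 30_000    # 5*50*100 + 100*50
```

This is only the scaling argument, not the RBM itself; the paper's actual model and representation of note durations are not reproduced here.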
Value-based argumentation frameworks as neural-symbolic learning systems
While neural networks have been successfully used in a number of machine learning applications, logical languages have been the standard for the representation of argumentative reasoning. In this paper, we establish a relationship between neural networks and argumentation networks, combining reasoning and learning in the same argumentation framework. We do so by presenting a new neural argumentation algorithm, responsible for translating argumentation networks into standard neural networks. We then show a correspondence between the two networks. The algorithm works not only for acyclic argumentation networks, but also for circular networks, and it enables the accrual of arguments through learning as well as the parallel computation of arguments.
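The core of the translation can be sketched as follows (a simplified, hypothetical rendering, not the paper's exact algorithm): arguments become units, attacks become inhibitory links, and acceptance is computed by iterating the network until it stabilises.

```python
def accepted(arguments, attacks, steps=10):
    """Iterate a simple attack network: an argument is active unless some
    currently active attacker hits it.  `attacks` is a set of
    (attacker, target) pairs, playing the role of negative weights."""
    active = {a: True for a in arguments}
    for _ in range(steps):
        active = {a: not any(active[b] for (b, t) in attacks if t == a)
                  for a in arguments}
    return {a for a, on in active.items() if on}

# Hypothetical acyclic network: A attacks B, B attacks C.
# A is unattacked, so A holds; A defeats B; with B out, C is reinstated.
args = ["A", "B", "C"]
assert accepted(args, {("A", "B"), ("B", "C")}) == {"A", "C"}
```

For circular networks this naive iteration can oscillate; the paper's algorithm handles such cases, and its learning capability (argument accrual) has no counterpart in this sketch.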
Abductive reasoning in neural-symbolic learning systems
Abduction is, or subsumes, a process of inference. It entertains possible hypotheses and chooses among them for further scrutiny. There is a large literature on various aspects of non-symbolic, subconscious abduction. There is also a very active research community working on the symbolic (logical) characterisation of abduction, which typically treats it as a form of hypothetico-deductive reasoning. In this paper we start to bridge the gap between the symbolic and sub-symbolic approaches to abduction. We are interested in benefiting from developments made by each community. In particular, we are interested in the ability of non-symbolic systems (neural networks) to learn from experience using efficient algorithms and to perform massively parallel computations of alternative abductive explanations. At the same time, we would like to benefit from the rigour and semantic clarity of symbolic logic. We present two approaches to dealing with abduction in neural networks. One of them uses Connectionist Modal Logic and a translation of Horn clauses into modal clauses to come up with a neural network ensemble that computes abductive explanations in a top-down fashion. The other combines neural-symbolic systems and abductive logic programming and proposes a neural architecture which performs a more systematic, bottom-up computation of alternative abductive explanations. Both approaches employ standard neural network architectures which are already known to be highly effective in practical learning applications. Unlike previous work in the area, our aim is to promote the integration of reasoning and learning in such a way that the neural network provides the machinery for cognitive computation, inductive learning and hypothetical reasoning, while logic provides the rigour and explanation capability to the systems, facilitating the interaction with the outside world.
Although it is left as future work to determine whether the structure of one of the proposed approaches is more amenable to learning than the other, we hope to have contributed to the development of the area by approaching it from the perspective of symbolic and sub-symbolic integration.
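The bottom-up computation of alternative abductive explanations can be illustrated symbolically (a minimal sketch with hypothetical rules and abducibles, not the paper's neural architecture): enumerate candidate sets of abducibles and keep the smallest ones whose forward-chained closure under the Horn rules entails the observation.

```python
from itertools import chain, combinations

def closure(rules, facts):
    # Forward-chain definite Horn rules to a fixed point.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in facts and all(b in facts for b in body):
                facts.add(head)
                changed = True
    return facts

def abduce(rules, abducibles, observation):
    """Return the minimal-size sets of abducibles explaining the observation."""
    subsets = chain.from_iterable(
        combinations(abducibles, r) for r in range(len(abducibles) + 1))
    found = [set(s) for s in subsets if observation in closure(rules, s)]
    best = min((len(s) for s in found), default=None)
    return [s for s in found if len(s) == best]

# Hypothetical theory:  wet :- rain.   wet :- sprinkler.   Observation: wet.
rules = [("wet", ["rain"]), ("wet", ["sprinkler"])]
assert abduce(rules, ["rain", "sprinkler"], "wet") == [{"rain"}, {"sprinkler"}]
```

The exhaustive subset enumeration here is exponential; the point of the neural approaches in the paper is precisely to compute such alternatives in parallel.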
Sequence Classification Restricted Boltzmann Machines With Gated Units
For the classification of sequential data, dynamic Bayesian networks and recurrent neural networks (RNNs) are the preferred models. While the former can explicitly model the temporal dependencies between the variables, the latter have the capability of learning representations. The recurrent temporal restricted Boltzmann machine (RTRBM) is a model that combines these two features. However, learning and inference in RTRBMs can be difficult because of the exponential nature of its gradient computations when maximizing log-likelihoods. In this article, first, we address this intractability by optimizing a conditional rather than a joint probability distribution when performing sequence classification. This results in the ``sequence classification restricted Boltzmann machine'' (SCRBM). Second, we introduce gated SCRBMs (gSCRBMs), which use an information processing gate, as an integration of SCRBMs with long short-term memory (LSTM) models. In the experiments reported in this article, we evaluate the proposed models on optical character recognition, chunking, and multiresident activity recognition in smart homes. The experimental results show that gSCRBMs achieve performance comparable to that of the state of the art in all three tasks. gSCRBMs require far fewer parameters in comparison with other recurrent networks with memory gates, in particular, LSTMs and gated recurrent units (GRUs).
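The abstract does not specify the gating mechanism in detail; a hypothetical scalar sketch of the general idea, assuming a GRU/LSTM-style sigmoid gate that interpolates between the previous state and a new candidate:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_update(h_prev, candidate, gate_logit):
    """A gate g in (0, 1) decides how much new information overwrites the
    state:  h_t = g * candidate + (1 - g) * h_prev."""
    g = sigmoid(gate_logit)
    return g * candidate + (1.0 - g) * h_prev

# Open gate (large positive logit): the state follows the candidate.
assert abs(gated_update(0.0, 1.0, 10.0) - 1.0) < 1e-3
# Closed gate (large negative logit): the previous state is retained.
assert abs(gated_update(0.0, 1.0, -10.0)) < 1e-3
```

In a gSCRBM the gated quantity and the gate's inputs are tied to the RBM's units rather than to these scalars; the sketch only shows why a single gate needs far fewer parameters than the multiple gates of an LSTM.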
Effect of various rebar types and crushed glass coating onto BFRP rebar on the bond strength to concrete
This study demonstrates the bond-slip behaviour of steel, BFRP, CFRP and GFRP rebars to concrete obtained from a series of pull-out tests. Results show that CFRP can achieve higher bond strength than steel, whereas BFRP attains half of the bond strength of steel. The bond strength decreased by 23–28% in steel and BFRP due to an increase in bar diameter. However, the CFRP and GFRP showed a 73–78% reduction in bond strength due to the increase in bar diameter. Steel had the steepest slope in the bond-slip curve, followed by CFRP, BFRP and GFRP. Since BFRP attained optimum performance, a surface coating was applied onto BFRP using natural sand and recycled crushed glass to evaluate the effect of FRP surface roughness on bond performance. Sand and glass coated BFRP demonstrated 37% and 75% higher bond strength compared to uncoated BFRP, while bond stiffness was increased by 14% and 11%, respectively. Compared to the sand coated BFRP, crushed glass coated BFRP exhibited approximately 20% and 10% more bond strength for the 6 mm and 10 mm diameter rebars, respectively. However, the glass coated BFRP exhibited more brittle behaviour compared to its sand coated counterparts. Based on the analytical results, surface roughness and embedment length are found to have a significant influence on the ultimate bond strength (p-value < 0.05) at a 5% level of significance. Additionally, the interaction effects of diameter × embedment length and diameter × coating were found to have a significant effect on bond strength.
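The reported percentage differences follow the standard relative-change formula; a quick sketch (the numbers below are illustrative, not the study's raw measurements):

```python
def percent_change(baseline, value):
    # Relative change of `value` with respect to `baseline`, in percent.
    return 100.0 * (value - baseline) / baseline

# Illustrative: a coated bar with 1.75x the uncoated bond strength
# corresponds to a "75% higher" figure, as reported for glass coating.
assert percent_change(4.0, 7.0) == 75.0
# A drop from 4.0 to 3.0 MPa would be a 25% reduction.
assert percent_change(4.0, 3.0) == -25.0
```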
Tissue Cytokine Responses in Canine Visceral Leishmaniasis
To elucidate the local tissue cytokine response of dogs infected with Leishmania chagasi, cytokine mRNA levels were measured in bone marrow aspirates from 27 naturally infected dogs from Brazil and were compared with those from 5 uninfected control animals. Interferon-γ mRNA accumulation was enhanced in infected dogs and was positively correlated with humoral (IgG1) but not with lymphoproliferative responses to Leishmania antigen in infected dogs. Increased accumulation of mRNA for interleukin (IL)-4, IL-10, and IL-18 was not observed in infected dogs, and mRNA for these cytokines did not correlate with antibody or proliferative responses. However, infected dogs with detectable IL-4 mRNA had significantly more severe symptoms. IL-13 mRNA was not detectable in either control or infected dogs. These data suggest that clinical symptoms are not due to a deficiency in interferon-γ production. However, in contrast to its role in human visceral leishmaniasis, IL-10 may not play a key immunosuppressive role in dogs.