4,560 research outputs found
Statistical relational learning with soft quantifiers
Quantification in statistical relational learning (SRL) is either existential or universal; however, humans may be more inclined to express knowledge using soft quantifiers, such as "most" and "a few". In this paper, we define the syntax and semantics of PSL^Q, a new SRL framework that supports reasoning with soft quantifiers, and present its most probable explanation (MPE) inference algorithm. To the best of our knowledge, PSL^Q is the first SRL framework that combines soft quantifiers with first-order logic rules for modelling uncertain relational data. Our experimental results for link prediction in social trust networks demonstrate that the use of soft quantifiers not only allows for a natural and intuitive formulation of domain knowledge, but also improves the accuracy of inferred results.
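The fuzzy semantics of a soft quantifier like "most" can be sketched in a few lines: a membership function maps the proportion of instances satisfying a formula to a truth degree in [0, 1]. The piecewise-linear function and thresholds below are illustrative assumptions, not PSL^Q's actual definitions.

```python
def most(proportion, a=0.3, b=0.8):
    """Piecewise-linear membership for the soft quantifier 'most':
    0 below proportion a, 1 above proportion b, linear in between."""
    if proportion <= a:
        return 0.0
    if proportion >= b:
        return 1.0
    return (proportion - a) / (b - a)

# Truth degrees of trusts(x, y_i) for the friends y_i of some user x;
# "most friends of x trust z" is evaluated on their average truth degree.
trusts = [1.0, 0.9, 0.8, 0.2, 0.7]
truth = most(sum(trusts) / len(trusts))
```

With these (hypothetical) thresholds, a proportion of 0.72 yields a truth degree of 0.84, so the quantified statement is mostly, but not fully, satisfied.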
Comparing Defeasible Argumentation and Non-Monotonic Fuzzy Reasoning Methods for a Computational Trust Problem with Wikipedia
Computational trust is an ever more pressing issue with the surge in autonomous agent development. Represented as a defeasible phenomenon, problems associated with computational trust may be solved by appropriate reasoning methods. This paper compares two such methods, Defeasible Argumentation and Non-Monotonic Fuzzy Logic, to assess which is more effective at solving a computational trust problem centred around Wikipedia editors. Through the application of these methods with real data and a set of knowledge bases, it was found that the Fuzzy Logic approach was statistically significantly better than the Argumentation approach in its inferential capacity.
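A toy sketch of the fuzzy-logic side of such a comparison: a rule grades an editor's trust from observable features via fuzzy memberships combined with the min t-norm. The features, membership shapes, and thresholds here are hypothetical illustrations, not the knowledge bases used in the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical rule: IF edit_count is high AND revert_rate is low
# THEN trust is high, with AND interpreted as the min t-norm.
edit_count, revert_rate = 120, 0.05
high_edits = tri(edit_count, 50, 150, 250)
low_reverts = tri(revert_rate, -0.2, 0.0, 0.2)
trust_high = min(high_edits, low_reverts)
```

Non-monotonicity enters when a further rule (e.g. evidence of vandalism) can lower a previously inferred trust degree; the sketch above covers only a single rule firing.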
T-Norms Driven Loss Functions for Machine Learning
Neural-symbolic approaches have recently gained popularity as a way to inject prior knowledge into a learner without requiring it to induce this knowledge from data. These approaches can potentially learn competitive solutions with a significant reduction in the amount of supervised data. A large class of neural-symbolic approaches is based on First-Order Logic to represent prior knowledge, relaxed to a differentiable form using fuzzy logic. This paper shows that the loss function expressing these neural-symbolic learning tasks can be unambiguously determined given the selection of a t-norm generator. When restricted to supervised learning, the presented theoretical apparatus provides a clean justification for the popular cross-entropy loss, which has been shown to provide faster convergence and to reduce the vanishing gradient problem in very deep structures. Moreover, the proposed learning formulation extends the advantages of the cross-entropy loss to the general knowledge that can be represented by a neural-symbolic method. The methodology therefore allows the development of a novel class of loss functions, which are shown in the experimental results to lead to faster convergence rates than the approaches previously proposed in the literature.
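The core idea, that a t-norm generator determines the loss, can be illustrated with the product t-norm: its additive generator g(x) = -log(x) turns the truth degree of a supervised constraint into the familiar cross-entropy term, and conjunction into a sum of such terms. This is a minimal sketch of the mechanism, not the paper's full formulation.

```python
import math

def product_generator(x, eps=1e-12):
    """Additive generator of the product t-norm: g(x) = -log(x).
    eps guards against log(0) for numerical safety."""
    return -math.log(max(x, eps))

# Supervised constraint "the output should be true" with predicted
# probability p: the generator yields -log(p), the cross-entropy of
# a positive label.
p = 0.9
loss_single = product_generator(p)

# A conjunction p1 AND p2 under the product t-norm has truth p1 * p2;
# applying the generator gives g(p1 * p2) = g(p1) + g(p2), i.e. the
# per-constraint cross-entropy terms simply add up.
p1, p2 = 0.9, 0.8
loss_conj = product_generator(p1 * p2)
```

Choosing a different generator (e.g. that of the Lukasiewicz family) would induce a different loss from the same logical knowledge, which is what makes the generator the single degree of freedom in this construction.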
Soft quantification in statistical relational learning
We present a new statistical relational learning (SRL) framework that supports reasoning with soft quantifiers, such as "most" and "a few". We define the syntax and the semantics of this language, which we call PSL^Q, and present a most probable explanation inference algorithm for it. To the best of our knowledge, PSL^Q is the first SRL framework that combines soft quantifiers with first-order logic rules for modelling uncertain relational data. Our experimental results for two real-world applications, link prediction in social trust networks and user profiling in social networks, demonstrate that the use of soft quantifiers not only allows for a natural and intuitive formulation of domain knowledge, but also improves inference accuracy.
An Empirical Evaluation of the Inferential Capacity of Defeasible Argumentation, Non-monotonic Fuzzy Reasoning and Expert Systems
Several non-monotonic formalisms exist in the field of Artificial Intelligence for reasoning under uncertainty. Many of these are deductive and knowledge-driven, and employ procedural and semi-declarative techniques for inferential purposes. Nonetheless, limited work exists on comparing distinct techniques, and in particular on examining their inferential capacity. Thus, this paper focuses on a comparison of three knowledge-driven approaches employed for non-monotonic reasoning, namely expert systems, fuzzy reasoning and defeasible argumentation. A knowledge-representation and reasoning problem has been selected: modelling and assessing mental workload. This is an ill-defined construct, and its formalisation can be seen as a reasoning activity under uncertainty. An experimental study was performed by exploiting three deductive knowledge bases produced with the aid of experts in the field. These were coded into models using the selected techniques and were subsequently elicited with data gathered from humans. The inferences produced by these models were in turn analysed according to common metrics of evaluation in the field of mental workload, specifically validity and sensitivity. Findings suggest that the variance of the inferences of the expert-system and fuzzy-reasoning models was higher, highlighting poor stability. In contrast, the variance of the argument-based models was lower, showing the superior stability of their inferences across knowledge bases and under different system configurations. The originality of this research lies in the quantification of the impact of defeasible argumentation. It contributes to the field of logic and non-monotonic reasoning by situating defeasible argumentation among similar approaches of non-monotonic reasoning under uncertainty through a novel empirical comparison.
- …