Preliminary Experiments using Subjective Logic for the Polyrepresentation of Information Needs
According to the principle of polyrepresentation, retrieval accuracy may
improve through the combination of multiple and diverse representations of
information objects, e.g. the context of the user, the information sought,
or the retrieval system. Recently, the principle of polyrepresentation was
mathematically expressed using subjective logic, where the potential
suitability of each representation for improving retrieval performance was
formalised through degrees of belief and uncertainty. No experimental evidence
or practical application has so far validated this model. We extend the work of
Lioma et al. (2010) by providing a practical application and analysis of the
model. We show how to map the abstract notions of belief and uncertainty to
real-life evidence drawn from a retrieval dataset. We also show how to estimate
two different types of polyrepresentation assuming either (a) independence or
(b) dependence between the information objects that are combined. We focus on
the polyrepresentation of different types of context relating to user
information needs (i.e. work task, user background knowledge, ideal answer) and
show that the subjective logic model can predict their optimal combination
prior to, and independently of, the retrieval process.
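As a rough illustration of the belief/uncertainty formalism the abstract refers to, the sketch below combines two binomial subjective-logic opinions with Josang's cumulative fusion operator. The particular opinion values and the choice of the cumulative operator are illustrative assumptions, not taken from the paper itself (Lioma et al. also consider a dependent-sources variant):

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    """A binomial subjective-logic opinion: belief, disbelief, uncertainty."""
    b: float  # belief that the representation indicates relevance
    d: float  # disbelief
    u: float  # uncertainty

    def __post_init__(self):
        # The three components of a binomial opinion must sum to 1.
        assert abs(self.b + self.d + self.u - 1.0) < 1e-9

def fuse(a: Opinion, c: Opinion) -> Opinion:
    """Cumulative fusion of two opinions from independent sources."""
    k = a.u + c.u - a.u * c.u
    return Opinion(
        b=(a.b * c.u + c.b * a.u) / k,
        d=(a.d * c.u + c.d * a.u) / k,
        u=(a.u * c.u) / k,
    )

# Two hypothetical representations of the same information need,
# e.g. work task and user background knowledge.
work_task = Opinion(b=0.6, d=0.1, u=0.3)
background = Opinion(b=0.4, d=0.2, u=0.4)
combined = fuse(work_task, background)
```

Note that the fused opinion always has lower uncertainty than either input, which is what makes combining diverse representations attractive under this model.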
A Reasoning System for a First-Order Logic of Limited Belief
Logics of limited belief aim at enabling computationally feasible reasoning
in highly expressive representation languages. These languages are often
dialects of first-order logic with a weaker form of logical entailment that
keeps reasoning decidable or even tractable. While a number of such logics have
been proposed in the past, they have largely remained objects of theoretical
analysis, and their practical relevance is very limited. In this paper, we aim
to go beyond the theory. Building on earlier work by Liu, Lakemeyer, and
Levesque, we develop a logic of limited belief that is highly expressive while
remaining decidable in the first-order case and tractable in the propositional
case, and that exhibits characteristics which make it attractive for
implementation. We
introduce a reasoning system that employs this logic as representation language
and present experimental results that showcase the benefit of limited belief.
Comment: 22 pages, 0 figures, Twenty-sixth International Joint Conference on
Artificial Intelligence (IJCAI-17)
Probabilistic Algorithmic Knowledge
The framework of algorithmic knowledge assumes that agents use deterministic
knowledge algorithms to compute the facts they explicitly know. We extend the
framework to allow for randomized knowledge algorithms. We then characterize
the information provided by a randomized knowledge algorithm when its answers
have some probability of being incorrect. We formalize this information in
terms of evidence; a randomized knowledge algorithm returning ``Yes'' to a
query about a fact \phi provides evidence for \phi being true. Finally, we
discuss the extent to which this evidence can be used as a basis for decisions.
Comment: 26 pages. A preliminary version appeared in Proc. 9th Conference on
Theoretical Aspects of Rationality and Knowledge (TARK'03)
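The idea of a noisy "Yes" providing evidence can be read, in a minimal Bayesian sketch, as a likelihood ratio. The error rates and the prior below are assumed for illustration; the paper's formalisation of evidence is more general than this:

```python
def evidence_weight(p_yes_given_true: float, p_yes_given_false: float) -> float:
    """Likelihood ratio carried by a 'Yes' answer: how much more likely
    the algorithm says 'Yes' when phi is true than when it is false."""
    return p_yes_given_true / p_yes_given_false

def update_belief(prior: float, weight: float) -> float:
    """Bayesian update of Pr(phi) after observing a 'Yes' answer."""
    odds = (prior / (1.0 - prior)) * weight
    return odds / (1.0 + odds)

# A randomized knowledge algorithm that answers 'Yes' 90% of the time
# when phi is true, but also 20% of the time when phi is false.
w = evidence_weight(0.9, 0.2)      # likelihood ratio 4.5
posterior = update_belief(0.5, w)  # prior 0.5 -> posterior 4.5/5.5
```

A ratio above 1 means the answer is evidence for phi; the larger the ratio, the more a decision-maker is entitled to shift belief toward phi.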
Relational Representations in Reinforcement Learning: Review and Open Problems
This paper is about representation in RL. We discuss some of the concepts of
representation and generalization in reinforcement learning and argue for
higher-order representations, instead of the commonly used propositional
representations. The paper contains a small review of current reinforcement
learning systems using higher-order representations, followed by a brief
discussion. The paper ends with research directions and open problems.