Possibilistic networks parameter learning: Preliminary empirical comparison
Like Bayesian networks, possibilistic networks compactly encode joint uncertainty representations over a set of variables. Learning possibilistic networks from data in general, and from imperfect or scarce data in particular, has not received enough attention: only a few works deal with learning the structure and the parameters of a possibilistic network from a dataset. This paper provides a preliminary comparative empirical evaluation of two approaches for learning the parameters of a possibilistic network from empirical data. The first is a purely possibilistic approach, while the second first learns imprecise probability measures and then transforms them into possibility distributions by means of probability-possibility transformations. The comparative evaluation focuses on learning belief networks from datasets with missing data and from scarce datasets.
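The abstract does not say which probability-possibility transformation is used, but the idea can be illustrated with the classical Dubois-Prade "optimal" transformation: each value's possibility is the sum of all probability masses not exceeding its own, so the most probable value gets possibility 1. A minimal sketch:

```python
def prob_to_poss(p):
    """Dubois-Prade transformation: pi(x_i) = sum of all p_j with p_j <= p_i.

    The mode of the probability distribution receives possibility 1, and
    less probable values receive proportionally smaller degrees.
    """
    return [sum(q for q in p if q <= p_i) for p_i in p]

# The most probable value (0.5) maps to possibility 1.0.
print(prob_to_poss([0.5, 0.3, 0.2]))
```

This transformation is the tightest possibility distribution dominating the probability measure, which is why it is a natural bridge between the imprecise-probability route and the directly possibilistic one compared in the paper.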
The PITA System: Tabling and Answer Subsumption for Reasoning under Uncertainty
Many real-world domains require the representation of a measure of uncertainty. The most common such representation is probability, and the combination of probability with logic programs has given rise to the field of Probabilistic Logic Programming (PLP), leading to languages such as the Independent Choice Logic, Logic Programs with Annotated Disjunctions (LPADs), ProbLog, PRISM and others. These languages share a similar distribution semantics, and methods have been devised to translate programs between them. The complexity of computing the probability of queries to these general PLP programs is very high due to the need to combine the probabilities of explanations that may not be exclusive. As one alternative, the PRISM system reduces the complexity of query answering by restricting the form of programs it can evaluate. As an entirely different alternative, Possibilistic Logic Programs adopt a simpler metric of uncertainty than probability. Each of these approaches -- general PLP, restricted PLP, and Possibilistic Logic Programming -- can be useful in different domains depending on the form of uncertainty to be represented, on the form of programs needed to model problems, and on the scale of the problems to be solved. In this paper, we show how the PITA system, which originally supported the general PLP language of LPADs, can also efficiently support restricted PLP and Possibilistic Logic Programs. PITA relies on tabling with answer subsumption and consists of a program transformation along with an API for library functions that interface with answer subsumption.
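The contrast the abstract draws can be made concrete with a toy sketch (this is an illustration of the possibilistic semantics, not PITA's actual Prolog implementation): in possibilistic logic programming, the degree of a query is typically the maximum over alternative proofs of the minimum rule certainty within each proof, whereas probabilistic PLP must combine the probabilities of non-exclusive explanations, which is what drives up its complexity.

```python
def query_degree(explanations):
    """Possibility degree of a query: max over proofs, min within a proof.

    Each explanation is the list of certainty degrees of the rules it uses.
    Answer subsumption lets a tabling engine keep only the best (max)
    degree per answer instead of storing every derivation separately.
    """
    return max((min(e) for e in explanations), default=0.0)

# Two proofs: one with degrees {0.9, 0.7}, one with {0.8, 0.8}.
print(query_degree([[0.9, 0.7], [0.8, 0.8]]))
```

Because max and min form a simple lattice, this aggregation is exactly the kind of operation answer subsumption was designed to maintain incrementally in a table.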
Contextual and Possibilistic Reasoning for Coalition Formation
In multiagent systems, agents often have to rely on other agents to reach their goals, for example when they lack a needed resource or do not have the capability to perform a required action. Agents therefore need to cooperate. Some of the questions raised are then: Which agent(s) should one cooperate with? What are the potential coalitions in which agents can achieve their goals? As the number of possibilities is potentially quite large, how can the process be automated? And how can the most appropriate coalition be selected, taking into account the uncertainty in the agents' abilities to carry out certain tasks? In this article, we address the question of how to find and evaluate coalitions among agents in multiagent systems using Multi-Context Systems (MCS) tools, while taking into consideration the uncertainty around the agents' actions. Our methodology is the following: we first compute the solution space for the formation of coalitions using a contextual reasoning approach. Second, we model agents as contexts in an MCS, and dependence relations among agents seeking to achieve their goals as bridge rules. Third, we systematically compute all potential coalitions using algorithms for MCS equilibria and, given a set of functional and non-functional requirements, we propose ways to select the best solutions. Finally, in order to handle the uncertainty in the agents' actions, we extend our approach with features of possibilistic reasoning. We illustrate our approach with an example from robotics.
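The dependence-relation idea behind the coalition search can be sketched in a few lines (the names and the brute-force enumeration below are hypothetical illustrations, not the paper's MCS equilibrium algorithms, which encode dependencies as bridge rules between contexts):

```python
from itertools import combinations

# Toy setup: what each agent can supply, and what it needs from others.
provides = {"a1": {"lift"}, "a2": {"move"}, "a3": {"lift", "move"}}
depends = {"a1": {"move"}, "a2": {"lift"}, "a3": set()}

def coalitions_for(goal_agent):
    """All agent subsets containing goal_agent whose other members
    jointly supply every capability goal_agent depends on."""
    agents = sorted(provides)
    result = []
    for size in range(1, len(agents) + 1):
        for group in combinations(agents, size):
            if goal_agent not in group:
                continue
            supplied = set().union(*(provides[a] for a in group
                                     if a != goal_agent))
            if depends[goal_agent] <= supplied:
                result.append(group)
    return result

# a1 needs "move", which a2 or a3 can supply.
print(coalitions_for("a1"))
```

Even this toy version shows why the solution space grows quickly with the number of agents, motivating the systematic computation via MCS equilibria and the subsequent filtering by functional and non-functional requirements.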
Naive possibilistic classifiers for imprecise or uncertain numerical data
In real-world problems, input data may be pervaded with uncertainty. In this paper, we investigate the behavior of naive possibilistic classifiers, as a counterpart to naive Bayesian ones, for dealing with classification tasks in the presence of uncertainty. For this purpose, we extend possibilistic classifiers, which have recently been adapted to numerical data, in order to cope with uncertainty in data representation. Here the possibility distributions that are used are supposed to encode the family of Gaussian probability distributions that are compatible with the considered dataset. We consider two types of uncertainty: (i) the uncertainty associated with the class in the training set, which is modeled by a possibility distribution over class labels, and (ii) the imprecision pervading attribute values in the testing set, represented in the form of intervals for continuous data. Moreover, the approach takes into account the uncertainty about the estimation of the Gaussian distribution parameters due to the limited amount of data available. We first adapt the possibilistic classification model, previously proposed for the certain case, in order to accommodate uncertainty about class labels. Then, we propose an algorithm based on the extension principle to deal with imprecise attribute values. The reported experiments show the interest of possibilistic classifiers for handling uncertainty in data. In particular, the probability-to-possibility transform-based classifier shows robust behavior when dealing with imperfect data.
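A minimal sketch of the certain-case starting point helps fix ideas (this is an illustrative simplification, not the paper's exact model or its handling of interval-valued inputs): one simple way to get a possibility degree from a Gaussian attribute model is to rescale the density so its mode receives possibility 1, then combine attributes with min, the possibilistic counterpart of the naive Bayes product.

```python
import math

def gauss_poss(x, mu, sigma):
    """Gaussian density rescaled so the mode gets possibility 1."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def classify(x, params):
    """Naive min-based possibilistic classifier (illustrative).

    params: {class_label: [(mu, sigma) for each attribute]}
    x: vector of (certain) attribute values.
    """
    scores = {c: min(gauss_poss(xi, mu, s)
                     for xi, (mu, s) in zip(x, per_attr))
              for c, per_attr in params.items()}
    return max(scores, key=scores.get)

# Two classes with hypothetical per-attribute Gaussian parameters.
params = {"pos": [(1.0, 0.5), (2.0, 0.5)],
          "neg": [(3.0, 0.5), (0.0, 0.5)]}
print(classify([1.1, 1.8], params))
```

The paper's extensions replace the certain inputs above with possibility distributions over class labels in training and intervals over attribute values in testing, propagated via the extension principle.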