Reliable Uncertain Evidence Modeling in Bayesian Networks by Credal Networks
A reliable modeling of uncertain evidence in Bayesian networks based on a
set-valued quantification is proposed. Both soft and virtual evidence are
considered. We show that evidence propagation in this setup can be reduced to
standard updating in an augmented credal network, equivalent to a set of
consistent Bayesian networks. A characterization of the computational
complexity for this task is derived together with an efficient exact procedure
for a subclass of instances. In the case of multiple uncertain pieces of evidence over
the same variable, the proposed procedure can provide a set-valued version of
the geometric approach to opinion pooling.
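The point-valued geometric approach to opinion pooling that the set-valued procedure generalizes can be sketched in a few lines: the pooled distribution is the normalized weighted geometric mean of the individual distributions. This is a minimal illustration under stated assumptions; the function name and the uniform default weights are illustrative, not the paper's formalism.

```python
import math

def geometric_pool(distributions, weights=None):
    """Normalized weighted geometric mean of discrete distributions,
    all given over the same outcomes (point-valued geometric pooling)."""
    if weights is None:
        # Illustrative assumption: equal weights for all experts.
        weights = [1.0 / len(distributions)] * len(distributions)
    pooled = [math.prod(p[i] ** w for p, w in zip(distributions, weights))
              for i in range(len(distributions[0]))]
    z = sum(pooled)          # renormalize, since the geometric mean is sub-additive
    return [x / z for x in pooled]

# Two experts' soft evidence on the same binary variable:
print(geometric_pool([[0.9, 0.1], [0.6, 0.4]]))  # roughly [0.786, 0.214]
```

The set-valued version in the paper would replace each point-valued input distribution by a credal set, yielding lower and upper bounds on the pooled probabilities.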
Generalized belief change with imprecise probabilities and graphical models
We provide a theoretical investigation of probabilistic belief revision in complex frameworks, under extended conditions of uncertainty, inconsistency and imprecision. We motivate our kinematical approach by specializing our discussion to probabilistic reasoning with graphical models, whose modular representation allows for efficient inference. Most results in this direction derive from the relevant work of Chan and Darwiche (2005), which first proved the inter-reducibility of virtual and probabilistic evidence. These forms of information, deeply distinct in their meaning, are extended to the conditional and imprecise frameworks, allowing further generalizations, e.g. to experts' qualitative assessments. Belief aggregation and iterated revision of a rational agent's belief are also explored.
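The inter-reducibility of virtual and probabilistic evidence can be illustrated on a toy joint distribution: applying Jeffrey's rule with a target marginal q on X, or Pearl's virtual-evidence update with likelihood ratios proportional to q(x)/P(x), yields the same posterior over the other variable. The code below is an assumption-laden sketch of this classical observation, not the paper's imprecise-probability formalism.

```python
# Toy joint distribution P(X, Y), stored as joint[x][y].
joint = [[0.28, 0.42],   # X = 0
         [0.18, 0.12]]   # X = 1

def marginal_x(j):
    return [sum(row) for row in j]

def jeffrey_y(j, q):
    """Jeffrey's rule with target marginal q on X: P'(y) = sum_x q(x) P(y|x)."""
    px = marginal_x(j)
    return [sum(q[x] * j[x][y] / px[x] for x in range(len(j)))
            for y in range(len(j[0]))]

def virtual_y(j, lam):
    """Pearl's virtual evidence: scale each x-row by likelihood lam(x), renormalize."""
    scaled = [[lam[x] * p for p in j[x]] for x in range(len(j))]
    z = sum(map(sum, scaled))
    return [sum(scaled[x][y] for x in range(len(j))) / z
            for y in range(len(j[0]))]

q = [0.4, 0.6]                                         # soft evidence: target marginal of X
lam = [q[x] / marginal_x(joint)[x] for x in range(2)]  # likelihood ratios q(x)/P(x)
print(jeffrey_y(joint, q))   # both updates agree, ≈ [0.52, 0.48]
print(virtual_y(joint, lam))
```

The imprecise extension discussed in the abstract would replace the single target marginal q by a set of candidate marginals, producing interval-valued posteriors.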
Belief change operations under confidentiality requirements in multiagent systems
Multiagent systems are populated with autonomous computing entities, called agents, which proactively pursue their goals.
The design of such systems is an active field within artificial intelligence research, with one objective being flexible and adaptive
agents in dynamic and inaccessible environments.
An agent's decision-making and, ultimately, its success in achieving its goals depend crucially on the agent's information about its environment
and on the sharing of information with other agents in the multiagent system. For this and other reasons, an agent's information is a valuable asset,
and the agent is thus often interested in the confidentiality of parts of this information. From research in computer security it is well known that
confidentiality is achieved not only by the agent's control of access to its data, but also by its control of the flow of information when processing the data
during the interaction with other agents.
This thesis investigates how to specify and enforce the confidentiality interests of an agent D while it reacts to iterated query, revision
and update requests from another agent A for the purpose of information sharing.
First, we will enable the agent D to specify, in a dedicated confidentiality policy, that parts of its previous or current belief about its environment
should be hidden from the requesting agent A.
To formalize the requirement of hiding belief, we will in particular postulate agent A's capabilities for reasoning about D's belief and about
D's processing of information to form its belief. Then, we will relate the requirements imposed by a confidentiality policy to related requirements studied in
information flow control and inference control in computer security.
Second, we will enable the agent D to enforce its confidentiality aims, as expressed by its policy, by refusing requests from A whenever answering would potentially violate
the policy. A crucial part of the enforcement is D's simulation of A's postulated reasoning about D's belief and about the change of this belief.
In this thesis, we consider two particular operators of belief change: an update operator for a simple logic-oriented database model
and a revision operator for D's assertions about its environment that yield the agent's belief after its nonmonotonic reasoning.
To prove the effectiveness of D's means of enforcement, we study necessary properties of D's simulation of A and then,
based on these properties, show that D's enforcement is effective according to the formal requirements of its policy.
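The refusal-based enforcement described above can be caricatured in a few lines: D simulates A's postulated reasoning (here, naive forward chaining over implication rules, an illustrative assumption) and refuses any query whose truthful answer would let the simulated A derive a protected secret. All names and the closure model below are hypothetical stand-ins, not the thesis's actual logic-oriented formalism.

```python
def closure(facts, rules):
    """A's postulated reasoning, simulated by D: forward-chain simple
    implication rules (premise_set, conclusion) to a fixed point."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

def answer(query, belief, a_knows, rules, secrets):
    """D answers a yes/no query, refusing when a truthful positive answer
    would let the simulated A infer a secret."""
    truthful = query in belief
    if truthful:
        # Simulate A's state after learning the answer, before committing.
        if secrets & closure(a_knows | {query}, rules):
            return "refuse"
        a_knows.add(query)
    return "yes" if truthful else "no"

belief = {"p", "q"}
rules = [({"p"}, "s")]        # A could infer the secret s from p
secrets = {"s"}
a_knows = set()
print(answer("q", belief, a_knows, rules, secrets))  # "yes": harmless
print(answer("p", belief, a_knows, rules, secrets))  # "refuse": would reveal s
```

A real enforcement mechanism must also control what refusals and negative answers themselves reveal to A; this sketch ignores such meta-inferences, which are exactly the kind of information flow the thesis analyzes.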
Three scenarios for the revision of epistemic states
This position paper was triggered by discussions with Jérôme Lang and Jim Delgrande at a Belief Revision seminar in Dagstuhl, in August 2005. It discusses the difficulty of interpreting iterated belief revision in the scope of the existing literature. Axioms of iterated belief revision are often presented as extensions of the AGM axioms, upon receiving a sequence of inputs. More recent inputs are assumed to have priority over less recent ones. We argue that this view of iterated revision is at odds with the claim, made by Gärdenfors and Makinson, that belief revision and non-monotonic reasoning are two sides of the same coin. We lay bare three different paradigms of revision based on specific interpretations of the epistemic entrenchment defining an epistemic state and of the input information. If the epistemic entrenchment stems from default rules, then AGM revision is a matter of changing plausible conclusions when receiving specific information on the problem at hand. In such a paradigm, iterated belief revision makes no sense. If the epistemic entrenchment encodes prior uncertain evidence and the input information is at the same level as the prior information and possibly uncertain, then iterated revision reduces to prioritized merging. A third problem is that of the revision of an epistemic entrenchment by means of another one. In this case, iteration makes sense, and it corresponds to the revision of a conditional knowledge base describing background information by the addition of new default rules.
Three scenarios for the revision of epistemic states
This position article was triggered by discussions with Jérôme Lang and Jim Delgrande at a Belief Revision seminar in Dagstuhl, in August 2005, and presented at the 2006 Non-Monotonic Reasoning Workshop, Windermere, UK. This position paper discusses the difficulty of interpreting the iterated belief revision problem. Axioms of iterated belief revision are often presented as extensions of the AGM axioms, upon receiving a sequence of inputs, likely to alter not only the belief set, but also the epistemic entrenchment relation underlying the revision operator. Iterated belief revision presupposes that more recent inputs have priority over less recent ones. We argue that this view of iterated revision is at odds with the suggestion of Gärdenfors and Makinson that belief revision and non-monotonic reasoning are two sides of the same coin. It is not clear that non-monotonic reasoning modifies the ranking of possible worlds implicit in default rules. We lay bare three different paradigms of revision based on specific interpretations of the epistemic entrenchment implicitly at work and of the input information. If the epistemic entrenchment stems from default rules and the input is a specific piece of evidence, then AGM revision is a matter of changing plausible conclusions, and iterated revision makes no sense. However, if the epistemic entrenchment encodes uncertain factual evidence and the input information as well, then iterated revision reduces to prioritized merging. A third problem where iteration makes sense corresponds to the revision, by the addition of new default rules, of a conditional knowledge base describing background information. The three scenarios are compared with similar problems in the framework of probabilistic reasoning.
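The recency-prioritized view of iterated revision that these papers question can be made concrete with a Spohn-style ranking of possible worlds: each input shifts the worlds violating it to higher (less plausible) ranks, so the most recent input always ends up believed. The representation and the simplified conditioning rule below are illustrative assumptions for one of the three scenarios, not the papers' own proposal.

```python
from itertools import product

# Possible worlds: truth-value assignments to two atoms (a, b).
WORLDS = list(product([True, False], repeat=2))

def revise(kappa, prop, strength=1):
    """Simplified ranking revision: push worlds violating `prop` up by
    `strength`, and renormalize so the best prop-worlds sit at rank 0.
    Assumes `prop` holds in at least one world."""
    best = min(kappa[w] for w in WORLDS if prop(w))
    return {w: (kappa[w] - best if prop(w) else kappa[w] + strength)
            for w in WORLDS}

def believed(kappa, prop):
    """prop is believed iff it holds in every most-plausible (rank-0) world."""
    return all(prop(w) for w in WORLDS if kappa[w] == 0)

kappa = {w: 0 for w in WORLDS}             # initially ignorant: all worlds rank 0
kappa = revise(kappa, lambda w: w[0])      # input 1: a
kappa = revise(kappa, lambda w: not w[0])  # input 2: not-a
print(believed(kappa, lambda w: not w[0])) # True: the most recent input prevails
```

Under the first paradigm above (entrenchment from default rules, input as specific evidence), no such reordering of worlds is warranted, which is exactly why iterating the operator becomes meaningless there.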