9 research outputs found

    Belief revision, non-monotonic reasoning and secrecy for epistemic agents

    Software agents are increasingly used to handle the information of individuals and companies. They also exchange this information with other software agents and with humans. This raises the need for sophisticated methods that allow such agents to represent information, to change it, to reason with it, and to protect it. We consider these needs for communicating autonomous agents with incomplete information in a partially observable, dynamic environment. The protection of secret information requires an agent to consider the information held by other agents and the inferences they might draw; further, it has to keep track of this information and anticipate the effects of its own actions. In the setting we consider, the preservation of secrecy is not always possible; consequently, an agent has to be able to evaluate and minimize the degree to which secrecy is violated. Incomplete information calls for non-monotonic logics, which allow tentative conclusions to be drawn. A dynamic environment calls for operators that change the information of the agent when new information is received. We develop a general framework of agents that represent their information by logical knowledge representation formalisms, with the aim of integrating and combining methods for non-monotonic reasoning, for belief change, and for the protection of secret information. For the integration of belief change theory, we develop new change operators that make use of non-monotonic logic in the change process, as well as new change operators for non-monotonic formalisms, and we formally prove their adherence to quality standards taken and adapted from belief revision theory. Based on the resulting framework, we develop a formal framework for secrecy-aware agents that meet the requirements described above. We consider different settings for secrecy and analyze the requirements for preserving it. For the protection of secrecy, we elaborate on change operations and on the evaluation of actions with respect to secrecy, both declaratively and by providing constructive approaches, and we formally prove the adherence of the constructions to the declarative specifications. Further, we develop concrete agent instances of our framework, building on and extending the well-known BDI agent model, and build complete secrecy-aware agents that use our extended BDI model and answer set programming for knowledge representation and reasoning. For the implementation of our agents we developed Angerona, a Java multiagent development framework; it provides a general framework for developing epistemic agents and implements most of the approaches presented in this thesis.
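
    The abstract above centers on evaluating actions by the degree of secrecy violation they cause. The following Java sketch illustrates that idea under assumed names: a weighted Secret, an EpistemicState that simulates an attacker's inferences, and an agent that picks the least violating action. It is a minimal illustration, not the Angerona API.

        import java.util.*;

        // A secret: a proposition that a given attacker should not come to believe,
        // with a weight expressing how severe its disclosure would be.
        record Secret(String proposition, String attacker, double weight) {}

        interface EpistemicState {
            // Tentative (non-monotonic) conclusions the attacker could draw.
            Set<String> inferences(String attacker);
            // Simulated epistemic state after the agent performs the given action.
            EpistemicState after(String action);
        }

        class SecrecyAwareAgent {
            private final List<Secret> secrets;

            SecrecyAwareAgent(List<Secret> secrets) { this.secrets = secrets; }

            // Sum of the weights of all secrets the attacker would infer
            // once the action has been performed.
            double violationDegree(EpistemicState state, String action) {
                EpistemicState next = state.after(action);
                double degree = 0.0;
                for (Secret s : secrets) {
                    if (next.inferences(s.attacker()).contains(s.proposition())) {
                        degree += s.weight();
                    }
                }
                return degree;
            }

            // Since perfect secrecy is not always attainable, pick the action
            // that minimizes the degree of violation.
            Optional<String> selectAction(EpistemicState state, List<String> candidates) {
                return candidates.stream()
                        .min(Comparator.comparingDouble(a -> violationDegree(state, a)));
            }
        }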

    Know-How for Motivated BDI Agents (Extended Abstract)

    The BDI model is well accepted as an architecture for representing and realizing rational agents. The beliefs in this model focus on representing beliefs about the world and about other agents, and they are largely independent of the agent's intentions. We argue that the representation of know-how, which captures beliefs about actions and procedures, also has to be taken into account when modeling rational agents. Using the notion of know-how introduced by Singh, we formalize and implement a concrete, usable agent architecture that supports and benefits from this representation of procedural beliefs in multiple ways; it also supports the representation of motivations that influence the agent's behavior. By extending a BDI-based agent architecture to represent procedural beliefs explicitly as part of the agent's logical beliefs, we enable the agent to reason about its planning capabilities in the same way as it reasons about any other of its beliefs, which in turn influences and enhances the agent's behavior.
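
    The following Java sketch shows, under hypothetical names, what it could mean to store procedural beliefs (know-how) alongside declarative beliefs so that the agent can query its own planning capabilities like any other belief. It simplifies Singh's notion of know-how and is not the paper's formalization.

        import java.util.*;

        // A know-how statement: the agent knows how to achieve a goal
        // by achieving the listed sub-targets.
        record KnowHow(String goal, List<String> subTargets) {}

        class MotivatedBdiAgent {
            private final Set<String> beliefs = new HashSet<>();      // declarative beliefs
            private final List<KnowHow> knowHow = new ArrayList<>();  // procedural beliefs

            void tell(String fact) { beliefs.add(fact); }
            void learn(KnowHow kh) { knowHow.add(kh); }

            // Reasoning about planning capability as ordinary belief reasoning:
            // a goal is achievable if it already holds, or if some know-how
            // statement reduces it to achievable sub-targets.
            // (Assumes the know-how base is acyclic.)
            boolean achievable(String goal) {
                if (beliefs.contains(goal)) return true;
                return knowHow.stream().anyMatch(kh ->
                        kh.goal().equals(goal)
                        && kh.subTargets().stream().allMatch(this::achievable));
            }
        }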

    Argumentative Credibility-based Revision in Multi-Agent Systems

    We consider the problem of belief revision in a multi-agent system with information stemming from different agents with different degrees of credibility. In this context an agent has to choose carefully which information to accept for revision in order to avoid believing faulty or untrustworthy information. We propose a revision process that combines selective revision, deductive argumentation, and credibility information for the adequate handling of information in this complex scenario. New information is evaluated based on the credibility of its source in combination with all arguments favoring and opposing it. The evaluation process determines which part of the new information is to be accepted for revision and thereupon incorporated into the belief base by an appropriate revision operator. We demonstrate the benefits of our approach, investigate its formal properties, and show that it outperforms the baseline approach without argumentation.
    Sociedad Argentina de Informática e Investigación Operativa
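
    A minimal Java sketch of the selective step described above: the credibility-weighted balance of supporting and opposing arguments decides whether new input is accepted before any change to the belief base. The weighing scheme and all identifiers are illustrative assumptions, not the operator defined in the paper.

        import java.util.*;

        // An argument for or against the new piece of information,
        // attributed to the source that put it forward.
        record Argument(String source, boolean supports) {}

        class SelectiveRevision {
            private final Map<String, Double> credibility;  // source -> degree in [0, 1]
            private final Set<String> beliefBase = new HashSet<>();

            SelectiveRevision(Map<String, Double> credibility) { this.credibility = credibility; }

            // Accept the new claim only if the credibility-weighted arguments
            // in its favor (including the reporting source itself) outweigh
            // those against it; only then is it incorporated.
            void revise(String claim, String source, List<Argument> arguments) {
                double balance = credibility.getOrDefault(source, 0.0);
                for (Argument a : arguments) {
                    double w = credibility.getOrDefault(a.source(), 0.0);
                    balance += a.supports() ? w : -w;
                }
                if (balance > 0) {
                    // Crude stand-in for a proper revision operator:
                    // drop a directly contradicting belief, then add the claim.
                    beliefBase.removeIf(b -> b.equals("not " + claim));
                    beliefBase.add(claim);
                }  // otherwise the input is rejected (selective revision)
            }
        }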

    Providing Information by Resource-Constrained Data Analysis

    The Collaborative Research Center SFB 876 (Providing Information by Resource-Constrained Data Analysis) brings together the research fields of data analysis (data mining, knowledge discovery in databases, machine learning, statistics) and embedded systems, and enhances their methods such that information from distributed, dynamic masses of data becomes available anytime and anywhere. The research center approaches these problems with new algorithms that respect the resource constraints of the different scenarios. This technical report presents the work of the members of the integrated graduate school.

    Preserving Confidentiality in Multiagent Systems - An Internship Project within the DAAD RISE Program

    RISE (Research Internships in Science and Engineering) is a summer internship program for undergraduate students from the United States, Canada, and the UK, organized by the DAAD (Deutscher Akademischer Austausch Dienst). Within project A5 of the Collaborative Research Center SFB 876, we planned and conducted an internship project in the RISE program to support our research. Daniel Dilger was the intern; he was supervised by the PhD students Patrick Krümpelmann and Cornelia Tadros. The aim was to model an application scenario for our prototype implementation of a confidentiality-preserving multiagent system and to run experiments with that prototype.