
    Improving Assumption based Distributed Belief Revision

    Belief revision is a critical issue in real-world DAI applications. A Multi-Agent System not only has to cope with the intrinsic incompleteness and constant change of the available knowledge (as its stand-alone counterparts do), but also has to deal with possible conflicts between the agents’ perspectives. Each semi-autonomous agent, designed as a combination of a problem solver and an assumption-based truth maintenance system (ATMS), was enriched with improved capabilities: a distributed context management facility allowing the user to dynamically focus on the most pertinent contexts, and a distributed belief revision algorithm with two levels of consistency. This work’s contributions include: (i) a concise representation of the shared external facts; (ii) a simple and innovative methodology to achieve distributed context management; and (iii) a reduced inter-agent data exchange format. The different levels of consistency adopted were based on the relevance of the data under consideration: higher-relevance data (detected inconsistencies) was granted global consistency, while less relevant data (system facts) was assigned local consistency. These abilities are fully supported by the standard ATMS functionalities.
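    The two consistency levels described above can be illustrated with a minimal sketch (hypothetical names and data structures of our own, not the authors' code): detected inconsistencies are broadcast to every agent for global consistency, while ordinary system facts stay local to the asserting agent.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    # Hypothetical agent: local facts receive local consistency only;
    # shared nogoods (detected inconsistencies) are kept globally consistent.
    name: str
    local_facts: set = field(default_factory=set)
    shared_nogoods: set = field(default_factory=set)

class MultiAgentSystem:
    def __init__(self, agents):
        self.agents = agents

    def assert_fact(self, agent, fact):
        # Lower-relevance system facts stay local to the asserting agent.
        agent.local_facts.add(fact)

    def report_inconsistency(self, nogood):
        # Higher-relevance data (a detected inconsistency) is broadcast,
        # enforcing global consistency: every agent records the nogood
        # and retracts any local facts it invalidates.
        for a in self.agents:
            a.shared_nogoods.add(frozenset(nogood))
            a.local_facts -= set(nogood)
```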

    An Architectural Approach to Ensuring Consistency in Hierarchical Execution

    Hierarchical task decomposition is a method used in many agent systems to organize agent knowledge. This work shows how the combination of a hierarchy and persistent assertions of knowledge can lead to difficulty in maintaining logical consistency in asserted knowledge. We explore the problematic consequences of persistent assumptions in the reasoning process and introduce novel potential solutions. We implemented one of the possible solutions, Dynamic Hierarchical Justification, and demonstrate its effectiveness with an empirical analysis.
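    The core idea can be sketched as follows (an assumption on our part, not the paper's implementation): when a task in the hierarchy is retracted, every assertion it justifies, directly or through its subtasks, is retracted with it, so no asserted knowledge outlives its justification.

```python
class TaskNode:
    # Hypothetical node in a hierarchical task decomposition.
    def __init__(self, name, parent=None):
        self.name = name
        self.children = []
        self.assertions = set()  # knowledge justified by this task
        if parent is not None:
            parent.children.append(self)

    def retract(self):
        # Dynamic-Hierarchical-Justification-style retraction (sketch):
        # dropping a task drops its assertions and, recursively, those
        # of all subtasks, keeping asserted knowledge consistent with
        # the active hierarchy.
        self.assertions.clear()
        for child in self.children:
            child.retract()
```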

    Logic, self-awareness and self-improvement: The metacognitive loop and the problem of brittleness

    This essay describes a general approach to building perturbation-tolerant autonomous systems, based on the conviction that artificial agents should be able to notice when something is amiss, assess the anomaly, and guide a solution into place. We call this basic strategy of self-guided learning the metacognitive loop; it involves the system monitoring, reasoning about, and, when necessary, altering its own decision-making components. In this essay, we (a) argue that equipping agents with a metacognitive loop can help to overcome the brittleness problem, (b) detail the metacognitive loop and its relation to our ongoing work on time-sensitive commonsense reasoning, (c) describe specific, implemented systems whose perturbation tolerance was improved by adding a metacognitive loop, and (d) outline both short-term and long-term research agendas.
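    One pass of the notice-assess-guide cycle might look like the following minimal sketch (the function and parameter names are our assumptions, not the authors' API):

```python
def metacognitive_loop(expected, observe, respond, repair):
    # Notice: monitor the system's own behaviour for anomalies.
    observation = observe()
    if observation != expected:
        # Assess: characterize the gap between expectation and observation.
        anomaly = (expected, observation)
        # Guide: alter the decision-making components to cope.
        repair(anomaly)
        return False
    # No anomaly detected: proceed with normal decision-making.
    respond()
    return True
```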

    Reflective Argumentation

    Theories of argumentation usually focus on arguments as means of persuasion, finding consensus, or justifying knowledge claims. However, the construction and visualization of arguments can also be used to clarify one's own thinking and to stimulate change in this thinking if gaps, unjustified assumptions, contradictions, or open questions can be identified. This is what I call "reflective argumentation." The objective of this paper is, first, to clarify the conditions of reflective argumentation and, second, to discuss the possibilities of argument visualization methods in supporting reflection and cognitive change. After a discussion of the cognitive problems we face in conflicts--obviously the area where cognitive change is hardest--the second part will, based on this, determine a set of requirements that argument visualization tools should fulfill if their main purpose is stimulating reflection and cognitive change. In the third part, I will evaluate available argument visualization methods with regard to these requirements and discuss their limitations. The fourth part then introduces a new method of argument visualization which I call Logical Argument Mapping (LAM). LAM has specifically been designed to support reflective argumentation. Since LAM primarily uses deductively valid argument schemes, this design decision has to be justified with regard to the goals of reflective argumentation. The fifth part, finally, provides an example of how Logical Argument Mapping could be used as a method of reflective argumentation in a political controversy.
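    To illustrate what "deductively valid argument schemes" buy, here is a minimal forward-chaining sketch of modus ponens, the scheme "p, p implies q, therefore q" (our illustration only, not part of LAM itself):

```python
def apply_modus_ponens(facts, rules):
    # rules: pairs (antecedent, consequent). Repeatedly applying the
    # deductively valid modus ponens scheme yields every conclusion
    # that follows from the facts; any remaining gap in an argument
    # then points to a missing or unjustified premise.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for p, q in rules:
            if p in derived and q not in derived:
                derived.add(q)
                changed = True
    return derived
```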

    Distributed Belief Revision and Environmental Decision Support

    This article discusses the development of an Intelligent Distributed Environmental Decision Support System, built upon the association of a Multi-agent Belief Revision System with a Geographical Information System (GIS). The inherently multidisciplinary nature of the expertise involved in environmental management, the need to define clear policies that allow the synthesis of divergent perspectives, their systematic application, and the reduction of the costs and time that result from this integration are the main reasons motivating this project. This paper is organised in two parts: in the first part we present and discuss the developed Distributed Belief Revision Test-bed (DiBeRT); in the second part we analyse its application to the environmental decision support domain, with special emphasis on the interface with a GIS.

    The Proposal to Tax Small Incomes
