
    Is a Semantic Web Agent a Knowledge-Savvy Agent?

    The issue of knowledge sharing has permeated the field of distributed AI and, in particular, its successor, multiagent systems. Over the years, many research and engineering efforts have tackled the problem of encoding and sharing knowledge without the need for a single, centralized knowledge base. However, the emergence of modern computing paradigms such as distributed, open systems has highlighted the importance of sharing distributed and heterogeneous knowledge at a larger scale, possibly at the scale of the Internet. The very characteristics that define the Semantic Web, that is, dynamic, distributed, incomplete, and uncertain knowledge, suggest the need for autonomy in distributed software systems. Semantic Web research promises more than mere management of ontologies and data through the definition of machine-understandable languages. The openness and decentralization introduced by multiagent systems and service-oriented architectures give rise to new knowledge management models, for which we cannot make a priori assumptions about the type of interaction an agent or a service may be engaged in, nor about the message protocols and vocabulary used. We therefore discuss the problem of knowledge management for open multiagent systems and highlight a number of challenges relating to the exchange and evolution of knowledge in open environments that are pertinent to both the Semantic Web and multiagent systems communities alike.

    Deriving individual obligations from collective obligations

    A collective obligation is an obligation directed to a group of agents such that the group, as a whole, is obliged to achieve a given task. The problem investigated here is the impact of collective obligations on individual obligations, i.e. obligations directed to single agents of the group. We claim that the derivation of individual obligations from collective obligations depends on several parameters, among which are the abilities of the agents (i.e. what they can do) and their own personal commitments (i.e. what they are determined to do). As for checking whether these obligations are fulfilled or not, we need to know which actions the agents have actually performed.
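
    To make the dependence on ability and commitment concrete, here is a minimal sketch, not taken from the paper, of how individual obligations might be derived from a collective one. The Agent attributes, the assignment policy, and the derive_individual_obligations helper are illustrative assumptions only.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        name: str
        abilities: set = field(default_factory=set)    # actions the agent can perform
        commitments: set = field(default_factory=set)  # actions the agent is determined to do

    def derive_individual_obligations(collective_task, agents):
        """Illustrative rule: an action of the collective task becomes an individual
        obligation of an agent that is already committed to it, otherwise of some
        agent able to perform it; if nobody can, it stays unassigned."""
        obligations = {}
        for action in collective_task:
            committed = [a for a in agents if action in a.commitments]
            capable = [a for a in agents if action in a.abilities]
            if committed:
                obligations[action] = committed[0].name
            elif capable:
                obligations[action] = capable[0].name
            else:
                obligations[action] = None
        return obligations

    # Example: a collective obligation to prepare and deliver a report
    agents = [Agent("alice", abilities={"prepare"}, commitments={"prepare"}),
              Agent("bob", abilities={"prepare", "deliver"})]
    print(derive_individual_obligations({"prepare", "deliver"}, agents))
    ```

    Checking fulfilment would then amount to comparing each assigned action against the actions the agent actually performed, as the abstract notes.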

    GHOST: experimenting countermeasures for conflicts in the pilot's activity

    An approach for designing countermeasures to cure conflict in aircraft pilots' activities is presented, based on both Artificial Intelligence and Human Factors concepts. The first step is to track the pilot's activity, i.e. to reconstruct what he has actually done from the flight parameters and from reference models describing the mission and procedures. The second step is to detect conflict in the pilot's activity, which is linked to what really matters for the achievement of the mission. The third step is to design accurate countermeasures that are likely to do better than the existing onboard devices. The three steps are presented and supported by experimental results obtained from private and professional pilots.
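
    The three steps read naturally as a processing pipeline. The sketch below illustrates that structure only; the track_activity, detect_conflicts, and select_countermeasure functions and their data shapes are hypothetical placeholders, not the paper's implementation.

    ```python
    def track_activity(flight_parameters, mission, procedure_models):
        """Step 1: reconstruct what the pilot actually did from recorded flight
        parameters and reference models of the mission and its procedures."""
        expected = procedure_models.get(mission, [])
        return [event for event in flight_parameters if event["action"] in expected]

    def detect_conflicts(activity, mission_goals):
        """Step 2: flag deviations that matter for achieving the mission."""
        performed = {event["action"] for event in activity}
        return [goal for goal in mission_goals if goal not in performed]

    def select_countermeasure(conflicts):
        """Step 3: choose a countermeasure per conflict (placeholder policy:
        issue a reminder)."""
        return {conflict: f"remind pilot: {conflict}" for conflict in conflicts}

    # Toy run of the three-step pipeline
    params = [{"action": "gear_down"}, {"action": "flaps_set"}]
    procedures = {"approach": ["gear_down", "flaps_set", "checklist_complete"]}
    activity = track_activity(params, "approach", procedures)
    conflicts = detect_conflicts(activity, ["gear_down", "checklist_complete"])
    print(select_countermeasure(conflicts))
    ```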

    Bounded-Monitor Placement in Normative Environments

    ISSN: 1613-0073. Funding: This work is partially supported by grants from CNPq/Brazil, numbers 132339/2016-1 and 305969/2016-1.

    A modal logic for reasoning on consistency and completeness of regulations

    In this paper, we deal with regulations that may exist in multi-agent systems in order to regulate agent behaviour, and we discuss two properties of regulations: consistency and completeness. After defining what consistency and completeness mean, we propose a way to consistently complete incomplete regulations. In this contribution, we extend previous work and consider that regulations are expressed in a first-order modal deontic logic.
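
    As a rough illustration of the two properties outside any modal formalism, a regulation over a finite set of cases can be checked for consistency (no case is both obliged and forbidden) and completeness (every case receives some status). The finite-case encoding and the default-status completion policy below are assumptions made for illustration, not the paper's logic.

    ```python
    # Each regulation maps a case (a situation) to a deontic status.
    OBLIGATORY, FORBIDDEN = "obligatory", "forbidden"

    def is_consistent(regulations, cases):
        """Consistent: no case is simultaneously obligatory and forbidden."""
        return all(not ({OBLIGATORY, FORBIDDEN} <= {r.get(case) for r in regulations})
                   for case in cases)

    def is_complete(regulations, cases):
        """Complete: every case receives some status from some regulation."""
        return all(any(case in r for r in regulations) for case in cases)

    def complete_consistently(regulations, cases, default=OBLIGATORY):
        """Complete an incomplete regulation by assigning a default status to the
        cases left unregulated (illustrative completion policy only)."""
        covered = {case for r in regulations for case in r}
        return regulations + [{case: default for case in cases if case not in covered}]

    cases = {"report_incident", "share_data"}
    regs = [{"report_incident": OBLIGATORY}]
    print(is_consistent(regs, cases), is_complete(regs, cases))
    print(complete_consistently(regs, cases))
    ```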

    TRAVOS: Trust and Reputation in the Context of Inaccurate Information Sources

    In many dynamic open systems, agents have to interact with one another to achieve their goals. Here, agents may be self-interested and, when trusted to perform an action for another, may betray that trust by not performing the action as required. In addition, due to the size of such systems, agents will often interact with other agents with which they have little or no past experience. There is therefore a need to develop a model of trust and reputation that will ensure good interactions among software agents in large-scale open systems. Against this background, we have developed TRAVOS (Trust and Reputation model for Agent-based Virtual OrganisationS), which models an agent's trust in an interaction partner. Specifically, trust is calculated using probability theory, taking account of past interactions between agents; when there is a lack of personal experience between agents, the model draws upon reputation information gathered from third parties. In this latter case, we pay particular attention to handling the possibility that reputation information may be inaccurate.
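
    A minimal sketch of the idea, assuming the common beta-distribution treatment of binary interaction outcomes (expected value under a uniform prior) and a simple fallback to pooled third-party counts; the min_direct threshold and the pooling rule are illustrative assumptions, and the filtering of inaccurate reports that the abstract emphasises is not modelled here.

    ```python
    def expected_trust(successes, failures):
        """Expected probability of a good outcome under a Beta(1, 1) prior
        updated with observed successful and failed interactions."""
        return (successes + 1) / (successes + failures + 2)

    def trust(direct, reports, min_direct=5):
        """Use direct experience when there is enough of it; otherwise pool
        reputation reports (success/failure counts) from third parties."""
        s, f = direct
        if s + f >= min_direct:
            return expected_trust(s, f)
        pooled_s = s + sum(r[0] for r in reports)
        pooled_f = f + sum(r[1] for r in reports)
        return expected_trust(pooled_s, pooled_f)

    # Two direct interactions plus reputation reports from two third parties
    print(trust(direct=(2, 0), reports=[(8, 1), (3, 2)]))
    ```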