4 research outputs found

    Plan Acquisition Through Intentional Learning in BDI Multi-Agent Systems

    Get PDF
    Multi-Agent Systems (MAS), a technique emanating from Distributed Artificial Intelligence, are well suited to studying complex systems. They make it possible to represent and simulate both the elements and the interrelations of systems in a variety of domains. The most commonly used approach to developing the individual components (agents) within a MAS is reactive agency. However, other architectures, such as cognitive agents, enable richer behaviours and interactions to be captured and modelled. The well-known Belief-Desire-Intention (BDI) architecture is a robust approach to developing cognitive agents; because it can emulate aspects of autonomous behaviour, it is a promising tool for simulating social systems. Machine Learning has been applied to improve the behaviour of agents both individually and collectively. However, the original BDI model of agency lacks learning as part of its core functionalities. To cope with learning, the BDI agency has been extended with Intentional Learning (IL) operating at three levels: belief adjustment, plan selection, and plan acquisition. The latter makes it possible to increase the agent's catalogue of skills by generating new procedural knowledge for later use. The main contributions of this thesis are: a) the development of IL in a fully-fledged BDI framework at the plan-acquisition level; b) the extension of IL from the single-agent case to the collective perspective; and c) a novel framework that blends reactive and BDI agents by integrating the MAS and Agent-Based Modelling approaches, allowing the configuration of diverse domains and environments. Learning is demonstrated in a test-bed environment by acquiring a set of plans that drive the agent to exhibit behaviours such as target searching and left-handed wall following. Learning in both decision strata, single and collective, is tested in a more challenging and socially relevant environment: the Disaster-Rescue problem.
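
    The abstract above describes Intentional Learning operating at three levels of the BDI cycle: belief adjustment, plan selection, and plan acquisition. As a reading aid only, the sketch below is a minimal, hypothetical illustration of where a plan-acquisition hook can sit inside a BDI deliberation loop; the class and method names (BDIAgent, select_plan, learn_plan) are assumptions made for illustration, not the thesis's framework or API.

        # Minimal, assumed sketch of a BDI deliberation cycle with a plan-acquisition hook.
        class Plan:
            def __init__(self, goal, steps):
                self.goal = goal      # the goal this plan achieves
                self.steps = steps    # ordered list of actions

        class BDIAgent:
            def __init__(self):
                self.beliefs = set()       # facts the agent currently holds
                self.desires = []          # goals the agent would like to achieve
                self.plan_library = []     # procedural knowledge (catalogue of skills)

            def perceive(self, percepts):
                # Belief adjustment: revise beliefs from new percepts.
                self.beliefs |= set(percepts)

            def select_plan(self, goal):
                # Plan selection: pick an applicable plan from the library, if any.
                for plan in self.plan_library:
                    if plan.goal == goal:
                        return plan
                return None

            def learn_plan(self, goal):
                # Plan acquisition: synthesise a new plan (e.g. from exploration)
                # and store it in the library so it can be reused later.
                new_plan = Plan(goal, steps=["explore", "record-successful-actions"])
                self.plan_library.append(new_plan)
                return new_plan

            def step(self, percepts):
                self.perceive(percepts)
                for goal in self.desires:
                    plan = self.select_plan(goal) or self.learn_plan(goal)
                    for action in plan.steps:
                        print("executing", action, "for goal", goal)

        # Illustrative use: the first step has no applicable plan, so one is acquired.
        agent = BDIAgent()
        agent.desires.append("reach-target")
        agent.step(percepts={"wall-ahead"})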

    Contextualizing normative open multi-agent systems

    No full text
    Open MASs can be extremely dynamic due to heterogeneous agents that migrate among them to obtain resources or services not found locally. In order to prevent malicious actions and to ensure agent trust, open MASs should be enhanced with normative mechanisms. However, it is not reasonable to expect that foreign agents will know in advance all the norms of the MAS in which they will execute. This paper therefore presents DynaCROM, our approach to addressing these issues. From the individual agents' perspective, DynaCROM is an information mechanism through which agents become aware of the norms of their context; from the system developers' perspective, it is a methodology for norm management in regulated MASs. Although the ultimate goal of a regulated MAS is to have an enforcement mechanism, the paper also presents the integration of DynaCROM with SCAAR, DynaCROM's current solution for norm enforcement.
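
    The abstract presents DynaCROM as an information mechanism that lets migrating agents discover the norms of the context they enter, rather than having to know every norm in advance. The sketch below illustrates that idea only in outline: a norm registry indexed by context that a foreign agent queries on arrival. The names (NormRegistry, Norm, the example contexts) are hypothetical and do not reflect DynaCROM's or SCAAR's actual interface.

        # Hypothetical sketch of context-aware norm retrieval in an open MAS.
        from dataclasses import dataclass

        @dataclass
        class Norm:
            context: str      # e.g. an organisation, environment, or interaction scope
            deontic: str      # "obligation", "permission", or "prohibition"
            description: str

        class NormRegistry:
            """Holds the norms of a regulated MAS, indexed by context."""
            def __init__(self):
                self._norms = {}

            def publish(self, norm):
                self._norms.setdefault(norm.context, []).append(norm)

            def norms_for(self, *contexts):
                # Return the norms that apply in the agent's current contexts,
                # so a foreign agent need not know them all in advance.
                applicable = []
                for ctx in contexts:
                    applicable.extend(self._norms.get(ctx, []))
                return applicable

        registry = NormRegistry()
        registry.publish(Norm("auction-room", "prohibition", "do not retract bids"))
        registry.publish(Norm("marketplace", "obligation", "pay within 2 rounds"))

        # A migrating agent queries the norms of the contexts it has just entered.
        for norm in registry.norms_for("marketplace", "auction-room"):
            print(norm.deontic, "in", norm.context, ":", norm.description)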
