11 research outputs found

    Acquiring knowledge from expert agents in a structured argumentation setting

    Information-seeking interactions in multi-agent systems are required in situations where an expert agent has vast knowledge about some topic and other agents (questioners or clients) lack and need information regarding that topic. In this work, we propose a strategy for automatic knowledge acquisition in an information-seeking setting in which agents use a structured argumentation formalism for knowledge representation and reasoning. In our approach, the client regards the other agent as an expert in a particular domain and is committed to believing the expert's qualified opinion about a given query. The client's goal is to ask questions and acquire knowledge until it is able to conclude the same as the expert about the initial query. The expert's goal, in turn, is to provide just the information necessary to help the client understand its opinion. Since the client may hold previous knowledge in conflict with the information acquired from the expert, and given that its goal is to accept the expert's position, the client may need to adapt that previous knowledge. The operational semantics of the client-expert interaction is defined in terms of a transition system, and this semantics is used to formally prove that, once the interaction finishes, the client has the same assessment as the expert about the performed query.
    Fil: Agis, Ramiro Andrés; Gottifredi, Sebastián; García, Alejandro Javier. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Bahía Blanca. Instituto de Ciencias e Ingeniería de la Computación. Universidad Nacional del Sur. Departamento de Ciencias e Ingeniería de la Computación; Argentina.
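    The abstract describes the client-expert exchange operationally, as a transition system over the client's knowledge state. As a rough illustration only, the Python sketch below shows one way such a loop could look: the client repeatedly asks about unexplained premises and incorporates the expert's rules until both derive the same answer. The rule representation, function names, and plain forward-chaining inference are illustrative assumptions, not the authors' argumentation-based formalism.

    # Minimal sketch (not the authors' implementation) of a client-expert
    # information-seeking loop. Rules are (head, [body...]) pairs; a rule
    # with an empty body is a fact.
    def derives(kb, goal):
        """Forward-chain over the knowledge base and test whether goal follows."""
        facts = {h for h, body in kb if not body}
        changed = True
        while changed:
            changed = False
            for head, body in kb:
                if head not in facts and all(b in facts for b in body):
                    facts.add(head)
                    changed = True
        return goal in facts

    def interact(client_kb, expert_kb, query):
        """Transition loop: the client asks for support until it matches
        the expert's assessment of `query`."""
        expert_answer = derives(expert_kb, query)
        pending = [query]
        while derives(client_kb, query) != expert_answer and pending:
            q = pending.pop()
            # The expert discloses only the rules relevant to the question asked.
            for head, body in expert_kb:
                if head == q and (head, body) not in client_kb:
                    client_kb.append((head, body))  # client adapts its knowledge
                    pending.extend(body)            # ask about the premises next
        return derives(client_kb, query) == expert_answer

    # Toy run: the expert can derive "flies"; the client initially cannot.
    expert = [("flies", ["bird"]), ("bird", [])]
    client = []
    print(interact(client, expert, "flies"))  # True: assessments now coincide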

    An Argumentation-Driven Model for Flexible and Efficient Persuasive Negotiation

    The purpose of this paper is to propose a formal description and implementation of a negotiation protocol between autonomous agents using persuasive argumentation. The protocol is designed to be simple and computationally efficient; efficiency is achieved by specifying the protocol as a set of simple logical rules that software agents can easily combine. These rules are specified as a set of computational dialogue games about which agents can reason, and the protocol converges by checking termination conditions. The paper discusses the formal properties of the protocol and addresses, as a proof of concept, the implementation issues using an agent-oriented platform equipped with logic-programming mechanisms.
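    The protocol itself is not reproduced in the abstract, but the idea of simple logical rules combined into dialogue games can be sketched. Below is a minimal, hypothetical Python rendering in which the rules map a last move to its legal replies and a dialogue terminates on a closing move; the move names and termination test are assumptions, not the paper's actual game specification.

    # Illustrative sketch of dialogue-game rules encoded as a
    # move -> legal-replies table that agents can combine and reason about.
    LEGAL_REPLIES = {
        "offer": ["accept", "reject", "challenge"],
        "challenge": ["argue", "withdraw"],
        "argue": ["accept", "counter-argue", "withdraw"],
        "counter-argue": ["argue", "accept", "withdraw"],
    }
    TERMINAL = {"accept", "reject", "withdraw"}

    def legal(last_move, reply):
        return reply in LEGAL_REPLIES.get(last_move, [])

    def well_formed_and_terminated(dialogue):
        """Check every reply is legal and the dialogue ends on a closing move."""
        for prev, nxt in zip(dialogue, dialogue[1:]):
            if not legal(prev, nxt):
                raise ValueError(f"illegal reply {nxt!r} after {prev!r}")
        return dialogue[-1] in TERMINAL

    print(well_formed_and_terminated(["offer", "challenge", "argue", "accept"]))  # True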

    Reinforcement Learning for Argumentation

    Argumentation as a logical reasoning approach plays an important role in improving communication, increasing agreeability, and resolving conflicts in multi-agent systems (MAS). The present research explores the effectiveness of argumentation in the reinforcement learning of intelligent agents in terms of outperforming baseline agents, transferring learning between argument graphs, and improving the relevance and coherence of dialogue. This research developed 'ARGUMENTO+' to encourage a reinforcement learning (RL) agent playing an abstract argument game to improve its performance against different baseline agents, using a newly proposed state representation that makes each state unique. When attempting to generalise this approach to other argumentation graphs, the RL agent was not able to effectively identify the argument patterns that transfer to other domains. To improve the RL agent's ability to recognise argument patterns, this research adopted a logic-based dialogue game approach with richer argument representations. In the DE dialogue game, the RL agent played against hard-coded heuristic agents and outperformed the baselines by using a reward function that encourages winning the game in a minimum number of moves; this also allowed the RL agent to develop its own strategy, make moves, and learn to argue. The thesis also presents a new reward function that makes the RL agent's dialogue more coherent and relevant than its opponents'. The RL agent was designed to recognise argument patterns, i.e. argumentation schemes and evidence support sources, which can be related to different domains, and it used a transfer learning method to generalise and transfer experiences and speed up learning.
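    The thesis's actual reward functions are not given in the abstract; the sketch below only illustrates the stated idea of rewarding a win achieved in a minimum number of moves. The constants and function signature are assumptions.

    # Hypothetical reward shaping: reward winning, charge a small cost per
    # move, so shorter winning dialogues score strictly higher.
    def reward(won: bool, num_moves: int,
               win_bonus: float = 1.0, step_cost: float = 0.01) -> float:
        if won:
            return win_bonus - step_cost * num_moves
        return -step_cost * num_moves

    print(reward(True, 5))    # 0.95 -> quick win
    print(reward(True, 40))   # 0.60 -> slow win scores less
    print(reward(False, 10))  # -0.10 -> losses only accumulate cost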

    Dialogue games and trust for communicating agents

    Multi-agent applications are primarily based on agent interactions, which are constrained by the trust of the participating agents. Two important issues in these applications are how agents can communicate in a flexible and efficient way, and how an agent can authenticate information conveyed by other agents in the system. In this thesis, we present a new communication framework and trust model addressing these issues by considering three factors. The first factor concerns the flexibility, complexity, soundness, and completeness of the communication protocol. The second concerns the classification of agents from a trust point of view using direct interactions. The third relates to the categorization of the chains of agents through which information is transmitted; this categorization is based upon the reliability of the agents in the chain. The model aims to examine all available data in order to determine the trustworthiness of agents as transmitters of information. This approach is the first attempt in multi-agent systems to classify agents in order to establish trust. We also propose a thorough set of criteria and policies to assign different degrees of trustworthiness to each agent and, consequently, to the chains in which they appear. Agents are considered autonomous and interact flexibly using a set of logical rules called dialogue games. Termination, soundness, and completeness results for the communication protocol are proven and its computational complexity is addressed. The proposed approach is also evaluated. Keywords: Trust, Dialogue Games, Multi-Agent Systems, Agent Types, Agent Characteristics, Chains of Agents.
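    As a rough illustration of categorizing chains by the reliability of their members, the sketch below aggregates per-agent reliability multiplicatively, so one weak relay degrades the whole chain. The aggregation rule and threshold are assumptions, not the thesis's actual criteria and policies.

    from math import prod

    def chain_trust(reliabilities):
        """Trustworthiness of a transmission chain from its members' scores in [0, 1]."""
        return prod(reliabilities)

    def classify(score, threshold=0.5):
        return "trustworthy" if score >= threshold else "untrustworthy"

    chain = [0.9, 0.95, 0.8]                  # per-agent estimates from direct interactions
    score = chain_trust(chain)                # one weak link lowers the product
    print(round(score, 3), classify(score))   # 0.684 trustworthy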

    Evaluating the Impact of Defeasible Argumentation as a Modelling Technique for Reasoning under Uncertainty

    Limited work exists comparing distinct knowledge-based approaches in Artificial Intelligence (AI) for non-monotonic reasoning, and in particular examining their inferential and explanatory capacity. Non-monotonicity, or defeasibility, allows the retraction of a conclusion in the light of new information. This pattern resembles human reasoning, which draws conclusions in the absence of information but allows them to be corrected once new pieces of evidence arise. This thesis therefore compares three AI approaches for implementing non-monotonic models of inference: expert systems, fuzzy reasoning, and defeasible argumentation. Three applications from the fields of decision-making in healthcare and knowledge representation and reasoning were selected from real-world contexts for evaluation: human mental workload modelling, computational trust modelling, and mortality occurrence modelling with biomarkers. The link between these applications is their presumptively non-monotonic nature: they present incomplete, ambiguous, and retractable pieces of evidence, so reasoning over them is likely suitable for modelling with non-monotonic reasoning systems. An experiment was performed exploiting six deductive knowledge bases produced with the aid of domain experts. These were coded into models built upon the selected reasoning approaches and subsequently elicited with real-world data. The numerical inferences produced by these models were analysed according to metrics of evaluation common in each field of application. For the examination of explanatory capacity, properties such as understandability, extensibility, and post-hoc interpretability were described in detail and compared qualitatively. Findings suggest that the variance of the inferences produced by the expert-system and fuzzy-reasoning models was higher, indicating poor stability. In contrast, the variance of the argument-based models was lower, showing the superior stability of their inferences across different system configurations. In addition, when compared in a context with large amounts of conflicting information, defeasible argumentation exhibited a stronger potential for conflict resolution while producing robust inferences. An in-depth discussion of explanatory capacity shows how defeasible argumentation can lead to the construction of non-monotonic models with appealing properties of explainability compared to those built with expert systems and fuzzy reasoning. The originality of this research lies in the quantification of the impact of defeasible argumentation. It illustrates the construction of an extensive number of non-monotonic reasoning models through a modular design, exemplifies how these models can be exploited for performing non-monotonic reasoning and producing quantitative inferences in real-world applications, and contributes to the field of non-monotonic reasoning by situating defeasible argumentation among similar approaches through a novel empirical comparison.
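    The non-monotonic pattern under study, drawing a conclusion in the absence of information and retracting it when new evidence arrives, can be shown in a few lines. The toy rule below is purely illustrative and stands in for none of the three compared approaches.

    def conclude(facts):
        """Default: a bird flies, unless a defeater ('penguin') is present."""
        if "penguin" in facts:
            return "does not fly"
        if "bird" in facts:
            return "flies"
        return "unknown"

    print(conclude({"bird"}))             # flies (concluded in the absence of information)
    print(conclude({"bird", "penguin"}))  # does not fly (earlier conclusion retracted)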

    Designing and trusting multi-agent systems for B2B applications

    This thesis makes two main contributions. The first is designing and implementing Business-to-Business (B2B) applications using multi-agent systems and computational argumentation theory. The second is trust management in such multi-agent systems using agents' credibility. Our first contribution presents a framework for modeling and deploying B2B applications, with autonomous agents exposing the individual components that implement these applications. This framework consists of three levels, identified as strategic, application, and resource, with the focus here on the first two. The strategic level is about the common vision that independent businesses define as part of their decision to partner. The application level is about the business processes, which are virtually integrated as a result of this common vision. Since conflicts are bound to arise among the independent applications/agents, the framework uses a formal model based upon computational argumentation theory, through a persuasion protocol, to detect and resolve these conflicts. Termination, soundness, and completeness properties of this protocol are presented. Distributed and centralized coordination strategies are also supported in this framework, which is illustrated with an online-purchasing case study followed by its implementation in Jadex, a Java-based platform for multi-agent systems. An important issue in such open multi-agent systems is how much agents trust each other. Considering the size of these systems, agents that are service providers or customers in a B2B setting cannot avoid interacting with others that are unknown or only partially known from past experience. Because agents are self-interested, they may jeopardize mutual trust by not performing actions as they are supposed to. To this end, our second contribution proposes a trust model allowing agents to evaluate the credibility of other peers in the environment. Our multi-factor model applies a number of measurements in evaluating the trustworthiness of another party's likely behavior. After a period of time, the actual performance of the testimony agent is compared against the information provided by the contributing agents. This comparison process leads both to adjusting the credibility of the contributing agents in trust evaluation and to improving the system's trust evaluation by minimizing the estimation error.
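    The abstract's comparison step, checking each witness's report against later observed performance and adjusting credibilities to shrink the estimation error, can be sketched as follows. The update rule, learning rate, and value ranges are assumptions, not the thesis's formulas.

    def update_credibility(credibility, reported, observed, lr=0.2):
        """Move a witness's credibility toward 1 - |report error| (all values in [0, 1])."""
        error = abs(reported - observed)
        new = credibility + lr * ((1.0 - error) - credibility)
        return min(1.0, max(0.0, new))

    # Observed performance 0.85; witness A reported 0.9, witness B reported 0.3.
    print(update_credibility(0.5, 0.9, 0.85))  # 0.59: accurate witness gains credibility
    print(update_credibility(0.5, 0.3, 0.85))  # 0.49: inaccurate witness loses credibility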

    A mobility model for the realistic simulation of social context

    The widespread use of user-carried devices with short-range communication leads to networks characterized by high dynamics, sporadic connectivity, and strong partitioning. In such networks, connectivity between mobile nodes is strongly influenced by sociological aspects. To enable the evaluation of mobile applications that communicate in such networks, an appropriate mobility model is required. In this thesis, we have designed and implemented a mobility model which focuses on the simulation of social context. It takes an arbitrary weighted social network as input and reflects its structural properties in its mobility scheme. Based on this approach, our model allows the integration of recent advances in the research on complex social networks. In addition, we focus on the simulation of typical human characteristics such as the periodic reappearance at preferred locations and movement in groups. Furthermore, our model allows the integration of mobility models that concentrate on geographical aspects, such as modeling obstacles or realistic movement between locations. We provide experimental results showing that our model reflects the input social network with an accuracy of up to 99%. In addition, we show that our model captures the characteristics measured in traces of human mobility, which demonstrates the validity of our approach. The general character of our model enables the fast integration of future research results in the areas of human mobility and complex social networks.
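    The core mechanism, reflecting a weighted social network in movement decisions, can be illustrated with a toy destination chooser in which a node heads toward a peer selected with probability proportional to tie weight. The data layout and function are assumptions, not the thesis's model.

    import random

    def next_destination(node, tie_weights, positions):
        """Pick a socially close peer (weighted by tie strength) and target its location."""
        peers = [(p, w) for (a, p), w in tie_weights.items() if a == node]
        targets, weights = zip(*peers)
        chosen = random.choices(targets, weights=weights, k=1)[0]
        return positions[chosen]

    tie_weights = {("a", "b"): 0.9, ("a", "c"): 0.1}  # a is socially close to b
    positions = {"b": (10, 20), "c": (300, 40)}
    print(next_destination("a", tie_weights, positions))  # usually (10, 20)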

    Argument-Based Negotiation in a Social Context

    Argumentation-based negotiation (ABN) provides agents with an effective means to resolve conflicts within a multi-agent society. However, to engage in such argumentative encounters, agents require the ability to generate arguments, which in turn demands four fundamental capabilities: a schema to reason in a social context, a mechanism to identify a suitable set of arguments, a language and a protocol to exchange these arguments, and a decision-making functionality to generate such dialogues. This paper focuses on the first two issues and formulates models to capture them. Specifically, we propose a coherent schema, based on social commitments, to capture social influences emanating from the roles and relationships of a multi-agent society. After explaining how agents can use this schema to reason within a society, we use it to identify two major ways of exploiting social influence within ABN to resolve conflicts. The first allows agents to argue about the validity of each other's social reasoning, whereas the second enables agents to exploit social influences by incorporating them as parameters within their negotiation. For each of these, we use our schema to systematically capture a comprehensive set of social arguments that can be used within a multi-agent society.
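    As a purely illustrative aid, a schema based on social commitments might manipulate records like the one sketched below: who owes what to whom, under which social relationship, and in which state. The fields are assumptions; the paper's schema is not reproduced in the abstract.

    from dataclasses import dataclass

    @dataclass
    class SocialCommitment:
        debtor: str            # agent bound by the commitment
        creditor: str          # agent the commitment is owed to
        action: str            # what the debtor is expected to do
        relationship: str      # social relation it emanates from
        state: str = "active"  # active | fulfilled | violated

        def fulfil(self):
            self.state = "fulfilled"

    c = SocialCommitment("worker1", "manager1", "deliver report", "subordinate")
    c.fulfil()
    print(c.state)  # fulfilled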