Bi-Directional Safety Analysis for Product-Line, Multi-Agent Systems
Abstract. Safety-critical systems composed of highly similar, semi-autonomous agents are being developed in several application domains. An example of such multi-agent systems is a fleet, or “constellation”, of satellites. In constellations of satellites, each satellite is commonly treated as a distinct autonomous agent that must cooperate to achieve higher-level constellation goals. In previous work, we have shown that modeling a constellation of satellites or spacecraft as a product line of agents (where the agents have many shared commonalities and a few key differences) enables reuse of software analysis and design assets. We have also previously developed efficient safety analysis techniques for product lines. We now propose the use of Bi-Directional Safety Analysis (BDSA) to aid in system certification. We extend BDSA to product lines of multi-agent systems and show how the analysis artifacts thus produced contribute to the software’s safety case for certification purposes. The product-line approach lets us reuse portions of the safety analysis for multiple agents, significantly reducing the burden of certification. We motivate and illustrate this work through a specific application, a product-line, multi-agent satellite constellation.
AN ATTITUDE BASED MODELING OF AGENTS IN COALITION
Abstract: One of the main underpinnings of the multi-agent systems community is how and why autonomous agents should cooperate with one another. Several formal and computational models of cooperative work or coalition are currently developed and used within multi-agent systems research. The coalition facilitates the achievement of cooperation among different agents. In this paper, a mental construct called attitude is proposed and its significance in coalition formation in a dynamic fire world is discussed. This paper presents ABCAS (Attitude Based Coalition Agent System), which shows that coalitions in multi-agent systems are an effective way of dealing with the complexity of the fire world. It shows that coalitions explore the attitudes and behaviors that help agents to achieve goals that cannot be achieved alone, or to maximize net group utility.
Agent Decision-Making in Open Mixed Networks
Computer systems increasingly carry out tasks in mixed networks, that is, in group settings in which they interact both with other computer systems and with people. Participants in these heterogeneous human-computer groups vary in their capabilities, goals, and strategies; they may cooperate, collaborate, or compete. The presence of people in mixed networks raises challenges for the design and the evaluation of decision-making strategies for computer agents. This paper describes several new decision-making models that represent, learn, and adapt to various social attributes that influence people's decision-making, and presents a novel approach to evaluating such models. It identifies a range of social attributes in an open-network setting that influence people's decision-making and thus affect the performance of computer-agent strategies, and establishes the importance of learning and adaptation to the success of such strategies. The settings vary in the capabilities, goals, and strategies that people bring into their interactions. The studies deploy a configurable system called Colored Trails (CT) that generates a family of games. CT is an abstract, conceptually simple but highly versatile game in which players negotiate and exchange resources to enable them to achieve their individual or group goals. It provides a realistic analogue to multi-agent task domains, while not requiring extensive domain modeling. It is less abstract than payoff matrices, and people exhibit less strategic and more helpful behavior in CT than in the identical payoff matrix decision-making context. By not requiring extensive domain modeling, CT enables agent researchers to focus their attention on strategy design, and it provides an environment in which the influence of social factors can be better isolated and studied.
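The abstract describes Colored Trails only at a high level: players exchange resources (chips) to reach goals. As a purely hypothetical illustration of the kind of strategy choice a CT agent faces (the rules, names, and decision criterion below are our own assumptions, not the system's actual logic), an agent might accept a chip trade only if it does not reduce how far along its goal path it can pay:

```python
# Hypothetical Colored-Trails-style trade decision (illustrative only).
from collections import Counter

def covers(chips, path):
    """How many consecutive squares of the goal path the chips can pay for."""
    need, have = Counter(), Counter(chips)
    for i, color in enumerate(path):
        need[color] += 1
        if any(need[c] > have[c] for c in need):
            return i
    return len(path)

def accept_trade(chips, give, get, path):
    """Accept iff the post-trade chip set covers at least as much of the path."""
    after = list(chips)
    for c in give:
        after.remove(c)
    after += get
    return covers(after, path) >= covers(chips, path)

path = ["red", "blue", "red"]
# Trading a useless green chip for the missing blue one is accepted.
print(accept_trade(["red", "red", "green"], ["green"], ["blue"], path))  # True
```

This is one self-interested baseline; the paper's point is precisely that models which also learn social attributes outperform such purely individual criteria.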
Capturing the behaviour of inter-agent dialogues
A multiagent system (MAS) is made up of multiple interacting autonomous agents. It can be viewed as a society in which each agent performs its activity, cooperating to achieve common goals or competing for them. Thus, every agent can interact socially with other agents, establishing dialogues via some kind of agent-communication language, under some communication protocol [13]. Argumentation has been used to model several kinds of dialogues in multi-agent systems, such as negotiation or coordination [1, 7, 8, 5, 9].
Our current research activities are related to the use of argumentation in agent interaction, as a form of social dialogue. According to [15], dialogues can be classified into negotiation, where there is a conflict of interests; persuasion, where there is a conflict of opinion or beliefs; indagation, where there is a need for an explanation or proof of some proposition; deliberation or coordination, where there is a need to coordinate goals and actions; and one special kind of dialogue called eristic, based on personal conflicts. Except for the last, all of these dialogues may exist in multi-agent systems as part of social activities among agents. We also study the use of argumentation formalisms to model the internal reasoning process of an agent, often called monologues.
Our aim is to define an abstract argumentation framework to capture the behaviour of these different dialogues. We are not interested in the logic used to construct arguments. Our formulation completely abstracts from the internal structure of the arguments, considering them as moves made in a dialogue. We also consider multiagent systems as a set of multiple interacting autonomous agents.
Track: Distributed artificial intelligence, theoretical aspects of artificial intelligence, and theory of computation. Red de Universidades con Carreras en Informática (RedUNCI)
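Abstract argumentation frameworks of the kind this entry builds on are, in Dung's style, just a set of opaque arguments plus an attack relation. As a minimal executable sketch of that idea (the function and the example arguments are our own illustration, not taken from the entry), the grounded extension can be computed by iterating the characteristic function to a fixpoint:

```python
# Minimal Dung-style abstract argumentation sketch: arguments are opaque
# labels (here, moves in a dialogue); only the attack relation matters.

def grounded_extension(arguments, attacks):
    """Iterate F(S) = {a | every attacker of a is attacked by some
    member of S} until a fixpoint; that fixpoint is the grounded extension."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    s = set()
    while True:
        defended = {a for a in arguments
                    if all(any((d, b) in attacks for d in s)
                           for b in attackers[a])}
        if defended == s:
            return s
        s = defended

args = {"A", "B", "C"}
atts = {("B", "A"), ("C", "B")}   # B attacks A, C attacks B
# A is reinstated: its only attacker B is itself attacked by C.
print(sorted(grounded_extension(args, atts)))  # ['A', 'C']
```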
A Dynamic Epistemic Logic for Abstract Argumentation
This paper introduces a multi-agent dynamic epistemic logic for abstract argumentation. Its main motivation is to build a general framework for modelling the dynamics of a debate, which entails reasoning about goals, beliefs, as well as policies of communication and information update by the participants. After locating our proposal and introducing the relevant tools from abstract argumentation, we proceed to build a three-tiered logical approach. At the first level, we use the language of propositional logic to encode states of a multi-agent debate. This language allows us to specify which arguments any agent is aware of, as well as their subjective justification status. We then extend our language and semantics to that of epistemic logic, in order to model individuals’ beliefs about the state of the debate, which includes uncertainty about the information available to others. As a third step, we introduce a framework of dynamic epistemic logic and its semantics, which is essentially based on so-called event models with factual change. We provide completeness results for a number of systems and show how existing formalisms for argumentation dynamics and unquantified uncertainty can be reduced to their semantics. The resulting framework allows reasoning about subtle epistemic and argumentative updates, such as the effects of different levels of trust in a source, and more generally about the epistemic dimensions of strategic communication.
Synthese
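The first tier described above records, per agent, an awareness set of arguments and each argument's subjective justification status. A toy executable reading of that state (the variable names and the deliberately naive "in iff unattacked within your awareness set" status rule are our assumptions, not the paper's semantics) shows how awareness alone already creates disagreement:

```python
# Toy encoding of a debate state: each agent sees only the attacks
# among arguments it is aware of, and judges status on that view.

def subjective_status(aware, attacks):
    """Naive status rule (illustrative): an argument is 'in' for an agent
    iff no argument the agent is aware of attacks it."""
    visible_attacks = {(x, y) for (x, y) in attacks if x in aware and y in aware}
    return {a: all((b, a) not in visible_attacks for b in aware) for a in aware}

attacks = {("B", "A")}
alice = {"A", "B"}   # aware of the attacker B, so A is 'out' for Alice
bob = {"A"}          # unaware of B, so A is 'in' for Bob
st_alice = subjective_status(alice, attacks)
st_bob = subjective_status(bob, attacks)
print(st_alice["A"], st_bob["A"])  # False True
```

Announcing B to Bob (an information update, the paper's third tier) would flip his status for A, which is the kind of dynamics the event models capture.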
The UMASS Intelligent Home project
Abstract: Intelligent environments are an interesting development and research application problem for multi-agent systems. The functional and spatial distribution of tasks naturally lends itself to a multi-agent model, and the existence of shared resources creates interactions over which the agents must coordinate. In the UMASS Intelligent Home project we have designed and implemented a set of distributed autonomous home control agents and deployed them in a simulated home environment. Our focus is primarily on resource coordination, though this project has multiple goals and areas of exploration, ranging from the intellectual evaluation of the application as a general MAS testbed to the practical evaluation of our agent building and simulation tools.
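The abstract names resource coordination over shared home resources as the central problem but gives no protocol. As a hypothetical minimal sketch of one such coordination point (the request format, priorities, and greedy arbitration rule are our own illustration, not the project's actual mechanism), a shared resource like hot water could be arbitrated like this:

```python
# Hypothetical arbiter for a shared home resource (illustrative only).
# Requests are (agent, units_needed, priority); grants go in priority
# order until the resource capacity for this period is exhausted.

def arbitrate(requests, capacity):
    granted = []
    for agent, units, _priority in sorted(requests, key=lambda r: -r[2]):
        if units <= capacity:
            granted.append(agent)
            capacity -= units
    return granted

requests = [("dishwasher", 30, 1), ("shower", 40, 3), ("laundry", 35, 2)]
print(arbitrate(requests, 80))  # ['shower', 'laundry']
```

A centralized arbiter is only a baseline; the project's agents are distributed and autonomous, so the interesting versions of this decision are negotiated among the agents themselves.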
Adaptive Goal Selection for Improving Situation Awareness: The Fleet Management Case Study
Abstract: Lack of Situation Awareness (SA) when dealing with complex dynamic environments is recognized as one of the main causes of human errors, leading to serious and critical incidents. One of the main issues is the attentional tunneling manifested, for instance, by human operators (in Decision Support Systems) focusing their attention on a single goal and losing awareness of the global picture of the monitored environments. A further issue is represented by stimuli, coming from such environments, which may divert the attention of the operators from the most important aspects and cause erroneous decisions. Thus, the need emerges to define systems helping human operators to improve SA with respect to the two aforementioned drawbacks. These systems should help operators focus their attention on active goals and, when really needed, switch it to new goals, in a sort of continuous adaptation. In this work, an adaptive goal selection approach exploiting both goal-driven and data-driven information processing is proposed. The approach has been defined and injected into an existing multi-agent framework for Situation Awareness and applied in a Fleet Management System. The approach has been evaluated by means of the SAGAT methodology.
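The abstract combines goal-driven and data-driven processing without giving the selection rule. A hypothetical scoring sketch of that combination (the weights, field names, and switching margin below are our assumptions, not the paper's method): each candidate goal mixes its a-priori priority with the salience of incoming stimuli, and attention switches only when another goal clearly dominates the active one, which damps both attentional tunneling and stimulus-chasing:

```python
# Hypothetical adaptive goal selection (illustrative only): blend
# goal-driven priority with data-driven stimulus salience, and require
# a margin before switching away from the currently active goal.

def select_goal(active, goals, w_goal=0.6, w_data=0.4, switch_margin=0.05):
    score = lambda g: w_goal * g["priority"] + w_data * g["salience"]
    best = max(goals, key=score)
    current = next(g for g in goals if g["name"] == active)
    # Switch only when the best candidate beats the active goal by the margin.
    return best["name"] if score(best) > score(current) + switch_margin else active

goals = [
    {"name": "monitor_route", "priority": 0.8, "salience": 0.2},
    {"name": "low_fuel_alert", "priority": 0.5, "salience": 0.9},
]
print(select_goal("monitor_route", goals))  # low_fuel_alert
```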
Abstract argumentation and dialogues between agents
A multiagent system (MAS) is made up of multiple interacting autonomous agents. It can be viewed as a society in which each agent performs its activity, cooperating to achieve common goals or competing for them. Thus, every agent can interact socially with other agents, establishing dialogues via some kind of agent-communication language, under some communication protocol [6].
Argumentation is well suited to modelling several kinds of dialogues in multi-agent systems. Some authors are currently using defeasible argumentation to model negotiation processes between agents [3, 7]. Our current research activities are related to the use of argumentation in agent interaction, such as negotiation among several participants, persuasion, acquisition of knowledge, and other forms of social dialogue. Usually, argumentation appears as a mechanism to deal with disagreement between agents, for example when some conflict of interest is present.
Argumentation can be used not only to argue about something, but also to learn more about other agents: it is powerful enough to play an important role in general social interaction in multiagent systems. The kinds of arguments used in dialogues, and the relationships among them, depend on the type of dialogue involved.
According to [8], dialogues can be classified into negotiation, where there is a conflict of interests; persuasion, where there is a conflict of opinion or beliefs; indagation, where there is a need for an explanation or proof of some proposition; deliberation or coordination, where there is a need to coordinate goals and actions; and one special kind of dialogue called eristic, based on personal conflicts. Except for the last, all of these dialogues may exist in multi-agent systems as part of social activities among agents. Our aim is to define an abstract argumentation framework to capture the behaviour of these different dialogues, and we present here the main ideas behind this task and the new formal definitions. We are not interested in the logic used to construct arguments, nor in the comparison method used. Our formulation completely abstracts from the internal structure of the arguments, considering them as moves made in a dialogue.
We also consider multiagent systems as a set of multiple interacting autonomous agents.
Track: Artificial intelligence. Red de Universidades con Carreras en Informática (RedUNCI)
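Reading arguments as opaque moves, as this entry proposes, a dialogue can be replayed into an attack graph and the opening claim evaluated on the result. The sketch below is our own illustration of that reading (move format and names are assumptions, and the acceptance check is deliberately naive, handling only linear chains of countermoves):

```python
# Replay a dialogue of moves into a Dung-style attack graph.
# Each move is (argument, target); target=None marks the opening claim.

def replay(moves):
    arguments, attacks = set(), set()
    for arg, target in moves:
        arguments.add(arg)
        if target is not None:
            attacks.add((arg, target))
    return arguments, attacks

def accepted(claim, attacks):
    """Naive check for linear dialogues: the claim is accepted iff each
    of its attackers is itself attacked by an unattacked argument."""
    def attackers(a):
        return {x for (x, t) in attacks if t == a}
    return all(any(not attackers(d) for d in attackers(b))
               for b in attackers(claim))

# P asserts A; O counters with B; P reinstates A by attacking B with C.
args, atts = replay([("A", None), ("B", "A"), ("C", "B")])
print(accepted("A", atts))  # True
```

Stopping the dialogue one move earlier (after B) leaves the claim unaccepted, which matches the intuition that the last unanswered move wins a linear exchange.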