
    Prediction of intent in robotics and multi-agent systems.

    Moving beyond the stimulus contained in observable agent behaviour to an understanding of the underlying intent of the observed agent is of immense interest in a variety of domains that involve collaborative and competitive scenarios, for example assistive robotics, computer games, robot-human interaction, decision support and intelligent tutoring. This review paper examines approaches for performing action recognition and prediction of intent from a multi-disciplinary perspective, in both single-robot and multi-agent scenarios, and analyses the underlying challenges, focusing mainly on generative approaches.
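    The generative family the review focuses on can be illustrated with a minimal Bayesian goal-inference step: keep a posterior over candidate goals and update it with each observed action. The goal names, action names and probability tables below are invented for the sketch and do not come from the paper.

```python
def infer_intent(observations, likelihoods, priors):
    """Posterior over goals given a sequence of observed actions.

    likelihoods[goal][action] is P(action | goal); priors[goal] is P(goal).
    Unseen actions get a tiny floor probability to avoid zeroing out a goal.
    """
    posterior = dict(priors)
    for action in observations:
        for goal in posterior:
            posterior[goal] *= likelihoods[goal].get(action, 1e-6)
    total = sum(posterior.values())
    return {g: p / total for g, p in posterior.items()}

# Illustrative two-goal model: which intent best explains the actions seen?
likelihoods = {
    "attack": {"approach": 0.7, "retreat": 0.1, "wait": 0.2},
    "defend": {"approach": 0.2, "retreat": 0.3, "wait": 0.5},
}
priors = {"attack": 0.5, "defend": 0.5}
posterior = infer_intent(["approach", "approach", "wait"], likelihoods, priors)
```

    Repeated approach actions shift the posterior toward the "attack" goal, even though a single "wait" observation is more typical of "defend".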

    Towards robust teams with many agents


    Towards Social Comparison for Failure Detection

    Social comparison, the process in which individuals compare their behavior and beliefs to those of other agents, is an important process in human societies. Our aim is to utilize theories of this process for synthetic agents, for the purposes of enabling social skills, team coordination, and greater individual agent performance. Our current focus is on individual failure detection and recovery in multi-agent settings. We present a novel approach, SOCFAD, inspired by Social Comparison Theory from social psychology. SOCFAD includes the following key novel concepts: (a) utilizing other agents in the environment as information sources for failure detection, and (b) a detection and recovery method for previously undetectable failures using abductive inference based on other agents' beliefs.
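    A minimal sketch of idea (a), using other agents as information sources: an agent flags a possible failure when its own belief disagrees with most of its teammates'. The belief labels and the majority threshold are assumptions for illustration; SOCFAD's abductive-inference machinery is not reproduced here.

```python
def detect_failure(own_belief, teammate_beliefs, threshold=0.5):
    """Flag a possible individual failure when the agent's belief disagrees
    with more than `threshold` of its teammates (social comparison)."""
    if not teammate_beliefs:
        return False  # nothing to compare against
    disagree = sum(b != own_belief for b in teammate_beliefs)
    return disagree / len(teammate_beliefs) > threshold

# The agent believes the team is in phase "advance"; most teammates report "halt".
assert detect_failure("advance", ["halt", "halt", "advance", "halt"])
```

    A real agent would follow the detection with recovery, e.g. adopting the majority belief or re-planning; only the comparison step is shown.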

    Towards Flexible Teamwork

    Many AI researchers are today striving to build agent teams for complex, dynamic multi-agent domains, with intended applications in arenas such as education, training, entertainment, information integration, and collective robotics. Unfortunately, uncertainties in these complex, dynamic domains obstruct coherent teamwork. In particular, team members often encounter differing, incomplete, and possibly inconsistent views of their environment. Furthermore, team members can unexpectedly fail in fulfilling responsibilities or discover unexpected opportunities. Highly flexible coordination and communication is key in addressing such uncertainties. Simply fitting individual agents with precomputed coordination plans will not do, for their inflexibility can cause severe failures in teamwork, and their domain-specificity hinders reusability. Our central hypothesis is that the key to such flexibility and reusability is providing agents with general models of teamwork. Agents exploit such models to autonomously reason about coordination and communication, providing requisite flexibility. Furthermore, the models enable reuse across domains, both saving implementation effort and enforcing consistency. This article presents one general, implemented model of teamwork, called STEAM. The basic building block of teamwork in STEAM is joint intentions (Cohen & Levesque, 1991b); teamwork in STEAM is based on agents' building up a (partial) hierarchy of joint intentions (this hierarchy is seen to parallel Grosz & Kraus's partial SharedPlans, 1996). Furthermore, in STEAM, team members monitor the team's and individual members' performance, reorganizing the team as necessary. Finally, decision-theoretic communication selectivity in STEAM ensures reduction in communication overheads of teamwork, with appropriate sensitivity to the environmental conditions. 
This article describes STEAM's application in three different complex domains, and presents detailed empirical results. Comment: See http://www.jair.org/ for an online appendix and other files accompanying this article.
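    STEAM's decision-theoretic communication selectivity can be sketched as a simple expected-cost comparison: send a message only when the expected cost of silent miscoordination exceeds the cost of communicating. The numbers below are invented, and STEAM's actual decision procedure is richer than this one-line rule.

```python
def should_communicate(p_miscoord, cost_miscoord, cost_comm):
    """Decision-theoretic selectivity: communicate only when the expected
    cost of miscoordination (probability x cost) exceeds the message cost."""
    return p_miscoord * cost_miscoord > cost_comm

# Noisy environment: teammates are likely out of sync, so communicate.
assert should_communicate(0.4, 10.0, 1.0)
# Quiet environment: miscoordination is unlikely, so stay silent.
assert not should_communicate(0.05, 10.0, 1.0)
```

    The sensitivity to environmental conditions mentioned in the abstract corresponds to `p_miscoord` and the cost terms varying with the domain.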

    Recognizing Teamwork Activity In Observations Of Embodied Agents

    This thesis presents contributions to the theory and practice of team activity recognition. A particular focus of our work was to improve our ability to collect and label representative samples, thus making team activity recognition more efficient. A second focus was improving the robustness of the recognition process in the presence of noisy and distorted data. The main contributions of this thesis are as follows. We developed a software tool, the Teamwork Scenario Editor (TSE), for the acquisition, segmentation and labeling of teamwork data. Using the TSE we acquired a corpus of labeled team actions from both synthetic and real-world sources. We developed an approach through which representations of idealized team actions can be acquired in the form of Hidden Markov Models, trained using a small set of representative examples segmented and labeled with the TSE. We developed a set of team-oriented feature functions, which extract discrete features from the high-dimensional continuous data. The features were chosen such that they mimic the features used by humans when recognizing teamwork actions. We developed a technique to recognize the likely roles played by agents in teams even before the team action is recognized. Through experimental studies we show that the feature functions and the role recognition module significantly increase recognition accuracy, while tolerating arbitrarily shuffled inputs and noisy data.
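    The recognition step described above, scoring an observed feature sequence against HMMs of idealized team actions, can be sketched with a plain forward algorithm. The two models, their parameters and the discrete "spread"/"tight" features are invented for illustration; the thesis trains its models from examples labeled with the TSE.

```python
import math

def forward_log_likelihood(obs, start, trans, emit):
    """Log-likelihood of a discrete feature sequence under an HMM,
    computed with the forward algorithm."""
    alpha = [start[s] * emit[s][obs[0]] for s in range(len(start))]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(len(alpha)))
                 * emit[j][o] for j in range(len(start))]
    return math.log(sum(alpha))

# Two idealized team actions, each a 2-state HMM over a discrete
# "formation" feature extracted from continuous position data.
flank = ([0.5, 0.5], [[0.7, 0.3], [0.3, 0.7]],
         [{"spread": 0.8, "tight": 0.2}, {"spread": 0.6, "tight": 0.4}])
column = ([0.5, 0.5], [[0.7, 0.3], [0.3, 0.7]],
          [{"spread": 0.1, "tight": 0.9}, {"spread": 0.3, "tight": 0.7}])

obs = ["spread", "spread", "tight"]
ll_flank = forward_log_likelihood(obs, *flank)
ll_column = forward_log_likelihood(obs, *column)
```

    The recognizer would label the observation with whichever model yields the higher likelihood; here the mostly-spread sequence favors the flanking model.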

    Autonomous Agents Modelling Other Agents: A Comprehensive Survey and Open Problems

    Much research in artificial intelligence is concerned with the development of autonomous agents that can interact effectively with other agents. An important aspect of such agents is the ability to reason about the behaviours of other agents, by constructing models which make predictions about various properties of interest (such as actions, goals, beliefs) of the modelled agents. A variety of modelling approaches now exist which vary widely in their methodology and underlying assumptions, catering to the needs of the different sub-communities within which they were developed and reflecting the different practical uses for which they are intended. The purpose of the present article is to provide a comprehensive survey of the salient modelling methods which can be found in the literature. The article concludes with a discussion of open problems which may form the basis for fruitful future research. Comment: Final manuscript (46 pages), published in the Artificial Intelligence Journal. The arXiv version also contains a table of contents after the abstract, but is otherwise identical to the AIJ version. Keywords: autonomous agents, multiagent systems, modelling other agents, opponent modelling.
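    As a toy instance of the kinds of models the survey covers, a frequency-based model predicts another agent's next action from the empirical distribution of its past actions. The class name and the action labels are invented for this sketch, and no single surveyed method is being reproduced.

```python
from collections import Counter

class FrequencyModel:
    """Predict the modelled agent's next action from the empirical
    frequencies of its past actions -- one of the simplest possible
    models of another agent's behaviour."""

    def __init__(self):
        self.counts = Counter()

    def observe(self, action):
        """Record one observed action of the modelled agent."""
        self.counts[action] += 1

    def predict(self):
        """Return a probability distribution over the agent's actions."""
        total = sum(self.counts.values())
        return {a: c / total for a, c in self.counts.items()}

model = FrequencyModel()
for a in ["cooperate", "cooperate", "defect"]:
    model.observe(a)
```

    Richer approaches in the survey predict goals and beliefs rather than raw action frequencies, but they share this observe-then-predict structure.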

    Intention as Commitment toward Time

    In this paper we address the interplay among intention, time, and belief in dynamic environments. The first contribution is a logic for reasoning about intention, time and belief, in which assumptions of intentions are represented by preconditions of intended actions. Intentions and beliefs are coherent as long as these assumptions are not violated, i.e. as long as intended actions can be performed such that their preconditions hold as well. The second contribution is the formalization of what-if scenarios: what happens to intentions and beliefs if a new (possibly conflicting) intention is adopted, or a new fact is learned? An agent is committed to its intended actions as long as its belief-intention database is coherent. We conceptualize intention as commitment toward time, develop AGM-based postulates for the iterated revision of belief-intention databases, and prove a Katsuno-Mendelzon-style representation theorem. Comment: 83 pages, 4 figures, Artificial Intelligence journal pre-print.
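    The coherence condition can be caricatured in a few lines: a belief-intention database counts as coherent when every intended action's preconditions are believed to hold, and a what-if adoption is rejected when it would break coherence. This deliberately ignores the paper's temporal dimension and its AGM postulates; the predicate and action names are invented.

```python
def coherent(beliefs, intentions):
    """A belief-intention database is coherent here when every intended
    action's preconditions are currently believed (a heavy simplification
    of the paper's coherence condition)."""
    return all(p in beliefs for pres in intentions.values() for p in pres)

def adopt(beliefs, intentions, action, preconditions):
    """What-if scenario: adopt the new intention only if the database
    stays coherent, otherwise reject it (a crude stand-in for the
    paper's iterated revision)."""
    candidate = {**intentions, action: preconditions}
    return candidate if coherent(beliefs, candidate) else intentions

beliefs = {"door_open", "have_key"}
intentions = {"enter_room": ["door_open"]}
# Conflicting intention: its precondition contradicts current beliefs.
after = adopt(beliefs, intentions, "lock_door", ["door_closed"])
```

    A faithful treatment would instead revise the database minimally rather than reject outright, which is where the AGM-style postulates come in.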

    Coordinating Team Tactics for Swarm-vs.-Swarm Adversarial Games

    While swarms of UAVs have received much attention in the last few years, adversarial swarms (i.e., competitive, swarm-vs.-swarm games) have been less well studied. In this dissertation, I investigate the factors influential in team-vs.-team UAV aerial combat scenarios, elucidating the impacts of force concentration and opponent spread in the engagement space. Specifically, this dissertation makes the following contributions: (1) Tactical Analysis: identifies conditions under which either explicitly coordinating tactics or decentralized, greedy tactics are superior in engagements as small as 2-vs.-2 and as large as 10-vs.-10, and examines how these patterns change with the quality of the teams' weapons; (2) Coordinating Tactics: introduces and demonstrates a deep-reinforcement-learning framework that equips agents to learn to use their own and their teammates' situational context to decide which pre-scripted tactics to employ in which situations, and which teammates, if any, to coordinate with throughout the engagement; agents using the neural network trained within this framework outperform baseline tactics in N-vs.-N engagements for N as small as two and as large as 64; and (3) Bio-Inspired Coordination: discovers, through Monte Carlo agent-based simulations, the importance of prioritizing the team's force concentration against the most threatening opponent agents, but also of preserving some resources by deploying a smaller defense force and defending against lower-penalty threats in addition to high-priority threats, so as to maximize the fuel remaining in the defending team's reservoir.
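    The force-concentration idea can be illustrated with a toy assignment rule: concentrate all attackers on the k most threatening opponents. The threat scores, the round-robin split and the function name are invented for this sketch and are not the dissertation's trained tactics.

```python
def concentrate_fire(attackers, threat_scores, k):
    """Assign every attacker, round-robin, to one of the k opponents with
    the highest threat scores -- a toy force-concentration heuristic."""
    targets = sorted(threat_scores, key=threat_scores.get, reverse=True)[:k]
    return {a: targets[i % k] for i, a in enumerate(attackers)}

# Four attackers concentrate on the two most threatening of three opponents.
threats = {"t1": 0.9, "t2": 0.2, "t3": 0.6}
assignment = concentrate_fire(["a1", "a2", "a3", "a4"], threats, k=2)
```

    Lowering k concentrates more force per target at the cost of ignoring more threats, which is the trade-off the Monte Carlo studies explore.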

    Modelado automático del comportamiento de agentes inteligentes (Automatic modelling of the behaviour of intelligent agents)

    The most recent theories about the human brain confirm that a high percentage of its capacity is used to predict the future, including the behavior of other people. To act appropriately in a social context, humans try to recognize the behavior of the people around them and make predictions based on these recognitions. When this process is carried out by software agents, it is known as agent modeling, where an agent can be a robot, a software agent or a human. Agent modeling is a process that allows an agent to extract and represent knowledge (behavior, beliefs, goals, actions, plans, etcetera) about other agents in a given environment. An agent capable of recognizing the behavior of others can perform various tasks, such as predicting the future behavior of the observed agents, coordinating with them, facilitating the execution of their actions, or detecting their possible errors. If this recognition can be performed automatically, it can be highly useful in many domains. This doctoral thesis addresses the task of automatically acquiring knowledge about the behavior of other intelligent agents. Techniques for modeling the behavior of other agents are only now beginning to emerge in earnest in the field of Artificial Intelligence. Notably, most current research proposes non-general models of a particular type of agent in a specific domain, that is, ad hoc models. This doctoral thesis presents three different approaches for the automatic modeling of intelligent agent behavior based on the identification of patterns in an observed behavior. These approaches allow an agent situated in a given environment to acquire knowledge about other agents situated in the same environment. Each proposed approach has particular characteristics that suit it to a type of domain, so that knowledge about other agents can be acquired in a variety of multi-agent systems. The three proposed approaches transform the observations of the behavior of one or more agents into a sequence of events that defines it. This sequence is analyzed in order to obtain the corresponding behavior model. Thus, in this doctoral thesis, the task of modeling and identifying the behavior of one or more agents is treated mainly as a problem of mining event sequences. The application of each proposed approach in very different domains demonstrates its generality.
    -----------------------------------------------------------------------------------
    There are new theories which claim that a high percentage of the human brain capacity is used for predicting the future, including the behavior of other humans. Planning for future needs, not just current ones, is one of the most formidable human cognitive achievements. To make good decisions in a social context, humans often need to recognize the plan underlying the behavior of others, and make predictions based on this recognition. This process, when carried out by software agents, is known as agent modeling, where an agent can be a software agent, a robot or a human being. Agent modeling is the process of extracting and representing knowledge (behavior, beliefs, goals, actions, plans, etcetera) from other agents. By recognizing the behavior of others, many different tasks can be performed, such as predicting their future behavior, coordinating with them or assisting them. This behavior recognition can be very useful in many applications if it can be done automatically. This thesis is framed in the field of agent behavior modeling.
    Most existing techniques for plan recognition assume the availability of carefully hand-crafted plan libraries, which encode the a priori known behavioral repertoire of the observed agents; at run time, plan recognition algorithms match the observed behavior of the agents against the plan libraries, and matches are reported as hypotheses. Unfortunately, techniques for automatically acquiring plan libraries from observations, e.g., by learning or data mining, are only beginning to emerge. This thesis presents three different approaches for automatically creating the model of an agent's behavior based on the analysis of its atomic behaviors. Each approach is suitable for different purposes, but in all of them, the observations of an agent's behavior are transformed into a sequence of events, which is analyzed in order to obtain the corresponding behavior model. Therefore, in this thesis, the problem of behavior classification is examined as a problem of learning to characterize the behavior of an agent in terms of sequences of atomic behaviors. To demonstrate the generality of the proposed approaches, their performance has been experimentally evaluated in different domains.
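    The event-sequence-mining view described above can be sketched in miniature: turn an observation stream into discrete events and model behavior as the frequencies of short event subsequences. The event names and the choice of n-grams are illustrative only and do not correspond to any of the thesis's three specific approaches.

```python
from collections import Counter

def ngram_model(events, n=2):
    """Build a behavior model as the frequencies of length-n event
    subsequences extracted from an observed event stream."""
    return Counter(tuple(events[i:i + n]) for i in range(len(events) - n + 1))

# A short observed event stream for one agent, already discretized.
events = ["move", "turn", "move", "turn", "move", "shoot"]
model = ngram_model(events, n=2)
```

    Two observed streams can then be compared by the overlap of their frequent subsequences, which is one simple way to classify an agent's behavior.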