17,023 research outputs found

    A simple account of multiagent epistemic planning

    A realistic model of multiagent planning must allow us to model notions that are absent from classical planning, such as communication and knowledge. We investigate multiagent planning based on a simple logic of action and knowledge built on the visibility of propositional variables. Using such a formal logic allows us to deduce the validity of a plan from the validity of the individual actions that compose it. We present an encoding of multiagent planning problems expressed in this logic into the classical planning language PDDL. Feeding the resulting problem into a PDDL planner provides a provably correct plan for the original multiagent planning problem. We use the gossip problem as a running example.
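
    A minimal sketch of the running example, under the assumption that knowledge can be read as "which secrets an agent currently sees" (this is a toy rendering of the gossip problem, not the paper's visibility logic or its PDDL encoding):

        # Toy gossip problem: each agent initially sees only its own secret, and a
        # call between two agents lets both see the union of what they saw before.
        def call(knows, i, j):
            shared = knows[i] | knows[j]
            knows[i], knows[j] = set(shared), set(shared)

        agents = range(4)
        knows = {a: {a} for a in agents}         # agent a starts knowing only secret a

        plan = [(0, 1), (2, 3), (0, 2), (1, 3)]  # a classic 2n-4 call plan for n = 4
        for i, j in plan:
            call(knows, i, j)

        assert all(knows[a] == set(agents) for a in agents)   # everyone knows everything
        print("plan valid:", plan)

    The validity of the whole plan follows call by call, mirroring how the logic lets the validity of a plan be deduced from the validity of the individual actions that compose it.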

    Swarm robot social potential fields with internal agent dynamics

    Swarm robotics is a new and promising approach to the design and control of multiagent robotic systems. In this paper we use a model of a second-order non-linear system of self-propelled agents interacting via pair-wise attractive and repulsive potentials. We propose a new potential field method that uses dynamic agent internal states to solve a reactive path-planning problem. The path-planning problem cannot be solved using static potential fields because of local minima formation, but it can be solved by allowing the agents' internal states to manipulate the potential field. Simulation results demonstrate the ability of a single agent to perform reactive problem solving effectively, as well as the ability of a swarm of agents to perform problem solving using the collective behaviour of the entire swarm.
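
    As a rough illustration of the mechanism, the sketch below shows a pair-wise attractive/repulsive force in which a scalar internal state scales each agent's repulsion, so changing that state reshapes the potential field the agent experiences. The constants, the exponential form of the potential, and the way the internal state enters are all assumptions, not the paper's model:

        import numpy as np

        # Pair-wise attraction/repulsion for self-propelled agents, with a scalar
        # internal state s[i] scaling agent i's repulsive term. Illustrative only.
        CA, LA = 1.0, 2.0    # attraction strength and range
        CR, LR = 1.5, 0.5    # repulsion strength and range

        def pairwise_force(x, s):
            """x: (N, 2) positions, s: (N,) internal states -> (N, 2) forces."""
            n, f = len(x), np.zeros_like(x)
            for i in range(n):
                for j in range(n):
                    if i == j:
                        continue
                    d = x[i] - x[j]
                    r = np.linalg.norm(d) + 1e-9
                    # Force = -dU/dr along d/r, for U(r) = CR*s_i*exp(-r/LR) - CA*exp(-r/LA).
                    mag = (CR * s[i] / LR) * np.exp(-r / LR) - (CA / LA) * np.exp(-r / LA)
                    f[i] += mag * d / r
            return f

        def step(x, v, s, dt=0.01, alpha=1.0, beta=0.5):
            # Second-order self-propelled dynamics with speed regulation, one Euler step.
            f = pairwise_force(x, s)
            v = v + dt * ((alpha - beta * np.sum(v**2, axis=1, keepdims=True)) * v + f)
            return x + dt * v, v

        x, v, s = np.random.rand(10, 2) * 5, np.zeros((10, 2)), np.ones(10)
        x, v = step(x, v, s)   # raising s[i] strengthens agent i's repulsion and reshapes its field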

    Influence-Optimistic Local Values for Multiagent Planning --- Extended Version

    Recent years have seen the development of methods for multiagent planning under uncertainty that scale to tens or even hundreds of agents. However, most of these methods either make restrictive assumptions on the problem domain, or provide approximate solutions without any guarantees on quality. Methods in the former category typically build on heuristic search using upper bounds on the value function. Unfortunately, no techniques exist to compute such upper bounds for problems with non-factored value functions. To allow for meaningful benchmarking through measurable quality guarantees on a very general class of problems, this paper introduces a family of influence-optimistic upper bounds for factored decentralized partially observable Markov decision processes (Dec-POMDPs) that do not have factored value functions. Intuitively, we derive bounds on very large multiagent planning problems by subdividing them into sub-problems and, for each sub-problem, making optimistic assumptions about the influence that will be exerted by the rest of the system. We numerically compare the different upper bounds and demonstrate how we can achieve a non-trivial guarantee that a heuristic solution for problems with hundreds of agents is close to optimal. Furthermore, we provide evidence that the upper bounds may improve the effectiveness of heuristic influence search, and discuss further potential applications to multiagent planning. Comment: Long version of IJCAI 2015 paper (and extended abstract at AAMAS 2015).
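
    The core idea can be caricatured in a few lines: solve each sub-problem under the most favourable assumption about the influence the rest of the system will exert, and the sum of those optimistic values upper-bounds the value of any joint policy. The two-sub-problem decomposition and the numbers below are invented; the paper derives its bounds for factored Dec-POMDPs, not for this toy:

        # Illustrative influence-optimistic upper bound on a decomposed problem.
        def optimistic_value(values_by_influence):
            # Be optimistic: assume the external influence that is best for this sub-problem.
            return max(values_by_influence.values())

        # Hypothetical sub-problems with values under different external influences.
        sub_a = {"neighbour helps": 10.0, "neighbour hinders": 4.0}
        sub_b = {"neighbour helps": 7.0,  "neighbour hinders": 6.0}

        upper_bound = optimistic_value(sub_a) + optimistic_value(sub_b)   # 17.0

        # Any actual joint policy realises one value per sub-problem, so its total can
        # never exceed the optimistic sum; comparing a heuristic solution against the
        # bound yields a measurable quality guarantee.
        heuristic_value = 16.0
        print("solution is within", upper_bound - heuristic_value, "of optimal")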

    Individual Planning in Agent Populations: Exploiting Anonymity and Frame-Action Hypergraphs

    Interactive partially observable Markov decision processes (I-POMDPs) provide a formal framework for planning for a self-interested agent in multiagent settings. An agent operating in a multiagent environment must deliberate about the actions that other agents may take and the effect these actions have on the environment and the rewards it receives. Traditional I-POMDPs model this dependence on the actions of other agents using joint action and model spaces. Therefore, the solution complexity grows exponentially with the number of agents, thereby limiting scalability. In this paper, we model and extend anonymity and context-specific independence, problem structures often present in agent populations, for computational gain. We empirically demonstrate the efficiency gained from exploiting these problem structures by solving a new multiagent problem involving more than 1,000 agents. Comment: 8-page article plus a two-page appendix containing proofs, in Proceedings of the 25th International Conference on Automated Planning and Scheduling, 2015.
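
    The gain from action anonymity is easy to quantify (the numbers below are illustrative): if the effect of the other agents depends only on how many of them perform each frame-action, not on which agent does what, the planner can reason over count vectors instead of joint actions.

        from math import comb

        # Joint action space vs. count configurations for anonymous agents.
        num_other_agents = 1000
        num_actions = 3            # frame-actions the other agents can take

        joint_actions = num_actions ** num_other_agents              # exponential in the population
        configurations = comb(num_other_agents + num_actions - 1,
                              num_actions - 1)                        # polynomial in the population

        print("joint actions have", len(str(joint_actions)), "digits")   # 478 digits
        print("count configurations:", configurations)                   # 501501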

    FMAP: Distributed Cooperative Multi-Agent Planning

    This paper proposes FMAP (Forward Multi-Agent Planning), a fully distributed multi-agent planning method that integrates planning and coordination. Although FMAP is specifically aimed at solving problems that require cooperation among agents, the flexibility of the domain-independent planning model allows FMAP to tackle multi-agent planning tasks of any type. In FMAP, agents jointly explore the plan space by building up refinement plans through a complete and flexible forward-chaining partial-order planner. The search is guided by h_DTG, a novel heuristic function that is based on the concepts of Domain Transition Graph and frontier state and is optimized to evaluate plans in distributed environments. Agents in FMAP apply an advanced privacy model that allows them to keep information private while communicating only the data of the refinement plans that is relevant to each of the participating agents. Experimental results show that FMAP is a general-purpose approach that efficiently solves tightly-coupled domains that have specialized agents and cooperative goals, as well as loosely-coupled problems. Specifically, the empirical evaluation shows that FMAP outperforms current MAP systems at solving complex planning tasks adapted from the International Planning Competition benchmarks. This work has been partly supported by the Spanish MICINN under projects Consolider Ingenio 2010 CSD2007-00022 and TIN2011-27652-C03-01, the Valencian Prometeo project II/2013/019, and the FPI-UPV scholarship granted to the first author by the Universitat Politecnica de Valencia. Torreño Lerma, A.; Onaindia De La Rivaherrera, E.; Sapena Vercher, O. (2014). FMAP: Distributed Cooperative Multi-Agent Planning. Applied Intelligence 41(2):606-626. https://doi.org/10.1007/s10489-014-0540-2
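
    A highly simplified skeleton of such a cooperative refinement loop is sketched below. The plan representation, the goal-counting heuristic, and the agent capabilities are placeholders; FMAP actually searches forward-chaining partial-order plans guided by h_DTG and filters what it communicates through its privacy model:

        import heapq
        from itertools import count

        # Toy cooperative refinement planning: a "plan" is a tuple of actions, each
        # agent can only contribute its own actions, and the heuristic counts
        # unachieved goals. Illustrative only, not FMAP's algorithm.
        GOALS = {"g1", "g2", "g3"}
        AGENT_ACTIONS = {"agent1": ["g1", "g2"], "agent2": ["g3"]}

        def h(plan):                             # goals the plan does not yet achieve
            return len(GOALS - set(plan))

        def plan_search():
            tie = count()
            open_plans = [(h(()), next(tie), ())]
            while open_plans:
                _, _, plan = heapq.heappop(open_plans)
                if h(plan) == 0:
                    return plan                  # all goals achieved
                for agent, actions in AGENT_ACTIONS.items():
                    for a in actions:            # each agent refines with its own actions
                        if a not in plan:
                            refinement = plan + (a,)
                            heapq.heappush(open_plans, (h(refinement), next(tie), refinement))
            return None

        print(plan_search())                     # e.g. ('g1', 'g2', 'g3')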

    Collective construction of numerical potential fields for the foraging problem

    We consider the problem of deploying a team of agents (robots) for the foraging problem, in which agents have to collect disseminated resources in an unknown environment. They must therefore be endowed with exploration and path-planning abilities. This paper presents a reactive multiagent system that is able to perform the two desired activities, exploration and path-planning, simultaneously in unknown and complex environments. To develop this multiagent system, we have designed a distributed and asynchronous version of Barraquand's algorithm that builds an optimal Artificial Potential Field (APF). Our algorithm relies on agents with very limited perceptions that only mark their environment with integer values. The algorithm does not require any costly mechanism in the environment to manage dynamic phenomena such as evaporation or propagation. We show that the APF built by our algorithm converges to optimal paths. The model is extended to deal with the multi-source foraging problem. Simulations show that it is more time-efficient than the standard pheromone-based ant algorithm. Moreover, our approach is also able to address the problem in any kind of environment, such as mazes.
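
    For reference, the centralised version of such a numerical potential field is a breadth-first wavefront from the goal cell that labels every free cell with an integer, after which greedy descent of those marks yields a shortest path; the paper's contribution is building this field in a distributed, asynchronous way through the agents' markings. The grid and coordinates in this toy centralised sketch are invented:

        from collections import deque

        # Barraquand-style numerical potential field on a grid: BFS wavefront from
        # the goal, then greedy descent of the integer marks. Illustrative only.
        GRID = ["....#....",
                ".##.#.##.",
                ".#..#..#.",
                ".#.###.#.",
                "........."]

        def potential_field(grid, goal):
            rows, cols = len(grid), len(grid[0])
            field, frontier = {goal: 0}, deque([goal])
            while frontier:
                r, c = frontier.popleft()
                for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                    if 0 <= nr < rows and 0 <= nc < cols \
                            and grid[nr][nc] != "#" and (nr, nc) not in field:
                        field[(nr, nc)] = field[(r, c)] + 1
                        frontier.append((nr, nc))
            return field

        def descend(field, start):
            path, cell = [start], start
            while field[cell] > 0:
                r, c = cell
                neighbours = [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
                cell = min((n for n in neighbours if n in field), key=field.get)
                path.append(cell)
            return path

        field = potential_field(GRID, goal=(0, 0))
        print(descend(field, start=(0, 8)))   # a shortest path around the obstacles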

    Scalable Planning and Learning for Multiagent POMDPs: Extended Version

    Online, sample-based planning algorithms for POMDPs have shown great promise in scaling to problems with large state spaces, but they become intractable for large action and observation spaces. This is particularly problematic in multiagent POMDPs, where the action and observation spaces grow exponentially with the number of agents. To combat this intractability, we propose a novel scalable approach based on sample-based planning and factored value functions that exploits structure present in many multiagent settings. This approach applies not only in the planning case, but also in the Bayesian reinforcement learning setting. Experimental results show that we are able to provide high-quality solutions to large multiagent planning and learning problems.
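
    The factored value function idea can be sketched in isolation: if the joint Q-value decomposes into a sum of local terms over small subsets of agents, the maximising joint action can be found by eliminating agents one factor at a time instead of enumerating the exponential joint action space. The local payoff tables below are invented; in the paper such components are estimated from samples during online planning:

        from itertools import product

        # Q(a1, a2, a3) = q12(a1, a2) + q23(a2, a3): two local factors over agent pairs.
        ACTIONS = [0, 1]
        q12 = {(a1, a2): float(a1 == a2) for a1, a2 in product(ACTIONS, ACTIONS)}
        q23 = {(a2, a3): float(a2 != a3) for a2, a3 in product(ACTIONS, ACTIONS)}

        def best_joint_action():
            # Eliminate agent 1: for each a2, precompute agent 1's best contribution.
            best1 = {a2: max(q12[(a1, a2)] for a1 in ACTIONS) for a2 in ACTIONS}
            # Maximise over the remaining pair (a2, a3) only.
            a2, a3 = max(product(ACTIONS, ACTIONS),
                         key=lambda pair: best1[pair[0]] + q23[pair])
            a1 = max(ACTIONS, key=lambda a: q12[(a, a2)])
            return a1, a2, a3

        print(best_joint_action())   # maximises the sum without scanning all joint actions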