9 research outputs found

    Strategic negotiation and trust in diplomacy - the DipBlue approach

    The study of games in Artificial Intelligence has a long tradition. Game playing has been a fertile environment for the development of novel approaches to build intelligent programs. Multi-agent systems (MAS), in particular, are a very useful paradigm in this regard, not only because multi-player games can be addressed using this technology, but most importantly because social aspects of agenthood that have been studied for years by MAS researchers can be applied in the attractive and controlled scenarios that games provide. Diplomacy is a multi-player strategic zero-sum board game whose main research challenges include an enormous search tree, the difficulty of determining the real strength of a position, and the accommodation of negotiation among players. Negotiation abilities bring along other social aspects, such as the need to perform trust reasoning in order to win the game. The majority of existing artificial players (bots) for Diplomacy do not exploit the strategic opportunities enabled by negotiation, focusing instead on search and heuristic approaches. This paper describes the development of DipBlue, an artificial player that uses negotiation in order to gain advantage over its opponents, through the use of peace treaties, the formation of alliances and the suggestion of actions to allies. A simple trust assessment approach is used as a means to detect and react to potential betrayals by allied players. DipBlue was built to work with DipGame, a MAS testbed for Diplomacy, and has been tested with other players of the same platform and variations of itself. Experimental results show that the use of negotiation increases the performance of bots involved in alliances, when full trust is assumed. In the presence of betrayals, being able to perform trust reasoning is an effective approach to reduce their impact. © Springer-Verlag Berlin Heidelberg 2015
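    The trust assessment described above can be pictured as a per-opponent trust value that drops sharply when an ally breaks an agreement and recovers slowly while agreements are honoured, gating whether the bot keeps following that ally's suggestions. The sketch below is a minimal illustration of that idea, not DipBlue's actual model; the class, method and parameter names (TrustTracker, betrayal_penalty, recovery_rate) are hypothetical.

    # Minimal sketch of a trust-weighted negotiation filter (illustrative only;
    # the DipBlue trust model in the paper may differ in form and parameters).
    class TrustTracker:
        """Keeps a trust ratio per opponent and updates it on observed behaviour."""

        def __init__(self, initial_trust: float = 1.0,
                     betrayal_penalty: float = 0.5, recovery_rate: float = 0.05):
            self.trust: dict[str, float] = {}
            self.initial_trust = initial_trust
            self.betrayal_penalty = betrayal_penalty
            self.recovery_rate = recovery_rate

        def get(self, power: str) -> float:
            return self.trust.setdefault(power, self.initial_trust)

        def record_betrayal(self, power: str) -> None:
            # An ally attacked us or broke an agreed move: cut trust sharply.
            self.trust[power] = self.get(power) * self.betrayal_penalty

        def record_kept_agreement(self, power: str) -> None:
            # Honoured agreements let trust recover slowly, capped at the initial value.
            self.trust[power] = min(self.initial_trust,
                                    self.get(power) + self.recovery_rate)

        def accept_proposal(self, power: str, threshold: float = 0.6) -> bool:
            # Only follow an ally's suggested order if trust is still above threshold.
            return self.get(power) >= threshold


    if __name__ == "__main__":
        trust = TrustTracker()
        trust.record_betrayal("France")          # France attacked despite a peace treaty
        print(trust.get("France"))               # 0.5
        print(trust.accept_proposal("France"))   # False: the alliance is effectively suspended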

    Towards general cooperative game playing

    Attempts to develop generic approaches to game playing have been around for several years in the field of Artificial Intelligence. However, games that involve explicit cooperation among otherwise competitive players (cooperative negotiation games) have not been addressed by current approaches. Yet, such games provide a much richer set of features, related to social aspects of interaction, which make them appealing for envisioning real-world applications. This work proposes a generic agent architecture, Alpha, to tackle cooperative negotiation games, combining elements such as search strategies, negotiation, opponent modeling and trust management. The architecture is then validated in the context of two different games that fall in this category: Diplomacy and Werewolves. Alpha agents are tested in several scenarios against other state-of-the-art agents. Besides highlighting the promising performance of the agents, the role of each architectural component in each game is assessed. © Springer International Publishing AG, part of Springer Nature 2018
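    The combination of elements mentioned in this abstract (search, negotiation, opponent modeling, trust management) can be read as a set of pluggable modules behind one decision cycle. The sketch below illustrates one such decomposition in Python; the interfaces and method names are assumptions for illustration, not the Alpha architecture's actual API.

    # Illustrative decomposition of a cooperative-game agent into the four concerns
    # named above. All class and method names are hypothetical.
    from abc import ABC, abstractmethod


    class StrategyModule(ABC):
        @abstractmethod
        def best_moves(self, state): ...


    class NegotiationModule(ABC):
        @abstractmethod
        def propose_deals(self, state, trust): ...


    class OpponentModel(ABC):
        @abstractmethod
        def update(self, observed_moves): ...


    class TrustManager(ABC):
        @abstractmethod
        def trust_in(self, player): ...


    class CooperativeAgent:
        """Combines the four modules into one decision cycle per turn."""

        def __init__(self, strategy, negotiation, opponents, trust):
            self.strategy, self.negotiation = strategy, negotiation
            self.opponents, self.trust = opponents, trust

        def play_turn(self, state, observed_moves):
            self.opponents.update(observed_moves)                 # refine opponent model
            deals = self.negotiation.propose_deals(state, self.trust)
            moves = self.strategy.best_moves(state)               # search, informed by deals
            return moves, deals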

    DipBlue: a Diplomacy agent with strategic and trust reasoning

    Diplomacy is a turn-based military strategy board game set at the turn of the 20th century, in which seven world powers fight for the dominion of Europe. The game can be played by 2 to 7 players and is characterized by having no random factors and by being a zero-sum game. It has a very important component when played by humans that has been put aside in games typically addressed by Artificial Intelligence techniques: before making their moves, players can negotiate among themselves and discuss issues such as alliances, move proposals and exchanges of information, among others. Since the players act simultaneously and the number of units and movements is extremely large, the result is a game tree too vast to be searched effectively. The majority of existing artificial players for Diplomacy do not make use of the negotiation opportunities the game provides and instead try to solve the problem through solution search and the use of complex heuristics. This dissertation proposes an approach to the development of an artificial player named DipBlue that makes use of negotiation in order to gain advantage over its opponents, through the use of peace treaties, the formation of alliances and the suggestion of actions to allies. Trust is used as a tool to detect and react to possible betrayals by allied players. DipBlue has a flexible architecture that allows the creation of different variations of the bot, each with a particular configuration and behaviour. The player was built to work with the multi-agent systems testbed DipGame and was tested against other players of the same platform and variations of itself. The results of the experiments show that the use of negotiation increases the performance of the bots involved in alliances if all of them honour their agreements; however, when betrayed, the efficiency of the bots decreases drastically. In this scenario, the ability to perform trust reasoning proved to successfully reduce the impact of betrayals.
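    The "different variations of the bot, each with a particular configuration" can be pictured as the same decision loop composed from differently weighted scoring components. The snippet below is a minimal sketch of that composition idea, assuming a weighted-adviser style of design; the adviser names, weights and toy heuristics are illustrative, not the dissertation's actual components.

    # Sketch: bot variations as different weighted combinations of advisers that
    # score candidate orders. All names and numbers here are illustrative.
    from typing import Callable

    # An adviser maps a candidate order (a string here) to a score in [0, 1].
    Adviser = Callable[[str], float]

    def map_tactician(order: str) -> float:
        return 0.8 if "HOL" in order else 0.4          # toy positional heuristic

    def agreement_keeper(order: str) -> float:
        return 1.0 if order.startswith("SUP") else 0.5  # favours supporting allies

    def make_bot(advisers: list[tuple[float, Adviser]]):
        def choose(orders: list[str]) -> str:
            return max(orders, key=lambda o: sum(w * a(o) for w, a in advisers))
        return choose

    # Two "variations" differ only in their adviser configuration.
    dip_blue_basic = make_bot([(1.0, map_tactician)])
    dip_blue_team  = make_bot([(1.0, map_tactician), (2.0, agreement_keeper)])

    print(dip_blue_basic(["A PAR-HOL", "SUP A MAR-BUR"]))  # picks the positional move
    print(dip_blue_team(["A PAR-HOL", "SUP A MAR-BUR"]))   # the ally-support weight wins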

    A Generic Agent Architecture for Cooperative Multi-Agent Games

    The goal of this dissertation is to develop a high-level generic architecture for the development of agents able to effectively play games with strong social components and a mix of competition and cooperation. Traditional techniques used in the context of games include search strategies such as Branch & Bound as well as Monte-Carlo approaches; however, these techniques are difficult to apply to this category of games, due to the often enormous search trees and the difficulty of calculating the value of a player's position or move. We propose a generic agent architecture that tackles the subjects of negotiation, trust and opponent modelling, simplifying the development of agents capable of playing these games effectively. This architecture is split into four independent modules, taking inspiration from the structure of a wartime nation: the President, the Strategic Office, the Foreign Office and the Intelligence Office. We demonstrate the applications of this architecture by instantiating it in two different games, Diplomacy and Werewolves of Miller's Hollow, and testing the obtained agents in a variety of scenarios against existing agents. The results show that the architecture is generic enough to be applied to a wide variety of games, and that the inclusion of negotiation, trust and opponent modelling allows for more effective agents.
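    The four-module split named in the abstract (President, Strategic Office, Foreign Office, Intelligence Office) maps naturally onto a coordinator that queries three specialised offices each turn. The sketch below illustrates that split; the module responsibilities follow the abstract, while all method names and the coordination logic are assumptions.

    # Sketch of the wartime-nation decomposition described above; names of methods
    # and the toy planning logic are illustrative, not the dissertation's API.
    from dataclasses import dataclass, field


    @dataclass
    class IntelligenceOffice:
        """Opponent modelling: tracks beliefs about the other players."""
        beliefs: dict = field(default_factory=dict)

        def update(self, observations: dict) -> None:
            self.beliefs.update(observations)


    @dataclass
    class ForeignOffice:
        """Negotiation and trust: decides whom to offer cooperation to."""
        trust: dict = field(default_factory=dict)

        def propose_deals(self, beliefs: dict) -> list:
            # Offer cooperation to the two most trusted players.
            return [p for p, _ in sorted(self.trust.items(), key=lambda x: -x[1])[:2]]


    @dataclass
    class StrategicOffice:
        """Search / heuristics: turns state and accepted deals into concrete moves."""
        def plan(self, state: dict, deals: list) -> list:
            return [f"cooperate with {p}" for p in deals] or ["play defensively"]


    class President:
        """Top-level coordinator: consults the three offices each turn."""
        def __init__(self):
            self.intel = IntelligenceOffice()
            self.foreign = ForeignOffice()
            self.strategy = StrategicOffice()

        def play_turn(self, state: dict, observations: dict) -> list:
            self.intel.update(observations)
            deals = self.foreign.propose_deals(self.intel.beliefs)
            return self.strategy.plan(state, deals)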

    The Role of Generative AI in Global Diplomatic Practices: A Strategic Framework

    As Artificial Intelligence (AI) transforms the domain of diplomacy in the 21st century, this research addresses the pressing need to evaluate the dualistic nature of these advancements, unpacking both the challenges they pose and the opportunities they offer. It has been almost a year since OpenAI launched ChatGPT, which revolutionised various work domains with its capabilities. The scope of application of these capabilities to diplomacy is yet to be fully explored or understood. Our research objective is to systematically examine the current discourse on Digital and AI Diplomacy, thus informing the development of a comprehensive framework for the role of Generative AI in modern diplomatic practices. Through the systematic analysis of 230 scholarly articles, we identified a spectrum of opportunities and challenges, culminating in a strategic framework that captures the multifaceted concepts for the integration of Generative AI, setting a course for future research and innovation in diplomacy.

    Challenges and Main Results of the Automated Negotiating Agents Competition (ANAC) 2019

    The Automated Negotiating Agents Competition (ANAC) is a yearly international contest in which participants from all over the world develop intelligent negotiating agents for a variety of negotiation problems. To facilitate research on agent-based negotiation, the organizers introduce new research challenges every year. ANAC 2019 posed five negotiation challenges: automated negotiation with partial preferences, repeated human-agent negotiation, negotiation in supply-chain management, negotiating in the strategic game of Diplomacy, and negotiating in the Werewolf game. This paper introduces the challenges and discusses the main findings and lessons learnt per league.

    Welfare Diplomacy: Benchmarking Language Model Cooperation

    The growing capabilities and increasingly widespread deployment of AI systems necessitate robust benchmarks for measuring their cooperative capabilities. Unfortunately, most multi-agent benchmarks are either zero-sum or purely cooperative, providing limited opportunities for such measurements. We introduce a general-sum variant of the zero-sum board game Diplomacy -- called Welfare Diplomacy -- in which players must balance investing in military conquest and domestic welfare. We argue that Welfare Diplomacy facilitates both a clearer assessment of and stronger training incentives for cooperative capabilities. Our contributions are: (1) proposing the Welfare Diplomacy rules and implementing them via an open-source Diplomacy engine; (2) constructing baseline agents using zero-shot prompted language models; and (3) conducting experiments where we find that baselines using state-of-the-art models attain high social welfare but are exploitable. Our work aims to promote societal safety by aiding researchers in developing and assessing multi-agent AI systems. Code to evaluate Welfare Diplomacy and reproduce our experiments is available at https://github.com/mukobi/welfare-diplomacy
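    The core scoring idea of Welfare Diplomacy is that players can trade military build capacity for domestic welfare, so cooperative play is rewarded at the group level. The sketch below illustrates one simple reading of that idea, assuming that unspent build capacity accrues as Welfare Points in each adjustment phase and that a basic social-welfare measure is the sum of all players' points; the exact rules and metrics are defined in the paper and the linked repository, and the names here are illustrative.

    # Illustrative sketch of welfare accounting for a general-sum Diplomacy variant;
    # consult the open-source engine linked above for the authoritative rules.
    from dataclasses import dataclass


    @dataclass
    class PowerState:
        supply_centers: int
        units: int
        welfare_points: int = 0

        def adjustment_phase(self) -> None:
            # Build capacity a power chooses not to spend on units becomes welfare.
            self.welfare_points += max(0, self.supply_centers - self.units)


    def total_welfare(powers: dict[str, PowerState]) -> int:
        return sum(p.welfare_points for p in powers.values())


    if __name__ == "__main__":
        powers = {
            "FRANCE": PowerState(supply_centers=5, units=3),
            "GERMANY": PowerState(supply_centers=6, units=6),
        }
        for p in powers.values():
            p.adjustment_phase()
        print(total_welfare(powers))  # 2: only France banked welfare this year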