
    Embodied Evolution in Collective Robotics: A Review

    This paper provides an overview of evolutionary robotics techniques applied to on-line distributed evolution for robot collectives -- namely, embodied evolution. It provides a definition of embodied evolution as well as a thorough description of the underlying concepts and mechanisms. The paper also presents a comprehensive summary of research published in the field since its inception (1999-2017), offering various perspectives to identify the major trends. In particular, we identify a shift from considering embodied evolution as a parallel search method within small robot collectives (fewer than 10 robots) to embodied evolution as an on-line distributed learning method for designing collective behaviours in swarm-like collectives. The paper concludes with a discussion of applications and open questions, providing a milestone for past research and an inspiration for future work. Comment: 23 pages, 1 figure, 1 table
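To make the core mechanism concrete: in embodied evolution, each robot carries its own genome, evaluates it online while performing the task, and exchanges genetic material only when robots encounter one another. The sketch below is a minimal illustration of that loop; the fitness function, mutation scheme, and random-encounter model are illustrative assumptions, not details taken from the review.

```python
import random

GENOME_LEN = 8          # illustrative controller parameter count
MUTATION_STD = 0.1      # illustrative mutation step size

class Robot:
    """One member of the collective, evolving its controller online."""
    def __init__(self, rid):
        self.rid = rid
        self.genome = [random.uniform(-1, 1) for _ in range(GENOME_LEN)]
        self.fitness = 0.0

    def evaluate(self):
        # Placeholder for task performance measured while the robot operates;
        # here fitness is just an arbitrary toy function of the genome.
        self.fitness = -sum(g * g for g in self.genome)

    def exchange(self, other):
        # On an encounter, the less fit robot adopts a mutated copy of the
        # fitter robot's genome (one common selection/variation scheme).
        donor, receiver = (self, other) if self.fitness >= other.fitness else (other, self)
        receiver.genome = [g + random.gauss(0, MUTATION_STD) for g in donor.genome]

swarm = [Robot(i) for i in range(20)]
for step in range(100):
    for robot in swarm:
        robot.evaluate()
    # Random pairwise encounters stand in for robots meeting in the arena.
    a, b = random.sample(swarm, 2)
    a.exchange(b)

print(max(r.fitness for r in swarm))
```

In a real deployment, evaluation would run on the physical robot during task execution, and encounters would be determined by communication range rather than random sampling.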

    How to Make Agents and Influence Teammates: Understanding the Social Influence AI Teammates Have in Human-AI Teams

    The introduction of computational systems in the last few decades has enabled humans to cross geographical, cultural, and even societal boundaries. Whether it was the invention of the telephone or file sharing, new technologies have enabled humans to continuously work better together. Artificial Intelligence (AI) is among the most promising of these technologies. Although AI has a multitude of functions within teaming, such as improving information sciences and analysis, one specific application of AI that has become a critical topic in recent years is the creation of AI systems that act as teammates alongside humans, in what is known as a human-AI team. However, as AI transitions into teammate roles, it will garner new responsibilities and abilities, which ultimately give it greater influence over teams' shared goals and resources, otherwise known as teaming influence. Moreover, that increase in teaming influence will provide AI teammates with a level of social influence. Unfortunately, while research has observed the impact of teaming influence by examining humans' perception and performance, an explicit understanding of the social influence that facilitates long-term teaming change has yet to be created. This dissertation uses three studies to create a holistic understanding of the underlying social influence that AI teammates possess. Study 1 identifies the fundamental existence of AI teammate social influence and how it pertains to teaming influence. Qualitative data demonstrate that social influence is naturally created as humans actively adapt around AI teammate teaming influence. Furthermore, mixed-methods results demonstrate that the alignment of AI teammate teaming influence with a human's individual motives is the most critical factor in the acceptance of AI teammate teaming influence in existing teams. Study 2 further examines the acceptance of AI teammate teaming and social influence and how the design of AI teammates and humans' individual differences can impact this acceptance. The findings of Study 2 show that humans are most accepting of AI teammate teaming influence that is comparable to their own teaming influence on a single task, but acceptance of AI teammate teaming influence across multiple tasks generally decreases as teaming influence increases. Additionally, coworker endorsements are shown to increase the acceptance of high levels of AI teammate teaming influence, and humans who perceive the capabilities of technology in general to be greater are potentially more likely to accept AI teammate teaming influence. Finally, Study 3 explores how the teaming and social influence possessed by AI teammates change when presented in a team that also contains teaming influence from multiple human teammates, which means social influence between humans also exists. Results demonstrate that AI teammate social influence can drive humans to prefer and observe their human teammates over their AI teammates, but humans' behavioral adaptations are more centered around their AI teammates than their human teammates. These effects demonstrate that AI teammate social influence, when in the presence of human-human teaming and social influence, retains potency, but its effects differ depending on whether they impact perception or behavior.
The above three studies fill a currently under-served research gap in human-AI teaming, namely the understanding of AI teammate social influence and of humans' acceptance of it. In addition, each study conducted within this dissertation synthesizes its findings and contributions into actionable design recommendations that will serve as foundational design principles to allow the initial acceptance of AI teammates within society. Therefore, not only will the research community benefit from the results discussed throughout this dissertation, but so too will the developers, designers, and human teammates of human-AI teams.

    Multiagent systems: games and learning from structures

    Multiagent systems have become increasingly utilized in various fields, covering both physical robots and software agents: search and rescue robots, automated driving, auction and electronic commerce agents, and so on. In multiagent domains, agents interact and co-adapt with other agents. Each agent's choice of policy depends on the others' joint policy to achieve the best available performance. During this process the environment evolves and is no longer stationary, since each agent adapts as it proceeds towards its target. Each micro-level step in time may present a different learning problem which needs to be addressed. However, in this non-stationary environment, a holistic phenomenon forms along with the rational strategies of all players; we define this phenomenon as structural properties. In our research, we present the importance of analyzing structural properties and how to extract them in multiagent environments. According to the agents' objectives, a multiagent environment can be classified as self-interested, cooperative, or competitive. We examine the structure of these three general multiagent environments: self-interested random graphical game playing, distributed cooperative team playing, and competitive group survival. In each scenario, we analyze the structure of the environmental setting and demonstrate the learned structure as a comprehensive representation: the structure of players' action influence, the structure of constraints in teamwork communication, and the structure of inter-connections among strategies. This structure represents macro-level knowledge arising in a multiagent system and provides critical, holistic information for each problem domain. Last, we present some open issues and point toward future research.
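As a rough illustration of one of these structures, the "structure of players' action influence" can be read as the dependency graph of a graphical game: an edge from player j to player i exists if changing j's action can change i's payoff. The sketch below probes a toy black-box payoff function to recover such edges; the payoff function and exhaustive probing scheme are illustrative assumptions, not the method used in the work above.

```python
from itertools import product

def payoff(player, joint_action):
    """Toy 3-player payoff: player 0 depends on player 1, player 1 on player 2,
    and player 2 only on itself. Each action is 0 or 1."""
    a = joint_action
    if player == 0:
        return a[0] * a[1]
    if player == 1:
        return a[1] - a[2]
    return a[2]

def influence_graph(n_players, n_actions=2):
    """Add edge (j -> i) if changing j's action can change i's payoff."""
    edges = set()
    for i in range(n_players):
        for j in range(n_players):
            if i == j:
                continue
            for joint in product(range(n_actions), repeat=n_players):
                for alt in range(n_actions):
                    varied = list(joint)
                    varied[j] = alt
                    if payoff(i, joint) != payoff(i, tuple(varied)):
                        edges.add((j, i))
                        break
                else:
                    continue
                break
    return edges

print(influence_graph(3))  # expected: {(1, 0), (2, 1)}
```

Exhaustive enumeration is only feasible for a toy example like this; in practice such structure has to be learned from interaction data gathered during play.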

    Complex Problem Solving through Human-AI Collaboration: Literature Review on Research Contexts

    Solving complex problems has been proclaimed as one major challenge for hybrid teams of humans and artificial intelligence (AI) systems. Human-AI collaboration brings immense opportunities in these complex tasks, in which humans struggle but full automation is also impossible. Understanding and designing human-AI collaboration for complex problem solving is a wicked and multifaceted research problem in itself. We contribute to this emergent field by reviewing to what extent existing research on instantiated human-AI collaboration already addresses this challenge. After clarifying the two key concepts (complex problem solving and human-AI collaboration), we perform a systematic literature review. We extract research contexts and assess them with respect to different complexity features. We thereby provide an overview of existing research contexts, offer guidance for designing new, suitable contexts for studying complex problem solving through human-AI collaboration, and present an outlook for further work on this research challenge.

    Learning to Generate Natural Language Rationales for Game Playing Agents

    Many computer games feature non-player character (NPC) teammates and companions; however, playing with or against NPCs can be frustrating when they perform unexpectedly. These frustrations can be avoided if the NPC has the ability to explain its actions and motivations. When NPC behavior is controlled by a black-box AI system, it can be hard to generate the necessary explanations. In this paper, we present a system that generates human-like, natural language explanations, called rationales, of an agent's actions in a game environment regardless of how the decisions are made by a black-box AI. We outline a robust data collection and neural network training pipeline that can be used to gather think-aloud data and train a rationale generation model for any similar sequential, turn-based decision-making task. A human-subject study shows that our technique produces believable rationales for an agent playing the game Frogger. We conclude with insights about how people perceive automatically generated rationales.
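The pipeline described above (collect think-aloud rationales paired with game states, then train a generator) maps naturally onto a sequence-to-sequence model. The sketch below is a minimal, assumed PyTorch encoder-decoder trained with teacher forcing on one toy (state, rationale) pair; the vocabulary sizes, dimensions, and token ids are placeholders rather than the paper's actual architecture.

```python
import torch
import torch.nn as nn

STATE_VOCAB, TEXT_VOCAB, HIDDEN = 50, 100, 64
SOS, EOS = 1, 2  # assumed special-token ids in the rationale vocabulary

class RationaleGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.state_emb = nn.Embedding(STATE_VOCAB, HIDDEN)
        self.encoder = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.text_emb = nn.Embedding(TEXT_VOCAB, HIDDEN)
        self.decoder = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, TEXT_VOCAB)

    def forward(self, state_tokens, rationale_in):
        # Encode the observed game state/action sequence ...
        _, h = self.encoder(self.state_emb(state_tokens))
        # ... then decode the rationale with teacher forcing.
        dec_out, _ = self.decoder(self.text_emb(rationale_in), h)
        return self.out(dec_out)

model = RationaleGenerator()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One toy (state, rationale) pair standing in for think-aloud training data.
state = torch.randint(0, STATE_VOCAB, (1, 12))    # tokenized game state + action
rationale = torch.tensor([[SOS, 7, 8, 9, EOS]])   # tokenized explanation

for _ in range(10):
    logits = model(state, rationale[:, :-1])      # decoder inputs shifted right
    loss = loss_fn(logits.reshape(-1, TEXT_VOCAB), rationale[:, 1:].reshape(-1))
    optim.zero_grad()
    loss.backward()
    optim.step()
```

At inference time the decoder would be run autoregressively from the start token to produce a rationale for an unseen game state.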

    Rapid adaptation of video game AI

