    Virtual Battlespace Behavior Generation Through Class Imitation

    Military organizations need realistic training scenarios to ensure mission readiness. Developing the skills required to differentiate combatants from non-combatants is essential to upholding the international law of armed conflict. In simulated training environments, one open challenge is to simulate the appearance and behavior of combatant and non-combatant agents realistically. This thesis outlines the construction of a data-driven agent capable of imitating the behaviors of the Virtual BattleSpace 2 behavior classes while the agent advances toward a geographically specific goal. The approach and the resulting agent support the idea that opponent and non-combatant behaviors inside simulated environments can be improved through behavioral imitation.
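    The abstract does not specify the learning method; as a rough, hypothetical sketch of the general idea of behavioral imitation, the Python snippet below fits a simple policy to recorded state-action pairs from scripted behavior classes and queries it at run time. The feature layout, action set, and synthetic data are illustrative assumptions, not the thesis agent.

```python
# Hypothetical behavioral-cloning sketch: fit a policy to state-action
# pairs recorded from scripted behavior classes, then query it at run time.
# Feature layout, action set, and the synthetic data are illustrative
# assumptions, not the thesis agent.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Placeholder demonstrations: 1000 states with 6 features each (e.g.,
# distance/bearing to goal, nearby-threat flags) and 4 discrete actions.
states = rng.normal(size=(1000, 6))
actions = rng.integers(0, 4, size=1000)

policy = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300)
policy.fit(states, actions)

def act(state: np.ndarray) -> int:
    """Return the imitated action for one observed state; a separate
    navigation layer would bias movement toward the geographic goal."""
    return int(policy.predict(state.reshape(1, -1))[0])

print(act(rng.normal(size=6)))
```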

    Dynamic difficulty adjustment of serious-game based on synthetic fog using activity theory model

    This study used the activity theory model to determine dynamic difficulty adjustment in a serious game based on synthetic fog. Different difficulty levels were generated in a 3-dimensional game environment by varying fog thickness. The activity theory model in serious games aims to facilitate development analysis in terms of learning content, the equipment used, and the resulting in-game action. Difficulty levels vary with the player's ability because the game is expected to reduce boredom and frustration. Furthermore, this study simulated scenarios with various conditions, scores, remaining time, and synthetic-player lives. The experimental results showed that the system can change the game environment with different fog thicknesses according to the synthetic player's parameters.
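    The abstract does not give the adjustment rule itself; as a minimal sketch of the idea, the snippet below maps synthetic-player parameters (score, remaining time, lives) to a fog density. The linear weighting and parameter ranges are assumptions, not the study's model.

```python
# Hypothetical sketch of fog-based dynamic difficulty adjustment (DDA).
# The weighting scheme and parameter ranges are illustrative assumptions,
# not the model described in the study.

def fog_density(score: float, time_remaining: float, lives: int,
                max_score: float = 100.0, max_time: float = 300.0,
                max_lives: int = 3) -> float:
    """Map synthetic-player state to a fog density in [0, 1].

    A stronger player (high score, time to spare, lives intact) gets
    thicker fog, i.e. a harder game; a struggling player gets thinner fog.
    """
    skill = (0.5 * min(score / max_score, 1.0)
             + 0.3 * min(time_remaining / max_time, 1.0)
             + 0.2 * (lives / max_lives))
    # Clamp so the scene is never fully clear or fully opaque.
    return max(0.1, min(0.9, skill))

print(fog_density(score=90, time_remaining=200, lives=3))  # ~0.85 (hard)
print(fog_density(score=20, time_remaining=40, lives=1))   # ~0.21 (easy)
```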

    Two Gaps That Need to be Filled in Order to Trust AI in Complex Battle Scenarios

    Excerpt from the Proceedings of the Nineteenth Annual Acquisition Research Symposium. In human terms, trust is earned. This paper presents an approach by which an AI-based Course of Action (COA) recommendation algorithm (CRA) can earn human trust. It introduces a nine-stage process (NSP) divided into three phases, where the first two phases close two critical logic gaps necessary to build a trustworthy CRA; the final phase involves deployment of a trusted CRA. Historical examples provide arguments for why trust must be earned, beyond the CRA merely explaining its recommendations, especially when battle complexity and an opponent's surprise actions are being addressed. The paper discusses the effects that surprise actions had on past battles and how AI might have made a difference, but only if the degree of trust was high. To achieve this goal, the NSP introduces modeling constructs called EVEs, which allow knowledge from varying sources and forms to be collected, integrated, and refined during all three phases. Using EVEs, the CRA can integrate knowledge from wargamers conducting tabletop discussions as well as from operational test engineers working with actual technology during product testing. EVEs allow CRAs to be trained with a combination of theory and practice, yielding more practical and accurate recommendations. Approved for public release; distribution is unlimited.

    Structuring AI Teammate Communication: An Exploration of AI's Communication Strategies in Human-AI Teams

    In the past decades, artificial intelligence (AI) has been implemented in various domains to facilitate humans in their work, such as healthcare and the automotive industry. Such applications of AI have led to increasing attention on human-AI teaming, where AI closely collaborates with humans as a teammate. An AI teammate is expected to coordinate with humans by sharing task-related information, predicting other teammates' behaviors, and progressing team tasks accordingly. To complete these team activities effectively, AI teammates must communicate with humans, for example by sharing updates and checking team progress. Even though communication is a core element of teamwork that helps to achieve effective coordination, how to design and structure human-AI communication in teaming environments remains unclear. Given the context-dependent character of communication, research on human-AI team communication needs to focus on specific communication components, such as the proactivity of communication and communication content. Accordingly, this dissertation explores how AI teammates' communication should be structured through three studies, each of which examines a critical component of effective AI communication: (1) communication proactivity, (2) communication content (explanation), and (3) communication approach (verbal vs. non-verbal). Together, these studies provide insights into how AI teammates' communication can be integrated into teamwork and how to design AI teammate communication in human-AI teaming.
    Study 1 explores communication proactivity and its impact on team processes and team performance. Communication proactivity here refers to whether an AI teammate proactively communicates with human teammates, i.e., proactively pushes information to them. Experimental analysis shows that AI teammates' proactive communication plays a crucial role in shaping human perceptions, such as perceived teammate performance and satisfaction with the teammate. Importantly, teams with a non-proactively communicating AI teammate improve team performance more than teams with a proactively communicating AI as the human and the AI collaborate more. This study identifies the positive impact of proactive AI communication at the initial stage of task coordination, as well as the potential need for flexibility in AI communication proactivity (i.e., once the human and AI teammates' coordination pattern forms, the AI can communicate non-proactively).
    Study 2 examines communication content, focusing on AI explanations and their impact on human perceptions in teaming environments. Results indicate that an AI's explanations, as part of its communication content, do not always positively impact human trust in human-AI teaming; instead, their impact depends on the specific collaboration scenario. AI explanations facilitate trust in the AI teammate when they explain why the AI disobeyed a human's orders, but hinder trust when they explain why the AI lied to humans. In addition, an AI explaining why it ignored a human teammate's injury was perceived as more effective than an AI providing no such explanation. These findings emphasize the context-dependent character of AI communication content, with a focus on the AI's explanations of its actions.
    Study 3 investigates the AI's communication approach, manipulated as verbal vs. non-verbal communication. Results indicate that AI teammates' verbal/non-verbal communication does not impact human trust in the AI teammate, but it facilitates the maintenance of humans' situation awareness in task coordination. In addition, an AI with non-verbal communication is perceived as having lower communication quality and lower performance. Importantly, an AI with non-verbal communication yields better team performance in human-human-AI teams than in human-AI-AI teams, whereas an AI with verbal communication yields better team performance in human-AI-AI teams than in human-human-AI teams. Together, these three studies address multiple research gaps in human-AI team communication and provide a holistic view of the design and structure of AI communication by examining three specific aspects of communication in human-AI teaming. Each study also proposes practical design implications for AI communication in human-AI teams, which will assist AI designers and developers in creating better AI teammates that support humans in teaming environments.

    Actors, Institutions and Innovation Processes in New Path Creation: The Regional Emergence and Evolution of Wind Energy Technology in Germany

    The recent literature in evolutionary economic geography points out that new technologies do not emerge randomly across space. The evolution of the economic landscape is associated with processes of path creation, regional branching, and regional path dependence. However, the underlying processes and the role of the actors involved are under-investigated and poorly understood. This research gap is the starting point of this dissertation. The thesis focuses on the actors, mechanisms, and processes of path creation and the co-evolution of technology and institutions. The overall aim of the dissertation is to provide a theoretical foundation and empirical evidence to understand and explain the regional emergence and evolution of new technologies. The key question of the thesis is: where, how, by whom, and under which conditions do new technologies emerge? Based on research in evolutionary economic geography, institutional economic geography, and social and organizational science, a revised theoretical and conceptual framework is developed for analyzing and explaining the regional emergence and evolution of new technologies. The framework provides an actor-centered, dynamic perspective and goes beyond technological diversification and regional branching processes. The dissertation thus contributes to the current theoretical debate on path creation and on the role of institutions and institutional change in the evolution of new technologies. The empirical analysis is based on an explanatory case study of the emergence and evolution of onshore wind energy technology in Germany. A qualitative content analysis was employed; data were collected through document analysis and 40 in-depth interviews with relevant stakeholders. The findings show that, besides energy and environmental policies at the national level, path creation was strongly influenced by the regional institutional environment. They also give qualitative insights into the different types of actors and their motives and activities in an emerging technology. Concerning mechanisms, the thesis identifies entrepreneurial activities and regional industry diversification as the key mechanisms in new regional path creation; these were later strengthened by various exogenous impulses, and the relevance of the processes differs between regions. The thesis also reveals interrelations and feedback mechanisms between technological development and the institutional environment, finding that the co-evolution of supporting institutions such as technical standards, the Electricity Feed-in Act, and the Renewable Energy Sources Act was a key success factor for the evolution of wind energy technology, and that this co-evolution was driven by actors who shaped and changed their institutional environment.

    Responsible machine learning: supporting privacy preservation and normative alignment with multi-agent simulation

    This dissertation aims to advance responsible machine learning through multi-agent simulation (MAS). I introduce and demonstrate an open-source, multi-domain discrete event simulation framework and use it to: (1) improve state-of-the-art privacy-preserving federated learning and (2) construct a novel method for normatively aligned learning from synthetic negative examples.
    Due to their complexity and capacity, modern machine learning (ML) models can require vast user-collected data sets for training. The current formulation of federated learning arose in 2016, after repeated exposures of sensitive user information from centralized data stores where mobile and wearable training data were aggregated. Privacy-preserving federated learning (PPFL) soon added stochastic and cryptographic layers to protect against additional vectors of data exposure. Recent state-of-the-art protocols combine differential privacy (DP) and secure multiparty computation (MPC) to keep client training data set parameters private from an "honest but curious" server, one that is legitimately involved in the learning process but attempts to infer information it should not have. Investigation of PPFL can be cost-prohibitive if each iteration of a proposed experimental protocol is distributed to virtual computational nodes geolocated around the world; it can also be inaccurate when locally simulated without concern for client parallelism, accurate timekeeping, or computation and communication loads. In this work, a recent PPFL protocol is instantiated as a single-threaded MAS to show that its model accuracy, deployed parallel running time, and resistance to inference of client model parameters can be inexpensively evaluated. The protocol is then extended, using oblivious distributed differential privacy, to a new state of the art that is secure against collusion among all but one participant, with an empirical demonstration that the new protocol improves privacy with no loss of accuracy to the final model.
    State-of-the-art reinforcement learning (RL) is also increasingly complex and hard to interpret, such that a sequence of individually innocuous actions may produce an unexpectedly harmful result. Safe RL seeks to avoid such results through techniques like reward variance reduction, error state prediction, or constrained exploration of the state-action space. The field has been heavily influenced by robotics and finance, so it is primarily concerned with physical failures, like a helicopter crash or a robot-human workplace collision, or monetary failures, like the depletion of an investment account. The related field of normative RL is concerned with obeying the behavioral expectations of a broad human population, like respecting personal space or not sneaking up behind people. Because normative behavior often implicates safety (for example, the assumption that an autonomous navigation robot will not walk through a human to reach its goal more quickly), there is significant overlap between the two areas. Yet some problem domains are not easily addressed by current approaches in safe or normative RL: the undesired behavior is subtle, violates legal or ethical rather than physical or monetary constraints, and may be composed of individually normative actions. In this work, I consider an intelligent stock-trading agent that maximizes profit but may inadvertently learn "spoofing", a form of illegal market manipulation that can be difficult to detect. Using a MAS-based financial market, I safely elicit a variety of spoofing behaviors, learn to distinguish them from other profit-driven strategies, and carefully analyze the empirical results. I then demonstrate how this spoofing recognizer can be used as a normative guide to train an intelligent trading agent that generates positive returns while avoiding spoofing behaviors, even when their adoption would increase short-term profits. I believe this contribution to normative RL, a method for deriving normative alignment from synthetic non-normative action sequences, should generalize to many other problem domains.
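    The dissertation's specific protocol is not reproduced here; as a minimal sketch of the generic pattern behind DP-protected federated averaging, the snippet below has each client clip its model update and add calibrated Gaussian noise before a server averages the updates. Function names and noise parameters are illustrative assumptions.

```python
# Minimal sketch of the generic DP-protected federated-averaging pattern:
# each client clips its model update and adds Gaussian noise before the
# server averages. Illustrative only -- not the dissertation's protocol,
# which also uses secure multiparty computation and oblivious distributed DP.
import numpy as np

def clip_update(update: np.ndarray, clip_norm: float) -> np.ndarray:
    """Scale the update so its L2 norm is at most clip_norm."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def noisy_client_update(update, clip_norm, noise_std, rng):
    """Client side: clip, then add noise calibrated to the clipping bound."""
    clipped = clip_update(update, clip_norm)
    return clipped + rng.normal(0.0, noise_std * clip_norm, size=update.shape)

def federated_average(client_updates):
    """Server side: a plain mean of the (already noisy) client updates."""
    return np.mean(client_updates, axis=0)

rng = np.random.default_rng(0)
true_updates = [rng.normal(size=8) for _ in range(100)]
noisy = [noisy_client_update(u, clip_norm=1.0, noise_std=0.2, rng=rng)
         for u in true_updates]
# Per-client noise hides individual updates; the mean remains usable
# because independent noise largely cancels across many clients.
print(federated_average(noisy))
```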

    Using MapReduce Streaming for Distributed Life Simulation on the Cloud

    Distributed software simulations are indispensable in the study of large-scale life models but often require technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome this complexity by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway's Life according to a general MR streaming pattern. We chose Life because it is simple enough to serve as a testbed for MR's applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms' performance on Amazon's Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique, called strip partitioning, can reduce the execution time of continuous Life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
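    The paper's optimized algorithms, including strip partitioning, are not reproduced here; as a minimal sketch of MR streaming for a lattice model, the Hadoop-Streaming-style mapper/reducer pair below computes one generation of Conway's Life: the mapper emits each live cell's influence on its eight neighbors, and the reducer applies the birth/survival rules. The cell-per-line input format is an assumption.

```python
# mapper.py -- one MR streaming step of Conway's Life (sketch).
# Input: one live cell per line as "x y". For each live cell, emit one unit
# of neighbor influence to each of its 8 neighbors, plus a liveness marker.
import sys

for line in sys.stdin:
    parts = line.split()
    if len(parts) != 2:
        continue
    x, y = int(parts[0]), int(parts[1])
    print(f"{x},{y}\tALIVE")  # the cell itself is alive this generation
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx or dy:
                print(f"{x + dx},{y + dy}\t1")  # neighbor influence
```

```python
# reducer.py -- applies the birth/survival rules per cell (sketch).
# Hadoop Streaming delivers reducer input sorted by key, so consecutive
# lines for the same cell can be grouped with itertools.groupby.
import sys
from itertools import groupby

def keyed(stream):
    for line in stream:
        key, _, value = line.rstrip("\n").partition("\t")
        yield key, value

for cell, group in groupby(keyed(sys.stdin), key=lambda kv: kv[0]):
    values = [v for _, v in group]
    alive = "ALIVE" in values
    neighbors = sum(1 for v in values if v == "1")
    # Birth with exactly 3 live neighbors; survival with 2 or 3.
    if neighbors == 3 or (alive and neighbors == 2):
        print(cell.replace(",", " "))  # emit next-generation live cell
```

    Because the reducer's output format matches the mapper's input format, successive generations can be chained as successive MR jobs.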