
Serious Gaming for Building a Basis of Certification via Trust and Trustworthiness of Autonomous Systems

Abstract

Autonomous systems governed by a variety of adaptive and nondeterministic algorithms are being planned for inclusion in safety-critical environments, such as unmanned aircraft and space systems in both civilian and military applications. However, until autonomous systems are proven and perceived to be capable and resilient in the face of unanticipated conditions, humans will be reluctant or unable to delegate authority, remaining in control aided by machine-based information and decision support. Proven capability, or trustworthiness, is a necessary component of certification. Perceived capability is a component of trust. Trustworthiness is an attribute of a cyber-physical system that requires context-driven metrics to prove and certify. Trust is an attribute of the agents participating in the system and is gained over time and multiple interactions through trustworthy behavior and transparency. Historically, artificial intelligence and machine learning systems have provided answers without explanation, offering no rationale or insight into the machine's reasoning. In order to function as trusted teammates, machines must be able to explain their decisions and actions. This transparency is a product of both content and communication. NASA's Autonomy Teaming & TRAjectories for Complex Trusted Operational Reliability (ATTRACTOR) project seeks to build a basis for certification of autonomous systems by establishing metrics for trustworthiness and trust in multi-agent team interactions, using AI (Artificial Intelligence) explainability and persistent modeling and simulation, in the context of mission planning and execution, with analyzable trajectories.
Inspired by Massively Multiplayer Online Role Playing Games (MMORPG) and Serious Gaming, the proposed ATTRACTOR modeling and simulation environment is similar to online gaming environments in which player (i.e., agent) participants interact with each other, affect their environment, and expect the simulation to persist and change regardless of any individual agent's active participation. This persistent simulation environment will accommodate individual agents, groups of self-organizing agents, and large-scale infrastructure behavior. The effects of the emerging adaptation and coevolution can be observed and measured to build a basis of measurable trustworthiness and trust, toward certification of safety-critical autonomous systems.