Safety, Trust, and Ethics Considerations for Human-AI Teaming in Aerospace Control
Designing a safe, trusted, and ethical AI may be practically impossible;
however, designing AI with safe, trusted, and ethical use in mind is possible
and necessary in safety- and mission-critical domains like aerospace. The terms
safe, trusted, and ethical use of AI are often used interchangeably; however, a
system can be safely used but not trusted or ethical, have a trusted use that
is not safe or ethical, and have an ethical use that is not safe or trusted.
This manuscript serves as a primer to illuminate the nuanced differences
between these concepts, with a specific focus on applications of Human-AI
teaming in aerospace system control, where humans may be in, on, or
out of the loop of decision-making.
A Universal Framework for Generalized Run Time Assurance with JAX Automatic Differentiation
With the rise of increasingly complex autonomous systems powered by black box
AI models, there is a growing need for Run Time Assurance (RTA) systems that
provide online safety filtering of untrusted primary controller outputs.
Currently, research in RTA tends to be ad hoc and inflexible, diminishing
collaboration and the pace of innovation. The Safe Autonomy Run Time Assurance
Framework presented in this paper provides a standardized interface for RTA
modules and a set of universal implementations of constraint-based RTA capable
of providing safety assurance given arbitrary dynamical systems and
constraints. Built around JAX, this framework leverages automatic
differentiation to populate advanced optimization-based RTA methods, minimizing
user effort and error. To validate the feasibility of this framework, a
simulation of a multi-agent spacecraft inspection problem is presented with
safety constraints on position and velocity.
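The optimization-based RTA the abstract describes can be sketched for the simplest possible case. The sketch below (all names and numbers are illustrative, not the framework's API) shows a constraint-based filter for a 1-D single integrator; in the actual framework `jax.grad` would supply the constraint gradient exactly, and a finite difference stands in here so the sketch runs without JAX installed.

```python
# Sketch (assumptions): constraint-based RTA for 1-D dynamics x' = u with
# safety constraint h(x) = X_MAX - x >= 0. The enforced barrier condition is
# grad_h(x)*u + ALPHA*h(x) >= 0; in the JAX framework the gradient would come
# from jax.grad(h), emulated here by a finite difference.

X_MAX = 10.0   # hypothetical position limit
ALPHA = 1.0    # class-K gain on the barrier condition

def h(x):
    """Safety constraint: h(x) >= 0 iff the state is safe."""
    return X_MAX - x

def grad_h(x, eps=1e-6):
    """Finite-difference stand-in for jax.grad(h)(x)."""
    return (h(x + eps) - h(x - eps)) / (2 * eps)

def rta_filter(x, u_des):
    """Pass u_des through when the barrier condition holds; otherwise
    return the closest control satisfying it (scalar case: a bound)."""
    g = grad_h(x)                       # dh/dx, here -1
    # g*u + ALPHA*h(x) >= 0  =>  u <= ALPHA*h(x)/(-g) since g < 0
    u_bound = ALPHA * h(x) / (-g)
    return min(u_des, u_bound)

print(rta_filter(0.0, 5.0))   # far from the limit: desired control passes
print(rta_filter(9.5, 5.0))   # near the limit: command is clipped
```

The appeal of automatic differentiation here is that `grad_h` never has to be derived by hand, so swapping in a different constraint function changes nothing else in the filter.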
Bridging the Gap: Applying Assurance Arguments to MIL-HDBK-516C Certification of a Neural Network Control System with ASIF Run Time Assurance Architecture
Recent advances in artificial intelligence and machine learning may soon
yield paradigm-shifting benefits for aerospace systems. However, complexity and
possible continued online learning make neural network control systems (NNCS)
difficult or impossible to certify under the United States Military
Airworthiness Certification Criteria defined in MIL-HDBK-516C. Run time
assurance (RTA) is a control system architecture designed to maintain safety
properties regardless of whether a primary control system is fully verifiable.
This work examines how to satisfy compliance with MIL-HDBK-516C while using
active set invariance filtering (ASIF), an advanced form of RTA not envisaged
by the 516C committee. ASIF filters the commands from a primary controller,
passing on safe commands while optimally modifying unsafe commands to ensure
safety with minimal deviation from the desired control action. This work
examines leveraging the core theory behind ASIF as an assurance argument
explaining novel satisfaction of 516C compliance criteria. The result
demonstrates how to support compliance of novel technologies with 516C and
elaborates how such standards might be updated for emerging technologies.
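The "optimally modifying unsafe commands" behavior of ASIF can be illustrated in its simplest form. For a single affine safety condition a·u ≤ b, the minimum-deviation correction a quadratic-program solver would return reduces to a closed-form projection onto the safe half-space; the sketch below (illustrative names and numbers, not the paper's implementation) shows that pass-through/project behavior.

```python
# Sketch (assumptions): the core ASIF operation for one affine safety
# condition a . u <= b -- pass safe commands unchanged, project unsafe
# commands onto the constraint boundary with minimal Euclidean deviation.

def asif_project(u_des, a, b):
    """Return u_des unchanged if a.u_des <= b, else the closest
    (Euclidean) command satisfying a.u = b."""
    dot = sum(ai * ui for ai, ui in zip(a, u_des))
    if dot <= b:
        return list(u_des)                       # safe: pass through
    scale = (dot - b) / sum(ai * ai for ai in a)
    return [ui - scale * ai for ui, ai in zip(u_des, a)]

print(asif_project([1.0, 0.0], [1.0, 0.0], 2.0))  # safe, unchanged
print(asif_project([3.0, 1.0], [1.0, 0.0], 2.0))  # projected to the boundary
```

Full ASIF formulations stack many such conditions (one per control barrier function) and solve the resulting QP online, but the one-constraint case captures the "minimal deviation from the desired control action" property the abstract describes.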
Searching for Optimal Runtime Assurance via Reachability and Reinforcement Learning
A runtime assurance system (RTA) for a given plant enables the exercise of an
untrusted or experimental controller while assuring safety with a backup (or
safety) controller. The relevant computational design problem is to create a
logic that assures safety by switching to the safety controller as needed,
while maximizing some performance criteria, such as the utilization of the
untrusted controller. Existing RTA design strategies are well-known to be
overly conservative and, in principle, can lead to safety violations. In this
paper, we formulate the optimal RTA design problem and present a new approach
for solving it. Our approach relies on reward shaping and reinforcement
learning. It can guarantee safety and leverage machine learning technologies
for scalability. We have implemented this algorithm and present experimental
results comparing our approach with state-of-the-art reachability and
simulation-based RTA approaches in a number of scenarios using aircraft models
in 3D space with complex safety requirements. Our approach can guarantee safety
while increasing utilization of the experimental controller over existing
approaches.
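The switching logic at the heart of this design problem can be sketched minimally. The fragment below (dynamics, horizon, and controllers are illustrative stand-ins, not the paper's method) shows a simulation-based monitor: run the untrusted controller forward a short horizon and fall back to the safety controller if any predicted state violates the constraint. Tuning how eagerly this switch fires is exactly the conservatism-versus-utilization trade-off the paper optimizes.

```python
# Sketch (assumptions): simulation-based RTA switching for 1-D integrator
# dynamics. The monitor predicts HORIZON steps under the untrusted
# controller and engages the backup controller on a predicted violation.

DT, HORIZON, X_LIMIT = 0.1, 10, 5.0

def step(x, u):
    return x + DT * u                   # 1-D integrator dynamics

def untrusted(x):
    return 2.0                          # experimental controller (pushes toward limit)

def backup(x):
    return -1.0                         # safety controller (retreats)

def switch_logic(x):
    """Return the untrusted command if the predicted trajectory stays
    safe over HORIZON steps, else the backup command."""
    x_pred = x
    for _ in range(HORIZON):
        x_pred = step(x_pred, untrusted(x_pred))
        if x_pred >= X_LIMIT:
            return backup(x)
    return untrusted(x)

print(switch_logic(0.0))  # far from the limit: untrusted controller runs
print(switch_logic(4.0))  # predicted violation: backup engaged
```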
Systems Theoretic Process Analysis of a Run Time Assured Neural Network Control System
This research considers the problem of identifying safety constraints and
developing Run Time Assurance (RTA) for Deep Reinforcement Learning (RL)
Tactical Autopilots that use neural network control systems (NNCS). This
research studies a specific use case of an NNCS performing autonomous formation
flight while an RTA system provides collision avoidance and geofence
assurances. First, Systems Theoretic Accident Models and Processes (STAMP) is
applied to identify accidents, hazards, and safety constraints as well as
define a functional control system block diagram of the ground station, manned
flight lead, and surrogate unmanned wingman. Then, Systems Theoretic Process
Analysis (STPA) is applied to the interactions of the ground station,
manned flight lead, surrogate unmanned wingman, and internal elements of the
wingman aircraft to identify unsafe control actions, scenarios leading to each,
and safety requirements to mitigate risks. This research is the first
application of STAMP and STPA to an NNCS bounded by RTA.
Elicitation and formal specification of run time assurance requirements for aerospace collision avoidance systems
Run Time Assurance (RTA) systems are proposed as a complementary verification
approach to facilitate near-term certification of advanced aerospace decision
and control systems. RTA systems monitor the state of a cyber-physical system
(CPS) online for violations of predetermined boundaries that trigger a switch
to a simple, safety remediation controller. For example, automatic collision
avoidance systems are RTA systems that monitor the CPS state for violations of
proximity constraints and switch to a backup controller that assures safe
separation. Design of RTA systems is generally ad hoc and specific to the
application, although common design elements and requirements of RTA systems
cross applications and domains. This research elicits, formally specifies, and
analyzes RTA-based collision avoidance system requirements for a conceptual
spacecraft conducting autonomous close-proximity operations. First, the
Automatic Ground Collision Avoidance System developed for aircraft is studied
to identify common design elements and requirements of RTA last-instant
collision avoidance systems that cross the air and space domains. Second,
formal requirements specification templates are developed for a generalized
RTA architecture that extends the simplex architecture by accounting for human
interaction, system faults, and safety interlocks. Third, formal requirements
are elicited through the process of formal specification as well as from
common design elements and requirements, spacecraft guidance constraints in
the literature, and a structured hazard assessment. Finally, the requirements
are analyzed using compositional reasoning and formal model checking
verification techniques. (Ph.D. dissertation)
Run Time Assurance for Safety-Critical Systems: An Introduction to Safety Filtering Approaches for Complex Control Systems
Run Time Assurance (RTA) Systems are online verification mechanisms that
filter an unverified primary controller output to ensure system safety. The
primary control may come from a human operator, an advanced control approach,
or an autonomous control approach that cannot be verified to the same level as
simpler control system designs. The critical feature of RTA systems is their
ability to alter unsafe control inputs explicitly to assure safety. In many
cases, RTA systems can functionally be described as containing a monitor that
watches the state of the system and output of a primary controller, and a
backup controller that replaces or modifies control input when necessary to
assure safety. An important quality of an RTA system is that the assurance
mechanism is constructed in a way that is entirely agnostic to the underlying
structure of the primary controller. By effectively decoupling the enforcement
of safety constraints from performance-related objectives, RTA offers a number
of useful advantages over traditional (offline) verification. This article
provides a tutorial on developing RTA systems.
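The monitor-plus-backup decomposition and the claim that the assurance mechanism is agnostic to the primary controller can be sketched directly. In the sketch below (a minimal illustration, not any particular framework's API), the wrapper accepts any callable as the primary controller and never inspects its internals, which is what decouples safety enforcement from the performance objective.

```python
# Sketch (assumptions): generic RTA wrapper -- a monitor watches the state
# and the primary controller's output, and a backup controller replaces the
# command when the monitor flags it. The primary may be a human, an NN
# policy, or anything else; only its input/output interface is used.

class RTAWrapper:
    """Wraps an arbitrary primary controller with a safety monitor and a
    backup controller; the primary's internal structure is never inspected."""

    def __init__(self, primary, backup, is_safe):
        self.primary = primary     # any callable: state -> control
        self.backup = backup       # verified fallback: state -> control
        self.is_safe = is_safe     # monitor: (state, control) -> bool

    def control(self, x):
        u = self.primary(x)
        return u if self.is_safe(x, u) else self.backup(x)

# Usage: a command-magnitude limit enforced around an aggressive primary.
rta = RTAWrapper(
    primary=lambda x: 10.0,
    backup=lambda x: 0.0,
    is_safe=lambda x, u: abs(u) <= 3.0,
)
print(rta.control(0.0))  # primary command 10.0 is unsafe -> backup returns 0.0
```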