Towards explainable AI: directed inference of linear temporal logic constraints

Abstract

Many systems in robotics and beyond may be classified as mixed logical-dynamical (MLD) systems. These systems are subject to both logical constraints, which govern their safe operation and goals, and dynamical constraints, which describe their physical behavior. Such time-dependent constraints can be expressed in linear temporal logic (LTL). When the constraints are not known, inferring them offers a form of explanation for the system's behavior. Previous work has attempted to infer constraints for MLD systems using Bayesian methods, searching for optimally contrastive rules between "good" and "bad" system runs. However, owing to a reliance on an unknown prior distribution, as well as a limited search space, these efforts are unable to recover all desired constraints. We propose an alternative inference method called directed hypothesis space generation (DHSG). DHSG compares the system runs and constructs a full hypothesis space of all conjunctions and disjunctions of the desired LTL formula types. In simulation, DHSG recovered a full hypothesis space for each test case. However, due to its comparatively high computational demand, it also exhibited run times that increased significantly with state-space complexity. The computational load can be lightened by limiting the length of inferred formulas, at the cost of hypothesis space completeness. Moreover, the adjustable computation time of the Bayesian approach means that it retains an advantage in some use cases. Finally, for scenarios in which neither the LTL rules nor the state-space regions they govern are known, DHSG has the potential to construct the unknown regions, giving a basis on which to perform further inference. Region construction would apply to lesser-understood systems and presents a topic for future work.
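To make the hypothesis-space idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual DHSG algorithm): candidate atomic LTL formulas such as G(p) and F(p) are evaluated over finite "good" runs, and the survivors are closed under pairwise conjunction and disjunction. All names and semantics here are illustrative assumptions.

```python
# Hypothetical sketch of hypothesis-space generation over finite traces.
# This is illustrative only and not the paper's DHSG implementation.
from itertools import combinations

def holds_globally(trace, prop):
    """G(prop): prop holds in every state of the finite trace."""
    return all(prop in state for state in trace)

def holds_eventually(trace, prop):
    """F(prop): prop holds in at least one state of the trace."""
    return any(prop in state for state in trace)

def atomic_hypotheses(props):
    """Candidate formulas as (label, evaluator) pairs."""
    hyps = []
    for p in props:
        hyps.append((f"G({p})", lambda t, p=p: holds_globally(t, p)))
        hyps.append((f"F({p})", lambda t, p=p: holds_eventually(t, p)))
    return hyps

def build_hypothesis_space(good_runs, props):
    """Keep atomic formulas satisfied by every good run, then close
    the surviving set under pairwise conjunction and disjunction."""
    survivors = [(lbl, f) for lbl, f in atomic_hypotheses(props)
                 if all(f(run) for run in good_runs)]
    space = dict(survivors)
    for (l1, f1), (l2, f2) in combinations(survivors, 2):
        space[f"({l1} & {l2})"] = lambda t, f1=f1, f2=f2: f1(t) and f2(t)
        space[f"({l1} | {l2})"] = lambda t, f1=f1, f2=f2: f1(t) or f2(t)
    return space

# Example: each run is a list of sets of propositions true at each step.
good = [[{"safe"}, {"safe", "goal"}],
        [{"safe"}, {"safe"}, {"safe", "goal"}]]
space = build_hypothesis_space(good, ["safe", "goal"])
# G(safe) and F(goal) survive; G(goal) does not (goal is false initially).
```

The pairwise closure step also hints at why run time grows quickly with state-space complexity: the number of combined formulas grows quadratically per closure pass, which is the scaling cost that bounding formula length is meant to curb.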