Foundations of probability-raising causality in Markov decision processes
This work introduces a novel cause-effect relation in Markov decision
processes using the probability-raising principle. First, sets of states are
considered as causes and effects; this is subsequently extended to regular
path properties as effects and then as causes. The paper lays the mathematical
foundations and analyzes the algorithmic properties of these cause-effect
relations. This includes algorithms for checking cause conditions given an
effect and deciding the existence of probability-raising causes. As the
definition allows for sub-optimal coverage properties, quality measures for
causes inspired by concepts of statistical analysis are studied. These include
recall, coverage ratio and f-score. The computational complexity for finding
optimal causes with respect to these measures is analyzed.
Comment: Submission for Logical Methods in Computer Science (special issue
FoSSaCS 2022). arXiv admin note: substantial text overlap with
arXiv:2201.0876
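The quality measures named above are inspired by statistical analysis. As a reminder of the standard classification definitions they are modeled on (the paper's probabilistic variants over causes and effects may differ), here is a minimal sketch; the tp/fp/fn counts are hypothetical placeholders:

```python
# Standard classification quality measures that the paper's cause-quality
# measures are inspired by; the example counts below are hypothetical.
def recall(tp, fn):
    """Fraction of actual effect behaviors covered by the cause."""
    return tp / (tp + fn)

def precision(tp, fp):
    """Fraction of cause-covered behaviors that actually lead to the effect."""
    return tp / (tp + fp)

def f_score(p, r):
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

# Example: a cause covering 8 of 10 effect behaviors, with 2 false alarms.
p, r = precision(8, 2), recall(8, 2)
print(round(p, 3), round(r, 3), round(f_score(p, r), 3))
```

The f-score trades off the two concerns the abstract mentions: covering the effect (recall) versus avoiding spurious coverage (precision).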
Probabilistic causes in Markov chains
By combining two central paradigms of causality, namely counterfactual reasoning and probability-raising, we introduce a probabilistic notion of cause in Markov chains. Such a cause consists of finite executions of the probabilistic system after which the probability of an ω-regular effect exceeds a given threshold. The cause, as a set of executions, then has to cover all behaviors exhibiting the effect. With these properties, such causes can be used for monitoring purposes where the aim is to detect faulty behavior before it actually occurs. To choose which cause should be computed, we introduce multiple types of costs that capture the consumption of resources by the system or monitor from different perspectives, and study the complexity of computing cost-minimal causes.
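The probability-raising principle underlying both abstracts can be illustrated on a toy Markov chain: a state (or execution prefix) is a candidate cause if the probability of the effect after reaching it exceeds the overall effect probability. The chain, its states, and its transition probabilities below are hypothetical, chosen only for illustration:

```python
# Toy acyclic Markov chain; "effect" and "safe" are absorbing states.
# All states, names, and probabilities are hypothetical illustrations.
P = {
    "init":  {"cand": 0.3, "other": 0.7},
    "cand":  {"effect": 0.8, "safe": 0.2},
    "other": {"effect": 0.1, "safe": 0.9},
    "effect": {},
    "safe": {},
}

def reach_prob(state, target="effect"):
    """Probability of eventually reaching `target` (chain is acyclic,
    so plain recursion over successors suffices)."""
    if state == target:
        return 1.0
    return sum(p * reach_prob(nxt, target) for nxt, p in P[state].items())

p_overall = reach_prob("init")   # 0.3*0.8 + 0.7*0.1 = 0.31
p_after_c = reach_prob("cand")   # 0.8
# Probability-raising condition: effect is more likely after the candidate.
print(p_after_c > p_overall)
```

Here reaching "cand" raises the effect probability from 0.31 to 0.8, so "cand" satisfies the probability-raising condition; a monitor could raise an alarm as soon as such a prefix is observed, before the effect itself occurs.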