5 research outputs found

    A Study of the Learnability of Relational Properties: Model Counting Meets Machine Learning (MCML)

    Full text link
    This paper introduces the MCML approach for empirically studying the learnability of relational properties that can be expressed in the well-known software design language Alloy. A key novelty of MCML is the quantification of the performance of, and semantic differences among, trained machine learning (ML) models, specifically decision trees, with respect to entire (bounded) input spaces, and not just for given training and test datasets (as is the common practice). MCML reduces the quantification problems to the classic complexity-theory problem of model counting, and employs state-of-the-art model counters. The results show that relatively simple ML models can achieve surprisingly high performance (accuracy and F1-score) when evaluated in the common setting of using training and test datasets - even when the training dataset is much smaller than the test dataset - suggesting that learning relational properties is seemingly simple. However, MCML metrics based on model counting show that performance can degrade substantially when tested against the entire (bounded) input space, indicating the high complexity of precisely learning these properties and the usefulness of model counting in quantifying the true performance.
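
    The reduction at the heart of MCML is easy to picture on a toy instance. The sketch below is a hypothetical illustration, not MCML's implementation: exhaustive enumeration of a small bounded input space stands in for a real model counter, a hand-written predicate stands in for an Alloy property, and a hard-coded classifier stands in for a trained decision tree. It computes whole-space accuracy and F1-score exactly rather than estimating them on a sampled test set.

```python
from itertools import product

# Toy bounded input space: 3x3 boolean adjacency matrices, i.e. 9 bits.
N = 3

def property_holds(bits):
    """Ground-truth relational property: the relation is reflexive."""
    return all(bits[i * N + i] == 1 for i in range(N))

def learned_model(bits):
    """Hypothetical stand-in for a trained decision tree: it checks only
    two of the three diagonal entries, so it over-approximates."""
    return bits[0] == 1 and bits[1 * N + 1] == 1

# Exhaustive enumeration of the whole bounded space plays the role of the
# model counter: every count below is exact, not a test-set estimate.
tp = fp = fn = tn = 0
for bits in product((0, 1), repeat=N * N):
    truth, pred = property_holds(bits), learned_model(bits)
    if truth and pred:
        tp += 1
    elif pred:
        fp += 1
    elif truth:
        fn += 1
    else:
        tn += 1

accuracy = (tp + tn) / (tp + fp + fn + tn)
f1 = 2 * tp / (2 * tp + fp + fn)
print(f"whole-space accuracy = {accuracy:.3f}, F1 = {f1:.3f}")
# prints: whole-space accuracy = 0.875, F1 = 0.667
```

    On a space this small the enumeration is trivial; the point of MCML is to obtain the same exact whole-space counts at scale by handing the agreement/disagreement queries to state-of-the-art model counters instead of enumerating.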

    A Quantitative Flavour of Robust Reachability

    Full text link
    Many software analysis techniques attempt to determine whether bugs are reachable, but for security purposes this is only part of the story, as it does not indicate whether the bugs found could be easily triggered by an attacker. The recently introduced notion of robust reachability aims to fill this gap by distinguishing the inputs controlled by the attacker from those that are not. Yet this qualitative notion may be too strong in practice, leaving aside bugs that are mostly but not fully replicable. We propose here a quantitative version of robust reachability that is more flexible and still amenable to automation: quantitative robustness, a metric expressing how easily an attacker can trigger a bug while taking into account that they can influence only part of the program input, together with a dedicated quantitative symbolic execution technique (QRSE). Interestingly, QRSE relies on a variant of model counting (namely, functional E-MAJSAT) unseen so far in formal verification, but which has been studied in AI domains such as Bayesian networks, knowledge representation, and probabilistic planning. The existing solving methods from these fields turn out to be unsatisfactory for formal verification purposes, leading us to propose a novel parametric method. These results have been implemented and evaluated on two security-relevant case studies, demonstrating the feasibility and relevance of our ideas.
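
    The quantitative question QRSE answers can likewise be shown on a toy instance. The sketch below is a hedged illustration (the bug condition and all names are made up, and brute force stands in for an actual functional E-MAJSAT solver): given a bug predicate over attacker-controlled bits and uncontrolled bits, it searches for the attacker choice that maximizes the fraction of uncontrolled inputs triggering the bug - the exists-then-count shape of functional E-MAJSAT.

```python
from itertools import product

A_BITS, X_BITS = 2, 3  # attacker-controlled vs uncontrolled input bits

def bug(a, x):
    """Hypothetical bug condition: fires when the attacker sets both of
    their bits AND the uncontrolled input is not all zero."""
    return a[0] == 1 and a[1] == 1 and any(x)

# Brute force stands in for a functional E-MAJSAT solver: pick the
# attacker input maximizing how many uncontrolled inputs trigger the bug.
best_a, best_count = None, -1
for a in product((0, 1), repeat=A_BITS):
    count = sum(1 for x in product((0, 1), repeat=X_BITS) if bug(a, x))
    if count > best_count:
        best_a, best_count = a, count

robustness = best_count / 2 ** X_BITS
print(f"best attacker input {best_a}: quantitative robustness {robustness:.3f}")
# prints 0.875: the bug is mostly, but not fully, replicable, so plain
# (qualitative) robust reachability would reject it outright.
```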

    A Recursive Algorithm for Projected Model Counting

    No full text
    International audience
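
    Since this entry carries no abstract, a brief illustration of the underlying notion may help: a projected model count tallies the assignments to a designated subset of variables for which the remaining variables can be set to satisfy the formula. The sketch below is a generic brute-force illustration of that standard definition (the formula is made up), not the recursive algorithm the paper itself proposes.

```python
from itertools import product

def F(x1, x2, y):
    """Made-up formula over projection variables x1, x2 and an
    auxiliary variable y: (x1 or y) and (x2 or not y)."""
    return (x1 or y) and (x2 or (not y))

# Projected model count over {x1, x2}: count the assignments to the
# projection set for which SOME value of y satisfies F. Brute force
# stands in for a real projected model counter.
projected = sum(
    1
    for x1, x2 in product((False, True), repeat=2)
    if any(F(x1, x2, y) for y in (False, True))
)

total = sum(
    1
    for x1, x2, y in product((False, True), repeat=3)
    if F(x1, x2, y)
)

print(projected, total)  # 3 4: the plain count tallies every model, so it
# counts twice the (x1, x2) pair that admits two witnesses for y.
```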
