Learning Interpretable Temporal Properties from Positive Examples Only
We consider the problem of explaining the temporal behavior of black-box systems using human-interpretable models. To this end, based on recent research trends, we rely on the fundamental yet interpretable models of deterministic finite automata (DFAs) and linear temporal logic (LTL) formulas. In contrast to most existing works for learning DFAs and LTL formulas, we rely on only positive examples. Our motivation is that negative examples are generally difficult to observe, in particular, from black-box systems. To learn meaningful models from positive examples only, we design algorithms that rely on conciseness and language minimality of models as regularizers. To this end, our algorithms adopt two approaches: a symbolic one and a counterexample-guided one. While the symbolic approach exploits an efficient encoding of language minimality as a constraint satisfaction problem, the counterexample-guided one relies on generating suitable negative examples to prune the search. Both approaches provide us with effective algorithms with theoretical guarantees on the learned models. To assess the effectiveness of our algorithms, we evaluate all of them on synthetic data.
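The language-minimality regularizer described above can be sketched with a brute-force toy: enumerate small DFAs, keep those accepting every positive example, and prefer the one whose language (up to a length bound) is smallest. This is only an illustration of the idea, not the paper's symbolic CSP encoding or its counterexample-guided search; the example language, size bounds, and helper names are all invented here.

```python
from itertools import product

def enumerate_dfas(n_states, alphabet):
    """Yield all DFAs (delta, accepting) with n_states states; state 0 is initial."""
    symbols = list(alphabet)
    states = range(n_states)
    for targets in product(states, repeat=n_states * len(symbols)):
        delta = {(q, a): targets[q * len(symbols) + i]
                 for q in states for i, a in enumerate(symbols)}
        for acc_bits in product([False, True], repeat=n_states):
            yield delta, {q for q in states if acc_bits[q]}

def accepts(delta, accepting, word):
    q = 0
    for a in word:
        q = delta[(q, a)]
    return q in accepting

def language_size(delta, accepting, alphabet, max_len):
    """Number of accepted strings of length <= max_len (the regularizer)."""
    return sum(accepts(delta, accepting, w)
               for l in range(max_len + 1)
               for w in product(alphabet, repeat=l))

def learn_from_positives(positives, alphabet, max_states=2, max_len=4):
    """Among all small DFAs consistent with the positives, pick the one
    accepting the fewest strings -- a brute-force stand-in for minimality."""
    best = None
    for n in range(1, max_states + 1):
        for delta, acc in enumerate_dfas(n, alphabet):
            if all(accepts(delta, acc, w) for w in positives):
                size = language_size(delta, acc, alphabet, max_len)
                if best is None or size < best[0]:
                    best = (size, delta, acc)
    return best

# positives: strings ending in 'b' over the alphabet {a, b}
size, delta, accepting = learn_from_positives(["b", "ab", "aab", "bb"], "ab")
```

Without the minimality objective, the trivial one-state DFA accepting every string is already consistent with the positives; the regularizer is what steers the search toward the "ends in b" automaton instead.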
Learning Concise Models from Long Execution Traces
Abstract models of system-level behaviour have applications in design
exploration, analysis, testing and verification. We describe a new algorithm
for automatically extracting useful models, as automata, from execution traces
of a HW/SW system driven by software exercising a use-case of interest. Our
algorithm leverages modern program synthesis techniques to generate predicates
on automaton edges, succinctly describing system behaviour. It employs trace
segmentation to tackle complexity for long traces. We learn concise models
capturing transaction-level, system-wide behaviour, experimentally
demonstrating the approach using traces from a variety of sources, including
the x86 QEMU virtual platform and the Real-Time Linux kernel.
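The core extraction step, folding a linear execution trace into an automaton over abstract states, can be sketched as follows. This is a toy stand-in: the paper's algorithm synthesizes predicates on edges and segments long traces, neither of which is attempted here, and the event format and abstraction function below are invented for illustration.

```python
from collections import defaultdict

def trace_to_automaton(trace, abstract):
    """Fold one execution trace into an automaton.

    trace    -- sequence of low-level events
    abstract -- maps an event to an abstract state label
    Each edge records the set of events observed between two abstract
    states; these sets play the role of the synthesized edge predicates
    (here just raw event sets, not real predicates).
    """
    edges = defaultdict(set)
    states = set()
    prev = None
    for event in trace:
        state = abstract(event)
        states.add(state)
        if prev is not None:
            edges[(prev, state)].add(event)
        prev = state
    return states, dict(edges)

# toy trace: bus events tagged with an operation kind
trace = ["read:0x10", "read:0x14", "write:0x10", "irq", "read:0x18"]
states, edges = trace_to_automaton(trace, lambda e: e.split(":")[0])
```

On a real long trace the interesting work is choosing the abstraction and generalizing the per-edge event sets into compact predicates; segmentation keeps that synthesis tractable by working on one trace slice at a time.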
Learning Deterministic Finite Automata from Confidence Oracles
We discuss the problem of learning a deterministic finite automaton (DFA)
from a confidence oracle. That is, we are given access to an oracle with
incomplete knowledge of some target language L over an alphabet Σ; the
oracle maps a string w ∈ Σ* to a score in the interval [-1, 1]
indicating its confidence that the string is in the language. The
interpretation is that the sign of the score signifies whether w ∈ L, while
the magnitude represents the oracle's confidence. Our goal is to learn
a DFA representation of the oracle that preserves the information that it is
confident in. The learned DFA should closely match the oracle wherever it is
highly confident, but it need not do this when the oracle is less sure of
itself.
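One simple way to read the setting above: keep only the strings whose score clears a confidence threshold, label them by the score's sign, and require the learned DFA to be consistent with just those. This is a hedged sketch of the problem setup, not the paper's algorithm; the threshold parameter, the toy oracle, and the helper names are invented here.

```python
def high_confidence_samples(oracle, strings, tau=0.5):
    """Split strings into positives/negatives using only labels the
    oracle is confident about (|score| >= tau); low-confidence strings
    are left unconstrained, so a consistent DFA matches the oracle
    exactly where it is highly confident."""
    pos, neg = [], []
    for w in strings:
        score = oracle(w)
        if score >= tau:
            pos.append(w)
        elif score <= -tau:
            neg.append(w)
    return pos, neg

def consistent(accepts, pos, neg):
    """Check a candidate DFA (given as an acceptance predicate)
    against the confident samples."""
    return all(accepts(w) for w in pos) and not any(accepts(w) for w in neg)

# toy oracle: sure that strings ending in 'a' are in the language, sure
# that other nonempty strings are not, and unsure about the empty string
oracle = lambda w: 0.9 if w.endswith("a") else (-0.8 if w else 0.0)
pos, neg = high_confidence_samples(oracle, ["a", "ba", "b", "ab", ""])
ends_in_a = lambda w: w.endswith("a")
```

The empty string falls below the threshold, so the learned model is free to classify it either way, which is exactly the slack the abstract describes for strings the oracle is less sure of.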
- …