
    Quantitative Analysis of Probabilistic Models of Software Product Lines with Statistical Model Checking

    We investigate the suitability of statistical model checking techniques for analysing quantitative properties of software product line models with probabilistic aspects. For this purpose, we enrich the feature-oriented language FLan with action rates, which specify the likelihood of exhibiting particular behaviour or of installing features at a specific moment or in a specific order. The enriched language (called PFLan) allows us to specify models of software product lines with probabilistic configurations and behaviour, e.g. by considering a PFLan semantics based on discrete-time Markov chains. The Maude implementation of PFLan is combined with the distributed statistical model checker MultiVeStA to perform quantitative analyses of a simple product line case study. The presented analyses include the likelihood of certain behaviour of interest (e.g. product malfunctioning) and the expected average cost of products. Comment: In Proceedings FMSPLE 2015, arXiv:1504.0301
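    As context for the kind of quantitative analysis described above, the following is a minimal sketch of statistical model checking over a discrete-time Markov chain: sample many runs of the chain and estimate the probability of reaching a state of interest from the fraction of runs that hit it. The states, transition probabilities and the "malfunction" label are invented for illustration and are not taken from the PFLan case study or MultiVeStA.

    import random

    # Hypothetical DTMC for illustration only (not the paper's PFLan model):
    # each state maps to a list of (next_state, probability) pairs.
    DTMC = {
        "configuring": [("running", 0.9), ("configuring", 0.1)],
        "running":     [("running", 0.94), ("malfunction", 0.01), ("done", 0.05)],
        "malfunction": [("malfunction", 1.0)],
        "done":        [("done", 1.0)],
    }

    def simulate(start="configuring", horizon=200):
        """Sample one bounded run and report whether 'malfunction' was reached."""
        state = start
        for _ in range(horizon):
            r, acc = random.random(), 0.0
            for nxt, p in DTMC[state]:
                acc += p
                if r < acc:
                    state = nxt
                    break
            if state == "malfunction":
                return True
        return False

    def estimate_malfunction_probability(samples=10_000):
        """Statistical-model-checking style estimate: the fraction of sampled
        runs that reach the malfunction state within the horizon."""
        hits = sum(simulate() for _ in range(samples))
        return hits / samples

    print(f"Estimated P(eventually malfunction) ~ {estimate_malfunction_probability():.3f}")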

    Partial Orders for Efficient BMC of Concurrent Software

    This version was previously deposited at arXiv:1301.1629v1 [cs.LO]. The vast number of interleavings that a concurrent program can have is typically identified as the root cause of the difficulty of automatic analysis of concurrent software. Weak memory is generally believed to make this problem even harder. We address both issues by modelling programs' executions with partial orders rather than the interleaving semantics (SC). We implemented a software analysis tool based on these ideas. It scales to programs of sufficient size to achieve first-time formal verification of non-trivial concurrent systems code over a wide range of models, including SC, Intel x86 and IBM Power.
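    To make the scale of the problem concrete, here is a back-of-the-envelope sketch (not the paper's tool) of why interleaving semantics blows up, together with a toy happens-before relation of the kind a partial-order encoding records instead; the event names are made up.

    from math import comb

    # Two threads with n independent events each admit C(2n, n) interleavings,
    # which grows exponentially with n.
    for n in (2, 5, 10, 20):
        print(f"{n} events per thread -> {comb(2 * n, n):,} interleavings")

    # A partial-order encoding keeps only the events plus the ordering
    # constraints actually imposed (program order per thread, plus any
    # synchronisation or memory-model edges), here as a toy happens-before set.
    events = ["t1.w_x", "t1.r_y", "t2.w_y", "t2.r_x"]
    program_order = {("t1.w_x", "t1.r_y"), ("t2.w_y", "t2.r_x")}
    print(f"{len(events)} events and {len(program_order)} ordering edges "
          f"instead of {comb(4, 2)} interleavings")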

    A unified approach for static and runtime verification: framework and applications

    Static verification of software is becoming ever more effective and efficient. Still, static techniques either have high precision, in which case powerful judgements are hard to achieve automatically, or they use abstractions supporting increased automation, but possibly losing important aspects of the concrete system in the process. Runtime verification has complementary strengths and weaknesses. It combines full precision of the model (including the real deployment environment) with full automation, but cannot judge future and alternative runs. Another drawback of runtime verification can be the computational overhead of monitoring the running system which, although typically not very high, can still be prohibitive in certain settings. In this paper we propose a framework to combine static analysis techniques and runtime verification with the aim of getting the best of both techniques. In particular, we discuss an instantiation of our framework for the deductive theorem prover KeY and the runtime verification tool Larva. Apart from combining static and dynamic verification, this approach also combines the data-centric analysis of KeY with the control-centric analysis of Larva. An advantage of the approach is that, through the use of a single specification usable by both analysis techniques, expensive parts of the analysis can be moved to the static phase, allowing the runtime monitor to make significant assumptions and drop parts of expensive checks at runtime. We also discuss specific applications of our approach.
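    A loose sketch of the intended division of labour (the names and checks below are invented; this is not KeY or Larva): properties discharged in the static phase are dropped from the runtime monitor, so only the residual obligations of the shared specification are checked while the system runs.

    # Hypothetical single specification shared by the static and runtime analyses.
    SPEC = {
        "balance_non_negative": lambda state: state["balance"] >= 0,
        "withdraw_only_when_logged_in": lambda state: not state["withdrew"] or state["logged_in"],
    }

    # Suppose the static phase (a deductive prover in the paper's instantiation)
    # has already discharged this property for all reachable states.
    STATICALLY_PROVEN = {"balance_non_negative"}

    def runtime_monitor(state):
        """Check only the obligations left over after static analysis."""
        for name, check in SPEC.items():
            if name in STATICALLY_PROVEN:
                continue  # assume the static proof; no cost at runtime
            if not check(state):
                raise AssertionError(f"runtime violation of {name}")

    # One event fed to the residual monitor.
    runtime_monitor({"balance": 42, "withdrew": True, "logged_in": True})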

    A Logical Foundation for Environment Classifiers

    Taha and Nielsen have developed a multi-stage calculus λα with a sound type system using the notion of environment classifiers. These are special identifiers, with which code fragments and variable declarations are annotated, and their scoping mechanism is used to ensure statically that certain code fragments are closed and safely runnable. In this paper, we investigate the Curry-Howard isomorphism for environment classifiers by developing a typed λ-calculus λ▷. It corresponds to a multi-modal logic that allows quantification by transition variables, a counterpart of classifiers, which range over (possibly empty) sequences of labeled transitions between possible worlds. This interpretation reduces the "run" construct, which has a special typing rule in λα, and the embedding of closed code into code fragments of different stages, which in λα can only be realized by the cross-stage persistence operator, to mere special cases of classifier application. λ▷ enjoys not only basic properties including subject reduction, confluence, and strong normalization but also an important property as a multi-stage calculus: time-ordered normalization of full reduction. We then develop a big-step evaluation semantics for an ML-like language based on λ▷ with its type system and prove that the evaluation of a well-typed λ▷ program is properly staged. We also identify a fragment of the language where erasure evaluation is possible. Finally, we show that the proof system augmented with a classical axiom is sound and complete with respect to a Kripke semantics of the logic.
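    The contribution above is a calculus and its logic rather than an implementation, but the multi-stage ingredients the abstract mentions (code fragments, a run construct that requires closed code, and cross-stage persistence) can be loosely illustrated in ordinary Python; everything below is an analogy, not the typed calculus.

    def bracket(source: str) -> dict:
        """A 'code fragment': deferred source text to be evaluated at a later stage."""
        return {"source": source}

    def run(fragment: dict, env=None):
        """A 'run' construct: evaluate a fragment. In the calculus this is only
        type-correct when the fragment is closed (no unbound names)."""
        return eval(fragment["source"], {}, env or {})

    # Stage 0 computes a value; splicing it into the later-stage fragment plays
    # the role of cross-stage persistence.
    n = 3
    power_fragment = bracket(f"lambda x: x ** {n}")
    cube = run(power_fragment)
    print(cube(4))  # 64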

    Fundamental Approaches to Software Engineering

    computer software maintenance; computer software selection and evaluation; formal logic; formal methods; formal specification; programming languages; semantics; software engineering; specifications; verification

    Towards Extending the Range of Bugs That Automated Program Repair Can Handle

    Modern automated program repair (APR) is well-tuned to finding and repairing bugs that introduce observable erroneous behavior to a program. However, a significant class of bugs does not lead to such observable behavior (e.g., liveness/termination bugs, non-functional bugs, and information flow bugs). Such bugs generally cannot be handled with current APR approaches, so, as a community, we need to develop complementary techniques. To stimulate the systematic study of alternative APR approaches and hybrid APR combinations, we devise a novel bug classification system that enables methodical analysis of their bug detection power and bug repair capabilities. To demonstrate the benefits, we analyze the repair of termination bugs in sequential and concurrent programs. The study shows that integrating dynamic APR with formal analysis techniques, such as termination provers and software model checkers, reduces complexity and improves the overall reliability of these repairs. Comment: Accepted for publication in the 22nd IEEE International Conference on Software Quality, Reliability and Security (QRS 2022)
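    As a concrete illustration of a termination bug that produces no observably wrong output, consider the toy loop below together with a candidate repair that a termination prover could validate by exhibiting a strictly decreasing measure; the example is invented and not drawn from the paper's study.

    def countdown_buggy(n, step):
        # Liveness bug: if step == 0 the loop never makes progress, so the
        # program hangs without ever producing an observably wrong value.
        while n > 0:
            n -= step
        return n

    def countdown_repaired(n, step):
        # Candidate repair: rule out the non-decreasing case. A termination
        # prover can now confirm that n strictly decreases on every iteration.
        if step <= 0:
            raise ValueError("step must be positive")
        while n > 0:
            n -= step
        return n

    print(countdown_repaired(10, 3))  # prints -2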