
    Delta-based Verification of Software Product Families

    The quest for feature- and family-oriented deductive verification of software product lines has resulted in several proposals. In this paper we look at delta-oriented modeling of product lines and contribute two new ideas: first, we extend Hähnle & Schaefer’s delta-oriented version of Liskov’s substitution principle for behavioral subtyping so that it also works for overridden behavior in benign cases. For this to succeed, programs need to be in a certain normal form. The required normal form turns out to be achievable in many cases by a set of program transformations, whose correctness is ensured by the recent technique of abstract execution. This is a generalization of symbolic execution that permits reasoning about abstract code elements. It is needed because code deltas contain partially unknown code contexts in terms of “original” calls. Second, we devise a modular verification procedure for deltas based on abstract execution, representing deltas as abstract programs calling into unknown contexts. The result is a “delta-based” verification approach, where each modification of a method in a code delta is verified in isolation, but which overcomes the strict limitations of behavioral subtyping and works for many practical programs. The latter claim is substantiated with case studies and benchmarks.
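
    To make the notion of a code delta calling into an unknown “original” context concrete, here is a minimal Java sketch (not the paper's delta language or contracts; the class, method names, and added behavior are invented): the delta adds behavior to a method and delegates to the unmodified version, which abstract execution would treat as a partially unknown program.

    class Account {
        protected int balance = 0;

        // Core behavior; a contract here would state that balance grows by amount.
        void deposit(int amount) {
            balance += amount;
        }
    }

    // A code delta modifying deposit(). Delta-oriented languages let the modified
    // method call original(); plain Java approximates that with an override that
    // delegates via super, standing in for the unknown context that abstract
    // execution reasons about symbolically.
    class LoggingDelta extends Account {
        @Override
        void deposit(int amount) {
            System.out.println("deposit: " + amount); // behavior added by the delta
            super.deposit(amount);                    // stands in for original(amount)
        }
    }

    public class DeltaDemo {
        public static void main(String[] args) {
            Account a = new LoggingDelta();
            a.deposit(10);
            System.out.println("balance = " + a.balance);
        }
    }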

    Towards correct-by-construction product variants of a software product line: GFML, a formal language for feature modules

    Software Product Line Engineering (SPLE) is a software engineering paradigm that focuses on reuse and variability. Although feature-oriented programming (FOP) can implement software product lines efficiently, we still need a method to generate all product variants and prove their correctness more efficiently and automatically. In this context, we propose to manipulate feature modules which contain three kinds of artifacts: specification, code and correctness proof. We describe a methodology and a platform that help the user to automatically produce correct-by-construction product variants from the related feature modules. As a first step of this project, we begin by proposing a language, GFML, allowing the developer to write such feature modules. This language is designed so that the artifacts can be easily reused and composed. GFML files contain the different artifacts mentioned above. The idea is to compile them into FoCaLiZe, a language for specification, implementation and formal proof with some object-oriented flavor. In this paper, we define and illustrate this language. We also introduce a way to compose the feature modules on some examples. Comment: In Proceedings FMSPLE 2015, arXiv:1504.0301
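
    As a rough illustration of the structure described above (GFML's actual syntax and composition rules are not given in the abstract), a feature module can be thought of as a bundle of the three artifact kinds, and composition as merging two such bundles; the Java sketch below models only that idea, with invented artifact contents.

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative only: models a feature module as the triple of artifacts the
    // paper mentions (specification, code, correctness proof), and composition
    // as a naive merge of those artifacts.
    record FeatureModule(List<String> specification, List<String> code, List<String> proof) {

        FeatureModule compose(FeatureModule other) {
            return new FeatureModule(
                    concat(specification, other.specification),
                    concat(code, other.code),
                    concat(proof, other.proof));
        }

        private static List<String> concat(List<String> a, List<String> b) {
            List<String> merged = new ArrayList<>(a);
            merged.addAll(b);
            return merged;
        }

        public static void main(String[] args) {
            FeatureModule base = new FeatureModule(
                    List.of("spec: push/pop are inverse"), List.of("code: Stack"), List.of("proof: base"));
            FeatureModule logging = new FeatureModule(
                    List.of("spec: operations are logged"), List.of("code: LoggingStack"), List.of("proof: logging"));
            System.out.println(base.compose(logging));
        }
    }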

    A Static Analyzer for Large Safety-Critical Software

    We show that abstract interpretation-based static program analysis can be made efficient and precise enough to formally verify a class of properties for a family of large programs with few or no false alarms. This is achieved by refinement of a general-purpose static analyzer and later adaptation to particular programs of the family by the end-user through parametrization. This is applied to the proof of soundness of data manipulation operations at the machine level for periodic synchronous safety-critical embedded software. The main novelties are the design principle of static analyzers by refinement and adaptation through parametrization, the symbolic manipulation of expressions to improve the precision of abstract transfer functions, the octagon, ellipsoid, and decision tree abstract domains, all with sound handling of rounding errors in floating-point computations, widening strategies (with thresholds, delayed), and the automatic determination of the parameters (parametrized packing).
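
    One of the ingredients named above, widening with thresholds, can be illustrated on a much simpler domain than those used in the paper. The following Java sketch applies threshold widening to plain intervals; the threshold values, and everything else about it, are illustrative assumptions rather than the analyzer's actual design.

    import java.util.List;

    // Minimal sketch of widening with thresholds on an interval domain. The
    // analyzer described in the paper uses far richer domains (octagons,
    // ellipsoids, decision trees) and sound float rounding; none of that is
    // modeled here.
    record Interval(double lo, double hi) {

        // Thresholds at which unstable bounds may settle before jumping to
        // infinity; values are purely illustrative.
        static final List<Double> THRESHOLDS = List.of(0.0, 10.0, 100.0, 1000.0);

        // Keep stable bounds, push unstable ones to the next threshold
        // (or to infinity if no threshold remains).
        Interval widen(Interval next) {
            double newLo = next.lo >= lo ? lo : nextLowerThreshold(next.lo);
            double newHi = next.hi <= hi ? hi : nextUpperThreshold(next.hi);
            return new Interval(newLo, newHi);
        }

        private static double nextUpperThreshold(double v) {
            for (double t : THRESHOLDS) if (v <= t) return t;
            return Double.POSITIVE_INFINITY;
        }

        private static double nextLowerThreshold(double v) {
            for (double t : THRESHOLDS) if (v >= -t) return -t;
            return Double.NEGATIVE_INFINITY;
        }

        public static void main(String[] args) {
            Interval loop = new Interval(0, 0);
            // Simulate a loop whose upper bound keeps growing: widening makes
            // the fixpoint iteration settle instead of climbing forever.
            for (double hi : new double[]{1, 5, 42, 500}) {
                loop = loop.widen(new Interval(0, hi));
                System.out.println(loop);
            }
        }
    }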

    Automated Verification for Functional and Relational Properties of Voting Rules

    In this paper, we formalise classes of axiomatic properties for voting rules, discuss their characteristics, and show how symmetry properties can be exploited in the verification of other properties. Following that, we describe how automated verification methods such as software bounded model checking and deductive verification can be used to verify implementations of voting rules. We present a case study, where we use and compare different approaches to verify that plurality voting satisfies the majority and anonymity properties.
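
    As a small, hedged illustration of the majority property mentioned in the case study (this is not the verified implementation, and the paper uses bounded model checking and deductive verification rather than runtime assertions), a plurality rule and a check of the property might look as follows in Java:

    import java.util.Arrays;

    public class PluralityDemo {

        // votes[i] is the candidate chosen by voter i; candidates are 0..numCandidates-1.
        static int plurality(int[] votes, int numCandidates) {
            int[] tally = new int[numCandidates];
            for (int v : votes) tally[v]++;
            int winner = 0;
            for (int c = 1; c < numCandidates; c++) {
                if (tally[c] > tally[winner]) winner = c; // ties broken by lowest index
            }
            return winner;
        }

        // Majority property: if some candidate holds an absolute majority of the
        // votes, plurality must elect exactly that candidate.
        static void checkMajority(int[] votes, int numCandidates) {
            int[] tally = new int[numCandidates];
            for (int v : votes) tally[v]++;
            for (int c = 0; c < numCandidates; c++) {
                if (2 * tally[c] > votes.length) {
                    assert plurality(votes, numCandidates) == c
                            : "majority property violated for candidate " + c;
                }
            }
        }

        public static void main(String[] args) {
            int[] profile = {0, 1, 1, 2, 1}; // candidate 1 holds an absolute majority
            checkMajority(profile, 3);       // run with -ea to enable assertions
            System.out.println("winner = " + plurality(profile, 3) + " on " + Arrays.toString(profile));
        }
    }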

    Abductive Reasoning and Automated Analysis of Feature Models: How are they connected?

    In the automated analysis of feature models (AAFM), many operations have been defined to extract relevant information to be used in decision making. Most of the proposals rely on logics to provide solutions to the different operations. This extraction of knowledge using logics is known as deductive reasoning. One of the most useful operations is explanation, which provides the reasons why some other operations find no solution. However, explanations rely not on deductive but on abductive reasoning, a kind of reasoning that allows one to obtain conjectures about why things happen. As a first contribution we differentiate between deductive and abductive reasoning and show how this difference affects AAFM. Secondly, we broaden the concept of explanations by relying on abductive reasoning, applying them even when we obtain a positive response from other operations. Lastly, we propose a catalog of operations that use abduction to provide useful information. Comisión Interministerial de Ciencia y Tecnología (CICYT) TIN2006-00472; Junta de Andalucía TIC-253
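
    A toy example may help separate the two kinds of reasoning: deductively, a solver answers that a feature model is void; abductively, an explanation names a minimal set of constraints that causes the voidness. The brute-force Java sketch below illustrates only that idea; real AAFM tools delegate such computations to solvers, and the feature names and constraints here are invented.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Predicate;

    public class ExplanationDemo {

        // A named constraint over a configuration (index i = feature i selected).
        record Constraint(String name, Predicate<boolean[]> holds) {}

        static boolean satisfiable(List<Constraint> cs, int numFeatures) {
            for (int mask = 0; mask < (1 << numFeatures); mask++) {
                boolean[] conf = new boolean[numFeatures];
                for (int i = 0; i < numFeatures; i++) conf[i] = ((mask >> i) & 1) == 1;
                if (cs.stream().allMatch(c -> c.holds().test(conf))) return true;
            }
            return false;
        }

        // Smallest subset of constraints that is already unsatisfiable: an
        // "explanation" of why the whole model admits no configuration.
        static List<Constraint> explainVoid(List<Constraint> cs, int numFeatures) {
            int n = cs.size();
            List<Constraint> best = null;
            for (int subset = 1; subset < (1 << n); subset++) {
                List<Constraint> candidate = new ArrayList<>();
                for (int i = 0; i < n; i++) if (((subset >> i) & 1) == 1) candidate.add(cs.get(i));
                if (!satisfiable(candidate, numFeatures)
                        && (best == null || candidate.size() < best.size())) {
                    best = candidate;
                }
            }
            return best;
        }

        public static void main(String[] args) {
            // Features: 0 = Root, 1 = A, 2 = B. The second and third constraints conflict.
            List<Constraint> cs = List.of(
                    new Constraint("Root is selected", c -> c[0]),
                    new Constraint("Root requires A", c -> !c[0] || c[1]),
                    new Constraint("Root excludes A", c -> !(c[0] && c[1])),
                    new Constraint("Root requires B", c -> !c[0] || c[2]));
            // Prints the three conflicting constraints; "Root requires B" is not blamed.
            explainVoid(cs, 3).forEach(c -> System.out.println("conflicting: " + c.name()));
        }
    }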

    Conformance Checking with Constraint Logic Programming: The Case of Feature Models

    Developing high quality systems depends on developing high quality models. An important facet of model quality is their consistency with respect to their meta-model. We call the verification of this quality the conformance checking process. We are interested in the conformance checking of Product Line Models (PLMs). The problem in the context of product lines is that product models are not created by instantiating a meta-model: they are derived from PLMs. Therefore it is usually at the level of PLMs that conformance checking is applied. On the semantic level, a PLM is defined as the collection of all the product models that can be derived from it. Therefore checking the conformance of the PLM is equivalent to checking the conformance of all the product models. However, we would like to avoid this naïve approach because it is not scalable due to the high number of models. In fact, it is sometimes even infeasible to calculate the number of product models of a PLM. Despite the importance of PLM conformance checking, very few research works have been published on it and tools do not adequately support it. In this paper, we present an approach that employs Constraint Logic Programming as a technology on which to build a PLM conformance checking solution. The paper demonstrates the approach with feature models, the de facto standard for modeling software product lines. Based on an extensive literature review and an empirical study, we identified a set of nine conformance checking rules and implemented them on the GNU Prolog constraint solver. We evaluated our approach by applying our rules to 50 feature models with up to 10,000 features. The evaluation showed that our approach is effective and scales to industry-size models.
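
    The abstract does not list the nine rules, and the implementation is in GNU Prolog rather than Java, but the flavour of a conformance check can be sketched with one invented rule: every non-root feature must reference an existing parent, and the parent relation must be acyclic.

    import java.util.HashMap;
    import java.util.Map;

    public class ConformanceDemo {

        // featureName -> parentName; the root maps to null.
        static boolean conforms(Map<String, String> parentOf) {
            for (Map.Entry<String, String> e : parentOf.entrySet()) {
                String parent = e.getValue();
                if (parent != null && !parentOf.containsKey(parent)) {
                    return false; // dangling parent reference
                }
                // Walk up from this feature; reaching it again means a cycle.
                String cursor = parent;
                int steps = 0;
                while (cursor != null) {
                    if (cursor.equals(e.getKey()) || steps++ > parentOf.size()) return false;
                    cursor = parentOf.get(cursor);
                }
            }
            return true;
        }

        public static void main(String[] args) {
            Map<String, String> ok = new HashMap<>();
            ok.put("Root", null);
            ok.put("Payment", "Root");
            ok.put("Card", "Payment");

            Map<String, String> cyclic = new HashMap<>(ok);
            cyclic.put("Root", "Card"); // introduces a cycle

            System.out.println(conforms(ok));     // true
            System.out.println(conforms(cyclic)); // false
        }
    }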