
    Safety analysis of software product lines using state-based modeling and compositional model checking

    Software product lines are widely used because they allow reuse of shared features while still permitting optional and alternative features in individual products. In high-integrity product lines such as pacemakers, flight control systems, and medical imaging systems, ensuring that common and variable safety requirements hold as each new product is built, or as existing products evolve, is key to the safe operation of those systems. However, this goal is currently hampered by the complexity of identifying the interactions among common and variable features that may undermine system safety. This is largely due to the facts that (1) available safety analysis techniques lack sufficient support for analyzing the combined effects of different features, and (2) existing techniques for identifying feature interactions do not adequately accommodate the presence of common features and result in repeated checking across different products. The work described here addresses the first problem by systematically exploring the relationships between behavioral variations and potentially hazardous states through scenario-guided executions of the state model over the variations. It contributes to a solution to the second problem by generating formal obligations at the interfaces between features, so that sequentially composed features can be verified in a way that allows reuse for subsequent products. The main contributions of this work are an approach to performing safety analysis on the variations in a product line using state-based modeling, a tool-supported technique that guides and manages the generation of model-checkable properties from product-line requirements, and a formal framework for model checking product-line features that removes restrictions on how the features can be sequentially composed. The techniques and their implementations are demonstrated in the context of a medical-device product line.
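The scenario-guided exploration of behavioral variations can be illustrated with a small sketch (all states, events, and feature names below are hypothetical, not the paper's medical-device model): each feature combination yields a different transition function, and running the same scenario over every combination reveals which products can reach a hazardous state.

```python
from itertools import product

# Hypothetical toy model: a two-feature device whose transitions depend on
# which optional features are enabled. Names are illustrative only.
FEATURES = ["rate_adapt", "backup_pace"]

def step(state, event, config):
    """Transition function of the toy state machine, guarded by features."""
    if state == "idle" and event == "beat_missed":
        # Without the backup pacing feature the device produces no output.
        return "pacing" if config["backup_pace"] else "no_output"
    if state == "pacing" and event == "high_activity":
        return "fast_pacing" if config["rate_adapt"] else "pacing"
    if state in ("pacing", "fast_pacing") and event == "beat_sensed":
        return "idle"
    return state

HAZARDS = {"no_output"}  # states considered unsafe in this sketch

def hazardous_configs(scenario):
    """Run one scenario over every feature combination; report hazards."""
    bad = []
    for values in product([False, True], repeat=len(FEATURES)):
        config = dict(zip(FEATURES, values))
        state = "idle"
        for event in scenario:
            state = step(state, event, config)
        if state in HAZARDS:
            bad.append(config)
    return bad

# A missed beat is hazardous exactly for products lacking backup pacing.
print(hazardous_configs(["beat_missed"]))
```

Enumerating configurations like this is the naive product-based baseline; the paper's contribution is guiding such executions from scenarios and reusing interface obligations so the exploration need not be repeated per product.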

    Feature-Aware Verification

    A software product line is a set of software products that are distinguished in terms of features (i.e., end-user-visible units of behavior). Feature interactions, situations in which the combination of features leads to emergent and possibly critical behavior, are a major source of failures in software product lines. We explore how feature-aware verification can improve the automatic detection of feature interactions in software product lines. Feature-aware verification uses product-line verification techniques and supports the specification of feature properties along with the features in separate and composable units. It integrates the technique of variability encoding to verify a product line without generating and checking a possibly exponential number of feature combinations. We developed the tool suite SPLverifier for feature-aware verification, which is based on standard model-checking technology. We applied it to an e-mail system that incorporates domain knowledge of AT&T. We found that feature interactions can be detected automatically based on specifications that have only feature-local knowledge, and that variability encoding significantly improves verification performance when proving the absence of interactions.
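A minimal sketch of the variability-encoding idea, assuming nothing about SPLverifier's actual API (the features and property below are a hypothetical toy): feature selections become ordinary boolean inputs of a single "metaproduct", so one verification run, rather than one run per product, explores all combinations.

```python
from itertools import product as combos

def metaproduct(encrypt_on, forward_on, autoresponder_on):
    """One program covering all products; returns the violated properties.
    (autoresponder_on is unused in this toy; it is present only to show
    that the encoding adds one boolean per feature.)"""
    violations = []
    if encrypt_on and forward_on:
        # Feature-local property of Encrypt: a message received encrypted
        # must not leave the host as plaintext. Forward re-sends it in
        # decrypted form, so the combination violates the property:
        # a feature interaction detected from feature-local knowledge.
        violations.append("encrypt/forward")
    return violations

def verify_metaproduct():
    """Stand-in for the model checker: explore all feature assignments of
    the single encoded program in one pass."""
    found = set()
    for enc, fwd, auto in combos([False, True], repeat=3):
        found.update(metaproduct(enc, fwd, auto))
    return found

print(verify_metaproduct())
```

In the real setting the exploration is done symbolically by the model checker over nondeterministically chosen feature variables, which is where the performance gain over checking 2^n generated products comes from.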

    Software diversity: state of the art and perspectives

    Diversity is prevalent in modern software systems to facilitate adapting the software to customer requirements or the execution environment. Diversity has an impact on all phases of the software development process, and appropriate means and organizational structures are required to deal with the additional complexity introduced by software variability. This introductory article to the special section "Software Diversity: Modeling, Analysis and Evolution" provides an overview of the current state of the art in diverse systems development and discusses challenges and potential solutions. The article covers requirements analysis, design, implementation, verification and validation, maintenance and evolution, as well as organizational aspects. It also provides an overview of the articles that are part of this special section and addresses particular issues of diverse systems development.

    Commuting strategies for reliability analysis of software product lines

    Master's dissertation, Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2016. Software product line engineering is a means to systematically manage variability and commonality in software systems, enabling the automated synthesis of related programs (products) from a set of reusable assets. However, the number of products in a software product line may grow exponentially with the number of features, so it is practically infeasible to quality-check each of these products in isolation. There are a number of variability-aware approaches to product-line analysis that adapt single-product analysis techniques to cope with variability in an efficient way. Such approaches can be classified along three analysis dimensions (product-based, family-based, and feature-based), but, particularly in the context of reliability analysis, there is no theory comprising both (a) a formal specification of the three dimensions and the resulting analysis strategies and (b) proof that such analyses are equivalent to one another. The lack of such a theory prevents formal reasoning about the relationship between the analysis dimensions and derived analysis techniques, thereby limiting confidence in the corresponding results. To fill this gap, we present a product line that implements five approaches to reliability analysis of product lines. We have found empirical evidence that all five approaches are equivalent, in the sense that they yield equal reliabilities when analyzing a given product line. We also formalize three of the implemented strategies and prove that they are sound with respect to the probabilistic approach to reliability analysis of a single product. Furthermore, we present a commuting diagram of intermediate analysis steps, which relates different strategies and enables the reuse of soundness proofs between them.
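The claimed equivalence can be illustrated with a toy sketch (illustrative numbers; the actual analyses operate on probabilistic behavioral models): a product-based strategy derives each concrete product and analyzes it in isolation, while a family-based strategy evaluates a single variability-aware expression under each configuration, and the two strategies commute, yielding equal reliabilities.

```python
from itertools import product as configs

# Hypothetical component reliabilities for a base system and two features.
RELIABILITY = {"base": 0.99, "f1": 0.95, "f2": 0.90}

def product_based(config):
    """Derive the concrete product, then analyze it in isolation: here,
    reliability is the product of its selected components' reliabilities."""
    r = RELIABILITY["base"]
    for feat, enabled in config.items():
        if enabled:
            r *= RELIABILITY[feat]
    return r

def family_based(config):
    """Build one variability-aware expression for the whole family, then
    evaluate it under a configuration (a stand-in for partial evaluation)."""
    expr = lambda F1, F2: (RELIABILITY["base"]
                           * (RELIABILITY["f1"] if F1 else 1.0)
                           * (RELIABILITY["f2"] if F2 else 1.0))
    return expr(config["f1"], config["f2"])

# The strategies agree on every configuration of the family.
for v1, v2 in configs([False, True], repeat=2):
    c = {"f1": v1, "f2": v2}
    assert abs(product_based(c) - family_based(c)) < 1e-12
```

The family-based strategy pays the expression-construction cost once for the whole family, which is why it scales better than enumerating products; the dissertation's contribution is proving such agreements rather than merely observing them.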

    A Machine-Verified Theory of Commuting Strategies for Product-Line Reliability Analysis

    Doctoral thesis, Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2019. Software product line engineering is a means to systematically manage variability and commonality in software systems, enabling the automated synthesis of related programs (products) from a set of reusable assets. However, the number of products in a software product line may grow exponentially with the number of features: even product lines with tens or hundreds of configuration options (features) may give rise to millions of products, so it is practically infeasible to quality-check each of these products in isolation. Nonetheless, product lines of safety-critical software (e.g., in the domains of avionics and medical systems) need to ensure that their products are reliable. There are a number of variability-aware approaches to product-line analysis that adapt single-product analysis techniques to cope with variability in an efficient way. Such approaches can be classified along three composable analysis dimensions (product-based, family-based, and feature-based), but, particularly in the context of reliability analysis, there is no theory comprising both (a) a formal specification of the three dimensions and the resulting analysis strategies and (b) proof that such analyses are equivalent to one another. The lack of such a theory hinders formal reasoning about the relationship between the analysis dimensions and derived analysis techniques. Moreover, as long as there is no evidence that the different examined strategies are mutually equivalent, the existing empirical studies comparing them will have limited results. To address this issue, we formalize seven approaches to user-oriented reliability analysis of product lines, covering all three analysis dimensions and including the first instance of a feature-family-product-based analysis in the literature. We prove the formalized analysis strategies to be sound with respect to reliability analysis of a single product, thereby strengthening the existing empirical comparisons between them. Engineers can thus choose the strategy most appropriate to the product line at hand, assured of its soundness. Furthermore, we present a commuting diagram of intermediate analysis steps, which relates different strategies and enables the reuse of soundness proofs between them. This view contributes to a more comprehensive understanding of the principles underlying these strategies, which we envision could help other researchers to lift existing single-product analysis techniques to yet under-explored variability-aware approaches. Additionally, we reduce the risk of human error by mechanizing the resulting theory in the PVS (Prototype Verification System) interactive theorem prover. As a result of the mechanization effort, we identified and corrected errors and imprecisions in the handcrafted version of our theory. Hence, we document lessons learned throughout the mechanization process and provide a potentially reusable machine-verified theory.

    Compositional Verification of Evolving SPL

    This paper presents a novel approach to the design verification of Software Product Lines (SPLs). The proposed approach assumes that requirements and designs are modeled as finite state machines with variability information. The variability information at the requirement and design levels is expressed differently and at different levels of abstraction. The approach also supports verification of SPLs in which new features and variability may be added incrementally. Given the design and requirements of an SPL, the proposed design verification method ensures that every product at the design level behaviorally conforms to a product at the requirement level. The conformance procedure is compositional in the sense that the verification of an entire SPL consisting of multiple features is reduced to the verification of the individual features. The method has been implemented in the prototype tool SPLEnD (SPL Engine for Design Verification) and demonstrated on a couple of fairly large case studies.
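A hedged sketch of the conformance idea, not SPLEnD's actual algorithm: behavioral conformance between a design-level and a requirement-level finite state machine can be approximated as bounded trace inclusion, checked per feature, so the whole-line check reduces to per-feature checks.

```python
def traces(fsm, init, depth):
    """All event sequences of length <= depth the FSM can execute.
    `fsm` maps (state, event) -> next state."""
    out = {()}
    frontier = {((), init)}
    for _ in range(depth):
        step = set()
        for trace, state in frontier:
            for (s, ev), t in fsm.items():
                if s == state:
                    step.add((trace + (ev,), t))
        out |= {tr for tr, _ in step}
        frontier = step
    return out

def conforms(design, requirement, init="u", depth=4):
    """Bounded check: every design trace is allowed by the requirement."""
    return traces(design, init, depth) <= traces(requirement, init, depth)

# Requirement for a hypothetical 'lock' feature: acquire/release alternate.
req = {("u", "acquire"): "l", ("l", "release"): "u"}
# A design performing at most one cycle exhibits fewer behaviors: conforms.
des = {("u", "acquire"): "l", ("l", "release"): "d"}
# A faulty design that can release an unheld lock does not conform.
bad = {("u", "release"): "u", ("u", "acquire"): "l"}

print(conforms(des, req), conforms(bad, req))
```

The bound makes this a cheap illustration only; the paper's method establishes full behavioral conformance and additionally handles the differing variability notations at the two levels.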

    Scalable Verification of Linear Controller Software

    We consider the problem of verifying software implementations of linear time-invariant controllers against mathematical specifications. Given a controller specification, multiple correct implementations may exist, each of which uses a different representation of controller state (e.g., due to optimizations in a third-party code generator). To accommodate this variation, we first extract a controller's mathematical model from the implementation via symbolic execution, and then check input-output equivalence between the extracted model and the specification by similarity checking. We show how to automatically verify the correctness of a C-code controller implementation using a combination of techniques such as symbolic execution, satisfiability solving, and convex optimization. Through evaluation using randomly generated controller specifications of realistic size, we demonstrate that the scalability of this approach is significantly improved compared to our own earlier work based on the invariant-checking method.
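The role of similarity checking can be sketched as follows (a pure-Python toy, not the paper's toolchain): two state-space realizations related by a similarity transform T, with A2 = T A1 T^-1, B2 = T B1, and C2 = C1 T^-1, produce identical outputs for every input sequence, which is exactly the input-output equivalence being certified.

```python
def mm(X, Y):
    # Product of two 2x2 matrices.
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mv(X, v):
    # 2x2 matrix times a length-2 vector.
    return [sum(X[i][k] * v[k] for k in range(2)) for i in range(2)]

def simulate(A, B, C, inputs):
    """Run x[k+1] = A x[k] + B u[k], y[k] = C x[k] from the zero state."""
    x, ys = [0.0, 0.0], []
    for u in inputs:
        ys.append(sum(C[k] * x[k] for k in range(2)))
        Ax = mv(A, x)
        x = [Ax[i] + B[i] * u for i in range(2)]
    return ys

# One realization (illustrative numbers, stable dynamics) ...
A1 = [[0.5, 0.1], [0.0, 0.25]]
B1 = [1.0, 0.5]
C1 = [1.0, -1.0]

# ... and an equivalent one obtained via the similarity transform T.
T    = [[1.0, 1.0], [0.0, 1.0]]
Tinv = [[1.0, -1.0], [0.0, 1.0]]
A2 = mm(mm(T, A1), Tinv)
B2 = mv(T, B1)
C2 = [sum(C1[k] * Tinv[k][j] for k in range(2)) for j in range(2)]

inputs = [1.0, 0.0, -2.0, 3.0, 0.5]
y1 = simulate(A1, B1, C1, inputs)
y2 = simulate(A2, B2, C2, inputs)
assert max(abs(a - b) for a, b in zip(y1, y2)) < 1e-9
```

The hard part the paper addresses is the converse direction: given only the extracted model and the specification, finding a witnessing T (via convex optimization) or refuting its existence, rather than being handed T up front as here.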

    A general compositional approach to verifying hierarchical cache coherence protocols

    Technical report. Modern chip multiprocessor (CMP) cache coherence protocols are extremely complex and error-prone to design, and modern symbolic methods are unable to provide much leverage for this class of examples. In [1], we presented a method to verify hierarchical and inclusive versions of these protocols using explicit state enumeration tools. We circumvented state explosion by employing a meta-circular assume/guarantee technique in which a designer can model check abstracted versions of the original protocol and claim that the real protocol is correct. The abstractions were justified in the same framework (hence the meta-circular approach). In this paper, we show how our work can be extended to hierarchical non-inclusive protocols, which are inherently much harder to verify, both because they have more corner cases and because higher levels of the protocol hierarchy carry insufficient information to imply the sharing states of cache lines at lower levels. Two methods are proposed. The first requires more manual effort, but allows the technique of [1] to be applied unchanged, apart from a guard-strengthening expression that is computed based on state residing outside the cluster being abstracted. The second requires less manual effort, can scale to deeper hierarchies of protocol implementations, and uses history variables that are computed much more modularly; this method also relies on the meta-circular definition framework. A non-inclusive protocol that could not be completely model checked even after visiting 1.5 billion states was verified using two model checks of roughly 0.25 billion states each.
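The explicit state enumeration underlying this approach can be sketched in miniature (the toy MSI-style protocol below is illustrative, not the verified hierarchical protocol): a breadth-first search over reachable global states checks a coherence invariant such as the single-writer property.

```python
from collections import deque

def successors(global_state):
    """Each node may acquire Shared or Modified access; acquiring Modified
    invalidates every other copy, as a real directory would."""
    for i, s in enumerate(global_state):
        if s != "S":
            nxt = list(global_state)
            nxt[i] = "S"
            # A Modified copy elsewhere must be downgraded first.
            yield tuple("S" if x == "M" else x for x in nxt)
        if s != "M":
            nxt = ["I"] * len(global_state)  # invalidate all other copies
            nxt[i] = "M"
            yield tuple(nxt)

def safe(global_state):
    """Single-writer invariant: at most one Modified copy at any time."""
    return sum(1 for s in global_state if s == "M") <= 1

def check(n_nodes=2):
    """BFS over reachable global states; returns (holds, states_visited)."""
    init = ("I",) * n_nodes
    seen, queue = {init}, deque([init])
    while queue:
        g = queue.popleft()
        if not safe(g):
            return False, len(seen)
        for nxt in successors(g):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True, len(seen)

print(check(2))
```

For a toy like this the state space is tiny; the paper's point is that for real hierarchical protocols it explodes into the billions, which is what motivates model checking abstracted per-cluster versions under assume/guarantee obligations instead of the flat search shown here.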