17 research outputs found

    Automated verification of shape, size and bag properties.

    In recent years, separation logic has emerged as a contender for formal reasoning about heap-manipulating imperative programs. Recent work has focused on specialised provers that are mostly based on fixed sets of predicates. To improve expressivity, we have proposed a prover that can automatically handle user-defined predicates. These shape predicates allow programmers to describe a wide range of data structures together with their associated size properties. In the current work, we enhance this prover with support for a new type of constraint, namely bag (multiset) constraints. With this extension, we can capture the reachable nodes (or values) inside a heap predicate as a bag constraint. Consequently, we are able to prove properties about the actual values stored inside a data structure.
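
    To illustrate the idea behind bag constraints, here is a minimal executable sketch — in Python rather than the prover's specification language, with hypothetical helper names: the bag (multiset) of values reachable from a list head is collected, and a sorted insertion is checked to grow that bag by exactly the inserted value.

```python
from collections import Counter

class Node:
    """A singly linked list node."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def bag_of(node):
    """Collect the bag (multiset) of values reachable from a list head."""
    bag = Counter()
    while node is not None:
        bag[node.value] += 1
        node = node.next
    return bag

def insert_sorted(head, value):
    """Insert a value into a sorted list, preserving sortedness."""
    if head is None or value <= head.value:
        return Node(value, head)
    head.next = insert_sorted(head.next, value)
    return head

# The bag property: insertion changes the reachable bag by exactly
# the inserted value, whatever the shape of the list.
xs = Node(1, Node(3, Node(3)))
before = bag_of(xs)
xs = insert_sorted(xs, 2)
after = bag_of(xs)
assert after == before + Counter([2])
```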

    Three variations of observation equivalence preserving synthesis abstraction

    In a previous paper we introduced the notion of synthesis abstraction, which allows efficient compositional synthesis of maximally permissive supervisors for large-scale systems of composed finite-state automata. In the current paper, observation equivalence is studied in relation to synthesis abstraction. It is shown that general observation equivalence is not useful for synthesis abstraction. Instead, we introduce additional conditions strengthening observation equivalence so that it can be used with the compositional synthesis method. The paper concludes with an example showing the suitability of these relations for achieving substantial state reduction while computing a modular supervisor.

    Synthesis observation equivalence and weak synthesis observation equivalence

    This working paper proposes an algorithm to simplify automata in such a way that compositional synthesis results are preserved in every possible context. It relaxes some requirements of synthesis observation equivalence from previous work, so that better abstractions can be obtained. The paper describes the algorithm, adapted from known bisimulation equivalence algorithms, for the improved abstraction method. The algorithm has been implemented in the DES software tool Supremica and has been used to compute modular supervisors for several large benchmark examples. It successfully computes modular supervisors for systems with more than 10¹² reachable states.
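
    The refinement idea that such algorithms build on can be sketched with plain (strong) bisimulation, which is simpler than the synthesis observation equivalences the paper actually develops; the function name and automaton encoding below are illustrative assumptions, not Supremica's API.

```python
def bisimulation_classes(states, transitions):
    """Naive partition refinement: split blocks until, for every event,
    all states in a block reach the same set of blocks."""
    # transitions: dict mapping (state, event) -> set of successor states.
    events = {e for (_, e) in transitions}
    partition = [set(states)]  # start with a single block of all states
    changed = True
    while changed:
        changed = False
        def block_of(s):
            return next(i for i, b in enumerate(partition) if s in b)
        new_partition = []
        for block in partition:
            # Group states by their per-event successor-block signature.
            groups = {}
            for s in block:
                sig = tuple(
                    (e, tuple(sorted({block_of(t)
                                      for t in transitions.get((s, e), set())})))
                    for e in sorted(events))
                groups.setdefault(sig, set()).add(s)
            if len(groups) > 1:
                changed = True
            new_partition.extend(groups.values())
        partition = new_partition
    return partition

# s0 and s1 both take event 'a' to s2, so they are bisimilar and merge.
states = {'s0', 's1', 's2'}
transitions = {('s0', 'a'): {'s2'}, ('s1', 'a'): {'s2'}}
parts = bisimulation_classes(states, transitions)
```

    Abstraction methods in this line of work replace an automaton by its quotient under such an equivalence, shrinking the state space before composing it with the rest of the system.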

    Five abstraction rules to remove transitions while preserving compositional synthesis results

    This working paper investigates under which conditions transitions can be removed from an automaton while preserving important synthesis properties. The work is part of a framework for compositional synthesis of least restrictive controllable and nonblocking supervisors for modular discrete event systems. The method for transition removal complements previous results, which are largely focused on state merging. Issues concerning transition removal in synthesis are discussed, and redirection maps are introduced to enable a supervisor to process an event, even though the corresponding transition is no longer present in the model. Based on the results, different techniques are proposed to remove controllable and uncontrollable transitions, and an example shows the potential of the method for practical problems.
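
    The role of a redirection map can be sketched as a lookup consulted when an observed event has no transition in the abstracted model; the encoding below is an illustrative assumption, not the paper's construction.

```python
# Automaton transitions: (state, event) -> next state (deterministic sketch).
transitions = {
    ('q0', 'start'): 'q1',
    ('q1', 'work'):  'q1',
    ('q1', 'stop'):  'q0',
}

# Suppose abstraction removes the self-loop ('q1', 'work').
removed = ('q1', 'work')
del transitions[removed]

# A redirection map tells the supervisor where to continue when it
# observes an event whose transition was removed from the abstraction.
redirection = {removed: 'q1'}

def step(state, event):
    """Advance the supervisor, consulting the redirection map as a fallback."""
    if (state, event) in transitions:
        return transitions[(state, event)]
    return redirection.get((state, event))  # None if the event is illegal

assert step('q1', 'work') == 'q1'   # handled via redirection
assert step('q0', 'start') == 'q1'  # ordinary transition
```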

    Understanding and Evaluating Assurance Cases

    Assurance cases are a method for providing assurance for a system by giving an argument to justify a claim about the system, based on evidence about its design, development, and tested behavior. In comparison with assurance based on guidelines or standards (which essentially specify only the evidence to be produced), the chief novelty in assurance cases is provision of an explicit argument. In principle, this can allow assurance cases to be more finely tuned to the specific circumstances of the system, and more agile than guidelines in adapting to new techniques and applications. The first part of this report (Sections 1-4) provides an introduction to assurance cases. Although this material should be accessible to all those with an interest in these topics, the examples focus on software for airborne systems, traditionally assured using the DO-178C guidelines and their predecessors. A brief survey of some existing assurance cases is provided in Section 5. The second part (Section 6) considers the criteria, methods, and tools that may be used to evaluate whether an assurance case provides sufficient confidence that a particular system or service is fit for its intended use. An assurance case cannot provide unequivocal "proof" for its claim, so much of the discussion focuses on the interpretation of such less-than-definitive arguments, and on methods to counteract confirmation bias and other fallibilities in human reasoning.

    Mathematics in Software Reliability and Quality Assurance

    This monograph concerns the mathematical aspects of software reliability and quality assurance and consists of 11 technical papers in this emerging area. Included are the latest research results related to formal methods and design, automatic software testing, software verification and validation, coalgebra theory, automata theory, hybrid systems, and software reliability modeling and assessment.

    Generation of model-based safety arguments from automatically allocated safety integrity levels

    To certify safety-critical systems, assurance arguments linking evidence of safety to appropriate requirements must be constructed. However, modern safety-critical systems feature increasing complexity and integration, which render manual approaches impractical to apply. This thesis addresses this problem by introducing a model-based method, with an exemplary application based on the aerospace domain.

    Previous work has partially addressed this problem for slightly different applications, including verification-based, COTS, product-line and process-based assurance. Each of these approaches is applicable to a specialised case and does not deliver a solution applicable to a generic system in a top-down process. This thesis argues that such a solution is feasible and can be achieved based on the automatic allocation of safety requirements onto a system's architecture. This automatic allocation is a recent development which combines model-based safety analysis and optimisation techniques. The proposed approach emphasises the use of model-based safety analysis, such as HiP-HOPS, to maximise the benefits towards the system development lifecycle.

    The thesis investigates the background and earlier work regarding construction of safety arguments, safety requirements allocation and optimisation. A method for addressing the problem of optimal safety requirements allocation is first introduced, using the Tabu Search optimisation metaheuristic. The method delivers satisfactory results that are further exploited for the construction of safety arguments. Using the produced requirements allocation, an instantiation algorithm is applied to a generic, standards-compliant safety argument pattern to automatically construct an argument establishing the claim that a system's safety requirements have been met. This argument is hierarchically decomposed and shows how system and subsystem safety requirements are satisfied by architectures and analyses at low levels of decomposition. Evaluation on two abstract case studies demonstrates the feasibility and scalability of the method and indicates good performance of the proposed algorithms. Limitations and potential areas of further investigation are identified.
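
    As a rough illustration of Tabu-Search-based requirement allocation — with a toy cost model and path constraints, not the thesis's HiP-HOPS-driven formulation — an allocation of integrity levels might be searched for as follows:

```python
import random

# Toy model: allocate integrity levels 0..4 to components so that every
# "failure path" (a set of components) has a total level >= its requirement,
# at minimum cost. Costs per level are illustrative, not from the thesis.
COST = [0, 10, 20, 40, 80]
COMPONENTS = ['sensor', 'controller', 'actuator']
PATHS = [({'sensor', 'controller'}, 4), ({'controller', 'actuator'}, 3)]

def feasible(alloc):
    return all(sum(alloc[c] for c in path) >= req for path, req in PATHS)

def cost(alloc):
    return sum(COST[level] for level in alloc.values())

def tabu_search(iterations=200, tabu_len=5, seed=0):
    rng = random.Random(seed)  # reserved for randomised variants
    current = {c: 4 for c in COMPONENTS}  # start from a trivially safe allocation
    best = dict(current)
    tabu = []
    for _ in range(iterations):
        # Neighbourhood: raise or lower one component's level by one step.
        moves = [(c, d) for c in COMPONENTS for d in (-1, 1)
                 if 0 <= current[c] + d <= 4 and (c, d) not in tabu]
        candidates = []
        for c, d in moves:
            neighbour = dict(current)
            neighbour[c] += d
            if feasible(neighbour):
                candidates.append((cost(neighbour), c, d, neighbour))
        if not candidates:
            break
        candidates.sort(key=lambda t: t[0])   # take the cheapest neighbour
        _, c, d, current = candidates[0]
        tabu.append((c, -d))                  # forbid undoing the move
        tabu = tabu[-tabu_len:]
        if cost(current) < cost(best):
            best = dict(current)
    return best

best = tabu_search()
```

    The tabu list lets the search accept sideways moves without immediately undoing them, which is what distinguishes Tabu Search from plain hill climbing.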

    From Resilience-Building to Resilience-Scaling Technologies: Directions -- ReSIST NoE Deliverable D13

    This document is the second product of workpackage WP2, "Resilience-building and -scaling technologies", in the programme of jointly executed research (JER) of the ReSIST Network of Excellence. The problem that ReSIST addresses is achieving sufficient resilience in the immense systems of ever-evolving networks of computers and mobile devices, tightly integrated with human organisations and other technology, that are increasingly becoming a critical part of the information infrastructure of our society. This second deliverable, D13, provides a detailed list of research gaps identified by experts from the four working groups related to assessability, evolvability, usability and diversity.

    Automated analysis of software product lines with orthogonal variability models: Extending the fama ecosystem

    Software product line engineering is a software development paradigm that enables the creation of a family of software products by reusing a common set of software assets. In this paradigm, variability models are central artefacts: they document the variability among the products of a product line. Over the last twenty years, a number of variability modelling techniques have been proposed to document and manage variability, such as feature modelling, decision modelling and orthogonal variability modelling. The most popular is feature modelling, in which feature models compactly represent all the products of a product line in terms of features. The automated analysis of variability models is the computer-aided extraction of information from variability models. It is an active research area that has received the attention of researchers for the last twenty years. Much of that research has focused on feature models, yielding a set of analysis operations, techniques and tools for the automated analysis of such models. With the appearance of other variability models, the need has arisen to provide new techniques and tools to support their automated analysis. In addition, there is a need to extend variability with attributes, so that analysis accounts not only for variability in terms of functional features but also in terms of attributes.

    Variability models usually contain elements that are used only to structure the variability of the product line and therefore have no impact on the derived models, such as the requirements, design or implementation models. These elements are known as abstract elements. Most variability modelling languages provide no explicit way to express abstract elements. Moreover, most current approaches for the automated analysis of variability models can only reason about combinations of all the elements in the model, not about those that are relevant to the user, i.e. those that have some impact on other models of the product line. Abstract elements should therefore be expressed explicitly in variability models, so that variability models can be analysed taking only the relevant elements into account. The orthogonal variability model is a modelling language for defining the variability of a software product line. It is a common notation in the product-line community that relates the variability in base models such as the requirements, design, component and test models. This doctoral thesis presents a set of techniques and tools to support the automated analysis of orthogonal variability models. An important advantage of our contribution is its support for attributes and abstract elements. First, we make abstract elements explicit in orthogonal variability models and provide two techniques to automate the analysis of these models: one that omits abstract elements, and one that takes every element of the model into account.

    Second, we provide a technique to enrich orthogonal variability models with attributes and automate their analysis. Our contributions have been integrated into a tool built as part of the FaMa ecosystem, a framework for the analysis of variability models developed by our research group. To demonstrate the effectiveness of our techniques and our analysis tool, we present an evaluation using a case study from the German automotive industry. This evaluation proved useful for detecting false optional elements and dead elements in the orthogonal variability model of that product line, and for verifying constraints over the attributes of the model.
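
    The analyses mentioned above — detecting dead and false optional elements — can be sketched by exhaustive enumeration over a toy model. FaMa delegates to constraint solvers rather than enumerating, and all names below are illustrative assumptions.

```python
from itertools import product

# Toy variability model: features with propositional selection constraints.
FEATURES = ['base', 'gps', 'screen', 'legacy']
CONSTRAINTS = [
    lambda s: s['base'],                         # the root is always selected
    lambda s: not s['gps'] or s['screen'],       # gps requires screen
    lambda s: not s['legacy'] or not s['base'],  # legacy excludes base -> dead
    lambda s: not s['base'] or s['screen'],      # base requires screen
]

def valid_products():
    """Enumerate every selection satisfying all constraints."""
    for bits in product([False, True], repeat=len(FEATURES)):
        selection = dict(zip(FEATURES, bits))
        if all(c(selection) for c in CONSTRAINTS):
            yield selection

def dead_features():
    """Features selected in no valid product."""
    products = list(valid_products())
    return {f for f in FEATURES if not any(p[f] for p in products)}

def false_optional_features():
    """Optional features that nevertheless appear in every valid product."""
    products = list(valid_products())
    return {f for f in FEATURES
            if f != 'base' and all(p[f] for p in products)}

assert dead_features() == {'legacy'}
assert false_optional_features() == {'screen'}
```

    Here `legacy` is dead because it contradicts the mandatory root, and `screen` is false optional because a constraint forces it into every product even though it is modelled as optional.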