9 research outputs found

    Software Architecture of Code Analysis Frameworks Matters: The Frama-C Example

    Implementing large software systems, such as software analyzers intended for industrial use, requires a well-engineered software architecture in order to ease daily development and maintenance throughout the lifecycle. If the analyzer is not a single tool but an open, extensible, collaborative framework in which external developers may develop plug-ins that collaborate with each other, such a well-designed architecture becomes even more important. In this experience report, we explain the difficulties of developing and maintaining open, extensible, collaborative analysis frameworks, through the example of Frama-C, a platform dedicated to the analysis of code written in C. We also present the new upcoming software architecture of Frama-C and how it aims to solve some of these issues. Comment: In Proceedings F-IDE 2015, arXiv:1508.0338

    Functional Requirements-Based Automated Testing for Avionics

    We propose and demonstrate a method for reducing testing effort in safety-critical software development under DO-178 guidance. We achieve this by applying Bounded Model Checking (BMC) to formal low-level requirements, in order to automatically generate tests that are good enough to replace existing labor-intensive test-writing procedures while maintaining independence from implementation artefacts. Given that existing manual processes are often empirical and subjective, we begin by formally defining a metric, which extends recognized best practice from code coverage analysis strategies, to generate tests that adequately cover the requirements. We then formulate the automated test generation procedure and apply its prototype in case studies with industrial partners. In review, the method developed here is demonstrated to significantly reduce the human effort for the qualification of software products under DO-178 guidance.

    Deductive Verification of Unmodified Linux Kernel Library Functions

    This paper presents results from the development and evaluation of a deductive verification benchmark consisting of 26 unmodified Linux kernel library functions implementing conventional memory and string operations. The formal contracts of the functions were extracted from their source code and represented in the form of preconditions and postconditions. The correctness of 23 functions was completely proved using the AstraVer toolset, although success for 11 of them required 2 new specification language constructs. Another 2 functions were proved after a minor modification of their source code, while the final one cannot be completely proved using the existing memory model. The benchmark can be used for the testing and evaluation of deductive verification tools and as a starting point for verifying other parts of the Linux kernel. Comment: 18 pages, 2 tables, 6 listings. Accepted to ISoLA 2018 conference, Evaluating Tools for Software Verification track

    Your Proof Fails? Testing Helps to Find the Reason

    Applying deductive verification to formally prove that a program respects its formal specification is a complex and time-consuming task, in particular because of the lack of feedback in case of proof failures. Along with a non-compliance between the code and its specification (due to an error in at least one of them), possible reasons for a proof failure include a missing or too-weak specification for a called function or a loop, and lack of time or simply the incapacity of the prover to finish a particular proof. This work proposes a new methodology in which test generation helps to identify the reason for a proof failure and to exhibit a counter-example clearly illustrating the issue. We describe how to transform an annotated C program into C code suitable for testing and illustrate the benefits of the method on comprehensive examples. The method has been implemented in STADY, a plugin of the software analysis platform FRAMA-C. Initial experiments show that detecting non-compliances and contract weaknesses allows most proof failures to be precisely diagnosed. Comment: 11 pages, 10 figures

    A Taxonomy for Classifying Runtime Verification Tools

    Over the last 15 years, Runtime Verification (RV) has grown into a diverse and active field, which has stimulated the development of numerous theoretical frameworks and tools. Many of the tools are at first sight very different and challenging to compare. Yet, there are similarities. In this work, we classify RV tools within a high-level taxonomy of concepts. We first present this taxonomy and discuss its different dimensions. Then, we survey RV tools and classify them according to the taxonomy. This paper constitutes a snapshot of the current state of the art and enables a comparison of existing tools.

    Common Specification Language for Static and Dynamic Analysis of C Programs

    Session: Volume II: Software development and system software & security: software verification and testing track. Various combinations of static and dynamic analysis techniques have recently been shown to be beneficial for software verification. A frequent obstacle to combining different tools in a completely automatic way is the lack of a common specification language. Our work proposes to translate a pre/post-based specification into executable C code. This paper presents E-ACSL, a subset of the ACSL specification language for C programs, and its automatic translator into C, implemented as a Frama-C plug-in. The resulting C code is executable and can be used by a dynamic analysis tool. We illustrate how the PathCrawler test generation tool automatically treats such pre- and postconditions specified as C functions.

    Actes des Cinquièmes journées nationales du Groupement De Recherche CNRS du Génie de la Programmation et du Logiciel

    This document contains the proceedings of the fifth national days of the CNRS Research Group on Programming and Software Engineering (GDR GPL), held in Nancy from 3 to 5 April 2013. The contributions presented in this document were selected by the GDR's various working groups. They consist of abstracts, new versions, posters, and demonstrations corresponding to work that has already been validated by the program committees of other conferences and journals and whose rights belong exclusively to their authors.

    Exploration of the relationship between tacit knowledge and software system test complexity

    This research has explored the relationship between system test complexity and tacit knowledge. It is proposed as part of this thesis that the process of system testing (comprising test planning, test development, test execution, test fault analysis, test measurement, and case management) is directly affected both by complexity associated with the system under test and by other sources of complexity, independent of the system under test but related to the wider process of system testing. While a certain amount of knowledge related to the system under test is inherently tacit in nature, and therefore difficult to make explicit, it has been found that a significant amount of knowledge relating to these other sources of complexity can indeed be made explicit. While the importance of explicit knowledge has been reinforced by this research, there has been a lack of evidence to suggest that the availability of tacit knowledge to a test team is of any less importance to the process of system testing when operating in a traditional software development environment. Participants commonly expressed the sentiment that, even though a considerable amount of explicit knowledge relating to the system is freely available, a good deal of the knowledge relating to the system under test that is demanded for effective system testing is actually tacit in nature (approximately 60% of participants operating in a traditional development environment, and 60% of participants operating in an agile development environment, expressed similar sentiments). To cater for the availability of tacit knowledge relating to the system under test, and indeed both the explicit and tacit knowledge required by system testing in general, an appropriate knowledge management structure needs to be in place. This would appear to be required irrespective of the employed development methodology.