9 research outputs found

    Security Testing of C Software Using the SPIN Verifier

    The C language is widely used for developing tools in various application areas, and a number of C software tools are used in critical systems, such as medicine and transport. Correspondingly, the security of such programs should be thoroughly tested, i.e., it is important to develop techniques for detecting vulnerabilities in C programs. In this paper we present an approach for dynamic detection of software vulnerabilities using the SPIN model checker. We discuss how this approach can be implemented in order to automatically detect C code vulnerabilities and illustrate the proposed technique on a number of C programs that are widely used in various applications. A method for security testing of C programs using the well-known SPIN verifier is proposed. The classes of vulnerabilities in C programs that can be detected using the proposed approach are discussed. The results of computer experiments on detecting vulnerabilities in student implementations of array-processing algorithms are presented.
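
    To make the kind of defect targeted by these experiments concrete, the sketch below (not taken from the paper; all names are hypothetical) shows a student-style array routine with a missing upper-bound check, together with the safety assertion a test harness would exercise over a range of inputs. In the paper's approach the routine would be embedded in a SPIN/Promela test harness, which would choose the inputs non-deterministically; here a plain C driver simply enumerates them.

        /* Minimal sketch, assuming a hypothetical routine copy_slice under test.
         * The assertion fails for inputs with from + len > N, flagging the
         * missing upper-bound check in copy_slice. */
        #include <assert.h>

        #define N 5   /* logical size of the array under test */

        /* Copies len elements starting at index from.  Only the lower bounds
         * are checked, so a successful call may read past the logical end of
         * src: the kind of out-of-bounds defect the experiments look for. */
        static int copy_slice(const int *src, int *dst, int from, int len)
        {
            if (from < 0 || len < 0)      /* upper-bound check is missing */
                return -1;
            for (int i = 0; i < len; i++)
                dst[i] = src[from + i];
            return 0;
        }

        int main(void)
        {
            /* src is padded beyond N so the sketch itself stays free of
             * undefined behaviour; the logical array is the first N elements. */
            int src[2 * N] = {1, 2, 3, 4, 5};
            int dst[N];

            for (int from = 0; from <= N; from++) {
                for (int len = 0; len <= N; len++) {
                    int rc = copy_slice(src, dst, from, len);
                    /* Safety property: a successful call must stay within the
                     * logical bounds.  Violated, e.g., at from = 1, len = 5. */
                    assert(rc != 0 || from + len <= N);
                }
            }
            return 0;
        }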

    A New View on Classification of Software Vulnerability Mitigation Methods

    Software vulnerability mitigation is a well-known research area, and many methods have been proposed for it. Some papers try to classify these methods from different specific points of view. In this paper we aggregate all proposed classifications and present a comprehensive classification of vulnerability mitigation methods. We define a software vulnerability as a kind of software fault and derive the classes of software vulnerability mitigation methods accordingly. In this paper the software vulnerability mitigation methods are classified into vulnerability prevention, vulnerability tolerance, vulnerability removal, and vulnerability forecasting. We define each vulnerability mitigation method from our new point of view and indicate some methods for each class. Our general point of view helps to consider all of the proposed methods in this review. We also identify fault mitigation methods that might be effective in mitigating software vulnerabilities but have not yet been applied in this area. Based on that, new directions are suggested for future research.

    Verifying Mutable Systems

    Model checking has had much success in the verification of single-process and multi-process programs. However, model checkers assume an immutable topology, which limits the verification in several areas. Consider the security domain: model checkers have had success in the verification of unicast, structurally static protocols, but struggle to verify dynamic multicast cryptographic protocols. We give a formulation of dynamic model checking which extends traditional model checking by allowing structural changes, mutations, to the topology of multi-process network models. We introduce new mutation models in which the structural mutations take a primitive, non-primitive, or non-deterministic form, and analyze the general complexities of each. This extends traditional model checking and allows analysis in new areas. We provide a set of proof rules to verify safety properties on dynamic models and outline how they can be automated. We relate dynamic models to compositional reasoning, dynamic cutoffs, parametrized analysis, and previously established parametric assertions. We provide a proof of concept by analyzing a dynamic mutual exclusion protocol and a multicast cryptographic protocol.

    Lightweight and static verification of UML executable models

    Executable models play a key role in many development methods (such as MDD and MDA) by facilitating the immediate simulation/implementation of the software system under development. This is possible because executable models include a fine-grained specification of the system behaviour using an action language. Executable models are not a new concept but are now experiencing a comeback. As a relevant example, the OMG has recently published the first version of the “Foundational Subset for Executable UML Models” (fUML) standard, an executable subset of the UML that can be used to define, in an operational style, the structural and behavioural semantics of systems. The OMG has also published a beta version of the “Action Language for fUML” (Alf) standard, a concrete syntax conforming to the fUML abstract syntax that provides the constructs and textual notation to specify the fine-grained behaviour of systems. The OMG's support for executable models is substantially raising the interest of software companies in this topic. Given the increasing importance of executable models and the impact of their correctness on the final quality of software systems derived from them, the existence of methods to verify the correctness of executable models is becoming crucial. Otherwise, the quality of the executable models (and in turn the quality of the final system generated from them) will be compromised. Despite the number of research works targeting the verification of software models, their computational cost and poor feedback make them difficult to integrate into current software development processes. Therefore, there is a need for efficient and useful methods to check the correctness of executable models, and for tools integrated into the modelling tools used by designers. In this thesis we propose a verification framework to help designers improve the quality of their executable models. Our framework is composed of a set of lightweight static methods, i.e. methods that do not require executing the model in order to check the desired property. These methods are able to check several properties over the behavioural part of an executable model (for instance, over the set of operations that compose a behavioural executable model), such as syntactic correctness (i.e. all the operations in the behavioural model conform to the syntax of the language in which it is described), non-redundancy (i.e. there is no other operation with exactly the same behaviour), executability (i.e. after the execution of an operation, the reached system state is, in the case of strong executability, or may be, in the case of weak executability, consistent with the structural model and its integrity constraints) and completeness (i.e. all possible changes on the system state can be performed through the execution of the operations defined in the executable model). For incorrect models, the methods that compose our verification framework return meaningful feedback that helps repair the detected inconsistencies.

    Extending Model Checking with Dynamic Analysis

    In model-driven verification a model checker executes a program by embedding it within a test harness, thus admitting program verification without the need to translate the program, which runs as native code. Model checking techniques in which code is actually executed have recently gained popularity due to their ability to handle the full semantics of actual implementation languages and to support verification of rich properties. In this paper, we show that combination with dynamic analysis can, with relatively low overhead, considerably extend the capabilities of this style of model checking. In particular, we show how to use the CIL framework to instrument code in order to allow the SPIN model checker, when verifying C programs, to check additional properties, simulate system resets, and use local coverage information to guide the model checking search. An additional benefit of our approach is that instrumentations developed for model checking may be used without modification in testing or monitoring code. We are motivated by experience in applying model-driven verification to JPL-developed flight software modules, from which we take our example applications. We believe this is the first investigation in which an independent instrumentation for dynamic analysis has been integrated with model checking.
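
    To make the instrumentation idea more concrete, the following is a small hand-written C sketch of the sort of code such a CIL-based pass might emit; it is not the paper's actual output, and all identifiers are hypothetical. Probes record branch coverage and a reset hook re-initializes module state; a SPIN test harness driving this module could read the counters to guide its search and call the reset hook to simulate system resets.

        /* Hand-written approximation of instrumented code; not generated by CIL
         * and not taken from the paper.  All names are hypothetical. */

        #define NUM_PROBES 64

        /* One counter per instrumented branch/statement.  A test harness can
         * inspect this array to state coverage-based properties or to steer
         * the search toward unexplored branches. */
        static unsigned coverage[NUM_PROBES];
        #define PROBE(id) (coverage[(id)]++)

        /* Module state of the (hypothetical) code under test. */
        static int device_mode;

        /* Reset hook: re-initializes module state so the harness can simulate
         * a system reset at chosen points; coverage counters persist so the
         * search heuristic can still use them. */
        void module_reset(void)
        {
            PROBE(0);
            device_mode = 0;
        }

        /* Original function with a probe inserted on each branch. */
        int set_mode(int requested)
        {
            PROBE(1);
            if (requested < 0 || requested > 3) {
                PROBE(2);
                return -1;            /* invalid request rejected */
            }
            PROBE(3);
            device_mode = requested;
            return 0;
        }

        int get_mode(void)            { return device_mode; }
        unsigned probe_count(int id)  { return coverage[id]; }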
