    Quantitative Verification: Formal Guarantees for Timeliness, Reliability and Performance

    Computerised systems appear in almost all aspects of our daily lives, often in safety-critical scenarios such as embedded control systems in cars and aircraft or medical devices such as pacemakers and sensors. We are thus increasingly reliant on these systems working correctly, despite often operating in unpredictable or unreliable environments. Designers of such devices need ways to guarantee that they will operate in a reliable and efficient manner. Quantitative verification is a technique for analysing quantitative aspects of a system's design, such as timeliness, reliability or performance. It applies formal methods, based on a rigorous analysis of a mathematical model of the system, to automatically prove certain precisely specified properties, e.g. "the airbag will always deploy within 20 milliseconds after a crash" or "the probability of both sensors failing simultaneously is less than 0.001". The ability to formally guarantee quantitative properties of this kind is beneficial across a wide range of application domains. For example, in safety-critical systems, it may be essential to establish credible bounds on the probability with which certain failures or combinations of failures can occur. In embedded control systems, it is often important to comply with strict constraints on timing or resources. More generally, being able to derive guarantees on precisely specified levels of performance or efficiency is a valuable tool in the design of, for example, wireless networking protocols, robotic systems or power management algorithms, to name but a few. This report gives a short introduction to quantitative verification, focusing in particular on a widely used technique called model checking, and its generalisation to the analysis of quantitative aspects of a system such as timing, probabilistic behaviour or resource usage. The intended audience is industrial designers and developers of systems such as those highlighted above who could benefit from the application of quantitative verification, but lack expertise in formal verification or modelling.
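    To make the flavour of such an analysis concrete, the sketch below (not taken from the report; the three-state chain, its probabilities and the 0.25 threshold are illustrative assumptions) computes a bounded reachability probability on a small discrete-time Markov chain by value iteration, the kind of calculation probabilistic model checkers automate.

    # Probability of reaching an (absorbing) failure state within a bounded
    # number of steps, computed by backwards value iteration over a
    # discrete-time Markov chain. All numbers are made up for illustration.
    P = {
        "ok":       {"ok": 0.90, "degraded": 0.09, "failed": 0.01},
        "degraded": {"ok": 0.50, "degraded": 0.45, "failed": 0.05},
        "failed":   {"failed": 1.0},
    }

    def reachability_probability(P, target, steps):
        prob = {s: (1.0 if s == target else 0.0) for s in P}
        for _ in range(steps):
            prob = {s: (1.0 if s == target
                        else sum(p * prob[t] for t, p in succ.items()))
                    for s, succ in P.items()}
        return prob

    probs = reachability_probability(P, target="failed", steps=20)
    # A quantitative requirement such as "P(failure within 20 steps) < 0.25"
    # then becomes a simple check on the computed value.
    print(probs["ok"], probs["ok"] < 0.25)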

    An approach for the automatic verification of blockchain protocols: the Tweetchain case study

    This paper proposes a model-driven approach for the security modelling and analysis of blockchain-based protocols. The modelling is built upon the definition of a UML profile, which is able to capture transaction-oriented information. The analysis is based on existing formal analysis tools. In particular, the paper considers the Tweetchain protocol, a recent proposal that leverages online social networks, i.e., Twitter, to extend blockchain to domains with small-value transactions, such as the IoT. A specialized textual notation is added to the UML profile to capture the features of this protocol. Furthermore, a model transformation is defined to generate a Tamarin model from the UML models, via an intermediate well-known notation, i.e., the Alice & Bob notation. Finally, the Tamarin Prover is used to verify the model of the protocol against some security properties. This work extends a previous one, where the Tamarin formal models were generated by hand. A comparison of the analysis results, covering both functional and non-functional aspects, is also reported.
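    As a rough illustration of what such a transformation chain produces (my own sketch; the message steps, fact names and rule structure are assumptions, not the paper's actual UML profile or Tamarin output), the snippet below turns Alice & Bob style protocol steps into skeleton send/receive rewrite rules, one pair per message exchange.

    # Each Alice & Bob step "sender -> receiver : message" yields one send rule
    # and one receive rule; rules are kept as plain dicts rather than any
    # tool-specific syntax.
    steps = [
        ("A", "B", "hash(tweet_A)"),            # hypothetical Tweetchain-style step
        ("B", "A", "confirm(hash(tweet_A))"),
    ]

    def to_rules(steps):
        rules = []
        for i, (sender, receiver, msg) in enumerate(steps, start=1):
            rules.append({
                "name": f"{sender}_send_{i}",
                "premises": [f"State_{sender}_{i-1}({sender})"],
                "conclusions": [f"State_{sender}_{i}({sender})", f"Out({msg})"],
            })
            rules.append({
                "name": f"{receiver}_recv_{i}",
                "premises": [f"State_{receiver}_{i-1}({receiver})", f"In({msg})"],
                "conclusions": [f"State_{receiver}_{i}({receiver})"],
            })
        return rules

    for rule in to_rules(steps):
        print(rule["name"], rule["premises"], "->", rule["conclusions"])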

    On formalising and analysing the Tweetchain protocol

    Distributed Ledger Technology is demonstrating its capability to provide flexible frameworks for information assurance that can resist Byzantine failures and multiple-target attacks. The availability of development frameworks allows many applications to be built on this technology. By contrast, the verification of such applications is far from easy, since testing is not enough to guarantee the absence of security problems. The paper describes an experience in the modelling and security analysis of one of these applications by means of formal methods: in particular, we consider the Tweetchain protocol as a case study and we use the Tamarin Prover tool, which supports the modelling of a protocol as a multiset rewriting system and its analysis with respect to temporal first-order properties. With the aim of making the modelling and verification process reproducible and independent of the specific protocol, we present a general structure of the Tamarin Prover model and of the properties to be verified. Finally, we discuss the strengths and limitations of the Tamarin Prover approach considering three aspects: modelling, analysis and the verification process.
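    To give a concrete feel for the multiset rewriting view used by Tamarin (a minimal sketch of my own; the facts, rules and the "Forged" bad state are invented for illustration and are not the paper's model), the snippet below represents a state as a multiset of facts, rewrites it with simple rules, and searches exhaustively for a state violating a safety property.

    from collections import Counter

    # A state is a multiset of facts; a rule consumes its premises and adds its
    # conclusions. The toy "protocol" below can only forge a record if a key
    # has leaked.
    rules = [
        # (name, premises, conclusions)
        ("publish", Counter({"Fresh": 1}),                Counter({"Posted": 1, "Public": 1})),
        ("confirm", Counter({"Posted": 1}),               Counter({"Confirmed": 1})),
        ("tamper",  Counter({"Public": 1, "KeyLeak": 1}), Counter({"Forged": 1})),
    ]

    def enabled(state, premises):
        return all(state[f] >= n for f, n in premises.items())

    def apply_rule(state, premises, conclusions):
        new = state.copy()
        new.subtract(premises)
        new.update(conclusions)
        return +new  # drop zero counts

    def bad_fact_reachable(initial, bad="Forged", limit=10_000):
        frontier, seen = [initial], {frozenset(initial.items())}
        while frontier and len(seen) < limit:
            state = frontier.pop()
            if state[bad] > 0:
                return True
            for _, pre, post in rules:
                if enabled(state, pre):
                    nxt = apply_rule(state, pre, post)
                    key = frozenset(nxt.items())
                    if key not in seen:
                        seen.add(key)
                        frontier.append(nxt)
        return False

    print(bad_fact_reachable(Counter({"Fresh": 1})))                # False
    print(bad_fact_reachable(Counter({"Fresh": 1, "KeyLeak": 1})))  # True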

    Modeling and analysing an Industry 4.0 communication protocol

    Model-Driven Development of Aspect-Oriented Software Architectures

    The work presented in this master's thesis is an approach that takes advantage of Model-Driven Development for building aspect-oriented software architectures. Complete MDD support for the PRISMA approach is defined by providing code generation, verification and reusability properties. Pérez Benedí, J. (2007). Model-Driven Development of Aspect-Oriented Software Architectures. http://hdl.handle.net/10251/12451

    IFM2005 doctoral symposium on integrated formal methods, Eindhoven, The Netherlands, November 29, 2005

    Distributed Security Policy Analysis

    Computer networks have become an important part of modern society, and computer network security is crucial for their correct and continuous operation. The security aspects of computer networks are defined by network security policies. The term policy, in general, is defined as "a definite goal, course or method of action to guide and determine present and future decisions". In the context of computer networks, a policy is "a set of rules to administer, manage, and control access to network resources". Network security policies are enforced by special network appliances, so-called security controls. Different types of security policies are enforced by different types of security controls. Network security policies are hard to manage, and errors are quite common. The problem exists because network administrators do not have a good overview of the network, the defined policies and the interaction between them. Researchers have proposed different techniques for network security policy analysis, which aim to identify errors within policies so that administrators can correct them. There are three different solution approaches: anomaly analysis, reachability analysis and policy comparison. Anomaly analysis searches for potential semantic errors within policy rules, and can also be used to identify possible policy optimisations. Reachability analysis evaluates allowed communication within a computer network and can determine whether a certain host can reach a service or a set of services. Policy comparison compares two or more network security policies and represents the differences between them in an intuitive way. Although research in this field has been carried out for over a decade, there is still no clear answer on how to reduce policy errors. The different analysis techniques have their pros and cons, but none of them is a sufficient solution on its own. More precisely, they are mainly complements to each other, as one analysis technique finds policy errors which remain unknown to another. Therefore, to obtain a complete analysis of the computer network, multiple models must be instantiated. An analysis model that can perform all types of analysis techniques is desirable and has four main advantages. Firstly, the model can cover the greatest number of possible policy errors. Secondly, the computational overhead of instantiating the model is incurred only once. Thirdly, research effort is reduced because improvements and extensions to the model apply to all three analysis types at the same time. Fourthly, new algorithms can be evaluated by comparing their performance directly to each other. This work proposes a new analysis model which is capable of performing all three analysis techniques. Security policies and the network topology are represented by the so-called Geometric-Model. The Geometric-Model is a formal model based on set theory and a geometric interpretation of policy rules. Policy rules are defined according to the condition-action format: if the condition holds, then the action is applied. A security policy is expressed as a set of rules, a resolution strategy which selects the action when more than one rule applies, external data used by the resolution strategy, and a default action in case no rule applies. This work also introduces the concept of the Equivalent-Policy, which is calculated from the network topology and the policies involved. All analysis techniques are performed on it with much higher performance. A precomputation phase is required for two reasons. Firstly, security policies which modify the traffic must be transformed to obtain linear behaviour. Secondly, far fewer rules are required to represent the global behaviour of a set of policies than the sum of the rules in the involved policies. The analysis model can handle the most common security policies and is designed to be extensible to future security policy types. As already mentioned, the Geometric-Model can represent all types of security policies, but the calculation of the Equivalent-Policy has some small dependencies on the details of the different policy types. Therefore, the computation of the Equivalent-Policy must be adapted to support new types. Since the model and the computation of the Equivalent-Policy were designed to be extensible, the effort required to introduce a new security policy type is minimal. The anomaly analysis can be performed on computer networks containing different security policies. The policy comparison can perform an Implementation-Verification between high-level security requirements and an entire computer network containing different security policies, as well as a Change-Impact-Analysis of an entire network containing different security policies. The proposed model is implemented in a working prototype, and a performance evaluation has been carried out. The performance of the implementation is more than sufficient for real scenarios. Although the calculation of the Equivalent-Policy requires a significant amount of time, it is still manageable and is required only once. The execution of the different analysis techniques is fast, and generally the results are calculated in real time. The implementation also exposes an API for future integration in different frameworks or software packages. Based on the API, a complete tool has been implemented, with a graphical user interface and additional features.
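    A minimal sketch of the condition-action view this kind of model builds on (my own illustration, using plain integer ranges rather than the thesis' actual Geometric-Model, and a deliberately simplified shadowing check): rules match a point in a condition space, a first-match strategy resolves conflicts, and a rule entirely covered by an earlier one is reported as an anomaly.

    from dataclasses import dataclass

    @dataclass
    class Rule:
        src: range        # source address range (simplified to integers)
        dst_port: range   # destination port range
        action: str       # "allow" or "deny"

    policy = [
        Rule(range(10, 20), range(80, 81), "allow"),
        Rule(range(0, 256), range(80, 81), "deny"),
        Rule(range(10, 20), range(80, 81), "deny"),   # fully covered by rule 0
    ]

    def decide(policy, src, port, default="deny"):
        """First-match resolution strategy with a default action."""
        for rule in policy:
            if src in rule.src and port in rule.dst_port:
                return rule.action
        return default

    def covers(outer, inner):
        return outer.start <= inner.start and outer.stop >= inner.stop

    def shadowed(policy):
        """Rules whose whole condition is covered by a single earlier rule."""
        findings = []
        for i, later in enumerate(policy):
            for j, earlier in enumerate(policy[:i]):
                if covers(earlier.src, later.src) and covers(earlier.dst_port, later.dst_port):
                    findings.append((i, j))
                    break
        return findings

    print(decide(policy, src=12, port=80))   # "allow": rule 0 matches first
    print(shadowed(policy))                  # [(2, 0)]: rule 2 can never take effect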

    Performance Analysis and Resource Optimisation of Critical Systems Modelled by Petri Nets

    A critical system must fulfil its mission despite the presence of security problems. Such systems are typically deployed in heterogeneous environments, where they may be the target of intrusion attempts, theft of confidential information or other kinds of attacks. Systems generally have to be redesigned after a security incident occurs, which can lead to severe consequences, such as the enormous cost of re-implementing or reprogramming the whole system, as well as possible economic losses. Security must therefore be conceived as an integral part of system development and as a specific requirement on what the system must achieve (i.e., a non-functional requirement of the system). Thus, when designing critical systems it is essential to study the attacks that may occur and to plan how to react to them, in order to keep satisfying the functional and non-functional requirements of the system. Even when security issues are taken into account, it is also necessary to consider the costs incurred to guarantee a given level of security in critical systems. Indeed, security costs can be a very relevant factor, since they can span several dimensions, such as budget, performance and reliability. Many of these critical systems that incorporate fault-tolerance techniques (FT systems) to cope with security issues are complex systems that use resources which may be compromised (i.e., may fail) through the activation of faults and/or errors caused by possible attacks. These systems can be modelled as discrete-event systems in which resources are shared, also called resource-allocation systems. This thesis focuses on FT systems with shared resources modelled by means of Petri nets (PN). Such systems are generally so large that the exact computation of their performance becomes a very complex task, owing to the state-space explosion problem. As a result, any task requiring an exhaustive exploration of the state space becomes incomputable (within a reasonable time) for large systems. The main contributions of this thesis are threefold. First, it provides several models, using the Unified Modelling Language (UML) and Petri nets, that help bring security and fault-tolerance issues to the foreground during the system design phase, thus enabling, for example, the analysis of the trade-off between security and performance. Second, it provides several algorithms for computing performance (also under failure conditions) by calculating upper performance bounds, thereby avoiding the state-space explosion problem. Finally, it provides algorithms for computing how to compensate for the performance degradation that arises when an unexpected situation occurs in a fault-tolerant system.
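    As a minimal illustration of the modelling formalism the thesis relies on (my own sketch, not one of the thesis' models; the places, transitions and weights are assumptions), the snippet below encodes a tiny Petri net with a shared resource and applies the standard enabling and firing rules used when simulating, or bounding the performance of, such systems.

    # Places with token counts, and transitions given by their input and output
    # arcs (with weights). "resource" is a shared resource a task must acquire.
    marking = {"idle": 2, "resource": 1, "busy": 0, "done": 0}

    transitions = {
        "start":  ({"idle": 1, "resource": 1}, {"busy": 1}),
        "finish": ({"busy": 1},                {"done": 1, "resource": 1}),
    }

    def enabled(marking, inputs):
        return all(marking.get(p, 0) >= w for p, w in inputs.items())

    def fire(marking, name):
        inputs, outputs = transitions[name]
        if not enabled(marking, inputs):
            raise ValueError(f"{name} is not enabled")
        new = dict(marking)
        for p, w in inputs.items():
            new[p] -= w
        for p, w in outputs.items():
            new[p] = new.get(p, 0) + w
        return new

    m1 = fire(marking, "start")    # a task acquires the shared resource
    m2 = fire(m1, "finish")        # ...and releases it on completion
    print(m2)                      # {'idle': 1, 'resource': 1, 'busy': 0, 'done': 1}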