29,118 research outputs found

    Applying Formal Methods to Networking: Theory, Techniques and Applications

    Full text link
    Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, especially for the control and management planes, so every new need required a new protocol built from scratch. This led to an unwieldy, ossified Internet architecture resistant to any attempt at formal verification, and an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work in the space of clean-slate Internet design---especially the software-defined networking (SDN) paradigm---offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence of interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods, and a survey of its applications to networking. Comment: 30 pages, submitted to IEEE Communications Surveys and Tutorials
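    As a toy illustration of the kind of property such work targets (this sketch is ours, not from the survey; the hosts, switches, and forwarding rules are invented), the snippet below checks reachability and loop freedom on a hand-written forwarding graph. The surveyed tools encode comparable checks in model checkers or SMT/SAT solvers over real device configurations rather than in plain Python.

        # Minimal sketch (not from the paper): checking two classic network
        # properties -- reachability and loop freedom -- on a toy forwarding graph.
        # Node names and the topology are invented for illustration only.
        from collections import deque

        # Forwarding behaviour modelled as a directed graph: node -> next hops.
        forwarding = {
            "h1": ["s1"],
            "s1": ["s2", "s3"],
            "s2": ["s3"],
            "s3": ["h2"],
            "h2": [],
        }

        def reachable(graph, src, dst):
            """Does some forwarding path lead from src to dst?"""
            seen, frontier = {src}, deque([src])
            while frontier:
                node = frontier.popleft()
                if node == dst:
                    return True
                for nxt in graph.get(node, []):
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append(nxt)
            return False

        def loop_free(graph):
            """Invariant check: the forwarding relation contains no cycles."""
            WHITE, GREY, BLACK = 0, 1, 2
            colour = {n: WHITE for n in graph}
            def visit(node):
                colour[node] = GREY
                for nxt in graph.get(node, []):
                    if colour.get(nxt, WHITE) == GREY:
                        return False          # back edge => forwarding loop
                    if colour.get(nxt, WHITE) == WHITE and not visit(nxt):
                        return False
                colour[node] = BLACK
                return True
            return all(visit(n) for n in graph if colour[n] == WHITE)

        assert reachable(forwarding, "h1", "h2")   # packets from h1 can reach h2
        assert loop_free(forwarding)               # no forwarding loops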

    Property specification and static verification of UML models

    Get PDF
    We present a static verification tool (SVT), a system that performs static verification on UML models composed of UML class and state machine diagrams. Additionally, the SVT allows the user to add extra behavioral specifications, in the form of guards and effects, through a small action language. UML models are checked against properties written in a special-purpose property language that allows the user to specify linear temporal logic formulas that explicitly reason about UML components. Thus, the SVT provides a strong foundation for the design of reliable systems and a step towards model-driven security.
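    As a concrete illustration (not the SVT's actual property syntax), a linear temporal logic property over a hypothetical UML state machine Controller could require that every request is eventually answered and that two terminal states are mutually exclusive; the component and state names below are invented for this sketch:

        % Hypothetical LTL property over an invented UML state machine "Controller"
        \[
          \mathbf{G}\bigl(\mathit{Controller.inState(Requesting)} \rightarrow
              \mathbf{F}\,\mathit{Controller.inState(Granted)}\bigr)
          \;\wedge\;
          \mathbf{G}\,\neg\bigl(\mathit{Controller.inState(Granted)} \wedge
              \mathit{Controller.inState(Denied)}\bigr)
        \]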

    Proceedings of International Workshop "Global Computing: Programming Environments, Languages, Security and Analysis of Systems"

    Get PDF
    According to the IST/FET proactive initiative on GLOBAL COMPUTING, the goal is to obtain techniques (models, frameworks, methods, algorithms) for constructing systems that are flexible, dependable, secure, robust and efficient. The dominant concerns are not those of representing and manipulating data efficiently but rather those of handling the co-ordination and interaction, security, reliability, robustness, failure modes, and control of risk of the entities in the system and the overall design, description and performance of the system itself. Completely different paradigms of computer science may have to be developed to tackle these issues effectively. The research should concentrate on systems having the following characteristics:
    • The systems are composed of autonomous computational entities where activity is not centrally controlled, either because global control is impossible or impractical, or because the entities are created or controlled by different owners.
    • The computational entities are mobile, due to the movement of the physical platforms or by movement of the entity from one platform to another.
    • The configuration varies over time. For instance, the system is open to the introduction of new computational entities and likewise their deletion. The behaviour of the entities may vary over time.
    • The systems operate with incomplete information about the environment. For instance, information becomes rapidly out of date and mobility requires information about the environment to be discovered.
    The ultimate goal of the research action is to provide a solid scientific foundation for the design of such systems, and to lay the groundwork for achieving effective principles for building and analysing such systems. This workshop covers the aspects related to languages and programming environments as well as analysis of systems and resources, involving 9 projects (AGILE, DART, DEGAS, MIKADO, MRG, MYTHS, PEPITO, PROFUNDIS, SECURE) out of the 13 funded under the initiative. One year after the start of the projects, the goal of the workshop is to take stock of the state of the art on the topics covered by the two clusters related to programming environments and analysis of systems, and to devise strategies and new ideas to profitably continue the research effort towards the overall objective of the initiative. We acknowledge the Dipartimento di Informatica and Tlc of the University of Trento, the Comune di Rovereto and the project DEGAS for partially funding the event, and the Events and Meetings Office of the University of Trento for the valuable collaboration.

    Towards Identifying and closing Gaps in Assurance of autonomous Road vehicleS - a collection of Technical Notes Part 1

    Get PDF
    This report provides an introduction and overview of the Technical Topic Notes (TTNs) produced in the Towards Identifying and closing Gaps in Assurance of autonomous Road vehicleS (Tigars) project. These notes aim to support the development and evaluation of autonomous vehicles. Part 1 addresses: Assurance (overview and issues), Resilience and Safety Requirements, Open Systems Perspective, and Formal Verification and Static Analysis of ML Systems. Part 2 addresses: Simulation and Dynamic Testing, Defence in Depth and Diversity, Security-Informed Safety Analysis, and Standards and Guidelines.

    Multilevel Contracts for Trusted Components

    Full text link
    This article contributes to the design and verification of trusted components and services. Contracts are expressed at several levels so as to cover different facets, such as component consistency, compatibility, or correctness. The article introduces multilevel contracts and a design-and-verification process for handling and analysing these contracts in component models. The approach is implemented with the COSTO platform, which supports the Kmelia component model. A case study illustrates the overall approach. Comment: In Proceedings WCSI 2010, arXiv:1010.233
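    To make the idea of contracts at multiple levels concrete, the sketch below (plain Python, not the Kmelia/COSTO notation; the BoundedBuffer component and its checks are invented for illustration) layers a structural invariant, a usage protocol, and per-service preconditions on a single component, with each violation reporting which facet was broken:

        # Illustrative sketch only: one way to read "multilevel contracts" as
        # layered checks on a component. The component, its levels, and the
        # checking code are invented; they do not reproduce Kmelia/COSTO syntax.

        class ContractViolation(Exception):
            pass

        class BoundedBuffer:
            """A component carrying contracts at three levels."""

            # Level 1 -- consistency: structural invariant on the state.
            def _invariant(self):
                if not (0 <= len(self._items) <= self._capacity):
                    raise ContractViolation("consistency: size outside [0, capacity]")

            def __init__(self, capacity):
                self._capacity, self._items = capacity, []
                self._invariant()

            def put(self, item):
                # Level 3 -- correctness: precondition of the 'put' service.
                if len(self._items) >= self._capacity:
                    raise ContractViolation("correctness: put on a full buffer")
                self._items.append(item)
                self._invariant()

            def get(self):
                # Level 2 -- compatibility: 'get' only fits a non-empty buffer.
                if not self._items:
                    raise ContractViolation("compatibility: get before any put")
                item = self._items.pop(0)
                self._invariant()
                return item

        buf = BoundedBuffer(capacity=2)
        buf.put("a")
        assert buf.get() == "a"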

    Policy Enforcement with Proactive Libraries

    Full text link
    Software libraries implement APIs that deliver reusable functionalities. To correctly use these functionalities, software applications must satisfy certain correctness policies, for instance policies about the order in which some API methods can be invoked and about the values that can be passed as parameters. If these policies are violated, applications may produce misbehaviors and failures at runtime. Although this problem is general, applications that incorrectly use API methods are more frequent in certain contexts. For instance, Android provides a rich and rapidly evolving set of APIs that might be used incorrectly by app developers, who often implement and publish faulty apps in the marketplaces. To mitigate this problem, we introduce the novel notion of proactive library, which augments classic libraries with the capability of proactively detecting and healing misuses at runtime. Proactive libraries blend libraries with multiple proactive modules that collect data, check the correctness policies of the libraries, and heal executions as soon as the violation of a correctness policy is detected. The proactive modules can be activated or deactivated at runtime by the users and can be implemented without requiring any change to the original library or any knowledge about the applications that may use the library. We evaluated proactive libraries in the context of the Android ecosystem. Results show that proactive libraries can automatically overcome several problems related to bad resource usage at the cost of a small overhead. Comment: O. Riganelli, D. Micucci and L. Mariani, "Policy Enforcement with Proactive Libraries," 2017 IEEE/ACM 12th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS), Buenos Aires, Argentina, 2017, pp. 182-19
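    A minimal sketch of the idea, assuming a hypothetical camera-like resource and a simplified lifecycle hook (this is not the paper's Android implementation, which instruments real APIs): a proactive module wraps the library, records that open() has not yet been matched by release(), and heals the policy violation by releasing the resource itself when the application is paused.

        # Simplified, invented sketch of a proactive module: it collects usage
        # data, checks the policy "open() must be followed by release() before
        # the app is paused", and heals violations without changing the library.

        class FakeCamera:
            """Stand-in for a library-managed resource (invented for illustration)."""
            def __init__(self):
                self.opened = False
            def open(self):
                self.opened = True
            def release(self):
                self.opened = False

        class ProactiveCameraModule:
            """Wraps the resource; can be activated or deactivated at runtime."""
            def __init__(self, camera, enabled=True):
                self._camera = camera
                self._enabled = enabled
                self._open_unreleased = False

            def open(self):
                self._camera.open()
                self._open_unreleased = True     # collect: remember pending release

            def release(self):
                self._camera.release()
                self._open_unreleased = False

            def on_app_paused(self):
                # check + heal: if the app pauses while the resource is still
                # held, the policy is violated, so the module releases it itself.
                if self._enabled and self._open_unreleased:
                    self.release()

        camera = ProactiveCameraModule(FakeCamera())
        camera.open()
        camera.on_app_paused()        # a forgotten release() is healed automatically
        assert not camera._camera.opened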