
    Faster linearizability checking via P-compositionality

    Linearizability is a well-established consistency and correctness criterion for concurrent data types. An important feature of linearizability is Herlihy and Wing's locality principle, which says that a concurrent system is linearizable if and only if all of its constituent parts (so-called objects) are linearizable. This paper presents P-compositionality, which generalizes the idea behind the locality principle to operations on the same concurrent data type. We implement P-compositionality in a novel linearizability checker. Our experiments with over nine implementations of concurrent sets, including Intel's TBB library, show that our linearizability checker is one order of magnitude faster and/or more space efficient than the state-of-the-art algorithm. Comment: 15 pages, 2 figures.
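
    To make the locality principle concrete, here is a minimal, purely illustrative sketch (not the paper's algorithm; its checker is far more sophisticated): a completed history is split into per-object sub-histories, and each is checked independently by brute force against a register's sequential specification while respecting real-time order. All names (linearizable, check_object, the event tuple layout) are hypothetical.

        from collections import defaultdict
        from itertools import permutations

        # Event tuple: (obj, op, value, invoke, ret); invoke/ret are
        # real-time stamps of the operation's invocation and response.

        def respects_real_time(seq):
            # Linearizability must preserve real-time order: if b completed
            # before a was invoked, a may not precede b in the linearization.
            for i, a in enumerate(seq):
                for b in seq[i + 1:]:
                    if b[4] < a[3]:
                        return False
            return True

        def register_spec(seq):
            # Sequential specification of a read/write register: every read
            # returns the most recently written value (None before any write).
            current = None
            for _, op, value, _, _ in seq:
                if op == "write":
                    current = value
                elif value != current:
                    return False
            return True

        def check_object(events):
            # Brute force over linearizations: exponential, demo only.
            return any(respects_real_time(p) and register_spec(p)
                       for p in permutations(events))

        def linearizable(history):
            # Locality principle: the history is linearizable iff every
            # per-object sub-history is, so objects are checked independently.
            per_object = defaultdict(list)
            for event in history:
                per_object[event[0]].append(event)
            return all(check_object(evs) for evs in per_object.values())

        # Two registers whose operations overlap in time; each sub-history
        # linearizes on its own, so the whole history does too.
        history = [("r1", "write", 1, 0, 2), ("r1", "read", 1, 3, 4),
                   ("r2", "write", 5, 0, 1), ("r2", "read", 5, 2, 3)]
        assert linearizable(history)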

    A Methodology and Supporting Tools for the Development of Component-Based Embedded Systems.

    The paper presents a methodology and supporting tools for developing component-based embedded systems running on resource-limited hardware platforms. The methodology combines two complementary component frameworks in an integrated tool chain: BIP and Think. BIP is a framework for model-based development including a language for the description of heterogeneous systems, as well as associated simulation and verification tools. Think is a software component framework for the generation of small-footprint embedded systems. The tool chain allows generation, from system models described in BIP, of a set of functionally equivalent Think components. From these and libraries including OS services for a given hardware platform, a minimal system can be generated. We illustrate the results by modeling and implementing a software MPEG encoder on an iPod.

    Applying Formal Methods to Networking: Theory, Techniques and Applications

    Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, especially for the control and management planes, thus requiring a new protocol built from scratch for every new need. This led to an unwieldy, ossified Internet architecture resistant to any attempts at formal verification, and an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work in the space of clean-slate Internet design, especially the software defined networking (SDN) paradigm, offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence of interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods, together with a survey of its applications to networking. Comment: 30 pages, submitted to IEEE Communications Surveys and Tutorials.

    Modelling and analysis of parallel information systems.

    This thesis presents an investigation of the modelling and analysis of parallel information systems. The research was motivated by recent developments in networks and in powerful, low-cost, desktop multiprocessors. An integrated approach to the construction of parallel information systems was developed, focussed on the modelling, verification and simulation of such systems. The thesis demonstrates how Petri nets can be used for the modelling and analysis of parallel information systems: place transition nets for entity life histories, and coloured Petri nets for complex parallel information systems. These tools were integrated into a comprehensive framework for the modelling and analysis of complex parallel information systems, and the framework was tested using a comprehensive case study. The thesis concludes that Petri nets are an ideal tool for the modelling and analysis of complex parallel systems. Verification is possible, with deadlocks and similar properties being easily identified. Further, the transformation rules proved beneficial to the process of moving from one model to another. Finally, simulation of parallel behaviour was possible because the underlying models captured the notion of parallelism.
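
    As an illustration of the token-game semantics underlying this kind of analysis, the sketch below (toy net and names hypothetical, not from the thesis's case study) computes which transitions of a place/transition net are enabled and reports a deadlock when no transition can fire.

        # Minimal token game for a place/transition net: a transition is
        # enabled when every input place holds enough tokens; a marking
        # with no enabled transition is a deadlock.

        def enabled(marking, transitions):
            return [t for t, (pre, _) in transitions.items()
                    if all(marking.get(p, 0) >= n for p, n in pre.items())]

        def fire(marking, transitions, t):
            pre, post = transitions[t]
            m = dict(marking)
            for p, n in pre.items():
                m[p] -= n          # consume tokens from input places
            for p, n in post.items():
                m[p] = m.get(p, 0) + n   # produce tokens in output places
            return m

        # Toy net: t1 moves a token from p1 to p2; t2 needs tokens in both
        # p2 and p3, so after firing t1 the net is stuck.
        transitions = {
            "t1": ({"p1": 1}, {"p2": 1}),
            "t2": ({"p2": 1, "p3": 1}, {"p1": 1}),
        }
        m = {"p1": 1}
        while enabled(m, transitions):
            m = fire(m, transitions, enabled(m, transitions)[0])
        print("deadlock at marking:", m)   # -> {'p1': 0, 'p2': 1}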

    Special section on advances in reachability analysis and decision procedures: contributions to abstraction-based system verification

    Reachability analysis asks whether a system can evolve from legitimate initial states to unsafe states. It is thus a fundamental tool in the validation of computational systems - be they software, hardware, or a combination thereof. We recall a standard approach for reachability analysis, which captures the system in a transition system, forms another transition system as an over-approximation, and performs an incremental fixed-point computation on that over-approximation to determine whether unsafe states can be reached. We show this method to be sound for proving the absence of errors, and discuss its limitations for proving the presence of errors, as well as some means of addressing them. We then sketch how program annotations for data integrity constraints and interface specifications - as in Bertrand Meyer's paradigm of Design by Contract - can facilitate the validation of modular programs, e.g., by obtaining more precise verification conditions for software verification supported by automated theorem proving. We then recap how the decision problem of satisfiability for formulae of logics with theories - e.g., bit-vector arithmetic - can be used to construct an over-approximating transition system for a program. Programs with data types comprised of bit-vectors of finite width require bespoke decision procedures for satisfiability. Finite-width data types challenge the reduction of that decision problem to one that off-the-shelf tools can solve effectively, e.g., SAT solvers for propositional logic. In that context, we recall the Tseitin encoding, which converts formulae from that logic into conjunctive normal form - the standard format for most SAT solvers - with only linear blow-up in the size of the formula, but a linear increase in the number of variables. Finally, we discuss the contributions that the three papers in this special section make in the areas sketched above. © Springer-Verlag 2009
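
    A small sketch may help make the Tseitin encoding concrete: each subformula receives a fresh variable together with clauses asserting their equivalence, so the resulting CNF grows only linearly in the formula while introducing linearly many variables. The formula representation and names below are illustrative, not taken from the special section.

        # Tseitin encoding for formulas built from variables (strings) and
        # ('and'|'or'|'not', ...) tuples. Literals are strings; a leading
        # '-' denotes negation, as in DIMACS-style CNF.

        counter = 0
        def fresh():
            global counter
            counter += 1
            return f"_t{counter}"

        def neg(lit):
            return lit[1:] if lit.startswith("-") else "-" + lit

        def tseitin(f, clauses):
            """Return a literal equisatisfiable with f; the defining
            clauses are appended to `clauses`."""
            if isinstance(f, str):
                return f
            op, *subs = f
            lits = [tseitin(s, clauses) for s in subs]
            v = fresh()
            if op == "not":
                a, = lits
                clauses += [[neg(v), neg(a)], [v, a]]        # v <-> !a
            elif op == "and":
                a, b = lits
                clauses += [[neg(v), a], [neg(v), b],
                            [v, neg(a), neg(b)]]             # v <-> (a & b)
            else:  # "or"
                a, b = lits
                clauses += [[v, neg(a)], [v, neg(b)],
                            [neg(v), a, b]]                  # v <-> (a | b)
            return v

        # Example: (a & b) | !c yields three definitions plus one unit
        # clause asserting the root - linear in the formula's size.
        cnf = []
        root = tseitin(("or", ("and", "a", "b"), ("not", "c")), cnf)
        cnf.append([root])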

    Type-Safe Operating System Abstractions

    Operating systems and low-level applications are usually written in languages like C and assembly, which provide access to low-level abstractions. These languages have unsafe type systems that allow many bugs to slip by programmers. For example, in 1988, the Internet Worm exploited several insecure points in Unix, including the finger command. A call to finger with an unexpected argument caused a buffer overflow, leading to the shutdown of most Internet traffic. A finger application written in a type-safe language would have prevented this exploit and limited the points the Internet Worm could attack. Such vulnerabilities are unacceptable in security-critical applications such as the secure coprocessors of the Marianas network, secStore key storage from Plan 9, and self-securing storage. This research focuses on safe language techniques for building OS components that cannot cause memory or IO errors. For example, an Ethernet device driver communicates with its device through IO operations. The device depends on FIFO queues to send and receive packets. A mistake in an IO operation can overflow or underflow the FIFO queues, cause memory errors, or cause configuration inconsistencies on the device. Data structures such as FIFO queues can be written safely in safe languages such as Java and ML, but these languages do not allow access to the low-level resources that an OS programmer needs. Therefore, safe OS components require a language that combines the safety of Java with the low-level control of C. My research formalizes the concurrency, locks, and system state needed by the safety-critical areas of a device driver. These formal concepts are built on top of an abstract syntax and rules that guarantee basic memory safety, using linear and singleton types to implement safe memory load and store operations. I proved that the improved abstract machine retains the property of soundness, which means that all well-typed programs will be able to execute until they reach an approved end-state. Together, the concurrency, locks, and state provide safety for IO operations and data structures. Using the OSKit from the University of Utah as a starting point, I developed a small operating system. I ported the 3c509 Ethernet device driver from C to Clay, a C-like type-safe language whose type system is powerful enough to enforce invariants about low-level devices and data structures. The resulting driver works safely in a multi-threaded environment. It is guaranteed to obtain locks before using shared data. It cannot cause a FIFO queue to overflow or underflow, and it will only call IO operations when invariants are satisfied. This type-safe driver demonstrates an actual working application of the theoretical components of my research. The abstract machine is powerful enough to encode a given OS specification and enforce a provably matching implementation. These results lead towards fundamentally secure computing environments.
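
    The FIFO invariants described above can be made concrete with a small sketch. The dissertation discharges them statically through Clay's linear and singleton types; Python cannot express that, so the checks below are runtime assertions only, and the queue depth and method names are hypothetical.

        # Bounded FIFO mirroring a device's hardware queue. The two checks
        # state the invariants the abstract: a send may not overflow the
        # queue, and a receive may not underflow it.

        class BoundedFifo:
            def __init__(self, depth=16):
                self.depth = depth       # hypothetical hardware queue depth
                self.items = []

            def send(self, packet):
                # Invariant a linear/singleton type system would discharge
                # at compile time: never enqueue into a full queue.
                if len(self.items) >= self.depth:
                    raise OverflowError("FIFO full: must wait for the device")
                self.items.append(packet)

            def receive(self):
                # Dual invariant: never dequeue from an empty queue.
                if not self.items:
                    raise LookupError("FIFO empty: no packet to receive")
                return self.items.pop(0)

    In the type-safe driver, violating either check would be rejected at compile time rather than surfacing as a runtime error.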