
    A Domain Specific Language Based Approach for Generating Deadlock-Free Parallel Load Scheduling Protocols for Distributed Systems

    In this dissertation, the concept of using a domain-specific language to develop error-free parallel asynchronous load scheduling protocols for distributed systems is studied. The motivation for this study is rooted in addressing the high cost of verifying parallel asynchronous load scheduling protocols. Asynchronous parallel applications are prone to subtle bugs such as deadlocks and race conditions due to the possibility of non-determinism. Because of this non-deterministic behavior, traditional testing methods are less effective at finding software faults. One approach that can eliminate these software bugs is to employ model checking techniques that verify that non-determinism will not cause software faults in parallel programs. Unfortunately, model checking requires the development of a verification model of the program in a separate verification language, which can be an error-prone procedure and may not properly represent the semantics of the original system. The model checking approach can provide true positive results only if the semantics of the implementation code and the verification model are represented under a single framework, such that the verification model closely reflects the implementation and the automation of the verification process is natural. In this dissertation, a domain-specific-language-based verification framework is developed to design parallel load scheduling protocols and automatically verify their behavioral properties through model checking. A specification language, LBDSL, is introduced that facilitates the development of parallel load scheduling protocols. The LBDSL verification framework uses model checking techniques to verify the asynchronous behavior of the protocol. It allows the same protocol specification to be used for both verification and code generation. Support for automatic verification during protocol development reduces post-development verification costs. The applicability of the LBDSL verification framework is illustrated through case studies on three different types of load scheduling protocols. The study shows that the LBDSL-based verification approach removes the need to debug for deadlocks and race bugs, which has the potential to significantly lower software development costs.
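
    The LBDSL toolchain itself is not shown in the abstract. As a rough illustration of the kind of check such a framework automates, the following sketch (plain Python, using an invented two-worker work-stealing model, not LBDSL) enumerates the reachable states of a tiny asynchronous protocol and flags states with no outgoing transitions in which a worker is still blocked, i.e., deadlocks.

```python
from collections import deque

# Hypothetical two-worker load-scheduling model (for illustration only):
# each worker processes local tasks or, when idle, sends a steal request
# to its peer over an asynchronous channel.
# State = (tasks_a, tasks_b, waiting_a, waiting_b, chan_ab, chan_ba),
# where chan_xy is a tuple of in-flight messages from worker x to worker y.

def successors(state):
    tasks = [state[0], state[1]]
    waiting = [state[2], state[3]]
    chans = {(0, 1): list(state[4]), (1, 0): list(state[5])}
    out = []

    def pack(t, w, c):
        return (t[0], t[1], w[0], w[1], tuple(c[(0, 1)]), tuple(c[(1, 0)]))

    def copy_chans():
        return {k: list(v) for k, v in chans.items()}

    for me in (0, 1):
        other = 1 - me
        # Process one local task.
        if tasks[me] > 0:
            t = tasks[:]; t[me] -= 1
            out.append(pack(t, waiting[:], copy_chans()))
        # Idle worker asks its peer for work, then blocks.
        if tasks[me] == 0 and not waiting[me]:
            w = waiting[:]; w[me] = True
            c = copy_chans(); c[(me, other)].append("REQ")
            out.append(pack(tasks[:], w, c))
        # Consume the oldest incoming message, if any.
        if chans[(other, me)]:
            c = copy_chans()
            msg = c[(other, me)].pop(0)
            t, w = tasks[:], waiting[:]
            if msg == "REQ":
                if t[me] > 1:                  # share a task only if one can be spared
                    t[me] -= 1
                    c[(me, other)].append("TASK")
                # else: silently drop the request -- the flaw that causes deadlock
            elif msg == "TASK":
                t[me] += 1
                w[me] = False
            out.append(pack(t, w, c))
    return out

def find_deadlocks(initial):
    """Breadth-first exploration of the reachable state space."""
    seen, frontier, deadlocks = {initial}, deque([initial]), []
    while frontier:
        s = frontier.popleft()
        succ = successors(s)
        # No successors while some worker is still waiting = deadlock.
        if not succ and (s[2] or s[3]):
            deadlocks.append(s)
        for n in succ:
            if n not in seen:
                seen.add(n)
                frontier.append(n)
    return seen, deadlocks

if __name__ == "__main__":
    states, deadlocks = find_deadlocks((2, 0, False, False, (), ()))
    print(f"explored {len(states)} states, found {len(deadlocks)} deadlock(s)")
    for d in deadlocks:
        print("deadlock state:", d)
```

    A specification-level framework in the spirit of the abstract would generate both this exploration and the executable protocol code from the same source, so that the deadlock found here would be caught at design time rather than in testing.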

    Security and Performance Verification of Distributed Authentication and Authorization Tools

    Parallel distributed systems are widely used for dealing with massive data sets and high-performance computing. Securing parallel distributed systems is problematic: centralized security tools are likely to cause bottlenecks and introduce a single point of failure. In this paper, we introduce existing distributed authentication and authorization tools. We evaluate the quality of these security tools by verifying their security and performance. For the verification we use process calculi and mathematical modeling languages: Casper, Communicating Sequential Processes (CSP), and Failures-Divergences Refinement (FDR) are used to test for security vulnerabilities, while Petri nets and Karp-Miller trees are used to find performance issues of the distributed authentication and authorization methods. Kerberos, PERMIS, and Shibboleth are evaluated: Kerberos is a ticket-based distributed authentication service, PERMIS is a role- and attribute-based distributed authorization service, and Shibboleth is an integration solution for federated single sign-on authentication. We find no critical security or performance issues.
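
    The abstract names Karp-Miller trees as the performance-analysis tool. As a rough, generic illustration (not the authors' models of Kerberos, PERMIS, or Shibboleth), the sketch below builds a Karp-Miller coverability tree for a toy Petri net of a request queue and reports places whose token count can grow without bound, which is one way such an analysis surfaces queue build-up and similar performance issues.

```python
from math import inf

# Toy Petri net (hypothetical, for illustration only): a client place feeds an
# authentication service; "request" puts a token into a queue place, "serve"
# consumes it.  Since "request" can fire without bound, the Karp-Miller tree
# marks the growing places with an omega token count.

PLACES = ("idle", "queue", "served")
# transition name -> (consume vector, produce vector), indexed like PLACES
TRANSITIONS = {
    "request": ((1, 0, 0), (1, 1, 0)),   # client stays idle, enqueues a request
    "serve":   ((0, 1, 0), (0, 0, 1)),   # server dequeues and serves it
}
OMEGA = inf  # stands for "unboundedly many tokens"

def enabled(marking, consume):
    return all(m >= c for m, c in zip(marking, consume))

def fire(marking, consume, produce):
    return tuple(OMEGA if m == OMEGA else m - c + p
                 for m, c, p in zip(marking, consume, produce))

def accelerate(marking, ancestors):
    """Karp-Miller acceleration: if an ancestor marking is strictly covered,
    the strictly growing places are set to OMEGA."""
    m = list(marking)
    for anc in ancestors:
        if all(a <= b for a, b in zip(anc, m)) and anc != tuple(m):
            m = [OMEGA if a < b else b for a, b in zip(anc, m)]
    return tuple(m)

def karp_miller(initial):
    tree = []                       # all node markings of the coverability tree
    stack = [(initial, ())]         # (marking, tuple of ancestor markings)
    while stack:
        marking, ancestors = stack.pop()
        tree.append(marking)
        if marking in ancestors:    # repeated marking: this branch is a leaf
            continue
        for consume, produce in TRANSITIONS.values():
            if enabled(marking, consume):
                succ = accelerate(fire(marking, consume, produce),
                                  ancestors + (marking,))
                stack.append((succ, ancestors + (marking,)))
    return tree

if __name__ == "__main__":
    tree = karp_miller((1, 0, 0))
    unbounded = {PLACES[i] for m in tree for i, v in enumerate(m) if v == OMEGA}
    print(f"{len(tree)} coverability-tree nodes; unbounded places: {unbounded or 'none'}")
```

    In a real evaluation, the net would model the message flow of the protocol under test, and a bounded coverability tree (no omega markings) would indicate that queues and token pools stay finite under load.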

    Information Technology of Software Architecture Structural Synthesis of Information System

    An information technology for the structural synthesis of information system software architecture is proposed. It targets evolutionary models of the software lifecycle and provides the configuration and formation of software that controls the execution and recovery of computing processes on parallel and distributed computing resources. The technology is applied within software requirements analysis, architecture design, detailed design, and software integration. A method of combining vertices in a multilevel graph model of software architecture and an automata-based method of checking performance constraints on the software are built on an advanced graph model of software architecture. These methods are proposed within the framework of the information technology and allow forming a rational program structure as well as checking compliance with the functional and non-functional requirements of the end user. The essence of the proposed information technology is to map the customer's requirements onto the current version of the graph model of the software structure and to provide reconfiguration of the system modules. This process is based on the analysis and processing of the graph model and the software module specifications, formation of the software structure in accordance with the graph model, software verification, and compilation.
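
    The paper's own graph formalism is not reproduced in the abstract. As a loose illustration of the idea of combining vertices in a multilevel architecture graph and then checking a non-functional requirement against it, the following sketch (plain Python, with invented module names, latency figures, and budget) merges a group of module vertices into one higher-level component vertex and verifies a hypothetical end-to-end latency constraint.

```python
# Hypothetical multilevel architecture graph: vertices are software modules
# annotated with a latency estimate; edges are call dependencies.
# Combining vertices collapses a group of modules into one component
# (one abstraction level up); the check walks call chains against a budget.

graph = {
    "ui":      {"latency_ms": 5,  "calls": ["auth", "catalog"]},
    "auth":    {"latency_ms": 20, "calls": ["db"]},
    "catalog": {"latency_ms": 15, "calls": ["db"]},
    "db":      {"latency_ms": 30, "calls": []},
}

def combine(graph, group, new_name):
    """Merge the vertices in `group` into a single vertex; internal edges vanish,
    external edges are redirected to the new vertex."""
    merged = {"latency_ms": sum(graph[v]["latency_ms"] for v in group),
              "calls": []}
    for v in group:
        merged["calls"] += [c for c in graph[v]["calls"] if c not in group]
    new_graph = {new_name: merged}
    for v, data in graph.items():
        if v in group:
            continue
        calls = [new_name if c in group else c for c in data["calls"]]
        new_graph[v] = {"latency_ms": data["latency_ms"],
                        "calls": list(dict.fromkeys(calls))}  # drop duplicate edges
    return new_graph

def worst_path_latency(graph, start):
    """Longest call-chain latency from `start` (the graph is assumed acyclic)."""
    data = graph[start]
    tails = [worst_path_latency(graph, c) for c in data["calls"]]
    return data["latency_ms"] + (max(tails) if tails else 0)

if __name__ == "__main__":
    level2 = combine(graph, group={"auth", "db"}, new_name="account_service")
    budget_ms = 80   # invented non-functional requirement
    worst = worst_path_latency(level2, "ui")
    verdict = "meets" if worst <= budget_ms else "violates"
    print(f"worst-case latency from 'ui': {worst} ms -> {verdict} the {budget_ms} ms budget")
```

    In the technology described by the abstract, such graph rewrites and checks would be driven by the requirement specifications themselves, and a violated constraint would trigger a reconfiguration of the module structure rather than a manual redesign.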

    Runtime Verification Based on Executable Models: On-the-Fly Matching of Timed Traces

    Runtime verification is checking whether a system execution satisfies or violates a given correctness property. A procedure that automatically, and typically on the fly, verifies conformance of the system's behavior to the specified property is called a monitor. Nowadays, a variety of formalisms are used to express properties of the observed behavior of computer systems, and many methods have been proposed to construct monitors. However, it is a frequent situation that advanced formalisms and methods are not needed, because an executable model of the system is available. The original purpose and structure of the model are unimportant; what is required is that the system and its model have similar sets of interfaces. In this case, monitoring is carried out as follows. Two "black boxes", the system and its reference model, are executed in parallel and stimulated with the same input sequences; the monitor dynamically captures their output traces and tries to match them. The main problem is that a model is usually more abstract than the real system, both in terms of functionality and timing. Therefore, trace-to-trace matching is not straightforward and has to allow the system to produce events in a different order or even miss some of them. The paper studies on-the-fly conformance relations for timed systems (i.e., systems whose inputs and outputs are distributed along the time axis). It also suggests a practice-oriented methodology for creating and configuring monitors for timed systems based on executable models. The methodology has been successfully applied to a number of industrial projects of simulation-based hardware verification. (Comment: In Proceedings MBT 2013, arXiv:1303.037)
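
    The paper's precise conformance relations are not reproduced here. The sketch below is a simplified, hypothetical monitor in the spirit the abstract describes: it matches timestamped (label, time) events observed from the implementation against events predicted by a more abstract reference model, tolerating bounded reordering and a timing slack, and flagging implementation events the model never predicted. The slack and window values are invented.

```python
from collections import deque

# Simplified on-the-fly monitor (illustrative only, not the paper's algorithm):
# the reference model's expected outputs and the implementation's observed
# outputs arrive as (label, timestamp) events.  An observed event is accepted
# if a pending expected event with the same label lies within TIME_SLACK of it;
# expected events older than the reordering window are dropped silently,
# reflecting the fact that the model is more abstract than the system.

TIME_SLACK = 2.0        # max timestamp difference for a match (seconds)
REORDER_WINDOW = 5.0    # how long an unmatched expected event stays pending

class Monitor:
    def __init__(self):
        self.pending = deque()   # expected (label, time) not yet matched
        self.failures = []       # observed events with no matching prediction

    def expect(self, label, t):
        """The reference model predicted an output event."""
        self.pending.append((label, t))

    def observe(self, label, t):
        """The implementation produced an output event; try to match it."""
        # Age out expected events that the system is allowed to have skipped.
        while self.pending and t - self.pending[0][1] > REORDER_WINDOW:
            self.pending.popleft()
        for i, (exp_label, exp_t) in enumerate(self.pending):
            if exp_label == label and abs(exp_t - t) <= TIME_SLACK:
                del self.pending[i]          # matched: consume the prediction
                return True
        self.failures.append((label, t))     # nothing predicted this event
        return False

if __name__ == "__main__":
    m = Monitor()
    # Interleaved run: model predictions followed by system observations.
    m.expect("irq_ack", 1.0)
    m.expect("dma_done", 1.5)
    m.observe("dma_done", 2.0)    # out of order w.r.t. predictions, still matches
    m.observe("irq_ack", 2.5)     # within the 2.0 s slack of the prediction
    m.observe("spurious", 3.0)    # never predicted -> conformance failure
    print("failures:", m.failures)
```

    In practice the tolerance parameters encode how much more abstract the model is allowed to be than the design under test; tightening them moves the monitor from a loose functional check toward strict timed conformance.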

    Parallel symbolic state-space exploration is difficult, but what is the alternative?

    State-space exploration is an essential step in many modeling and analysis problems. Its goal is to find the states reachable from the initial state of a discrete-state model. The state space can then be used to answer important questions, e.g., "Is there a dead state?" and "Can N become negative?", or as a starting point for more sophisticated investigations expressed in temporal logic. Unfortunately, the state space is often so large that ordinary explicit data structures and sequential algorithms cannot cope, prompting the exploration of (1) parallel approaches using multiple processors, from simple workstation networks to shared-memory supercomputers, to satisfy large memory and runtime requirements, and (2) symbolic approaches using decision diagrams to encode the large structured sets and relations manipulated during state-space generation. Both approaches have merits and limitations. Parallel explicit state-space generation is challenging, but almost linear speedup can be achieved; however, the analysis is ultimately limited by the memory and processors available. Symbolic methods are a heuristic that can efficiently encode many, but not all, functions over a structured and exponentially large domain; here the pitfalls are subtler: their performance varies widely depending on the class of decision diagram chosen, the state variable order, and obscure algorithmic parameters. As symbolic approaches are often much more efficient than explicit ones for many practical models, we argue for the need to parallelize symbolic state-space generation algorithms, so that we can realize the advantages of both approaches. This is a challenging endeavor, as the most efficient symbolic algorithm, Saturation, is inherently sequential. We conclude by discussing challenges, efforts, and promising directions toward this goal.
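
    Decision-diagram libraries and the Saturation algorithm are far beyond a short sketch. As a minimal illustration of the set-at-a-time style that symbolic exploration builds on, the following uses plain Python sets in place of decision diagrams to compute the reachable states of a tiny, invented producer/consumer model and answers the two example queries quoted in the abstract.

```python
# Set-at-a-time reachability with plain Python sets standing in for decision
# diagrams -- a toy stand-in, not Saturation or a real BDD/MDD library.
# The model is a hypothetical bounded buffer counter N plus a consumer flag;
# the two queries mirror the abstract's examples.

CAPACITY = 3

def image(states):
    """One step of the transition relation applied to a whole set of states."""
    nxt = set()
    for (n, consumer_ready) in states:
        if n < CAPACITY:                       # produce: put one item in the buffer
            nxt.add((n + 1, consumer_ready))
        if not consumer_ready:                 # consumer wakes up
            nxt.add((n, True))
        if consumer_ready and n > 0:           # consume: take one item, sleep again
            nxt.add((n - 1, False))
    return nxt

def reachable(initial):
    """Breadth-first fixed point: R := R ∪ image(R) until no new states appear."""
    reached, frontier = set(initial), set(initial)
    while frontier:
        frontier = image(frontier) - reached
        reached |= frontier
    return reached

if __name__ == "__main__":
    states = reachable({(0, False)})
    dead = [s for s in states if not image({s})]
    n_negative = any(n < 0 for n, _ in states)
    print(f"{len(states)} reachable states")
    print("Is there a dead state?", "yes" if dead else "no")
    print("Can N become negative?", "yes" if n_negative else "no")
```

    The fixed-point loop is the same shape whether the sets are stored explicitly, as here, or encoded as decision diagrams; the difficulty the abstract points to is that the efficient symbolic versions of `image` and the fixed-point iteration (notably Saturation) are hard to parallelize.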