15,025 research outputs found

    The systems engineering overview and process (from the Systems Engineering Management Guide, 1990)

    The past several decades have seen the rise of large, highly interactive systems at the forward edge of technology. As a result of this growth and the increased use of digital systems (computers and software), the concept of systems engineering has gained increasing attention. Some of this attention is no doubt due to large program failures which could possibly have been avoided, or at least mitigated, through the use of systems engineering principles. The complexity of modern-day weapon systems requires conscious application of systems engineering concepts to ensure producible, operable and supportable systems that satisfy mission requirements. Although many authors have traced the roots of systems engineering to earlier dates, the initial formalization of the systems engineering process for military development began to surface in the mid-1950s on the ballistic missile programs. These early ballistic missile development programs marked the emergence of engineering discipline 'specialists', a trend that has since continued to grow. Each of these specialties not only needs to take data from the overall development process, but also to supply data, in the form of requirements and analysis results, to the process. A number of technical instructions, military standards and specifications, and manuals were developed as a result of these development programs. In particular, MIL-STD-499 was issued in 1969 to assist both government and contractor personnel in defining the systems engineering effort in support of defense acquisition programs. This standard was updated to MIL-STD-499A in 1974, and formed the foundation for current application of systems engineering principles to military development programs.

    Automated Mapping of UML Activity Diagrams to Formal Specifications for Supporting Containment Checking

    Business analysts and domain experts often sketch the behaviors of a software system using high-level models that are technology- and platform-independent. Developers then refine and enrich these high-level models with technical details. As a consequence, the refined models can deviate from the original models over time, especially when the two kinds of models evolve independently. In this context, we focus on behavior models; that is, we aim to ensure that the refined, low-level behavior models conform to the corresponding high-level behavior models. Based on existing formal verification techniques, we propose containment checking as a means to assess whether the system's behaviors described by the low-level models satisfy what has been specified in the high-level counterparts. One of the major obstacles is how to lessen the burden of creating formal specifications of the behavior models as well as consistency constraints, which is a tedious and error-prone task when done manually. The approach presented in this paper aims to alleviate these challenges by considering the behavior models as verification inputs and devising automated mappings of behavior models onto formal properties and descriptions that can be directly used by model checkers. We discuss various challenges in our approach and show its applicability in illustrative scenarios. Comment: In Proceedings FESCA 2014, arXiv:1404.043
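
    The abstract does not reproduce the mapping itself; the following is a minimal sketch of the general idea, assuming invented atomic-proposition names (done_x, start_x) and an LTL-style output syntax, not the paper's actual translation rules.

    # Illustrative sketch: derive LTL-style containment properties from the
    # control-flow edges of a high-level activity diagram. Each edge (a, b)
    # is read as "whenever a completes, b eventually starts", a property a
    # model checker can verify against the refined low-level model.
    def edges_to_ltl(edges):
        """Map activity-diagram edges to LTL property strings."""
        props = []
        for source, target in edges:
            # G = globally, F = finally; the propositions done_x / start_x
            # are assumptions made for this sketch.
            props.append(f"G (done_{source} -> F start_{target})")
        return props

    # High-level model: receive an order, then validate it, then ship.
    high_level = [("receive_order", "validate"), ("validate", "ship")]
    for prop in edges_to_ltl(high_level):
        print(prop)
    # G (done_receive_order -> F start_validate)
    # G (done_validate -> F start_ship)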

    Possibilistic Information Flow Control for Workflow Management Systems

    In workflows and business processes, there are often security requirements on both the data, i.e. confidentiality and integrity, and the process, e.g. separation of duty. Graphical notations exist for specifying both workflows and associated security requirements. We present an approach for formally verifying that a workflow satisfies such security requirements. For this purpose, we define the semantics of a workflow as a state-event system and formalise security properties in a trace-based way, i.e. on an abstract level without depending on details of enforcement mechanisms such as Role-Based Access Control (RBAC). This formal model then allows us to build upon well-known verification techniques for information flow control. We describe how a compositional verification methodology for possibilistic information flow can be adapted to verify that a specification of a distributed workflow management system satisfies security requirements on both data and processes. Comment: In Proceedings GraMSec 2014, arXiv:1404.163
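
    To give a flavour of what a trace-based, possibilistic security property looks like, here is a minimal sketch assuming a system modelled simply as its set of possible event traces; the event names and the specific property (deletion of confidential events) are illustrative, not the paper's exact formalism.

    # A system satisfies this property if removing all high (confidential)
    # events from any possible trace yields a trace the system could also
    # have produced, so a low-level observer learns nothing about them.
    def purge(trace, high):
        """Remove all high (confidential) events from a trace."""
        return tuple(e for e in trace if e not in high)

    def deletion_secure(traces, high):
        """Every trace, purged of high events, must itself be possible."""
        return all(purge(t, high) in traces for t in traces)

    HIGH = {"read_salary"}
    ok = {("login", "read_salary", "logout"), ("login", "logout")}
    bad = {("login", "read_salary", "logout")}  # purged trace impossible
    print(deletion_secure(ok, HIGH))   # True: read_salary is unobservable
    print(deletion_secure(bad, HIGH))  # False: every run must contain it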

    RTL2RTL Formal Equivalence: Boosting the Design Confidence

    Increasing design complexity, driven by feature and performance requirements and Time to Market (TTM) constraints, forces faster design and validation closure. This in turn demands novel ways of identifying and debugging behavioral inconsistencies early in the design cycle. Adding incremental features and timing fixes may alter legacy design behavior and inadvertently introduce undesirable bugs. The most common method of verifying the correctness of a changed design is to run a dynamic regression test suite before and after the intended changes and compare the results, a method which is not exhaustive. Modern Formal Verification (FV) techniques involving new methods of proving Sequential Hardware Equivalence enable a new set of solutions for this problem, with a complete coverage guarantee. Formal Equivalence can be applied to prove functional integrity after design changes resulting from a wide variety of reasons, ranging from simple pipeline optimizations to complex logic redistributions. We present here our experience of successfully applying RTL to RTL (RTL2RTL) Formal Verification across a wide spectrum of problems on a Graphics design. RTL2RTL FV enabled checking the design sanity in a very short time, thus enabling faster and safer design churn. The techniques presented in this paper are applicable to any complex hardware design. Comment: In Proceedings FSFMA 2014, arXiv:1407.195
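
    To make the equivalence idea concrete, here is a minimal sketch of a bounded miter check, assuming two toy sequential designs modelled as Python step functions; production RTL2RTL flows use formal engines with unbounded proofs, so this only illustrates the bounded intuition.

    from itertools import product

    def original(state, x):
        """Reference design: 1-bit accumulator, output = running sum mod 2."""
        state = (state + x) % 2
        return state, state

    def optimized(state, x):
        """Rewritten design after a hypothetical timing fix: XOR form."""
        state = state ^ x
        return state, state

    def equivalent_up_to(k, m1, m2, inputs=(0, 1)):
        """Drive both designs with every input sequence of length k and
        compare outputs (a miter); exhaustive up to the bound k."""
        for seq in product(inputs, repeat=k):
            s1 = s2 = 0
            for x in seq:
                s1, y1 = m1(s1, x)
                s2, y2 = m2(s2, x)
                if y1 != y2:
                    return False, seq  # counterexample input sequence
        return True, None

    print(equivalent_up_to(8, original, optimized))  # (True, None)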

    Comparing BDD and SAT based techniques for model checking Chaum's Dining Cryptographers Protocol

    We analyse different versions of the Dining Cryptographers protocol by means of automatic verification via model checking. Specifically, we model the protocol in terms of a network of communicating automata and verify that the protocol meets the anonymity requirements specified. Two different model checking techniques (ordered binary decision diagrams and SAT-based bounded model checking) are evaluated and compared to verify the protocols.
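
    For readers unfamiliar with the protocol being model-checked, here is a minimal simulation sketch of one round of Chaum's Dining Cryptographers; the communicating-automata encoding used in the paper is not reproduced here.

    import random

    def dining_round(n, payer=None):
        """One protocol round: coin i is shared by cryptographers i and i+1;
        each announces the XOR of their two coins, flipped if they paid."""
        coins = [random.randint(0, 1) for _ in range(n)]
        announcements = []
        for i in range(n):
            left, right = coins[i - 1], coins[i]  # ring: index -1 wraps
            announcements.append(left ^ right ^ (1 if i == payer else 0))
        return announcements

    ann = dining_round(3, payer=1)
    # Each coin appears in exactly two announcements and cancels, so the
    # XOR of all announcements is 1 iff some cryptographer paid, while the
    # individual bits reveal nothing about which one (anonymity).
    parity = 0
    for b in ann:
        parity ^= b
    print(ann, "a cryptographer paid" if parity else "the NSA paid")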

    On Properties of Policy-Based Specifications

    The advent of large-scale, complex computing systems has dramatically increased the difficulty of securing access to systems' resources. To ensure confidentiality and integrity, the exploitation of access control mechanisms has thus become a crucial issue in the design of modern computing systems. Among the different access control approaches proposed in recent decades, the policy-based one captures, through the concept of attribute, all of a system's security-relevant information, while remaining sufficiently flexible and expressive to represent the other approaches. In this paper, we take a step further in understanding the effectiveness of policy-based specifications by studying how they can enforce traditional security properties. To support system designers in developing and maintaining policy-based specifications, we also formalise some relevant properties regarding the structure of policies. By means of a case study from the banking domain, we present real instances of such properties and outline an approach towards their automated verification. Comment: In Proceedings WWV 2015, arXiv:1508.0338
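
    As a rough illustration of attribute-based, policy-based access control (the paper uses a formal policy language; the attribute names and banking-flavoured rules below are invented for this sketch):

    def evaluate(policy, request):
        """First-applicable semantics: the first rule whose condition
        matches decides; a default-deny keeps the policy total."""
        for condition, decision in policy:
            if condition(request):
                return decision
        return "deny"

    policy = [
        (lambda r: r["role"] == "teller" and r["action"] == "read"
                   and r["resource"] == "account", "permit"),
        (lambda r: r["role"] == "manager", "permit"),
    ]

    print(evaluate(policy, {"role": "teller", "action": "read",
                            "resource": "account"}))  # permit
    print(evaluate(policy, {"role": "teller", "action": "write",
                            "resource": "account"}))  # deny

    The default-deny closing rule is one example of a structural property a designer might want to verify: it guarantees that every request receives a decision.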

    Automatic instantiation of abstract tests on specific configurations for large critical control systems

    Computer-based control systems have grown in size, complexity, distribution and criticality. In this paper, a methodology is presented for efficiently performing abstract testing of such large control systems: an abstract test is specified directly from system functional requirements and has to be instantiated into multiple test runs to cover a specific configuration, comprising any number of control entities (sensors, actuators and logic processes). Such a process is usually performed by hand for each installation of the control system, requiring considerable time and being an error-prone verification activity. To automate a safe passage from abstract tests, related to the so-called generic software application, to any specific installation, an algorithm is provided, starting from a reference architecture and a state-based behavioural model of the control software. The presented approach has been applied to a railway interlocking system, demonstrating its feasibility and effectiveness over several years of testing experience.
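
    A minimal sketch of the instantiation step, assuming an abstract test expressed as (role, step) pairs and a configuration listing the concrete entities per role; the entity names and template format are invented, and the paper's actual algorithm additionally exploits the reference architecture and behavioural model.

    from itertools import product

    def instantiate(abstract_test, config):
        """Bind each role in the abstract test to every matching entity,
        producing one concrete test run per combination."""
        roles = sorted({role for role, _ in abstract_test})
        combos = product(*(config[r] for r in roles))
        runs = []
        for combo in combos:
            env = dict(zip(roles, combo))
            runs.append([(env[role], step) for role, step in abstract_test])
        return runs

    # Abstract test from a functional requirement: occupying a track
    # circuit must drive the protecting signal to red.
    abstract_test = [("sensor", "set_occupied"), ("signal", "expect_red")]
    config = {"sensor": ["tc_01", "tc_02"], "signal": ["sig_A"]}
    for run in instantiate(abstract_test, config):
        print(run)
    # [('tc_01', 'set_occupied'), ('sig_A', 'expect_red')]
    # [('tc_02', 'set_occupied'), ('sig_A', 'expect_red')]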