2,190 research outputs found

    Putting Teeth into Open Architectures: Infrastructure for Reducing the Need for Retesting

    Get PDF
    Proceedings Paper (Acquisition Research Program). The Navy is currently implementing an open-architecture framework for developing joint interoperable systems that adopt and exploit open-system design principles and architectures. This raises the question of how to practically achieve dependability in software-intensive systems with many possible configurations when: 1) the actual configuration of the system is subject to frequent and possibly rapid change, and 2) the environment of a typical reusable subsystem is variable and unpredictable. Our preliminary investigations indicate that current methods for achieving dependability in open architectures are insufficient. Conventional testing methods are suited to stovepipe systems and depend strongly on the assumption that the environment of a typical system is fixed and known in detail to the quality-assurance team at test-and-evaluation time. This paper outlines new approaches to quality assurance and testing that are better suited to providing affordable reliability in open architectures, and explains some of the additional technical features that an Open Architecture must have in order to become a Dependable Open Architecture. Naval Postgraduate School Acquisition Research Program. Approved for public release; distribution is unlimited.
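    The paper's actual quality-assurance infrastructure is not described in this abstract. As a hedged, generic illustration of why retesting every configuration is unaffordable in an open architecture, and of one standard mitigation, the Python sketch below greedily selects a pairwise (2-way) covering set of configurations; all parameter names and values are invented for the example.

        # Hypothetical sketch (not from the paper): greedy pairwise sampling of a
        # configuration space, one way to keep retesting affordable when the
        # fielded configuration can change rapidly.
        from itertools import combinations, product

        def pairwise_configs(options):
            """Greedily pick configurations until every option pair is covered."""
            params = sorted(options)
            # Every pair of parameter values that some test configuration must hit.
            uncovered = {
                ((p1, v1), (p2, v2))
                for p1, p2 in combinations(params, 2)
                for v1 in options[p1] for v2 in options[p2]
            }
            chosen = []
            while uncovered:
                best, best_gain = None, -1
                for combo in product(*(options[p] for p in params)):
                    config = dict(zip(params, combo))
                    gain = sum(
                        1 for (a, b) in uncovered
                        if config[a[0]] == a[1] and config[b[0]] == b[1]
                    )
                    if gain > best_gain:
                        best, best_gain = config, gain
                chosen.append(best)
                uncovered = {
                    (a, b) for (a, b) in uncovered
                    if not (best[a[0]] == a[1] and best[b[0]] == b[1])
                }
            return chosen

        # Example: 12 possible configurations, but every value pair is exercised
        # by far fewer greedily chosen test configurations.
        configs = pairwise_configs({
            "transport": ["tcp", "udp"],
            "codec": ["json", "xml", "binary"],
            "radar": ["sim", "live"],
        })
        print(len(configs), "configurations cover all pairs, instead of 12")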

    Towards a Formalism-Based Toolkit for Automotive Applications

    Full text link
    The success of a number of projects has been shown to improve significantly with the use of a formalism. However, an open issue remains: to what extent can a development process based on a single formal notation and method succeed? The majority of approaches demonstrate little flexibility by attempting to use a single notation to express all of the different aspects encountered in software development, and they often leave a number of scalability issues open. We prefer a more eclectic approach: in our experience, a formalism-based toolkit with adequate notations for each development phase is a viable solution. Following this principle, any specific notation is used only where and when it is really suitable, and not necessarily over the entire software lifecycle. The approach explored in this article is perhaps slowly emerging in practice; we hope to accelerate its adoption. The major challenge, however, is still finding the best way to instantiate it for each specific application scenario. In this work, we describe a five-phase development process and method for automotive applications. The process recognizes the need for adequate (and tailored) notations (Problem Frames, Requirements State Machine Language, and Event-B) for each development phase, as well as direct traceability between the documents produced during each phase, allowing stepwise verification/validation of the system under development. The ideas for the formal development method have evolved over two significant case studies carried out in the DEPLOY project.
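    The abstract names its notations but shows no analysis. As one hedged illustration of the kind of per-phase check such a toolkit could automate, the sketch below runs an RSML-style completeness and consistency check over a small transition table; the states, events, and transitions are hypothetical, not taken from the DEPLOY case studies.

        # Hedged illustration: completeness (every state handles every event) and
        # consistency (no conflicting responses) checks in the spirit of RSML.
        from collections import defaultdict

        # (state, event) -> list of successor states; a list longer than one
        # signals nondeterminism, a missing key signals incompleteness.
        transitions = defaultdict(list)
        for state, event, nxt in [
            ("Idle",    "ignition_on",  "Running"),
            ("Running", "ignition_off", "Idle"),
            ("Running", "fault",        "Limp"),
            ("Running", "fault",        "SafeStop"),  # conflicting response
            ("Limp",    "ignition_off", "Idle"),
        ]:
            transitions[(state, event)].append(nxt)

        states = {"Idle", "Running", "Limp", "SafeStop"}
        events = {"ignition_on", "ignition_off", "fault"}

        # Real RSML tools allow explicit "no change" self-loops; here any
        # unhandled (state, event) pair is simply reported.
        missing = [(s, e) for s in states for e in events
                   if (s, e) not in transitions]
        conflicts = {k: v for k, v in transitions.items() if len(v) > 1}

        print("incomplete:", missing)
        print("nondeterministic:", conflicts)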

    Model-based dependability analysis: state-of-the-art, challenges and future outlook

    Get PDF
    Abstract: Over the past two decades, the study of model-based dependability analysis has gathered significant research interest. Different approaches have been developed to automate and address various limitations of classical dependability techniques, and to contend with the increasing complexity and challenges of modern safety-critical systems. Two leading paradigms have emerged: one constructs predictive system failure models compositionally from component failure models, using the topology of the system; the other utilizes design models, typically state automata, to explore system behaviour through fault injection. This paper reviews a number of prominent techniques under these two paradigms and provides insight into their working mechanisms, applicability, strengths, and challenges, as well as recent developments within these fields. We also discuss emerging trends in integrated approaches and advanced analysis capabilities, and outline the future outlook for model-based dependability analysis.
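    The survey covers many techniques; as a hedged, minimal sketch of the first (compositional) paradigm only, the fragment below evaluates a tiny fault tree over invented component failure probabilities, assuming independent basic events.

        # Hedged sketch: component failure probabilities composed through the
        # system topology into a top-event probability (independence assumed).

        def AND(*children):
            # Output fails only if all inputs fail.
            p = 1.0
            for c in children:
                p *= c
            return p

        def OR(*children):
            # Output fails if any input fails: 1 - prod(1 - p_i).
            q = 1.0
            for c in children:
                q *= (1.0 - c)
            return 1.0 - q

        # Illustrative per-mission failure probabilities (invented values).
        sensor_a, sensor_b, controller, actuator = 1e-3, 1e-3, 5e-4, 2e-4

        # Redundant sensors feed one controller and one actuator: losing both
        # sensors, or the controller, or the actuator, fails the function.
        top = OR(AND(sensor_a, sensor_b), controller, actuator)
        print(f"P(function loss) ~= {top:.2e}")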

    Using digital twins in the development of complex dependable real-time embedded systems

    Get PDF
    Modelling execution times in complex real-time embedded systems is vital for understanding and predicting tasks’ temporal behaviour and for improving system scheduling performance. Previous research relied mainly on worst-case execution time estimates derived from formal static analyses, which are often pessimistic; the resulting models are hard to maintain and even harder to validate. In this work, the novel use of Digital Twins provides opportunities to address this problem, and beyond it, for dependable real-time systems. We aim to establish and contribute to three questions: (i) how to easily model execution times at an adequate level of abstraction, and how to evaluate the quality of that model; (ii) how to identify errors in the models and how to evaluate their impact; and (iii) how to decide when and how to improve the models. In this paper, we propose a Digital Twin-based adaptation framework and demonstrate its use for modelling and refining execution time profiles. Key decisions concerning the quality of the model and its impact on performance are evaluated. Finally, some challenges and key research questions for the formal methods community are proposed.
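    The paper's concrete framework is not reproduced here. The sketch below is a hedged stand-in showing one way a digital twin could maintain an execution-time profile, judge its quality against fresh measurements, and decide when refinement is needed; the class name, thresholds, and simulated workload are all assumptions.

        # Hedged sketch: an online execution-time profile that flags itself for
        # re-learning when too many recent jobs overrun its predicted bound.
        import random
        import statistics

        class ExecTimeTwin:
            def __init__(self, window=200, error_budget=0.10):
                self.samples = []          # recent measured execution times (ms)
                self.window = window
                self.error_budget = error_budget

            def observe(self, measured_ms):
                self.samples.append(measured_ms)
                if len(self.samples) > self.window:
                    self.samples.pop(0)

            def predict(self, quantile=0.95):
                # Cheap empirical quantile as the high-percentile bound.
                if len(self.samples) < 2:
                    return None
                cuts = statistics.quantiles(self.samples, n=100)
                return cuts[int(quantile * 100) - 1]

            def needs_refinement(self):
                # Refine when the recent overrun rate exceeds the error budget.
                bound = self.predict()
                if bound is None or len(self.samples) < 50:
                    return False
                overruns = sum(1 for s in self.samples[-50:] if s > bound)
                return overruns / 50 > self.error_budget

        twin = ExecTimeTwin()
        for _ in range(300):
            twin.observe(random.gauss(4.0, 0.5))   # simulated job execution times
        print("p95 estimate:", round(twin.predict(), 2), "ms")
        print("refinement needed:", twin.needs_refinement())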

    A Verified Information-Flow Architecture

    Get PDF
    SAFE is a clean-slate design for a highly secure computer system, with pervasive mechanisms for tracking and limiting information flows. At the lowest level, the SAFE hardware supports fine-grained programmable tags, with efficient and flexible propagation and combination of tags as instructions are executed. The operating system virtualizes these generic facilities to present an information-flow abstract machine that allows user programs to label sensitive data with rich confidentiality policies. We present a formal, machine-checked model of the key hardware and software mechanisms used to dynamically control information flow in SAFE and an end-to-end proof of noninterference for this model. We use a refinement proof methodology to propagate the noninterference property of the abstract machine down to the concrete machine level. We use an intermediate layer in the refinement chain that factors out the details of the information-flow control policy and devise a code generator for compiling such information-flow policies into low-level monitor code. Finally, we verify the correctness of this generator using a dedicated Hoare logic that abstracts from low-level machine instructions into a reusable set of verified structured code generators.
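    SAFE's real tag mechanisms are hardware-level and far richer than any short example. As a hedged, toy illustration of the core idea only (labels joined as instructions execute, outputs gated by a monitor), the sketch below uses a two-point lattice; the channel names and the PIN scenario are invented.

        # Hedged toy model: values carry labels, labels join on computation, and
        # a monitor blocks secret-tainted data from reaching a public channel.
        from dataclasses import dataclass

        LEVELS = {"public": 0, "secret": 1}

        @dataclass(frozen=True)
        class Tagged:
            value: int
            label: str  # "public" or "secret"

        def join(a: str, b: str) -> str:
            # Least upper bound in the two-point lattice public <= secret.
            return a if LEVELS[a] >= LEVELS[b] else b

        def add(x: Tagged, y: Tagged) -> Tagged:
            # A result is at least as sensitive as either operand.
            return Tagged(x.value + y.value, join(x.label, y.label))

        def output(chan_label: str, v: Tagged):
            # Data may only flow to channels at or above its own label.
            if LEVELS[v.label] > LEVELS[chan_label]:
                raise RuntimeError(f"blocked: {v.label} data on {chan_label} channel")
            print(f"{chan_label} <- {v.value}")

        pin = Tagged(1234, "secret")
        one = Tagged(1, "public")
        output("secret", add(pin, one))       # allowed: secret data, secret channel
        try:
            output("public", add(pin, one))   # monitor blocks the leak
        except RuntimeError as err:
            print(err)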

    Size Matters: Microservices Research and Applications

    Full text link
    In this chapter we offer an overview of microservices, providing the introductory information that a reader should know before continuing with this book. We introduce the idea of microservices and discuss some of the current research challenges, along with real-life software applications where the microservice paradigm plays a key role. We have identified a set of areas where both researchers and developers can propose new ideas and technical solutions. Comment: arXiv admin note: text overlap with arXiv:1706.0735