Patterns for building dependable systems with trusted bases
We propose a set of patterns for structuring a system to be dependable by design. The key idea is to localize the system's most critical requirements into small, reliable parts called trusted bases. We describe two instances of trusted bases: (1) the end-to-end check, which localizes the correctness checking of a computation to the end points of a system, and (2) the trusted kernel, which ensures the safety of a set of resources through a small core of the system. (Supported by: Northrop Grumman Cybersecurity Research Consortium; National Science Foundation (U.S.), Deep and Scalable Analysis of Software, Grant 0541183; National Science Foundation (U.S.), CRI: CRD - Development of Alloy Technology and Materials, Grant 0707612.)
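The end-to-end check pattern can be sketched in a few lines of Python. This is an illustrative example, not code from the paper: the large, untrusted computation (here a stand-in sorting routine) is wrapped by a small trusted checker at the endpoint, so only the checker needs to be correct for the result to be trusted.

```python
from collections import Counter

def untrusted_sort(xs):
    # Stand-in for a large, complex component we do not want to trust.
    return sorted(xs)

def checked_sort(xs):
    result = untrusted_sort(xs)
    # Trusted base: a few lines that check the full correctness property
    # (output is ordered and is a permutation of the input).
    assert all(a <= b for a, b in zip(result, result[1:])), "not sorted"
    assert Counter(result) == Counter(xs), "not a permutation of the input"
    return result
```

Note how the trusted base is independent of how `untrusted_sort` is implemented; any faulty result is caught at the endpoint.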
Property-Part Diagrams: A Dependence Notation for Software Systems
Some limitations of traditional dependence diagrams are explained, and a new notation that overcomes them is proposed. The key idea is to include in the diagram not only the parts of a system but also the properties that are assigned to them; dependences are shown as a relation not from parts to parts, but between properties and the parts (or other properties) that support them. The diagram can be used to evaluate modularization in a design, to assess how successfully critical properties are confined to a limited subset of parts, and to structure a dependability argument.
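The core relation of a property-part diagram can be sketched as a small data model. The example below is a hypothetical encoding, not the paper's notation: each property maps to the parts and other properties that support it, and a traversal reports which parts a critical property ultimately depends on (i.e., whether it is confined to a limited subset).

```python
# Illustrative dependence relation: property -> supporting parts/properties.
supports = {
    "votes_recorded_correctly": ["tabulator", "audit_trail_kept"],
    "audit_trail_kept": ["printer"],
}
parts = {"tabulator", "printer", "screen", "network"}

def supporting_parts(prop):
    """Parts that a property transitively depends on."""
    seen_parts, visited, todo = set(), set(), [prop]
    while todo:
        p = todo.pop()
        if p in visited:
            continue
        visited.add(p)
        for dep in supports.get(p, []):
            if dep in parts:
                seen_parts.add(dep)
            else:
                todo.append(dep)  # a supporting property; recurse into it
    return seen_parts
```

A small supporting set (here, two of four parts) is evidence that the critical property is well confined.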
A Framework for Dependability Analysis of Software Systems with Trusted Bases
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. By Eunsuk Kang. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 73-76).

A new approach is suggested for arguing that a software system is dependable. The key idea is to structure the system so that highly critical requirements are localized in small subsets of the system called trusted bases. In most systems, the satisfaction of a requirement relies on assumptions about the environment, in addition to the behavior of software. Therefore, establishing a trusted base for a critical property must be carried out as early as the requirements phase. This thesis proposes a new framework to support this activity. A notation is used to construct a dependability argument that explains how the system satisfies critical requirements. The framework provides a set of analysis techniques for checking the soundness of an argument, identifying the members of a trusted base, and illustrating the impact of failures of trusted components. The analysis offers suggestions for redesigning the system so that it becomes more reliable. The thesis demonstrates the effectiveness of this approach with a case study on electronic voting systems.
Task model design and analysis with Alloy
This paper describes a methodology for task model design and analysis using the Alloy Analyzer, a formal, declarative modeling tool. Our methodology leverages (1) a formalization of the HAMSTERS task modeling notation in Alloy and (2) a method for encoding a concrete task model and composing it with a model of the interactive system. The Analyzer then automatically verifies the overall model against desired properties, revealing counterexamples (if any) in terms of interaction scenarios between the operator and the system. In addition, we demonstrate how Alloy can be used to encode various types of operator errors (e.g., inserting or omitting an action) into the base HAMSTERS model and generate erroneous interaction scenarios. Our methodology is applied to a task model describing the interaction of an air traffic controller with a semi-autonomous Arrival MANager (AMAN) planning tool. (The work of the first two authors is financed by National Funds through the Portuguese funding agency, FCT - Fundação para a Ciência e a Tecnologia, within project LA/P/0063/2020. The last author was supported in part by National Science Foundation award CCF-2144860.)
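The two error mutations mentioned above, inserting and omitting an action, can be illustrated outside Alloy as simple operations on a nominal task sequence. This is a hedged sketch with made-up action names, not the paper's HAMSTERS encoding:

```python
# Operator-error mutations over a nominal task trace (illustrative only).
def omit(trace, i):
    """Omission error: drop the action at position i."""
    return trace[:i] + trace[i + 1:]

def insert(trace, i, action):
    """Insertion error: add a stray action at position i."""
    return trace[:i] + [action] + trace[i:]

nominal = ["select_flight", "set_landing_time", "confirm"]
erroneous = [omit(nominal, 2), insert(nominal, 1, "confirm")]
```

In the paper's setting, a model finder would enumerate such mutated traces automatically and check which of them violate the system's properties.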
A lightweight code analysis and its role in evaluation of a dependability case
A dependability case is an explicit, end-to-end argument, based on concrete evidence, that a system satisfies a critical property. We report on a case study constructing a dependability case for the control software of a medical device. The key novelty of our approach is a lightweight code analysis that generates a list of side conditions that correspond to assumptions to be discharged about the code and the environment in which it executes. This represents an unconventional trade-off between, at one extreme, more ambitious analyses that attempt to discharge all conditions automatically (but which cannot even in principle handle environmental assumptions), and at the other, flow- or context-insensitive analyses that require more user involvement. The results of the analysis suggested a variety of ways in which the dependability of the system might be improved. (Supported by: National Science Foundation (U.S.), Deep and Scalable Analysis of Software, Grant 0541183; National Science Foundation (U.S.), Division of Computer and Network Systems, CRI: CRD - Development of Alloy Tools, Technology and Materials, Grant 0707612.)
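The flavor of output such an analysis produces can be sketched in Python. This is a hypothetical toy, not the paper's analysis: it scans a code fragment for calls to external functions and emits each as a side condition to be discharged by a human, rather than attempting to prove anything automatically.

```python
# Toy "side condition" generator: every call to code outside the fragment
# becomes an assumption left for the dependability argument to discharge.
import ast

SOURCE = """
dose = read_sensor() * RATE
pump.deliver(dose)
"""

def side_conditions(source):
    conditions = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            name = ast.unparse(node.func)
            conditions.append(f"assume: `{name}` behaves as specified")
    return conditions
```

A real analysis would be flow- and context-sensitive and would also surface assumptions about the hardware environment; this sketch only shows the shape of the resulting obligation list.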
Integrating Graceful Degradation and Recovery through Requirement-driven Adaptation
Cyber-physical systems (CPS) are subject to environmental uncertainties such as adverse operating conditions, malicious attacks, and hardware degradation. These uncertainties may lead to failures that put the system in a sub-optimal or unsafe state. Systems that are resilient to such uncertainties rely on two types of operations: (1) graceful degradation, to ensure that the system maintains an acceptable level of safety during unexpected environmental conditions, and (2) recovery, to facilitate the resumption of normal system functions. Typically, mechanisms for degradation and recovery are developed independently from each other and later integrated into a system, requiring the designer to develop additional, ad hoc logic for activating and coordinating between the two operations. In this paper, we propose a self-adaptation approach for improving system resiliency through automated triggering and coordination of graceful degradation and recovery. The key idea behind our approach is to treat degradation and recovery as requirement-driven adaptation tasks: degradation can be thought of as temporarily weakening the original (i.e., ideal) system requirements to be achieved by the system, and recovery as strengthening the weakened requirements when the environment returns to within an expected operating boundary. Furthermore, by treating weakening and strengthening as dual operations, we argue that a single requirement-based adaptation method is sufficient to enable coordination between degradation and recovery. Given system requirements specified in signal temporal logic (STL), we propose a run-time adaptation framework that performs degradation and recovery in response to environmental changes. We describe a prototype implementation of our framework and demonstrate the feasibility of the proposed approach using a case study in unmanned underwater vehicles.

Comment: Pre-print for the SEAMS '24 conference (Software Engineering for Adaptive and Self-Managing Systems).
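The weakening/strengthening duality can be illustrated with a single STL-style predicate `depth <= limit` and its quantitative robustness `rho = limit - depth`. The numbers and the one-step adaptation rule below are made up for illustration; they are not the paper's framework, which handles full STL formulas at run time.

```python
def rho(limit, depth):
    """Robustness of the requirement depth <= limit (>= 0 means satisfied)."""
    return limit - depth

def adapt(limit, depth, ideal=50.0, margin=5.0):
    """One adaptation step: weaken the limit when the current requirement is
    violated; strengthen back toward the ideal requirement once the
    environment again permits it."""
    if rho(limit, depth) < 0:
        return depth + margin            # degradation: relax the requirement
    if rho(ideal, depth) >= 0:
        return min(limit, ideal)         # recovery: restore the ideal limit
    return limit                         # hold the weakened requirement
```

For example, starting from the ideal limit of 50 with an observed depth of 60, the requirement is weakened to 65; once depth returns to 40, the same rule strengthens it back to 50, so one mechanism covers both operations.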
Alloy*: A Higher-Order Relational Constraint Solver
The last decade has seen a dramatic growth in the use of constraint solvers as a computational mechanism, not only for analysis and synthesis of software, but also at runtime. Solvers are available for a variety of logics but are generally restricted to first-order formulas. Some tasks, however, most notably those involving synthesis, are inherently higher order; these are typically handled by embedding a first-order solver (such as a SAT or SMT solver) in a domain-specific algorithm. Using strategies similar to those used in such algorithms, we show how to extend a first-order solver (in this case Kodkod, a model finder for relational logic used as the engine of the Alloy Analyzer) so that it can handle quantifications over higher-order structures. The resulting solver is sufficiently general that it can be applied to a range of problems; it is higher order, so that it can be applied directly, without embedding in another algorithm; and it performs well enough to be competitive with specialized tools on standard benchmarks. Although the approach is demonstrated for a particular relational logic, the principles behind it could be applied to other first-order solvers. Just as the identification of first-order solvers as reusable backends advanced the performance of specialized tools and simplified their architecture, factoring out higher-order solvers may bring similar benefits to a new class of tools.
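The counterexample-guided strategy alluded to above can be sketched as a toy loop. This is an illustrative analogue, not Alloy*'s algorithm: to solve a higher-order query of the form "exists candidate c such that forall x, ok(c, x)", a first-order-style search proposes candidates, and a verifier either confirms one or returns a counterexample that constrains the next proposal.

```python
def cegis(candidates, domain, ok):
    """Counterexample-guided search for c with ok(c, x) for all x in domain."""
    counterexamples = []
    for c in candidates:
        # A candidate must at least handle all counterexamples seen so far.
        if not all(ok(c, x) for x in counterexamples):
            continue
        cex = next((x for x in domain if not ok(c, x)), None)
        if cex is None:
            return c                      # verified against the whole domain
        counterexamples.append(cex)       # refine: remember the failure
    return None

# Example: find a threshold t such that forall x in the domain, x <= t.
result = cegis(candidates=range(10), domain=[3, 7, 2], ok=lambda t, x: x <= t)
```

The accumulated counterexamples play the role of the incremental constraints that make each successive first-order query cheaper than checking the full universal quantifier directly.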