    Attack-Surface Metrics, OSSTMM and Common Criteria Based Approach to “Composable Security” in Complex Systems

    In recent studies on Complex Systems and Systems-of-Systems theory, considerable effort has been devoted to behavioral problems, i.e. the possibility of controlling a desired overall or end-to-end behavior by acting on the individual elements that constitute the system itself. This problem is particularly important in "SMART" environments, where the huge number of devices, their significant computational capabilities, and their tight interconnection produce a complex architecture whose behavior is difficult to predict (and control); furthermore, if the scenario is allowed to evolve dynamically through modification of both the topology and the subsystem composition, then the control problem becomes a real challenge. In this perspective, the purpose of this paper is to cope with a specific class of control problems in complex systems, the "composability of security functionalities", recently introduced by European funded research through the pSHIELD and nSHIELD projects (ARTEMIS-JU programme). In a nutshell, the objective of this research is to define a control framework that, given a target security level for a specific application scenario, is able to i) discover the system elements, ii) quantify the security level of each element as well as its contribution to the security of the overall system, and iii) compute the control action to be applied to such elements to reach the security target. The main innovations proposed by the authors are: i) the definition of a comprehensive methodology to quantify the security of a generic system independently of the technology and the environment, and ii) the integration of the derived metrics into a closed-loop scheme that allows real-time control of the system. The solution described in this work builds on the proof-of-concept performed in the early phase of the pSHIELD research and enriches it with an innovative, soundly founded metric, able to potentially cope with any kind of application scenario (railways, automotive, manufacturing, ...).
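    The discover/quantify/control loop described above lends itself to a compact illustration. The Python sketch below assumes a weighted-mean aggregation of element-level security metrics and a simple proportional control law; both are placeholder assumptions, not the pSHIELD/nSHIELD metric or controller, and all names are hypothetical.

```python
# A minimal sketch of the closed-loop "composable security" idea:
# discover elements, quantify each element's security, and apply a
# control action until the system-level target is met. The aggregation
# rule and control law are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Element:
    name: str
    security_level: float  # quantified security metric in [0, 1]
    weight: float          # contribution to overall system security

def system_security(elements):
    """Aggregate element metrics into a system-level score (assumed
    weighted mean; the paper derives its own composable metric)."""
    total = sum(e.weight for e in elements)
    return sum(e.security_level * e.weight for e in elements) / total

def control_step(elements, target, gain=0.5):
    """One closed-loop iteration: measure the system-level gap to the
    target and raise the elements currently below the target."""
    gap = target - system_security(elements)
    if gap > 0:
        for e in elements:
            if e.security_level < target:
                e.security_level = min(1.0, e.security_level + gain * gap)
    return elements

elements = [Element("sensor", 0.4, 1.0),
            Element("gateway", 0.7, 2.0),
            Element("backend", 0.9, 3.0)]
for _ in range(20):
    elements = control_step(elements, target=0.8)
print(round(system_security(elements), 3))  # approaches the 0.8 target
```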

    A normal accident theory-based complexity assessment methodology for safety-related embedded computer systems

    "Computer-related accidents have caused injuries and fatalities in numerous applications. Normal accident theory (NAT) explains that these accidents are inevitable because of system complexity. Complex systems, such as computer-based systems, are highly interconnected, highly interactive, and tightly coupled. We do not have a scientific methodology to identify and quantify these complexities; specifically, NAT has not been operationalized for computer-based systems. Our research addressed this by operationalizing NAT for the system requirements of safety-related computer systems. It was theorized that there are two types of system complexity: external and internal. External complexity was characterized by three variables: system predictability, observability, and usability - the dependent variables. Internal complexity was characterized by modeling system requirements with software cost reduction dependency graphs, then quantifying model attributes using 15 graph-theoretical metrics - the independent variables. Dependent variable data were obtained by having 32 subjects run simulations of our research test vehicle: the light control system (LCS). The LCS simulation tests used a crossover design. Subject perceptions of these simulations were obtained by using a questionnaire. Canonical correlation analysis and structure correlations were used to test hypotheses 1 to 3: the dependent variables predictability, observability, and usability do not correlate with the NAT complexity metrics. Five of fifteen metrics proposed for NAT complexity correlated with the dependent data. These five metrics had structure correlations exceeding 0.25, standard errors <0.10, and a 95% confidence interval. Therefore, the null hypotheses were rejected. A Wilcoxon signed ranks test was used to test hypotheses 4 to 6: increasing NAT complexity increases system predictability, observability, and usability. The results showed that the dependent variables decreased as complexity increased. Therefore, null hypotheses 4 to 6 were rejected. This work is a step forward to operationalize NAT for safety-related computer systems; however, limitations exist. Opportunities addressing these limitations and advancing NAT were identified. Lastly, the major contribution of this work is fundamental to scientific research: to gain knowledge through the discovery of relationship between the variables of interest. Specifically, NAT has been advanced by defining and quantifying complexity measures and showing their inverse relationship to system predictability, observability, and usability." - NIOSHTIC-2NIOSHTIC no. 20024286200

    Software architectural risk assessment

    Risk assessment is an essential part of the software development life cycle. Performing risk analysis early in the life cycle enhances resource allocation decisions, enables us to compare alternative software architectural designs, and helps identify high-risk components in the system. As a result, remedial actions can be taken to control and optimize the process and improve the quality of the software product. In this thesis we investigate two types of risk: reliability-based and performance-based. The reliability-based risk assessment takes into account both the probability of failures and their severity. For the reliability-based risk analysis we use UML models of the software system, available early in the life cycle, to derive the risk factors of scenarios and use cases. For each scenario we construct a Markov model to assess the scenario's risk factor and its distribution among the various severity classes. We then investigate both independent use cases and use cases with relationships while obtaining the system-level risk factors. (Abstract shortened by UMI.)
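    The reliability-based risk factor described above combines a failure probability, obtained from a scenario's Markov model, with a failure severity weight. A minimal sketch of that combination, using a small absorbing Markov chain with made-up transition probabilities and an assumed severity weight, might look like this:

```python
# Sketch: scenario risk from an absorbing Markov chain. All transition
# values and the severity weight are illustrative placeholders.

import numpy as np

# States: 0, 1 = components exercised by the scenario; 2 = correct
# completion (absorbing); 3 = failure (absorbing). Row i gives
# P(next state | current state i).
P = np.array([
    [0.0, 0.9, 0.0,  0.1],   # component A -> B, or A fails
    [0.0, 0.0, 0.95, 0.05],  # component B -> done, or B fails
    [0.0, 0.0, 1.0,  0.0],   # completion (absorbing)
    [0.0, 0.0, 0.0,  1.0],   # failure (absorbing)
])

Q = P[:2, :2]                      # transitions among transient states
R = P[:2, 2:]                      # transient -> absorbing transitions
N = np.linalg.inv(np.eye(2) - Q)   # fundamental matrix
absorb = N @ R                     # absorption probabilities

p_fail = absorb[0, 1]   # P(scenario started at A ends in failure)
severity = 0.75         # assumed weight for this failure severity class
print("scenario risk factor:", p_fail * severity)
```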

    Systematic evaluation of a virtual environment-based training system

    This paper explores the application of three constructs deemed essential to quantifying virtual environment (VE) efficacy: cognitive, skill-based, and affective learning outcomes. The authors discuss the implementation of these constructs in a user-centered evaluation of a VE training system. By transforming both the conceptual and operational cohorts for training evaluation, the authors illustrate the benefits of developing a Multi-dimensional User-centered Systematic Training Evaluation (MUSTe) method for quantifying VE efficacy. Importantly, MUSTe acknowledges the importance of combining holistic and analytical approaches in conducting systematic user-based evaluation. It also emphasizes that quantifying VE efficacy must reflect the perceptions and preferences of users rather than rest on single measures of task outcome. An empirical study that applied the MUSTe evaluation method to quantify the efficacy of a VE training system provided valuable evidence of the method's theoretical construct and content validity.

    Quantify resilience enhancement of UTS through exploiting connected community and internet of everything emerging technologies

    This work aims at investigating and quantifying the Urban Transport System (UTS) resilience enhancement enabled by the adoption of emerging technologies such as the Internet of Everything (IoE) and the new trend of the Connected Community (CC). A conceptual extension of the Functional Resonance Analysis Method (FRAM) and its formalization have been proposed and used to model UTS complexity. The aim is to identify the system functions and their interdependencies, with a particular focus on those that relate to and impact people and communities. Network analysis techniques have been applied to the FRAM model to identify and estimate the most critical community-related functions. The notion of Variability Rate (VR) has been defined as the amount of output variability generated by an upstream function that can be tolerated/absorbed by a downstream function without significantly increasing its own subsequent output variability. A fuzzy-based quantification of the VR, relying on expert judgment, has been developed for cases in which quantitative data are not available. The approach has been applied to a critical scenario (water bomb/flash flooding) in two cases: with and without CC and IoE implemented. The results show a remarkable VR enhancement when CC and IoE are deployed.
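    When hard data are missing, the abstract proposes a fuzzy, expert-judgment-based quantification of the Variability Rate. The sketch below shows one plausible realization using triangular membership functions and centroid defuzzification; the linguistic labels, membership shapes, and aggregation weights are assumptions, not the paper's calibrated model.

```python
# Sketch: turn an expert judgment over linguistic labels into a crisp
# VR value by centroid defuzzification.

import numpy as np

x = np.linspace(0.0, 1.0, 101)  # VR universe: fraction of upstream
                                # output variability that is absorbed

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

moderate = tri(x, 0.2, 0.5, 0.8)
high = tri(x, 0.5, 0.8, 1.0)

# Expert judgment "moderate, leaning high" -> weighted aggregation.
mu = np.maximum(0.7 * moderate, 0.3 * high)

vr = np.sum(x * mu) / np.sum(mu)  # centroid defuzzification
print("crisp VR estimate:", round(vr, 3))
```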

    Experiment-Based Validation and Uncertainty Quantification of Partitioned Models: Improving Predictive Capability of Multi-Scale Plasticity Models

    Partitioned analysis involves coupling constituent models that resolve their own scales or physics by exchanging inputs and outputs in an iterative manner. Through partitioning, simulations of complex physical systems are becoming ever more present in scientific modeling, making Verification and Validation of partitioned models, for the purpose of quantifying the predictive capability of their simulations, increasingly important. Parameterization of the constituent models as well as of the coupling interface requires a significant amount of information about the system, which is often imprecisely known. Consequently, uncertainties and biases in the constituent models and their interface raise concerns about the accumulation and compensation of these uncertainties and errors during the iterative procedures of partitioned analysis. Furthermore, partitioned analysis relies on the availability of reliable constituent models for each component of a system. When a constituent is unavailable, assumptions must be made to represent the coupling relationship, often through uncertain parameters that are then calibrated. This dissertation contributes to the field of computational modeling by presenting novel methods that take advantage of the transparency of partitioned analysis to compare constituent models with separate-effect experiments (measurements confined to the constituent domain) and coupled models with integral-effect experiments (measurements capturing the behavior of the full system). The methods developed herein focus on these two types of experiments, seeking to maximize the information that can be gained from each, thus advancing our capability to assess and improve the predictive capability of partitioned models through inverse analysis. The importance of this study stems from the need to make coupled models available for widespread use in predicting the behavior of complex systems with confidence, supporting decision-making in high-risk scenarios. The methods proposed herein address the challenges currently limiting the predictive capability of coupled models through a focused analysis with available experiments. Bias-corrected partitioned analysis takes advantage of separate-effect experiments to reduce parametric uncertainty and quantify systematic bias at the constituent level, followed by integration of the bias correction into the coupling framework, thus 'correcting' the constituent model during coupling iterations and preventing the accumulation of errors in the final predictions. Model bias is the result of assumptions made in the modeling process, often due to a lack of understanding of the underlying physics. Such is the case when a constituent model of a system component is entirely unavailable and cannot be developed for lack of knowledge. If such a constituent model were available and coupled to existing models of the other system components, bias in the coupled system would be reduced. This dissertation therefore proposes a novel statistical inference method for developing empirical constituent models, in which integral-effect experiments are used to infer relationships missing from system models. The proposed inverse analysis may thus be implemented to infer underlying coupled relationships, not only improving the predictive capability of models by producing empirical constituents that allow coupling, but also advancing our fundamental understanding of dependencies in the coupled system.
    Throughout this dissertation, the applicability and feasibility of the proposed methods are demonstrated with advanced multi-scale and multi-physics material models simulating complex material behaviors under extreme loading conditions, thus contributing advancements specifically to the material modeling community.
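    The bias-corrected coupling loop described above can be illustrated compactly. In the sketch below, two toy constituent models exchange scalar coupling variables by fixed-point iteration, and a bias estimate (imagined as obtained from separate-effect experiments) is applied at every exchange; all functions and values are illustrative stand-ins, not the dissertation's models.

```python
# Sketch: partitioned (coupled) analysis with constituent-level bias
# correction applied during the coupling iterations.

def constituent_a(y):
    """Lower-scale model: takes downstream output y, returns x."""
    return 0.5 * y + 1.0

def constituent_b(x):
    """Upper-scale model: takes x, returns y. Imagine it carries a
    systematic bias quantified against separate-effect experiments."""
    return 0.8 * x + 0.3

BIAS_B = -0.1  # assumed bias estimate for constituent B

def coupled_solve(tol=1e-8, max_iter=100):
    """Fixed-point iteration across the coupling interface, applying
    the bias correction at every exchange so errors in constituent B
    do not accumulate into the final coupled prediction."""
    y = 0.0
    for _ in range(max_iter):
        x = constituent_a(y)
        y_new = constituent_b(x) + BIAS_B  # corrected during coupling
        if abs(y_new - y) < tol:
            return x, y_new
        y = y_new
    raise RuntimeError("coupling iteration did not converge")

print(coupled_solve())
```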