
    Value-driven Security Agreements in Extended Enterprises

    Today organizations are highly interconnected in business networks called extended enterprises. This is mostly facilitated by outsourcing and by new economic models based on pay-as-you-go billing, all supported by IT-as-a-service. Although outsourcing has been around for some time, what is new is that organizations are increasingly outsourcing critical business processes, engaging in complex service bundles, and moving infrastructure and its management into the custody of third parties. Although this gives competitive advantage by reducing cost and increasing flexibility, it increases security risk by eroding the security perimeters that used to separate insiders with security privileges from outsiders without them. The classical distinction between insiders and outsiders is thus supplemented with a third category of threat agents, external insiders: agents who are not subject to the internal control of an organization but who have access privileges to its resources that normal outsiders do not. Protection against external insiders requires security agreements between the organizations in an extended enterprise, yet currently there is no practical method that allows security officers to specify such requirements. In this paper we provide a method for modeling an extended enterprise architecture, identifying external insider roles, and specifying security requirements that mitigate the threats posed by these roles. We illustrate the method with a realistic example.
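
    As a rough illustration of the three-way classification above, the following Python sketch encodes the two distinguishing attributes (subjection to internal control, access privileges) and derives the threat-agent category from them. The Agent model and attribute names are ours, invented for illustration, not the paper's actual metamodel.

    from dataclasses import dataclass

    @dataclass
    class Agent:
        name: str
        under_internal_control: bool  # subject to the organization's internal control?
        access_privileges: set[str]   # resources the agent can reach

    def classify(agent: Agent) -> str:
        """Classify a threat agent relative to one organization."""
        if agent.under_internal_control:
            return "insider"
        if agent.access_privileges:
            # Not under internal control, yet holds privileges
            # that ordinary outsiders lack.
            return "external insider"
        return "outsider"

    if __name__ == "__main__":
        partner_admin = Agent("partner sysadmin", False, {"billing-db"})
        print(classify(partner_admin))  # external insider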

    Measuring the Impact of Spectre and Meltdown

    The Spectre and Meltdown flaws in modern microprocessors represent a new class of attacks that have been difficult to mitigate. The mitigations that have been proposed have known performance impacts, and the reported magnitude of these impacts varies depending on the industry sector and expected workload characteristics. In this paper, we measure the performance impact on several workloads relevant to HPC systems. We show that the impact can be significant on both synthetic and realistic workloads. We also show that the performance penalties are difficult to avoid even in dedicated systems where security is a lesser concern.
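
    A minimal sketch of the kind of measurement described above, assuming a Linux host that reports mitigation state under /sys/devices/system/cpu/vulnerabilities. The syscall-heavy loop is a stand-in workload of our own, not one of the paper's HPC benchmarks; one would run it with mitigations enabled and disabled and compare the timings.

    import glob
    import os
    import time

    def mitigation_status() -> dict:
        """Read the kernel's reported CPU vulnerability mitigations (Linux)."""
        status = {}
        for path in glob.glob("/sys/devices/system/cpu/vulnerabilities/*"):
            with open(path) as f:
                status[os.path.basename(path)] = f.read().strip()
        return status

    def syscall_workload(n: int = 500_000) -> float:
        """stat() crosses into the kernel; KPTI-style Meltdown mitigations
        add per-crossing overhead, so this loop is sensitive to them."""
        start = time.perf_counter()
        for _ in range(n):
            os.stat(".")
        return time.perf_counter() - start

    if __name__ == "__main__":
        for name, state in sorted(mitigation_status().items()):
            print(f"{name}: {state}")
        print(f"elapsed: {syscall_workload():.3f} s")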

    How explicit are the barriers to failure in safety arguments?

    Safety cases embody arguments that demonstrate how the safety properties of a system are upheld. Such cases implicitly document the barriers that must exist between hazards and vulnerable components of a system. For safety certification, it is the analysis of these barriers that provides confidence in the safety of the system. Explicit representation of hazard barriers can provide additional insight for the design and evaluation of system safety: barriers can be identified during hazard analysis, allowing analysts to reflect on particular design choices. Barriers in a live system can then be mapped to abstract barrier representations, both to verify that the barriers exist and to provide a basis for quantitative comparison between predicted barrier behaviour and the performance of the actual barrier. This paper explores the first stage of this process: the binding between explicit mitigation arguments in hazard analysis and the barrier concept. Examples from the domains of computer-assisted detection in mammography and free route airspace feasibility are examined, and the implications for system certification are considered.
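
    The binding described above can be pictured as a mapping from hazards to the barriers named in their mitigation arguments. The sketch below uses an invented Hazard/Barrier structure (not the paper's notation) to flag hazards whose argued barriers have no verified counterpart in the live system.

    from dataclasses import dataclass, field

    @dataclass
    class Barrier:
        name: str
        implemented_in_system: bool  # does a verified live counterpart exist?

    @dataclass
    class Hazard:
        description: str
        barriers: list = field(default_factory=list)

    def unmitigated(hazards: list) -> list:
        """Hazards whose mitigation argument names no verified barrier."""
        return [h for h in hazards
                if not any(b.implemented_in_system for b in h.barriers)]

    if __name__ == "__main__":
        h = Hazard("missed lesion in mammogram",
                   [Barrier("radiologist double-reading", True)])
        print([x.description for x in unmitigated([h])])  # []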

    Hazard Contribution Modes of Machine Learning Components

    Amongst the essential steps to be taken towards developing and deploying safe systems with embedded learning-enabled components (LECs), i.e., software components that use machine learning (ML), are to analyze and understand the contribution of the constituent LECs to safety, and to assure that those contributions have been appropriately managed. This paper addresses both steps by, first, introducing the notion of hazard contribution modes (HCMs), a categorization of the ways in which the ML elements of LECs can contribute to hazardous system states; and, second, describing how argumentation patterns can capture the reasoning used to assure HCM mitigation. Our framework is generic in the sense that the categories of HCMs developed i) can admit different learning schemes, i.e., supervised, unsupervised, and reinforcement learning, and ii) are not dependent on the type of system in which the LECs are embedded, i.e., both cyber and cyber-physical systems. One of the goals of this work is to serve as a starting point for systematizing LEC analysis towards eventually automating it in a tool.
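
    The HCM idea can be sketched as tagging an LEC's outputs with the mode by which they might contribute to a hazardous system state. The category names, inputs, and threshold below are illustrative placeholders of our own, not the paper's actual taxonomy or assurance patterns.

    from enum import Enum, auto

    class HCM(Enum):
        MISCLASSIFICATION = auto()    # wrong label on a valid input
        OUT_OF_DISTRIBUTION = auto()  # input outside the training envelope
        UNSAFE_EXPLORATION = auto()   # RL action outside a designated safe set

    def contribution_mode(confidence: float, in_distribution: bool,
                          threshold: float = 0.9):
        """Flag how an LEC prediction might contribute to a hazard."""
        if not in_distribution:
            return HCM.OUT_OF_DISTRIBUTION
        if confidence < threshold:
            return HCM.MISCLASSIFICATION
        return None  # no flagged contribution

    if __name__ == "__main__":
        print(contribution_mode(confidence=0.42, in_distribution=True))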