
    Trust economics feasibility study

    We believe that enterprises and other organisations currently lack sophisticated methods and tools to determine if and how IT changes should be introduced in an organisation such that objective, measurable goals are met. This is especially true when dealing with security-related IT decisions. We report on a feasibility study, Trust Economics, conducted to demonstrate that such a methodology can be developed. Assuming a deep understanding of the IT involved, the main components of our trust economics approach are: (i) assess the economic or financial impact of IT security solutions; (ii) determine how humans interact with or respond to IT security solutions; (iii) based on the above, use probabilistic and stochastic modelling tools to analyse the consequences of IT security decisions. In the feasibility study we apply the trust economics methodology to address how enterprises should protect themselves against accidental or malicious misuse of USB memory sticks, an acute problem in many industries.
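
    The abstract does not give the study's actual models, but step (iii) lends itself to a small illustration. Below is a minimal Monte Carlo sketch, in Python, of the kind of stochastic cost analysis the approach describes, applied to the USB-stick scenario; all parameters (loss probability, breach cost, compliance rate) are invented for illustration and are not taken from the study.

    ```python
    import random

    # Illustrative parameters -- assumptions, not figures from the study.
    N_EMPLOYEES = 1000
    P_LOSS_PER_YEAR = 0.05         # chance an employee loses a USB stick in a year
    P_SENSITIVE = 0.30             # chance a lost stick holds sensitive data
    BREACH_COST = 250_000          # cost of a breach from an unencrypted stick
    ENCRYPTION_COST_PER_USER = 40  # licensing plus productivity overhead
    COMPLIANCE_RATE = 0.80         # human factor: fraction who actually encrypt

    def simulate_annual_loss(encryption_policy: bool, trials: int = 1_000) -> float:
        """Monte Carlo estimate of expected annual loss from USB misuse."""
        total = 0.0
        for _ in range(trials):
            # Fixed cost of the control, if the policy is in force.
            loss = N_EMPLOYEES * ENCRYPTION_COST_PER_USER if encryption_policy else 0.0
            for _ in range(N_EMPLOYEES):
                if random.random() < P_LOSS_PER_YEAR and random.random() < P_SENSITIVE:
                    # A breach occurs only if the lost stick is unencrypted.
                    protected = encryption_policy and random.random() < COMPLIANCE_RATE
                    if not protected:
                        loss += BREACH_COST
            total += loss
        return total / trials

    print(f"Expected loss, no policy:         {simulate_annual_loss(False):,.0f}")
    print(f"Expected loss, encryption policy: {simulate_annual_loss(True):,.0f}")
    ```

    Even this toy model captures the study's central point: the value of a security control depends jointly on its financial cost and on how people respond to it (here, the compliance rate).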

    Attack-Surface Metrics, OSSTMM and Common Criteria Based Approach to “Composable Security” in Complex Systems

    In recent studies on Complex Systems and Systems-of-Systems theory, a huge effort has been devoted to coping with behavioral problems, i.e. the possibility of controlling a desired overall or end-to-end behavior by acting on the individual elements that constitute the system itself. This problem is particularly important in “SMART” environments, where the huge number of devices, their significant computational capabilities and their tight interconnection produce a complex architecture for which it is difficult to predict (and control) a desired behavior; furthermore, if the scenario is allowed to evolve dynamically through the modification of both topology and subsystem composition, then the control problem becomes a real challenge. In this perspective, the purpose of this paper is to cope with a specific class of control problems in complex systems, the “composability of security functionalities”, recently introduced by European-funded research through the pSHIELD and nSHIELD projects (ARTEMIS-JU programme). In a nutshell, the objective of this research is to define a control framework that, given a target security level for a specific application scenario, is able to i) discover the system elements, ii) quantify the security level of each element as well as its contribution to the security of the overall system, and iii) compute the control action to be applied to such elements to reach the security target. The main innovations proposed by the authors are: i) the definition of a comprehensive methodology to quantify the security of a generic system independently of the technology and the environment, and ii) the integration of the derived metrics into a closed-loop scheme that allows real-time control of the system. The solution described in this work starts from the proof-of-concept performed in the early phase of the pSHIELD research and enriches it through an innovative metric with a sound foundation, able to potentially cope with any kind of application scenario (railways, automotive, manufacturing, ...).
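
    As a rough illustration of the closed-loop scheme sketched above (discover elements, quantify each element's security, apply a control action until the target is met), here is a minimal Python sketch. The composition rule (taking the weakest element's score) and all scores and gains are simplifying assumptions for illustration, not the OSSTMM/Common Criteria based metric developed in the paper.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Element:
        """A discovered subsystem with a normalised security score in [0, 1]."""
        name: str
        score: float
        max_score: float  # ceiling reachable through hardening (assumption)

    def system_security(elements: list[Element]) -> float:
        # Simplified composition rule (assumption): the overall level is the
        # weakest element's score, a common conservative choice.
        return min(e.score for e in elements)

    def control_step(elements: list[Element], gain: float = 0.1) -> None:
        """One closed-loop iteration: harden the currently weakest element."""
        weakest = min(elements, key=lambda e: e.score)
        weakest.score = min(weakest.max_score, weakest.score + gain)

    def run_loop(elements: list[Element], target: float, max_iters: int = 50) -> float:
        level = system_security(elements)
        for _ in range(max_iters):
            if level >= target:
                break
            control_step(elements)
            level = system_security(elements)
        return level

    # Hypothetical scenario: three nodes of a SMART-environment deployment.
    nodes = [Element("gateway", 0.6, 0.95),
             Element("sensor", 0.4, 0.70),
             Element("backend", 0.8, 0.99)]
    print(f"Reached security level: {run_loop(nodes, target=0.7):.2f}")
    ```

    The paper's contribution lies precisely in replacing the toy pieces here, the per-element score and the composition rule, with a technology-independent metric suitable for real-time control.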

    Evaluating Cascading Impact of Attacks on Resilience of Industrial Control Systems: A Design-Centric Modeling Approach

    A design-centric modeling approach was proposed to model the behaviour of the physical processes controlled by Industrial Control Systems (ICS) and to study the cascading impact of data-oriented attacks. A threat model was used as input to guide the construction of the CPS model, from which the control components within the adversary's intent and capabilities are extracted. The relevant control components are then modeled together with their control dependencies and operational design specifications. The approach was demonstrated and validated on a water treatment testbed. Attacks were simulated on the testbed model, and its resilience was evaluated using proposed metrics such as Impact Ratio and Time-to-Critical-State. From the analysis of the attacks, design strengths and weaknesses were identified and design improvements were recommended to increase the testbed's resilience to attacks.
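
    The following Python sketch shows how the two named metrics might be computed on a toy water-tank model. The definitions used here, Impact Ratio as the fraction of attacks that reach a critical state and Time-to-Critical-State as the first time the critical threshold is crossed, are plausible readings of the abstract; the paper's formal definitions, process model, and attack scenarios may differ.

    ```python
    def simulate_tank_under_attack(inflow: float, outflow: float,
                                   level0: float, critical: float,
                                   horizon: int = 3600) -> int | None:
        """Toy water-tank process: the attacker manipulates the outflow valve.
        Returns the first second the level reaches `critical`, else None."""
        level = level0
        for t in range(horizon):
            level += inflow - outflow  # attacker may have forced outflow low
            if level >= critical:
                return t
        return None

    # Hypothetical data-oriented attacks: (inflow, outflow under attack).
    attacks = [(2.0, 0.0), (2.0, 1.5), (2.0, 2.0)]
    times = [simulate_tank_under_attack(i, o, level0=500, critical=1000)
             for i, o in attacks]

    reached = [t for t in times if t is not None]
    impact_ratio = len(reached) / len(attacks)
    print(f"Impact Ratio: {impact_ratio:.2f}")
    print(f"Time-to-Critical-State per attack (s): {times}")
    ```

    Under these assumptions, an attack that the process design can absorb (the third scenario) never reaches the critical state, which is exactly the kind of design strength the analysis is meant to surface.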

    Why (and How) Networks Should Run Themselves

    The proliferation of networked devices, systems, and applications that we depend on every day makes managing networks more important than ever. The increasing security, availability, and performance demands of these applications suggest that these increasingly difficult network management problems be solved in real time, across a complex web of interacting protocols and systems. Alas, just as the importance of network management has increased, the network has grown so complex that it is seemingly unmanageable. In this new era, network management requires a fundamentally new approach. Instead of optimizations based on closed-form analysis of individual protocols, network operators need data-driven, machine-learning-based models of end-to-end and application performance based on high-level policy goals and a holistic view of the underlying components. Instead of anomaly detection algorithms that operate on offline analysis of network traces, operators need classification and detection algorithms that can make real-time, closed-loop decisions. Networks should learn to drive themselves. This paper explores this concept, discussing how we might attain this ambitious goal by more closely coupling measurement with real-time control and by relying on learning for inference and prediction about a networked application or system, as opposed to closed-form analysis of individual protocols.
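
    To make the measure-predict-act loop concrete, here is a minimal Python sketch of a closed-loop controller driven by a learned classifier. The features, training data, and control action are all hypothetical stand-ins for real telemetry and routing machinery, and the model is a stock scikit-learn decision tree rather than anything proposed in the paper.

    ```python
    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical offline-collected training data: per-interval features
    # [loss_rate, rtt_ms, link_utilisation], labelled 1 if application
    # performance was degraded during that interval.
    X_train = [[0.00, 20, 0.30], [0.01, 35, 0.60], [0.05, 90, 0.90],
               [0.00, 25, 0.40], [0.08, 120, 0.95], [0.02, 60, 0.85]]
    y_train = [0, 0, 1, 0, 1, 1]

    model = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

    def measure_interval() -> list[float]:
        """Stub: a real deployment would read live counters/telemetry here."""
        return [0.04, 95.0, 0.92]

    def reroute_traffic() -> None:
        """Stub control action, e.g. shift flows to a backup path."""
        print("control: rerouting traffic to backup path")

    # Closed control loop: measure -> predict -> act, once per interval.
    for _ in range(3):  # three intervals for the demo
        features = measure_interval()
        if model.predict([features])[0] == 1:
            reroute_traffic()
    ```

    The point of the sketch is the loop structure, not the model: measurement feeds a learned predictor whose output immediately drives a control decision, in contrast to offline trace analysis.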