Industrial control systems increasingly use standard communication protocols and are increasingly connected to public networks—creating substantial cybersecurity risks, especially when used in critical infrastructures such as electricity and water distribution systems. Methods of assessing risk in such systems have long recognized how the strategies of potential adversaries and risk managers interact to define the risk to which such systems are exposed. But it is also important to consider the adaptations of the systems' operators and other legitimate users to risk controls—adaptations that often appear to undermine these controls or shift the risk from one part of a system to another. Unlike the case with adversarial risk analysis, the adaptations of system users are typically orthogonal to the objective of minimizing or maximizing risk in the system. We argue that this need to analyze potential adaptations to risk controls applies to risk problems more generally, and we develop a framework for incorporating such adaptations into an assessment process. The method is based on the principle of affordances, and we show how this can be incorporated in an iterative procedure based on raising the minimum period of risk materialization above some threshold. We apply the method in a case study of a small European utility provider and discuss the observations arising from this case study.