Resilient Critical Infrastructure Management using Service Oriented Architecture
Abstract—The SERSCIS project aims to support the use of interconnected systems of services in Critical Infrastructure (CI) applications. The problem of system interconnectedness is aptly demonstrated by ‘Airport Collaborative Decision Making’ (ACDM). Failure or underperformance of any of the interlinked ICT systems may compromise the ability of airports to plan their use of resources to sustain high levels of air traffic, or to provide accurate aircraft movement forecasts to the wider European air traffic management systems. The proposed solution is to introduce further SERSCIS ICT components to manage dependability and interdependency. These use semantic models of the critical infrastructure, including its ICT services, to identify faults and potential risks and to increase human awareness of them. Semantics allows information and services to be described in a way that makes them understandable to computers. Thus when a failure (or a threat of failure) is detected, SERSCIS components can take action to manage the consequences, including changing the interdependency relationships between services. In some cases, the components will be able to take action autonomously — e.g. to manage ‘local’ issues such as the allocation of CPU time to maintain service performance, or the selection of services where there are redundant sources available. In other cases the components will alert human operators so they can take action instead. The goal of this paper is to describe a Service Oriented Architecture (SOA) that can be used to address the management of ICT components and interdependencies in critical infrastructure systems. Index Terms—resilience; QoS; SOA; critical infrastructure; SLA
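The autonomous recovery behaviour described above — falling back to a redundant source when a monitored service degrades, and escalating to a human operator only when no healthy alternative exists — can be sketched as follows. This is a minimal illustration under assumed names (`Service`, `select_service`, the `radar-feed-*` sources) and a simplified health model; it is not the SERSCIS implementation.

```python
from dataclasses import dataclass

@dataclass
class Service:
    """A redundant source with monitored health and QoS metrics."""
    name: str
    healthy: bool
    latency_ms: float

def select_service(candidates):
    """Pick the healthy redundant source with the best QoS (lowest latency).
    Returning None signals that autonomous recovery is impossible and a
    human operator must be alerted instead."""
    healthy = [s for s in candidates if s.healthy]
    if not healthy:
        return None  # no redundant source left; escalate to the operator
    return min(healthy, key=lambda s: s.latency_ms)

sources = [
    Service("radar-feed-A", healthy=False, latency_ms=20.0),
    Service("radar-feed-B", healthy=True, latency_ms=55.0),
    Service("radar-feed-C", healthy=True, latency_ms=35.0),
]
print(select_service(sources).name)  # radar-feed-C
```

In this sketch the failed-but-fast source A is skipped, and of the remaining redundant sources the lower-latency one is chosen; an empty healthy set is the trigger for alerting a human.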
Software reliability and dependability: a roadmap
The roadmap identifies five priorities: shifting the focus from software reliability to user-centred measures of dependability in complete software-based systems; influencing design practice to facilitate dependability assessment; propagating awareness of dependability issues and the use of existing, useful methods; injecting some rigour into the use of process-related evidence for dependability assessment; and better understanding issues of diversity and variation as drivers of dependability. Bev Littlewood is founder-Director of the Centre for Software Reliability and Professor of Software Engineering at City University, London. Prof. Littlewood has worked for many years on problems associated with the modelling and evaluation of the dependability of software-based systems; he has published many papers in international journals and conference proceedings and has edited several books. Much of this work has been carried out in collaborative projects, including the successful EC-funded projects SHIP, PDCS, PDCS2 and DeVa. He has been employed as a consultant.
Evaluating the resilience and security of boundaryless, evolving socio-technical Systems of Systems
Towards Identifying and closing Gaps in Assurance of autonomous Road vehicleS - a collection of Technical Notes Part 1
This report provides an introduction and overview of the Technical Topic Notes (TTNs) produced in the Towards Identifying and closing Gaps in Assurance of autonomous Road vehicleS (Tigars) project. These notes aim to support the development and evaluation of autonomous vehicles. Part 1 addresses: Assurance Overview and Issues; Resilience and Safety Requirements; Open Systems Perspective; and Formal Verification and Static Analysis of ML Systems. Part 2 addresses: Simulation and Dynamic Testing; Defence in Depth and Diversity; Security-Informed Safety Analysis; and Standards and Guidelines.
Use of computer-aided detection (CAD) tools in screening mammography: a multidisciplinary investigation
We summarise a set of analyses and studies conducted to assess the effects of the use of a computer-aided detection (CAD) tool in breast screening. We have used an interdisciplinary approach that combines: (a) statistical analyses inspired by reliability modelling in engineering; (b) experimental studies of decisions of mammography experts using the tool, interpreted in the light of human factors psychology; and (c) ethnographic observations of the use of the tool both in trial conditions and in everyday screening practice. Our investigations have shown patterns of human behaviour and effects of computer-based advice that would not have been revealed by a standard clinical trial approach. For example, we found that the negligible measured effect of CAD could be explained by a range of effects on experts' decisions, beneficial in some cases and detrimental in others. There is some evidence of the latter effects being due to the experts using the computer tool differently from the intentions of the developers. We integrate insights from the different pieces of evidence and highlight their implications for the design, evaluation and deployment of this sort of computer tool.
Effects of incorrect computer-aided detection (CAD) output on human decision-making in mammography
To investigate the effects of incorrect computer output on the reliability of the decisions of human users. This work followed an independent UK clinical trial that evaluated the impact of computer-aided detection (CAD) in breast screening. The aim was to use data from this trial to feed into probabilistic models (similar to those used in "reliability engineering") which would detect and assess possible ways of improving the human-CAD interaction. Some analyses required extra data; therefore, two supplementary studies were conducted. Study 1 was designed to elucidate the effects of computer failure on human performance. Study 2 was conducted to clarify unexpected findings from Study 1.
Human-machine diversity in the use of computerised advisory systems: a case study
Computer-based advisory systems form, together with their users, composite human-machine systems. Redundancy and diversity between the human and the machine are often important for the dependability of such systems. We discuss the modelling approach we applied in a case study. The goal is to assess failure probabilities for the analysis of X-ray films for detecting cancer, performed by a person assisted by a computer-based tool. Differently from most approaches to human reliability assessment, we focus on the effects of failure diversity — or correlation — between humans and machines. We illustrate some of the modelling and prediction problems, especially those caused by the presence of the human component. We show two alternative models, with their pros and cons, and illustrate, via numerical examples and analytically, some interesting and non-intuitive answers to questions about reliability assessment and design choices for human-computer systems.
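The core point of such models — that the system fails only when human and machine fail on the same case, and that correlation between their failures can swamp the benefit of redundancy — can be illustrated with a small numerical sketch. This is a generic two-component model with an assumed function name (`joint_failure`) and illustrative probabilities, not the specific models of the case study.

```python
import math

def joint_failure(p_human, p_machine, correlation=0.0):
    """Probability that the human and the machine both fail on the
    same case, treating each failure as a Bernoulli indicator.
    `correlation` is the Pearson correlation between the two
    indicators; correlation=0 recovers the common (and often
    optimistic) independence assumption."""
    cov = correlation * math.sqrt(
        p_human * (1 - p_human) * p_machine * (1 - p_machine)
    )
    return p_human * p_machine + cov

# Human misses 10% of cancers, machine misses 5% of cancers.
independent = joint_failure(0.10, 0.05)       # ≈ 0.005
correlated = joint_failure(0.10, 0.05, 0.3)   # ≈ 0.025, five times worse
print(independent, correlated)
```

Even modest positive correlation — plausible here, since hard-to-see cancers tend to defeat both the reader and the tool — raises the joint failure probability well above the naive product of the individual rates, which is why failure diversity matters more than either component's reliability alone.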