Autonomous platform for life-critical decision support in the ICU
Part 2: PhD Workshop: Autonomic Network and Service Management
The Intensive Care Unit (ICU) is a complex, data-intensive and critical environment in which the adoption of Information Technology is growing. As physicians become more dependent on computing technology to support decisions and to raise real-time alerts and notifications of patient-specific conditions, this software has strong dependability requirements. The dependability challenges are expressed in terms of availability, reliability, performance, usability and maintainability of the system. Our research focuses on the design and development of a generic autonomous ICU service platform. COSARA is a computer-based platform for infection surveillance and antibiotic management in the ICU. During its design, development and evaluation, we identified both technological and human factors that affect robustness. We present the identified research questions that will be addressed in detail during the PhD research.
Identifying dependability requirements for space software systems
Computer systems are increasingly used in space, whether in launch vehicles, satellites, ground support or payload systems. The software applications used in these systems have become more complex, mainly due to the large number of features they must provide, which increases the probability of hazards related to software faults. It is therefore fundamental that the requirements specification activity play a decisive role in the effort to obtain systems with high quality and safety standards. In critical systems such as the embedded software of the Brazilian Satellite Launcher, ambiguity, incompleteness and poorly written requirements can cause serious accidents with economic, material and human losses. One way to assure quality together with safety, reliability and other dependability attributes is to apply safety analysis techniques during the initial phases of the project, in order to identify the dependability requirements best suited to minimizing fault or failure occurrences during the subsequent phases. This paper presents a structured software dependability requirements analysis process that uses system software requirement specifications and traditional safety analysis techniques. The main goal of the process is to help identify a set of essential software dependability requirements that can be added to the software requirements previously specified for the system. The final results are more complete, consistent and reliable specifications.
Evaluation of software dependability
It has been said that the term software engineering is an aspiration not a description. We would like to be able to claim that we engineer software, in the same sense that we engineer an aero-engine, but most of us would agree that this is not currently an accurate description of our activities. My suspicion is that it never will be.
From the point of view of this essay – i.e. dependability evaluation – a major difference between software and other engineering artefacts is that the former is pure design. Its unreliability is always the result of design faults, which in turn arise as a result of human intellectual failures. The unreliability of hardware systems, on the other hand, has tended until recently to be dominated by random physical failures of components – the consequences of the ‘perversity of nature’. Reliability theories have been developed over the years which have successfully allowed systems to be built to high reliability requirements, and the final system reliability to be evaluated accurately. Even for pure hardware systems, without software, however, the very success of these theories has more recently highlighted the importance of design faults in determining the overall reliability of the final product. The conventional hardware reliability theory does not address this problem at all.
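The conventional hardware theory referred to above can be sketched concretely. A minimal illustration, with hypothetical component failure rates, of the classic series-system model in which components fail randomly at constant rates and system reliability decays exponentially:

```python
import math

# Conventional hardware reliability theory: components suffer random
# physical failures at constant rates, so a series system's failure rate
# is the sum of the component rates and its reliability over a mission
# time decays exponentially. The rates below are illustrative values
# (failures per hour), not data from any real system.
component_rates = [2e-6, 5e-7, 1e-6]

# Series system: the failure of any one component fails the system.
system_rate = sum(component_rates)

def reliability(t_hours: float) -> float:
    """Probability the series system survives t_hours without failure."""
    return math.exp(-system_rate * t_hours)

print(f"System failure rate: {system_rate:.1e} per hour")
print(f"Reliability over 1000 h: {reliability(1000.0):.4f}")
```

Note that this machinery models only random physical failures; it says nothing about design faults, which is exactly the gap the essay identifies.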
In the case of software, there is no physical source of failures, so none of the reliability theory developed for hardware is relevant. We need new theories that will allow us to achieve required dependability levels, and to evaluate the actual dependability that has been achieved, when the sources of the faults that ultimately result in failure are human intellectual failures.
Software reliability and dependability: a roadmap
- Shifting the focus from software reliability to user-centred measures of dependability in complete software-based systems.
- Influencing design practice to facilitate dependability assessment.
- Propagating awareness of dependability issues and the use of existing, useful methods.
- Injecting some rigour into the use of process-related evidence for dependability assessment.
- Better understanding issues of diversity and variation as drivers of dependability.
Bev Littlewood is founder-Director of the Centre for Software Reliability and Professor of Software Engineering at City University, London. Prof. Littlewood has worked for many years on problems associated with the modelling and evaluation of the dependability of software-based systems; he has published many papers in international journals and conference proceedings and has edited several books. Much of this work has been carried out in collaborative projects, including the successful EC-funded projects SHIP, PDCS, PDCS2 and DeVa. He has been employed as a consultant t
Human-machine diversity in the use of computerised advisory systems: a case study
Computer-based advisory systems form, together with their users, composite human-machine systems. Redundancy and diversity between the human and the machine are often important for the dependability of such systems. We discuss the modelling approach we applied in a case study. The goal is to assess failure probabilities for the analysis of X-ray films for detecting cancer, performed by a person assisted by a computer-based tool. Differently from most approaches to human reliability assessment, we focus on the effects of failure diversity (or correlation) between humans and machines. We illustrate some of the modelling and prediction problems, especially those caused by the presence of the human component. We show two alternative models, with their pros and cons, and illustrate, via numerical examples and analytically, some interesting and non-intuitive answers to questions about reliability assessment and design choices for human-computer systems.
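The effect of failure correlation described in this abstract can be shown with a toy two-component model. All numbers below are hypothetical, chosen only to illustrate why an independence assumption can be dangerously optimistic; this is not the paper's actual model:

```python
# Toy 1-out-of-2 model of a human plus a machine advisory tool: the
# composite system misses a case only when BOTH the human and the
# machine fail on that case. All probabilities are hypothetical.
p_human = 0.05    # assumed human probability of failure per demand
p_machine = 0.02  # assumed machine probability of failure per demand

# Naive independence assumption: joint failure = product of the two.
pfd_independent = p_human * p_machine

# Correlated failures: human and machine tend to fail on the same
# 'difficult' cases, so the joint probability of failure per demand is
# the product plus the covariance of the two failure indicators.
covariance = 0.005  # assumed positive correlation on hard cases
pfd_correlated = p_human * p_machine + covariance

print(f"Independent assumption: {pfd_independent:.4f}")
print(f"With correlation:      {pfd_correlated:.4f}")
```

Even this crude sketch shows the point: a modest positive covariance makes the true system failure probability several times larger than the independence-based estimate.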
Validation of Ultrahigh Dependability for Software-Based Systems
Modern society depends on computers for a number of critical tasks in which failure can have very high costs. As a consequence, high levels of dependability (reliability, safety, etc.) are required from such computers, including their software. Whenever a quantitative approach to risk is adopted, these requirements must be stated in quantitative terms, and a rigorous demonstration that they are attained is necessary. For software used in the most critical roles, such demonstrations are not usually supplied. The fact is that the dependability requirements often lie near the limit of the current state of the art, or beyond, in terms not only of the ability to satisfy them, but also, and more often, of the ability to demonstrate that they are satisfied in the individual operational products (validation). We discuss reasons why such demonstrations cannot usually be provided with the means available: reliability growth models, testing with stable reliability, structural dependability modelling, as well as more informal arguments based on good engineering practice. We state some rigorous arguments about the limits of what can be validated with each of these means. Combining evidence from these different sources would seem to raise the levels that can be validated; yet this improvement is not such as to solve the problem. It appears that engineering practice must take into account the fact that no solution exists, at present, for the validation of ultra-high dependability in systems relying on complex software.
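The limit on "testing with stable reliability" mentioned above can be made concrete with a standard back-of-the-envelope calculation (a simple Bernoulli-trials model, not the paper's full argument): to claim a bound on the probability of failure per demand (pfd) at a given confidence from failure-free testing alone, the number of demands required grows as roughly 1/pfd.

```python
import math

def demands_required(pfd_bound: float, confidence: float) -> int:
    """Failure-free demands needed to claim pfd <= pfd_bound at the given
    confidence, under a Bernoulli trials model: we need
    (1 - pfd_bound) ** n <= 1 - confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - pfd_bound))

# A modest requirement (1e-4 per demand at 99% confidence) is feasible:
# on the order of tens of thousands of failure-free demands.
print(demands_required(1e-4, 0.99))

# An ultra-high requirement (1e-9 per demand at 99% confidence) needs
# billions of failure-free, operationally representative demands,
# which is infeasible in practice.
print(demands_required(1e-9, 0.99))
```

This is the core of the validation problem: for ultra-high dependability targets, statistical testing alone cannot deliver the demonstration within any realistic test campaign.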
The Law Commission presumption concerning the dependability of computer evidence
We consider the condition set out in section 69(1)(b) of the Police and Criminal Evidence Act 1984 (PACE 1984) that reliance on computer evidence should be subject to proof of its correctness, and compare it to the 1997 Law Commission recommendation that a common law presumption be used that a computer operated correctly unless there is explicit evidence to the contrary (the LC Presumption). We understand the LC Presumption prevails in current legal proceedings. We demonstrate that neither section 69(1)(b) of PACE 1984 nor the LC Presumption reflects the reality of general software-based system behaviour.
Evaluating the resilience and security of boundaryless, evolving socio-technical Systems of Systems
Resilient Critical Infrastructure Management using Service Oriented Architecture
Abstract—The SERSCIS project aims to support the use of interconnected systems of services in Critical Infrastructure (CI) applications. The problem of system interconnectedness is aptly demonstrated by 'Airport Collaborative Decision Making' (ACDM). Failure or underperformance of any of the interlinked ICT systems may compromise the ability of airports to plan their use of resources to sustain high levels of air traffic, or to provide accurate aircraft movement forecasts to the wider European air traffic management systems. The proposed solution is to introduce further SERSCIS ICT components to manage dependability and interdependency. These use semantic models of the critical infrastructure, including its ICT services, to identify faults and potential risks and to increase human awareness of them. Semantics allows information and services to be described in such a way that they are understandable to computers. Thus, when a failure (or a threat of failure) is detected, SERSCIS components can take action to manage the consequences, including changing the interdependency relationships between services. In some cases, the components will be able to take action autonomously, e.g. to manage 'local' issues such as the allocation of CPU time to maintain service performance, or the selection of services where redundant sources are available. In other cases, the components will alert human operators so they can take action instead. The goal of this paper is to describe a Service Oriented Architecture (SOA) that can be used to address the management of ICT components and interdependencies in critical infrastructure systems. Index Terms—resilience; QoS; SOA; critical infrastructure; SLA