Comparing the effectiveness of testing methods in improving programs: the effect of variations in program quality
We compare the efficacy of different testing methods for improving the reliability of software. Specifically, we use modelling to compare “operational” testing, in which test cases are chosen according to their probability of occurring in actual use of the software, against “debug” testing methods, in which testers look for test cases they consider likely to cause failure, or that satisfy some coverage criterion. We base our comparisons on the reliability reached by the program at the end of testing. Unlike previous studies, we consider the probability distribution of the achieved reliability, and thus the probability of satisfying specific requirements, rather than just the average reliability achieved. We take account of two sources of variation: the variation between the actual test histories that are possible for a given program and a given test method, and the fact that different programs start testing with different faults and initial reliability levels. By necessity, we use very simplified models of reality. Yet we can show some interesting conclusions with important practical consequences. In general, there are stronger arguments in favor of operational testing than previous studies have shown.
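The contrast between the two regimes can be illustrated with a toy simulation. This is a minimal sketch with invented fault sizes and detection parameters, not the authors' actual model: operational testing draws inputs from the usage profile, so large faults tend to be found quickly, while debug testing is idealised here as targeting one remaining fault per test with a fixed detection probability. The point of interest is the whole distribution of achieved unreliability over test histories, not just its mean.

```python
import random
import statistics

def residual_unreliability(method, fault_probs, n_tests, rng, detect_prob=0.05):
    """One test history: remove faults as they are detected, return the residual
    probability of failure per demand (sum of the remaining fault sizes)."""
    remaining = list(fault_probs)
    for _ in range(n_tests):
        if method == "operational":
            # A test input drawn from the operational profile triggers fault i
            # with its operational probability p_i; triggered faults are fixed.
            remaining = [p for p in remaining if rng.random() >= p]
        else:  # "debug": the tester targets one suspected fault per test
            if remaining and rng.random() < detect_prob:
                remaining.pop(rng.randrange(len(remaining)))
    return sum(remaining)

def distribution(method, fault_probs, n_tests, n_histories, seed=0):
    """Distribution of achieved unreliability over many possible test histories."""
    rng = random.Random(seed)
    return [residual_unreliability(method, fault_probs, n_tests, rng)
            for _ in range(n_histories)]

faults = [0.02, 0.01, 0.005, 0.001, 0.0005]  # invented fault sizes
for method in ("operational", "debug"):
    qs = distribution(method, faults, n_tests=50, n_histories=2000)
    p_meet = sum(q <= 0.002 for q in qs) / len(qs)  # P(reliability requirement met)
    print(f"{method:11s} mean unreliability {statistics.mean(qs):.5f}, "
          f"P(q <= 0.002) = {p_meet:.2f}")
```

Comparing `P(q <= 0.002)` rather than the mean is exactly the shift the abstract describes: two methods with similar average unreliability can differ sharply in the probability of meeting a stated requirement.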
Modeling software design diversity
Design diversity has been used for many years now as a means of achieving a degree of fault tolerance in software-based systems. Whilst there is clear evidence that the approach can be expected to deliver some increase in reliability compared with a single version, there is no agreement about the extent of this. More importantly, it remains difficult to evaluate exactly how reliable a particular diverse fault-tolerant system is. This difficulty arises because assumptions of independence of failures between different versions have been shown not to be tenable; assessment of the actual level of dependence present is therefore needed, and this is hard. In this tutorial we survey the modelling issues here, with an emphasis upon the impact these have upon the problem of assessing the reliability of fault-tolerant systems. The intended audience is designers, assessors and project managers with only a basic knowledge of probability, as well as reliability experts without detailed knowledge of software, who seek an introduction to the probabilistic issues in decisions about design diversity.
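Why the independence assumption fails can be shown in a few lines, in the style of the difficulty-function models such tutorials survey: if both versions are more likely to fail on the same "difficult" demands, the joint failure probability exceeds what independence would predict. The demand profile below is purely illustrative.

```python
# Illustrative difficulty profile: (fraction of demands, per-version failure prob).
# Most demands are easy; a minority are hard for any independently built version.
profile = [(0.90, 0.001), (0.10, 0.10)]

p_single = sum(w * th for w, th in profile)       # E[theta]: one version fails
p_both   = sum(w * th * th for w, th in profile)  # E[theta^2]: versions fail
                                                  # independently *given* the demand
p_indep  = p_single ** 2                          # naive independence prediction

print(f"P(one version fails)        = {p_single:.6f}")
print(f"P(both fail), common demand = {p_both:.6f}")
print(f"P(both fail) if independent = {p_indep:.6f}")
# Jensen's inequality: E[theta^2] >= (E[theta])^2, so variation in demand
# difficulty alone induces positive correlation between version failures.
```

A 1-out-of-2 pair still helps (`p_both < p_single`), but by roughly an order of magnitude less than the independence assumption promises, which is why assessing the actual level of dependence matters.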
Software reliability and dependability: a roadmap
Key directions for the future include:
- Shifting the focus from software reliability to user-centred measures of dependability in complete software-based systems.
- Influencing design practice to facilitate dependability assessment.
- Propagating awareness of dependability issues and the use of existing, useful methods.
- Injecting some rigour into the use of process-related evidence for dependability assessment.
- Better understanding issues of diversity and variation as drivers of dependability.

Bev Littlewood is founder-Director of the Centre for Software Reliability, and Professor of Software Engineering at City University, London. Prof Littlewood has worked for many years on problems associated with the modelling and evaluation of the dependability of software-based systems; he has published many papers in international journals and conference proceedings and has edited several books. Much of this work has been carried out in collaborative projects, including the successful EC-funded projects SHIP, PDCS, PDCS2 and DeVa. He has been employed as a consultant…
Introducing Energy Efficiency into SQALE
Energy Efficiency is becoming a key factor in software development, given the sharp growth of IT systems and their impact on worldwide energy consumption. We believe that a quality process infrastructure should be able to consider the Energy Efficiency of a system from its early development: for this reason we propose to introduce Energy Efficiency into existing quality models. We selected the SQALE model and tailored it by inserting Energy Efficiency as a sub-characteristic of efficiency. We also propose a set of six source-code-specific requirements for the Java language, starting from guidelines currently suggested in the literature. We encountered two major challenges: the identification of measurable, automatically detectable requirements, and the lack of empirical validation of the guidelines currently found in the literature and in industrial practice. We describe an experiment plan to validate the six requirements and evaluate the impact of their violation on Energy Efficiency, which has been partially supported by preliminary results on C code. Having Energy Efficiency in a quality model, and well-verified code requirements to measure it, will enable a quality process that precisely assesses and monitors the impact of software on energy consumption.
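The abstract does not list the six requirements, but the "measurable, automatically detectable" criterion can be illustrated with a hypothetical example of one such rule: flagging string concatenation inside Java loops, a guideline commonly cited for performance and energy. The checker below is a crude sketch (regex plus brace counting), not part of the SQALE tooling described in the paper.

```python
import re

# Hypothetical rule: '+= "..."' on a String inside a loop body is a violation.
LOOP_RE = re.compile(r'\b(for|while)\s*\(')
CONCAT_RE = re.compile(r'\w+\s*\+=\s*"')

def check_energy_rule(java_source):
    """Return line numbers where string concatenation appears inside a loop.
    Loop extent is tracked by a crude brace count, so this is heuristic only."""
    violations = []
    loop_starts = []  # brace depth at which each currently open loop began
    depth = 0
    for lineno, line in enumerate(java_source.splitlines(), 1):
        if LOOP_RE.search(line):
            loop_starts.append(depth)
        depth += line.count("{") - line.count("}")
        while loop_starts and depth <= loop_starts[-1]:
            loop_starts.pop()  # the innermost loop's block has closed
        if loop_starts and CONCAT_RE.search(line):
            violations.append(lineno)
    return violations

src = '''String s = "";
for (int i = 0; i < n; i++) {
    s += "x";   // allocates a new String each iteration
}'''
print(check_energy_rule(src))
```

An automatically detectable rule of this shape is what makes a quality-model sub-characteristic monitorable in practice; the hard part the authors highlight is validating empirically that violating it actually costs energy.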
Towards operational measures of computer security
Ideally, a measure of the security of a system should capture quantitatively the intuitive notion of ‘the ability of the system to resist attack’. That is, it should be operational, reflecting the degree to which the system can be expected to remain free of security breaches under particular conditions of operation (including attack). Instead, current security levels at best merely reflect the extensiveness of safeguards introduced during the design and development of a system. Whilst we might expect a system developed to a higher level than another to exhibit ‘more secure behaviour’ in operation, this cannot be guaranteed; more particularly, we cannot infer what the actual security behaviour will be from knowledge of such a level. In the paper we discuss similarities between reliability and security with the intention of working towards measures of ‘operational security’ similar to those that we have for reliability of systems. Very informally, these measures could involve expressions such as the rate of occurrence of security breaches (cf. the rate of occurrence of failures in reliability), or the probability that a specified ‘mission’ can be accomplished without a security breach (cf. the reliability function). This new approach is based on the analogy between system failure and security breach. A number of other analogies to support this view are introduced. We examine this duality critically and identify a number of important open questions that need to be answered before this quantitative approach can be taken further. The work described here is therefore somewhat tentative, and one of our major intentions is to invite discussion about the plausibility and feasibility of this new approach.
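The two measures named in the abstract can be made concrete under the simplest possible instantiation of the analogy: a constant breach rate, the security counterpart of the constant-failure-rate reliability model. The data below are made up; whether a constant rate is ever tenable for an adaptive attacker is one of the open questions the paper raises.

```python
import math

def breach_rate_mle(inter_breach_times):
    """Maximum-likelihood estimate of a constant breach rate lambda from
    observed inter-breach intervals (cf. estimating a constant failure rate)."""
    return len(inter_breach_times) / sum(inter_breach_times)

def mission_survival(lam, mission_length):
    """P(no breach during a mission), cf. the reliability function R(t) = exp(-lam*t)."""
    return math.exp(-lam * mission_length)

# Invented inter-breach times (days) observed under one attack environment.
gaps = [12.0, 30.0, 7.5, 22.5]
lam = breach_rate_mle(gaps)  # breaches per day
print(f"estimated breach rate: {lam:.4f}/day")
print(f"P(30-day mission breach-free): {mission_survival(lam, 30):.3f}")
```

The point of an operational measure is exactly this kind of statement: a probability of breach-free operation under stated conditions, rather than an ordinal assurance level.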
Radiation Risks and Mitigation in Electronic Systems
Electrical and electronic systems can be disturbed by radiation-induced effects. In some cases radiation-induced effects have a low probability and can be ignored; however, radiation effects must be considered when designing systems that have a high mean-time-to-failure requirement, an impact on protection, and/or higher exposure to radiation. High-energy physics power systems suffer from a combination of these factors: a high mean time to failure is required, failure can impact protection, and the proximity of systems to accelerators increases the likelihood of radiation-induced events. This paper presents the principal radiation-induced effects and the radiation environments typical of high-energy physics. It outlines a procedure for designing and validating radiation-tolerant systems using commercial off-the-shelf components. The paper ends with a worked example of the radiation-tolerant power converter controls being developed for the Large Hadron Collider and High Luminosity-Large Hadron Collider at CERN.

Comment: 19 pages, contribution to the 2014 CAS - CERN Accelerator School: Power Converters, Baden, Switzerland, 7-14 May 2014
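A back-of-envelope single-event-upset estimate shows why the combination of high-MTTF requirements and accelerator proximity forces radiation effects into the design process. All numbers below are invented for illustration; real validation uses measured fluence maps and per-component test cross-sections.

```python
# Expected upset rate ~ particle flux x per-device SEU cross-section x device count.
flux = 1.0e4           # high-energy hadrons / cm^2 / hour near the machine (illustrative)
cross_section = 1e-12  # SEU cross-section per device, cm^2 (illustrative)
n_devices = 500        # sensitive COTS components across the converter controls

upset_rate = flux * cross_section * n_devices  # upsets per hour, system-wide
mtbf_hours = 1.0 / upset_rate
print(f"upset rate: {upset_rate:.2e}/h  ->  MTBF ~ {mtbf_hours:.1e} h")
```

Even a tiny per-device cross-section, multiplied by a large device count and a sustained flux, can pull the system MTBF below a protection-critical requirement, which is what motivates the qualification procedure the paper outlines.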