Modeling software design diversity
Design diversity has been used for many years as a means of achieving a degree of fault tolerance in software-based systems. Whilst there is clear evidence that the approach can be expected to deliver some increase in reliability compared with a single version, there is no agreement about the extent of this increase. More importantly, it remains difficult to evaluate exactly how reliable a particular diverse fault-tolerant system is. This difficulty arises because assumptions of independence of failures between different versions have been shown not to be tenable: assessment of the actual level of dependence present is therefore needed, and this is hard. In this tutorial we survey the modelling issues here, with an emphasis upon the impact these have upon the problem of assessing the reliability of fault-tolerant systems. The intended audience is one of designers, assessors and project managers with only a basic knowledge of probabilities, as well as reliability experts without detailed knowledge of software, who seek an introduction to the probabilistic issues in decisions about design diversity.
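The point about independence being untenable can be made concrete with a small numerical sketch. The figures below are invented for illustration (they are not from the tutorial): demands vary in difficulty, and demands that are hard for one version tend to be hard for the other, so the probability that both versions fail exceeds the product of the individual failure probabilities.

```python
# Hypothetical illustration of why assuming independent failures overstates
# the reliability of a 1-out-of-2 diverse system. All numbers are invented.
p_a = 1e-3   # assumed probability that version A fails on a random demand
p_b = 1e-3   # assumed probability that version B fails on a random demand

# Under the (untenable) independence assumption:
p_sys_indep = p_a * p_b

# Now model variation of difficulty: suppose 10% of demands are "hard" and
# defeat either version with probability 5e-3, with the easy-demand failure
# probability chosen so the marginal rate for each version is still 1e-3.
hard_frac = 0.1
p_fail_hard = 5e-3
p_fail_easy = (p_a - hard_frac * p_fail_hard) / (1 - hard_frac)

# P(both fail) averages the product of the two conditional failure
# probabilities over the demand profile, so hard demands dominate.
p_sys_dep = (hard_frac * p_fail_hard ** 2
             + (1 - hard_frac) * p_fail_easy ** 2)

print(f"independence assumption: {p_sys_indep:.2e}")
print(f"with correlated difficulty: {p_sys_dep:.2e}")
```

Even this mild degree of common difficulty makes the true system failure probability several times worse than the independence calculation suggests, which is why assessing the actual level of dependence matters.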
Evaluation of software dependability
It has been said that the term software engineering is an aspiration not a description. We would like to be able to claim that we engineer software, in the same sense that we engineer an aero-engine, but most of us would agree that this is not currently an accurate description of our activities. My suspicion is that it never will be.
From the point of view of this essay – i.e. dependability evaluation – a major difference between software and other engineering artefacts is that the former is pure design. Its unreliability is always the result of design faults, which in turn arise as a result of human intellectual failures. The unreliability of hardware systems, on the other hand, has tended until recently to be dominated by random physical failures of components – the consequences of the ‘perversity of nature’. Reliability theories have been developed over the years which have successfully allowed systems to be built to high reliability requirements, and the final system reliability to be evaluated accurately. Even for pure hardware systems, without software, however, the very success of these theories has more recently highlighted the importance of design faults in determining the overall reliability of the final product. The conventional hardware reliability theory does not address this problem at all.
In the case of software, there is no physical source of failures, and so none of the reliability theory developed for hardware is relevant. We need new theories that will allow us to achieve required dependability levels, and to evaluate the actual dependability that has been achieved, when the sources of the faults that ultimately result in failure are human intellectual failures.
Formal change impact analyses for emulated control software
Processor emulators are a software tool for allowing legacy computer programs to be executed on a modern processor. In the past, emulators have been used in trivial applications such as the maintenance of video games. Now, however, processor emulation is being applied to safety-critical control systems, including military avionics. These applications demand utmost guarantees of correctness, but no verification techniques exist for proving that an emulated system preserves the original system’s functional and timing properties. Here we show how this can be done by combining concepts previously used for reasoning about real-time program compilation with an understanding of the new and old software architectures. In particular, we show how both the old and new systems can be given a common semantics, thus allowing their behaviours to be compared directly.
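The "common semantics" idea can be sketched abstractly: model both the legacy processor and its emulation as state-transition functions over a shared observable state, then compare the traces they induce on the same program. The two-instruction machine below is invented purely to illustrate the shape of such a comparison; it is not the paper's formalism.

```python
# Hypothetical sketch: give the legacy system and the emulated system a
# common semantics (transition functions on one observable state) so their
# behaviours can be compared directly. The instruction set is invented.
def legacy_step(state, instr):
    """Reference semantics of the legacy processor."""
    op, arg = instr
    if op == "ADD":
        return {**state, "acc": state["acc"] + arg}
    if op == "STORE":
        return {**state, "mem": {**state["mem"], arg: state["acc"]}}
    raise ValueError(f"unknown opcode {op}")

def emulated_step(state, instr):
    # The emulator may decode and dispatch quite differently internally,
    # but correctness means it induces the same transition on the common
    # observable state. Here we simply delegate, standing in for a proof.
    return legacy_step(state, instr)

def run(step, program, state):
    """Collect the full trace of observable states."""
    trace = [state]
    for instr in program:
        state = step(state, instr)
        trace.append(state)
    return trace

program = [("ADD", 2), ("ADD", 3), ("STORE", 0)]
init = {"acc": 0, "mem": {}}
assert run(legacy_step, program, init) == run(emulated_step, program, init)
```

In a real verification, `emulated_step` would be derived from the emulator's implementation, and the equality of traces would be proved rather than tested; timing properties would be handled by enriching the observable state with a clock.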
Rigorously assessing software reliability and safety
This paper summarises the state of the art in the assessment of software reliability and safety ("dependability"), and describes some promising developments. A sound demonstration of very high dependability is still impossible before operation of the software; but research is finding ways to make rigorous assessment increasingly feasible. While refined mathematical techniques cannot take the place of factual knowledge, they can allow the decision-maker to draw more accurate conclusions from the knowledge that is available.
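One standard rigorous technique in this area is Bayesian inference on the probability of failure per demand (pfd) from failure-free testing. The sketch below uses the textbook Beta-Bernoulli model with numbers chosen for illustration; the prior and test count are assumptions, not figures from the paper.

```python
# Hedged illustration: Bayesian assessment of probability of failure per
# demand (pfd) after failure-free testing. Numbers are illustrative only.

# Beta(a, b) prior over pfd; Beta(1, 1) is the uniform "ignorance" prior.
a, b = 1.0, 1.0
n = 4600   # statistically independent, failure-free test demands (assumed)

# Posterior after n successes is Beta(a, b + n); its mean is a / (a + b + n).
posterior_mean_pfd = a / (a + b + n)
print(f"posterior mean pfd: {posterior_mean_pfd:.2e}")

# For the Beta(1, 1 + n) posterior, P(pfd < p) has the closed form
# 1 - (1 - p)**(n + 1), so confidence in a target bound is easy to compute.
target = 1e-3
confidence = 1 - (1 - target) ** (n + 1)
print(f"P(pfd < {target}): {confidence:.3f}")
```

The exercise makes the paper's caveat vivid: demonstrating a pfd bound of 1e-3 with 99% confidence already requires thousands of failure-free demands, and the testing burden grows roughly in inverse proportion to the target bound, which is why "very high dependability" cannot be demonstrated before operation.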
Recovering the lost gold of the developing world: bibliographic database
This report contains a library of 181 references, including abstracts, prepared for Project R 7120, "Recovering the lost gold of the developing world", funded by the UK's Department for International Development (DFID) under the Knowledge and Research (KAR) programme. As part of an initial desk study, a literature review of gold processing methods used by small-scale miners was carried out using the following sources: the ISI Science Citation Index, accessed via Bath Information and Data Services (BIDS); a licensed GEOREF CD-ROM database held at the BGS's Library in Keyworth; and IMMage, a CD-ROM database produced by the Institution of Mining and Metallurgy, held by the Minerals group of BGS. Information on the search terms used is available from the author.
Failure mode prediction and energy forecasting of PV plants to assist dynamic maintenance tasks by ANN based models
In the field of renewable energy, reliability analysis techniques that combine the operating time of the system with observations of operational and environmental conditions are gaining importance over time.
In this paper, reliability models are adapted to incorporate monitoring data on operating assets, as well as information on their environmental conditions, into their calculations. To that end, a logical decision tool based on two artificial neural network models is presented. This tool allows updating asset reliability analyses according to changes in operational and/or environmental conditions.
The proposed tool could easily be automated within a supervisory control and data acquisition (SCADA) system, where reference values and corresponding warnings and alarms could be dynamically generated using the tool. Thanks to this capability, on-line diagnosis and/or prediction of potential asset degradation can certainly be improved.
Reliability models in the tool are developed according to the available amount of failure data and are used for early detection of degradation in energy production due to functional failures of the power inverter and solar trackers.
Another capability of the tool is to assess the economic risk associated with the system under existing conditions and for a certain period of time. This information can then also be used to trigger preventive maintenance activities.
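One common way to fold operating conditions into a reliability model, as the abstract describes in general terms, is a proportional-hazards adjustment of a baseline lifetime distribution. The sketch below uses a Weibull baseline scaled by a covariate factor; this is a standard technique chosen for illustration, not the paper's ANN-based method, and all parameter values are invented.

```python
# Hedged sketch: condition-dependent reliability via a Weibull baseline with
# a proportional-hazards covariate factor. Parameters are illustrative only,
# and this stands in for (it is not) the paper's neural-network tool.
from math import exp

def reliability(t, beta=1.8, eta=20_000.0, covariate_effect=0.0):
    """R(t) for a Weibull PH model.

    covariate_effect is sum(gamma_i * z_i) over monitored operational and
    environmental covariates; positive values mean harsher conditions.
    """
    hazard_scale = exp(covariate_effect)   # multiplies the baseline hazard
    return exp(-hazard_scale * (t / eta) ** beta)

t = 8760  # one year of operating hours
print(f"nominal conditions:  R = {reliability(t):.3f}")
print(f"harsher conditions:  R = {reliability(t, covariate_effect=0.7):.3f}")
```

Recomputing `covariate_effect` as monitoring data arrives is what lets reference values, warnings and alarms be regenerated dynamically rather than fixed at design time.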
The Return of the Rogue
The “rogue trader”—a famed figure of the 1990s—has recently returned to prominence, due largely to two phenomena. First, recent U.S. mortgage market volatility spilled over into stock, commodity, and derivative markets worldwide, causing large financial institution losses and revealing previously hidden unauthorized positions. Second, the rogue trader has gained importance as banks around the world have focused more attention on operational risk in response to regulatory changes prompted by the Basel II Capital Accord. This Article contends that, of the many regulatory options available to the Basel Committee for addressing operational risk, it arguably chose the worst: an enforced self-regulatory regime unlikely to substantially alter financial institutions’ ability to successfully manage operational risk. That regime also poses the danger of high costs, a false sense of security, and perverse incentives. Particularly with respect to the low-frequency, high-impact events—including rogue trading—that may be the greatest threat to bank stability and soundness, attempts at enforced self-regulation are unlikely to significantly reduce operational risk, because those financial institutions with the highest operational risk are the least likely to credibly assess that risk and set aside adequate capital under a regime of enforced self-regulation.
Indexed Labels for Loop Iteration Dependent Costs
We present an extension to the labelling approach, a technique for lifting resource consumption information from compiled code to source code. This approach, which is at the core of the annotating compiler from a large fragment of C to 8051 assembly in the CerCo project, loses precision when the cost of the same portion of code varies, whether due to code transformations such as loop optimisations or to advanced architectural features (e.g. caches). We propose to address this weakness by formally indexing cost labels with the iterations of the containing loops in which they occur. These indexes can be transformed during compilation, and when lifted back to source code they produce dependent costs.
The proposed changes have been implemented in CerCo's untrusted prototype compiler from a large fragment of C to 8051 assembly.
Comment: In Proceedings QAPL 2013, arXiv:1306.241
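The gain from indexing can be seen in a toy cost calculation. The label name and cycle counts below are invented for illustration: a flat label must over-approximate every iteration at the worst case, whereas an iteration-indexed label can charge a cold-cache first iteration more than the warm ones.

```python
# Hypothetical sketch of iteration-indexed cost labels: the cost attached to
# a label is a function of the enclosing loop's iteration counter, not a
# constant. Label name and cycle figures are invented for illustration.
def indexed_cost(label, i):
    costs = {
        # "L_body": 12 cycles on iteration 0 (cold cache), 7 cycles after
        "L_body": lambda i: 12 if i == 0 else 7,
    }
    return costs[label](i)

n = 10
# Dependent cost lifted to source: a sum over the loop's iterations.
total = sum(indexed_cost("L_body", i) for i in range(n))

# A single flat label would have to charge the worst case every time.
flat_bound = 12 * n

print(f"dependent cost: {total} cycles; flat over-approximation: {flat_bound}")
```

This mirrors the abstract's point: because the index survives compilation, the source-level annotation can express a cost that depends on the iteration, recovering precision that a single constant per label gives up.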