Assessing multi-version systems through fault injection
Multi-version design (MVD) has been proposed as a method for increasing the dependability of critical systems beyond current levels. However, a major obstacle to large-scale commercial usage of this approach is the lack of quantitative characterizations available. Fault injection is used to help address this problem. Fault injection is a phrase covering a variety of testing techniques that can be applied to both hardware and software, all of which involve the deliberate insertion of faults into an operational system to determine its response. This approach has the potential for yielding highly useful metrics with regard to MVD systems, as well as giving developers a greater insight into the behaviour of each channel within the system. In this research, an automatic fault injection system for multi-version systems called FITMVS is developed. A multi-version system is then tested using this system, and the results analysed. It is concluded that this approach can yield several extremely useful metrics, such as metrics related to channel sensitivity, channel sensitivity to common-mode error, program scope sensitivity, program scope sensitivity to common-mode error, error frequency distribution and common-mode error frequency distribution. In addition, the analysis of the multi-version system tested indicates that the system has an extremely low probability of experiencing common-mode error, although several key points in channel code are identified as having higher sensitivity to faults than others.
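FITMVS itself is not described in detail in the abstract, but the kind of campaign it runs can be sketched. The snippet below is a minimal, hypothetical illustration (the channel functions, the input-corruption fault model, and the tally names are assumptions, not the tool's actual design): a fault is injected into one channel per trial, and per-channel error counts and common-mode error counts are recorded.

```python
import random

def run_channel(channel, x, fault=None):
    """Run one channel, optionally perturbing its input to emulate a fault."""
    if fault is not None:
        x = x + fault          # simple data-corruption fault model (an assumption)
    return channel(x)

def inject_and_tally(channels, inputs, fault_mag=1.0, trials=1000, seed=0):
    """Toy campaign: inject a fault into one randomly chosen channel per trial,
    then tally per-channel errors and common-mode errors (all channels wrong)."""
    rng = random.Random(seed)
    per_channel = [0] * len(channels)
    common_mode = 0
    for _ in range(trials):
        x = rng.choice(inputs)
        golden = channels[0](x)                    # fault-free reference output
        target = rng.randrange(len(channels))      # channel receiving the fault
        outputs = [run_channel(c, x, fault_mag if i == target else None)
                   for i, c in enumerate(channels)]
        errors = [out != golden for out in outputs]
        for i, e in enumerate(errors):
            per_channel[i] += e
        if all(errors):
            common_mode += 1
    return per_channel, common_mode
```

With identical channels and a single-channel fault model, each trial yields exactly one channel error and no common-mode errors; real channels sharing a design fault would raise the common-mode count, which is the quantity of interest above.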
Assessing Asymmetric Fault-Tolerant Software
The most popular forms of fault tolerance against design faults use "asymmetric" architectures in which a "primary" part performs the computation and a "secondary" part is in charge of detecting errors and performing some kind of error processing and recovery. In contrast, the most studied forms of software fault tolerance are "symmetric" ones, e.g. N-version programming. The latter are often controversial, the former are not. We discuss how to assess the dependability gains achieved by these methods. Substantial difficulties have been shown to exist for symmetric schemes, but we show that the same difficulties affect asymmetric schemes. Indeed, the latter present somewhat subtler problems. In both cases, to predict the dependability of the fault-tolerant system it is not enough to know the dependability of the individual components. We extend to asymmetric architectures the style of probabilistic modeling that has been useful for describing the dependability of "symmetric" architectures, to highlight factors that complicate the assessment. In the light of these models, we finally discuss fault injection approaches to estimating coverage factors. We highlight the limits of what can be predicted and some useful research directions towards clarifying and extending the range of situations in which estimates of coverage of fault tolerance mechanisms can be trusted.
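The coverage-factor estimation discussed above can be illustrated with a small sketch. This assumes a simple binomial sampling model, not the paper's specific formulation: coverage is estimated from a sample of erroneous primary outputs fed to the checker, and the undetected-failure probability follows only under a strong independence assumption of exactly the kind the abstract cautions cannot be taken for granted.

```python
import math

def estimate_coverage(checker, faulty_outputs, z=1.96):
    """Estimate checker coverage c = P(error detected | primary output erroneous)
    from fault-injection samples, with a Wald (normal-approximation) interval."""
    n = len(faulty_outputs)
    c = sum(1 for y in faulty_outputs if checker(y)) / n
    half = z * math.sqrt(c * (1.0 - c) / n)
    return c, (max(0.0, c - half), min(1.0, c + half))

def undetected_failure_prob(p_primary_fails, coverage):
    """Undetected-failure probability of the asymmetric pair, valid ONLY if
    checker coverage is independent of which failures the primary produces."""
    return p_primary_fails * (1.0 - coverage)
```

Note that the Wald interval behaves poorly when the estimated coverage is near 1, which is precisely the regime of practical interest; this is one statistical reason that trusting coverage estimates is hard.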
A methodology for the generation of efficient error detection mechanisms
A dependable software system must contain error detection mechanisms and error recovery mechanisms. Software components for the detection of errors are typically designed based on a system specification or the experience of software engineers, with their efficiency typically being measured using fault injection and metrics such as coverage and latency. In this paper, we introduce a methodology for the design of highly efficient error detection mechanisms. The proposed methodology combines fault injection analysis and data mining techniques in order to generate predicates for efficient error detection mechanisms. The results presented demonstrate the viability of the methodology as an approach for the development of efficient error detection mechanisms, as the predicates generated yield a true positive rate of almost 100% and a false positive rate very close to 0% for the detection of failure-inducing states. The main advantage of the proposed methodology over current state-of-the-art approaches is that efficient detectors are obtained by design, rather than by using specification-based detector design or the experience of software engineers.
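As a rough illustration of mining detection predicates from fault-injection data (not the paper's actual technique, which the abstract does not specify), a one-variable threshold predicate can be learned from labelled program states and then scored by the same true/false positive rates the abstract reports:

```python
def learn_stump(states, labels):
    """Learn a one-variable threshold predicate (decision stump) from
    fault-injection traces: states are feature tuples, labels mark
    failure-inducing states."""
    best = None
    for j in range(len(states[0])):
        for t in sorted({s[j] for s in states}):
            pred = lambda s, j=j, t=t: s[j] > t
            acc = sum(pred(s) == l for s, l in zip(states, labels)) / len(states)
            if best is None or acc > best[0]:
                best = (acc, pred)
    return best[1]

def rates(pred, states, labels):
    """True and false positive rates of a detector predicate."""
    tp = sum(pred(s) for s, l in zip(states, labels) if l)
    fp = sum(pred(s) for s, l in zip(states, labels) if not l)
    p = sum(labels)
    n = len(labels) - p
    return tp / p, fp / n
```

A real data-mining pipeline would use richer model classes and held-out data, but the structure is the same: fault injection supplies the labels, mining supplies the predicate, and coverage-style metrics judge the result.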
Characterizing Natural Fractures and Their Interactions with Hydraulically Induced Fractures
Natural fractures are preexisting micro-cracks and fissures that can have a critical impact on hydraulic fracture treatments in shales. Most shale formations contain natural fractures, but the characteristics of these natural fractures can vary significantly. For example, the natural fractures in the Barnett Shale are mostly narrow, long, and sealed with calcite cement. The natural fractures in the Wolfcamp Shale are much more heterogeneous as a whole, but tend to be clustered in similar groupings based on the lithology of certain areas of the formation. The creation and development of natural fractures prior to any hydraulic fracturing treatments is primarily a function of mineralogy, total organic carbon, and in-situ stresses. During hydraulic fracturing treatments, certain characteristics, such as the relative angle between the natural and hydraulic fractures, the length of the natural fractures, the differential stress of the formation rock, and certain completion design variables, will determine how the natural and induced fractures interact and create a fracture network. The creation of a natural fracture network can have a positive effect on the ultimate hydrocarbon recovery in some cases. Natural fractures provide accumulation space and travel pathways for hydrocarbons, which is critical in low porosity and low permeability shales. However, natural fractures can result in higher rates of fluid leakoff, which will result in less efficient hydraulic fracture treatments overall. Also, natural fractures can provide an undesirable connection to water accumulations, which can negatively impact the economics of a well because of the disposal costs associated with water production. This thesis seeks to characterize natural fractures and also to describe the author's work on a hydraulic fracture simulation software that takes the impact of natural fractures into account.
Effect of sedimentary heterogeneities in the sealing formation on predictive analysis of geological CO<sub>2</sub> storage
Numerical models of geologic carbon sequestration (GCS) in saline aquifers use multiphase fluid flow-characteristic curves (relative permeability and capillary pressure) to represent the interactions of the non-wetting CO2 and the wetting brine. Relative permeability data for many sedimentary formations is very scarce, resulting in the utilisation of mathematical correlations to generate the fluid flow characteristics in these formations. The flow models are essential for the prediction of CO2 storage capacity and trapping mechanisms in the geological media. The observation of pressure dissipation across the storage and sealing formations is relevant for storage capacity and geomechanical analysis during CO2 injection.
This paper evaluates the relevance of representing relative permeability variations in the sealing formation when modelling geological CO2 sequestration processes. Here we concentrate on gradational changes in the lower part of the caprock, particularly how they affect pressure evolution within the entire sealing formation when duly represented by relative permeability functions.
The results demonstrate the importance of accounting for pore size variations in the mathematical model adopted to generate the characteristic curves for GCS analysis. Gradational changes at the base of the caprock influence the magnitude of pressure that propagates vertically into the caprock from the aquifer, especially at the critical zone (i.e. the region overlying the CO2 plume accumulating at the reservoir-seal interface). Higher overpressure and CO2 storage capacity were observed at the base of caprocks that showed gradation. These results illustrate the need to obtain reliable relative permeability functions for GCS, beyond just permeability and porosity data. The study provides a formative principle for geomechanical simulations that study the possibility of pressure-induced caprock failure during CO2 sequestration.
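The mathematical correlations referred to above can be illustrated with the standard Brooks-Corey model, one common way of generating relative permeability curves from a pore-size distribution index. The paper does not state which correlation it uses, and the parameter values below are purely illustrative:

```python
def brooks_corey(sw, swr, snr, lam):
    """Brooks-Corey style relative permeability curves for the wetting phase
    (brine, krw) and non-wetting phase (CO2, krn).
    sw: wetting saturation; swr/snr: residual saturations;
    lam: pore-size distribution index (smaller lam = wider pore-size spread)."""
    se = (sw - swr) / (1.0 - swr - snr)          # effective wetting saturation
    se = min(max(se, 0.0), 1.0)                  # clamp outside residual range
    krw = se ** ((2.0 + 3.0 * lam) / lam)
    krn = (1.0 - se) ** 2 * (1.0 - se ** ((2.0 + lam) / lam))
    return krw, krn
```

Varying `lam` with depth is one simple way a modeller might represent the gradational pore-size changes at the caprock base, rather than assigning the sealing formation a single pair of characteristic curves.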
Model-based dependability analysis: state-of-the-art, challenges and future outlook
Over the past two decades, the study of model-based dependability analysis has gathered significant research interest. Different approaches have been developed to automate and address various limitations of classical dependability techniques, and to contend with the increasing complexity and challenges of modern safety-critical systems. Two leading paradigms have emerged: one constructs predictive system failure models from component failure models compositionally, using the topology of the system; the other utilizes design models - typically state automata - to explore system behaviour through fault injection. This paper reviews a number of prominent techniques under these two paradigms, and provides an insight into their working mechanisms, applicability, strengths and challenges, as well as recent developments within these fields. We also discuss emerging trends in integrated approaches and advanced analysis capabilities. Lastly, we outline the future outlook for model-based dependability analysis.
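The second paradigm, fault injection into state automata, can be sketched in a few lines. The model below is a toy (the state names and fault transitions are invented for illustration): nominal and fault transitions are merged, and the analysis reports which failure states become reachable only once faults are injected.

```python
def reachable(transitions, start):
    """All states reachable from start in a transition map {state: [next, ...]}."""
    seen, frontier = set(), [start]
    while frontier:
        s = frontier.pop()
        if s not in seen:
            seen.add(s)
            frontier.extend(transitions.get(s, []))
    return seen

def injected_failure_modes(nominal, faults, start, failure_states):
    """Failure states reachable only when fault transitions are injected."""
    merged = {s: nominal.get(s, []) + faults.get(s, [])
              for s in set(nominal) | set(faults)}
    return (reachable(merged, start) - reachable(nominal, start)) & set(failure_states)
```

Tools in this paradigm work over far richer automata (guards, timing, probabilities), but the core question is the same reachability check: which system-level failures do injected component faults make possible?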
Laboratory test methodology for evaluating the effects of electromagnetic disturbances on fault-tolerant control systems
Control systems for advanced aircraft, especially those with relaxed static stability, will be critical to flight and will, therefore, have very high reliability specifications which must be met for adverse as well as nominal operating conditions. Adverse conditions can result from electromagnetic disturbances caused by lightning, high-energy radio frequency transmitters, and nuclear electromagnetic pulses. Tools and techniques must be developed to verify the integrity of the control system under adverse operating conditions. The most difficult and elusive perturbations to computer-based control systems caused by an electromagnetic environment (EME) are functional error modes that involve no component damage. These error modes, collectively known as upset, can occur simultaneously in all of the channels of a redundant control system and are software dependent. A methodology is presented for performing upset tests on a multichannel control system, and considerations are discussed for the design of upset tests to be conducted in the laboratory on fault-tolerant control systems operating in a closed loop with a simulated plant.
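A closed-loop upset test of the kind described can be sketched as follows. This is a toy model, not the paper's methodology: a PI controller regulates a simulated first-order plant, the controller's integrator state is corrupted mid-run to emulate an EME-induced upset (a functional error with no component damage), and recovery to the setpoint is checked at the end of the run.

```python
def closed_loop_upset_test(steps=600, upset_at=100):
    """Toy upset test: PI controller + first-order simulated plant.
    At step upset_at the controller's stored integrator word is corrupted
    (transient upset, no hardware damage). Returns True if the loop has
    recovered to the setpoint by the end of the run."""
    setpoint, x, integ = 1.0, 0.0, 0.0
    kp, ki, dt = 0.5, 0.2, 0.1
    for t in range(steps):
        if t == upset_at:
            integ = 50.0                  # injected upset: corrupted memory word
        err = setpoint - x
        integ += err * dt                 # integral term
        u = kp * err + ki * integ         # PI control law
        x += dt * (-x + u)                # first-order plant dynamics
    return abs(setpoint - x) < 0.05
```

An upset injected early in the run is absorbed by the loop, while the same upset injected just before the end leaves the plant off the setpoint; a real laboratory campaign varies the injection time, location, and redundancy channel in exactly this spirit.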