Assessing the reliability of diverse fault-tolerant software-based systems
We discuss a problem in the safety assessment of automatic control and protection systems. There is an increasing dependence on software for performing safety-critical functions, like the safety shut-down of dangerous plants. Software brings increased risk of design defects and thus systematic failures; redundancy with diversity between redundant channels is a possible defence. While diversity techniques can improve the dependability of software-based systems, they do not alleviate the difficulties of assessing whether such a system is safe enough for operation. We study this problem for a simple safety protection system consisting of two diverse channels performing the same function. The problem is evaluating its probability of failure on demand. Assuming independence between dangerous failures of the channels is unrealistic. One can instead use evidence from the observation of the whole system's behaviour under realistic test conditions. Standard inference procedures can then estimate system reliability, but they take no advantage of a system's fault-tolerant structure. We show how to extend these techniques to take account of fault tolerance by a conceptually straightforward application of Bayesian inference. Unfortunately, the method is computationally complex and requires the conceptually difficult step of specifying 'prior' distributions for the parameters of interest. This paper presents the correct inference procedure, exemplifies possible pitfalls in its application and clarifies some non-intuitive issues about reliability assessment for fault-tolerant software
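The 'standard inference procedures' the abstract contrasts with its structure-aware method can be illustrated with a minimal conjugate-Bayes sketch, treating the whole system as a black box. The Beta prior and demand counts below are illustrative assumptions, not figures from the paper:

```python
# Minimal sketch of black-box Bayesian reliability inference: a Beta prior on
# the system's probability of failure on demand (pfd) is updated with the
# outcome of n statistically representative demands. Prior parameters and
# test counts are illustrative only.

def posterior_pfd(a, b, n_demands, n_failures):
    """Beta(a, b) prior + binomial likelihood -> Beta posterior; return its mean."""
    a_post = a + n_failures
    b_post = b + (n_demands - n_failures)
    return a_post / (a_post + b_post)

# Example: weakly informative Beta(1, 1) prior, 1000 failure-free demands.
mean_pfd = posterior_pfd(1, 1, 1000, 0)
```

The paper's point is precisely that this treatment ignores the two-channel structure; its contribution is an inference procedure whose prior is expressed over the parameters of the fault-tolerant architecture rather than over a single system-level pfd.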
The Development and Validation of the Healthcare Professional Humanization Scale (HUMAS) for Nursing
A Posteriori Probabilistic Bounds of Convex Scenario Programs with Validation Tests
Scenario programs have established themselves as efficient tools for decision-making under uncertainty. To assess the quality of scenario-based solutions a posteriori, validation tests based on Bernoulli trials have been widely adopted in practice. However, to reach a theoretically reliable judgement of risk, one typically needs to collect massive validation samples. In this work, we propose new a posteriori bounds for convex scenario programs with validation tests, which depend on both the realizations of support constraints and performance on out-of-sample validation data. The proposed bounds enjoy wide generality, in that many existing theoretical results can be incorporated as particular cases. To facilitate practical use, a systematic approach for parameterizing a posteriori probability bounds is also developed, which is shown to possess a variety of desirable properties allowing for easy implementation and clear interpretation. By synthesizing comprehensive information about support constraints and validation tests, improved risk evaluation can be achieved for randomized solutions in comparison with existing a posteriori bounds. Case studies on controller design of aircraft lateral motion are presented to validate the effectiveness of the proposed bounds
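The Bernoulli-trial validation test the abstract refers to can be sketched for the simplest case: if none of N fresh validation samples violates the constraints, the classical (Clopper-Pearson) upper confidence bound on the violation probability has a closed form. The sample size and confidence level below are illustrative:

```python
# Sketch of an a posteriori validation test for a scenario solution: draw
# N fresh samples, count constraint violations; for zero violations the
# exact binomial upper confidence bound on the violation probability is
# 1 - alpha**(1/N). Figures are illustrative only.

def risk_upper_bound_no_violations(n_samples, confidence):
    """Upper confidence bound on the violation probability when 0 of
    n_samples out-of-sample validation trials violate the constraints."""
    alpha = 1.0 - confidence
    return 1.0 - alpha ** (1.0 / n_samples)

# 10,000 violation-free validation samples at 99% confidence:
eps = risk_upper_bound_no_violations(10_000, 0.99)  # ~4.6e-4
```

That roughly 10,000 samples are needed to certify a risk of a few parts in ten thousand illustrates the abstract's motivation: theoretically reliable risk judgements from validation alone demand massive sample sets.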
The 'frontal lobe' project: A double-blind, randomized controlled study of the effectiveness of higher level driving skills training to improve frontal lobe (executive) function related driving performance in young drivers
The current study was undertaken to evaluate the effectiveness of higher level skills training on the safe driving behaviour of 36 teenage drivers. The participants, who attended the Driver Training Research camp in Taupo (NZ) over a two-week period, were 16 to 17 years old and held a valid restricted driver licence. The study focused on four main aims. Firstly, the behavioural characteristics of the sample and their attitudes to risk taking and driving were examined. Results showed that speeding was the most anticipated driving violation, and that high levels of confidence were associated with a higher number of crashes and a greater propensity for risk taking. Many participants, often male, also rated their driving skills as superior to others' and thought they would be less likely than others to be involved in an accident. Secondly, the relationship between driving performance and executive functioning, general ability and sustained attention was evaluated. Overall, better driving performance and more accurate self-evaluation of driving performance were related to higher levels of executive function, in particular working memory and cognitive switching. In addition, higher general ability and a greater ability to sustain attention were also linked to better performance on the driving-related assessments. The third focus of this study was to compare the effects of higher level and vehicle handling skills training on driving performance, confidence levels and attitudes to risk. While both types of training improved direction control, speed choice and visual search, along with the number of hazards detected and actions in relation to hazards, a statistically significant improvement in visual search was seen only after higher level skills training. Vehicle handling skills training significantly improved direction control and speed choice. In addition, after the higher level driving skills training, participants' confidence in their driving skills was significantly lowered and their attitudes to speeding, overtaking and close following had improved significantly. The final aspect of this study was to examine the effects of the training over the following six-month period, based on self-reported driving behaviour. The response rate of participants, however, was not sufficient to reach any meaningful conclusion on long-term training effects. A pilot study using GPS-based data trackers to assess post-training driving behaviour revealed some promising results for future driver training evaluation studies. The overall implications of the results are discussed in relation to improving the safety of young drivers in New Zealand
Validation of Ultrahigh Dependability for Software-Based Systems
Modern society depends on computers for a number of critical tasks in which failure can have very high costs. As a consequence, high levels of dependability (reliability, safety, etc.) are required from such computers, including their software. Whenever a quantitative approach to risk is adopted, these requirements must be stated in quantitative terms, and a rigorous demonstration of their being attained is necessary. For software used in the most critical roles, such demonstrations are not usually supplied. The fact is that the dependability requirements often lie near the limit of the current state of the art, or beyond, in terms not only of the ability to satisfy them, but also, and more often, of the ability to demonstrate that they are satisfied in the individual operational products (validation). We discuss reasons why such demonstrations cannot usually be provided with the means available: reliability growth models, testing with stable reliability, structural dependability modelling, as well as more informal arguments based on good engineering practice. We state some rigorous arguments about the limits of what can be validated with each of such means. Combining evidence from these different sources would seem to raise the levels that can be validated; yet this improvement is not such as to solve the problem. It appears that engineering practice must take into account the fact that no solution exists, at present, for the validation of ultra-high dependability in systems relying on complex software
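The abstract's argument about the limits of 'testing with stable reliability' can be made concrete: the number of failure-free demands needed to support a pfd claim at a given confidence level grows as the reciprocal of the claimed pfd. A small sketch, with illustrative claim and confidence figures:

```python
# Sketch of a reliability demonstration test calculation: observing n
# failure-free demands supports pfd <= p at confidence 1 - alpha when
# (1 - p)**n <= alpha, i.e. n >= ln(alpha) / ln(1 - p).
# The target pfd and confidence below are illustrative.
import math

def demands_needed(pfd_claim, confidence):
    """Failure-free demands needed to support pfd <= pfd_claim
    at the given confidence level."""
    alpha = 1.0 - confidence
    return math.ceil(math.log(alpha) / math.log(1.0 - pfd_claim))

# An ultra-high dependability claim of pfd <= 1e-9 at 99% confidence:
n = demands_needed(1e-9, 0.99)  # ~4.6 billion failure-free demands
```

The billions of demands required for an ultra-high claim, each under operationally representative conditions, is the quantitative core of the abstract's conclusion that testing alone cannot validate such requirements.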
Reasoning about the Reliability of Diverse Two-Channel Systems in which One Channel is "Possibly Perfect"
This paper considers the problem of reasoning about the reliability of fault-tolerant systems with two "channels" (i.e., components) of which one, A, supports only a claim of reliability, while the other, B, by virtue of extreme simplicity and extensive analysis, supports a plausible claim of "perfection." We begin with the case where either channel can bring the system to a safe state. We show that, conditional upon knowing pA (the probability that A fails on a randomly selected demand) and pB (the probability that channel B is imperfect), a conservative bound on the probability that the system fails on a randomly selected demand is simply pA.pB. That is, there is conditional independence between the events "A fails" and "B is imperfect." The second step of the reasoning involves epistemic uncertainty about (pA, pB) and we show that under quite plausible assumptions, a conservative bound on system pfd can be constructed from point estimates for just three parameters. We discuss the feasibility of establishing credible estimates for these parameters. We extend our analysis from faults of omission to those of commission, and then combine these to yield an analysis for monitored architectures of a kind proposed for aircraft
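The paper's first result, the conservative bound pA.pB under known aleatory parameters, can be tried out numerically. The channel figures below are illustrative, not taken from the paper:

```python
# Sketch of the conservative bound for a diverse two-channel (1oo2)
# protection system: pA is channel A's probability of failure on demand,
# pB the probability that the "possibly perfect" channel B is imperfect.
# Conditional on knowing both, the bound on system pfd is their product.
# Figures are illustrative only.

def system_pfd_bound(p_a_fails, p_b_imperfect):
    """Conservative bound on system pfd, given the two aleatory parameters."""
    return p_a_fails * p_b_imperfect

bound = system_pfd_bound(1e-4, 1e-3)  # 1e-7
```

The force of the result is that this product needs no independence assumption about channel *failures*: "A fails" and "B is imperfect" are conditionally independent by construction.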
On the use of testability measures for dependability assessment
Program 'testability' is, informally, the probability that a program will fail under test if it contains at least one fault. When a dependability assessment has to be derived from the observation of a series of failure-free test executions (a common need for software subject to 'ultra-high reliability' requirements), measures of testability can, in theory, be used to draw inferences on program correctness. We rigorously investigate the concept of testability and its use in dependability assessment, criticizing, and improving on, previously published results. We give a general descriptive model of program execution and testing, on which the different measures of interest can be defined. We propose a more precise definition of program testability than that given by other authors, and discuss how to increase testing effectiveness without impairing program reliability in operation. We then study the mathematics of using testability to estimate, from test results, the probability of program correctness and the probability of failures. To derive the probability of program correctness, we use a Bayesian inference procedure and argue that this is more useful than deriving a classical 'confidence level'. We also show that high testability is not an unconditionally desirable property for a program. In particular, for programs complex enough that they are unlikely to be completely fault-free, increasing testability may produce a program which will be less trustworthy, even after successful testing
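The Bayesian use of testability described above can be sketched in its simplest form: if a faulty program fails each independent test with probability equal to its testability, Bayes' rule gives the posterior probability of correctness after n failure-free tests. This is a simplified reading of the model, and all parameter values are illustrative:

```python
# Sketch of testability-based Bayesian inference of program correctness:
# a correct program never fails; a faulty one survives each test with
# probability (1 - testability). All parameter values are illustrative.

def p_correct_posterior(prior_correct, testability, n_tests):
    """P(program correct | n failure-free tests)."""
    p = prior_correct
    survive_if_faulty = (1.0 - testability) ** n_tests
    return p / (p + (1.0 - p) * survive_if_faulty)

# Prior belief 50% correct, testability 0.01, 1000 failure-free tests:
post = p_correct_posterior(0.5, 0.01, 1000)
```

The formula also exposes the paper's caveat: raising testability sharpens the inference from failure-free tests, but if the design change also makes residual faults more likely, the net effect on trustworthiness can be negative.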
Consistency of metabolic responses and appetite sensations under postabsorptive and postprandial conditions
The present study aimed to investigate the reliability of metabolic and subjective appetite responses under fasted conditions and following consumption of a cereal-based breakfast. Twelve healthy, physically active males completed two postabsorptive (PA) and two postprandial (PP) trials in a randomised order. In the PP trials a cereal-based breakfast providing 1859 kJ of energy was consumed. Expired gas samples were used to estimate energy expenditure and fat oxidation, and 100 mm visual analogue scales were used to determine appetite sensations at baseline and every 30 min for 120 min. Reliability was assessed using limits of agreement, coefficient of variation (CV), intraclass correlation coefficient and 95% confidence limits of typical error. Over the 120 min period of the PP trial, the limits of agreement and typical error were 292.0 and 105.5 kJ for total energy expenditure, 9.3 and 3.4 g for total fat oxidation, and 22.9 and 8.3 mm for time-averaged AUC for hunger sensations, respectively. The reliability of energy expenditure and appetite in the 2 h response to a cereal-based breakfast suggests that, to be meaningful, an intervention requires a 211 kJ difference in total postprandial energy expenditure and a 16.6 mm difference in time-averaged hunger AUC; fat oxidation would require a 6.7 g difference, a measure which may not be sensitive to most meal manipulations
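The limits-of-agreement and typical-error statistics reported above follow the standard Bland-Altman-style calculation on paired repeated trials; a minimal sketch with hypothetical measurements:

```python
# Sketch of test-retest reliability statistics on two repeated trials:
# 95% limits of agreement = 1.96 * SD of the paired differences, and
# typical error = SD of differences / sqrt(2). Data are hypothetical.
import math

def limits_of_agreement(trial1, trial2):
    """Return (95% limits of agreement, typical error) for paired trials."""
    diffs = [a - b for a, b in zip(trial1, trial2)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    return 1.96 * sd_d, sd_d / math.sqrt(2)
```

Applied to, say, two trials of total energy expenditure per participant, these two numbers are exactly the 292.0 kJ and 105.5 kJ style of figures quoted in the abstract.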
Quality assurance of rectal cancer diagnosis and treatment - phase 3 : statistical methods to benchmark centres on a set of quality indicators
In 2004, the Belgian Section for Colorectal Surgery, a section of the Royal Belgian Society for Surgery, decided to start PROCARE (PROject on CAncer of the REctum), a multidisciplinary, profession-driven and decentralized project whose main objectives were to reduce diagnostic and therapeutic variability and to improve outcomes in patients with rectal cancer. All medical specialties involved in the care of rectal cancer established a multidisciplinary steering group in 2005. They agreed to approach the stated goal by means of treatment standardization through guidelines, implementation of these guidelines, and quality assurance through registration and feedback.
In 2007, the PROCARE guidelines were updated (Procare Phase I, KCE report 69). In 2008, a set of 40 process and outcome quality of care indicators (QCI) was developed and organized into 8 domains of care: general, diagnosis/staging, neoadjuvant treatment, surgery, adjuvant treatment, palliative treatment, follow-up and histopathologic examination. These QCIs were tested on the prospective PROCARE database and on an administrative (claims) database (Procare Phase II, KCE report 81). Afterwards, 4 QCIs were added by the PROCARE group.
Centres have been receiving feedback from the PROCARE registry on these QCIs, with a description of the distribution of the unadjusted centre-averaged observed measures and the centre's position therein. To optimize this feedback, centres should ideally be informed of their risk-adjusted outcomes and be given some benchmarks. The PROCARE Phase III study is devoted to developing a methodology to achieve this feedback.