Assessing the Risk due to Software Faults: Estimates of Failure Rate versus Evidence of Perfection.
In the debate over the assessment of software reliability (or safety), as applied to critical software, two extreme positions can be discerned: the "statistical" position, which requires that claims of reliability be supported by statistical inference from realistic testing or operation, and the "perfectionist" position, which requires convincing indications that the software is free from defects. These two positions naturally lead to requiring different kinds of supporting evidence, and indeed to stating the dependability requirements in different ways, not allowing any direct comparison. There is often confusion about the relationship between statements about software failure rates and statements about software correctness, and about which evidence can support either kind of statement. This note clarifies the meaning of the two kinds of statement and how they relate to the probability of failure-free operation, and discusses their practical merits, especially for high required reliability or safety.
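The link between a failure-rate claim and the probability of failure-free operation can be illustrated with a minimal sketch, assuming a constant failure rate (an exponential survival model); the rate and mission time used below are illustrative values, not taken from the paper.

```python
import math

def p_failure_free(failure_rate_per_hour, hours):
    """P(no failure within 'hours'), assuming a constant failure rate."""
    return math.exp(-failure_rate_per_hour * hours)

# A claim of 1e-4 failures/hour over a 10-hour mission:
print(p_failure_free(1e-4, 10))  # just under 1 (about 0.999)

# A genuinely fault-free program has rate 0, so the probability is exactly 1,
# which is why "perfectionist" evidence supports a different kind of claim:
print(p_failure_free(0.0, 10))
```

The contrast in the second call is the crux of the note: a correctness claim implies failure-free operation for any mission length, whereas a failure-rate claim degrades with time.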
On the use of testability measures for dependability assessment
Program "testability" is, informally, the probability that a program will fail under test if it contains at least one fault. When a dependability assessment has to be derived from the observation of a series of failure-free test executions (a common need for software subject to "ultra-high reliability" requirements), measures of testability can, in theory, be used to draw inferences on program correctness. We rigorously investigate the concept of testability and its use in dependability assessment, criticizing, and improving on, previously published results. We give a general descriptive model of program execution and testing, on which the different measures of interest can be defined. We propose a more precise definition of program testability than that given by other authors, and discuss how to increase testing effectiveness without impairing program reliability in operation. We then study the mathematics of using testability to estimate, from test results, the probability of program correctness and the probability of failures. To derive the probability of program correctness, we use a Bayesian inference procedure and argue that this is more useful than deriving a classical "confidence level". We also show that high testability is not an unconditionally desirable property for a program. In particular, for programs complex enough that they are unlikely to be completely fault-free, increasing testability may produce a program which will be less trustworthy, even after successful testing.
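The Bayesian inference step described above can be sketched under a simple two-state model (the program is either correct or contains a fault, and tests are independent); the prior and testability values below are illustrative assumptions.

```python
def posterior_correct(prior_correct, testability, n_tests):
    """
    Posterior probability that the program is correct after n failure-free
    tests, where 'testability' = P(a test fails | program contains a fault).
    Assumes independent tests and a two-state (correct / faulty) model.
    """
    p = prior_correct
    # Probability that a faulty program survives all n tests:
    survive_if_faulty = (1.0 - testability) ** n_tests
    return p / (p + (1.0 - p) * survive_if_faulty)

# With prior 0.5 and testability 0.01, 1000 passing tests push the
# posterior probability of correctness very close to 1:
print(posterior_correct(0.5, 0.01, 1000))
```

The formula also shows the paper's caveat numerically: if testability is low, failure-free tests carry little evidence (with testability 0 the posterior equals the prior), yet raising testability in a program that almost surely contains faults changes what "passing tests" means rather than making the program safer.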
Acceptance Criteria for Critical Software Based on Testability Estimates and Test Results
Testability is defined as the probability that a program will fail a test, conditional on the program containing some fault. In this paper, we show that statements about the testability of a program can be more simply described in terms of assumptions on the probability distribution of the failure intensity of the program. We can thus state general acceptance conditions in clear mathematical terms using Bayesian inference. We develop two scenarios: one for software for which the reliability requirement is that the software be completely fault-free, and another for requirements stated as an upper bound on the acceptable failure probability.
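An acceptance condition of the second kind (an upper bound on the failure probability) can be sketched as follows; the uniform Beta(1,1) prior on the probability of failure per demand and the numeric thresholds are illustrative assumptions chosen because they give a closed-form posterior, not the paper's actual prior.

```python
def accept(n_failure_free, pfd_bound, confidence):
    """
    Acceptance test under a uniform Beta(1,1) prior on the probability of
    failure per demand (pfd). After n failure-free demands the posterior is
    Beta(1, n+1), whose CDF has the closed form 1 - (1 - b)**(n + 1).
    Accept when P(pfd <= pfd_bound) >= confidence.
    """
    posterior_cdf = 1.0 - (1.0 - pfd_bound) ** (n_failure_free + 1)
    return posterior_cdf >= confidence

# Claiming pfd <= 1e-4 at 99% confidence needs tens of thousands of
# failure-free demands under this prior:
print(accept(47000, 1e-4, 0.99))
print(accept(1000, 1e-4, 0.99))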
Software fault-freeness and reliability predictions
Many software development practices aim at ensuring that software is correct, or fault-free. In safety-critical applications, requirements are in terms of probabilities of certain behaviours, e.g. as associated with the Safety Integrity Levels of IEC 61508. The two forms of reasoning - about evidence of correctness and about probabilities of certain failures - are rarely brought together explicitly. The desirability of using claims of correctness has been argued by many authors, but has not been taken up in practice. We address how to combine evidence concerning probability of failure with evidence pertaining to likelihood of fault-freeness, in a Bayesian framework. We present novel results that make this approach practical, by guaranteeing reliability predictions that are conservative (err on the side of pessimism), despite the difficulty of stating prior probability distributions for reliability parameters. This approach seems suitable for practical application to the assessment of certain classes of safety-critical systems.
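The combination of the two kinds of evidence can be sketched with a deliberately pessimistic two-state model: with probability p the program is fault-free (so it never fails), and otherwise it is assigned the worst-case probability of failure per demand consistent with the other evidence. The numbers below are illustrative assumptions, and this is a simplification of the paper's conservative Bayesian results, not a reproduction of them.

```python
def conservative_pfd_bound(p_fault_free, worst_case_pfd):
    """
    Pessimistic bound on the probability of failure per demand: the program
    is fault-free with probability p, otherwise assume the worst-case pfd.
    """
    return (1.0 - p_fault_free) * worst_case_pfd

def p_survive_n_demands(p_fault_free, worst_case_pfd, n):
    """Lower bound on surviving n independent demands under the same model."""
    return p_fault_free + (1.0 - p_fault_free) * (1.0 - worst_case_pfd) ** n

print(conservative_pfd_bound(0.9, 1e-3))      # 1e-4
print(p_survive_n_demands(0.9, 1e-3, 10000))  # stays near 0.9 as n grows
```

The second function shows why fault-freeness evidence matters for long missions: the survival probability never drops below p, whereas a pure failure-rate claim decays towards zero with n.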
Indicators for urban quality evaluation at district scale and relationships with health and wellness perception
The paper relates to research aimed at better defining urban quality and sustainability at the district scale (4,000-10,000 inhabitants), with specific reference to European towns and settlements. An innovative set of 72 indicators has been developed, starting from and taking into account the existing literature, both in terms of individual indicators and sets of indicators (OECD, UN, Agenda 21, and existing European databases such as CRISP). Four "thematic" areas have been defined, dealing with architectural quality, accessibility, environment and services; within each of these areas, macro-indicators and micro-indicators have been defined. The aim is to translate something usually considered subjective into something "objective", finally expressed as a number (0-100). Micro-indicators and macro-indicators are weighted by means of a mathematical method based on symmetric matrices, so that the different areas are correctly balanced. The indicators are both qualitative and quantitative, so they do not refer only to urban planning procedures. The method has already been successfully applied to some Italian districts in towns such as Lodi, Genova and Milano. The set of indicators was also needed for work within a multidisciplinary team that has included engineers, architects and planners as well as medical doctors. Indeed, the results in terms of urban quality have been compared with medical results concerning health and wellness perception by the inhabitants (using the internationally recognized SF-36 questionnaire), finding (non-linear) relationships between urban quality and the inhabitants' perception of well-being. The results of this research can be used by designers to better define design strategies according to users' wellness, and by municipalities or public authorities to evaluate ex post the results of design activities.
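The kind of weighted aggregation described above can be sketched as follows; the pairwise-comparison matrix, the three indicator names, and the row-averaging rule are all illustrative assumptions standing in for the paper's actual symmetric-matrix method.

```python
def weights_from_pairwise(matrix):
    """
    Derive normalized weights from a symmetric pairwise-comparison matrix
    (matrix[i][j] = relative importance of indicator i vs indicator j)
    by averaging each row and normalizing. Illustrative only.
    """
    row_means = [sum(row) / len(row) for row in matrix]
    total = sum(row_means)
    return [m / total for m in row_means]

def district_score(indicator_scores, weights):
    """Weighted average of indicator scores, each on a 0-100 scale."""
    return sum(s * w for s, w in zip(indicator_scores, weights))

# Three hypothetical indicators: architectural quality, accessibility, services
pairwise = [[1, 2, 4],
            [2, 1, 2],   # symmetric: judgments mirror across the diagonal
            [4, 2, 1]]

w = weights_from_pairwise(pairwise)
print(district_score([70, 55, 80], w))  # a single 0-100 district value
```

The point of the symmetric matrix is that each importance judgment is stated once and applies in both directions, which keeps the balance between areas consistent.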
On the measurement of residual stresses with the hole-drilling method: computation methods for full-field experimental techniques
The hole-drilling technique is a well-known experimental method for residual stress investigation. It is usually used in combination with electrical strain gauges, but there is no reason to enforce this choice, and other approaches are possible. In particular, some optical measurement techniques (grating interferometry, speckle interferometry, holographic interferometry, shearography) can be used to advantage. Since all these optical techniques provide full-field information, it becomes important to use their data properly in order to increase the robustness and reliability of the analysis. In this work, various well-known approaches to this problem are investigated using a known displacement field as a reference. In this way it is possible to identify the best-performing algorithm in terms of robustness and reliability.
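Fitting full-field data against a known reference displacement field can be sketched with a one-parameter least-squares fit; the synthetic fields below and the single-amplitude model are illustrative assumptions standing in for the multi-parameter stress fit evaluated in the paper.

```python
def fit_amplitude(u_measured, u_reference):
    """
    Least-squares amplitude of a known reference displacement field within
    noisy full-field data: minimizes sum((u_meas - a * u_ref)**2) over a.
    Closed form: a = <u_meas, u_ref> / <u_ref, u_ref>.
    """
    num = sum(m * r for m, r in zip(u_measured, u_reference))
    den = sum(r * r for r in u_reference)
    return num / den

# Synthetic check: the reference field scaled by 2.5 plus small "noise".
ref = [0.1 * i for i in range(100)]
meas = [2.5 * r + ((-1) ** i) * 1e-3 for i, r in enumerate(ref)]
print(fit_amplitude(meas, ref))  # close to 2.5
```

Using the full field in one overdetermined fit, instead of a few point readings, is what gives the optical techniques their robustness: the noise averages out across thousands of pixels.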
Evocative gene-environment correlation between genetic risk for schizophrenia and bullying victimization
Bullying exposure concerns over 10% of adolescents in Europe. Moreover, bullying victimization is heritable, and victims are liable to psychotic symptoms, partly because of shared heritability with psychosis. The genetic component of bullying victimization has been proposed to involve the social reactions elicited by victims, a mechanism called "evocative gene-environment correlation".
We hypothesized that genetic risk for schizophrenia, a heritable disease also associated with social stress during childhood and adolescence, is related to social experiences during adolescence and is involved in the risk of developing psychotic symptoms. We studied 908 individuals of the TRAILS sample and found that 13-14-year-old adolescents with greater genetic risk for schizophrenia are more exposed to bullying, assessed via peer nomination scores, than their peers with lower genetic risk. Importantly, bullying victimization mediated the path from genetic risk to the frequency of psychotic symptoms about three years later. These findings provide evidence of a previously unreported form of gene-environment interplay that may be a mechanism of risk for psychosis and schizophrenia. To the extent that the translation of genetic risk into clinical symptoms is mediated by environmental risk factors, this evidence supports mental health prevention aimed at counteracting bullying victimization in vulnerable individuals.
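The mediation logic described above (genetic risk leads to victimization, which leads to symptoms) can be illustrated with a minimal product-of-coefficients sketch on synthetic data; the variables, effect sizes, and the use of simple unadjusted regressions are illustrative assumptions, not the study's actual analysis, which would adjust for covariates and the direct path.

```python
def slope(x, y):
    """Ordinary least-squares slope of y on x (single predictor)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

# Synthetic data: victimization tracks genetic risk, symptoms track
# victimization; small deterministic "noise" stands in for sampling error.
risk = [i / 10 for i in range(50)]
bully = [0.5 * r + 0.05 * ((-1) ** i) for i, r in enumerate(risk)]
symp = [0.4 * b + 0.05 * ((-1) ** (i // 2)) for i, b in enumerate(bully)]

a = slope(risk, bully)   # path: genetic risk -> victimization
b = slope(bully, symp)   # path: victimization -> symptoms
print(a * b)             # indirect (mediated) effect, near 0.5 * 0.4 = 0.2
```

A near-zero product of the two paths would argue against mediation; here the built-in effects make the indirect effect clearly non-zero.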
A new phase unwrapping algorithm based on competitive region growing
The phase unwrapping problem arises whenever one has to analyse experimental data acquired by means of techniques based on periodic phenomena, from opto-interferometric techniques to some geological or medical ones. This paper presents a new phase unwrapping algorithm based on competitive region growing: starting from an equipotential condition, it makes a region absorb its neighbours by means of a quality parameter based on the extension and coherency of the regions. After a short description of the phase unwrapping problem and of some of the most widely used techniques, the paper describes the working principle of the algorithm and its implementation details. The paper ends with some examples of application of the algorithm to synthetic images.
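The problem the algorithm addresses can be shown with a much simpler 1-D path-following unwrap: whenever the jump between neighbouring samples exceeds pi, a multiple of 2*pi is restored. This sketch is not the paper's region-growing method; it is the naive baseline whose noise sensitivity in 2-D motivates quality-guided approaches like the one described.

```python
import math

def unwrap_1d(wrapped):
    """
    Path-following 1-D phase unwrap: remove 2*pi jumps between neighbours.
    Works only when the true phase changes by less than pi per sample.
    """
    out = [wrapped[0]]
    for w in wrapped[1:]:
        d = w - out[-1]
        d -= 2 * math.pi * round(d / (2 * math.pi))  # wrap step into (-pi, pi]
        out.append(out[-1] + d)
    return out

# A linear phase ramp wrapped into (-pi, pi] is recovered up to a constant:
true_phase = [0.5 * i for i in range(20)]
wrapped = [math.atan2(math.sin(t), math.cos(t)) for t in true_phase]
print(unwrap_1d(wrapped))
```

In 2-D the unwrapping path matters, and a single noisy pixel can propagate a 2*pi error across the whole image; ranking regions by extension and coherency, as the paper proposes, is one way to keep unreliable pixels from deciding the path.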