Robust Dynamic Selection of Tested Modules in Software Testing for Maximizing Delivered Reliability
Software testing aims to improve the delivered reliability for the users.
Delivered reliability is the reliability of using the software after it is
delivered to the users. Usually the software consists of many modules. Thus,
the delivered reliability is dependent on the operational profile which
specifies how the users will use these modules as well as the defect number
remaining in each module. Therefore, a good testing policy should take the
operational profile into account and dynamically select tested modules
according to the current state of the software during the testing process. This
paper discusses how to dynamically select tested modules in order to maximize
delivered reliability by formulating the selection problem as a dynamic
programming problem. As the testing process is performed only once, risk must
be considered during the testing process, which is described by the tester's
utility function in this paper. Moreover, since the tester usually has no
accurate estimate of the operational profile, we employ a robust optimization
technique to analyze the selection problem in the worst case, given an
uncertainty set of operational profiles. By numerical examples, we show the
necessity of maximizing delivered reliability directly and using robust
optimization technique when the tester has no clear idea of the operational
profile. Moreover, it is shown that the risk-averse behavior of the tester has
a major influence on the delivered reliability. Comment: 19 pages, 4 figures
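The dynamic programming formulation can be illustrated with a deliberately small sketch. Everything below (two modules, per-test detection probabilities, and a profile-weighted reliability measure) is an assumed toy model for illustration, not the paper's exact formulation:

```python
from functools import lru_cache

# Toy model (assumptions, not the paper's formulation):
# P[i]: operational profile, probability a user invokes module i
# Q[i]: probability that one test of module i finds a defect
# R[i]: per-defect survival factor, so module i contributes R[i]**d_i
P = (0.7, 0.3)
Q = (0.5, 0.5)
R = (0.9, 0.9)

def reliability(defects):
    """Profile-weighted delivered reliability for given defect counts."""
    return sum(p * r ** d for p, r, d in zip(P, R, defects))

@lru_cache(maxsize=None)
def best(defects, budget):
    """Max expected delivered reliability with `budget` tests remaining,
    choosing which module to test at each step (the DP recursion)."""
    if budget == 0:
        return reliability(defects)
    values = []
    for i, d in enumerate(defects):
        found = list(defects)
        if d > 0:
            found[i] -= 1  # a detected defect is removed
        values.append(Q[i] * best(tuple(found), budget - 1)
                      + (1 - Q[i]) * best(defects, budget - 1))
    return max(values)

print(best((3, 2), 4))  # expected reliability after 4 optimally chosen tests
```

A robust variant along the abstract's lines would wrap this in a minimization over candidate profile vectors P from the uncertainty set, and a risk-averse variant would replace the plain expectation with a utility-weighted one.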
Worst Case Reliability Prediction Based on a Prior Estimate of Residual Defects
In this paper we extend an earlier worst-case bound reliability theory to derive a worst-case reliability function R(t), which gives the worst-case probability of surviving a further time t, given an estimate N of the residual defects in the software and a prior test time T. The earlier theory and its extension are presented, and the paper also considers the case where there is a low probability of any defect existing in the program. For the "fractional defect" case, there can be a high probability of surviving any subsequent time t. The implications of the theory are discussed and compared with alternative reliability models.
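The flavor of such a worst-case argument can be sketched for a single defect with an unknown constant failure rate lam; this is an illustrative simplification, not the paper's N-defect theory:

```python
import math

def fail_prob(lam, T, t):
    """P(the defect survives a test of length T, then fails within a
    further time t), for a constant failure rate lam."""
    return math.exp(-lam * T) * (1.0 - math.exp(-lam * t))

def worst_case(T, t, n_grid=200_000):
    """Worst-case failure probability: maximize over the unknown rate
    with a crude grid search (an analytic maximization also works)."""
    lam_max = 50.0 / T  # beyond this, exp(-lam * T) is negligible
    return max(fail_prob(k * lam_max / n_grid, T, t)
               for k in range(1, n_grid + 1))

# Worst-case survival probability over a further period t equal to T:
print(1.0 - worst_case(T=1.0, t=1.0))
```

For t = T the maximum of exp(-lam*T)*(1 - exp(-lam*t)) is exactly 1/4, attained at lam = ln(2)/T, so in this single-defect sketch no amount of prior testing can guarantee better than 0.75 survival over an equal further period.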
DeSyRe: on-Demand System Reliability
The DeSyRe project builds on-demand adaptive and reliable Systems-on-Chips (SoCs). As fabrication technology scales down, chips are becoming less reliable, thereby incurring increased power and performance costs for fault tolerance. To make matters worse, power density is becoming a significant limiting factor in SoC design, in general. In the face of such changes in the technological landscape, current solutions for fault tolerance are expected to introduce excessive overheads in future systems. Moreover, attempting to design and manufacture a totally defect and fault-free system, would impact heavily, even prohibitively, the design, manufacturing, and testing costs, as well as the system performance and power consumption. In this context, DeSyRe delivers a new generation of systems that are reliable by design at well-balanced power, performance, and design costs. In our attempt to reduce the overheads of fault-tolerance, only a small fraction of the chip is built to be fault-free. This fault-free part is then employed to manage the remaining fault-prone resources of the SoC. The DeSyRe framework is applied to two medical systems with high safety requirements (measured using the IEC 61508 functional safety standard) and tight power and performance constraints
Simulation of radiation-induced defects
Mainly due to their outstanding performance, position-sensitive silicon
detectors are widely used in the tracking systems of High Energy Physics
experiments such as the ALICE, ATLAS, CMS and LHCb at LHC, the world's largest
particle physics accelerator at CERN, Geneva. The foreseen upgrade of the LHC
to its high luminosity (HL) phase (HL-LHC scheduled for 2023), will enable the
use of maximal physics potential of the facility. After 10 years of operation
the expected fluence will expose the tracking systems at HL-LHC to a radiation
environment that is beyond the capacity of the present system design. Thus, for
the required upgrade of the all-silicon central trackers extensive measurements
and simulation studies for silicon sensors of different designs and materials
with sufficient radiation tolerance have been initiated within the RD50
Collaboration.
Supplementing measurements, simulations play a vital role in e.g. device
structure optimization or predicting the electric fields and trapping in the
silicon sensors. The main objective of the device simulations in the RD50
Collaboration is to develop an approach to model and predict the performance of
the irradiated silicon detectors using professional software. The first
successfully developed quantitative models for radiation damage, based on two
effective midgap levels, are able to reproduce the experimentally observed
detector characteristics like leakage current, full depletion voltage and
charge collection efficiency (CCE). Recent implementations of additional traps
at the SiO2/Si interface or close to it have expanded the scope of
simulations that agree with experiment to such surface properties as the
interstrip resistance and capacitance, and the position dependency of CCE for
strip sensors irradiated up to
n. Comment: 13 pages, 11 figures, 6 tables, 24th International Workshop on Vertex
Detectors, 1-5 June 2015, Santa Fe, New Mexico, US
Software Defect Association Mining and Defect Correction Effort Prediction
Much current software defect prediction work concentrates on the number of defects remaining in a software system. In this paper, we present association rule mining based methods to predict defect associations and defect-correction effort. This is to help developers detect software defects and assist project managers in allocating testing resources more effectively. We applied the proposed methods to the SEL defect data consisting of more than 200 projects over more than 15 years. The results show that for the defect association prediction, the accuracy is very high and the false negative rate is very low. Likewise for the defect-correction effort prediction, the accuracy for both defect isolation effort prediction and defect correction effort prediction is also high. We compared the defect-correction effort prediction method with other types of methods: PART, C4.5, and Naïve Bayes, and show that accuracy has been improved by at least 23%. We also evaluated the impact of support and confidence levels on prediction accuracy, false negative rate, false positive rate, and the number of rules. We found that higher support and confidence levels may not result in higher prediction accuracy, and a sufficient number of rules is a precondition for high prediction accuracy.
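Association rule mining with support and confidence thresholds can be sketched in a few lines. The defect categories, records, and thresholds below are made up for illustration; the paper applies more elaborate methods to the SEL data set:

```python
from itertools import combinations
from collections import Counter

# Hypothetical defect records: each set lists the defect types
# observed together in one unit (illustrative data, not SEL's).
records = [
    {"interface", "logic"},
    {"interface", "logic", "data"},
    {"interface", "data"},
    {"logic"},
    {"interface", "logic"},
]

MIN_SUPPORT = 0.4     # fraction of records containing the itemset
MIN_CONFIDENCE = 0.6  # P(consequent present | antecedent present)

n = len(records)
counts = Counter()
for r in records:
    for size in (1, 2):
        for items in combinations(sorted(r), size):
            counts[items] += 1

rules = []
for pair, c in counts.items():
    if len(pair) != 2 or c / n < MIN_SUPPORT:
        continue
    for ante, cons in (pair, pair[::-1]):
        conf = c / counts[(ante,)]
        if conf >= MIN_CONFIDENCE:
            rules.append((ante, cons, c / n, conf))

for ante, cons, sup, conf in rules:
    print(f"{ante} -> {cons}  support={sup:.2f} confidence={conf:.2f}")
```

Raising MIN_SUPPORT or MIN_CONFIDENCE prunes rules, which illustrates the paper's observation that higher thresholds can leave too few rules for accurate prediction.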
Dispersion of coupled mode-gap cavities
The dispersion of a CROW made of photonic crystal mode-gap cavities is
pronouncedly asymmetric. This asymmetry cannot be explained by the standard
tight binding model. We show that the fundamental cause of the asymmetric
dispersion is the fact that the cavity mode profile itself is dispersive, i.e.,
the mode wave function depends on the driving frequency, not the
eigenfrequency. This occurs because the photonic crystal cavity resonances do
not form a complete set. By taking into account the dispersive mode profile, we
formulate a mode coupling model that accurately describes the asymmetric
dispersion without introducing any new free parameters. Comment: 4 pages, 4 figures
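For context, the standard tight binding model mentioned above predicts a cosine band for a CROW (notation assumed here, not taken from the abstract: Omega the single-cavity resonance, kappa_1 the nearest-neighbour coupling coefficient, Lambda the lattice period):

```latex
\omega(k) = \Omega \left[ 1 + \kappa_1 \cos(k\Lambda) \right]
```

Since the cosine is even in k, this band is symmetric about the resonance, so an asymmetric dispersion cannot arise within this model; that is the gap the abstract's mode coupling model with a dispersive mode profile addresses.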
Do System Test Cases Grow Old?
Companies increasingly use either manual or automated system testing to
ensure the quality of their software products. As a system evolves and is
extended with new features the test suite also typically grows as new test
cases are added. To ensure software quality throughout this process the test
suite is continuously executed, often on a daily basis. It seems likely that
newly added tests would be more likely to fail than older tests but this has
not been investigated in any detail on large-scale, industrial software
systems. Also it is not clear which methods should be used to conduct such an
analysis. This paper proposes three main concepts that can be used to
investigate aging effects in the use and failure behavior of system test cases:
test case activation curves, test case hazard curves, and test case half-life.
To evaluate these concepts and the type of analysis they enable we apply them
on an industrial software system containing more than one million lines of
code. The data sets come from a total of 1,620 system test cases executed a
total of more than half a million times over a time period of two and a half
years. For the investigated system we find that system test cases stay active
as they age but really do grow old; they go through an infant mortality phase
with higher failure rates which then decline over time. The test case half-life
is between 5 and 12 months for the two studied data sets. Comment: Updated with nicer figs without border around the
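The hazard-curve and half-life concepts can be operationalised roughly as below. The execution log, the bin size, and the exact half-life definition are assumptions for illustration; the paper's definitions may differ:

```python
from collections import defaultdict

# Hypothetical execution log (made up, not the paper's data): tuples of
# (test_case_id, age_in_months, failed), one per execution.
log = []
fails_per_age = [4, 3, 2, 2, 1, 1]  # assumed declining failure counts
for age, n_fail in enumerate(fails_per_age):
    for run in range(10):           # 10 executions in each age bin
        log.append(("tc1", age, 1 if run < n_fail else 0))

def hazard_curve(log, max_age):
    """Pooled failure fraction per age bin: a discrete hazard estimate."""
    runs, fails = defaultdict(int), defaultdict(int)
    for _tc, age, failed in log:
        runs[age] += 1
        fails[age] += failed
    return [fails[a] / runs[a] if runs[a] else 0.0 for a in range(max_age)]

def half_life(curve):
    """First age at which the hazard falls to half its initial value
    (one possible operationalisation of 'test case half-life')."""
    target = curve[0] / 2.0
    for age, rate in enumerate(curve):
        if rate <= target:
            return age
    return None

curve = hazard_curve(log, len(fails_per_age))
print(curve, half_life(curve))
```

The declining curve in this toy log mimics the infant mortality phase the abstract reports: failure rates are highest for young test cases and fall off with age.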