Quality Assurance of Software Applications Using the In Vivo Testing Approach
Software products released into the field typically have some number of residual defects that either were not detected or could not have been detected during testing. This may be the result of flaws in the test cases themselves, incorrect assumptions made during the creation of test cases, or the infeasibility of testing the sheer number of possible configurations for a complex system; these defects may also be due to application states that were not considered during lab testing, or corrupted states that could arise due to a security violation. One approach to this problem is to continue to test these applications even after deployment, in the hope of finding any remaining flaws. In this paper, we present a testing methodology we call in vivo testing, in which tests are continuously executed in the deployment environment. We also describe a type of test we call in vivo tests that are specifically designed for use with such an approach: these tests execute within the current state of the program (rather than by creating a clean slate) without affecting or altering that state from the perspective of the end-user. We discuss the approach and the prototype testing framework for Java applications called Invite. We also provide the results of case studies that demonstrate Invite's effectiveness and efficiency.
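The core idea, executing a test from the program's current state while leaving the user-visible state untouched, can be illustrated with a small sketch. Invite itself is a Java framework; this Python stand-in (the `Counter` and `in_vivo_test` names are hypothetical, not part of Invite) tests a deep copy of the live object so the original state is never altered.

```python
import copy

class Counter:
    """Toy component whose in-memory state evolves through real use."""
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1
        return self.value

def in_vivo_test(live_obj):
    """Run a unit test from the *current* program state rather than a
    clean slate, against a deep copy so the user's state is untouched."""
    sandbox = copy.deepcopy(live_obj)     # isolate the live state
    before = sandbox.value
    after = sandbox.increment()
    return after == before + 1            # invariant checked in vivo

counter = Counter()
counter.increment()                       # normal field use mutates state
ok = in_vivo_test(counter)                # a real framework would sample
                                          # only a fraction of calls
```

After the test runs, `counter.value` is unchanged from the end-user's perspective; only the throwaway sandbox was mutated.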
An Exploratory Study of Field Failures
Field failures, that is, failures caused by faults that escape the testing phase and manifest in the field, are unavoidable. Improving verification and validation activities before deployment can identify and remove many faults in a timely manner, but not all of them, and users may still experience a number of annoying problems while using their software systems. This paper investigates the nature of field failures, to understand to what extent further improving in-house verification and validation activities can reduce the number of failures in the field, and frames the need for new approaches that operate in the field. We report the results of an analysis of the bug reports of five applications belonging to three different ecosystems, propose a taxonomy of field failures, and discuss the reasons why failures belonging to the identified classes cannot be detected at design time but must be addressed at runtime. We observe that many faults (70%) are intrinsically hard to detect at design time.
J Fluorescence
The scope of this paper is to illustrate the need for improved quality assurance in fluorometry. For this purpose, instrumental sources of error and their influence on the reliability and comparability of fluorescence data are highlighted for frequently used photoluminescence techniques, ranging from conventional macro- and microfluorometry through fluorescence microscopy and flow cytometry to microarray technology and in vivo fluorescence imaging. In particular, we discuss the need for and requirements on fluorescence standards for the characterization and performance validation of fluorescence instruments, for enhancing the comparability of fluorescence data, and for enabling quantitative fluorescence analysis. Special emphasis is given to spectral fluorescence standards and fluorescence intensity standards.
IMRT QA using machine learning: A multi-institutional validation.
Purpose: To validate a machine learning approach to Virtual intensity-modulated radiation therapy (IMRT) quality assurance (QA) for accurately predicting gamma passing rates using different measurement approaches at different institutions.
Methods: A Virtual IMRT QA framework was previously developed using a machine learning algorithm based on 498 IMRT plans, in which QA measurements were performed using diode-array detectors and a 3% local/3 mm gamma criterion with a 10% threshold at Institution 1. An independent set of 139 IMRT measurements from a different institution, Institution 2, with QA data based on portal dosimetry using the same gamma index, was used to test the mathematical framework. Only pixels with ≥10% of the maximum calibrated units (CU) or dose were included in the comparison. Plans were characterized by 90 different complexity metrics. A weighted Poisson regression with Lasso regularization was trained to predict passing rates using the complexity metrics as input.
Results: The methodology predicted passing rates within 3% accuracy for all composite plans measured using diode-array detectors at Institution 1, and within 3.5% for 120 of 139 plans using portal dosimetry measurements performed on a per-beam basis at Institution 2. The remaining 19 measurements had large areas of low CU, where portal dosimetry disagrees more strongly with the calculated dose, so these failures were expected. Such beams need further modeling in the treatment planning system to correct the under-response in low-dose regions. Important features selected by Lasso to predict gamma passing rates were: complete irradiated area outline (CIAO), jaw position, fraction of MLC leaves with gaps smaller than 20 or 5 mm, fraction of the area receiving less than 50% of the total CU, fraction of the area receiving dose from the penumbra, weighted average irregularity factor, and duty cycle.
Conclusions: We have demonstrated that Virtual IMRT QA can predict passing rates using different measurement techniques and across multiple institutions. Prediction of QA passing rates can have profound implications for the current IMRT process.
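The model class described above, a Poisson regression with an L1 (Lasso) penalty mapping plan-complexity metrics to passing rates, can be sketched as follows. This is a minimal illustration on synthetic data, not the institutions' plans; the solver (proximal gradient descent) is one common way to fit L1-penalized Poisson models, and the paper's sample weights are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 200 "plans" described by 5 "complexity metrics"
# (hypothetical values, not the study's measurements).
X = rng.normal(size=(200, 5))
true_w = np.array([0.5, 0.0, -0.3, 0.0, 0.2])   # sparse ground truth
y = rng.poisson(np.exp(X @ true_w + 3.0)).astype(float)

def soft_threshold(v, t):
    """Proximal operator of the L1 penalty (what makes Lasso sparse)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def poisson_lasso(X, y, lam=0.01, lr=1e-3, steps=8000):
    """L1-penalized Poisson regression via proximal gradient descent."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(steps):
        mu = np.exp(X @ w + b)            # Poisson mean per plan
        grad_w = X.T @ (mu - y) / n       # gradient of neg. log-likelihood
        b -= lr * np.mean(mu - y)
        w = soft_threshold(w - lr * grad_w, lr * lam)
    return w, b

w, b = poisson_lasso(X, y)                # w approximately recovers true_w
```

The L1 proximal step drives uninformative coefficients toward zero, which is how the paper's feature selection (CIAO, jaw position, etc.) falls out of the fit itself rather than a separate screening stage.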
Evaluation of Thiel cadaveric model for MRI-guided stereotactic procedures in neurosurgery
BACKGROUND: Magnetic resonance imaging (MRI)-guided deep brain stimulation (DBS) and high-frequency focused ultrasound (FUS) are emerging modalities for treating several neurological disorders of the brain. Developing reliable models to train and assess future neurosurgeons is paramount to ensure safety and adequate training. METHODS: We evaluated the use of the Thiel cadaveric model to practice MRI-guided DBS implantation and high-frequency MRI-guided FUS in the human brain. We performed three training sessions for DBS and five sonications using high-frequency MRI-guided FUS in five consecutive cadavers to assess the suitability of this model for training in stereotactic functional procedures. RESULTS: We found that the brains of these cadavers were preserved in excellent anatomical condition up to 15 months after embalming and were an excellent model to use; MRI-guided DBS implantation and FUS produced the desired lesions accurately and precisely in these cadaveric brains. CONCLUSION: Thiel cadavers provided a very good model for performing these procedures and a potential model for training and assessing the neurosurgeons of the future.
Thermal dosimetry for bladder hyperthermia treatment. An overview.
The urinary bladder is a fluid-filled organ. This makes, on the one hand, the internal surface of the bladder wall relatively easy to heat and ensures in most cases a relatively homogeneous temperature distribution; on the other hand, the variable volume, organ motion, and moving fluid cause artefacts for most non-invasive thermometry methods, and require additional effort in planning accurate thermal treatment of bladder cancer. We give an overview of the thermometry methods currently used and investigated for hyperthermia treatments of bladder cancer, and discuss their advantages and disadvantages within the context of the specific disease (muscle-invasive or non-muscle-invasive bladder cancer) and the heating technique used. The role of treatment simulation in determining the thermal dose delivered is also discussed. Generally speaking, invasive measurement methods are more accurate than non-invasive methods, but provide more limited spatial information; therefore, a combination of both is desirable, preferably supplemented by simulations. Current efforts at research and clinical centres continue to improve non-invasive thermometry methods and the reliability of treatment planning and control software. Due to the challenges in measuring temperature across the non-stationary bladder wall and surrounding tissues, more research is needed to increase our knowledge about the penetration depth and typical heating pattern of the various hyperthermia devices, in order to further improve treatments. The ability to better determine the delivered thermal dose will enable clinicians to investigate the optimal treatment parameters and, consequently, to give better controlled, and thus more reliable and effective, thermal treatments.