    Quality Assurance of Software Applications Using the In Vivo Testing Approach

    Software products released into the field typically have some number of residual defects that either were not detected or could not have been detected during testing. This may be the result of flaws in the test cases themselves, incorrect assumptions made during the creation of test cases, or the infeasibility of testing the sheer number of possible configurations for a complex system; these defects may also be due to application states that were not considered during lab testing, or corrupted states that could arise due to a security violation. One approach to this problem is to continue to test these applications even after deployment, in hopes of finding any remaining flaws. In this paper, we present a testing methodology we call in vivo testing, in which tests are continuously executed in the deployment environment. We also describe a type of test, called an in vivo test, specifically designed for use with such an approach: these tests execute within the current state of the program (rather than from a clean slate) without affecting or altering that state from the perspective of the end-user. We discuss the approach and Invite, a prototype testing framework for Java applications. We also provide the results of case studies that demonstrate Invite's effectiveness and efficiency.
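
    The core idea is that an in vivo test exercises the program's current, live state but leaves that state untouched from the end-user's point of view. As a minimal sketch of that idea (not the actual Invite API; the InVivoTest interface and the copy-based isolation below are hypothetical illustrations), such a test might validate an invariant against a copy of a live data structure:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class InVivoTestSketch {

    /** Hypothetical in vivo test: given the program's current state,
        report whether an invariant holds without modifying that state. */
    interface InVivoTest<T> {
        boolean run(T currentState);
    }

    /** Example invariant: a push followed by a pop must leave the stack's
        contents unchanged. The check runs on a copy of the live stack,
        so the state the end-user sees is never altered. */
    static final InVivoTest<Deque<Integer>> PUSH_POP_INVARIANT = liveStack -> {
        List<Integer> before = new ArrayList<>(liveStack);  // snapshot of the live contents
        Deque<Integer> copy = new ArrayDeque<>(liveStack);  // the test mutates only this copy
        copy.push(42);
        copy.pop();
        return new ArrayList<>(copy).equals(before);
    };

    public static void main(String[] args) {
        // Simulated state of a deployed application at the moment the test fires.
        Deque<Integer> liveStack = new ArrayDeque<>(List.of(3, 1, 4, 1, 5));
        List<Integer> userVisible = new ArrayList<>(liveStack);

        boolean passed = PUSH_POP_INVARIANT.run(liveStack);

        System.out.println("in vivo test passed: " + passed);
        System.out.println("user-visible state unchanged: "
                + userVisible.equals(new ArrayList<>(liveStack)));
    }
}
```

    In a framework such as Invite, the isolation would presumably be provided by the framework itself rather than by each test copying state by hand; the explicit copy here merely stands in for whatever mechanism the framework uses.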

    An Exploratory Study of Field Failures

    Field failures, that is, failures caused by faults that escape the testing phase and manifest in the field, are unavoidable. Improving verification and validation activities before deployment can identify and promptly remove many, but not all, faults, and users may still experience a number of annoying problems while using their software systems. This paper investigates the nature of field failures, to understand to what extent further improving in-house verification and validation activities can reduce the number of failures in the field, and frames the need for new approaches that operate in the field. We report the results of an analysis of the bug reports of five applications belonging to three different ecosystems, propose a taxonomy of field failures, and discuss the reasons why failures belonging to the identified classes cannot be detected at design time but must be addressed at runtime. We observe that many faults (70%) are intrinsically hard to detect at design time.

    J Fluorescence

    The scope of this paper is to illustrate the need for improved quality assurance in fluorometry. For this purpose, instrumental sources of error and their influence on the reliability and comparability of fluorescence data are highlighted for frequently used photoluminescence techniques, ranging from conventional macro- and microfluorometry through fluorescence microscopy and flow cytometry to microarray technology and in vivo fluorescence imaging. In particular, the need for and requirements on fluorescence standards for the characterization and performance validation of fluorescence instruments, for enhancing the comparability of fluorescence data, and for enabling quantitative fluorescence analysis are discussed. Special emphasis is placed on spectral fluorescence standards and fluorescence intensity standards.

    Open source health systems


    Evaluation of Thiel cadaveric model for MRI-guided stereotactic procedures in neurosurgery

    BACKGROUND: Magnetic resonance imaging (MRI)-guided deep brain stimulation (DBS) and high-frequency focused ultrasound (FUS) are emerging modalities for treating several neurological disorders of the brain. Developing reliable models to train and assess future neurosurgeons is paramount to ensure safety and adequate training. METHODS: We evaluated the use of the Thiel cadaveric model to practice MRI-guided DBS implantation and high-frequency MRI-guided FUS in the human brain. We performed three training sessions for DBS and five sonications using high-frequency MRI-guided FUS in five consecutive cadavers to assess the suitability of this model for training in stereotactic functional procedures. RESULTS: The brains of these cadavers were preserved in excellent anatomical condition up to 15 months after embalming and were an excellent model to use; MRI-guided DBS implantation and FUS produced the desired lesions accurately and precisely in these cadaveric brains. CONCLUSION: Thiel cadavers provided a very good model for performing these procedures and a potential model for training and assessing the neurosurgeons of the future.

    Thermal dosimetry for bladder hyperthermia treatment. An overview.

    The urinary bladder is a fluid-filled organ. This makes, on the one hand, the internal surface of the bladder wall relatively easy to heat and ensures in most cases a relatively homogeneous temperature distribution; on the other hand, the variable volume, organ motion, and moving fluid cause artefacts for most non-invasive thermometry methods, and require additional efforts in planning accurate thermal treatment of bladder cancer. We give an overview of the thermometry methods currently used and investigated for hyperthermia treatments of bladder cancer, and discuss their advantages and disadvantages within the context of the specific disease (muscle-invasive or non-muscle-invasive bladder cancer) and the heating technique used. The role of treatment simulation to determine the thermal dose delivered is also discussed. Generally speaking, invasive measurement methods are more accurate than non-invasive methods, but provide more limited spatial information; therefore, a combination of both is desirable, preferably supplemented by simulations. Current efforts at research and clinical centres continue to improve non-invasive thermometry methods and the reliability of treatment planning and control software. Due to the challenges in measuring temperature across the non-stationary bladder wall and surrounding tissues, more research is needed to increase our knowledge about the penetration depth and typical heating pattern of the various hyperthermia devices, in order to further improve treatments. The ability to better determine the delivered thermal dose will enable clinicians to investigate the optimal treatment parameters and, consequently, to give better controlled, thus even more reliable and effective, thermal treatments.