
    Performance Metamorphic Testing: A Proof of Concept

    Context. Performance testing is a challenging task, mainly due to the lack of test oracles, i.e. mechanisms to decide whether the observed performance of a program is acceptable or symptomatic of a bug. Metamorphic testing enables the generation of test cases in the absence of an oracle by exploiting so-called metamorphic relations between the inputs and outputs of multiple executions of the program under test. In the last two decades, metamorphic testing has been successfully used to detect functional faults in different domains; however, its applicability to performance testing remains unexplored. Objective. We propose the application of metamorphic testing to reveal performance failures. Method. We define Performance Metamorphic Relations (PMRs) as expected relations between performance measurements of multiple executions of the program under test. These relations can be turned into assertions for the automated detection of performance bugs, removing the need for complex benchmarks and domain-expert guidance. As a further benefit, PMRs can be turned into fitness functions to guide search-based techniques in the generation of test data. Results. The feasibility of the approach is illustrated through an experimental proof of concept in the context of the automated analysis of feature models. Conclusion. The results confirm the potential of metamorphic testing, in combination with search-based techniques, to automate the detection of performance bugs. Funding: Comisión Interministerial de Ciencia y Tecnología TIN2015-70560-R and TIN2015-71841; Junta de Andalucía P12-TIC-186.
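
    As an illustration of how a PMR could become an executable assertion, here is a minimal Python sketch encoding the relation "a program should not take longer on a subset of a workload than on its superset"; the program under test, the relation, and the tolerance are illustrative assumptions, not the paper's implementation.

```python
import time

def measure_runtime(program, workload, repetitions=5):
    """Return the median wall-clock runtime of program(workload)."""
    timings = []
    for _ in range(repetitions):
        start = time.perf_counter()
        program(workload)
        timings.append(time.perf_counter() - start)
    timings.sort()
    return timings[len(timings) // 2]

def sort_numbers(data):
    # Hypothetical program under test.
    return sorted(data)

# PMR: the runtime on a workload should not exceed the runtime on a
# superset of that workload by more than a noise tolerance.
small = list(range(10_000))
large = small + list(range(10_000, 50_000))

t_small = measure_runtime(sort_numbers, small)
t_large = measure_runtime(sort_numbers, large)

TOLERANCE = 0.10  # allow 10% measurement noise
assert t_small <= t_large * (1 + TOLERANCE), (
    f"PMR violated: {t_small:.4f}s on the subset vs {t_large:.4f}s "
    "on the superset suggests a performance bug"
)
```

    A violated assertion flags a potential performance bug without any benchmark baseline, and the slack in the inequality is exactly the kind of quantity a search-based generator could minimise as a fitness function.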

    Boundary Value Analysis for Input Variables with Functional Dependency

    Software today is used more widely, and in more ways, than ever before: from microwaves and vehicles to space rockets and smart cards. A programmer typically follows a defined process to build software that meets a given specification. Despite this effort, programmers sometimes make mistakes or fail to account for every case the program must handle, which is only human, and testing exists to catch those mistakes. Among the many software testing techniques is Boundary Value Analysis (BVA). This work proposes a modified version of BVA for input parameters with functional dependency, an idea derived from the interdependency of functions among the input parameters. An automated testing tool implementing the modified algorithm was built; it demonstrates the advantages of the modified algorithm over the Functional Tree approach and significantly reduces the number of test cases that exhaustive testing would otherwise require, while still covering almost every required test case and thereby increasing the system's efficiency. The method can save a product-based company considerable money and time. Generalized BVA generates 5*n test cases, where n is the number of variables; the Function Tree method generates the most of the three, n*5^(n-1); and the modified approach generates 7*n + k, where k is the number of mutants killed at each step. The number of test cases for the modified algorithm is thus significantly lower than for the Function Tree algorithm and comparable to regular BVA, while covering more functionality and features.
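
    For reference, the following minimal Python sketch implements the classic single-fault BVA scheme that yields the 5*n test cases mentioned above (five probe values per variable, all other variables held at their nominal values); the dependency-aware modification itself is not reproduced, and the example input ranges are hypothetical.

```python
def boundary_values(lo, hi):
    """The five canonical BVA probe values for an integer range [lo, hi]."""
    nominal = (lo + hi) // 2
    return [lo, lo + 1, nominal, hi - 1, hi]

def bva_test_cases(ranges):
    """Single-fault BVA: vary one variable at a time through its five
    probe values while every other variable stays at its nominal value,
    giving exactly 5*n cases for n variables."""
    nominals = [(lo + hi) // 2 for lo, hi in ranges]
    cases = []
    for i, (lo, hi) in enumerate(ranges):
        for value in boundary_values(lo, hi):
            case = list(nominals)
            case[i] = value
            cases.append(tuple(case))
    return cases

# Hypothetical triangle-classifier inputs: three sides, each in [1, 200].
cases = bva_test_cases([(1, 200)] * 3)
print(len(cases), "test cases, e.g.", cases[:3])  # 15 cases for n = 3
```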

    Effectiveness Of A Multidisciplinary Chronic Pain Management Program In A Local Pain Center

    Objectives: There has been increased recognition of the multidisciplinary approach to managing chronic pain. Chronic pain diagnoses carry a high incidence of co-morbid depression and anxiety, as well as functional disability that impacts activities of daily living. In the current study, we assessed the effectiveness of an affordable Living Life Well Pain Rehabilitation Program (LLWPRP) developed in a local outpatient chronic pain clinic. Methods: A retrospective analysis of data collected from May 2012 to May 2015, covering a total of 86 patients, was performed. The LLWPRP is a 12-week program with biweekly meetings. It combines education about pain, cognitive behavioral therapy, mindfulness training, mild exercise, peer support, and family involvement. Participants completed pre- and post-program questionnaires with standardized measures of depression (PHQ-9), anxiety (GAD-7), risk of opioid misuse (SOAPP), pain acceptance (CPAQ), treatment outcome (S-TOPS), and disability (Oswestry), as well as functional testing. Results: Participants showed statistically significant improvement in all physical functionality tests used; significant reductions in PHQ-9, GAD-7, and SOAPP scores; and significant improvements in willingness to engage in activities and pain acceptance-understanding. These improvements were independent of gender, age, and type of pain. Conclusion: Despite its limitations, our study demonstrated the effectiveness of the LLWPRP and further supports the notion of managing chronic pain using a multidisciplinary approach.

    Quantitative metrics for mutation testing

    Program mutation is the process of generating versions of a base program by applying elementary syntactic modifications; this technique has been used in program testing in a variety of applications, most notably to assess the quality of a test data set. A good test set will discover the difference between the original program and a mutant unless the mutant is semantically equivalent to the original program despite being syntactically distinct. Equivalent mutants are a major nuisance in the practice of mutation testing, because they introduce a significant amount of bias and uncertainty into the analysis of test results; indeed, mutants are useful only to the extent that they define functions distinct from the base program. Yet, despite several decades of research, the identification of equivalent mutants remains a tedious, inefficient, ineffective, and error-prone process. The approach adopted in this dissertation is to turn away from the goal of identifying individual mutants that are semantically equivalent to the base program, in favor of an approach that merely focuses on estimating their number. To this effect, the following question is considered: what makes a base program P prone to produce equivalent mutants? The position taken in this work is that what makes a program prone to generate equivalent mutants is the same property that makes a program fault tolerant, since fault tolerance is by definition the ability to maintain correct behavior despite the presence and sensitization of faults; whether these faults stem from poor design or from mutation operators does not matter. Hence if we can quantify the redundancy of a program, we should be able to use redundancy metrics to estimate its ratio of equivalent mutants (REM for short). Using redundancy metrics previously defined to reflect the state redundancy of a program, its functional redundancy, its non-injectivity, and its non-determinacy, this dissertation makes the following contributions: (1) the design and implementation of a Java compiler, using compiler generation technology, to analyze Java code and compute its redundancy metrics; (2) an empirical study on standard mutation testing benchmarks to analyze the statistical relationships between the REM of a program and its redundancy metrics; (3) the derivation of regression models to estimate the REM of a program from its compiler-generated redundancy metrics, for a variety of mutation policies; and (4) the use of the REM to address a number of mutation-related issues, including estimating the level of redundancy between non-equivalent mutants, redefining the mutation score of a test data set to take into account the possibility that mutants may be semantically equivalent to each other, and deriving a minimal set of mutants without having to analyze all pairs of mutants for equivalence. The main conclusions of this work are the following: the REM plays a very important role in the mutation analysis of a program, as it gives many useful insights into the properties of its mutants; and all the attributes that can be computed from the REM of a program are very sensitive to its exact value, hence the REM must be estimated with great precision. Consequently, the focus of future research is to revisit the Java compiler and enhance the precision of its estimation of redundancy metrics, and to revisit the regression models accordingly.
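
    As a small illustration of one listed contribution, the redefined mutation score, the Python sketch below assumes the adjustment simply discounts the estimated number of equivalent mutants (REM × total) from the denominator; the numbers and the REM value are made up, since the dissertation obtains the REM from compiler-computed redundancy metrics and regression models.

```python
def adjusted_mutation_score(killed, total_mutants, rem):
    """Mutation score discounted by the estimated ratio of equivalent
    mutants (REM): equivalent mutants cannot be killed by any test,
    so they are removed from the denominator."""
    estimated_equivalent = rem * total_mutants
    killable = total_mutants - estimated_equivalent
    if killable <= 0:
        raise ValueError("REM estimate leaves no killable mutants")
    return min(killed / killable, 1.0)

# Hypothetical numbers: 200 mutants generated, 150 killed, and a
# regression model over the redundancy metrics predicts REM = 0.12.
print(f"raw score:      {150 / 200:.3f}")                                # 0.750
print(f"adjusted score: {adjusted_mutation_score(150, 200, 0.12):.3f}")  # 0.852
```

    This also shows why the conclusions stress precision: small changes in the REM shift the denominator, and hence every attribute computed from it.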

    Unwoven Aspect Analysis

    Various languages and tools supporting advanced separation of concerns (such as aspect-oriented programming) give a software developer the ability to separate functional and non-functional programmatic intentions. Once these separate pieces of the software have been specified, the tools automatically handle the interaction points between separate modules, relieving the developer of this chore and permitting more understandable, maintainable code. Many approaches have left traditional compiler analysis and optimization until after the composition has been performed; unfortunately, analyses performed after composition cannot make use of the logical separation present in the original program. Further, for modular systems that can be configured with different sets of features, testing under every possible combination of features may be necessary to avoid bugs in production software, yet prohibitively time-consuming. To address this testing problem, we investigate a feature-aware compiler analysis that runs during composition and discovers features strongly independent of each other. When their independence can be established, the number of feature combinations that must be separately tested can be reduced. We develop this approach and discuss our implementation. We also look forward to future programming languages in two ways: we implement solutions to problems that are conceptually aspect-oriented but for which current aspect languages and tools fail, and we study these cases and consider what language designs might provide even more information to a compiler. We describe some features that such a future language might have, based on our observations of current language deficiencies and our experience with compilers for these languages.
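
    The combinatorial payoff of the analysis can be sketched as follows: if groups of features are certified never to interact, each group can be tested exhaustively on its own, replacing 2^n combinations with a sum of much smaller powers. The grouping and feature names in this Python sketch are hypothetical.

```python
from itertools import product

def combinations_to_test(feature_groups):
    """If features in different groups are known not to interact
    (e.g. via a feature-aware compiler analysis), each group can be
    tested exhaustively on its own: sum(2^|g|) configurations instead
    of 2^(total number of features)."""
    configs = []
    for group in feature_groups:
        for bits in product([False, True], repeat=len(group)):
            configs.append(dict(zip(group, bits)))
    return configs

# Hypothetical: 6 features split into two independent groups of 3.
groups = [["logging", "cache", "tls"], ["gzip", "etag", "cors"]]
configs = combinations_to_test(groups)
print(f"{len(configs)} configurations instead of {2 ** 6}")  # 16 vs 64
```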

    The Glory Program: Global Science from a Unique Spacecraft Integration

    The Glory program is an Earth and solar science mission designed to broaden the science community's knowledge of the environment. The causes and effects of global warming have become a concern in recent years, and Glory aims to contribute to the science community's knowledge base. Glory is designed for two functions: solar viewing to monitor the total solar irradiance, and observing the Earth's atmosphere for aerosol composition. The former is done with an active cavity radiometer, while the latter is accomplished with an aerosol polarimeter sensor that discerns atmospheric particles. The Glory program is managed by NASA Goddard Space Flight Center (GSFC), with Orbital Sciences in Dulles, VA as the prime contractor for the spacecraft bus, mission operations, and ground system. This paper describes some of the more unique features of the Glory program, including the integration and testing of the satellite and instruments as well as the science data processing. The spacecraft integration and test approach requires extensive analysis and additional planning to ensure existing components function successfully with the new Glory components. The science mission data analysis requires the development of mission-unique processing systems and algorithms. Science data analysis and distribution will utilize national assets at the Goddard Institute for Space Studies (GISS) and the University of Colorado's Laboratory for Atmospheric and Space Physics (LASP). The satellite was originally designed and built for the Vegetation Canopy Lidar (VCL) mission, which was terminated in the middle of integration and testing due to payload development issues. The bus was placed in secure storage in 2001 and removed from an environmentally controlled container in late 2003 to be refurbished to meet the Glory program requirements. Functional testing of all the components was done as a system at the start of the program, very different from a traditional program. The plan for Glory is to minimize changes to the spacecraft, which means the instrument designs must adhere to the existing interfaces and capabilities as much as possible. Given Glory's unique history and its potential science return, the program is of significant value to both the science community and the world. The findings Glory promises will improve our understanding of the drivers of global climate change for a minimal investment, and the program hopes to show that reuse of existing government assets can result in a lower-cost, fully successful mission.

    Neurofunctional correlates of attention rehabilitation in Parkinson's disease: an explorative study

    Research on the effectiveness of cognitive rehabilitation (CR) in Parkinson's disease (PD) is in its relative infancy, and there is currently insufficient information to support evidence-based clinical protocols. This study is aimed at testing a validated therapeutic strategy consisting of an intensive computer-based attention-training program tailored to attention deficits. We further investigated the presence of synaptic plasticity by means of functional magnetic resonance imaging (fMRI). In a randomized controlled design, we enrolled eight PD patients who underwent the CR program (Experimental group) and seven clinically and demographically matched PD patients who underwent a placebo intervention (Control group). Brain activity was assessed using an 8-min resting-state (RS) fMRI acquisition. Independent component analysis and statistical parametric mapping were used to assess the effect of CR on brain function. Significant effects were detected both at a phenotypic and at an intermediate phenotypic level. After CR, the Experimental group, compared with the Control group, showed specifically enhanced cognitive performance as assessed by the SDMT and digit span forward. RS fMRI analysis across all networks revealed two significant group (Experimental vs Control) × time (T0 vs T1) interaction effects, in the attention network (superior parietal cortex) and the central executive network (dorsolateral prefrontal cortex). We demonstrated that intensive CR tailored to the impaired abilities impacts neural plasticity and improves some aspects of the cognitive deficits of PD patients. The reported neurophysiological and behavioural effects corroborate the benefits of our therapeutic approach, which might have a reliable application in the clinical management of cognitive deficits.

    Genetic architecture of gene expression in ovine skeletal muscle

    In livestock populations the genetic contribution to muscling is intensively monitored in the progeny of industry sires and used as a tool in selective breeding programs. The genes and pathways conferring this genetic merit are largely undefined. Genetic variation within a population has the potential, amongst other mechanisms, to alter gene expression via cis- or trans-acting mechanisms in a manner that impacts the functional activities of specific pathways contributing to muscling traits. By integrating sire-based genetic merit information for a muscling trait with progeny-based gene expression data, we directly tested the hypothesis that there is genetic structure in the gene expression program of ovine skeletal muscle. Results. The genetic performance of six sires for a well-defined muscling trait, longissimus lumborum muscle depth, was measured using extensive progeny testing and expressed as an Estimated Breeding Value by comparison with contemporary sires. Microarray gene expression data were obtained for longissimus lumborum samples taken from forty progeny of the six sires (4-8 progeny/sire). Initial unsupervised hierarchical clustering analysis revealed strong genetic architecture in the gene expression data, which also discriminated the sire-based Estimated Breeding Value for the trait. An integrated systems biology approach was then used to identify the major functional pathways contributing to the genetics of enhanced muscling, using both Estimated Breeding Value weighted gene co-expression network analysis and a differential gene co-expression network analysis. The modules of genes revealed by these analyses were enriched for a number of functional terms summarised as muscle sarcomere organisation and development, protein catabolism (proteasome), RNA processing, mitochondrial function, and transcriptional regulation. Conclusions. This study has revealed strong genetic structure in the gene expression program within ovine longissimus lumborum muscle. The balance between muscle protein synthesis, at the levels of both transcription and translation control, and protein catabolism mediated by regulated proteolysis is likely to be the primary determinant of genetic merit for the muscling trait in this sheep population. There is also evidence that high genetic merit for muscling is associated with a fibre-type shift toward fast glycolytic fibres. This study provides insight into mechanisms, presumably subject to strong artificial selection, that underpin enhanced muscling in sheep populations.
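
    A minimal sketch, on synthetic data, of the two ingredients the analysis combines: a thresholded gene co-expression adjacency matrix and a per-gene correlation with the sire Estimated Breeding Value. The study itself used weighted (WGCNA-style) networks on real microarray data, which this toy Python example does not attempt to reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)
expression = rng.normal(size=(50, 40))  # 50 genes x 40 progeny samples
ebv = rng.normal(size=40)               # sire EBV assigned to each progeny

# Co-expression network: absolute pairwise gene-gene correlation,
# hard-thresholded into an adjacency matrix (a crude stand-in for
# WGCNA's soft thresholding).
corr = np.corrcoef(expression)
adjacency = (np.abs(corr) > 0.5).astype(int)
np.fill_diagonal(adjacency, 0)

# EBV-weighted relevance: correlation of each gene's expression with
# the muscling trait's Estimated Breeding Value across progeny.
gene_trait = np.array([np.corrcoef(g, ebv)[0, 1] for g in expression])
print("most trait-correlated gene index:", int(np.argmax(np.abs(gene_trait))))
```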

    CONFPROFITT: A CONFIGURATION-AWARE PERFORMANCE PROFILING, TESTING, AND TUNING FRAMEWORK

    Modern computer software systems are complicated. Developers can change the behavior of a software system through software configurations, and the large number of configuration options and their interactions makes the tasks of software tuning, testing, and debugging very challenging. Performance is one of the key non-functional qualities: performance bugs can cause significant performance degradation and lead to poor user experience. However, performance bugs are difficult to expose, primarily because detecting them requires specific inputs as well as specific configurations. While researchers have developed techniques to analyze, quantify, detect, and fix performance bugs, many of these techniques are not effective in highly-configurable systems. To improve the non-functional qualities of configurable software systems, testing engineers need to be able to understand the performance influence of configuration options, adjust the performance of a system under different configurations, and detect configuration-related performance bugs. This research provides an automated framework that allows engineers to effectively analyze performance-influencing configuration options, detect performance bugs in highly-configurable software systems, and adjust configuration options to achieve higher long-term performance gains. To understand real-world performance bugs in highly-configurable software systems, we first perform a study of performance bug characteristics in three large-scale open-source projects. Many researchers have studied the characteristics of performance bugs from bug reports, but few have reported on the experience of trying to replicate confirmed performance bugs from the perspective of non-domain experts such as researchers. This study reports the challenges of, and potential workarounds for, replicating confirmed performance bugs, and shares a performance benchmark of real-world performance bugs for evaluating future performance testing techniques. Inspired by our performance bug study, we propose a performance profiling approach that helps developers understand how configuration options and their interactions influence the performance of a system. The approach uses a combination of dynamic analysis and machine learning techniques, together with configuration sampling, to profile the program execution and analyze which configuration options are relevant to performance. Next, the framework leverages natural language processing and information retrieval techniques to automatically generate test inputs and configurations that expose performance bugs. Finally, the framework combines reinforcement learning and dynamic state reduction techniques to guide the subject application toward higher long-term performance gains.
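
    A minimal sketch of the profiling step's core idea, assuming random configuration sampling and a linear performance-influence model fitted by least squares; the option names, the synthetic cost model, and the model form are illustrative assumptions rather than CONFPROFITT's actual implementation.

```python
import random
import numpy as np

OPTIONS = ["cache", "compress", "prefetch", "debug_log"]  # hypothetical

def run_and_measure(config):
    """Stand-in for executing the subject system under `config` and
    timing it; a synthetic cost model plays that role here."""
    cost = 1.0
    cost += 0.8 * config["debug_log"]                          # costly option
    cost -= 0.3 * config["cache"]                              # helpful option
    cost += 0.5 * (config["compress"] and config["prefetch"])  # interaction
    return cost + random.gauss(0, 0.05)                        # noise

# Sample random configurations, then fit a performance-influence model:
# runtime ~ b0 + sum(b_i * option_i).
samples = [{o: random.random() < 0.5 for o in OPTIONS} for _ in range(200)]
X = np.array([[1.0] + [float(c[o]) for o in OPTIONS] for c in samples])
y = np.array([run_and_measure(c) for c in samples])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

for name, influence in zip(["base"] + OPTIONS, coef):
    print(f"{name:>10}: {influence:+.3f}")  # per-option performance influence
```

    The fitted coefficients rank options by their performance influence; a purely linear model smears the compress-prefetch interaction across both options, which is why sampling strategies and richer models matter in practice.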