180,305 research outputs found

    Search-based software engineering: Trends, techniques and applications

    Get PDF
    In the past five years there has been a dramatic increase in work on Search-Based Software Engineering (SBSE), an approach to Software Engineering (SE) in which Search-Based Optimization (SBO) algorithms are used to address problems in SE. SBSE has been applied to problems throughout the SE lifecycle, from requirements and project planning to maintenance and reengineering. The approach is attractive because it offers a suite of adaptive, automated and semi-automated solutions in situations typified by large, complex problem spaces with multiple competing and conflicting objectives. This article provides a review and classification of the SBSE literature. The work identifies research trends and relationships between the techniques applied and the applications to which they have been applied, and highlights gaps in the literature and avenues for further research.
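
    The survey itself is a literature review, but a toy example may help make search-based optimization applied to SE concrete. The sketch below hill-climbs over regression test orderings so that new coverage is reached early; the coverage data and the APFD-style fitness function are invented for illustration and are not taken from the article.

```python
import random

# Hypothetical coverage data: test id -> set of covered branches (invented).
coverage = {
    "t1": {1, 2, 3},
    "t2": {3, 4},
    "t3": {5},
    "t4": {1, 5, 6},
}

def fitness(order):
    """Reward orderings that reach new coverage early (APFD-like surrogate)."""
    seen, score = set(), 0.0
    for position, test in enumerate(order):
        new = coverage[test] - seen
        score += len(new) / (position + 1)  # earlier new coverage scores higher
        seen |= coverage[test]
    return score

def hill_climb(tests, iterations=1000, seed=0):
    """First-ascent hill climbing over test orderings via pairwise swaps."""
    rng = random.Random(seed)
    current = tests[:]
    rng.shuffle(current)
    best = fitness(current)
    for _ in range(iterations):
        i, j = rng.sample(range(len(current)), 2)
        current[i], current[j] = current[j], current[i]
        candidate = fitness(current)
        if candidate >= best:
            best = candidate
        else:  # revert the swap if it did not help
            current[i], current[j] = current[j], current[i]
    return current, best

order, score = hill_climb(list(coverage))
print(order, round(score, 3))
```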

    Analysis and application of digital spectral warping in analog and mixed-signal testing

    Get PDF
    Spectral warping is a digital signal processing transform which shifts the frequencies contained within a signal along the frequency axis. The Fourier transform coefficients of a warped signal correspond to frequency-domain 'samples' of the original signal which are unevenly spaced along the frequency axis. This property allows the technique to be used efficiently for DSP-based analog and mixed-signal testing. The analysis and application of spectral warping for test signal generation, response analysis, filter design, frequency response evaluation, etc. are discussed in this paper, along with examples of the software and hardware implementation.
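
    As a rough illustration of the sampling property described above (not the paper's own implementation), the following sketch evaluates the DTFT of a signal at frequencies warped through a first-order all-pass phase map, a common choice in the warping literature. The test signal and the warp factor lam are assumptions:

```python
import numpy as np

def warp(omega, lam):
    """First-order all-pass frequency map (a common warping choice)."""
    return omega + 2.0 * np.arctan(lam * np.sin(omega) / (1.0 - lam * np.cos(omega)))

# Assumed test signal: two tones at normalized frequencies 0.05 and 0.20.
n = np.arange(256)
x = np.sin(2 * np.pi * 0.05 * n) + 0.5 * np.sin(2 * np.pi * 0.20 * n)

# Uniform DFT bin frequencies, then their warped (unevenly spaced) counterparts.
k = np.arange(256)
omega_uniform = 2 * np.pi * k / 256
omega_warped = warp(omega_uniform, lam=0.5)

# Coefficients of the warped signal == DTFT of the original signal sampled
# at the unevenly spaced frequencies (the property the abstract describes).
X_warped = np.exp(-1j * np.outer(omega_warped, n)) @ x

print(np.abs(X_warped[:8]).round(2))
```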

    Developing a distributed electronic health-record store for India

    Get PDF
    The DIGHT project is addressing the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of the more than one billion citizens of India.
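
    The abstract does not describe DIGHT's internal design; as a generic illustration of how a store at this scale might shard records, the sketch below uses consistent hashing with replication (a standard technique, not necessarily the one DIGHT adopts). Node names and the patient-ID key format are hypothetical:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent-hash ring for sharding records across nodes
    (a generic technique; not necessarily what DIGHT uses)."""

    def __init__(self, nodes, vnodes=64):
        self.ring = sorted(
            (self._hash(f"{node}#{v}"), node)
            for node in nodes for v in range(vnodes)
        )
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def nodes_for(self, key, replicas=3):
        """Return `replicas` distinct nodes responsible for `key`."""
        idx = bisect.bisect(self.keys, self._hash(key)) % len(self.ring)
        chosen = []
        while len(chosen) < replicas:
            node = self.ring[idx % len(self.ring)][1]
            if node not in chosen:
                chosen.append(node)
            idx += 1
        return chosen

ring = ConsistentHashRing([f"node-{i}" for i in range(10)])
print(ring.nodes_for("patient:1234567890"))  # hypothetical patient-ID key
```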

    Do System Test Cases Grow Old?

    Full text link
    Companies increasingly use either manual or automated system testing to ensure the quality of their software products. As a system evolves and is extended with new features, the test suite also typically grows as new test cases are added. To ensure software quality throughout this process, the test suite is continuously executed, often on a daily basis. It seems likely that newly added tests would be more prone to fail than older tests, but this has not been investigated in any detail on large-scale, industrial software systems, and it is not clear which methods should be used to conduct such an analysis. This paper proposes three main concepts that can be used to investigate aging effects in the use and failure behavior of system test cases: test case activation curves, test case hazard curves, and test case half-life. To evaluate these concepts and the type of analysis they enable, we apply them to an industrial software system containing more than one million lines of code. The data set comes from a total of 1,620 system test cases executed more than half a million times over a period of two and a half years. For the investigated system we find that system test cases stay active as they age but really do grow old; they go through an infant mortality phase with higher failure rates, which then decline over time. The test case half-life is between 5 and 12 months for the two studied data sets.
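
    The paper's precise definitions are not reproduced in the abstract, but a test case hazard curve of the kind described can be approximated as the failure rate per age bucket. A minimal sketch, assuming an execution log of (test, age, outcome) records invented for illustration:

```python
from collections import defaultdict

# Hypothetical execution log: (test_id, age_in_months, failed) tuples.
executions = [
    ("t1", 0, True), ("t1", 0, False), ("t1", 1, False),
    ("t2", 0, True), ("t2", 1, True), ("t2", 2, False),
    ("t3", 0, False), ("t3", 1, False), ("t3", 2, False),
]

def hazard_curve(records):
    """Failure rate per age bucket: failures / executions at each age.
    A curve that declines with age would match the 'infant mortality'
    pattern the paper reports."""
    runs = defaultdict(int)
    fails = defaultdict(int)
    for _, age, failed in records:
        runs[age] += 1
        fails[age] += int(failed)
    return {age: fails[age] / runs[age] for age in sorted(runs)}

print(hazard_curve(executions))  # {0: 0.5, 1: 0.33..., 2: 0.0}: rate declines with age
```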

    Production of Reliable Flight Crucial Software: Validation Methods Research for Fault Tolerant Avionics and Control Systems Sub-Working Group Meeting

    Get PDF
    The state of the art in the production of crucial software for flight control applications was addressed. The association between reliability metrics and software is considered. Thirteen software development projects are discussed. A short-term need for research in the areas of tool development and software fault tolerance was indicated. For the long term, research in formal verification or proof methods was recommended. Formal specification and software reliability modeling were recommended as topics for both short- and long-term research.

    LittleDarwin: a Feature-Rich and Extensible Mutation Testing Framework for Large and Complex Java Systems

    Full text link
    Mutation testing is a well-studied method for increasing the quality of a test suite. We designed LittleDarwin as a mutation testing framework able to cope with large and complex Java software systems, while still being easily extensible with new experimental components. LittleDarwin addresses two existing problems in the domain of mutation testing: providing a tool able to work in an industrial setting, while remaining open to extension with cutting-edge techniques from academia. LittleDarwin already offers higher-order mutation, null type mutants, mutant sampling, manual mutation, and mutant subsumption analysis. No tool available today offers all of these features and is able to work with typical industrial software systems.
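
    LittleDarwin itself mutates Java systems; purely to illustrate what a first-order mutation operator does, the following Python sketch swaps arithmetic operators in an abstract syntax tree. The operator choice and the way a single mutant is selected are illustrative, not LittleDarwin's:

```python
import ast

class ArithmeticMutator(ast.NodeTransformer):
    """First-order mutation operator: replace + with - (and vice versa).
    Illustrative only; LittleDarwin itself mutates Java source."""

    swaps = {ast.Add: ast.Sub, ast.Sub: ast.Add}

    def __init__(self, target_index):
        self.target_index = target_index  # which operator occurrence to mutate
        self.seen = -1

    def visit_BinOp(self, node):
        self.generic_visit(node)
        if type(node.op) in self.swaps:
            self.seen += 1
            if self.seen == self.target_index:
                node.op = self.swaps[type(node.op)]()
        return node

source = "def price(net, tax):\n    return net + tax\n"
mutant = ArithmeticMutator(target_index=0).visit(ast.parse(source))
# The body becomes 'return net - tax': the mutant a good test should kill.
print(ast.unparse(mutant))
```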

    Digital image correlation (DIC) analysis of the 3 December 2013 Montescaglioso landslide (Basilicata, Southern Italy). Results from a multi-dataset investigation

    Get PDF
    Image correlation remote sensing monitoring techniques are becoming key tools for providing effective qualitative and quantitative information suitable for natural hazard assessments, specifically for landslide investigation and monitoring. In recent years, these techniques have been successfully integrated with, and shown to be complementary and competitive to, more standard remote sensing techniques such as satellite or terrestrial Synthetic Aperture Radar interferometry. The objective of this article is to apply an in-depth calibration and validation analysis of the Digital Image Correlation technique to the measurement of landslide displacement. The availability of a multi-dataset for the 3 December 2013 Montescaglioso landslide, comprising different types of imagery such as LANDSAT 8 OLI (Operational Land Imager) and TIRS (Thermal Infrared Sensor), high-resolution airborne optical orthophotos, Digital Terrain Models and COSMO-SkyMed Synthetic Aperture Radar, allows for the retrieval of the actual landslide displacement field, with values ranging from a few meters (2–3 m in the north-eastern sector of the landslide) to 20–21 m (local peaks on the central body of the landslide). Furthermore, comprehensive sensitivity analyses and statistics-based processing approaches are used to identify the role of the background noise that affects the whole dataset. This noise is directly proportional to the geometric and temporal resolutions of the processed imagery. Moreover, an accurate evaluation of the environmental-instrumental background noise allowed the actual displacement measurements to be correctly calibrated and validated, leading to a better definition of the threshold values of the maximum Digital Image Correlation sub-pixel accuracy and reliability (ranging from 1/10 to 8/10 pixel) for each processed dataset.
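
    A minimal building block of Digital Image Correlation is template matching by normalized cross-correlation. The sketch below (not the article's processing chain) recovers an integer-pixel shift on synthetic data; the sub-pixel refinement that the article's 1/10-pixel figures rely on is omitted for brevity:

```python
import numpy as np

def ncc_displacement(ref, cur, y, x, patch=16, search=8):
    """Estimate integer-pixel displacement of the patch at (y, x) in `ref`
    by normalized cross-correlation over a +/- `search` window in `cur`.
    A textbook DIC building block; sub-pixel refinement (e.g. fitting a
    paraboloid to the correlation peak) is omitted."""
    t = ref[y:y + patch, x:x + patch].astype(float)
    t = (t - t.mean()) / (t.std() + 1e-9)
    best, best_dy, best_dx = -np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            w = cur[y + dy:y + dy + patch, x + dx:x + dx + patch].astype(float)
            w = (w - w.mean()) / (w.std() + 1e-9)
            score = (t * w).mean()  # correlation coefficient of the two patches
            if score > best:
                best, best_dy, best_dx = score, dy, dx
    return best_dy, best_dx, best

# Synthetic example: shift an image by (3, 5) pixels and recover the offset.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
cur = np.roll(ref, shift=(3, 5), axis=(0, 1))
print(ncc_displacement(ref, cur, y=24, x=24))  # -> (3, 5, ~1.0)
```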