
    Potential Errors and Test Assessment in Software Product Line Engineering

    Software product lines (SPL) are a method for the development of variant-rich software systems. Compared to non-variable systems, testing SPLs is extensive due to the increasing number of possible products. Different approaches exist for testing SPLs, but there is little research on assessing the quality of these tests in terms of their error detection capability. Such test assessment is based on injecting errors into a correct version of the system under test. However, to our knowledge, potential errors in SPL engineering have never been systematically identified before. This article presents an overview of existing paradigms for specifying software product lines and the errors that can occur during the respective specification processes. To assess test quality, we apply mutation testing techniques to SPL engineering and implement the identified errors as mutation operators. This allows us to run existing tests against defective products for the purpose of test assessment. From the results, we draw conclusions about the error-proneness of the surveyed SPL design paradigms and how the quality of SPL tests can be improved. Comment: In Proceedings MBT 2015, arXiv:1504.0192
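    To make the idea of implementing identified errors as mutation operators concrete, the sketch below injects a single fault into a toy feature model and checks whether an existing validity test detects (kills) the mutant. The propositional encoding, the operator, and all names are illustrative assumptions, not the paper's actual operators.

```python
# Minimal sketch of a mutation operator for SPL test assessment.
# Assumes a feature model encoded as "requires" constraints; the operator
# below (dropping one constraint) is illustrative, not taken from the paper.
from dataclasses import dataclass, field
from typing import List, Tuple
import random

@dataclass
class FeatureModel:
    features: List[str]
    requires: List[Tuple[str, str]] = field(default_factory=list)  # (A requires B)

def mutate_drop_requires(model: FeatureModel, rng: random.Random) -> FeatureModel:
    """Inject an error by removing one 'requires' constraint, which may let
    invalid products (B missing although A is selected) slip through."""
    if not model.requires:
        return model
    mutated = FeatureModel(list(model.features), list(model.requires))
    mutated.requires.pop(rng.randrange(len(mutated.requires)))
    return mutated

def is_valid_product(selection: List[str], model: FeatureModel) -> bool:
    """Existing 'test': a product is valid if all required features are present."""
    return all(b in selection for a, b in model.requires if a in selection)

# Run the existing test against original and mutant: differing verdicts
# mean the injected error was detected (mutant killed).
fm = FeatureModel(["Base", "Bluetooth", "Audio"], requires=[("Bluetooth", "Audio")])
mutant = mutate_drop_requires(fm, random.Random(0))
test_product = ["Base", "Bluetooth"]  # invalid under the original model
killed = is_valid_product(test_product, fm) != is_valid_product(test_product, mutant)
print("mutant killed" if killed else "mutant survived")
```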

    A Platform-Based Software Design Methodology for Embedded Control Systems: An Agile Toolkit

    A discrete control system with stringent hardware constraints is effectively an embedded real-time system and hence requires a rigorous methodology to develop its software. The development methodology proposed in this paper adapts agile principles and patterns to support the building of embedded control systems, focusing on the issues relating to a system's constraints and safety. Strong unit testing, which ensures correctness including the satisfaction of timing constraints, is the foundation of the proposed methodology. A platform-based design approach is used to balance costs and time-to-market against performance and functionality constraints. It is concluded that the proposed methodology significantly reduces design time and costs, as well as leading to better software modularity and reliability.
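    A minimal sketch of the kind of unit test such a methodology might rely on, checking functional correctness and a timing budget in one test case; the toy controller function and the 5 ms deadline are assumed for illustration and are not taken from the paper.

```python
# Sketch of a unit test combining a functional check with a timing budget,
# in the spirit of the "strong unit testing" described above. The controller
# and the 5 ms deadline are assumed placeholders.
import time
import unittest

def pid_step(setpoint: float, measurement: float, kp: float = 0.8) -> float:
    """Toy proportional controller step used as the unit under test."""
    return kp * (setpoint - measurement)

class ControlStepTest(unittest.TestCase):
    DEADLINE_S = 0.005  # assumed 5 ms worst-case execution budget

    def test_output_and_deadline(self):
        start = time.perf_counter()
        u = pid_step(setpoint=10.0, measurement=8.0)
        elapsed = time.perf_counter() - start
        self.assertAlmostEqual(u, 1.6, places=6)   # functional correctness
        self.assertLess(elapsed, self.DEADLINE_S)  # timing constraint

if __name__ == "__main__":
    unittest.main()
```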

    A V-Diagram for the Design of Integrated Health Management for Unmanned Aerial Systems

    Designing Integrated Vehicle Health Management (IVHM) for Unmanned Aerial Systems (UAS) is inherently complex. UAS are a system of systems (SoS) and IVHM is a product-service, so the designer has to take many factors into account, such as: the design of the other systems of the UAS (e.g. engines, structure, communications), the split of functions between elements of the UAS, the intended operation/mission of the UAS, the cost versus benefit of monitoring a system/component/part, different techniques for monitoring the health of the UAS, and optimizing the health of the fleet and not just the individual UAS, amongst others. The design of IVHM cannot sit alongside, or after, the design of the UAS, but must itself be integrated into the overall design to maximize IVHM’s potential. Many different methods exist to help design complex products and manage the process. One such method is the V-diagram, which is based on three concepts: decomposition & definition; integration & testing; and verification & validation. This paper adapts the V-diagram so that it can be used for designing IVHM for UAS. The adapted V-diagram splits into different tracks for the different system elements of the UAS and the responses to health states (decomposition and definition). These tracks are then combined into an overall IVHM provision for the UAS (integration and testing), which can be verified and validated. The stages of the adapted V-diagram can easily be aligned with the stages of the V-diagram being used to design the UAS, bringing the design of the IVHM in step with the overall design process. The adapted V-diagram also allows the design of IVHM for a UAS to be broken down into smaller tasks which can be assigned to people/teams with the relevant competencies. The adapted V-diagram could also be used to design IVHM for other SoS and other vehicles or products.
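    As a rough illustration of the adapted V-diagram's parallel tracks, the sketch below models each track's decomposition and integration legs as simple data structures that can be assigned to teams and later merged into one IVHM provision; the track names and stages are invented for the example and are not the paper's notation.

```python
# Illustrative representation of the adapted V-diagram's parallel tracks.
# Track names and stages are made up; only the structure (per-track
# decomposition/integration legs, then a combined provision) follows the
# description above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Track:
    name: str                                               # UAS element or health-state response
    decomposition: List[str] = field(default_factory=list)  # left leg of the V
    integration: List[str] = field(default_factory=list)    # right leg of the V

@dataclass
class IVHMVDiagram:
    tracks: List[Track]

    def combined_provision(self) -> List[str]:
        """Integration & testing stage: merge all tracks into one IVHM provision."""
        return [step for t in self.tracks for step in t.integration]

engines = Track("engines",
                ["define health parameters", "select sensors"],
                ["verify sensor data", "validate diagnostics"])
comms = Track("communications",
              ["define link-health metrics"],
              ["validate link monitoring"])
print(IVHMVDiagram([engines, comms]).combined_provision())
```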

    Towards Automated Performance Bug Identification in Python

    Context: Software performance is a critical non-functional requirement, appearing in many fields such as mission-critical applications, financial systems, and real-time systems. In this work we focused on early detection of performance bugs; our software under study was a real-time system used in the advertisement/marketing domain. Goal: Find a simple and easy-to-implement solution for predicting performance bugs. Method: We built several models using four machine learning methods commonly used for defect prediction: C4.5 Decision Trees, Naïve Bayes, Bayesian Networks, and Logistic Regression. Results: Our empirical results show that a C4.5 model, using lines of code changed, file age, and file size as explanatory variables, can be used to predict performance bugs (recall=0.73, accuracy=0.85, and precision=0.96). We show that reducing the number of changes delivered in a commit can decrease the chance of performance bug injection. Conclusions: We believe that our approach can help practitioners eliminate performance bugs early in the development cycle. Our results are also of interest to theoreticians, establishing a link between functional bugs and (non-functional) performance bugs, and explicitly showing that attributes used for prediction of functional bugs can be used for prediction of performance bugs.
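    The sketch below reproduces the shape of this prediction setup with a decision tree over the three explanatory variables; scikit-learn's CART-based DecisionTreeClassifier stands in for C4.5, and the training data is synthetic, so it illustrates the approach rather than the paper's exact model or results.

```python
# Decision-tree prediction of performance-bug-prone commits from lines
# changed, file age, and file size. CART (scikit-learn) stands in for C4.5;
# the data and labeling rule are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, accuracy_score

rng = np.random.default_rng(42)
n = 500
X = np.column_stack([
    rng.integers(1, 400, n),    # lines of code changed in the commit
    rng.integers(1, 1500, n),   # file age in days
    rng.integers(50, 5000, n),  # file size in lines
])
# Synthetic label: large changes to big files are treated as bug-prone.
y = ((X[:, 0] > 200) & (X[:, 2] > 2500)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(f"recall={recall_score(y_te, pred):.2f} "
      f"accuracy={accuracy_score(y_te, pred):.2f} "
      f"precision={precision_score(y_te, pred):.2f}")
```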

    Thermal and Catalytic Cracking of JP-10 for Pulse Detonation Engine Applications

    Practical air-breathing pulse detonation engines (PDE) will be based on storable liquid hydrocarbon fuels such as JP-10 or Jet A. However, such fuels are not optimal for PDE operation due to the high energy input required for direct initiation of a detonation and the long deflagration-to-detonation transition times associated with low-energy initiators. These effects increase cycle time and reduce time-averaged thrust, resulting in a significant loss of performance. In an effort to utilize such conventional liquid fuels while maintaining the performance of lighter and more sensitive hydrocarbon fuels, various fuel modification schemes such as thermal and catalytic cracking have been investigated. We have examined the decomposition of JP-10 through thermal and catalytic cracking mechanisms at elevated temperatures using a bench-top reactor system. The system can vaporize liquid fuel at precise flow rates while maintaining the flow path at elevated temperatures and pressures for extended periods of time. The catalytic cracking tests were completed using common industrial zeolite catalysts installed in the reactor. A gas chromatograph with a capillary column and flame ionization detector, connected to the reactor output, is used to speciate the reaction products. The conversion rate and product compositions were determined as functions of the fuel metering rate, reactor temperature, system backpressure, and zeolite type. An additional study was carried out to evaluate the feasibility of using premixed rich combustion to partially oxidize JP-10. A mixture of partially oxidized products was initially obtained by rich combustion of JP-10 and air mixtures at equivalence ratios between 1 and 5. Following the first burn, air was added to the products, creating an equivalent stoichiometric mixture. A second burn was then carried out. Pressure histories and schlieren video images were recorded for both burns. The results were analyzed by comparing the peak and final pressures to idealized thermodynamic predictions.
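    The second-burn setup implies a simple mole balance: after a rich first burn at equivalence ratio phi, enough air must be added to bring the overall mixture back to stoichiometric. The sketch below works through that balance with JP-10 modelled as C10H16; the 21% O2 mole fraction and the example phi are illustrative assumptions, not values from the experiments.

```python
# Mole balance for the second burn: air needed to return a rich JP-10/air
# mixture (equivalence ratio phi) to overall stoichiometric conditions.
# JP-10 is treated as C10H16; numbers are illustrative.

O2_PER_FUEL = 14.0                    # C10H16 + 14 O2 -> 10 CO2 + 8 H2O
X_O2_AIR = 0.21                       # mole fraction of O2 in air (assumed)
AFR_STOICH = O2_PER_FUEL / X_O2_AIR   # mol air per mol fuel, ~66.7

def added_air_for_stoichiometric(n_fuel: float, phi: float) -> float:
    """Moles of air to add so the overall mixture reaches phi = 1."""
    air_initial = n_fuel * AFR_STOICH / phi   # rich mixture contains less air
    air_total = n_fuel * AFR_STOICH           # stoichiometric requirement
    return air_total - air_initial

# Example: 1 mol JP-10 burned rich at phi = 3 needs roughly 44 mol more air.
print(f"{added_air_for_stoichiometric(1.0, 3.0):.1f} mol air")
```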

    A systematic approach to the Planck LFI end-to-end test and its application to the DPC Level 1 pipeline

    The Level 1 of the Planck LFI Data Processing Centre (DPC) is devoted to the handling of the scientific and housekeeping telemetry. It is a critical component of the Planck ground segment and has to adhere strictly to the project schedule in order to be ready for launch and flight operations. To guarantee the quality necessary to achieve the objectives of the Planck mission, the design and development of the Level 1 software has followed the ESA Software Engineering Standards. A fundamental step in the software life cycle is the Verification and Validation of the software. The purpose of this work is to show an example of procedures, test development, and analysis successfully applied to a key software project of an ESA mission. We present the end-to-end validation tests performed on the Level 1 of the LFI-DPC, detailing the methods used and the results obtained. Different approaches have been used to test the scientific and housekeeping data processing. Scientific data processing has been tested by injecting signals with known properties directly into the acquisition electronics, in order to generate a test dataset of real telemetry data and reproduce nominal conditions as closely as possible. For the housekeeping telemetry processing, validation software has been developed to inject known parameter values into a set of real housekeeping packets and compare them with the corresponding timelines generated by the Level 1. With the proposed validation and verification procedure, in which the on-board and ground processing are viewed as a single pipeline, we demonstrate that the scientific and housekeeping processing of the Planck-LFI raw data is correct and meets the project requirements. Comment: 20 pages, 7 figures; this paper is part of the Prelaunch status LFI papers published on JINST: http://www.iop.org/EJ/journal/-page=extra.proc5/jins
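    The housekeeping validation idea, injecting known parameter values and comparing the processed timeline against them, can be illustrated with the toy end-to-end check below; the packet layout and the processing stand-in are assumptions for illustration, not the actual Level 1 software.

```python
# Toy end-to-end check in the style of the housekeeping validation: build
# packets carrying known values, run them through a processing stand-in,
# and compare the output timeline with the injected values. The packet
# layout and parameter name are assumed, not taken from the Level 1 code.
from typing import Dict, List

def build_hk_packets(param: str, values: List[float]) -> List[Dict]:
    """Create synthetic housekeeping packets carrying known values."""
    return [{"obt": t, param: v} for t, v in enumerate(values)]

def level1_process(packets: List[Dict], param: str) -> List[float]:
    """Stand-in for the Level 1 HK pipeline: extract the parameter timeline."""
    return [p[param] for p in sorted(packets, key=lambda p: p["obt"])]

def validate(param: str, injected: List[float]) -> bool:
    """End-to-end check: the output timeline must match the injected values."""
    timeline = level1_process(build_hk_packets(param, injected), param)
    return timeline == injected

assert validate("LFI_TEMP_FPU", [20.1, 20.2, 20.15])
print("HK timeline matches injected values")
```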

    A Neural Model for Generating Natural Language Summaries of Program Subroutines

    Source code summarization -- creating natural language descriptions of source code behavior -- is a rapidly growing research topic with applications to automatic documentation generation, program comprehension, and software maintenance. Traditional techniques relied on heuristics and templates built manually by human experts. Recently, data-driven approaches based on neural machine translation have largely overtaken template-based systems. But nearly all of these techniques rely almost entirely on programs having good internal documentation; without clear identifier names, the models fail to create good summaries. In this paper, we present a neural model that combines words from code with code structure from an AST. Unlike previous approaches, our model processes each data source as a separate input, which allows the model to learn code structure independently of the text in code. This helps our approach provide coherent summaries in many cases even when no internal documentation is provided. We evaluate our technique with a dataset we created from 2.1m Java methods. We find improvement over two baseline techniques from the SE literature and one from the NLP literature.
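    A minimal sketch of the two-input idea: code tokens and a flattened AST sequence are encoded separately and only combined afterwards to score summary words. The GRU encoders, layer sizes, and simple concatenation fusion are illustrative choices, not the paper's exact architecture.

```python
# Sketch of a dual-input summarization model: separate encoders for code
# tokens and a flattened AST, fused by concatenation. Dimensions and the
# fusion scheme are illustrative, not the paper's architecture.
import torch
import torch.nn as nn

class DualEncoderSummarizer(nn.Module):
    def __init__(self, code_vocab, ast_vocab, summary_vocab, dim=128):
        super().__init__()
        self.code_emb = nn.Embedding(code_vocab, dim)
        self.ast_emb = nn.Embedding(ast_vocab, dim)
        self.code_enc = nn.GRU(dim, dim, batch_first=True)  # encodes identifiers/words
        self.ast_enc = nn.GRU(dim, dim, batch_first=True)   # encodes structure separately
        self.out = nn.Linear(2 * dim, summary_vocab)         # scores over summary words

    def forward(self, code_ids, ast_ids):
        _, h_code = self.code_enc(self.code_emb(code_ids))
        _, h_ast = self.ast_enc(self.ast_emb(ast_ids))
        fused = torch.cat([h_code[-1], h_ast[-1]], dim=-1)   # combine both sources
        return self.out(fused)

model = DualEncoderSummarizer(code_vocab=5000, ast_vocab=200, summary_vocab=8000)
code_ids = torch.randint(0, 5000, (2, 30))  # batch of code-token ID sequences
ast_ids = torch.randint(0, 200, (2, 60))    # batch of flattened-AST ID sequences
print(model(code_ids, ast_ids).shape)        # torch.Size([2, 8000])
```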